Melissa N. Henke, When Your Plate is Already Full: Efficient and Meaningful Outcomes Assessment for Busy Law Schools, 71 Mercer L. Rev. 529 (2020).
When Your Plate is Already Full:
Efficient and Meaningful Outcomes
Assessment for Busy Law Schools
by Melissa N. Henke*
I. INTRODUCTION
The American Bar Association (ABA) accreditation standards
involving outcome-based assessment are a game changer for legal
education.[1]
The standards reaffirm the importance of providing
students with formative feedback throughout their course of study to
assess and improve student learning. The standards also require law
schools to evaluate their effectiveness, and to do so from the perspective
of student performance within the institution’s program of study. The
relevant question is no longer what law schools are teaching their
students, but instead what students are learning from law schools in
terms of the knowledge, skills, and values that are essential for those
entering the legal profession. In other words, law schools must shift
their assessment focus from one centered around inputs to one based on
student outputs.
* Robert G. Lawson & William H. Fortune, Associate Professor of Law and Director of
Legal Research and Writing, University of Kentucky J. David Rosenberg College of Law.
Professor Henke thanks her legal writing colleagues, Professors Jane Grisé, Kristin
Hazelwood, and Diane Kraft, for their generous time and effort in developing and using
the rubrics discussed in this article; UK Law Associate Dean of Research Scott Bauries for
his guidance in preparing this article for publication; and UK Law student Aaron Meek
for his help with article research and editing. She also thanks those involved in the Legal
Writing Institute’s 2018 Writers’ Workshop, namely the facilitators, Professors Cynthia A.
Adams, Kenneth Dean Chestek, and Mary Beth Beazley, for their invaluable comments
on an earlier draft of this article and overall support for her scholarly endeavors. This
article was written with the generous support of a writing grant from UK Law and Dean
David A. Brennen.
1. From the Editors, 67 J. LEGAL EDUC. 373, 373 (2018) ("These new requirements are sparking some of the most significant, systemic changes to law school pedagogy that we have seen in many years.").
Compliance with the ABA’s assessment mandate comes at a time
when law school resources are spread thinner than ever. Indeed, faculty
already work with plates that are full of students, scholarship, and
service. Thus, while not all in the legal academy are on board with the
ABA’s approach to outcomes assessment or to outcomes assessment
generally, as busy educators, we should all at least agree that the
requisite response should be efficient, given that resources are limited,
and meaningful, such that the work done can benefit our learners.[2] To
do so, law schools should begin at their own tables set with full plates,
so to speak, taking stock of what institutions and their faculty are
already doing in terms of assessment. And it is important to think
broadly here, as faculty may be surprised to learn how many of their
colleagues are already doing relevant work.
While law schools may already be inclined to begin from within, this
Article outlines concrete strategies they can use when working with
existing faculty expertise and resources to respond to the ABA’s
assessment mandate in a meaningful way for students, and with the
goal of maximizing efficiency and gaining broad buy-in. While prior
scholarship has outlined best practices for outcomes assessment and
even shared examples of how to engage in the process in the law school
setting, this Article is unique in its depth and breadth of coverage by
setting out a detailed case study[3] that illustrates the process of
developing an authentic assessment tool and beginning the process of
adapting that tool to respond to both the individual student assessment
and law school assessment required by the ABA.
To be clear, this Article does not suggest that only those with existing
expertise or resources should be the ones to actually engage in the
outcomes assessment work now required by the ABA. The goal should
not be to add to the plates of a few. Instead, to create a productive and
meaningful culture of assessment, experts in the field proclaim that
administrators and faculty must all be involved.[4] The ABA agrees.[5]
In addition to encouraging broad buy-in, a more collaborative approach
helps ensure that assessment work is equitably spread among faculty.

2. Marie Summerlin Hamm et al., The Rubric Meets the Road in Law Schools: Program Assessment of Student Learning Outcomes as a Fundamental Way for Law Schools to Improve and Fulfill Their Respective Missions, 95 U. DETROIT MERCY L. REV. 343, 368–69 (2018) (explaining that the ABA's assessment mandate is an opportunity for real change but involves a lot of work).

3. The case study involves the legal research and writing faculty at the University of Kentucky J. David Rosenberg College of Law (UK Law) in their efforts to evaluate the effectiveness of changes made to the school's required first-year Legal Research and Writing Course (LRW Course) beginning in 2011.

4. Larry Cunningham, Building a Culture of Assessment in Law Schools, 69 CASE W. RES. L. REV. 395, 403–04, 412, 422 (2018) (positing that implementing a collaborative and faculty-driven process, not just relying on a small group of faculty or an individual, can build a culture of assessment and thus foster wider improvement); see also LORI E. SHAW & VICTORIA L. VANZANDT, STUDENT LEARNING OUTCOMES AND LAW SCHOOL ASSESSMENT: A PRACTICAL GUIDE TO MEASURING INSTITUTIONAL EFFECTIVENESS 48–49 (Carolina Academic Press 2015) (discussing the need for faculty involvement and cooperation). Professor Cunningham cautioned, however, that in his experience law school representative attendance at assessment conferences held around the time the new ABA standards were launched was "overwhelming[ly] female and drawn from legal writing and clinical contract ranks." 69 CASE W. RES. L. REV. at 405 n.67. Thus, a more "full faculty" approach to assessment should also help avoid these gender and status disparities.

5. AM. BAR ASS'N, Managing Director's Guidance Memo, Standards 301, 302, 314 and 315, at 3 [hereinafter ABA June 2015 Guidance Memo] (June 2015), https://www.americanbar.org/content/dam/aba/administrative/legal_education_and_admissions_to_the_bar/governancedocuments/2015_learning_outcomes_guidance.authcheckdam.pdf ("Different types of faculty—doctrinal, clinical, legal writing and others—play important roles in identifying and assessing learning."); see also Victoria L. VanZandt, The Assessment Mandates in the ABA Accreditation Standards and Their Impact on Individual Academic Freedom Rights, 95 U. DETROIT MERCY L. REV. 253, 269–70 (2018) (noting that ABA Standard 404(a)(2) explicitly mentions "assessing student learning at the law school" when discussing full-time faculty member responsibilities).

Part II reviews the ABA standards relevant to outcomes assessment, discussing the two types of outcomes assessment required by those standards (individual student assessment and law school assessment) and sharing the underlying theory behind both. Part III outlines the stages of outcomes assessment, with a specific focus on the measurement stage, because it is arguably the most time-intensive stage of the process and the one in which existing resources can prove most valuable. Part IV focuses on one common direct assessment measure, the analytic rubric, detailing how UK Law's legal writing faculty collaboratively designed a rubric for the LRW Course appellate brief assignment, and responding to concerns that have been raised about using rubrics for assessment. Finally, Part V provides specific suggestions on how to adapt and use existing assessment measures most efficiently when responding to the ABA's assessment mandate at both the individual student and law school levels. In other words, assessment measures, like the rubric project described in Part IV, can be adapted and used more broadly than the purpose for which they were originally designed. While the LRW Course appellate brief assignment rubric serves as the primary example to illustrate these ideas, this Article will touch on other examples and share ideas about how a variety of existing resources can transfer to the current assessment landscape mandated by the ABA.

The message here is that law schools need not panic, as they are
likely to find they have more relevant assessment knowledge and
materials to work from than first thought. If professors are willing to
share their relevant experience and resources, work collaboratively to
expand and adapt from that base as needed, and spread the related
assessment responsibilities widely and fairly among the faculty, then
the ABA’s call for outcomes assessment can be answered with meaning
and without forcing any one faculty member’s plate to overflow.
II. THE ABA STANDARDS ON LEARNING OUTCOMES, FORMATIVE
ASSESSMENT, AND INSTITUTIONAL ASSESSMENT
This Part offers general background on the ABA standards relating
to learning outcomes and assessment. Section B then follows with a
more in-depth look at the theory behind the types of assessment law
schools must engage in under the described standards.
A. The Relevant ABA Standards
In 2008, the Council of the Section of Legal Education and
Admissions to the Bar charged the Standards Review Committee to
lead a comprehensive review of the accreditation standards governing
legal education. Two important components of the review were the
Special Committee on Output Measures (Output Measures Committee)
and the Student Learning Outcomes Subcommittee. The Output
Measures Committee was charged with determining “whether and how
output measures, other than bar passage and job placement, might be
used in the accreditation process."[6] The focus historically had been on a
law school’s inputs, in terms of resources invested into the educational
process, and on indirect output data regarding bar passage and job
placement rates.[7] The Output Measures Committee issued a seventy-
one-page report analyzing how other accreditation bodies use outcomes
measures (all ten of the other professional accrediting bodies reviewed
used outcome measures in their standards) and noting that regional
accreditation agencies have also been focused on student learning
outcomes.[8] The report concluded that current ABA accreditation
standards should be reviewed and revised "to reduce their reliance on
input measures and instead adopt a greater and more overt reliance on
outcome measures."[9] The Standards Review Committee responded by
studying the matter and making recommendations to the Council,
which included input from the Student Learning Outcomes
Subcommittee.

6. ABA June 2015 Guidance Memo, supra note 5, at 3.

7. Jamie R. Abrams, Experiential Learning and Assessment in the Era of Donald Trump, 55 DUQ. L. REV. 75, 79 (2018) (citing Cara Cunningham Warren, Achieving the ABA's Pedagogy Mandate, 14 CONN. PUB. INT. L.J. 67 (2014)). Common inputs include faculty qualifications, nature of facilities, classes offered, and readings and assignments given (versus student work product resulting from those assignments). SHAW & VANZANDT, supra note 4, at 10; see also From the Editors, 67 J. LEGAL EDUC. 373, 373 (noting the input-based model "focus[es] on budget, facilities, academic metrics of incoming students and number of faculty").
The Standards Review Committee recommendations resulted in new
and revised standards adopted by the Council, which went into effect on
August 12, 2014. The most relevant standards for this Article are
Standards 301, 302, 314, and 315.[10]
8. ABA June 2015 Guidance Memo, supra note 5, at 3. The 2008 report relies on two well-known 2007 publications that also support the use of outcomes assessment: WILLIAM M. SULLIVAN ET AL., EDUCATING LAWYERS: PREPARATION FOR THE PROFESSION OF LAW [hereinafter CARNEGIE REPORT] (John Wiley & Sons, Inc. 2007), and ROY STUCKEY ET AL., BEST PRACTICES FOR LEGAL EDUCATION: A VISION AND A ROAD MAP [hereinafter BEST PRACTICES] (2007). In addition, the 2008 report correctly notes that university-level accreditation bodies (regional accreditors) have been requiring outcomes assessment plans for the universities they accredit; as a result, some universities had already started requiring law schools to prepare assessment plans even before the ABA did. Cunningham, supra note 4, at 401; David Thomson, When the ABA Comes Calling, Let's Speak the Same Language of Assessment, 23 PERSPECTIVES: TEACHING LEGAL RES. & WRITING 68, 68 (2014); see also Anthony Niedwiecki, Prepared for Practice? Developing a Comprehensive Assessment Plan for a Law School Professional Skills Program, 50 U.S.F. L. REV. 245, 247 (2016); Ruth Jones, Assessment and Legal Education: What is Assessment, and What the *# Does It Have to Do with the Challenges Facing Legal Education?, 45 MCGEORGE L. REV. 85, 93 (2013).

9. ABA June 2015 Guidance Memo, supra note 5, at 3 (noting that "shifting towards outcomes measures is consistent with the latest and best thinking of both the higher education and legal education communities").

10. Standards 301, 302, 314, and 315 are referred to collectively in this Article as "the Assessment Standards." Given the time involved in implementing the Assessment Standards, the ABA created a transition and implementation (or phase-in) plan for compliance. Under this plan, law schools were to begin applying the Assessment Standards in the 2016–17 academic year. AM. BAR ASS'N, Transition to and Implementation of the New Standards and Rules of Procedure for Approval of Law Schools, at 2 (Aug. 13, 2014), https://www.americanbar.org/content/dam/aba/administrative/legal_education_and_admissions_to_the_bar/governancedocuments/2014_august_transition_and_implementation_of_new_aba_standards_and_rules.authcheckdam.pdf. In the initial stages of a law school's implementation of the Assessment Standards, the ABA will focus on "the seriousness of the school's efforts to establish and assess learning outcomes," including the "ongoing process of gathering information" about students' progress toward achieving those outcomes, but not on whether students reach a certain level of achievement for any particular learning outcome. Id.

As they relate to this Article, the Assessment Standards set out new
requirements regarding learning outcomes and assessment. A key
guiding principle in the implementation of the standards is that “[t]he
focus on outcomes should shift the emphasis from what is being taught
to what is being learned by the students."[11] Generally speaking, the
goal of “outcomes assessment is to understand how educational
programs are working and to determine whether they are contributing
to student growth and development."[12] An example I used with my
faculty colleagues involves a parent who tells her child to feed the
dog each morning before leaving for school. Inputs assessment
measures effectiveness simply by looking to what the parent said to the
child about feeding the dog (morning reminders, a written note on the
refrigerator). However, outcomes assessment shifts the focus to the
results of those reminders by looking to whether there is actually food
in the dog’s bowl each morning. It is not enough to just claim success by
“teaching” the child to feed the dog if the results show that the child has
not actually learned to complete the task and the dog is left hungry.
While outcomes assessment is new for law schools, it is unlikely to be
a fleeting trend in legal education.[13] Many view the change as a positive
and long overdue one for legal education, and one that law schools can
truly benefit from.[14] According to proponents, outcomes assessment
promotes active student learning, which can better prepare students to
enter the legal profession, and to do so as more self-directed learners.[15]
They say it also promotes reflective teaching, which can result in
important curricular changes where needed.[16] But not everyone in the
academy has been so quick to embrace the Assessment Standards and
related changes, especially given the time and resources involved.[17]

11. ABA June 2015 Guidance Memo, supra note 5, at 3; see also SHAW & VANZANDT, supra note 4, at 11.

12. TRUDY W. BANTA & CATHERINE A. PALOMBA, ASSESSMENT ESSENTIALS: PLANNING, IMPLEMENTING, AND IMPROVING ASSESSMENT IN HIGHER EDUCATION 9–10 (2d ed. 2015).

13. E.g., SHAW & VANZANDT, supra note 4, at 25, 29 (noting that "[o]utcomes assessment has been entrenched in K–12 and undergraduate education for the last decade and is not waning" and that "law schools are among the last of the professional schools to face mandated outcomes assessment").

14. E.g., Abrams, supra note 7, at 80 n.22 (citing several helpful articles for general background on this topic).

15. GREGORY S. MUNRO, INSTITUTE FOR LAW SCHOOL TEACHING, OUTCOMES ASSESSMENT FOR LAW SCHOOLS 16–17 (2000) (explaining that assessment is not just about measuring student or institutional effectiveness after the fact, but is instead "an instrument of learning" because the purpose is to actually improve student learning while the course of study is ongoing).

16. SHAW & VANZANDT, supra note 4, at 32 (noting that outcomes assessment serves an institution "by providing concrete evidence to guide [its] budgeting, curriculum design, teaching, and strategic planning"); Warren, supra note 7, at 74–76 (positing that the mandate for outcomes assessment supports academic success, promotes graduate success, and encourages improved pedagogy).
Regardless of one’s view on their merit, the Assessment Standards have
been described as “the most significant change in law school
accreditation standards in decades."[18] As one scholar put it, "[t]he new
ABA accreditation standards reflect a ‘fundamental shift’ in the
delivery of legal education and curricular design . . . ."[19] Others have
used words like "revolutionary" and "sea change."[20]
There are two key components to the ABA’s assessment mandate.
First, law schools must engage in formative assessment in addition to
summative assessment, at least in some courses, to inform individual
student learning. Second, each accredited law school must engage in a
formal and ongoing evaluation of its effectiveness as an institution, and
must do so from the perspective of its students’ performance within the
law school’s program of study. In doing so, each law school will have to
answer two crucial questions: What does the law school want its
“students to know and be able to do when they graduate,” and how will
the law school know that its students have achieved such
competencies?[21]
The next few subsections review the language of the Assessment
Standards themselves.
17. E.g., Steven C. Bahls, Adoption of Student Learning Outcomes: Lessons for Systemic Change in Legal Education, 67 J. LEGAL EDUC. 376, 377 (2018) (stating the change to "outcome assessment has been highly controversial" where opponents believe the change will "divert resources from traditional doctrinal faculty, thereby diminishing their role"); Abrams, supra note 7, at 84–85 (noting concerns regarding the need for training and support, all while law schools are forced to do more with fewer resources) (citing Warren, supra note 7, at 79); Niedwiecki, supra note 8, at 246 (noting legal educators' anxiety over the time and resources involved in complying with the Assessment Standards); see also Molly Worthen, The Misguided Drive to Measure 'Learning Outcomes', THE NEW YORK TIMES (Feb. 23, 2018), https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html (arguing that the drive to measure learning outcomes in higher education has become misguided and "devour[s] a lot of money for meager results").
18. Bahls, supra note 17, at 376.
19. Abrams, supra note 7, at 79 (quoting Niedwiecki, supra note 8, at 247).
20. Bahls, supra note 17, at 376 (attributing these quotes to the former President of the Association of American Law Schools ("revolutionary") and the chair of the relevant ABA subcommittee ("sea change")).
21. Niedwiecki, supra note 8, at 246 (emphasis added); see also SHAW & VANZANDT, supra note 4, at 29 ("Articulating outcomes is not sufficient to satisfy the accreditation standards—your school needs to measure student performance to determine if the outcomes are being achieved.").
1. Standards 301 & 302. Objectives of Programs of Legal
Education & Learning Outcomes
First, new Standard 301(b) and revised Standard 302 call for law
schools to develop and publish learning outcomes that explicitly state
"what they want their students to be able to do and know upon
completion of the law school curriculum."[22] In other words, law schools
must establish outcomes that cover competencies related to the practice
of law.[23] Under the revised Standard 301, law schools must "establish
and publish learning outcomes” designed to achieve objectives that
include preparing their graduates “for effective, ethical, and responsible
participation as members of the legal profession."[24] Standard 302
provides the following specific guidance about those institutional
learning outcomes:
A law school shall establish learning outcomes that shall, at a
minimum, include competency in the following: (a) Knowledge and
understanding of substantive and procedural law; (b) Legal analysis
and reasoning, legal research, problem-solving, and written and oral
communication in the legal context; (c) Exercise of proper
professional and ethical responsibilities to clients in the legal system;
and (d) Other professional skills needed for competent and ethical
participation as a member of the legal profession.[25]
It is important to clarify what is meant by learning outcomes. They
are not aspirational goals. Instead, they are “clear and concise
statements of knowledge that students are expected to acquire, skills
students are expected to develop, and values that they are expected to
understand and integrate into their professional lives."[26] For purposes
of law school assessment, the outcomes selected should be essential to a
graduate.[27] And because law schools will be required to measure
whether students are achieving the outcomes (discussed in more detail
below), they should be written to "require a student to 'do' something
that you can observe and measure."[28] In other words, the outcomes
should be written as actions students should be able to perform to
demonstrate what they have learned.

22. Niedwiecki, supra note 8, at 246–47.

23. ABA June 2015 Guidance Memo, supra note 5, at 4.

24. AM. BAR ASS'N, STANDARDS AND RULES OF PROCEDURE FOR APPROVAL OF LAW SCHOOLS, at 15 (2017–18) [hereinafter ABA STANDARDS], https://www.americanbar.org/content/dam/aba/publications/misc/legal_education/Standards/2017-2018ABAStandardsforApprovalofLawSchools/2017_2018_standards_chapter3.authcheckdam.pdf.

25. ABA STANDARDS, supra note 24, at 15. Law schools can also add outcomes that reflect their unique mission. Id. Note that competency is not defined in the standards, and its meaning is likely to be an ongoing discussion among legal educators. See Judith Welch Wegner, Contemplating Competence: Three Meditations, 50 VAL. U. L. REV. 675, 676 (2016) (offering reflections on understanding competence and its significance, namely as it relates to implementation of the Assessment Standards).

26. ABA June 2015 Guidance Memo, supra note 5, at 4. The CARNEGIE REPORT and BEST PRACTICES also organize around the idea of knowledge, skills, and values, emphasizing that skills and professional identity are as important as knowledge (and law schools should thus strive for more of a balance with all such competencies). CARNEGIE REPORT, supra note 8, at 12; BEST PRACTICES, supra note 8, at 94.

27. SHAW & VANZANDT, supra note 4, at 58.

28. Id. at 66. For example, UK Law's learning outcome regarding communication calls for students to be able to demonstrate that they can do the following:

[C]ommunicate clearly and effectively in oral and written form by: a. [p]resenting material in a clear, concise, well-organized and professional manner that is appropriate to the audience and the circumstances; and b. [s]electing and using the appropriate legal terminology to accomplish a desired legal effect (e.g., in contracts, wills, motions, jury instructions, discovery documents).

Learning Outcomes—ABA Standard 302, UNIVERSITY OF KENTUCKY COLLEGE OF LAW, http://law.uky.edu/academics/learning-outcomes-aba-standard-302 (last visited May 24, 2018).
2. Standard 314. Assessment of Student Learning
Second, the new Standard 314 requires law schools to “utilize both
formative and summative assessment methods in its curriculum to
measure and improve student learning and provide meaningful
feedback to students."[29] In other words, law schools must engage in
individual student assessment, or "meaningful assessment of their
progress in helping students achieve outcome goals."[30] Thus, while both
formative and summative assessment methods are not required in
every course, the addition of Standard 314 makes clear that formative
assessment must “be integrated into the law school’s program to . . .
‘provide meaningful feedback to improve student learning’ in the law
school’s overall program.”
31
29. ABA STANDARDS, supra note 24, at 23.
30. ABA June 2015 Guidance Memo, supra note 5, at 5 (emphasis added).
31. Id. (quoting ABA STANDARDS, supra note 24, at 23). As noted above, the Output Measures Committee's 2008 report relied on the CARNEGIE REPORT and BEST PRACTICES. Both publications criticized legal education for its overreliance on summative assessment, which does not support students in becoming metacognitive about learning, and proffered that the primary form of assessment in legal education should be formative assessment. CARNEGIE REPORT, supra note 8, at 173; BEST PRACTICES, supra note 8, at 255–56; see also SHAW & VANZANDT, supra note 4, at 27 (noting, "Legal education has been criticized over the years for its failure to provide sufficient feedback to students.").
Section B of this Part provides more detail about individual student
assessment in law school courses, including a discussion of formative
and summative assessment methods, but for now, it is important to
understand that both forms of assessment are contemplated in the
Assessment Standards.
3. Standard 315. Evaluation of Program of Legal Education,
Learning Outcomes, and Assessment Methods
Third, the new Standard 315 responds to the Output Measures
Committee’s recommendation that the emphasis on outcomes, or
student outputs, “reflects a shift in focus from what is being taught in
law schools to what is being learned by students” when it comes to
measuring the effectiveness of that school’s program of legal
education.[32] Specifically, Standard 315 requires the following:
The dean and the faculty of a law school shall conduct ongoing
evaluation of the law school’s program of legal education, learning
outcomes, and assessment methods; and shall use the results of this
evaluation to determine the degree of student attainment of
competency in the learning outcomes and to make appropriate
changes to improve the curriculum.[33]
Put another way, law school “assessment requires collective faculty
engagement and critical thinking about our students’ overall
acquisition of the skills, knowledge, and qualities that ensure they
graduate with the competencies necessary to begin life as
professionals."[34] The ABA has neither defined nor set a threshold for
"competency,"[35] which has apparently been left to individual law
schools to consider.[36]
32. ABA June 2015 Guidance Memo, supra note 5, at 5; see also Andrea A. Curcio, A
Simple Low-Cost Institutional Learning-Outcomes Assessment, 67 J. LEGAL EDUC. 489,
491 (2018) (“Rather than look at achievement just in our own courses, institutional
outcome-measures assessment requires collective faculty engagement and critical
thinking about our students’ overall acquisition of the skills, knowledge, and qualities
that ensure they graduate with the competencies necessary to begin life as
professionals.”).
33. ABA STANDARDS, supra note 24, at 23.
34. Curcio, supra note 32, at 491. This Article refers to a law school’s response to
Standard 315 as law school assessment (to contrast it with the individual student
learning assessment that is mandated by Standard 314), but note that some literature
refers to Standard 315 as institutional assessment or institutional outcomes assessment,
e.g., Curcio, supra note 32, at 489, while others use programmatic assessment, e.g.,
Cunningham, supra note 4, at 396 and Hamm, supra note 2, at 344.
35. ABA June 2015 Guidance Memo, supra note 5, at 5 ("It is not the goal of assessing the level of attainment, and probably not realistic to expect, that each student will achieve the same level of mastery for every outcome. Some students will master some outcomes in a more proficient manner than others.").
In conclusion, “assessment involves ‘the systematic collection, review,
and use of information about educational programs undertaken for the
purpose of improving student learning and development.'"[37] The
Assessment Standards call on law schools to do this in two main ways:
at the individual student level (or course level) and at the law school
level. Section B offers more detail on both.
B. Outcomes Assessment for Student Learners and Law Schools
As explained above, there are two types of outcomes assessment at
issue in the Assessment Standards: individual student assessment and
law school assessment.[38] This Section offers more detail about each in
turn.
1. Individual Student Assessment
Law professors are familiar with the first type of assessment,
individual student assessment. In other words, as educators, we
consistently engage in classroom assessment, or assessment of student
learning at the course level. We provide our students with critiques or
grades that indicate a measure of their individual performance in a
particular course.[39] Individual student assessment takes two forms,
formative assessment methods and summative assessment methods,
both of which are now expressly required by Standard 314.[40] This
Article takes each in turn.
36. See SHAW & VANZANDT, supra note 4, at 126 (explaining that "a threshold of 100% may not always be realistic" and noting that "experts argue for an 80% standard for thresholds"); see also supra note 25.

37. Warren, supra note 7, at 71 (quoting Jones, supra note 8, at 87).

38. SHAW & VANZANDT, supra note 4, at 27. A third type of outcomes assessment is often referred to as program assessment, which focuses on assessing the effectiveness of a series of program-specific courses (such as intellectual property, alternative dispute resolution, international studies, law & economics, etc.). See MUNRO, supra note 15, at 100; see also Niedwiecki, supra note 8, at 247, 274–79 (discussing an assessment plan for a professional skills program at The John Marshall Law School). When referring to law school assessment, this Article means assessment of the law school's entire program of study (not some sub-set or specialty set of courses) as envisioned by Standard 315.

39. SHAW & VANZANDT, supra note 4, at 6.

40. ABA STANDARDS, supra note 24, at 23.

41. ABA STANDARDS, supra note 24, at 23.

First, the ABA defines formative assessment methods as
"measurements at different points during a particular course or at
different points over the span of a student's education that provide
meaningful feedback to improve student learning."[41] In other words,
formative assessment methods are designed to provide students with
feedback during the learning process,[42] meaning during a particular law
school course or over the span of the student’s three years in law school,
as a way to promote active learning.[43] Moreover, because the feedback
often leaves professors with a sense of what their students do and do
not know while the course is still in progress, they can respond by using
additional or different teaching techniques where needed to increase
learning.[44] Thus, formative assessment methods not only foster active
learning, but also more active (or reflective) teaching.
The most meaningful “[f]ormative assessment helps a student see
where in the learning process he made a wrong (or a correct) turn [on a
particular assignment] and make any needed changes on his next
assignment."[45] In other words, the feedback should respond to the
student work product being evaluated and the process employed to
create it. This way students are armed with information on how to
emulate (or not emulate, depending on the comment) that process in
later assignments. For example, when reviewing the Discussion section
of a formal office memorandum, one approach would be to indicate that
the stated rule for the memo’s legal issue is “a good one and yet the
rule explanation is “lacking.” However, the more meaningful approach
would be to explain the stated rule is proficient because it is accurate,
concrete, and adequately supported by mandatory authority (using
synthesis if needed), while the rule explanation is still developing
because the discussion of the prior case(s) to apply the rule could be
more complete in terms of the court’s reasoning or holding. The same
42. SHAW & VANZANDT, supra note 3, at 67; MUNRO, supra note 15, at 73.
43. See Anthony Niedwiecki, Teaching for Lifelong Learning: Improving the
Metacognitive Skills of Law Students through More Effective Formative Assessment
Techniques, 40 CAP. U. L. REV. 149, 177 (2012) (“Formative assessment identifies a gap in
learning, provides feedback to the student about the gap and closing the gap, involves the
student in the process, and advances the students’ learning.”); see also MUNRO, supra note
15, at 73 (describing student involvement in the “assessment, discussion, and critique
that follow their performance” after which the student should perform again “to integrate
what they have just learned”).
44. See Olympia Duhart, “It’s Not For a Grade”: The Rewards and Risks of Low-Risk
Assessment in the High-Stakes Law School Classroom, 7 ELON L. REV. 491, 498 (2015) (“In
addition to helping students understand their learning strengths and deficiencies,
formative assessment can also help professors learn what is working and not working
about their teaching.”); CARNEGIE REPORT, supra note 8, at 171 (“[S]tudies of how
expertise develops across a variety of domains are unanimous in emphasizing the
importance of feedback as the key means by which teachers and learners can improve
performance.”).
45. SHAW & VANZANDT, supra note 4, at 7. Given the goal, formative assessment
methods may or may not factor into the student’s final grade. See LINDA SUSKIE,
ASSESSING STUDENT LEARNING: A COMMON SENSE GUIDE 11 (2d ed. 2010).
[2] WHEN YOUR PLATE IS ALREADY FULL-CP (CORRECTED) (DO NOT DELETE) 3/11/2020 10:35 AM
2020] WHEN YOUR PLATE IS ALREADY FULL 541
holds true, for example, in a Torts or Products Liability mid-term essay
exam in which students are called on to apply the rule for negligence to
a hypothetical set of facts. Instead of just noting that the student’s
application is sparse” or “unsupported,” the more meaningful approach
would be to explain that the student should be more explicit in
discussing which facts support the predicted outcome resulting from the
rule’s application and why (perhaps including an analogy to similar
facts from a case discussed at length in class). In short, offering
feedback that involves students in the process helps advance student
learning.
46
Second, summative assessment methods are defined by the ABA as
“measurements at the culmination of a particular course or at the
culmination of any part of a student’s legal education that measure the
degree of student learning."[47] For this reason, summative assessment is
referred to as "assessment after the fact."[48] The primary goal of
summative assessment methods is to assign grades by indicating a
student's level of achievement on a standardized scale or as compared
to the student's peers, which is known as norm-referenced grading.[49]
Given this goal, there is usually very little to no student feedback, as
the student is not being given the chance to improve learning in a
future assignment.[50] Final course grades and the bar exam are common
examples of summative assessment.

46. Niedwiecki, supra note 43, at 177. Some would say this is not a realistic expectation for professors teaching in large casebook classes such as Torts. First, not every casebook class is sixty to one hundred-plus students. And second, there are ways to engage students in the learning process on a particular assignment even without engaging in the particularly time-intensive task of giving feedback to each individual student. See Heather M. Field, A Tax Professor's Guide to Formative Assessment, 22 FLA. TAX REV. 363, 394–95, 397–414, 430–31 (2019) (describing a variety of formative assessment options in this vein, including multiple choice questions or in-class exercises where explanations are then provided to the group for why an answer was right or wrong). By way of further example, a professor could provide feedback to the entire class through a model answer for a practice exam question or actual exam question (explaining the strengths and weaknesses of the answer), or a feedback memo that offers global strengths and weaknesses identified from a review of student exam answers. See Andrea A. Curcio, Moving in the Direction of Best Practices and The Carnegie Report: Reflections on Using Multiple Assessments in a Large-Section Doctrinal Course, 19 WIDENER L.J. 159, 167 (2009) (discussing an annotated model answer). And other viable options include TA grading, self-assessment, or peer grading using model answers and rubrics. Field, supra, at 438–39; Curcio, supra, at 171–72.

47. ABA STANDARDS, supra note 24, at 23.

48. SHAW & VANZANDT, supra note 4, at 7.

49. Id. at 93; see also Leslie Rose, Norm-Referenced Grading in the Age of Carnegie: Why Criteria-Referenced Grading is More Consistent with Current Trends in Legal Education and How Legal Writing Can Lead the Way, 17 J. LEGAL WRITING INST. 123 (2011). Note, however, that not all summative assessment is norm-referenced. For example, the bar exam is a criterion-referenced exam.
Much has been written about law schools’ overreliance on one single,
summative assessment method in most courses (namely, one
end-of-semester exam), which is primarily for purposes of assigning grades
and ranking students. Gregory S. Munro is a legal educator who is well-
known for his long-standing work on outcomes assessment. He explains
that, because law schools are educating students to become practicing
lawyers and professionals, the focus of student assessment in law
school should be on "enhancing student performance, providing multiple
evaluations of student performance, and giving appropriate feedback to
students."[51] The Carnegie Report[52] also called for using formative
assessment in training professionals, because the essential goal should
“be to form practitioners who are aware of what it takes to become
competent in their chosen domain” and arm “them with the reflective
capacity and motivation to pursue genuine expertise."[53]
2. Law School Assessment
In contrast to individual student assessment, law school assessment
(or institutional assessment) is about the collective result. In other
words, each law school must now also "use the collective performance of
[its] students" to assess the law school's own performance as
educators.[54] In order to do so, faculty must decide "what it means to be
‘effective’ as a law school,” as well as how and where the law school will
50. SHAW & VANZANDT, supra note 4, at 7; see also CARNEGIE REPORT, supra note 8,
at 164–65 (“Reliance on summative evaluation provides no navigational assistance, as it
were, until the voyage is over.”); id. at 16467 (focusing in particular on the challenges
first-year law students face with this approach).
51. MUNRO, supra note 15, at 11; see also CARNEGIE REPORT, supra note 8, at 171
(“From our observations, we believe that assessments should be understood as a
coordinated set of formative practices that, by providing important information about the
students’ progress in learning to both students and faculty, can strengthen law schools’
capacity to develop competent and responsible lawyers.”).
52. CARNEGIE REPORT, supra note 8.
53. CARNEGIE REPORT, supra note 8, at 173 (noting law students “must become
‘metacognitiveabout their own learning”).
54. SHAW & VANZANDT, supra note 4, at 6 (emphasis in original); see also id. at 10
(explaining that law schools have historically focused on the quality of their inputs when
trying to measure their effectiveness, and while that analysis is still relevant, the ABA is
“now asking law schools to shift their attention to the quality of their students’ outputs)
(emphasis in original). For purposes of this article, note that institutional assessment
refers only to a law school’s evaluation of its educational program under Standard 315,
and not any larger university-wide assessment that may be required by the larger
institution with which a law school is associated (including assessments required by
regional accreditors).
[2] WHEN YOUR PLATE IS ALREADY FULL-CP (CORRECTED) (DO NOT DELETE) 3/11/2020 10:35 AM
2020] WHEN YOUR PLATE IS ALREADY FULL 543
measure such effectiveness.
55
Professors Shaw & VanZandt posit, The
effectiveness of any institution ultimately is measured by whether it is
achieving its stated mission,” and learning outcomes can help round out
a way to measure that mission.[56] For example, if a law school seeks to
prepare graduates to be “responsible members and leaders of the legal
profession,"[57] then the school will develop a list of learning outcomes,
or the essential knowledge, skills, and values that it seeks its students
to achieve by graduation, in light of this stated goal or mission.[58]
Faculty
must then decide what level of achievement they hope their students
will reach collectively, and how they will measure that achievement.[59]
Unlike individual student assessment, schools can use a representative
student sample when conducting law school assessment to determine if
their students are accomplishing the stated outcomes, and thus avoid
engaging in the more time-intensive process of assessing each student
individually.[60] Moreover, while individual student assessment can
involve benchmarks that are norm-referenced or criterion-referenced,
benchmarks used for law school assessment are typically
criterion-referenced, meaning “competency is measured based on
whether a student satisfies certain [of] the prerequisites set by the
assessor,” and not by comparing a student’s performance to other
students as is done with norm-referenced assessment.[61]
55. Id. at 7.
56. Id. at 7–8 (citing ABA Standard 204, which states that law schools must submit a mission statement as part of the accreditation process).
57. About Us, UNIVERSITY OF KENTUCKY COLLEGE OF LAW, http://law.uky.edu/about-us (last visited July 1, 2019).
58. SHAW & VANZANDT, supra note 4, at 8–9. For example, UK Law’s curriculum
learning outcomes are listed on its website at http://law.uky.edu/academics/learning-
outcomes-aba-standard-302 (last visited on July 1, 2019). Learning outcomes are
discussed in more detail in Part III below.
59. Susan Hanley Duncan, They’re Back! The New Accreditation Standards Coming
to a Law School Near You—A 2018 Update, Guide to Compliance, and Dean’s Role in
Implementing, 67 J. LEGAL EDUC. 462, 482 (2018). While the ABA has identified examples
of assessment methods that may be used in this measurement process, schools are not
required to use any particular method. ABA STANDARDS, supra note 24, at 24
(contemplating that “[t]he methods used to measure the degree of student achievement of
learning outcomes are likely to differ from school to school”). The stages of outcomes
assessment, including the measurement stage, are discussed further in Part IV below.
60. Curcio, supra note 32, at 502 (citing SHAW & VANZANDT, supra note 4, at 114–15 and ANDREA SUSNIR FUNK, THE ART OF ASSESSMENT: MAKING OUTCOMES ASSESSMENT ACCESSIBLE, SUSTAINABLE, AND MEANINGFUL, at 37 (Carolina Academic Press 2017) for resources with more detail on using sufficient sample sizes).
61. SHAW & VANZANDT, supra note 4, at 93 (emphasis omitted). Given the difference,
norm-referenced assessments are not necessarily reflective of a “competent graduate.” Id.
Law school assessment also envisions using the aggregate data of
student performance collected to make changes to the law school’s
program of legal education as needed. In other words, it is not enough
for a law school simply to grade itself. The ABA expects schools to use
the assessment data collected to make improvements to their
educational program where needed.[62]
With a better understanding of the “what” and “why” of outcomes
assessment as mandated by the ABA, this Article will now turn to
outline the “how” of that process.
III. THE STAGES OF OUTCOMES-BASED ASSESSMENT
There are four common stages to the outcomes assessment process,
regardless of whether the assessment plan being created is for
individual student assessment or law school assessment. The four
stages are as follows: (1) the learning outcomes stage; (2) the
measurement stage; (3) the analysis stage; and (4) the response stage.[63]
First, in the learning outcomes stage, the assessor develops student
learning outcomes that describe the fundamental knowledge, skills, and
values of successful new lawyers.[64] Second, in the measurement stage,
the assessor designs or implements existing measures that will
determine whether students have actually achieved each of the
identified learning outcomes.[65] Next, in the analysis stage, the assessor
analyzes the data obtained from the measurement stage.[66] Finally, in
the response stage, the data collected is used to improve student
learning where needed, which is often referred to as closing the loop.[67]
Put another way, the stages of outcomes assessment can be broken
down into the phases of development (the learning outcomes stage),
implementation (the measurement stage), and evaluation (the analysis
and response stages).[68]
62. SHAW & VANZANDT, supra note 4, at 7; see also id. at 32 (“A fundamental
principle underlying outcomes assessment is that teachers and institutions can get better
at what they do, but doing so requires self-reflections and a willingness to try something
new.”); CARNEGIE REPORT, supra note 8, at 182 (discussing the importance and benefits of
institutional intentionality in the context of assessment).
63. SHAW & VANZANDT, supra note 4, at 1113.
64. Id. at 5758; see also Curcio, supra note 32, at 491 (describing law school learning
outcomes as “the core knowledge, skills, behaviors, and attributes of successful new
lawyers”).
65. SHAW & VANZANDT, supra note 4, at 1113.
66. Id.
67. Id.
68. Id. at 54; see also Warren, supra note 7, at 71; Jones, supra note 8, at 88.
A great deal has already been written about the outcomes
assessment process generally and in the law school setting.[69] There are
several helpful resources that specifically address the first stage of the
process, drafting learning outcomes.[70] As noted above, the ABA has
identified some learning outcomes that all new lawyers should possess,
and thus that all law schools should include in their list of learning
outcomes for law school assessment.[71] Those outcomes include:
“Knowledge and understanding” of law; Legal analysis and reasoning,
legal research, and problem-solving;” communication in the context of
law; professionalism; and “Other professional skills.”
72
The Assessment
Standards give law schools freedom to add to this list to include
outcomes that may reflect a particular school's mission or culture.[73]
Moreover, a professor's identification of student learning outcomes for a
particular course (or for individual student assessment) can be more or
less inclusive, depending on the course. In other words, the professor
should identify the big picture goal of the course in terms of the
knowledge, skills, and values the students should be able to accomplish
by completion of the course.[74] Thus, the course learning outcomes may
touch on knowledge, skills, or values that the law school has identified
for all of its graduates more broadly (such as legal analysis and
reasoning or professionalism), and it may also include an outcome that
is not specifically referenced at the law school level (for example,
knowledge of a specific subject matter, like international law or
securities law).[75]

69. E.g., SUSKIE, supra note 45 (addressing outcomes assessment in higher education); MUNRO, supra note 15 (focusing specifically on outcomes assessment for law schools); SHAW & VANZANDT, supra note 4.

70. Two excellent resources for developing learning outcomes and related performance criteria (the first stage of outcomes assessment) are SUSKIE, supra note 45, at 115–34 (individual student assessment) and SHAW & VANZANDT, supra note 4, at 57–82 (law school assessment). And because the balance of this Article focuses on the measurement stage of the outcomes assessment process, a detailed discussion of the analysis and response stages (the third and fourth stages of outcomes assessment) is outside its scope. Professors Shaw & VanZandt discuss these stages in great detail. Id. at 135–82.

71. ABA STANDARDS, supra note 24, at 15.

72. Id.

73. Id. at 16. Once the law school's learning outcomes are identified, the school can create a curriculum map, or "a grid of the courses [in a law school's] curriculum that identifies which learning outcomes and [related] performance criteria are addressed and assessed in each course." SHAW & VANZANDT, supra note 4, at 79. The map can indicate where in the curriculum the outcome is introduced, where it is practiced, and at what point students are expected "to have attained the desired level of competence." Hamm, supra note 2, at 372; see also FUNK, supra note 60, at 120 (explaining that curriculum maps can be used to identify the level of depth in which a course addresses a certain learning outcome, which include: being introduced to the knowledge, skill, or value; being required to demonstrate competency in it; or receiving advanced instruction or additional practice); SHAW & VANZANDT, supra note 4, at 210 (discussing the same three categories, but labeling them introduced, competency, and proficiency). Sample curriculum mapping documents are fairly easy to come by, and thus schools need not reinvent the wheel when creating a format. E.g., id. at Appendix E (sample curriculum map) and Appendix F (curriculum mapping survey sample form); FUNK, supra note 60, at Appendix D (includes curriculum mapping survey and sample curriculum map).
This Article focuses on the second stage, or the resource-intensive
measurement stage. In particular, the Article seeks to lay out one
possible way the stage can be implemented for both individual student
assessment and law school assessment once the relevant learning
outcomes have been identified. The measurement stage involves (A)
identifying or designing the assessment measures to be used and (B)
determining the sources (or outputs) that will be measured.[76] While
some principles underlying this two-part process apply to both types of
assessment, instances where the process differs for individual student
learning or law school assessment are noted below.
A. The Measurement Stage: Assessment Measures Generally
As an initial matter, there are two main types of assessment
measurement: direct and indirect measures. A direct measure requires
students to demonstrate their achievement in a tangible, visible way,
such as taking an exam or completing a writing assignment.[77] In other
words, students must actually create work product in some form
(written or oral) so the assessor can directly examine or observe the
student work product to measure whether and what student learning is
taking place. In contrast, an indirect measure requires the assessor to
infer whether learning has occurred through the student’s opinion or
another observer’s opinion (without directly reviewing student work
product).[78] Common examples include surveys, interviews, focus groups,
and reflection papers.[79] When it comes to direct measures, there is no
need for guesswork or inference because there is student work product
to review. For this reason, direct assessment measures are "viewed with
great favor" by assessment experts.[80]

74. FUNK, supra note 60, at 43–44.

75. Refer to FUNK, supra note 60, at Appendix D for examples of course learning outcomes.

76. ABA June 2015 Guidance Memo, supra note 5, at 5–6.

77. MARY J. ALLEN, ASSESSING ACADEMIC PROGRAMS IN HIGHER EDUCATION 67 (Anker Publ'g 2004); see also SHAW & VANZANDT, supra note 4, at 105–06.

78. SHAW & VANZANDT, supra note 4, at 104, 106–09; see also ALLEN, supra note 77, at Chapter 6.

79. SHAW & VANZANDT, supra note 4, at 104.
That said, indirect measures are
still valuable for assessment purposes because they can assess what
students and employers perceive students have learned.[81] Thus, both
types of measures are worth using in an assessment plan, especially for
purposes of law school assessment. In fact, assessment experts
recommend using multiple, varied assessment measures to evaluate
student learning for purposes of outcomes assessment.[82]
Moreover, when creating an assessment measure, be it direct or
indirect, the three core principles of validity, reliability, and fairness
should be considered to ensure the measure is a worthwhile one.[83] First,
validity looks to how well a method actually measures what it is
supposed to be assessing.[84] For individual student assessment, validity
requires the assessment method to measure whether one or more course goals have been achieved.
85
The question for law school
assessment is whether the method measures if the law school is
meeting the institutional outcome(s) at issue.
86
Second, reliability
confirms whether the assessment method produces the same results
during repeat attempts.
87
This principle involves both “representative content sampling” and “scoring consistency.”
88
In terms of sampling, for
individual student assessment, the assessment method must sample
enough of the course content so that the student’s performance (or
80. Id. at 105; see also Niedwicki, supra note 8, at 255 (noting that indirect measures
alone “do not fully capture what particular skills the students have mastered or the exact
knowledge they gained in law school”).
81. For example, an externship supervisor can offer perceptions on how a student
extern has performed without sharing work product that may be subject to the attorney-client privilege.
82. BEST PRACTICES, supra note 8, at 253 (discussing best practices for assessing
student learning); SHAW & VANZANDT, supra note 4, at 112 (discussing the use of
“methodological triangulation,” which involves using three different assessment tools,
using both direct and indirect measures, when conducting institutional assessment).
83. BEST PRACTICES, supra note 8, at 239.
84. MUNRO, supra note 15, at 106. For example, if the outcome being measured
relates to effective written communication, a multiple-choice exam would not be a valid
method for measuring the outcome because the method must measure what has actually
been learned by the student with respect to the student’s written communication (not
likely through the student’s selection of multiple choice answer options drafted by a
professor). Id.
85. Id. at 107 (explaining that there “must be a reasonable connection between that
which is being taught in the course and that which is being assessed”). There must also be
clear instructions and adequate time to complete the assignment. SHAW & VANZANDT, supra note 4, at 110–11; BEST PRACTICES, supra note 8, at 241.
86. MUNRO, supra note 15, at 107.
87. Id.; SHAW & VANZANDT, supra note 4, at 111.
88. MUNRO, supra note 15, at 107–08; SHAW & VANZANDT, supra note 4, at 111.
output) can reflect the extent to which the student met the course
goals.
89
For law school assessment, however, the question is whether
the sampling of student outputs being measured is sufficiently
representative of the student body.
90
In terms of consistency, the
inquiry for individual student assessment is usually whether the
results are consistent across assessment methods in the same course in
a given year (usually all scored by the same professor), while the
inquiry for law school assessment is whether there is consistency across
scorers.
91
Third, fairness contemplates equity in terms of the
assessment method used and in the results of that method.
92
Moreover,
an assessment method that fails for validity or reliability would also fail
for fairness.
93
B. The Measurement Stage: Assessment Sources To Be Measured
Once the assessment method has been identified, the second aspect
of the measurement stage is to identify the sources to be measured for
purposes of assessment. In other words, the goal is to discern what
student work product or other outputs exist, or could be created, for
purposes of assessing achievement of a particular learning outcome (at
the course or law school level). Again, the first stage of the outcomes
assessment process involves identifying what the learning outcomes are
for a particular course (when it comes to individual student learning) or
for the institution overall (for purposes of law school assessment). The
second stage, which is at issue in this Section, gets at measuring
specific sources to determine whether the identified outcomes are being
achieved.
As an initial matter, the goal should be to identify and use
assessment sources that already exist. In other words, try to identify
student outputs that are already being created by students because
89. MUNRO, supra note 15, at 107.
90. Id.; SHAW & VANZANDT, supra note 4, at 111; see also id. at 114–15 (discussing in
more detail important questions and considerations regarding reliable representative
samples).
91. SHAW & VANZANDT, supra note 4, at 111, 188 (defining reliability and scorer
reliability); MUNRO, supra note 15, at 108; see also SHAW & VANZANDT, supra note 4, at
145 (discussing the need for inter-rater reliability when multiple scorers are involved);
Hamm, supra note 2, at 383 (discussing training for evaluators).
92. MUNRO, supra note 15, at 109. For example, “[e]xercises which assume familiarity
with dominant culture may present problems of fairness for those of minority cultures.”
Id.
93. Id. at 110.
they are assigned as part of a course.
94
Referred to as embedded
assessments (versus add-on assessments), existing assessment sources
support validity because they are likely to be closely aligned with
faculty expectations in terms of student learning in a given course
(namely, there is likely to be a tie between what is being taught in the
course and what is being assessed in the source), and students are
motivated to perform well because they are part of the assigned work
(and could also be tied to the course grade).
95
Embedded assessments
are also more efficient than add-on assessments because they call upon
existing resources rather than require time be spent to create or
complete new tests or assignments that would yield student outputs.
96
How to locate existing assessment sources turns on the type of
assessment at issue. For individual student assessment, the professor
for the course in question is intimately familiar with the tests or
assignments created for the course, and thus also what student work
product or other outputs are generated in response. When it comes to
law school assessment, the curriculum map created for the first stage of
outcomes assessment can be very useful in discerning which courses
have outputs that could be collected for the learning outcome at issue.
97
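One way to picture the curriculum map’s role here is as a simple lookup from each institutional learning outcome to the courses, and the embedded student outputs, that could supply evidence for it. The sketch below is purely hypothetical; the outcome, course, and assignment names are invented for illustration.

    # Hypothetical curriculum map: institutional outcome -> courses with
    # embedded student outputs that could be collected for assessment.
    curriculum_map = {
        "written communication": {
            "LRW I/II": ["office memo rewrite", "appellate brief rewrite"],
            "Seminar": ["research paper"],
        },
        "legal analysis": {
            "LRW I/II": ["appellate brief rewrite"],
            "Evidence": ["essay exam"],
        },
    }

    # Locate existing sources for a given outcome.
    for course, outputs in curriculum_map["written communication"].items():
        print(course, "->", outputs)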
The depth and breadth of outputs needed also depend on the type of
assessment at issue. When it comes to individual student assessment,
the professor usually reviews the outputs from all students in the
course, as the goal is to discern what student learning has been
94. Lori A. Roberts, Assessing Ourselves: Confirming Assumptions and Improving
Student Learning by Efficiently and Fearlessly Assessing Student Learning Outcomes, 3
DREXEL L. REV. 457, 470 (2011) (citing ALLEN, supra note 77, at 13–14).
95. SHAW & VANZANDT, supra note 4, at 100; see also Victoria L. VanZandt, Creating
Assessment Plans for Introductory Legal Research and Writing Courses, 16 LEGAL
WRITING 313, 341 (2010) (explaining that embedded assessment means that “faculty [can]
examine learning where it occurs, students are motivated to demonstrate their learning,
and assessment planning contributes to an aligned curriculum”). While the assessment
source can also be tied to a course grade, the grade itself is not a viable assessment
source. That is because a grade usually says something about the students’ performance
vis-à-vis the class (through the grade distribution), “[b]ut it does not usually convey direct
information about which of the course’s goals and objectives for learning have been met or
how well they have been met by the student.” BANTA & PALOMBA, supra note 12, at 53;
see also SHAW & VANZANDT, supra note 4, at 13 (“When you think about a grade, it is
essentially an artificial construct designed to compare the performance of one student to
another and rank them accordingly.”). Instead, it is the underlying tests or assignments
on which grades are based that can be a source for meaningful assessment. Id.
96. ANDREA LESKES & BARBARA D. WRIGHT, THE ART & SCIENCE OF ASSESSING
GENERAL EDUCATION OUTCOMES: A PRACTICAL GUIDE 36 (2005) (explaining that
“[e]mbedd[ed] assessment is an efficient way to collect high-quality, direct evidence of
learning with minimal disruption and maximum utility”).
97. SHAW & VANZANDT, supra note 4, at 7778, 103; see also supra note 73.
accomplished by each student in the course in a given semester (and the
professor may also be grading the assignment). However, when
conducting law school assessment, the “data is typically culled across
courses, professors, and dates using a variety of tools.”
98
Using multiple
assessment sources and methods for each learning outcome can
increase the validity and reliability of the results for law school
assessment.
99
Referred to as triangulation, using three different
assessment measures, including both direct and indirect measures,
allows for a more comprehensive view of assessment sources and, thus,
student performance and attitudes.
100
Doing so also makes assessment
more “accessible to different learning styles and strengths” and
“bring[s] in a wider range of evaluators.”
101
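In rough data terms, a triangulated plan pairs each outcome with at least three tools that mix direct and indirect measures. The sketch below simply enforces that pattern; the tool names are invented for illustration.

    # Minimal sketch of methodological triangulation: each outcome paired
    # with three tools, mixing direct and indirect measures (names invented).
    plan = {
        "written communication": [
            ("brief rewrite rubric", "direct"),
            ("essay exam rubric", "direct"),
            ("graduate survey", "indirect"),
        ],
    }

    for outcome, tools in plan.items():
        kinds = {kind for _, kind in tools}
        # At least three tools, and both measure types represented.
        assert len(tools) >= 3 and kinds == {"direct", "indirect"}, outcome
        print(outcome, "is triangulated with", [name for name, _ in tools])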
Finally, while there are general principles and best practices to
consider in designing assessment methods, which have been discussed
in this Part, the ABA acknowledges that there is no uniform method to
conduct assessment, and no specific measures are required by the
Assessment Standards. Rather, this aspect of outcomes assessment
should be school-specific.
102
Part IV will explore rubrics in more detail
as one possible direct assessment measure law schools can consider
using, especially given that many faculty already design or use this tool
in their classrooms.
IV. RUBRICS AS AN ASSESSMENT METHOD
Rubrics are the most common direct assessment method that can be
used for both individual student assessment and institutional
assessment.
103
Rubrics are also tools that many professors are already
familiar with creating and using in all types of law school courses,
which is particularly important when it comes to the goal of working
from existing resources when trying to comply with the Assessment
98. SHAW & VANZANDT, supra note 4, at 13, 111 (emphasizing that the sampling of
outputs used must “represent the characteristics of the student body as a whole” in order
to be reliable).
99. Id. at 112 (“Even if it is extremely well designed and well executed, no single
tool/assessment activity can provide the comprehensive view needed to determine
whether a criterion is being achieved.”); see also Jones, supra note 8, at 101.
100. SHAW & VANZANDT, supra note 4, at 112; see also id. at 13 (explaining that using
multiple assessment measures yields a “more nuanced view of student achievement of the
learning outcome” in question).
101. Id. at 112.
102. ABA June 2015 Guidance Memo, supra note 5, at 5.
103. Hamm, supra note 2, at 375.
Standards. This Part (A) first discusses rubrics generally,
104
and then
(B) provides a detailed case study of how the UK Law legal writing
faculty developed a rubric for its LRW Course.
A. Rubrics Generally
“A rubric is a set of detailed written criteria used to assess student
performance.”
105
In other words, in the most general sense, an analytic
rubric is a method of setting out the specific expectations for an
assignment in a way that divides the assignment into its parts and
conveys “a detailed description of what constitutes acceptable and
unacceptable levels of performance for each of those parts.”
106
Rubrics
can be used to determine a numerical score or letter grade for an
assignment through application of the articulated criteria (or
descriptions) to student work product.
107
Moreover, given the way
rubrics can lay out levels of performance for knowledge, skills, and
values, and indicate what competent performance looks like for each,
they can also be used to measure student achievement of learning
outcomes for purposes of course or law school assessment.
108
The
assessment connection is discussed in this Part where needed to
understand rubric theory and design, and then more fully in Part V
104. This Article focuses on the analytic rubric, which looks separately at the different
relevant characteristics of a performance or product, and not the holistic rubric, which
looks collectively at the performance or product with one single overall score or overall
impression. Hamm, supra note 2, at 375 (citing ALLEN, supra note 77, at 138; BANTA & PALOMBA, supra note 12, at 100). Both may be used by law school faculty.
105. Curcio, supra note 32, at 493 (quoting Sophie M. Sparrow, Describing the Ball:
Improve Teaching by Using Rubrics—Explicit Grading Criteria, 2004 MICH. ST. L. REV. 1,
7 (2004)).
106. DANNELLE D. STEVENS & ANTONIA J. LEVI, INTRODUCTION TO RUBRICS: AN
ASSESSMENT TOOL TO SAVE GRADING TIME, CONVEY EFFECTIVE FEEDBACK, AND PROMOTE
STUDENT LEARNING 3 (2d ed. 2013). The level of detail provided in a rubric varies by
professor. For example, some rubrics focus only on acceptable levels of performance and
omit descriptions of unacceptable levels, some describe expectations with specific
reference to law or facts at issue in an assignment while others are more general in
nature, and some are written just for use by the professor when evaluating the
assignment (and not also to be shared with a student). In other words, there is no such
thing as a template for the “perfect” rubric. Thus, this Article focuses on general
principles for designing a valid, reliable, and fair analytic rubric for use with outcomes
assessment.
107. Jessica Clark & Christy DeSanctis, Toward a Unified Grading Vocabulary: Using
Rubrics in Legal Writing Courses, 63 J. LEGAL EDUC. 3, 7–8 (2013).
108. Curcio, supra note 32, at 493. Indeed, many scholars have discussed the benefits
of using rubrics as an assessment tool. E.g., SUSKIE, supra note 45, at Chapter 9; BEST
PRACTICES, supra note 8, at Chapter 7; BANTA & PALOMBA, supra note 12, at Chapter 12;
Clark & DeSanctis, supra note 107, at 3–5.
when the connection to the Assessment Standards is explored in more
detail.
Rubric design is a detailed process with several stages. First, the
designer identifies the levels (or scales) of performance that will be used
(e.g., mastery, progressing, and emerging; or distinguished, proficient, intermediate, and novice).
109
Next, the designer sets out the categories (or
dimensions) to be evaluated in the assignment, which are usually tied
to one or more learning outcomes for the course (individual student
learning) or institution (law school assessment).
110
This tie to a learning
outcome(s) is important to ensuring the rubric’s validity as an
assessment measure because the rubric must actually evaluate, or
assess, what is being taught.
111
Under each category, the designer must
then draft narratives that explain what constitutes each level of
performance.
112
This is referred to as criterion-referenced (versus
norm-referenced) assessment, which means that competency is
measured by looking at whether a student satisfies certain
requirements for the dimension that are set by the assessor(s).
113
109. STEVENS & LEVI, supra note 106, at 8–9; Curcio, supra note 32, at 496–97, 499.
110. STEVENS & LEVI, supra note 106, at 10; Clark & DeSanctis, supra note 107, at 9–10; Curcio, supra note 32, at 499–501.
111. See MUNRO, supra note 15, at 106; see also Curcio, supra note 32, at 499–501
(providing examples of rubric narratives that are tied to specific learning outcomes). It is
also important to make sure the rubric is broken down into a sufficient number of
categories so that there are not too many dimensions, or topics, covered in one category.
Otherwise, the rubric may become too confusing or cumbersome to use when evaluating a
student output that will demonstrate numerous competencies, such as an essay exam or
legal document.
112. STEVENS & LEVI, supra note 106, at 10–14. In doing so, consider what knowledge,
skills, and values students will need to have or develop to successfully complete the tasks
associated with the assignment, and identify what types of evidence will show that
students have accomplished those tasks (and related student learning outcomes). See id. at 29–38. One critique of rubrics as an assessment tool is that their use of categories or narratives is too rigid or standardized. Deborah L. Borman, De-grading Assessment: Rejecting Rubrics in Favor of Authentic Analysis, 41 SEATTLE L. REV. 713, 730–31 (2018)
(arguing that rubrics cannot capture the “subjective component to grading [legal writing]
assignments” like a more holistic evaluation can). However, as discussed in more detail
below in Parts IV(B) and V(A), the key is structuring and dividing the rubric categories to
allow for capturing variation and nuance in legal analysis where it arises, and drafting
the corresponding performance level narratives so they clearly describe the legal reader’s
common expectations for analytical writing while using the professor’s preferred
language.
113. SHAW & VANZANDT, supra note 4, at 93. Some casebook professors may also use
the term “rubric” when referring to the grading tool created for evaluating final exam
essays. By definition, however, a rubric is a criterion-referenced assessment tool. Thus, if
the grading tool is being used to assign grades in a norm-referenced framework, then it is
not really a “rubric” as defined and used in this Article.
Narrative content and clarity are important for purposes of fairness, as
the criteria for each performance level must be easily understood by the
evaluator (and the student for individual student assessment), and
reliability, given that the evaluator must be able to apply the criteria
consistently across outputs and at different points in time.
114
Consistent
application of the assessment measure is particularly relevant for law
school assessment, because there are likely to be multiple evaluators
involved.
115
Finally, if the assignment is also being scored or graded, the
designer ends by assigning a narrow point range to each rubric category
and each level within that category.
116
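Viewed as a data structure, the design stages just described yield a grid: categories crossed with performance levels, each cell holding a narrative (and, where the rubric is also used for grading, a point range). The sketch below models only that structure; the abbreviated narratives are invented and do not reproduce any school’s actual rubric text.

    # Illustrative model of an analytic rubric: categories crossed with
    # performance levels, each cell holding a narrative (text abbreviated
    # and hypothetical).
    LEVELS = ["beginning", "developing", "proficient", "highly proficient"]

    rubric = {
        "advanced organization": {
            "beginning": "Arguments are not ordered logically or with strategy.",
            "developing": "Some arguments could be better organized.",
            "proficient": "Arguments are ordered logically.",
            "highly proficient": "Arguments are ordered logically and strategically.",
        },
        "citation": {
            "beginning": "Citations are frequently missing or inaccurate.",
            "developing": "Citations are present but often flawed.",
            "proficient": "Citations are generally accurate.",
            "highly proficient": "Citations are accurate and properly formatted.",
        },
    }

    # A complete rubric supplies a narrative for every category at every level.
    assert all(set(cells) == set(LEVELS) for cells in rubric.values())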
In short, intentional and thoughtful rubric design can result in a
valid, reliable, and fair assessment measure. Section B will flesh these
ideas out, and respond to related critiques, using a specific example.
B. Specific Rubric Example
In 2012, UK Law’s legal writing faculty set out to design a series of
rubrics to use for all seven or eight (given the year) sections of the LRW
Course, and did so with two goals in mind. First, the designing faculty
wanted a way to reliably and fairly grade the students’ major writing
assignments, which are standard across all sections. Second, as the
Director of the LRW program, I wanted to share whether students were
achieving the student learning outcomes for the course as part of a
report I was writing to evaluate the success of changes made to the
LRW Course. In other words, the legal writing faculty had already
engaged in the first stage of outcomes assessment, identifying student
learning outcomes for the LRW Course, and we wanted to engage in the
second stage by using a rubric as the direct assessment measure for
discerning whether our students were accomplishing those learning
outcomes.
117
While it was a time-intensive endeavor on the front-end,
114. Id. at 111.
115. Id.; see also SUSKIE, supra note 45, at Chapter 15; CARNEGIE REPORT, supra note 8, at 170–71; BEST PRACTICES, supra note 8, at 243–45.
116. Clark & DeSanctis, supra note 107, at 8–11. Again, the Assessment Standards do
not require that the underlying assessment source (output) be a graded assignment, much
less that the assessment measure also be used for grading. ABA STANDARDS, supra note
24, at 3 (“Law schools are not required by Standard 314 to use any particular assessment
method.”); id. at 24 (“The methods used to measure the degree of student achievement of
learning outcomes are likely to differ from school to school and law schools are not
required by this standard to use any particular methods.”).
117. The learning outcomes we identified are common ones for a foundational legal
research and writing course, including: reading, comprehending, and writing about legal
authorities; working with the analytical paradigms customarily used by U.S. lawyers;
identifying the expectations of the legal reading audience; effectively organizing the legal
and it has required tweaks along the way, the resulting rubrics are a
valuable and successful tool that have been embraced by both the
faculty and students who use them.
118
The remainder of this Section
details the collaborative and thoughtful process the designing faculty
used when creating the rubrics, focusing on the rubric used for the final
writing assignment of the LRW Course.
119
As an initial matter, the designing faculty selected three existing
writing assignments that would be the assessment sources for the
rubric project. Specifically, we selected two predictive writing
assignments that involved rewriting an informal and formal office
memorandum in the fall, and one persuasive writing assignment that
involved rewriting an appellate brief in the spring.
120
The appellate
brief rewrite is also the final major writing assignment for the year-long
course and the score is factored into the students’ overall course grade,
which means the students’ work product would reflect many of the
topics taught in the course and students would be motivated to do well
on the assignment. This made the corresponding rubric prime for
meaningful assessment of whether students had achieved many of the
learning outcomes for the LRW Course. Professor VanZandt, who has
written extensively on outcomes assessment, agrees that memos and
briefs are “excellent,” direct, embedded assessment methods that can be
used for the dual purpose of grading and assessment.
121
Thus, this
analysis at both the large and small scale levels; creating accurate citations; and using
proper grammar and punctuation. See AM. BAR ASS’N SECTION OF LEGAL EDUCATION AND ADMISSIONS TO THE BAR, SOURCEBOOK ON LEGAL WRITING PROGRAMS, at 5–12 (Eric B. Easton et al. eds., 2d ed. 2006) (hereinafter ABA SOURCEBOOK). We added the learning
outcomes to our course policies & procedures.
118. The legal research faculty who teach the legal research component of the LRW
Course underwent a similar process to design a rubric for the major research
assignments, with similar goals in mind. However, that process and resulting rubric
exceed the scope of this Article.
119. Although the rubric project began before the Assessment Standards were enacted,
and was not developed with those specific standards in mind, the designing faculty did
rely on outcomes assessment literature and best practices for rubric design.
120. The rewrites occur after the students have received written feedback on the
initial memos or brief and conference with the writing professor about that feedback.
While the rewrite assignments are scored and factor into the final course grade, the initial
assignments are worth little or no points, because the primary goal is for the students to
focus on incorporating the formative feedback into the rewrite. In other words, the initial
assignments are what Professor Duhart refers to as “low-stakes assignments” where
“[t]he goal is to provide students an opportunity to practice—and even ‘fail’—with very
little risk.” Duhart, supra note 44, at 493 (internal quotation omitted); see also Borman,
supra note 112, at 716 (asserting that removing numbers as evaluation allows students to
focus on the feedback rather than the score for purposes of improving analytical writing).
121. VanZandt, supra note 95, at 342.
Article will focus on the design process we used for the appellate brief
rewrite rubric.
122
Next, the designing faculty dove into the large scale organization of
the rubric, which is reflected in Figure 1. After some discussion, we
settled on the four performance levels (or scales) to use across the top of the rubric: beginning, developing, proficient, and highly proficient.
123
With the levels set, it was time to identify the categories to be evaluated
in the assignment, and thus included along the left-hand side of the
rubric. We started by creating categories for each component or part of
the appellate brief assignment. For example, we had categories for the
shorter initial parts of the brief, such as the Statement Concerning Oral
Argument and the Question Presented, along with longer and more
substantive parts of the brief, like the Statement of Facts and the
Argument.
124
Moreover, because the Argument is the most important
and complex part of the brief, as it sets out the student’s legal analysis
(including efforts to incorporate techniques for subtle persuasion), we
further broke that part of the brief down into several organizational and
substantive categories for the rubric (specifically, deductive
organization, advanced organization, rule statements, rule explanations
or explanatory synthesis, and application of the rule to the client’s facts
using rule-based and analogical reasoning).
125
We ended this phase of
122. That said, we used a similar process for the memo rubrics, using the same four
levels of performance and substantially similar narrative content for the organization,
content, and mechanics of the legal analysis. This is why students (and faculty) could
track progress over the duration of the entire course, which is called “developmental
assessment.” VanZandt, supra note 95, at 340 (citing ALLEN, supra note 77, at 9); see also BEST PRACTICES, supra note 8, at 245–47 (noting development of expertise occurs over time, “and there are stages with discernable differences” that should be communicated to students). The benefits of developmental assessment are discussed in more detail in Part V.
123. We intentionally declined to use a term like master or mastery, because a
first-year foundational course like legal research and writing is not about mastering
knowledge, skills, or values. Instead, it is about introducing new, core skills and
techniques for our novice legal writers to learn and practice. Later courses are needed to
give students a chance for additional practice as they progress toward competency. See
DEBORAH MARANVILLE, ET AL., BUILDING ON BEST PRACTICES: TRANSFORMING LEGAL
EDUCATION IN A CHANGING WORLD 123 (LexisNexis 2015) (“The best practice is for
students to have at least one significant writing experience each semester of law
school . . . .”).
124. For the memo rubrics, we included the common initial parts of an office
memorandum (Issue, Brief Answer, and Statement of the Facts).
125. For the memo rubrics, we did the same thing with the Discussion section of the
office memorandum. Again, breaking the rubric categories down into discrete topics, or
even sub-topics, ensures that the evaluator is not left trying to assess too many different
ideas or techniques within one category, which makes the feedback (and any resulting
score) more focused and fair, and thus more likely valid.
rubric design by identifying categories that would apply to the entire
brief such as legal citation and formatting. Finally, because the rubric
would be used for individual student assessment, we confirmed that
each rubric category tied back to one or more of the course learning
outcomes.
126
Doing so ensures the validity of the rubric as an
assessment method because there is a direct tie between what is being
taught in the course, and what should be reflected in the writing
assignment to be assessed by the rubric.
127
Figure 1: Large Scale Organization of UK Law Appellate Brief Rewrite Rubric

Performance levels (columns): Beginning; Developing; Proficient; Highly Proficient.

Categories (rows): Cover; Introduction; Statement Concerning Oral Argument; Statement of Points & Authorities; Question Presented; Statement of the Case (Facts); Organization of the Argument (CREAC); Advanced Organization of the Argument; Argument Content (Persuasive Headings); Argument Content (Applicable Standard of Review and Rule Statements); Argument Content (Rule Explanations/Explanatory Synthesis); Argument Content (Rule Applications/Rule-Based Reasoning and Analogical Reasoning); Conclusion; Clarity & Conciseness; Mechanics (Grammar & Punctuation); Mechanics (Polish); Mechanics (Citation); Formatting for Brief.

126. For example, one of the course learning outcomes states that students should be able “to design the organization of legal analysis using effective, reasoned choices that anticipate the expectations of the legal reading audience and are easy to follow from the perspective of flow and logic.” LRW Course Policies & Procedures (on file with the author). This outcome aligns with the rubric’s two organization categories: deductive organization (following a paradigm such as IRAC or CREAC); and advanced organization (further explored in Figure 2). Another outcome calls for students to be able to “provide accurate citations where needed by employing the conventions of the Bluebook and local citation rules.” Id. This outcome aligns with the rubric’s citation category.

127. See Sparrow, supra note 105, at 18 (“We may have already identified our learning goals to students in our syllabus and other materials . . . [h]owever, breaking these goals into more specific components that describe what the students have learned and how we know if they have demonstrated that learning forces us to think at a deeper level.”) (emphasis added); see also supra note 85.
With the rubric categories identified and aligned with the student
learning outcomes for the LRW Course, the designing faculty turned to
fill in the content of the rubric, which, for us, was the most
time-intensive yet affirming aspect of rubric design. In other words, we
had to draft the narrative that describes each level of performance for
each rubric category. An example can be found in Figure 2.
Collaboration was crucial here, because the rubric would be used by all
of the legal writing faculty, and thus each needed to understand and
agree with the narratives as written in order to ensure consistent, and
thus reliable, application of the rubric to the briefs written by their
students.
128
We started by setting out our collective expectations for
student work that reflects application of the skill(s) or technique(s) at
issue for each rubric category at the beginning, developing, proficient,
and highly proficient levels. In other words, we drafted narratives to
reflect common heuristic strategies we teach our students for
128. See STEVENS & LEVI, supra note 106, at 178 (“Using rubrics created by those with
a stake in the program being assessed also begins a much-needed process in changing
how assessment is carried out, presented, and acted on.”); see also BANTA & PALOMBA,
supra note 12, at 32, 10203 (discussing importance of having high level of consistency
among different rubric raters, and noting lack of sufficient local input when discussing
potential rubric issues such as inter-rater reliability).
organizing and writing their legal analysis.
129
This involved
anticipating common errors or problems that first-year students often
demonstrate on the way to proficiency (for the beginning and developing
levels), reaching agreement on what performance evidences proficiency,
and deciding what performance would demonstrate the highly
proficient level (that is, the ultimate goal for legal writers).
130
For
example, we agreed on what performance would demonstrate high
proficiency in using the advanced organizational techniques covered in
the course. Next, we agreed on what a paper would look like that
demonstrated proficiency in the techniques. Then we talked through
how a paper would differ if still in the developing and beginning stages
for the same techniques.
131
Figure 2 reflects the narratives for the
“Advanced Organization of the Argument” category.
129. As Professor Beazley explains, legal writing faculty teach students “heuristic
strategies,” which she “describe[s] as a principle of providing course content that gives
students ‘generally effective’ techniques for accomplishing certain common tasks.” Mary
Beth Beazley, Better Writing, Better Thinking: Using Legal Writing Pedagogy in the
“Casebook” Classroom (Without Grading Papers), 10 LEGAL WRITING 23, 46 (2004). The
strategies do not dictate the content, and thus do not give the answer or “wreck the
curve,” but instead offer “a set of questions [for the writer] to answer in particular
rhetorical situations.” Id. at 46, 64–65. As such, our narratives do not “give the answer
away” to the students, nor do they necessarily “decrease[ ] students’ ability to practice
critical thinking skills,” which are both critiques cited for rubrics. Borman, supra note 112, at 741. Instead, they call on both the students and professors who use them to think
more deeply about how certain aspects of the writing assignment compare to the well-
stated expectations set out in a relevant rubric category. See Curcio, supra note 32, at 497
(explaining that “rubrics allow assessment via descriptors of higher-order thinking rather
than via correct versus incorrect answers”).
130. See BANTA & PALOMBA, supra note 12, at 100 (“Well-designed rubrics contain
specific descriptive language about what the presence or absence of a quality looks like.”);
STEVENS & LEVI, supra note 106, at 11 (preferring rubrics that contain “a description of
the most common ways in which students fail to meet the highest level of expectations”);
Clark & DeSanctis, supra note 107, at 8–9 (explaining that “narrative descriptions
mirrored the material professors taught in classes leading up to completion of the
particular writing assignment”).
131. When creating a rubric for law school assessment, Professors Shaw & VanZandt
suggest waiting to draft the narratives until after having read a few student outputs.
SHAW & VANZANDT, supra note 4, at 142 (discussing how to make a rubric “hot,” or
complete, during the implementation stage of outcomes assessment). We effectively did
this during the design stage, because when drafting the narratives, we considered what
we had seen in appellate brief rewrites submitted by students in past years. See id.
(discussing the value of experienced teachers with specialized expertise when drafting
rubrics for assessment).
Figure 2: Excerpt from UK Law Appellate Brief Rewrite Rubric

Category: Advanced Organization of the Argument

Developing: Some arguments could be better organized logically or with strategy. Roadmap paragraphs may be missing where needed or could usually be used more effectively. Paragraphing and/or use of strong topic (thesis) sentences could often be improved.

Proficient: Arguments are ordered logically, but may not always be ordered strategically where possible. Roadmap paragraphs are usually used effectively where needed. A few paragraphs may have been better executed (in terms of length and unity). There likely could be improved use of strong topic (thesis) sentences or evident transitions in a few instances.

Highly Proficient: Arguments are ordered logically and strategically, such as strongest arguments first, unless there is a threshold matter or logic dictates otherwise. Roadmap paragraphs (umbrella passages) are used effectively where needed. Paragraphing is effective in terms of length and unity. The paragraphs within each CREAC are organized around main ideas, such as the rule or parts of the rule, not the cases. Transitions are used where needed. Topic (thesis) sentences are strong in that they convey main ideas.
One of the most rewarding aspects of this stage of the design process
was that the designing faculty realized it was easier to reach agreement
on narrative content than initially anticipated. This gave the group
confidence that while we may approach and teach the foundational
skills and techniques relevant to a first-year legal writing course in
different ways (giving thought both to our student learners and our
teaching styles), we not only agreed on the course goals and related
learning outcomes, but also on how our students could demonstrate
achievement of the outcomes in their written work product. In fact, we
regularly reached consensus on the expectations for each performance
level for each rubric category.
132
This is not altogether surprising, given
that the heuristic strategies we teach our students are fairly common
from professor to professor, and they are tied to the idea that effective
legal writing anticipates the expectations of the legal reader.
133
Where
further discussion did ensue, it was often over precise language to use
rather than broad ideas to include. For example, for the advanced
organization of the Argument category, some faculty preferred the term
topic sentence while others preferred thesis sentence (often based on
the term used in a professor’s chosen text and classroom terminology).
This was an easy fix, however, by drafting a narrative that
encompasses both terms, thus satisfying all involved faculty and
ensuring all could consistently, and thus reliably, apply the rubric.
Refer to Figure 2 above. Thus, rubrics can be designed to avoid the
132. We are not alone in finding more commonality than first expected. See STEVENS & LEVI, supra note 106, at 69 (explaining that when several professors who taught the same
course (but using different approaches, assignments and texts) sat down to design a
rubric, “they differed far less than expected,” and with some discussion and assistance
from an outside consultant, were able to produce a rubric acceptable to all); see also id. at
24 (describing the reaction of faculty who worked together on a single rubric for a shared
assignment as “surprised and reassured to discover that their standards and expectations
were not wildly out of line with those of their colleagues”).
133. See Beazley, supra note 129, at 53 (explaining that legal writing must consider
the reader’s needs and expectations when it comes to form, structure and content); Mary
Beth Beazley, Finishing the Job of Legal Education Reform, 51 WAKE FOREST L. REV. 275,
303, 310 (2016) (discussing legal writing scholarship on the substance of legal writing,
which says that students must consider the needs of their readers); see also Beazley,
supra note 129 (discussing legal writing professors’ common use of heuristic strategies).
For example, we all teach a similar heuristic strategy for finding the information that a
reader expects from past cases used to support legal analysis (for past case descriptions or
case discussions). There are similar expectations across professors for what type of
information is necessary to include in the case discussions that make up a rule
explanation—namely, the court’s holding and the court’s reasoning with related trigger facts—even though the actual content to be drafted by the student writer will vary
depending on the case, the legal issue being explained, and the legal problem being
resolved. Beazley, supra note 129, at 46, 68 (explaining the relevance of case descriptions
to legal analysis). The narratives we drafted to embody the particular expectations
described here are set out in Figure 5.
concerns some have raised about rubrics being too rigid to use for
meaningful assessment.
134
Appropriate and inclusive narrative
language also results in a more fair assessment method because the tool
must speak the language that students are familiar with and
understand.
135
Finally, because the rubric would also be used to score the appellate
brief rewrite assignment, we assigned each rubric category a total
number of points possible, as reflected in the rubric excerpt in Figure
3.
136
The points assigned to a particular category reflect the focus or
priority given to the skills and techniques.
137
In doing so, we considered
the relative importance of the category vis-à-vis the assignment and
the related student learning outcomes, the amount of time spent on
that topic throughout the course, and the number of opportunities
students had to practice the relevant skills/techniques leading up to the
final writing assignment.
138
For example, the Cover Page and
Conclusion are both worth one possible point each, and the Question
Presented is worth three possible points. In contrast, the content of the
Argument is worth twenty-one possible points (divided into five possible
points for rule statements, eight possible points for rule explanations,
134. See Borman, supra note 112, at 730–31, 740 (asserting that rubrics are too
standardized and cannot capture the “subjective component to grading [legal writing]
assignments” like a more holistic evaluation can). The point is that the narratives we
drafted do not use words like “effective” or “good” in the abstract, but instead more fully
convey the legal reader’s expectations for successful use of the skill or technique in
question. See Beazley, supra note 129, at 66 (discussing the “rules” of analytical writing).
135. See MUNRO, supra note 15, at 109. Some faculty engage students in designing a rubric, including categories and narrative content, where the professor gets the last word on what to include or omit in the rubric’s final version. This approach could help with rubric
fairness, as students are more likely to understand the narratives they help draft.
136. As noted above, students already received written feedback on an earlier version
of their appellate brief, which had little impact on their course grade. Moreover, in
addition to the score, students also receive formative feedback, which is discussed more in
Part V.
137. One noted concern is that students will focus only on the categories with high
point totals, but this has not been my experience in practice. Some professors may even be
okay with a student who takes this approach, given that the high point total categories
effectively reflect the primary goals of the assignment. And again, if the rubric is only
being used for outcomes assessment (not also for grading), the points are omitted.
138. See Clark & DeSanctis, supra note 107, at 8 (explaining their goal in designing a
rubric for use by multiple faculty teaching different sections of the same first-year LRW
course “was to come to a uniform conclusion for each assignment about the value of each
[rubric] component related to the time spent teaching it”); see also STEVENS & LEVI, supra
note 106, at 22 (explaining that assigning points or percentages according to the
importance of the rubric category can still message value for substantive and technical
aspects of the writing).
and eight possible points for rule applications). We then spread the
available points for a category over the four progress levels. We used
small ranges in the first three stages (beginning, developing, and
proficient) to give professors some flexibility to account for variation or
nuance even within papers that fall within the same progress level.
139
We declined to use a range for the highest progress level: if a paper reflects high proficiency for the category, there is no need to make
further gradations.
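In arithmetic terms, the scheme just described gives each category a point budget, spreads small ranges across the first three levels, and fixes a single value at the top. The sketch below tracks the five-point category shown in Figure 3 and the quarter- and half-point flexibility mentioned in a related footnote; the helper function itself is hypothetical.

    # Sketch of the scoring scheme described above: small point ranges for
    # the first three levels, a single value at the highest level.
    point_ranges = {
        "advanced organization": {  # five points possible
            "beginning": (0, 0),
            "developing": (1, 2),
            "proficient": (3, 4),
            "highly proficient": (5, 5),
        },
    }

    def score(category, level, points):
        lo, hi = point_ranges[category][level]
        if not (lo <= points <= hi):
            raise ValueError(f"{points} is outside the {level} range {lo}-{hi}")
        return points

    # Quarter and half points are allowed within a level's range.
    print(score("advanced organization", "proficient", 3.5))  # 3.5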
Figure 3: Excerpt from UK College of Law Appellate Brief Rewrite Rubric

Category: Advanced Organization of the Argument (Five Points Possible)

Beginning (Zero Points): Arguments are not ordered logically or with strategy. Roadmap paragraphs are likely missing where needed. Paragraphs likely could be better executed. Topic (thesis) sentences are usually missing or fail to introduce the topic of the paragraph.

Developing (One to Two Points): Some arguments could be better organized logically or with strategy. Roadmap paragraphs may be missing where needed or could usually be used more effectively. Paragraphing and/or use of strong topic (thesis) sentences could often be improved.

Proficient (Three to Four Points): Arguments are ordered logically, but may not always be ordered with strategy where possible. Roadmap paragraphs are usually used effectively where needed. A few paragraphs may have been better executed organizationally (in terms of length and unity). There likely could be improved use of strong topic (thesis) sentences or evident transitions in a few instances.

Highly Proficient (Five Points): Arguments are ordered logically and strategically, such as strongest arguments first, unless there is a threshold matter or logic dictates otherwise. Roadmap paragraphs (umbrella passages) are used effectively where needed. Paragraphing is effective in terms of length and unity. The paragraphs within each CREAC are organized around main ideas, such as the rule or parts of the rule, not the cases. Transitions are used where needed. Topic (thesis) sentences are strong in that they convey main ideas.

139. See Clark & DeSanctis, supra note 107, at 9 (explaining that using a range of points gave professors flexibility to distinguish between two or three papers that all met the narrative criteria for a rubric subcategory, but yet were still “distinguishable from each other as more or less successful given those criteria”). We kept the point range small and contemplated that a professor could award quarter and half points if needed for flexibility. In my experience, students do not try to nit-pick about the individual score for a category or the overall score on the rubric, not even in terms of trying to gain a quarter or half point more. This is likely because the basis for the score is clearly supported by the feedback provided in the completed rubric or supporting written feedback embedded in the related writing assignment, which is discussed in more detail in Part V(A) below.
It is important to note that while rubrics are a common assessment
method used in legal research and writing courses, we are not alone
here. Professors routinely design and use rubrics to assess student work
in a variety of law school courses.
140
While the number of progress
categories and components may vary depending on what learning
outcomes are being measured and what type of assessment source
(output) is being evaluated, the underlying design process is the same.
The content and level of detail will also turn on the designing faculty
member and the purpose of the rubric, be it one to distribute to
students while the assignment is ongoing, one that is used only by the
professor for grading, or one that is designed specifically to assess
student learning outcomes.
141
For example, Professor Duhart has
shared a rubric she designed to evaluate a required practice essay in
her Constitutional Law course, which is divided into categories for
140. Curcio, supra note 32, at 498 (explaining that rubrics “allow for nuanced
assessment of skills acquisition over a wide range of courses as well as a wide range of
learning outcomes”).
141. STEVENS & LEVI, supra note 106.
format, legal issue, statement of the rule, application of the rule to
hypothetical facts, conclusion, and writing style.
142
The narratives focus
just on what the professor is looking for in a response (that is, what is
expected) in terms of student performance, and they are specific to the
law and legal authorities relevant to the question regarding the
commerce clause.
143
In short, the message here is that there could be a number of faculty
members with experience designing or using potentially relevant
rubrics in their classroom. Thus, law schools should look broadly across
the faculty for information that could prove useful in responding to the
Assessment Standards. Some specific ideas for how a law school might
use existing expertise and resources like that mentioned here will be
addressed in more detail in Part V below.
V. RESPONDING TO THE ABA ASSESSMENT STANDARDS
This final Part shares ideas on how a rubric like the examples
described in Part IV can also be used when responding to the
Assessment Standards, even though originally created for another
purpose. Doing so can save precious time in a busy law school while also
resulting in meaningful assessment. Beginning from within, so to
speak, could also help with buy-in from faculty, which is important
when trying to build a culture of assessment in a law school.
144
It is worth reemphasizing that the primary example used in Part IV
is not meant to suggest by any means that all assessment work should
fall to the legal research and writing faculty at a law school, faculty who
often are already asked to take on more than their fair share of
institutional work while being paid less and having less security or
status. Rather, to create a productive and meaningful culture of
assessment, assessment expertsand the ABAcounsel that all faculty
should be involved.
145
And as explored more in this final Part, a variety
of law school faculty could have knowledge and experience that can
contribute to the outcomes assessment endeavor.
146
142. Duhart, supra note 44, at 513–14 and Appendix E.
143. Duhart, supra note 44, at Appendix E.
144. Cunningham, supra note 4, at 424 (“One way to combat faculty perceptions that
assessment is externally driven is to use data from locally developed and course
embedded assessments rather than tests that are developed from the outside.”).
145. SHAW & VANZANDT, supra note 4, at 47–48; ABA June 2015 Guidance Memo,
supra note 5, at 3.
146. Professor Funk reiterated the call to avoid reinventing the wheel in her recent
text, saying “[t]he goal is not to spend an inordinate amount of time and energy creating
something to be used for the sole purpose of assessment, but rather to harness what
Moreover, while rubrics are by no means a magic bullet for outcomes
assessment (or grading, for that matter), and they may not be
appropriate for evaluating every assignment or for assessing every
learning outcome, they can play an important role in responding to the
ABA’s assessment mandate. With that in mind, this Part begins in
section (A) with a discussion of how a rubric like the examples
discussed in Part IV can be used in responding to Standard 314’s call
for individual student assessment that involves formative assessment,
and then turns in section (B) to suggest how those rubrics could also be
adapted for use in conducting law school assessment as required by
Standard 315. While the goals of individual student assessment and
law school assessment differ, there is some relationship between the
two, and this Article seeks to show how each can serve the other. This is
particularly helpful in busy law schools with limited resources.
Information gained from law school assessment can “trickle down to
benefit students at the individual level” because the faculty may opt to
make changes to curriculum or teaching methods in light of that
information.
147
Moreover, “the outputs gathered as a result of individual
student assessment can be repurposed to assist in [law school]
assessment[,]” and most student outputs (writing assignments, exams,
etc.) will already be embedded in courses.
148
In addition, the rubric project described in Part IV serves as just one
specific example of how a law school can benefit from its own existing work and experience, and even share that work with other schools that are faced with the same requirements, challenges, and opportunities afforded by the Assessment Standards. It is not meant to be a blueprint that will work for every law school, but instead, to add to
“the much-needed dialogue of shared experiences and methodologies of
assessing student learning outcomes and to show how simple, efficient,
and valuable the process can be.”
149
A. Individual Student Assessment—Standard 314
As discussed above, Standard 314 calls for law schools to engage in
individual student assessment that includes formative assessment.
150
That is because the ABA guiding principle for outcomes assessment
calls for schools to “shift the emphasis from what is being taught to
[professors] are already doing in the classroom to provide the [assessment] information
you need.” FUNK, supra note 60, at 63–64.
147. SHAW & VANZANDT, supra note 4, at 16.
148. Id.
149. Roberts, supra note 94, at 459.
150. ABA STANDARDS, supra note 24, at 23.
what is being learned[.]”
151
Rubrics like the ones described in Part IV
can be a meaningful way to respond to Standard 314. In addition to (or even in lieu of) using rubrics to grade, “[r]ubrics . . . are also valuable
pedagogical tools because they make us more aware of our individual
teaching styles and methods, allow us to impart more clearly our
intentions and expectations, and provide timely, informative feedback
to our students.”
152
As an initial matter, providing a rubric to students before the
assignment is due gives them clear notice of the professor’s expectations
regarding performance, and can form the basis of formative feedback.
153
This also responds directly to the fairness principle of assessment
method design, because the content of a rubric can help level the playing field for all students, regardless of background and experience, by translating what teachers are talking about in the classroom while there is still time to ask questions about the rubric’s content before the assignment is due.
154
For example, UK Law students receive the
appellate brief rewrite rubric well before the writing assignment is due
so that they can get a sense of professor expectations for the
assignment, specifically using the narratives in the highly proficient
progress level of each rubric category as a “roadmap” of what to strive
for in writing and rewriting the brief.
155
We also use class time to
discuss the narratives in the highly proficient progress levels and their
connection to legal writing techniques or heuristic strategies students
are trying to use when writing the assignment. This way students can
more clearly see the connection between what they are learning and
what they will be evaluated on.
156
One colleague gives her class an anonymous excerpt of the Argument section from a former student’s appellate brief, and then has the students complete the relevant rubric category for the sample as part of an in-class exercise to evaluate the way the writer used techniques for proving the rule statement using past cases to apply it (that is, to identify what progress level the students would assign to the paper for the rule explanation category of the rubric).157

151. ABA June 2015 Guidance Memo, supra note 5, at 3.
152. STEVENS & LEVI, supra note 106, at 15.
153. Clark & DeSanctis, supra note 107, at 8, 16. As discussed above, when rubrics convey heuristic strategies that students should try to apply, versus just the content sought in an assignment or exam answer, there is no risk that they will somehow give students “the answer” if provided in advance. Supra note 129.
154. See STEVENS & LEVI, supra note 106, at 26.
155. See STEVENS & LEVI, supra note 106, at 19; see also Sparrow, supra note 105, at 9, 23, 35 (noting that this approach works for a writing assignment or a final exam in all types of courses). Thus, contrary to the critique that providing a rubric to students before the assignment provides information that will “compromise[] the quality of teaching and standardize[] learning[,]” Borman, supra note 112, at 741, providing the information in advance can actually encourage active learning when students use the rubric to identify and raise questions with the professor.
156. STEVENS & LEVI, supra note 106, at 19 (“[B]ecause we discuss the rubric and thereby the grading criteria in class, the student has a much better idea of what these details mean.”).
The exercise often raises questions students have about their own working draft, which are addressed globally in class or individually during office hours (and either way, before the assignment is due). This kind of exercise responds directly to the goal of using formative feedback as a way for students to “become ‘metacognitive’ about their own learning[.]”158

Second, a completed rubric that is returned to the student after the assignment is submitted responds directly to Standard 314’s call for formative assessment, because it provides individual feedback about that student’s performance on a specific assignment and while the course is ongoing.159
Take the appellate brief rewrite from Part IV as an example. The completed rubric conveys the progress level achieved for each category of the rubric, and thus for each underlying skill or technique discussed in the narrative for that category.160 And each rubric category is tied to one or more student learning outcomes for the LRW Course.161 When completing the rubric, the professor can engage with the narrative text to make sure the student learns why the paper reflects a particular progress level, which is the most meaningful kind of formative assessment.162
Figure 4 shares an example of one way to provide that meaningful feedback in an excerpt of a completed rubric (specifically, for a student’s use of advanced organization in the Argument). Thus, contrary to concerns raised by Professor Borman in her article critiquing rubrics as an assessment tool,163 the example shows how the existing narrative text is helpful to the professor and student, and how the professor can add to it as needed to account for individuality and respond to nuances in the student’s work product.164

157. Figure 5 depicts the rubric excerpt that the students use for this exercise.
158. CARNEGIE REPORT, supra note 8, at 173; see also ABA June 2015 Guidance Memo, supra note 5, at 3 (discussing ABA reliance on the CARNEGIE REPORT).
159. STEVENS & LEVI, supra note 106, at 17–18, 78–84; see also Clark & DeSanctis, supra note 107, at 13–14 (citing Sparrow, supra note 105, at 8).
160. STEVENS & LEVI, supra note 106, at 19 (“The highest level descriptions of the [rubric categories] are, in fact, the highest level of achievement possible, whereas the remaining levels, circled or checked off, are typed versions of the notes we regularly write on student work explaining how and where they failed to meet that highest level.”). And as discussed above, each rubric category is tied to one or more student learning outcomes for the course.
161. Supra note 117.
162. STEVENS & LEVI, supra note 106, at 19 (“The student [] receives all the necessary details about how and where the assignment did or did not achieve its goal, and even suggestions (in the form of the higher levels of [performance]) as to how it might have been done better.”); see also supra Part II(B)(1) (discussing most meaningful formative assessment).
Figure 4: Sample Excerpt of a Completed Appellate Brief Rewrite Rubric

Category: Advanced Organization of the Argument (Five Points Possible; Two Points Earned)

Beginning (Zero Points): Arguments are not ordered logically or strategically. Roadmap paragraphs are likely missing where needed. Paragraphs likely could be better executed. Topic (thesis) sentences are usually missing or fail to introduce the topic of the paragraph.

Developing (One to Two Points): Some arguments could usually be better organized logically or strategically. [Refer to my related margin comment in your paper.] Roadmap paragraphs may also be missing where needed or could usually be used more effectively. [You include a roadmap, but it is missing helpful visual cues when stating the overall rule of law.] Paragraphing and/or use of strong topic (thesis) sentences could often be improved. [Your paragraphs are organized around ideas and each only takes on one main idea, however, you are usually missing a topic sentence for your rule explanation (“E”) paragraphs and sometimes also for your rule application (“A”) paragraphs.]

Proficient (Three to Four Points): Arguments are ordered logically, but may not always be ordered with strategy where possible. Roadmap paragraphs are usually used effectively where needed. A few paragraphs may have been better executed organizationally (in terms of length and unity). There likely could be improved use of strong topic (thesis) sentences or evident transitions in a few instances.

Highly Proficient (Five Points): Arguments are ordered logically and with strategy, such as strongest arguments first, unless there is a threshold matter or logic dictates otherwise. Roadmap paragraphs (umbrella passages) are used effectively where needed. Paragraphing is effective in terms of length and unity. The paragraphs within each CREAC are organized around main ideas, such as the rule or parts of the rule, not the cases. Transitions are used where needed. Topic (thesis) sentences are strong in that they convey main ideas.

163. Borman, supra note 112, at 740.
164. The relevant part(s) of the narrative is underlined, and additional text is added in blue, bracketed text. And again, the completed rubric is just one aspect of the formative assessment we provide to students. We also engage directly with the student’s text using margin comments, which is usually tied to the rubric categories (and related narratives). See STEVENS & LEVI, supra note 106, at 18 (“The use of [a] rubric does not, of course, preclude notes specific to the student that can be placed on the rubric, the paper itself, or elsewhere.”). Thus, we never feel constrained by the rubric when offering feedback on the nuances of the law or facts for a particular writing assignment, or about the student’s legal analysis, which can be noted on the rubric or the student’s paper.
In addition, the structure of the appellate brief rubric allows the professor to account for variation within an individual paper, and the content ensures the student appreciates the complexity of legal analysis. First, the rubric is designed so that the professor can signal where the student’s paper demonstrates one progress level in some aspects of a rubric category and a different progress level for others.165 Figure 5 provides an example of this fairly common situation. Here, the student demonstrated proficiency in discussing the relevant information in most of the case illustrations included in the rule explanations.166 However, the choice of cases used in supporting the legal arguments was still developing, because there were more helpful binding cases in some instances and helpful persuasive authority could have been used to supplement binding authority in others. The example also shows that a professor can complete the rubric in a way that uses the existing narrative as a start, and can then add to that language as needed to clarify the particular student’s performance (including reference to related comments the professor embedded in the margins of the student’s paper to engage directly with the text). The substance of the example also shows that, notwithstanding Professor Borman’s stated concern with rubrics, not all rubrics boil down to “[a] checklist [that] encourages one-dimensional, black-and-white thinking” or a document that makes the legal writing process look “neat” or overly simple.167 Thus, the process of completing the rubric and the way it was structured when first designed work together to allow for meaningful formative assessment.
165. See supra note 139. Thus, a rubric with this structure can react to variation in a student’s paper even when one rubric category captures more than one idea or technique, directly responding to a concern Professor Borman has raised when it comes to using rubrics for assessment. Borman, supra note 112, at 740. And if the rubric is also used for scoring, then the point range will also afford flexibility here. See supra note 139.
166. The reader’s expectation regarding the content of a rule explanation, and heuristic strategies that legal writing professors teach to help students in discerning and writing about this information, can be found above in notes 127 and 131.
167. Borman, supra note 112, at 735, 741.
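To make the point-range mechanics concrete, what follows is a minimal sketch in Python, assuming hypothetical names throughout; it is offered only as an illustration of the flexibility just described, not as the designing faculty’s actual tool. It models a rubric category whose progress levels carry point ranges, so that a score like the 4.75 points earned in Figure 5 below reads as sitting at the top of one level, just short of the next.

```python
# A minimal sketch, assuming hypothetical names; not the designing
# faculty's actual tool.
from dataclasses import dataclass

@dataclass
class ProgressLevel:
    name: str          # e.g., "Developing"
    min_points: float  # lowest score that reaches this level

@dataclass
class RubricCategory:
    name: str
    levels: list  # ordered from Beginning up to Highly Proficient

    def level_for(self, points: float) -> str:
        # Highest level whose minimum the score reaches. A 4.75 thus
        # reads as top-of-Developing, just short of Proficient -- the
        # "in between" situation shown in Figure 5.
        reached = [lvl for lvl in self.levels if points >= lvl.min_points]
        return reached[-1].name

# Point ranges mirror Figure 5: 0-1 Beginning, 2-4 Developing,
# 5-7 Proficient, 8 Highly Proficient.
rule_explanations = RubricCategory(
    name="Content of the Argument (Rule Explanations)",
    levels=[
        ProgressLevel("Beginning", 0),
        ProgressLevel("Developing", 2),
        ProgressLevel("Proficient", 5),
        ProgressLevel("Highly Proficient", 8),
    ],
)

print(rule_explanations.level_for(4.75))  # Developing (nearly Proficient)
```

Because the point ranges and the level names live in the same category, the scored rubric and the criterion-referenced reading of it cannot drift apart.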
Figure 5: Sample Excerpt of a Completed Appellate Brief Rubric (In Between Progress Categories)

Category: Content of the Argument (Rule Explanations) (Eight Points Possible; 4.75 Points Earned)

Beginning (Zero to One Point): Binding authority and persuasive authority could both be used more effectively. Additional research is needed. An explanation of the rule is completely missing in one or more instances, and where one is included, it likely could be more accurate or complete.

Developing (Two to Four Points): Binding authority is only sometimes used effectively where available, and persuasive authority could also be more effectively used to supplement binding authority where gaps exist. Additional research is most likely needed. [I offered specific thoughts on this in margin comments, especially in part I(A) of the Argument.] The statement of the rule in a section or sub-section (where applicable) could usually be better explained. In other words, only some case discussions include accurate, sufficient information about the authorities.

Proficient (Five to Seven Points): Binding authority is usually used effectively where available, and persuasive authority is often used effectively to supplement binding authority where gaps exist. The statement of the rule is explained in each section and sub-section (where applicable), but the explanation could be more complete or effective in a few instances. That said, most explanations include accurate, sufficient information about the authorities used (for prior cases, this means including the relevant facts/trigger facts, reasoning, and holding). [See margin comments for examples of where you have been complete and a few instances where you could be more complete.]

Highly Proficient (Eight Points): Binding authority is used effectively where available, and persuasive authority is used effectively to supplement binding authority where gaps exist. For each section and sub-section (where applicable) of the Argument, the statement of the rule is explained in a sophisticated manner through well-reasoned and well-written explanatory synthesis that includes an accurate discussion of the relevant information from the authorities (for prior cases, this means including the relevant facts/trigger facts, reasoning, and holding).
Moreover, the UK Law rubric project uses the favored approach of providing multiple formative assessments in the same course.168 As noted in Part III(B), the designing faculty use a similar rubric at three different points in the LRW Course: the rewrite of each of the two major assignments in the fall and the appellate brief rewrite in the spring.169
168. See supra Part II(B)(1). Use of multiple formative assessment methods that help students understand and then correct issues with legal analysis and legal writing is nothing new to legal research and writing courses like UK Law’s LRW Course (the same goes for other applied or experiential courses). See MUNRO, supra note 15, at 16 (noting that formative assessment has long been a part of clinical and legal writing programs in American law schools); see also Hamm, supra note 2, at 377 (stating that “skills professors have long been committed to the use of formative assessment”); Susan Hanley Duncan, The New Accreditation Standards Are Coming to a Law School Near You – What You Need to Know About Learning Outcomes & Assessment, 16 LEGAL WRITING 605, 621, 622 n.66 (2010) (“Traditionally, legal writing classes are designed applying many of the concepts found in the assessment literature and are excellent models to imitate.”) (citing other relevant articles in note 68).
169. First-year legal research and writing courses usually give students a series of writing assignments (often of increasing complexity) over the duration of the course, and the professor critiques each assignment (in writing or orally during a student conference) with an eye toward how the students can incorporate the feedback into a rewrite of that assignment or transfer the feedback to the next writing assignment in the course. Thus, the feedback provided encourages the students to grow and learn from their own writing strengths and weaknesses while the course is ongoing. See ABA SOURCEBOOK, supra note 117, at 24; see also Beazley, supra note 129, at 47–49 (discussing the use of writing process theory in legal writing courses, where the professors “intervene in their students’ writing before the final draft, so they can give students feedback on their research, writing and thinking”).
The rubrics use the same four progress levels, and they include similar categories and corresponding narratives for the organization, content, and mechanics of the legal analysis.170 This way students can use the completed rubrics for the fall assignments to improve their learning while the course is still ongoing in the fall and spring, which is known as developmental assessment.171 Each completed rubric shows a student which rubric categories are marked as beginning or developing, which tells the student where to focus on further practicing the skills and techniques outlined in the relevant categories (and ideally also seek professor assistance along the way) when completing future writing assignments in the course.172

170. The key difference is that the fall rubrics also include categories for the other parts of the memo, while the spring rubric omits those categories and adds in categories for the parts of the appellate brief (and enhances some narratives to reflect the transition to rhetorical writing techniques where relevant). Refer also to the discussion about the fall assignment rubrics, supra note 125.
171. Supra note 122.
172. See STEVENS & LEVI, supra note 106, at 20 (explaining that students can use the rubrics from completed assignments to draw their own conclusions about weaknesses in their work and identify plans for improvement, which “is a form of intrinsic motivation”).
For example, a student’s completed rubric
for the formal office memo rewrite may indicate that the student’s
attempt to explain the rule of law is still developing because the
discussions of past cases to apply the rule could usually be more
accurate or complete. The student can prioritize this important aspect
of legal analysis when writing the appellate brief in the spring. The
student can seek feedback on this topic in the initial version of the
appellate brief, and then has the chance to incorporate that feedback in
the rewrite. The excerpt of the completed rubric for the appellate brief
rewrite, shown in Figure 5 above, confirms that the extra focus and
practice paid off by indicating that the student’s rewrite demonstrates
proficiency in this technique because most case discussions were
complete and accurate.
Perhaps just as important, however, is that students can use the completed rubrics to self-discover their effective use of skills and techniques where a professor has marked the progress level for a
certain category as proficient or even highly proficient (especially where
that marks progress from an earlier assignment in the course). This
information can bolster the student’s confidence when using the
relevant techniques in future assignments. For example, when it comes
to the completed rubrics for the fall assignments in the LRW Course,
students have confidence to apply their “proficient” techniques to the
appellate brief assignment in the spring, and they may also transfer
that confidence into the energy needed to push themselves to move to
the next progress level on other skills and techniques that are still
developing or perhaps even beginning.173
And when it comes to the
completed appellate brief rewrite rubric, where categories are marked
as proficient or even highly proficient, students are more likely to enter
their summer jobs and later law school courses with confidence they can
successfully apply the skills and techniques related to those categories.
It is important to stop and note that not all formative feedback need be this detailed or individualized in order to be meaningful, and providing such detail may not be possible given the nature or size of a course. Indeed, a
variety of law school courses can include formative assessment
methods, and some casebook professors are already using such methods
in their classes. For example, Professor Curcio has assigned a complaint drafting exercise in her Civil Procedure classes, which calls on students “to understand and apply the procedural law of complaints as well as tort law concepts of negligence, negligent hiring and retention, and respondeat superior.”174 She has done the exercise as both graded and ungraded, and in both instances, students receive detailed rubrics.175
173. Thus, to respond to concerns raised by Professor Borman in her recent critique of rubrics, when properly designed and implemented by faculty, this assessment tool can be used by students to encourage critical thinking and aid in the “transfer of learning” through self-reflection, and thus rubrics can respond to one of her seven principles for good feedback. See Borman, supra note 112, at 733, 744–45; see also STEVENS & LEVI, supra note 106, at 21 (“Because of the rubric format, students may notice for themselves the patterns of recurring problems or ongoing improvement in their work, and this self-discovery is one of the happiest outcomes of using rubrics.”); Sparrow, supra note 105, at 23 (explaining that “rubrics encourage students to become metacognitive, or reflective, independent learners.”).
174. Curcio, supra note 46, at 163–64 (explaining that the assignment also “served as a learning tool for other procedural concepts we covered during the semester”).
175. Curcio, supra note 46, at 163–64, 174. Other ways professors may already incorporate formative feedback in their courses include assigning an in-class quiz (multiple choice or short answer), a client advisory letter, a take-home essay question, or a mid-term exam, and then providing feedback on the students’ performance through such methods as an annotated model answer, group discussion regarding strengths and weaknesses of answers, or individual feedback in rubric or narrative form. E.g., Curcio, supra note 46; Field, supra note 46. Other professors may assign third party quizzes or exercises to be completed online outside of class, such as TWEN quizzing or CALI lessons, which can also provide feedback to students. Field, supra note 46, at 431–32 & n.200 (mentioning CALI QuizWright).
Moreover, even if individual feedback is provided using a rubric, it need not be as detailed as the appellate brief rubric. For example, in addition to the final exam in an insurance law class, students could also draft a client advice letter during the course and receive written feedback on the assignment.176 A rubric for this type of assignment would not require nearly as many categories as an appellate brief involving specific formatting requirements and multiple legal issues, and could even be further limited only to feedback on the substance of the analysis (given the likely student learning outcomes for the course).177 And these are just examples. Law schools should survey their faculty to discern what types of formative assessment methods are already being used, by whom, and for what courses, and thus what existing resources and expertise may be useful for compliance with Standard 314. The key is that students receive meaningful feedback while the course is in progress, and thus while there is still time to improve student learning before the final exam (which is more likely to be summative and norm-referenced).178
Third, depending on how a rubric is designed and used, a completed rubric can serve as formative assessment even when it evaluates a final assignment in a course. The ABA defines formative assessment methods to include those that provide meaningful feedback at different points in the student’s course of study (in addition to different points in the same course).179 In other words, some summative assessments may even offer the type of feedback that promotes student learning.180

176. MUNRO, supra note 15, at 16.
177. By way of further example, the rubric example shared by Professor Duhart (discussed above in Part III) is only a page and a half in length, focusing on identifying where the student’s work product satisfies her expectations for the Constitutional Law practice essay (and not also where the assignment is beginning or developing). Duhart, supra note 142, at Appendix E.
178. ABA STANDARDS, supra note 24.
179. ABA STANDARDS, supra note 24, at 23 (defining formative assessment methods as “measurements at different points during a particular course or at different points over the span of a student’s education that provide meaningful feedback to improve student learning”) (emphasis added).
180. Duhart, supra note 44, at 497 (noting that “the terms ‘formative’ and ‘summative’ apply not to the actual assessments but rather the functions they serve”); see also Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation, What is the difference between formative and summative assessment?, https://www.cmu.edu/teaching/assessment/basics/formative-summative.html (last visited June 15, 2019) (“Information from summative assessment can be used formatively when students or faculty use it to guide their efforts and activities in subsequent courses.”).

When
it comes to the appellate brief rewrite rubric, students can use their
completed rubric for the final assignment in the LRW Course to identify
where their skills are not yet proficient, and then use this information
when prioritizing where they should seek additional practice afforded
by future legal writing assignments given in other law school courses
and summer jobs. For example, a completed rubric for the appellate
brief rewrite may indicate that the student’s paper demonstrates
proficiency in stating and explaining legal rules, but the student’s use of analogical reasoning to support the application of the rule to the client’s facts may still be developing. This gives the student a specific priority to focus on and continue to practice while the student’s course of study is still ongoing (even though the present course has come to an end).181 And that student can even refer back to the rubric for a reminder of how to demonstrate rule application that is highly proficient.182
Furthermore, the designing faculty have discovered other benefits in
using the rubrics both before and after the students complete the
relevant writing assignment in their LRW Course. I will offer a few
examples. When commenting on an earlier version of one of the three
relevant assignments, we often use narrative language from the rubric
that will be used to evaluate the assignment rewrite. This helps ensure
that what we are using the initial assignment to teach, in terms of legal
writing skills or techniques, is what we intend to evaluate in the
rewrite. Doing so confirms the validity of the rubric.183
The designing faculty have also commented that the rubric aids in consistently evaluating all of their students’ assignments, which is relevant to the assessment method’s reliability.184 And as discussed above in Part III, a valid and reliable assessment method is also more likely to be a fair one.185 Moreover, many of the designing faculty use the rubric to jumpstart or enhance deeper conversations with students about their legal analysis,186 which is yet another illustration of how a rubric can encourage critical thinking.187

181. It is true that most casebook faculty do not complete, much less share with their students, an analytic rubric like this one when grading final exams, because they use norm-referenced assessment. It exceeds the scope of this article to argue that all faculty should use criterion-referenced benchmarks or incorporate formative assessment into their courses. Doing so is neither required by the ABA nor realistic. This Part of the article instead focuses on where faculty may already be engaging in assessment practices that could translate to, or be adapted for, the type of formative assessment contemplated by Standard 314. I offered some examples above where casebook faculty may already be engaging in formative feedback (or could be) while the course is ongoing. My goal here is simply to get faculty thinking about the fact that even feedback offered at the end of a course (instead of just a score or grade) can still prove meaningful for other points in time in the student’s course of study, and some of us may already be trying to do this.
182. See STEVENS & LEVI, supra note 106, at 19 (“The demand for an explanation of the highest level of achievement possible . . . is fulfilled in the rubric itself.”). Moreover, if the same or a substantially similar rubric was used in advanced legal writing courses to evaluate and assess a student’s performance (and thus ongoing student learning) in applying the relevant skills or techniques, then the rubric itself continues to offer additional formative feedback.
183. See sources cited supra notes 82–83.
Finally, the completed rubrics have offered the designing faculty a reliable way to assess whether and where student learning has occurred, both for each assignment and upon completion of the LRW Course.188 Doing so responds directly to Standard 314’s call to engage in individual student assessment.189 As an initial matter, a professor can compare the student’s first completed rubric in the fall to the second completed rubric in that same semester to determine if (and where) the student is making progress during the course. For example, if the completed rubric for the first memo assignment indicates that a student is beginning or developing when it comes to synthesizing and stating a complete rule statement, the professor can compare that to the progress level earned on the rule statement category on the rubric completed for the second memo assignment to see if there was improvement (to developing or proficient). If further progress is needed, there is still time to engage with the student while the course continues in the spring.190

184. See sources cited supra notes 84–88.
185. See sources cited supra note 90.
186. For example, I review the completed rubric in advance of and during an individual student conference about the assignment. The review gives me a quick reminder of the particular student’s strengths and areas for further progress (given that papers can run together depending on the number of students I have in a given year). I can also engage with the completed rubric itself during the conference, which can be particularly helpful for the student who says something like, “I don’t have any questions about your feedback,” or “I am disappointed in my score,” when it is clear the student has not dug deeper into the specific feedback provided to generate questions or to try to understand the basis for the score. Focusing on the written feedback helps move the student beyond the score and to the skills and techniques underlying that score that matter when it comes to understanding what the student has learned and still needs to learn. See Sparrow, supra note 105, at 30–31 (explaining how rubrics can enhance conversations between students and professors about performance and grades).
187. STEVENS & LEVI, supra note 106, at 21 (“Used in conjunction with good academic advising, rubrics can play a major role in contributing to students’ development of a more scholarly form of critical thinking – that is, the ability to think, reason, and make judgments . . . .”).
188. See STEVENS & LEVI, supra note 106, at 20 (“Using rubrics for overall assessment as well as immediate grading meets the demand . . . for determining whether a student’s work is actually improving over time.”).
189. ABA STANDARDS, supra note 24, at 23.
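For professors who keep completed rubrics in electronic form, the memo-to-memo comparison just described could be automated along the lines of the following hedged sketch; the category names, level ordering, and sample records are invented for illustration.

```python
# A hedged sketch, using invented category names and records; the
# level ordering mirrors the four progress levels described above.
LEVELS = ["Beginning", "Developing", "Proficient", "Highly Proficient"]

def progress_report(first: dict, second: dict) -> dict:
    """Map each shared rubric category to improved / unchanged / slipped."""
    report = {}
    for category in first.keys() & second.keys():
        before = LEVELS.index(first[category])
        after = LEVELS.index(second[category])
        if after > before:
            report[category] = "improved"
        elif after == before:
            report[category] = "unchanged -- still time to engage the student"
        else:
            report[category] = "slipped -- still time to engage the student"
    return report

# First memo rewrite vs. second memo rewrite in the fall semester.
memo_one = {"Rule Statement": "Beginning", "Organization": "Developing"}
memo_two = {"Rule Statement": "Developing", "Organization": "Developing"}

for category, status in sorted(progress_report(memo_one, memo_two).items()):
    print(f"{category}: {status}")
```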
Moreover, the professor can use the completed appellate brief rewrite rubric to determine whether the student achieved the learning outcomes for the LRW Course. Again, each rubric category is tied to one or more learning outcomes for the course, so the professor would look to see if the student achieved the proficient progress level (or higher) for each category (and thus each related student learning outcome).191 In contrast, the overall score that the student earned on the assignment (the sum of the points earned for each rubric category) only tells a professor if the student’s overall performance on the final assignment in the course was below average, average, or above average. In other words, the score only says how the student compares to his/her peers, which is norm-referenced assessment, while the progress levels provide the criterion-referenced assessment that is more relevant for outcomes assessment.192
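The contrast between those two readings of the same completed rubric can be expressed in a few lines. The sketch below uses invented categories and point values, and it takes proficiency as the benchmark, following the LRW Course goal noted in footnote 191.

```python
# Invented categories and points; illustrative only.
completed_rubric = {
    # category: (progress level, points earned)
    "Organization of the Argument": ("Proficient", 4),
    "Rule Explanations": ("Developing", 4.75),
    "Citation and Mechanics": ("Highly Proficient", 5),
}

# Norm-referenced view: one number, meaningful only against classmates.
total_score = sum(points for _, points in completed_rubric.values())

# Criterion-referenced view: which learning outcomes did the student meet?
MEETS_BENCHMARK = {"Proficient", "Highly Proficient"}
attainment = {
    category: level in MEETS_BENCHMARK
    for category, (level, _) in completed_rubric.items()
}

print(total_score)  # 13.75 -- says nothing about which outcomes were met
print(attainment)   # shows exactly where the benchmark was (not) reached
```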
In sum, rubrics can serve as a valuable formative assessment tool
when responding to the ABA’s call for individual student assessment.
And there are a variety of ways that rubrics can be used to make
individual student assessment meaningful. Before law schools think
they must start from scratch or reinvent the assessment wheel in its
entirety, they should take the time to discern where existing knowledge
and resources can at least serve as a starting point when responding to
Standard 314, even if those resources were not specifically created with
the Assessment Standards in mind.
190. The collective rubric information can also facilitate reflection and action by the professor. For example, I begin the second semester of my LRW Course with a collective view of the students’ completed rubrics from the fall. I can identify if a majority of students are still in the developing level of a rubric category, especially on a technique I expected to see more progress on given the focus in the fall, such as deductive organization using IRAC or CREAC as a guide, or synthesizing and stating legal rules. See STEVENS & LEVI, supra note 106, at 25–26 (“[C]ollected rubrics provide a record of the specific details of how students performed on any given task, allowing us to quickly notice and correct any across-the-class blind spots or omissions.”); see also Sparrow, supra note 105, at 27–28 (discussing ways rubrics provide helpful data about teaching). If so, I still have time to alter teaching plans to provide further global guidance and practice on the relevant topic(s) before moving on to the more advanced topics to be covered in the spring.
191. For purposes of individual student or course assessment, the designing faculty reached consensus on the relevant assessment benchmark for the LRW Course. See SHAW & VANZANDT, supra note 4, at 93, 125 (explaining that a benchmark in this context is “based on whether a student satisfies certain prerequisites set by the assessor[,]” and thus in theory, every student should be able to reach the benchmark (or every student could fail to meet it)). Because the LRW Course is an introductory one, we concluded that the goal for our students would be to achieve proficiency in the rubric categories. Supra note 123.
192. See supra notes 49 and 61.
B. Law School Assessment – Standard 315

Rubrics can serve dual assessment purposes by also responding to Standard 315, which requires law schools to “conduct ongoing evaluation[s] of the law school’s program of legal education, learning outcomes, and assessment methods[.]”193 This Part will also use UK Law’s rubric project as the primary example for illustrating how an existing rubric can prove helpful here, but as noted above, it is not meant to be a blueprint for every law school.
First, an existing rubric could be used as an embedded assessment measure for law school assessment that involves collecting and reviewing a sampling of outputs from several related courses. As noted above, Standard 302 specifically mandates that law schools include “[l]egal analysis and reasoning,” as well as “written . . . communication in the legal context,” in the law school learning outcomes they establish.194
Thus, the UK Law rubric described in Part IV, which measures achievement of course learning outcomes related to legal analysis and reasoning, as well as written communication, can also be used as part of the law school’s required evaluation of “the degree of student attainment of competency in the [corresponding law school] learning outcomes[.]”195 In other words, in addition to using the appellate brief rewrite rubric to conduct individual student assessment for the LRW Course, it could also be used as the common rubric for assessing a sampling of outputs (student writing assignments) embedded in another course or a series of courses196 that align with the two above-identified law school learning outcomes required by Standard 302.197
Courses likely to have relevant assessment sources include advanced legal writing courses and other upper-level courses that build on the legal analysis and persuasive legal writing techniques that are first introduced in the LRW Course.198

193. ABA STANDARDS, supra note 24, at 23.
194. ABA STANDARDS, supra note 24, at 15.
195. ABA STANDARDS, supra note 24, at 23. This would be just one viable component of a more robust institutional assessment plan, as it is best practice to use multiple assessment measures of different types. E.g., SHAW & VANZANDT, supra note 4, at 112 (discussing triangulation); see also FUNK, supra note 60, at 32–33, 69 n.7, and 75 n.3.
196. The selection of assessment sources and use of sampling in law school assessment is discussed in Part III above.
197. See Curcio, supra note 32, at 497–98 (“[R]ubrics acknowledge that learning develops across multiple courses, over time, and the learning process varies from student to student.”).
198. Relevant courses include advanced legal writing, seminars, advanced appellate advocacy, and other “writing experience[s] after the first year” as required by ABA Standard 303(a)(2), which is where techniques and skills first introduced in a first-year legal research and writing course are likely to be covered and practiced in more depth. ABA STANDARDS, supra note 24, at 16; see supra note 73.

To be clear, the rubric would not have
to be used to grade the assignments embedded in these courses. Instead, the focus here is on how the rubric could serve as one possible assessment measure for purposes of conducting law school assessment of the relevant learning outcomes.199
Moreover, even where an existing rubric requires adaptation before serving as a common rubric for law school assessment, it can still provide a solid foundation to work from so that faculty are not starting from scratch.200 As part of this adaptation process, it will be important to bring in other relevant faculty to work along with the one(s) who designed the existing rubric.201 In other words, the law school should involve faculty who teach the courses with identified student outputs to be used in assessing achievement of a particular law school learning outcome and those who will use the rubric when conducting the related assessment.202 That is because there must be a common understanding of, and agreement on, student performance expectations in terms of what is competent and not competent, as well as the related rubric narratives that will measure such performance.203 However structured, this larger collaboration, just like the collaboration among the UK Law legal writing faculty that is described in Part IV(B), will help ensure that the adapted common rubric is valid and fair, and that the results from using the rubric are reliable.204

199. Curcio, supra note 32, at 503 (emphasizing that professors do not change what they test or how they grade students, and explaining that the approach is for professors “[i]n courses designated for outcomes measurement” to add an additional step after grading to “complete an institutional faculty-designed rubric[,]” which may be applied to “a random student sample”).
200. While Professors Shaw & VanZandt appear to view course rubrics as different from rubrics used for law school assessment, supra note 4, at 118–19 and 141–46, Professor Curcio posits that common rubrics used for law school assessment could be adapted from a faculty member’s existing rubric, supra note 32, at 501.
201. STEVENS & LEVI, supra note 106, at 68–69, 177–78; BANTA & PALOMBA, supra note 12, at 100; Curcio, supra note 32, at 498.
202. SHAW & VANZANDT, supra note 4, at 142. The group may work within a larger assessment committee, or they may be a designated working group that reports to an assessment committee. For example, at Georgia State University College of Law (GSU COL), a team of faculty who taught the relevant skills designed each common rubric, and then the entire assessment committee vetted those rubrics. Curcio, supra note 32, at 498 (noting this sometimes resulted in redrafting). That said, there are a variety of ways to structure faculty involvement in the creation of an assessment plan, including in particular the measurement (implementation) stage. SHAW & VANZANDT, supra note 4, at 40–45, 126–29.
203. SHAW & VANZANDT, supra note 4, at 142.
Using UK Law’s appellate brief rewrite rubric as an example of a rubric that could be adapted to serve as a common rubric, the adaptation process would likely involve compressing the rubric by removing the rubric categories that address specific parts of a legal document that may not be taught in other courses with relevant assessment sources (that is, if the assessment sources are not appellate briefs but instead include other types of legal documents), and considering whether any other categories should be omitted or added in light of the particular law school learning outcome at issue. In addition, the narratives for the categories that remain (namely, organization, content of legal analysis, use of persuasive writing techniques where applicable, citation, and other aspects of mechanics) must include language that all involved in the adaptation process can understand and agree upon. This matters both for rubric design (validity and fairness), because the measurement language must be consistent with what is being taught, and for rubric use (reliability), because the evaluators must understand the narrative language to consistently apply it. Finally, the designers will need to consider whether the existing rubric progress levels are clear enough, or whether proficient should become competent given Standard 315’s focus on competence.205
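As a rough sketch of that compression and relabeling step, and with the caveat that the real work is the faculty-wide agreement on narratives rather than the mechanics, the adaptation might look like this (all category names are invented):

```python
# Invented category names; "..." stands in for the full narratives.
course_rubric = {
    "Organization of the Argument": "...",
    "Rule Explanations": "...",
    "Statement of the Case": "...",  # appellate-brief specific
    "Question Presented": "...",     # appellate-brief specific
    "Citation and Mechanics": "...",
}

# Step 1: compress -- drop categories tied to one document type.
BRIEF_SPECIFIC = {"Statement of the Case", "Question Presented"}
common_rubric = {
    category: narrative
    for category, narrative in course_rubric.items()
    if category not in BRIEF_SPECIFIC
}

# Step 2: relabel the benchmark level to track Standard 315's language.
levels = ["Beginning", "Developing", "Proficient", "Highly Proficient"]
common_levels = ["Competent" if lvl == "Proficient" else lvl for lvl in levels]

print(sorted(common_rubric))  # the compressed category list
print(common_levels)          # benchmark level now reads "Competent"
```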
Once again, the appellate brief rewrite rubric is offered as just one example, as rubrics “allow for nuanced assessment . . . over a wide range of courses as well as a wide range of outcomes,” and thus, existing rubrics could also be used to measure other mandated law school learning outcomes, including both knowledge and value outcomes.206 For example, Professor Curcio’s recent article provides examples of common rubrics she and her faculty designed to measure law school learning outcomes relating to “legal knowledge and analysis” and “effective and professional engagement,” among others.207
204. Curcio, supra note 32, at 509 (explaining that involving “faculty members who teach and assess the outcome the rubric assesses” is important so that the rubric “dimensions and descriptors,” which are comparable to this article’s use of categories and narratives, “capture students’ achievement of that outcome”); Hamm, supra note 2, at 375 (stating that faculty should be given a chance to offer feedback if a smaller group creates a draft); see also supra notes 128 and 135.
205. ABA STANDARDS, supra note 24, at 24; see also Hamm, supra note 2, at 380–82 (explaining that earlier versions of the standards used proficiency rather than competency, and noting that practicing attorneys could be helpful in describing competence as contemplated in Standard 315).
206. See Curcio, supra note 32, at 498.
207. See Curcio, supra note 32, at 498. The article describes the approach taken at GSU COL, where faculty designed eight new rubrics, corresponding to the law school’s eight institutional learning outcomes, for purposes of law school assessment. See Curcio, supra note 32, at 498 (describing the approach as “backward design” and relying on rubrics from the Association of American Colleges and Universities and medical educators). In addition, The Holloran Center, which is associated with St. Thomas School of Law, has developed rubrics for law school assessment of learning outcomes involving professionalism, cultural competency, self-directedness, and teamwork/collaboration. Holloran Center, Holloran Competency Milestones, www.stthomas.edu/hollorancenter/resourcesforlegaleducators (last visited May 11, 2019).

Existing rubrics could likewise serve as the starting point for such design.208 And
an article published in 2013 suggested that “uniform rubrics can be employed in courses across the curriculum so that the process of providing feedback to students can also be used to collect valuable information about the learning process.”209
It is important to acknowledge that not just any grading “rubric” used by casebook faculty could serve as the foundation for a common rubric contemplated here. As this Article clarifies above in Part IV, the focus here is on analytic rubrics, which are criterion-referenced, and not tools that faculty may use for norm-referenced assessment.210 Given the need for a criterion-referenced tool, the most likely existing resources may be rubrics used for formative assessment, such as the one Professor Duhart uses for a required practice essay in her Constitutional Law class.211
208. For example, perhaps a professor who teaches Professional Responsibility has developed a rubric for grading exams that could also serve as the basis of a common rubric used to measure achievement of learning outcomes relating to professionalism.
209. Jones, supra note 8, at 101 (noting that “a cost-effective system could at least partly embed collection of information into existing systems”); see also Niedwicki, supra note 8, at 263–64, 267 (describing the use of a common rubric for assessing a professional skills program (programmatic assessment) like writing and trial practice, and noting that rubrics can also be an effective tool for institutional assessment).
210. BARBARA WALVOORD, ASSESSMENT CLEAR AND SIMPLE: A PRACTICAL GUIDE FOR INSTITUTIONS, DEPARTMENTS, AND GENERAL EDUCATION (2d ed. 2010).
211. Duhart, supra note 44, at 513–14 and Appendix E; see also related discussion in Part IV. Moreover, given that “effective writing instruction means teaching students how to perform rigorous analysis[,]” some aspects of the appellate brief rubric that get at the substance of a student’s legal analysis could even be useful if faculty are drafting narratives for a common rubric that is assessing the “legal analysis and reasoning” learning outcome in assessment sources (outputs) other than from legal writing courses, such as essay exams. Beazley, supra note 129, at 43. See also Beazley, supra note 129, at 43 (explaining that “there is increasing recognition that a Legal Writing course is a particularly good place for students to learn the process of analytical thought at the heart of ‘thinking like a lawyer’”). Again, a law school’s curriculum map would be a useful place to pinpoint courses with relevant outputs. See supra note 73.

Second, anonymous rubric data from individual student or course assessment can be collected and reviewed by faculty when implementing a law school assessment plan. For example, the completed appellate brief rewrite rubrics described above could be
collected, the student names omitted, and then the anonymous rubric data aggregated across sections of the LRW Course. This data would identify how many students achieved at least proficiency in each of the rubric categories that are tied to the relevant law school learning outcomes required by Standard 302. Indeed, Interpretation 315-1 of the Assessment Standards expressly states that while assessment methods are likely to differ among law schools, possible methods to measure the degree to which students have attained competency in the school’s student learning outcomes include “review of the records the law school maintains to measure individual student achievement pursuant to Standard 314.”212 This is the second of two approaches for aggregating student work for law school assessment that Dean Susan Duncan offers; specifically, she explains that “individual professors ‘piggyback’ on the grading process and submit summaries of their students’ strengths and weaknesses or rubric scores[,]” which “are collected from multiple classes.”213 The multiple classes could include both 1L and upper level courses, and need not be limited to writing courses.214
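The anonymize-and-aggregate step described above might look something like the following sketch; the records and field names are invented, and an actual implementation would depend on how a school stores its completed rubrics.

```python
# A hedged sketch with invented records and field names.
from collections import Counter

MEETS = {"Proficient", "Highly Proficient"}

completed_rubrics = [
    {"student": "A. Smith", "section": 1,
     "levels": {"Rule Explanations": "Proficient",
                "Organization": "Developing"}},
    {"student": "B. Jones", "section": 2,
     "levels": {"Rule Explanations": "Highly Proficient",
                "Organization": "Proficient"}},
]

# Anonymize: keep only the assessment data, not the student identity.
anonymous = [record["levels"] for record in completed_rubrics]

# Aggregate: per category, how many students met the benchmark?
met = Counter()
for levels in anonymous:
    for category, level in levels.items():
        if level in MEETS:
            met[category] += 1

for category, count in sorted(met.items()):
    print(f"{category}: {count} of {len(anonymous)} students at proficiency or above")
```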
The advantage of this approach is that faculty avoid having to allocate time for additional reading or “scoring” of the assignments that were first part of course assessment, because that assessment has already been aggregated by rubric category (tied to a learning outcome) and performance level. In a time where resources are already spread thin, this approach could save time and yet still provide meaningful law school assessment data, because the underlying individual student assessment method (a rubric) would already be tested for validity, reliability, and fairness.215

212. ABA STANDARDS, supra note 24, at 24 (referencing other methods, including “student evaluation of the sufficiency of their education; student performance in capstone courses or other courses that appropriately assess a variety of skills and knowledge; bar exam passage rates; placement rates; [and] surveys of attorneys, judges, and alumni”).
213. Duncan, supra note 59, at 483 (citing WALVOORD, supra note 146, at 20–21); FUNK, supra note 60, at 64 n.3 (“In many instances, if done properly, course assessment may support program and institutional assessment.”); see also BANTA & PALOMBA, supra note 12, at 103–05 (discussing use of faculty grading to provide program-level information without requiring a second scoring of artifacts); Andrea Susnir Funk & Kelley M. Maureman, Starting From the Top: Using a Capstone Course to Begin Program Assessment in Legal Education, 37 OKLA. CITY U. L. REV. 477, 492–93, 497–98 (2012) (discussing legal writing program assessment where professor grades first and then later collects sampling for assessment where identifying information is removed).
214. Curcio, supra note 32, at 501–02 n.51 (explaining decision to assess both 1L and upper level students). Again, rubrics (and resulting data) are criterion-referenced. If that information is not available because the professor uses norm-referenced assessment, then the faculty could still follow Professor Duncan’s idea of having professors in relevant courses provide a summary of the students’ strengths and weaknesses, which would be focused on whether the student outputs (likely exams) demonstrated competency in criteria tied to one or more law school learning outcomes. This may prove particularly useful for knowledge learning outcomes, because casebook faculty are less likely to use rubrics or otherwise engage in criterion-referenced assessment when grading final exams. See supra note 49 (discussing summative and norm-referenced assessment).
In short, an important lesson learned by the UK Law rubric project
discussed in this Article is that law schools should explore where an
existing embedded assessment measure for individual student
assessment could also respond to the law school assessment mandate,
especially where the student learning outcome(s) measured at the
course level overlap with the law school learning outcomes to be
measured. The rubric may look different than the one described in this
Article; it may be used for a different law school course and thus
measure entirely different law school outcomes. And the existing rubric,
wherever it comes from, will likely need adaptation. But the key is that
law schools should explore where existing resources and faculty
expertise can be used as the starting point when the entire faculty gets
to work responding to the ABA Assessment Standards.
VI. CONCLUSION
Outcomes assessment is a fundamental change in legal education
because it refocuses the assessment inquiry on whether law students
are actually learning the knowledge, skills, and values necessary for
those entering the legal profession. The endeavor has benefits for
students and law schools alike, but it takes time and resources. Thus,
busy law schools need to implement the Assessment Standards in a
meaningful and efficient way. Using a rubric project from UK Law’s
LRW Course as the primary example, this Article sought to show how
law schools can take advantage of what some law faculty are already
doing with rubrics, even when designed for a different reason, when
responding to the ABA’s Assessment Standards. Evaluating what
knowledge and resources already exist at a law school can save time
and encourage greater buy-in when the full faculty takes on the ABA’s
assessment mandate, which is important when so many in legal
education are already working with a very full plate. While there is no
blueprint for assessment that can be applied across all law schools, the
hope is that the ideas shared here add to the growing dialogue about
how law schools can successfully respond to the ABA’s assessment
mandate.
215. See BANTA & PALOMBA, supra note 12, at 104–05 (discussing in the context of general education assessment and noting technological advances make collection and aggregation of information in this way easier than ever).