WHEN YOUR PLATE IS ALREADY FULL
552 MERCER LAW REVIEW [Vol. 71
when the connection to the Assessment Standards is explored in more
detail.
Rubric design is a detailed process with several stages. First, the
designer identifies the levels (or scales) of performance that will be used
(e.g., mastery, progressing, and emerging, or distinguished, proficient,
intermediate, and novice).109
Next, the designer sets out the categories (or
dimensions) to be evaluated in the assignment, which are usually tied
to one or more learning outcomes for the course (individual student
learning) or institution (law school assessment).110
This tie to one or
more learning outcomes is important to ensuring the rubric's validity as
an assessment measure because the rubric must actually evaluate, or
assess, what is being taught.111
Under each category, the designer must
then draft narratives that explain what constitutes each level of
performance.112
This is referred to as criterion-referenced (versus
norm-referenced) assessment, which means that competency is
measured by looking at whether a student satisfies certain
requirements for each dimension that are set by the assessor(s).113
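For readers who find it helpful to see the structure concretely, the stages described above (performance levels, dimensions tied to learning outcomes, narratives for each level, and a criterion-referenced competency check) can be sketched as a simple data structure. All category names, level labels, and narratives below are hypothetical illustrations, not drawn from this Article or its sources:

```python
# Illustrative sketch only: the levels, dimensions, and narratives here
# are invented examples, not taken from the Article.

LEVELS = ["distinguished", "proficient", "intermediate", "novice"]  # best to worst

# Each dimension (category) maps a performance level to the narrative
# describing what work at that level looks like.
rubric = {
    "rule_explanation": {
        "distinguished": "States the governing rule completely and accurately.",
        "proficient": "States the rule accurately with minor omissions.",
        "intermediate": "States the rule with some inaccuracies.",
        "novice": "Misstates or omits the governing rule.",
    },
    "application_to_facts": {
        "distinguished": "Applies the rule to all legally significant facts.",
        "proficient": "Applies the rule to most significant facts.",
        "intermediate": "Applies the rule to only a few relevant facts.",
        "novice": "Recites facts without applying the rule.",
    },
}

def is_competent(scores: dict, threshold: str = "proficient") -> bool:
    """Criterion-referenced check: competency turns only on whether the
    student meets the preset threshold in every dimension, not on how
    other students performed (i.e., no curve or norm-referencing)."""
    cutoff = LEVELS.index(threshold)
    return all(LEVELS.index(level) <= cutoff for level in scores.values())

print(is_competent({"rule_explanation": "distinguished",
                    "application_to_facts": "proficient"}))  # True
print(is_competent({"rule_explanation": "novice",
                    "application_to_facts": "proficient"}))  # False
```

The key contrast with norm-referenced grading is visible in `is_competent`: the function never compares one student's scores to another's, only to the fixed criteria.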
109. STEVENS & LEVI, supra note 106, at 8–9; Curcio, supra note 32, at 496–97, 499.
110. STEVENS & LEVI, supra note 106, at 10; Clark & DeSanctis, supra note 107, at 9–
10; Curcio, supra note 32, at 499–501.
111. See MUNRO, supra note 15, at 106; see also Curcio, supra note 32, at 499–501
(providing examples of rubric narratives that are tied to specific learning outcomes). It is
also important to make sure the rubric is broken down into a sufficient number of
categories so that there are not too many dimensions, or topics, covered in one category.
Otherwise, the rubric may become too confusing or cumbersome to use when evaluating a
student output that will demonstrate numerous competencies, such as an essay exam or
legal document.
112. STEVENS & LEVI, supra note 106, at 10–14. In doing so, consider what knowledge,
skills, and values students will need to have or develop to successfully complete the tasks
associated with the assignment, and identify what types of evidence will show that
students have accomplished those tasks (and related student learning outcomes). See id.
at 29–38. One critique of rubrics as an assessment tool is that their use of categories or
narratives is too rigid or standardized. Deborah L. Borman, De-grading Assessment:
Rejecting Rubrics in Favor of Authentic Analysis, 41 SEATTLE L. REV. 713, 730–31 (2018)
(arguing that rubrics cannot capture the “subjective component to grading [legal writing]
assignments” like a more holistic evaluation can). However, as discussed in more detail
below in Parts IV(B) and V(A), the key is structuring and dividing the rubric categories to
allow for capturing variation and nuance in legal analysis where it arises, and drafting
the corresponding performance level narratives so they clearly describe the legal reader’s
common expectations for analytical writing while using the professor’s preferred
language.
113. SHAW & VANZANDT, supra note 4, at 93. Some casebook professors may also use
the term “rubric” when referring to the grading tool created for evaluating final exam
essays. By definition, however, a rubric is a criterion-referenced assessment tool. Thus, if
the grading tool is being used to assign grades in a norm-referenced framework, then it is
not really a “rubric” as defined and used in this Article.