I am wondering whether and how one might anchor expert rater scores on a set of essays. The raters used an analytic rubric (with five criteria, thus five scores per essay), and I would like to anchor these scores across two separate analyses of two different groups of raters. How might I go about anchoring the "expert rater" scores and allow the rest to "float"? How would this be denoted in the specification file? The goal is to provide an anchor across the two analyses (same 20 essays, 2 different groups of raters) and be able to compare their scores to the expert rater scores.
LB, it sounds like you are using Facets. It also sounds like you have ratings made by expert raters and ratings made by others. So, 1) Do an analysis using only the expert raters. For convenience, center them. Output an Anchorfile=expert-anc.txt. This contains anchor values for raters, essays, rating scales, etc.
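For example (only a sketch: the facet order, element labels, and the R9 scale maximum are placeholders to adapt to your own setup), the expert-only specification might look something like this:

    Title = Expert raters, 20 essays
    Facets = 3                   ; 1 = raters, 2 = essays, 3 = criteria
    Noncenter = 2                ; essays float; raters and criteria are centered at 0
    Models = ?,?,?,R9            ; adjust R9 to the top category of your rubric
    Anchorfile = expert-anc.txt  ; writes out anchor values for every facet and the rating scale
    Labels =
    1, Raters
    1 = Expert1
    2 = Expert2
    *
    2, Essays
    1 = Essay01
    ; ... through 20 = Essay20
    *
    3, Criteria
    1 = Criterion1
    ; ... through 5 = Criterion5
    *
    Data =
    1, 1, 1-5, 4, 3, 5, 3, 4     ; Expert1 rates essay 1 on the five criteria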
2) Add the other raters and their data to expert-anc.txt and remove the expert rater data. Analyze this file. Now the measures for the other raters will show their lenience/severity relative to the expert raters, who were centered on zero. The displacements for all the other elements will show the differences between the experts and the others.
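Again only as an illustration (the element numbers and anchor values shown are invented), the edited expert-anc.txt might contain Labels= entries like these. The ",A" that Facets writes after a facet name means the values following the element labels are anchor values; new elements entered without a value are free to float, and the rating-scale structure written into the anchorfile stays anchored as well:

    Labels =
    1, Raters, A
    ; delete or keep the experts' anchored measures; add the novices without values
    3 = Novice1
    4 = Novice2
    *
    2, Essays, A
    1 = Essay01, .52
    2 = Essay02, -.31
    ; ... anchor values as written by the expert-only run
    *
    3, Criteria, A
    1 = Criterion1, .10
    ; ...
    *
    Data =
    3, 1, 1-5, 3, 2, 4, 3, 3     ; novice ratings replace the expert ratings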
Thank you, Dr. Linacre. I was on the right track, but it's so helpful to have a confirmation.
I also have a few other questions. For my analysis, I am interested in how novice rater severity differs across two rubrics. I have data at two time points (using two rubrics which are exactly the same, except that the rubric criteria categories are in a different order), and I would like to compare the novice raters' ratings between the two rubrics. Specifically, I am interested in whether their severity on the individual rubric criteria differs between the two rounds in which they used the slightly different rubrics. Currently the facets in my model are Rubric (to represent the two rubrics), Essay (to represent the 20 essays that were scored at both time points), Raters (to represent the 30 novice raters), and Criteria (to represent the five criteria from the analytic rubric). I also have two separate groups of novice raters, each group starting on one rubric and then switching to the other (a counterbalanced design). I plan to run two separate analyses, one for each group of novice raters, and then link them via anchoring (a rough sketch of how I am picturing each group's specification is below, after my questions). So, my questions are:
1. For the expert anchor file, should I also center the Rubric facet and the Criteria facet? For my subsequent analysis of the novice-rater data, I don't necessarily want to compare the novices to the experts. Rather, I am interested in just anchoring the necessary facets across the two analyses of the two groups of novice raters so that I can compare those two analyses.
2. Is it possible to plot the Criteria severity of the raters on the two rubrics within the same Wright map? I haven't seen any examples of this being done, so I'm not sure that it is possible or how this would be done.
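As mentioned above, here is roughly how I am picturing each group's specification (just a sketch; the facet order, the Noncenter choice, and the R9 scale maximum are placeholders I would still need to adapt to the actual rubric):

    Title = Novice group 1, both rubrics
    Facets = 4              ; 1 = rubric, 2 = essays, 3 = raters, 4 = criteria
    Noncenter = 2           ; essays float; rubric, raters, and criteria centered
    Models = ?,?,?,?,R9     ; adjust R9 to the rubric's rating scale
    Labels =
    1, Rubric
    1 = Rubric A
    2 = Rubric B
    *
    2, Essays
    1 = Essay01
    ; ... through 20 = Essay20
    *
    3, Raters
    1 = Novice01
    ; ... through 30 = Novice30
    *
    4, Criteria
    1 = Criterion1
    ; ... through 5 = Criterion5
    *
    Data =
    1, 1, 1, 1-5, 4, 3, 5, 3, 4   ; Rubric A, essay 1, Novice01, five criterion ratings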