Part of Turnitin's goal is to make assessment more robust, not only by upholding academic integrity and ensuring accurate measurement of knowledge, but also by fostering feedback loops.
As instructors, we find it easy to grade or mark an assessment and consider the cycle complete. We evaluate students for what they do or do not know, and it’s tempting to end the process there and move on to the next unit. In fact, grading is often the most tedious and most dreaded component of teaching, because it can be so time-consuming.
Assessment, however, is a rich source of data and a way to gain insight into student learning and exam effectiveness. The information gleaned from assessments is critical for teaching and learning; moreover, assessment is an inflection point through which students can learn, assignments can be bolstered, and curricula can be improved.
That’s where item analysis and category tagging come in: they can pinpoint knowledge gaps and trends across student cohorts.
Let’s take a look at item analysis, which can be executed manually or via tools like Gradescope and ExamSoft.
Item analysis

Item analysis is the practice of analyzing student responses to individual exam questions with the intention of improving exam quality. Which questions did every student in a class get correct? And which questions were particularly difficult? Was it your intention for that question to be difficult to answer? Do the questions on your exam discriminate between students who learned the material and students who did not? When it comes to multiple-choice questions, are your distractors (the wrong answers) effective, or did no students choose them?
Many instructors conduct item analysis, even if unconsciously, when they register whether most students responded correctly to a particular question or when they see repeated incorrect answers to the same question. This kind of analysis allows instructors to spot a question that may need to be reworded or a concept that needs to be reviewed.
These data points provide insight into exam quality and therefore accurate measurement of learning. They also increase transparency into what students have or have not learned. This information, in turn, provides an opportunity for instructors to fortify curriculum and teaching as well as make any adjustments to an exam. By actively reviewing the data provided by item analysis, instructors can meaningfully improve their assessments and better understand student outcomes.
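For instructors who want to see what these statistics look like under the hood, the sketch below computes item difficulty (the proportion of students answering correctly), a simple upper-versus-lower-group discrimination index, and distractor counts from raw multiple-choice responses. The data layout and function names are hypothetical, and the numbers are invented; tools like Gradescope and ExamSoft produce equivalent statistics automatically.

```python
# A minimal sketch of manual item analysis on hypothetical multiple-choice data.
from collections import Counter

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

# Each student's chosen option per question (hypothetical responses).
responses = [
    {"Q1": "B", "Q2": "D", "Q3": "C"},
    {"Q1": "B", "Q2": "A", "Q3": "A"},
    {"Q1": "C", "Q2": "D", "Q3": "A"},
    {"Q1": "B", "Q2": "D", "Q3": "A"},
]

def item_difficulty(question):
    """Proportion of students answering correctly (higher = easier item)."""
    correct = sum(1 for r in responses if r.get(question) == answer_key[question])
    return correct / len(responses)

def discrimination_index(question, top_fraction=0.27):
    """Difference in correct-answer rate between top- and bottom-scoring groups."""
    ranked = sorted(
        responses,
        key=lambda r: sum(r.get(q) == a for q, a in answer_key.items()),
        reverse=True,
    )
    n = max(1, round(len(ranked) * top_fraction))
    upper, lower = ranked[:n], ranked[-n:]
    rate = lambda group: sum(r.get(question) == answer_key[question] for r in group) / len(group)
    return rate(upper) - rate(lower)

def distractor_counts(question):
    """How often each wrong option was chosen; unchosen distractors add no value."""
    picks = Counter(r.get(question) for r in responses)
    return {opt: n for opt, n in picks.items() if opt != answer_key[question]}

for q in answer_key:
    print(q, round(item_difficulty(q), 2), round(discrimination_index(q), 2), distractor_counts(q))
```

A question that nearly everyone answers correctly, that barely separates high and low scorers, or whose distractors are never chosen is a candidate for rewriting.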
Bringing item analysis into your instructional workflow can be an easy adjustment that provides the following improvements:
- Optimizes exam design and exam quality.
- Promotes accurate evaluation of student learning by ensuring assessments are fair and reflective of course content.
- Strengthens multiple-choice exams. Multiple-choice exams are great for testing a wide swath of concepts in a shorter amount of time; that said, it is difficult to measure higher-order thinking with this format. Item analysis helps ensure that multiple-choice questions aren’t too easy; for instance, if the distractors (i.e., the wrong choices) are too obviously wrong, item analysis will show that few or no students chose them.
- Bolsters teaching efficacy. When many students get the same question wrong, it may be that the question needs further clarification, or it may be that the material needs to be reviewed in class. This is particularly important because learning is cumulative; in order to expand learning, students need a deep understanding of fundamental concepts.
- Upholds academic integrity. Item analysis can pinpoint answers that are exactly the same, and a digital record reassures instructors that answers haven’t been edited or manipulated for re-grade requests.
When instructors conduct item analysis as part of the grading process, educators and students alike benefit.
Category tagging

Category tagging is different from item analysis, which focuses on each test question’s efficacy. Category tagging provides data at the student, course, or program level. These insights can help pinpoint student preparation for licensure exams, for instance.
Gradescope offers general tagging, enabling reporting at the course and department level and supporting accreditation.
Category tagging is a particularly robust feature in ExamSoft, a Turnitin product solution that focuses on higher education assessment in health sciences and law programs.
ExamSoft’s category tagging feature offers sweeping insights into what students do and do not know, giving institutions opportunities to improve student test outcomes. In addition to item analysis, which provides information about item difficulty, item discrimination, and distractor performance, ExamSoft enables instructors and administrators to analyze performance by category via its category tagging feature. Are students well versed in anatomy but struggling in pharmacology? With ExamSoft, educators have the data to plan effective next steps in both teaching and learning.
ExamSoft enables tagging flexibility. Questions can be tagged in myriad ways, from subject area to instructional method to accreditation standard. Did your students excel in anatomy, or will they need supplemental instruction? Was a topic taught in lecture learned less effectively than one taught in small-group discussion? This information can help instructors make adjustments that enable learning.
Category tagging also lets educators track student progress across multiple exams, so faculty and program administrators can quickly find the performance data they need to monitor progress against key learning objectives, target remediation efforts, and inform improvements at the course and program level (e.g., curriculum, instruction). Tags also allow administrators to produce data demonstrating that students have mastered specific areas as part of accreditation requirements.
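Conceptually, the roll-up behind this kind of category-level reporting is simple: each question carries one or more tags, and earned points are aggregated per tag. The snippet below is a hypothetical illustration of that general idea, not ExamSoft’s implementation; the tag names and data layout are invented for the example.

```python
# A hypothetical illustration of rolling tagged item results up to category level.
from collections import defaultdict

# Hypothetical tags mapping each question to one or more categories.
tags = {
    "Q1": ["anatomy"],
    "Q2": ["pharmacology"],
    "Q3": ["anatomy", "accreditation-standard-4.2"],
}

# Points earned / points possible per question for one student across exams.
scores = {"Q1": (1, 1), "Q2": (0, 1), "Q3": (1, 1)}

def category_performance(scores, tags):
    """Percentage earned per category, summed over all questions carrying that tag."""
    earned = defaultdict(float)
    possible = defaultdict(float)
    for question, (got, out_of) in scores.items():
        for tag in tags.get(question, []):
            earned[tag] += got
            possible[tag] += out_of
    return {tag: 100 * earned[tag] / possible[tag] for tag in possible}

print(category_performance(scores, tags))
# e.g. {'anatomy': 100.0, 'pharmacology': 0.0, 'accreditation-standard-4.2': 100.0}
```

Aggregated this way across a cohort and across exams, tagged results show at a glance which subject areas, instructional methods, or accreditation standards need attention.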
With software tools like Gradescope and ExamSoft, item analysis can provide even more granularity with little effort on the instructor’s part. Both item analysis and category tagging provide a meaningful return on investment and are worth prioritizing in an instructor’s grading and marking workflow.