Improving Questions: The Sapling Learning Quality Control Process

Blog Post created by Cindy Stowell on Jun 30, 2016

Sapling Learning is committed to creating the highest quality content, but we also devote considerable effort to maintaining and improving the efficacy of existing questions. This post covers what happens to a question after it is "live" (available for use in assignments).

There are two general opportunities for question revision during a question's lifetime: revision prompted by instructor or student comments, and revision through periodic, statistical reviews. The types of flaws caught in each instance tend to be very different in nature.

Occasionally, a Tech TA or Sapling Learning Support will receive an email or call from an instructor or student alerting us to a potential error in a question. The question may have a wording issue, the tolerances on the question may be too tight, or, rarely, the correct answer may be incorrectly coded. Regardless, the issue is often specific and the action is immediate: we may remove the question from the assignment or replace it with a corrected version.

Other issues with a question can be more subtle, and that is why we perform periodic statistical reviews. During these reviews, Sapling Learning Tech TAs analyze the statistical data for the questions that we have written. The statistics for each question include information about student views, attempts, average scores, and the triggered feedback. For each difficulty rating (easy, medium, and hard) within a discipline, we look for outliers: questions for which students have made significantly more or fewer attempts, or earned significantly more or fewer points, than the average for that difficulty rating. Those questions are flagged, and we examine them to see if we can find anything that might cause the unusual behavior. Sometimes the feedback for incorrect answers could be improved, sometimes an issue only occurs for certain values of a randomly generated variable, and sometimes the question tolerances are too tight. In the instances where we cannot find a flaw in the question, we consider giving the question a new difficulty rating.
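For readers curious what that kind of outlier check can look like, here is a minimal sketch in Python. The field names, the leave-one-out comparison against peer questions, and the z-score threshold are illustrative assumptions for this post, not a description of our production system.

```python
# Illustrative sketch: flag questions whose attempts or scores deviate
# strongly from their peers at the same difficulty rating.
# Field names ("difficulty", "avg_attempts", "avg_score") and the
# threshold are assumptions made for this example.
from collections import defaultdict
from statistics import mean, stdev

def flag_outliers(questions, threshold=2.0):
    """Return (id, difficulty, metric, z) for questions that look unusual."""
    by_difficulty = defaultdict(list)
    for q in questions:
        by_difficulty[q["difficulty"]].append(q)

    flagged = []
    for difficulty, group in by_difficulty.items():
        for metric in ("avg_attempts", "avg_score"):
            for q in group:
                # Compare each question to its peers (leave-one-out),
                # so an extreme value does not mask itself.
                peers = [p[metric] for p in group if p is not q]
                if len(peers) < 2:
                    continue  # not enough data for a meaningful comparison
                mu, sigma = mean(peers), stdev(peers)
                if sigma == 0:
                    continue
                z = (q[metric] - mu) / sigma
                if abs(z) > threshold:
                    flagged.append((q["id"], difficulty, metric, round(z, 2)))
    return flagged

# Example: a "hard" question that takes far more attempts, and earns a far
# lower average score, than its peers is flagged for review.
sample = [
    {"id": "Q1", "difficulty": "hard", "avg_attempts": 2.1, "avg_score": 0.82},
    {"id": "Q2", "difficulty": "hard", "avg_attempts": 2.3, "avg_score": 0.79},
    {"id": "Q3", "difficulty": "hard", "avg_attempts": 2.2, "avg_score": 0.80},
    {"id": "Q4", "difficulty": "hard", "avg_attempts": 6.8, "avg_score": 0.41},
]
print(flag_outliers(sample))  # flags Q4 on both attempts and score
```

A flagged question is not automatically changed; as described above, it simply gets a closer look from a Tech TA, who decides whether the question needs revision or just a different difficulty rating.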

Quickly addressing potential question flaws, regardless of how a flaw is brought to our attention, is one way that we continuously improve our question bank. However, there are always opportunities to make this process better. As an instructor, what do you consider the hallmarks of our best content?