Here's my situation: I'm creating a hypothetical class's results on an assessment set. There are twenty levels of difficulty on the test, from A to T. Getting 5 errors on a level A test is terrible, but because there's so much more material on a level T test, getting 5 errors on a level T test is great.
Since I didn't know what level of test each student was working with, I set up a scale from 1% to 100%, and then built a data table showing what a 95% would look like on each level of the test.
Each test has three scoring brackets: essentially good, fair, and poor. From test to test, the definition of what range of scores counts as, say, "fair" changes. But a 90% or less is always "poor," and a 95% or better is always "good."
So my challenge was to create a formula that would look at the percent score, pull up the score ranges for that level of the test, figure out what the "fair" range was if the student was fair, and then, based on *how* fair (a 91% is low fair, a 94% is high fair), work out how many errors the student would have had to make to earn that score, had he ACTUALLY taken the test.
Here's my final formula:
but in retrospect, I just feel kind of dirty after committing it. The hardest part was remembering my high-school math so that a *higher* percent score produced a *lower* number of errors. *sigh* English majors shouldn't have to do this sort of thing!
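For the curious, the core of that logic (look up the error range for the level, then run a linear interpolation across the "fair" bracket, inverted so a higher percent means fewer errors) can be sketched in Python. The per-level error counts below are invented for illustration, not my actual table:

```python
# Hypothetical error ranges for the "fair" bracket (91%-94%) at two levels.
# Tuple is (fewest_errors at 94%, most_errors at 91%); made-up numbers.
FAIR_ERROR_RANGES = {
    "A": (2, 4),    # level A: little material, so few errors allowed
    "T": (6, 14),   # level T: much more material, so more errors allowed
}

def errors_for_fair_score(level, percent):
    """Interpolate an error count for a 'fair' score (91-94%).

    A *higher* percent must produce a *lower* error count, so the
    interpolation is inverted: 91% maps to the most errors in the
    bracket, 94% to the fewest.
    """
    if not 91 <= percent <= 94:
        raise ValueError("only the 'fair' bracket (91-94%) is handled here")
    fewest, most = FAIR_ERROR_RANGES[level]
    # How far the score sits between the bottom (91%) and top (94%) of the bracket.
    frac = (percent - 91) / (94 - 91)
    # Invert: as frac rises toward 1, errors fall from `most` down to `fewest`.
    return round(most - frac * (most - fewest))
```

So a 91% on a level T test comes back as 14 errors, and a 94% as only 6, which is the high-school math that took all the remembering.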