A level fiasco

Having read the Ofqual report on this debacle, I can confidently say that this is what happened, and that it shows that relying on an algorithm rather than common sense leads to disaster.


The overall requirement was to "equalise" the results to each school's average over the last three years, on a school-by-school basis. Some allowance was made for trends in a school's performance: if a school's results had improved over the last two years, a further improvement in line with that trend was permitted. However, no allowance was made for the relative merits of this year's students compared with those of previous years.


Teachers were assumed to be the best judges of their students' ability, so they were asked to come up with an expected grade for each student. So far so good. But they were then also asked to rank the students, and this is where the problems start. Ranking 5 students you expect to get A* in order 1-5 will not accurately reflect their relative abilities. One may be clearly the best, but the other 4 roughly equal, and their order on any given exam day essentially interchangeable. An actual exam result would never expose this: they would all simply get A*.

What the algorithm did was say: this school averaged 3 A*s over the last three years, so of the 5 A* candidates, those ranked 1-3 got A* and the other two did not. Worse still, if the historical average allowed only 1 A grade for this school, rank 4 would get an A but rank 5 would get a B, even though the teacher rated that student an A*. This is obviously ridiculous when, as I said, the closeness of the 5 students' abilities is invisible to the algorithm, and the 5 may in fact be nearly indistinguishable.
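The downgrade mechanism described above can be sketched in a few lines of Python. This is a simplified illustration with made-up names and numbers, not Ofqual's actual model: it just walks the teacher's ranking and hands out grades from the school's historical distribution in order.

```python
def assign_grades(ranked_students, historical_grade_counts):
    """Assign grades by pairing the teacher's ranking (best first)
    with the school's historical grade distribution (best grade first).

    ranked_students: list of student names, rank 1 first.
    historical_grade_counts: list of (grade, count) pairs, e.g.
        [("A*", 3), ("A", 1), ("B", 1)] meaning the school's recent
        average was 3 A*s, 1 A and 1 B.
    """
    # Expand the distribution into a flat pool of grades, best first.
    pool = [grade for grade, count in historical_grade_counts
            for _ in range(count)]
    # Rank 1 gets the best available grade, rank 2 the next, and so on.
    return {student: grade for student, grade in zip(ranked_students, pool)}

# Five students the teacher predicted A* for, ranked 1-5:
students = ["S1", "S2", "S3", "S4", "S5"]
# School's three-year average: 3 A*s, 1 A, 1 B.
history = [("A*", 3), ("A", 1), ("B", 1)]

print(assign_grades(students, history))
# Ranks 4 and 5 are downgraded to A and B despite both being predicted A*.
```

Note that nothing in this procedure ever looks at how close the ranked students actually are to one another, which is exactly the flaw described above.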


Johnson says this system is "robust and dependable" - clearly nonsense. Beware spin, and beware algorithms.