It is that time of year when the results of various 10th and 12th standard board exams start coming in. It is that time of year when parents, or at least some of them, exult that their children got that 0.2 percent more than others because it helped them top the class or get into one or the other college, or alternatively despair that their children missed the fraction of a percentage point that might have helped them edge ahead of others. Such reactions are exactly what I heard after the results of the Indian Certificate of Secondary Education, or ICSE, standard 10 examination came in on Monday.

So it is also that time of year when I feel that something has gone seriously wrong with a society that spends huge amounts of time and resources to design, administer and prepare for such exams, and then sets so much store by their results. I am not commenting here on the quality of any of these exams, although this is also an important issue, especially because they play such a big role in a student’s life. While it is indeed important to ask how well these exams actually test understanding and thinking ability, I am not addressing that question here. I am commenting only on our inordinate dependence on them.

I understand students and parents worrying about marks for practical reasons, namely whether they will be good enough to secure admission into a particular college or subject stream. What I find disturbing, however, is that we are still not exercised enough about the fact that we need to worry so much about marks in the first place, and that more than a few people believe that these results are definitive signs of intellectual achievement or failure.

Flaws in the system

In reality, our board exam system is a solution that administrators have hit upon to allocate the scarce resource of college seats. On the surface, such a system has some merits: it is a transparent one based on a quantifiable parameter. But it also has many flaws.

A fundamental flaw is that using the results of one exam can lead to mistakes in selection, as Kamala Mukunda has shown in her book, What Did You Ask at School Today? A Handbook of Child Learning. In a chapter titled Measuring Learning, Mukunda, an educational psychologist who teaches at the Centre for Learning in Bangalore, shows two hypothetical graphs plotting the test scores and job performance of several candidates, similar to the one below. In the graph, each dot represents a candidate.


In this graph, the scatter diagram, or the shape made by the dots, runs from the bottom left to the top right. This indicates that there is an overall correlation between test scores and job performance (a pattern that would be fully visible only if an organisation hired every candidate, regardless of score). So most high-scorers will do well on the job while most low-scorers won't. Yet, as Mukunda points out, a small group of low-scorers will also perform well on the job (the dots in quadrant A), and a group of high-scorers will not (the dots in quadrant C). So if an organisation hires candidates based on a cut-off, represented by the vertical line, it will make the wrong decision with respect to everyone in quadrants A and C.

The better designed a test, the narrower the scatter diagram, and the fewer the candidates who fall in quadrants A and C. If the test were a perfect indicator of job performance, the scatter diagram would be a straight line. But in reality, exams are far from perfect indicators of job performance, so the scatter diagram will have some width. In the graph below, the test is an even more imperfect indicator of job performance than in the previous one. This means that the number of candidates in quadrants A and C will be quite large, and using a cut-off will mean making the wrong decision for many more of them.


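To make the point concrete, here is a minimal simulation sketch in Python. The numbers in it (the score range, the amount of noise linking scores to performance, and the cut-off) are entirely made-up assumptions of mine, not figures from Mukunda's book; the sketch only illustrates how a single cut-off misjudges everyone who lands in quadrant A or C.

    import random

    random.seed(1)

    def simulate(n=1000, noise=15, cutoff=60, good_job=60):
        wrong = 0
        for _ in range(n):
            score = random.uniform(30, 95)                 # hypothetical test score
            performance = score + random.gauss(0, noise)   # job performance, only loosely tied to the score
            selected = score >= cutoff                     # hiring or admission by a single cut-off
            performs_well = performance >= good_job
            # Quadrant A: rejected, but would have done well on the job.
            # Quadrant C: selected, but does not do well on the job.
            if selected != performs_well:
                wrong += 1
        return wrong / n

    # The noisier the test (the wider the scatter diagram), the larger the
    # share of candidates in quadrants A and C, and the more often the cut-off errs.
    for noise in (5, 15, 30):
        print("noise", noise, "->", round(simulate(noise=noise) * 100), "% wrong decisions")

The wider the scatter, the larger the fraction of candidates the cut-off misjudges, which is exactly what the two graphs are meant to convey.
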
Coaching classes

In addition to the fundamental problem that exams are imperfect measures of ability, our reliance on one exam to make crucial decisions about children’s futures has other negative consequences: apart from leading to huge amounts of stress for students and parents, it also spawns a cottage industry of tuitions and coaching classes – because a percentage point here or there can matter so much for college admissions and also because “topping” is given so much importance. Even bright, independent-minded students are tempted to take tuitions or attend coaching classes.

While tuitions outside school were at one time mainly for students with genuine difficulties in specific subjects, whose schools did not have the resources to provide extra help, today they are almost de rigueur. There is no shame attached to taking them.

On the contrary, in some cases coaching for exams has entirely supplanted education. Some coaching classes for IIT and medical entrance tests have physically moved into junior colleges to save students the trouble of travelling to them after college hours. When coaching has gone from being a supplement to education to a substitute for it, we should be seriously worried.

Some ways out

What can be done? Clearly, what the system needs is a complete overhaul. But in the short term, we can do many things to redress the serious flaws of depending upon one exam.

For one, as Mukunda recommends, institutions should use multiple parameters to determine entry. “Even if there is nothing we can do to improve a test’s validity, the least we can do is use multiple criteria for decision making,” she writes. By validity, she means how well-designed the test is (how well its scores actually track the ability it is meant to measure). Using multiple parameters will mean more work, but will also mean more effective selection.

An even shorter-term solution is to use a combination of marks and a lottery, a solution that several people, including the political scientist Pratap Bhanu Mehta, have suggested. For instance, an institution could decide that it will admit anyone with 90% and above, and if this yields twice the number of students that the institution has places for, it could use a lottery to narrow the field down. I am not prescribing any particular band of marks; I am merely proposing the concept of using bands of marks along with a lottery.
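
In procedural terms, the idea is simple enough to sketch in a few lines of Python. The numbers below (a 90% band, 60 seats, 200 hypothetical applicants) are illustrative assumptions of mine, not part of anyone's actual proposal.

    import random

    random.seed(7)

    # 200 hypothetical applicants with marks between 85% and 99% (made-up data).
    applicants = {f"student_{i}": random.uniform(85, 99) for i in range(1, 201)}
    seats = 60
    band_cutoff = 90.0

    # Step 1: everyone at or above the band qualifies, regardless of exact score.
    eligible = [name for name, marks in applicants.items() if marks >= band_cutoff]

    # Step 2: if the eligible pool exceeds the number of seats, a lottery decides
    # among them, so a 96% candidate and a 92% candidate have the same chance.
    admitted = random.sample(eligible, seats) if len(eligible) > seats else eligible

    print(len(eligible), "eligible in the band;", len(admitted), "admitted by lottery")

Within the band, the exact score no longer matters; the decision is deliberately random, rather than pretending that a two-point difference in marks is meaningful.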

People might protest against this solution and say it is unfair because someone who has struggled to get 96% might get left out after the lottery while someone with a lower score of 92% might get in. But this is not a good argument. For one, as Mukunda has shown, the correlation between test scores and ability is far from watertight. So while relying fully on a test might give the appearance of being scientific, it is not. Randomness is built into a system that depends on the cut-off of just one test.

Second, if there are only broad bands of marks that students need to strive for, then perhaps they will not kill themselves to get that extra percentage point or two. This will not eliminate the flaws entirely, because this system also involves a cut-off, with all the attendant problems described above. But it will blunt the obsession with marks as well as dilute the raison d’être of tuitions and coaching classes. It will also free up time for students to explore subjects a little more on their own, rather than sticking strictly to the exam syllabus.

As it stands, our system does not encourage risk-taking and free exploration and is designed mainly to produce students who are good at one thing: taking exams.