This article first appeared on the Wonkhe website (8th April 2019) under the heading ‘TEF won’t sweeten my rankings rancour’.
Which is the best university? It’s a seductive question to ask, but that doesn’t mean there’s a sensible answer. League tables, aka rankings, are the nonsensical answer you’re likely to get.
They weigh the wrong factors – a very narrow idea of best, based on counting what’s measured rather than measuring what counts. Traditionally, this has let research-led institutions dominate the rankings.
But even if the factors weighed were the right ones, the rankings use poor proxies to measure them – as if research citations, for example, were an unambiguous marker of quality, rather than being hugely dependent on publication in English, in the right journals and in the right disciplines.
But even if they were the right proxies, the data is often of poor quality: out of date, non-comparative, self-reported.
But even if the data were good, what rankers do with it isn’t: aggregating and weighting arbitrarily.
But even if the methodology were sound, the way the results are presented suggests that the distance between, say, first and thirty-first place is the same as that between fortieth and seventieth. Anyone who has ever seen a bell curve knows that this is a misrepresentation; a toy sketch below makes the point concrete.
But even if league tables didn’t make all these mistakes and more, their worst crime is to imagine that there is such a thing as a single best university, rather than many different ways in which universities can be good at different things. Indeed, it is the very diversity of the higher education sector that is its strength. It means the sector as a whole can paint a rainbow of objectives catering to the divergent needs of particular students, communities, employers, economies and societies.
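To make the spacing problem concrete, here is a minimal sketch with invented numbers – scores drawn from a bell curve and then presented the way a league table presents them. Nothing here is real ranking data or any ranker’s actual methodology.

```python
# Toy illustration only: 100 invented "overall scores" drawn from
# a bell curve, then read off as a ranked list.

import random

random.seed(1)
scores = sorted((random.gauss(60, 8) for _ in range(100)), reverse=True)

for upper, lower in [(1, 31), (40, 70)]:
    gap = scores[upper - 1] - scores[lower - 1]
    print(f"rank {upper} vs rank {lower}: 30 places apart, "
          f"score gap = {gap:.1f} points")

# Both pairs sit 30 places apart in the table, but the 1st-vs-31st
# gap (out in the thin tail of the curve) dwarfs the 40th-vs-70th
# gap (in the crowded middle): equal-looking steps, unequal
# underlying differences.
```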
No platform for rankings
You can’t ban league tables, sadly. If we want information about higher education to be transparent, then there are those who will put it in a pop chart. That will attract attention, because offering an answer to that “best university” question is sexy.
The answer might not be to have fewer league tables, but instead to have more: an infinity of rankings so that each person can pick the one that combines just the factors they want, weighted perfectly to their needs. No ranking would be authoritative, because the array would reflect the personal and diverse nature of the question.
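A minimal sketch of that idea, with invented institutions, invented metrics and weights chosen purely for illustration (nothing here reflects real data or any ranker’s actual method):

```python
# Toy sketch: each user weights only the factors they care about,
# and each weighting yields a different, equally "valid" table.

metrics = {
    # institution: invented scores, not real data
    "Alpha": {"research": 95, "teaching": 60, "green": 50, "access": 55},
    "Beta":  {"research": 70, "teaching": 85, "green": 80, "access": 75},
    "Gamma": {"research": 55, "teaching": 75, "green": 95, "access": 90},
}

def personal_ranking(weights):
    """Rank institutions by a user's own weighted sum."""
    score = lambda inst: sum(metrics[inst][k] * w for k, w in weights.items())
    return sorted(metrics, key=score, reverse=True)

print(personal_ranking({"research": 1.0}))                 # Alpha first
print(personal_ranking({"teaching": 0.7, "access": 0.3}))  # Beta first
print(personal_ranking({"green": 0.6, "access": 0.4}))     # Gamma first
```

Three users, three different winners – which is the point: authority shifts from the chart to the chooser.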
THE’s latest rankings product (its Global Impact Ranking) is a step in the direction of infinity in that it adds another league table to the shop window, incrementally diminishing the value of the ever-increasing heap.
However, perhaps we should welcome the desire to rate universities according to criteria such as recycling, fair labour practice and admissions policies, even if the process is as flawed as all the others? After all, the sexiness of rankings does shine a light on issues that might get overlooked (especially when the desire to do well in other rankings distracts universities from considering what else matters).
TEF: just another ranking?
That was explicitly the government’s intention when it introduced its own form of ranking – the Teaching Excellence Framework (TEF), which the then-minister Jo Johnson said would “introduce new incentives for universities to focus on teaching”. The idea was to rank universities on teaching quality, both to spur improvement and to drive student choice on the basis of that quality.
The problem is that TEF repeats the mistakes of other rankings. It weighs the wrong factors: the metrics (as was later acknowledged when the name was changed to include student outcomes) have little to do with teaching. It uses poor proxies, such as measuring employment rather than employability. The data is poor: the NSS component was downgraded after an NUS boycott undermined it. The methodology is arbitrary: for example, benchmarking by discipline but not by region.
The list goes on, but TEF is unlike other rankings in at least three respects. First, being the government’s own ranking, TEF bears more responsibility than most. It purports to be a truer truth – an authority that it hasn’t earned.
Second, most league tables – even though they are rarely entirely open about their methodology – do tend to stick to it. TEF, however, recognises the failings of its metric methodology and adds a subjective element: the review panel. It may be the best part of TEF, but it’s the least transparent and most susceptible to inconsistency.
Third, most league tables’ misrepresentation is a single hierarchical list. TEF retains the hierarchy, but shrinks distinctions to three categories: good (bronze), better (silver) and best (gold). This, of course, creates a cliff edge where a fine judgement between silver and bronze, say, translates into a presentational gulf.
Informing student choice
Interestingly, there is no “mediocre” or “bad” in this hierarchy, but that’s not how students see it. Bronze is no one’s idea of an endorsement. This highlights a critical issue about rankings – TEF included – that would remain even if their methods were more rigorous: how do they inform student choice?
Human choices are rarely rational. They emerge from a soup of feelings and preconceptions, sprinkled with croutons of information fried in confirmation bias. When it comes to a complex decision, such as which university to choose, we don’t devise a personal list of criteria, source objective data on each, and then coolly and fairly appraise the options against one another. Instead we latch on to something that provides a basis for beliefs we already hold.
In other words, we use heuristics: rules of thumb that often bear little resemblance to nuanced realities, but which hurt our brains less. This is precisely the quality of league tables that makes them so sexy. They say: don’t you worry your head about the real differences between two institutions that are both good in their own way, we’ve made the whole process simpler. Misleading, but simpler.
The same is true of TEF. Rather than providing information that disrupts misplaced beliefs and encouraging students to examine what kind of educational experience will support their own learning, TEF short-circuits the thinking and provides a yes/no/maybe checklist.
The Government was right to shine a light on teaching (well, on learning), but not the seedy neon beam of TEF. There are other approaches and, as Dame Shirley Pearce proceeds with her review of TEF, I hope she will think boldly about options that promote diversity and innovation rather than aping league tables that suppose there is a single model of “good” and which play blind darts to see who gets closest.