Next month, the world will witness an event that, in some circles, commands as much interest as the Super Bowl or a Presidential election: the U.S. News and World Report (USNWR) will release its 2015 list ranking all U.S. colleges and universities.
The candidates will be on edge. These rankings will direct our nation’s attention to whether Princeton has retained its top spot, to the top ten “A+ Schools for B Students,” and to whether any given school is on the rise or slipping. College leaders will nervously wait to see where their institutions sit in the national higher-education pecking order, and will either pop corks in celebration or scurry to form committees to boost their metrics. College-bound students—and their anxious parents—will scan the list with zeal, with many changing their plans based on the position of a given college or university.
Let’s admit what we all know in our guts but may not be willing to state for the record: These rankings are sheer lunacy.
Leave aside the folly of placing an ordinal ranking on colleges relative to other colleges, when, in many instances, colleges are as different from one another as can be. The real trouble is the process by which USNWR arrives at its rankings. These influential rankings are based on weighted calculations of, among other factors, acceptance rate, average SAT scores, research budget, perceived reputation among college administrators, and graduation/retention rates.
Consider the obvious fact: Not a single one of the criteria used by USNWR to rank colleges reflects how much students actually learn while at college. Two of the most significant criteria—SATs and admission rates—reflect students’ pre-college life, but have little to nothing to do with what happens to them once they begin their studies. The same arbitrariness applies to research budgets: Research dollars have a limited connection to educational outcomes, which means that while this metric might be valuable for a handful of administrators ranking their schools against one another, it is a poor measure of anything connected to student life.
And yet, in spite of the strong evidence of the arbitrariness of the USNWR rankings, college leaders sweat over them; parents and students pore over them; and no one seems willing or able to stand up to them. Is the USNWR system, like the old quip about democracy, the worst system except for all the others?
Simply put, no.
There are alternatives to the USNWR system. In their groundbreaking book Academically Adrift, Richard Arum and Josipa Roksa explore how much students actually learn during their college years. Arum and Roksa’s research is based on a test that gets far less attention than the SAT or ACT: the Collegiate Learning Assessment (CLA), administered by the Council for Aid to Education. In contrast to other standardized tests, the CLA measures the skills that higher education claims to impart: critical thinking, analytical reasoning, problem-solving, and writing.
While Arum and Roksa’s data is reported on an aggregate basis, it could serve as the foundation for an alternative way of ranking colleges: one based on criteria that actually matter to students and their parents. Imagine the public having access to longitudinal data, sorted by university, on how students perform on the CLA. Prospective students could use this data to inform their college selection and prioritize colleges that help their students learn.
As the old adage goes, what gets measured gets managed. Measuring colleges with the CLA, or a similarly rigorous instrument that assesses learning, could change the way colleges are managed. There would be far more focus on helping students develop critical skills. Excellent teaching would be weighted at least as heavily as research. A college’s success would be tied not to the past performance of its students on the SAT, or to the number of applications it managed to draw, or to shifting research budgets—but to its students’ academic growth during their education.
There is a federal effort underway to rethink the college ranking system. Alarmed by the rising costs and low graduation rates of many colleges, President Obama recently suggested that the federal government step in and rank colleges on criteria like graduation rates and post-graduation earnings data. While well-intentioned, this suggestion is counterproductive. This type of ranking would encourage colleges to make it easier to graduate, or concentrate their student bodies on young adults interested in more lucrative careers. In other words, an effort of this kind could exacerbate the problems it set out to fix.
In the absence of a brand-new ranking system or a government effort to bring sanity to this process, what we can recommend is this: the consumers of this information need to view it with a critical eye. For parents and students: Remember that the USNWR system has a unique set of inputs and built-in biases, which may not match your educational or personal goals. For college administrators: To whatever extent you can, ignore the rankings. We know it’s hard not to be influenced by the pecking order, but, just as we would wish our students to be more focused on knowledge for its own sake and less on grades as a goal in themselves, shouldn’t your college or university have the same ambition?
If nothing else, everyone affected by these rankings should keep this in mind: In a media environment in which print subscribers are in sharp decline, not paying attention to the USNWR rankings (read: not buying the issue, or looking at the website) can have a real impact. It can force a re-examination of the rankings’ findings and their purpose. It can force a more sober conversation on how colleges ought to be assessed. That may be too much to ask of a culture that’s eager to judge and measure and rank—but it may be the only way out of a system that is doing a deep disservice to higher education and to our students.
This post is based on content from Most Likely to Succeed: Preparing Our Kids for the Innovation Era by Tony Wagner and Ted Dintersmith.