Breaking Away from Rankings

The Growing Movement to Reform Research Assessment and Rankings

By Dean Hoke, September 22, 2025: For the past fifteen years, I have been closely observing what can only be described as a worldwide fascination—if not obsession—with university rankings, whether produced by Times Higher Education, QS, or U.S. News & World Report. In countless conversations with university officials, a recurring theme emerges: while most acknowledge that rankings are often overused by students, parents, and even funders when making critical decisions, few deny their influence. Nearly everyone agrees that rankings are a “necessary evil”—flawed, yet unavoidable—and many institutions still direct significant marketing resources toward leveraging rankings as part of their recruitment strategies.

It is against this backdrop of reliance and ambivalence that recent developments, such as Sorbonne University’s decision to withdraw from THE rankings, deserve closer attention.

In a move that signals a potential paradigm shift in how universities position themselves globally, Sorbonne University recently announced it will withdraw from the Times Higher Education (THE) World University Rankings starting in 2026. This decision isn’t an isolated act of defiance—Utrecht University had already left THE in 2023, and the Coalition for Advancing Research Assessment (CoARA), founded in 2022, has grown to 767 members by September 2025. Together, these milestones reflect a growing international movement that questions the very foundations of how we evaluate academic excellence.

The Sorbonne Statement: Quality Over Competition

Sorbonne’s withdrawal from THE rankings isn’t merely about rejecting a single ranking system. It appears to be a philosophical statement about what universities should stand for in the 21st century. The institution has made it clear that it refuses to be defined by its position in what it sees as commercial ranking metrics that reduce complex academic institutions to simple numerical scores.

Understanding CoARA: The Quiet Revolution

The Coalition for Advancing Research Assessment represents one of the most significant challenges to traditional academic evaluation methods in decades. Established in 2022, CoARA has grown rapidly to include 767 member organizations as of September 2025. This isn’t just a European phenomenon—though European institutions have been early and enthusiastic adopters.

The Four Pillars of Reform

CoARA’s approach centers on four key commitments that directly challenge the status quo:

1. Abandoning Inappropriate Metrics: The agreement explicitly calls for abandoning “inappropriate uses of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index.” This represents a direct assault on the quantitative measures that have dominated academic assessment for decades.

2. Avoiding Institutional Rankings: Perhaps most relevant to the Sorbonne’s decision, CoARA commits signatories to “avoid the use of rankings of research organisations in research assessment.” This doesn’t explicitly require withdrawal from ranking systems, but it does commit institutions to not using these rankings in their own evaluation processes.

3. Emphasizing Qualitative Assessment: The coalition promotes qualitative assessment methods, including peer review and expert judgment, over purely quantitative metrics. This represents a return to more traditional forms of academic evaluation, albeit updated for modern needs.

4. Responsible Use of Indicators: Rather than eliminating all quantitative measures, CoARA advocates for the responsible use of indicators that truly reflect research quality and impact, rather than simply output volume or citation counts.

European Leadership

[Chart: Top 10 Countries by CoARA Membership]

The geographic distribution of CoARA members tells a compelling story about where resistance to traditional ranking systems is concentrated. As the chart shows, European countries dominate participation, led by Spain and Italy, with strong engagement also from Poland, France, and several Nordic countries. This European dominance isn’t accidental—the region’s research ecosystem has long been concerned about the Anglo-American dominance of global university rankings and the way these systems can distort institutional priorities.


Prestigious European universities like ETH Zurich, the University of Zurich, Politecnico di Milano, and the University of Manchester are among the members, lending credibility to the movement. However, the data reveals that the majority of CoARA members (84.4%) are not ranked in major global systems like QS, which adds weight to critics’ arguments about institutional motivations.

[Chart: CoARA Members Ranked vs Not Ranked in QS]

The Regional Divide: Participation Patterns Across the Globe

What’s particularly striking about the CoARA movement is the relative absence of U.S. institutions. While European universities have flocked to join the coalition, American participation remains limited. This disparity reflects fundamental differences in how higher education systems operate across regions.

American Participation: The clearest data we have on institutional cooperation with ranking systems comes from the United States. Despite some opposition to rankings, 78.1% of the nearly 1,500 ranked institutions returned their statistical information to U.S. News in 2024, showing that the vast majority of American institutions remain committed to these systems. However, there have been some notable American defections. Columbia University is among the latest institutions to withdraw from U.S. News & World Report college rankings, joining a small but growing list of American institutions questioning these systems. Yet these remain exceptions rather than the rule.

European Engagement: While we don’t have equivalent participation rate statistics for European institutions, we can observe their engagement patterns differently. 688 universities appear in the QS Europe ranking for 2024, and 162 institutions from Northern Europe alone appear in the QS World University Rankings: Europe 2025. However, European institutions have simultaneously embraced the CoARA movement in large numbers, suggesting a more complex relationship with ranking systems—continued participation alongside philosophical opposition.

Global Participation Challenges: For other regions, comprehensive participation data is harder to come by. The Arab region has 115 entries across five broad areas of study in QS rankings, but these numbers reflect institutional inclusion rather than active cooperation rates. It’s important to note that some ranking systems use publicly available data regardless of whether institutions actively participate or cooperate with the ranking organizations.

This data limitation itself is significant—the fact that we have detailed participation statistics for American institutions but not for other regions may reflect the more formalized and transparent nature of ranking participation in the U.S. system versus other global regions.

American universities, particularly those in the top tiers, have largely benefited from existing ranking systems. The global prestige and financial advantages that come with high rankings create powerful incentives to maintain the status quo. For many American institutions, rankings aren’t just about prestige—they’re about attracting international students, faculty, and research partnerships that are crucial to their business models.

Beyond Sorbonne: Other Institutional Departures

Sorbonne isn’t alone in taking action. Utrecht University withdrew from THE rankings earlier, citing concerns about the emphasis on scoring and competition. These moves suggest that some institutions are willing to sacrifice prestige benefits to align with their values. Interestingly, the Sorbonne has embraced alternative ranking systems such as the Leiden Open Rankings, which highlight its impact.

The Skeptics’ View: Sour Grapes or Principled Stand?

Not everyone sees moves like Sorbonne’s withdrawal as a noble principle. Critics argue that institutions often raise philosophical objections only after slipping in the rankings. As one university administrator put it: “If the Sorbonne were doing well in the rankings, they wouldn’t want to leave. We all know why self-assessment is preferred. ‘Stop the world, we want to get off’ is petulance, not policy.”

This critique resonates because many CoARA members are not major players in global rankings, which fuels suspicion that reform may be as much about strategic positioning as about values. For skeptics, the call for qualitative peer review and expert judgment risks becoming little more than institutions grading themselves or turning to sympathetic peers.

The Stakes: Prestige vs. Principle

At the heart of this debate is a fundamental tension: Should universities prioritize visibility and prestige in global markets, or focus on measures of excellence that reflect their mission and impact? For institutions like the Sorbonne, stepping away from THE rankings is a bet that long-term reputation will rest more on substance than on league table positions. But in a globalized higher education market, the risk is real—rankings remain influential signals to students, faculty, and research partners.
Rankings also exert practical influence in ways that reformers cannot ignore. Governments frequently use global league tables as benchmarks for research funding allocations or as part of national excellence initiatives. International students, particularly those traveling across continents, often rely on rankings to identify credible destinations, and faculty recruitment decisions are shaped by institutional prestige. In short, rankings remain a form of currency in the global higher education market.

This is why the decision to step away from them carries risk. Institutions like the Sorbonne and Utrecht may gain credibility among reform-minded peers, but they could also face disadvantages in attracting international talent or demonstrating competitiveness to funders. Whether the gamble pays off will depend on whether alternatives such as CoARA’s assessment principles or ROI-based rankings achieve sufficient recognition to guide these critical decisions.

The Future of Academic Assessment

The CoARA movement and actions like Sorbonne’s withdrawal represent more than dissatisfaction with current ranking systems—they highlight deeper questions about what higher education values in the 21st century. If the movement gains further momentum, it could push institutions and regulators to diversify evaluation methods, emphasize collaboration over competition, and give greater weight to societal impact.

Yet rankings are unlikely to disappear. For students, employers, and funders, they remain a convenient—if imperfect—way to compare institutions across borders. The practical reality is that rankings will continue to coexist with newer approaches, even as reform efforts reshape how universities evaluate themselves internally.

Alternative Rankings: The Rise of Outcome-Based Assessment

While CoARA challenges traditional rankings, a parallel trend focuses on outcome-based measures such as return on investment (ROI) and career impact. Georgetown University’s Center on Education and the Workforce, for example, ranks more than 4,000 colleges on the long-term earnings of their graduates. Its findings tell a very different story than research-heavy rankings—Harvey Mudd College, which rarely appears at the top of global research lists, leads ROI tables with graduates projected to earn $4.5 million over 40 years.
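To make the arithmetic behind such ROI figures concrete, here is a minimal sketch of how a long-horizon earnings estimate can be computed as a net present value. The function name, discount rate, and all numbers below are illustrative assumptions for this article, not Georgetown CEW’s published methodology.

```python
def earnings_npv(annual_earnings, years, discount_rate=0.02, net_cost=0.0):
    """Net present value of a constant annual earnings stream,
    minus the net cost of attendance (all inputs illustrative)."""
    npv = sum(annual_earnings / (1 + discount_rate) ** t
              for t in range(1, years + 1))
    return npv - net_cost

# Illustrative only: a graduate earning $120k/year for 40 years,
# with a $150k net cost of attendance, discounted at 2% per year.
print(round(earnings_npv(annual_earnings=120_000, years=40, net_cost=150_000)))
```

A real ROI ranking would of course use earnings that vary over a career, institution-specific costs, and completion rates; the point here is only that a headline figure like “$4.5 million over 40 years” is a discounted sum of this kind.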

Other outcome-oriented systems, such as The Princeton Review’s “Best Value” rankings, emphasize affordability, employment, and post-graduation success, combining measures of financial aid, academic rigor, and post-graduation outcomes to highlight institutions that deliver strong returns relative to their costs. Public universities often rise in these rankings, as do specialized colleges that may not feature prominently in global research tables. Together, these approaches represent a pragmatic counterbalance to CoARA’s reform agenda, showing that students and employers increasingly want measures of institutional value beyond research metrics alone.

Institutions like the Albany College of Pharmacy and Health Sciences illustrate this point. Although virtually invisible in global rankings, Albany graduates report median salaries of $124,700 just ten years after graduation, placing the college among the best in the nation on ROI measures. For students and families making education decisions, data like this often carries more weight than a university’s position in QS or THE.

Together with Georgetown’s ROI rankings and the example of Harvey Mudd College, these cases suggest that outcome-based rankings are not marginal alternatives—they are becoming essential tools for understanding institutional value in ways that matter directly to students and employers.

Rankings as Necessary Evil: The Practical Reality


If the movement gains momentum, we could see:

Diversification of evaluation methods, with different regions and institution types developing assessment approaches that align with their specific values and goals

Reduced emphasis on competition between institutions in favor of collaboration and shared improvement

Greater focus on societal impact rather than purely academic metrics

More transparent and open assessment processes that allow for a better understanding of institutional strengths and contributions

Conclusion: Evolution, Not Revolution

The Coalition for Advancing Research Assessment and decisions like Sorbonne’s withdrawal from THE rankings represent important challenges to how we evaluate universities, but they signal evolution rather than revolution. Instead of the end of rankings, we are witnessing their diversification. ROI-based rankings, outcome-focused measures, and reform initiatives like CoARA now coexist alongside traditional global league tables, each serving different audiences.

Skeptics may dismiss reform as “sour grapes,” yet the concerns CoARA raises about distorted incentives and narrow metrics are legitimate. At the same time, American resistance reflects both philosophical differences and the pragmatic advantages U.S. institutions enjoy under current systems.

The most likely future is a pluralistic landscape: research universities adopting CoARA principles internally while maintaining a presence in global rankings for visibility; career-focused institutions highlighting ROI and student outcomes; and students, faculty, and employers learning to navigate multiple sources of information rather than relying on a single hierarchy.

In an era when universities must demonstrate their value to society, conversations about how we measure excellence are timely and necessary. Whether change comes gradually or accelerates, the one-size-fits-all approach is fading. A more complex mix of measures is emerging—and that may ultimately serve students, institutions, and society better than the systems we are leaving behind. In the end, what many once described to me as a “necessary evil” may persist—but in a more balanced landscape where rankings are just one measure among many, rather than the single obsession that has dominated higher education for so long.


Dean Hoke is Managing Partner of Edu Alliance Group, a higher education consultancy. He formerly served as President/CEO of the American Association of University Administrators (AAUA). Dean has worked with higher education institutions worldwide. With decades of experience in higher education leadership, consulting, and institutional strategy, he brings a wealth of knowledge on colleges’ challenges and opportunities. Dean is the Executive Producer and co-host for the podcast series Small College America.

Accreditation, Quality and Your Institution

By Dr. Chet Haskell, Partner of Edu Alliance Group, Inc. and a member of the Board of Directors. Dr. Haskell has served as President of the Monterey Institute of International Affairs and of Cogswell Polytechnical College. He is an expert in the field of regional and international accreditation.

If you are a leader of a regionally accredited higher education institution, worry about maintaining accreditation is probably not at the top of the list of things keeping you awake at night.

American college and university leaders are well acquainted with accreditation. Institutional accreditation is the necessary precondition for access to essential student aid and research funding. Programmatic or specialized accreditation has been achieved by many of your academic programs. Some of your faculty members and senior administrators and, occasionally, you yourselves participate in the processes of accreditation, serving on review teams or on accreditation commissions. The routines and reports that are necessary when an institution or a program is reviewed are familiar. Accreditation is a fact of life for you and your institution, as it is with most of American academe.

But are there aspects of accreditation that should concern you? I can suggest a couple.

First, the accreditation game may change. The regulations of the Department of Education are in the hands of a new administration with views that contrast with those of the previous administration. The PROSPER initiative of the House Committee on Education and the Workforce would change many rules and incentives that affect institutions and their accrediting bodies. More importantly, Congress is inching its way toward a reauthorization of the Higher Education Act. While accreditation reform is not a major theme in the current discussions, various proposals have been put forward that would have an impact. The decoupling of Title IV financial aid from accreditation is the most radical of these, but others, such as allowing regional accreditors to compete or mandating new oversight, would alter accreditation in significant ways. And because a number of mandates are imposed on accreditors, they are in many ways de facto agents of the U.S. government. The present debates may influence any or all of these factors. You can be sure that all the accrediting organizations, both institutional and programmatic, are paying close attention to these proposals.

Second, the accreditors themselves are seeking change. Among the initiatives discussed at the recent CHEA meeting were:

  1. Greater emphasis on mission appropriate measures of student success
  2. More efforts designed to inform the public about accreditation, its purposes and processes
  3. More attention to the roles and responsibilities of institutional trustees
  4. New international partnerships
  5. An increased focus on competency based education
  6. Better recognition of the diversity of student demographics and the role of lifelong learning.

These and similar steps will influence institutions in numerous ways.

Third, consider how well your institution’s stakeholders understand accreditation as it relates to academic quality. Accreditation’s credibility and value may be at risk, and with them your institution. Indeed, it can be argued, as reflected in recent polls, that more and more Americans not only do not understand higher education, but also doubt its worth.

This is perilous territory for higher education in general and individual institutions in particular. Such sentiments feed potential Congressional and Federal actions that would damage higher education. Furthermore, every higher education institution is dependent on the support of a wider community of students, parents, alumni, donors and local businesses and governments. An institution is on unsteady ground if these actors question its very value.

What this really comes down to is the meaning of quality in higher education. Accreditation is one piece of the quality discussion, but it is not well understood that institutional accreditation is just a threshold, establishing minimum standards and playing a sort of minimalist consumer protection role. This approach is the only way to accommodate the diversity of institutional missions and approaches that characterizes American higher education and gives it much of its energy and creativity. Threshold accreditation is not about excellence or quality improvement.

There is no accepted definition of objective quality for institutions. Even the smallest institutions are too complex for simple definitions of quality. Yet the stakeholders, including the public and governments, are often seeking clear indicators of the expected return on their investments. This creates a vacuum into which the various ranking systems have entered, meeting a broad demand for simple statements of relative quality and reputation.

Only institutions themselves can be responsible for assuring and improving quality. The best American institutions demonstrate this, despite hardly needing accreditation. The top research universities are accredited only because of the desire to access Federal Title IV and research funding. Yet they constantly seek excellence in an extremely rigorous and competitive environment. Crucial to their success are internal traditions and processes of accountability and a shared culture of academic freedom, faculty-based shared governance, meritocratic selection of students and faculty members, and similar characteristics. Accreditation does not force them to do this. They would operate the same way even if accreditation did not exist.

So your institution’s internal mechanisms and processes for addressing questions of quality are critical. This is particularly true because not all goals are convertible to metrics. Making outcome measures like employment the standard is not only misleading but, according to longtime Bard College president Leon Botstein, “dangerous.” Rigorous and continual internal accountability for defining and assessing quality is one of the most important responsibilities of any institution.

Part of internal quality accountability is being open to the rest of the world, since knowledge is a global commodity. How does your institution define benchmarks and select benchmarking targets? How can you assure you are avoiding the provincialism common to higher education institutions? How can you be certain you are educating students not only for their personal success, but also to fight for the principles of higher education and to confront the anti-intellectualism too common in our country? How do you make sure your students are educated for local engagement and also for life in interconnected national and international environments?

Isolationism is a dead end. Despite current national policies (and the fact that fewer than 40% of American adults even own a passport), successful institutions of the future will be those driven to improve quality for their own purposes and to do so in line with their missions, both local and international. At the same time, success will also depend on the capacity to explain all this to often-skeptical parents, students, donors and governments. Transparency matters, but accessibility that assures understanding matters even more.

Thus, something that should keep you up at night is guaranteeing your institution’s constant concentration on defining, assuring and improving educational quality in your own terms. The accreditation processes should be merely a validation of your own commitment to quality. Faculty and staff members, boards of trustees, alumni and donors and, most important of all, students and parents, need to understand the value of what is being offered and accomplished in your institution.

Of course, academic leaders need to pay attention to changes in the policy and accreditation contexts. However, at the end of the day they should worry most about the quality of their own institutions and programs. Crafting and sustaining a culture of quality appropriate to one’s own institution is fundamental. Assuring and improving quality in all aspects of an institution and its programs is essential. After all, there is no market for mediocrity. Rather, the value of academic quality will only increase in a competitive global environment. Accreditation may change in many ways, but you can cope with that. But accreditation is not central to high quality. Only you and your institution are.


Edu Alliance is a higher education consultancy firm with offices in the United States and the United Arab Emirates. The founders and its advisory members have assisted higher education institutions on a variety of projects, and many have held senior positions in higher education in the United States and internationally.

Our specific mission is to assist universities, colleges and educational institutions to develop capacity and enhance their effectiveness.