Thoughts About Accreditation and Small Colleges

April 20, 2026, By Chet Haskell – Like all higher education institutions in the United States, small colleges operate within a complex regulatory framework known as institutional accreditation. What began as an initiative by colleges and universities to ensure basic quality and a measure of consumer protection has evolved in multiple ways; most importantly, accrediting entities now serve as gatekeepers to Federal Government Title IV student financial aid resources. Over the twentieth century, this framework developed what is often referred to as the “triad”: the federal government, state agencies, and accrediting bodies (the institutional accreditors formerly known as “regional accreditors”).

Key Aspects of Accreditation

Prior to 2020, the nation was divided into six large regions, covering every state and US territory. Each regional accreditor held an effective monopoly within its territory, though their approaches, policies, and practices varied somewhat. It is now possible to be accredited by one “regional” despite being located outside its region (or even outside of the US). This situation is about to change further, as pending changes to the US Department of Education’s accreditation requirements will allow more types of accreditors and greater competition.

It is also important to understand that there are other types of “specialized” accreditors. These bodies focus on specific disciplines or professional fields to ensure minimum quality standards in academic programs. For example, ABET (Accreditation Board for Engineering and Technology) oversees accreditation of engineering programs in the US and in 40 other countries, covering more than 950 institutions. There are competing specialized accreditors for business programs, many health care programs, and a plethora of other specialized professional fields.

The key thing to remember is that specialized accreditation assumes the base institution is also accredited. Specialized accreditors are generally less important to small colleges that largely lack graduate professional programs.

US Title IV aid requirements mean that virtually every institution needs accreditation by a federally recognized accreditor in order for its students to receive Federal financial aid. Such student aid is the lifeblood of most institutions, especially small, private non-profit colleges. These institutions generally rely almost totally on revenue from enrollments, and Federal student aid typically accounts for at least 35% to 40% of that revenue. However, access to Title IV comes with strings. Most importantly, accrediting bodies recognized by the Federal Government are expected to hold institutions to certain standards as a condition of accreditation.

The final important element is that the institutional accreditors are membership organizations that receive their funding from member fees and similar sources. Unlike the case in many other countries, these accreditors do not receive funding from the US Government.

Thus the institutional accreditors (originally the six “regional accreditors”) are de facto agents of the US government even as they strenuously defend their independence as peer-dominated organizations committed to quality assurance and improvement. While access to Title IV is the essential link to government, the fact of an institution’s accreditation is, of itself, of great reputational value. Every accredited institution proclaims its status as a seal of approval or badge of excellence. This can be critically important for recruiting students, faculty, and administrators, as well as for attracting research and other grants.

Accreditation as Adequacy, not Excellence

In reality, accreditation standards are a lowest-common-denominator model. For example, WSCUC (formerly WASC, now the WASC Senior College and University Commission), the traditional accreditor for California, Hawaii, and the Pacific territories, has a set of standards that must apply equally and fairly to top universities like the University of California or Stanford University, to tiny religious schools, and to every type of academic institution in between. The general standards remain the same, but the expected outcomes cannot. Institutional accreditation is a necessary condition for an institution to operate, but it is hardly an indicator of more than minimal quality. (There are examples of smaller institutions with none of the assets of a Stanford proclaiming that they have the same accreditation as Stanford as evidence of their quality. This, of course, is misleading at best.)

This entire situation arises from one of the strengths of American higher education – its diversity. Institutions have different missions, academic approaches, scales, specializations, and so on. Students seeking education have a tremendous range of institutional opportunities – from huge public universities to minuscule specialized schools. However, all of these institutions are bound to a single accreditation regime due to Title IV student aid funding.

It also creates a paradox: institutional diversity within a complex ecosystem is generally seen as valuable, yet accreditation requirements often constrain the expression of that diversity. There are significant accreditation pressures that push institutions to become similar in many ways. These standards also make it difficult for institutions to be truly innovative. There is a set of isomorphic norms, expectations, leadership requirements, and best practices. All the diverse institutions, in some ways, look quite similar. The student outcomes – degrees that certify a certain level of educational achievement – are similar whether one attends an elite research university, a small regional public institution, or a minuscule independent school. Yes, there are subjective reputational differences, but in many ways, a degree is a ticket punched.

Objectivity and Subjectivity in Accreditation

The process by which standards are set and evaluated is, by its very nature, highly subjective despite efforts at measurement. The actual standards have objective elements. For example, each institution must have a CEO and a CFO, an independent board of trustees, a statement of mission, and meet certain (minimal) financial standards; it must have ways of measuring student outcomes, and so forth. But assessing the degree to which an institution a) meets these minimum standards and b) can demonstrate some measure of quality is largely subjective in nature. Indeed, assessment is conducted by volunteer peer panels undertaking periodic reviews and reporting their findings. The essential value inherent in the process is “peer review.” Colleges and universities essentially review and rate each other, rather than having a government agency or other private, non-educational entity do so. Peer review has its strengths and weaknesses, but is often considered a preferred alternative to having the government perform the task.

Peer review teams typically include a senior financial officer, whose assessment of the financial status of the institution is combined with assessments of other members focusing on non-financial topics. While the comprehensive reports are important, the stark reality is that viability of institutional finances is key. After all, nothing else matters if the institution is not financially sustainable.

Accreditors also look at other sources of information. Institutions are required to submit annual reports, augmented by independent audits. Some accreditors have dashboards that provide partial financial data. However, most rely heavily on composite measures such as the Composite Financial Index (CFI) and the US Department of Education’s financial responsibility composite scores. While such data sources are valuable, they suffer from two inevitable limitations. First, broad comprehensive indices may not explain much about the particular financial issues of an individual institution. Second, all of these sources are retrospective in nature and may be of little value in looking forward.
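For readers unfamiliar with how such composite indices work, the sketch below shows the general shape of a CFI-style calculation: a handful of financial ratios, each scaled by a “strength factor” and weighted, summed into a single score. The strength factors, weights, caps, and threshold shown are the commonly cited values for private institutions, but they should be read as illustrative assumptions rather than any accreditor’s or the Department’s official formula.

```python
# A minimal sketch of a CFI-style composite score. The strength factors,
# weights, caps, and the 3.0 "health" threshold are commonly cited values
# for private institutions -- illustrative assumptions, not an official formula.

def composite_financial_index(primary_reserve, viability,
                              return_on_net_assets, net_operating_revenues):
    """Combine four financial ratios into a single composite score."""
    components = [
        # (ratio value, strength factor, weight)
        (primary_reserve,        0.133, 0.35),  # expendable net assets / total expenses
        (viability,              0.417, 0.35),  # expendable net assets / long-term debt
        (return_on_net_assets,   0.020, 0.20),  # change in net assets / total net assets
        (net_operating_revenues, 0.013, 0.10),  # operating result / operating revenue
    ]
    score = 0.0
    for ratio, strength, weight in components:
        # Scale each ratio to a common range; cap so no single extreme
        # ratio dominates the index (assumed bounds of -4 and 10).
        scaled = max(-4.0, min(10.0, ratio / strength))
        score += scaled * weight
    return score

# Example: a tuition-dependent college with thin reserves and a small deficit.
print(round(composite_financial_index(0.15, 0.30, 0.01, -0.02), 2))
# -> 0.59, well below the commonly cited 3.0 threshold of relative financial health
```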

An accrediting body’s staff (sometimes assisted by outside experts) will look carefully at an institution when the CFI and other indices are too low or when a peer review team raises significant financial concerns. But accreditors covering hundreds of institutions do not have the capacity to examine each institution’s situation in detail, leading to a necessary triaging approach. And there are no “bright lines” unless an institution cannot make payroll or otherwise demonstrates extreme stress, and by then it is usually too late.

An institution seeking accreditation must demonstrate that it has the financial wherewithal to operate for the foreseeable future. After all, it is reasonable for prospective students and their parents to assume the institution will survive at least long enough for degrees to be completed. However, the typical reaccreditation cycle of 8-10 years means that outside of standard annual reports, the accreditor has little information about institutions that may be in financial trouble. There are no effective early warning systems, and institutions in trouble have few incentives to inform their accreditors, lest word of the problem further endanger the institution’s fiscal health by discouraging students from staying or even applying.

At the end of the day, the institutions themselves are fundamental to their own financial situations. The accreditors and the Department of Education may ring alarms in extreme cases, but the institutions – and especially the private institutions – are basically on their own in many ways.

How do these aspects of accreditation affect small colleges and universities?

Much attention has been paid of late to the number of smaller private institutions that have had to close and the growing risk of many more in the years ahead. Such closures – or the growing number of mergers and acquisitions among small colleges that are alternatives to closure – are at root financial in nature. No institution with sufficient financial resources simply decides it does not care to continue.

The basic economics of small private colleges are well known. They have limited endowment resources and are almost totally dependent on tuition revenues. Their costs are rising, but the pool of traditional-age students is falling. Competition among all manner of institutions is increasing for the same students. In this situation, some institutions seek additional revenue sources by offering non-degree certificates or microcredentials, adding limited graduate programs, pursuing distance education, or increasing auxiliary activities. But at the end of the day, the core of any small college is its academic programs, and the only significant source of additional revenue for most is fundraising.

Competition among small institutions takes many forms. In some cases, it refers to academic environments, programs, and opportunities. In others, it refers to reputation, faculty, or facilities. Crucially, however, a principal competition is in pricing. These institutions have posted sticker prices, but almost all offer significant discounts (often exceeding 50%) to attract more students. The impact of this financial arms race is to constrain further the resources available to fund the institution.
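To make the arithmetic of discounting concrete, here is a small hypothetical sketch; the sticker price, discount rate, and enrollment figures are invented for illustration and are not drawn from any particular institution.

```python
# Hypothetical illustration of how tuition discounting erodes net revenue.
# All figures are invented for the example.
sticker_price = 45_000   # posted annual tuition
discount_rate = 0.55     # institutional aid as a share of gross tuition
enrollment = 800         # full-time students

gross_tuition = sticker_price * enrollment
net_tuition = gross_tuition * (1 - discount_rate)

print(f"Gross tuition: ${gross_tuition:,.0f}")  # $36,000,000 on paper
print(f"Net tuition:   ${net_tuition:,.0f}")    # $16,200,000 actually available
```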

A central problem for these institutions is one of scale, or more accurately, lack of scale. They have few opportunities to operate with any economies of scale. The cost of providing a class to 10 students is essentially the same as providing one to 30 students. Unlike larger institutions, these small schools do not have large introductory classes of 100 or more students that can, in effect, subsidize smaller specialized classes.

Another impact of institutional scale concerns the process of meeting accreditation standards. Successful accreditation requires various institutional commitments. For example, there are data requirements on student achievement, retention, and completion. Such requirements mean an institution must have the administrative capacity to produce data. Furthermore, a central element of the process is the engagement of faculty in both ongoing student assessment and the creation of the documentation needed to demonstrate progress. Such processes cannot be done by staff members alone and require considerable time and commitment on the part of faculty members. Larger institutions have the necessary administrative staff, such as institutional researchers, to support this process. Smaller institutions are often challenged in this respect. Again, a large university has many faculty members across whom the required engagement can be spread. This is not the case with smaller institutions. Simply put, small institutions carry an extra accreditation burden that is more easily borne by larger institutions.

As noted, there are increasing pressures for institutional consolidation. One current barrier is the time and complexity required to put a merger or partnership in place. The actual process of institutional negotiations is complex and difficult. But then the proposed arrangement requires approvals from accreditors, state higher education regulatory bodies, and the US Department of Education, all of which can take years. The cost of pursuing first an agreement and then the approvals is substantial – legal fees, financial advisors, project management, and other consultants all add up. Additionally, there are the opportunity costs of institutional leadership being consumed by the merger or partnership process, rather than focusing on the institution’s regular business or on alternative institutional directions.

The pending Trump Administration changes to the accreditation processes are, in some ways, designed to mitigate these constraints. For example, there is a proposal to streamline the Department of Education approval process. And, as noted, the Administration also seeks to promote increased flexibility for institutions and accreditors, in part through more market-centered competition among accreditors.

While increased flexibility would be welcome, one expected outcome is to facilitate the entry of for-profit institutions into the competitive space. Prior administrations acted to curb the perceived excesses of earlier for-profit models (think Trump University). A resurgence of for-profit institutions might be welcomed from an institutional diversity perspective. Still, the impact on small private colleges is likely to be negative, as it will further increase competition for students.

Another change under discussion is institutional transparency. While there are efforts to provide dashboards, student cost calculators, and other data-oriented information sources, the fact of the matter is that higher education is complicated. Currently, most of the accreditors post their findings about an institution on their websites. This may simply be a statement that Institution X is accredited, or it may include more detailed information.

One accreditor, WSCUC, posts the actual reports of peer review committees, as well as the formal outcomes of accreditor decisions. The problem is that such reports are generally arcane to readers outside higher education and are written in a stylized manner designed for other academics and the accreditor’s top decision-making body. And these reports and decisions are written with great care to avoid, as much as possible, further undercutting institutions. A review that focuses on a college’s financial weaknesses can easily become a self-fulfilling prophecy. Nonetheless, consumer protection goals should tilt toward greater, not lesser, transparency.

Most small colleges need support of various kinds. This may come in the form of advice or access to specialized expertise, the provision of which might be a useful accreditor task. Most accreditors already share experience and knowledge through conferences, workshops, and the osmotic effects of peer review itself. Accreditors should consider ways to ease the consolidation process, seeking a balance between becoming more supportive and less regulatory.

However, at the end of the day, most problems are rooted not in definitions of academic quality, or the lack thereof, but in raw finances. All too often, accreditors’ focus on an institution’s financial problems comes too late, and the only remaining task is to ensure options for students to complete their studies through transfers or teach-outs. Finding ways to identify such problems earlier and providing access to supportive advice would be salutary.

Small colleges are an essential component of American higher education. The fact of the matter is that most could not exist without governmental support, which comes principally through student financial aid rather than direct governmental control. (While there are other forms of Federal aid, notably research funding, most small institutions have limited capacity to access these resources, which, as the Trump Administration has demonstrated, come with additional strings attached.)

Accrediting bodies need to explore ways to fulfill their basic functions while also serving as sources of advice and support for their member institutions, especially the smallest of them. It is in everyone’s interest that they do so.


Dr. Chet Haskell serves as Co-Head for the College Partnerships and Alliances for the Edu Alliance Group. Chet is a higher education leader with extensive experience in academic administration, institutional strategy, and governance. He recently completed six and a half years as Vice Chancellor for Academic Affairs and University Provost at Antioch University, where he played a central role in creating the Coalition for the Common Good with Otterbein University. Earlier in his career, he spent 13 years at Harvard University in senior academic positions, including Executive Director of the Center for International Affairs and Associate Dean of the Kennedy School of Government. He later served as Dean of the College at Simmons College and as President of both the Monterey Institute of International Studies and Cogswell Polytechnical College, successfully guiding both institutions through mergers.

An experienced consultant, Dr. Haskell has advised universities and ministries of education in the United States, Latin America, Europe, and the Middle East on issues of finance, strategy, and accreditation. His teaching and research have focused on leadership and nonprofit governance, with a particular emphasis on helping smaller institutions adapt to financial and structural challenges. He earned DPA and MPA degrees from the University of Southern California, an MA from the University of Virginia, and an AB cum laude from Harvard University.

Breaking Away from Rankings

The Growing Movement to Reform Research Assessment and Rankings

By Dean Hoke, September 22, 2025: For the past fifteen years, I have been closely observing what can only be described as a worldwide fascination—if not obsession—with university rankings, whether produced by Times Higher Education, QS, or U.S. News & World Report. In countless conversations with university officials, a recurring theme emerges: while most acknowledge that rankings are often overused by students, parents, and even funders when making critical decisions, few deny their influence. Nearly everyone agrees that rankings are a “necessary evil”—flawed, yet unavoidable—and many institutions still direct significant marketing resources toward leveraging rankings as part of their recruitment strategies.

It is against this backdrop of reliance and ambivalence that recent developments, such as Sorbonne University’s decision to withdraw from THE rankings, deserve closer attention.

In a move that signals a potential paradigm shift in how universities position themselves globally, Sorbonne University recently announced it will withdraw from the Times Higher Education (THE) World University Rankings starting in 2026. This decision isn’t an isolated act of defiance—Utrecht University had already left THE in 2023, and the Coalition for Advancing Research Assessment (CoARA), founded in 2022, has grown to 767 members by September 2025. Together, these milestones reflect a growing international movement that questions the very foundations of how we evaluate academic excellence.

The Sorbonne Statement: Quality Over Competition

Sorbonne’s withdrawal from THE rankings isn’t merely about rejecting a single ranking system. It appears to be a philosophical statement about what universities should stand for in the 21st century. The institution has made it clear that it refuses to be defined by its position in what it sees as commercial ranking matrices that reduce complex academic institutions to simple numerical scores.

Understanding CoARA: The Quiet Revolution

The Coalition for Advancing Research Assessment represents one of the most significant challenges to traditional academic evaluation methods in decades. Established in 2022, CoARA has grown rapidly to include 767 member organizations as of September 2025. This isn’t just a European phenomenon—though European institutions have been early and enthusiastic adopters.

The Four Pillars of Reform

CoARA’s approach centers on four key commitments that directly challenge the status quo:

1. Abandoning Inappropriate Metrics The agreement explicitly calls for abandoning “inappropriate uses of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index.” This represents a direct assault on the quantitative measures that have dominated academic assessment for decades.

2. Avoiding Institutional Rankings Perhaps most relevant to the Sorbonne’s decision, CoARA commits signatories to “avoid the use of rankings of research organisations in research assessment.” This doesn’t explicitly require withdrawal from ranking systems, but it does commit institutions to not using these rankings in their own evaluation processes.

3. Emphasizing Qualitative Assessment The coalition promotes qualitative assessment methods, including peer review and expert judgment, over purely quantitative metrics. This represents a return to more traditional forms of academic evaluation, albeit updated for modern needs.

4. Responsible Use of Indicators Rather than eliminating all quantitative measures, CoARA advocates for the responsible use of indicators that truly reflect research quality and impact, rather than simply output volume or citation counts.

European Leadership

[Chart: Top 10 Countries by CoARA Membership]

The geographic distribution of CoARA members tells a compelling story about where resistance to traditional ranking systems is concentrated. As the chart shows, European countries dominate participation, led by Spain and Italy, with strong engagement also from Poland, France, and several Nordic countries. This European dominance isn’t accidental—the region’s research ecosystem has long been concerned about the Anglo-American dominance of global university rankings and the way these systems can distort institutional priorities.


Prestigious European universities like ETH Zurich, the University of Zurich, Politecnico di Milano, and the University of Manchester are among the members, lending credibility to the movement. However, the data reveals that the majority of CoARA members (84.4%) are not ranked in major global systems like QS, which adds weight to critics’ arguments about institutional motivations.

[Chart: CoARA Members Ranked vs Not Ranked in QS]

The Regional Divide: Participation Patterns Across the Globe

What’s particularly striking about the CoARA movement is the relative absence of U.S. institutions. While European universities have flocked to join the coalition, American participation remains limited. This disparity reflects fundamental differences in how higher education systems operate across regions.

American Participation: The clearest data we have on institutional cooperation with ranking systems comes from the United States. Despite some opposition to rankings, 78.1% of the nearly 1,500 ranked institutions returned their statistical information to U.S. News in 2024, showing that the vast majority of American institutions remain committed to these systems. However, there have been some notable American defections. Columbia University is among the latest institutions to withdraw from U.S. News & World Report college rankings, joining a small but growing list of American institutions questioning these systems. Yet these remain exceptions rather than the rule.

European Engagement: While we don’t have equivalent participation rate statistics for European institutions, we can observe their engagement patterns differently. 688 universities appear in the QS Europe ranking for 2024, and 162 institutions from Northern Europe alone appear in the QS World University Rankings: Europe 2025. However, European institutions have simultaneously embraced the CoARA movement in large numbers, suggesting a more complex relationship with ranking systems—continued participation alongside philosophical opposition.

Global Participation Challenges: For other regions, comprehensive participation data is harder to come by. The Arab region has 115 entries across five broad areas of study in QS rankings, but these numbers reflect institutional inclusion rather than active cooperation rates. It’s important to note that some ranking systems use publicly available data regardless of whether institutions actively participate or cooperate with the ranking organizations.

This data limitation itself is significant—the fact that we have detailed participation statistics for American institutions but not for other regions may reflect the more formalized and transparent nature of ranking participation in the U.S. system versus other global regions.

American universities, particularly those in the top tiers, have largely benefited from existing ranking systems. The global prestige and financial advantages that come with high rankings create powerful incentives to maintain the status quo. For many American institutions, rankings aren’t just about prestige—they’re about attracting international students, faculty, and research partnerships that are crucial to their business models.

Beyond Sorbonne: Other Institutional Departures

Sorbonne isn’t alone in taking action. Utrecht University withdrew from THE rankings earlier, citing concerns about the emphasis on scoring and competition. These moves suggest that some institutions are willing to sacrifice prestige benefits to align with their values. Interestingly, the Sorbonne has embraced alternative ranking systems such as the Leiden Open Rankings, which highlight its impact.

The Skeptics’ View: Sour Grapes or Principled Stand?

Not everyone sees moves like Sorbonne’s withdrawal as a principled stand. Critics argue that institutions often raise philosophical objections only after slipping in the rankings. As one university administrator put it: “If the Sorbonne were doing well in the rankings, they wouldn’t want to leave. We all know why self-assessment is preferred. ‘Stop the world, we want to get off’ is petulance, not policy.”

This critique resonates because many CoARA members are not major players in global rankings, which fuels suspicion that reform may be as much about strategic positioning as about values. For skeptics, the call for qualitative peer review and expert judgment risks becoming little more than institutions grading themselves or turning to sympathetic peers.

The Stakes: Prestige vs. Principle

At the heart of this debate is a fundamental tension: Should universities prioritize visibility and prestige in global markets, or focus on measures of excellence that reflect their mission and impact? For institutions like the Sorbonne, stepping away from THE rankings is a bet that long-term reputation will rest more on substance than on league table positions. But in a globalized higher education market, the risk is real—rankings remain influential signals to students, faculty, and research partners.

Rankings also exert practical influence in ways that reformers cannot ignore. Governments frequently use global league tables as benchmarks for research funding allocations or as part of national excellence initiatives. International students, particularly those traveling across continents, often rely on rankings to identify credible destinations, and faculty recruitment decisions are shaped by institutional prestige. In short, rankings remain a form of currency in the global higher education market.

This is why the decision to step away from them carries risk. Institutions like the Sorbonne and Utrecht may gain credibility among reform-minded peers, but they could also face disadvantages in attracting international talent or demonstrating competitiveness to funders. Whether the gamble pays off will depend on whether alternative measures like CoARA or ROI rankings achieve sufficient recognition to guide these critical decisions.

The Future of Academic Assessment

The CoARA movement and actions like Sorbonne’s withdrawal represent more than dissatisfaction with current ranking systems—they highlight deeper questions about what higher education values in the 21st century. If the movement gains further momentum, it could push institutions and regulators to diversify evaluation methods, emphasize collaboration over competition, and give greater weight to societal impact.

Yet rankings are unlikely to disappear. For students, employers, and funders, they remain a convenient—if imperfect—way to compare institutions across borders. The practical reality is that rankings will continue to coexist with newer approaches, even as reform efforts reshape how universities evaluate themselves internally.

Alternative Rankings: The Rise of Outcome-Based Assessment

While CoARA challenges traditional rankings, a parallel trend focuses on outcome-based measures such as return on investment (ROI) and career impact. Georgetown University’s Center on Education and the Workforce, for example, ranks more than 4,000 colleges on the long-term earnings of their graduates. Its findings tell a very different story than research-heavy rankings—Harvey Mudd College, which rarely appears at the top of global research lists, leads ROI tables with graduates projected to earn $4.5 million over 40 years.
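Measures like Georgetown’s typically rest on standard present-value arithmetic: discount an expected stream of earnings, net out the cost of attendance, and compare the result across institutions. The sketch below shows that arithmetic in miniature; every figure, including the 2% discount rate, is an invented assumption for illustration, not Georgetown’s actual methodology or data.

```python
# Hypothetical sketch of an ROI-style metric: the net present value (NPV) of
# an earnings stream minus the cost of attendance. All numbers are invented.

def npv(cash_flows, discount_rate):
    """Discount a stream of annual cash flows back to the present."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

cost_of_attendance = [-30_000] * 4    # four years of net price (outflows)
earnings_premium = [25_000] * 36      # assumed annual earnings gain, 36 working years

roi = npv(cost_of_attendance + earnings_premium, discount_rate=0.02)
print(f"40-year NPV: ${roi:,.0f}")    # roughly $474,000 under these assumptions
```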

Other outcome-oriented systems, such as The Princeton Review’s “Best Value” rankings, emphasize affordability, employment, and post-graduation success. These approaches highlight institutions that may be overlooked by global research rankings but deliver strong results for students. Together, they represent a pragmatic counterbalance to CoARA’s reform agenda, showing that students and employers increasingly want measures of institutional value beyond research metrics alone.

These alternative models can be seen most vividly in The Princeton Review’s “Best Value” rankings, which combine measures of financial aid, academic rigor, and post-graduation outcomes to highlight institutions that deliver strong returns relative to their costs. Public universities often rise in these rankings, as do specialized colleges that may not feature prominently in global research tables.

Institutions like the Albany College of Pharmacy and Health Sciences illustrate this point. Although virtually invisible in global rankings, Albany graduates report median salaries of $124,700 just ten years after graduation, placing the college among the best in the nation on ROI measures. For students and families making education decisions, data like this often carries more weight than a university’s position in QS or THE.

Together with Georgetown’s ROI rankings and the example of Harvey Mudd College, these cases suggest that outcome-based rankings are not marginal alternatives—they are becoming essential tools for understanding institutional value in ways that matter directly to students and employers.

Rankings as Necessary Evil: The Practical Reality

As the “necessary evil” framing suggests, dissatisfaction alone will not displace rankings. But the reform movement is steadily changing what institutions expect from assessment.

If the movement gains momentum, we could see:

Diversification of evaluation methods, with different regions and institution types developing assessment approaches that align with their specific values and goals

Reduced emphasis on competition between institutions in favor of collaboration and shared improvement

Greater focus on societal impact rather than purely academic metrics

More transparent and open assessment processes that allow for a better understanding of institutional strengths and contributions

Conclusion: Evolution, Not Revolution

The Coalition for Advancing Research Assessment and decisions like Sorbonne’s withdrawal from THE rankings represent important challenges to how we evaluate universities, but they signal evolution rather than revolution. Instead of the end of rankings, we are witnessing their diversification. ROI-based rankings, outcome-focused measures, and reform initiatives like CoARA now coexist alongside traditional global league tables, each serving different audiences.

Skeptics may dismiss reform as “sour grapes,” yet the concerns CoARA raises about distorted incentives and narrow metrics are legitimate. At the same time, American resistance reflects both philosophical differences and the pragmatic advantages U.S. institutions enjoy under current systems.

The most likely future is a pluralistic landscape: research universities adopting CoARA principles internally while maintaining a presence in global rankings for visibility; career-focused institutions highlighting ROI and student outcomes; and students, faculty, and employers learning to navigate multiple sources of information rather than relying on a single hierarchy.

In an era when universities must demonstrate their value to society, conversations about how we measure excellence are timely and necessary. Whether change comes gradually or accelerates, the one-size-fits-all approach is fading. A more complex mix of measures is emerging—and that may ultimately serve students, institutions, and society better than the systems we are leaving behind. In the end, what many once described to me as a “necessary evil” may persist—but in a more balanced landscape where rankings are just one measure among many, rather than the single obsession that has dominated higher education for so long.


Dean Hoke is Managing Partner of Edu Alliance Group, a higher education consultancy. He formerly served as President/CEO of the American Association of University Administrators (AAUA). Dean has worked with higher education institutions worldwide. With decades of experience in higher education leadership, consulting, and institutional strategy, he brings a wealth of knowledge on colleges’ challenges and opportunities. Dean is the Executive Producer and co-host for the podcast series Small College America.