
How to Hold Institutions Accountable for Student Success

Who doesn’t love a top 10 or a top 100 or a top 200 list? That’s true whether we’re speaking of college rankings, college football and basketball ratings or, yes, rankings of scholars.

Each year, EdWeek, the K-12 world’s answer to Inside Higher Ed or The Chronicle, publishes a list of the 200 university-based scholars who, it claims, did the most in the past year to shape educational practice and policy.

Many of the names come as no surprise. Within the top 10 are such big names as Angela Duckworth of grit fame; Carol Dweck, who coined the terms “fixed” and “growth” mind-sets; Howard Gardner, who challenged the notion of a single type of intelligence; Linda Darling-Hammond, president of the California State Board of Education; and Daniel Willingham, whose many books and articles examine the application of cognitive psychology and neuroscience to education.

Readers of the higher ed press will recognize several names, including educational historian Jonathan Zimmerman (at 14), economist Raj Chetty (at 22), higher ed finance expert Robert Kelchen (at 24) and sociologist Richard Arum (at 71). Those in my discipline will note that Sam Wineburg, a leader in efforts to promote historical thinking and digital literacy in K-12 schools, is ranked 15th.

But as with any list, the omissions are as striking as the inclusions. Larry Cuban is there (at 34), but many of the leading historians of education, like Roger Geiger and John Thelin, aren’t. Nor are a number of the sociologists of education whom I consider extraordinarily important, like Steven Brint and David F. Labaree. Many of the figures included on the list, like the 88-year-old Yale child psychiatrist James P. Comer (at 81), exerted their greatest influence years ago, which makes it surprising that figures of equal or perhaps greater impact are missing, like E. D. Hirsch, of cultural literacy fame, or Uri Treisman, the MacArthur award–winning proponent of math pathways.

Since only those who are university-affiliated are listed, it’s not surprising that figures like Salman Khan aren’t mentioned. But no Diane Ravitch, who taught at NYU, no Ted Mitchell, no Freeman Hrabowski and no John King? In other words, like many and perhaps most rankings, this one appears to uneasily combine several elements: a degree of arbitrariness, a preference for those with institutional clout, a bias toward name-brand institutions and the attributes of a popularity contest.

This ranking certainly carries the hallmarks of objectivity. Among the variables the listing takes into account are citations on Google Scholar and in syllabi, and mentions in newspapers, on Twitter and in the educational press, along with points for books and references in the Congressional Record. Yet I was astonished by the number of names that rang no bells, while figures like Michael McPherson, the former Spencer Foundation and Macalester College president and a senior fellow in the Center on Education Data and Policy at the Urban Institute, and Colin Diver, the former Reed College president who is a leading authority on the impact of college rankings, are absent.

I mention all this to introduce my key point—that we need to do a better job of recognizing scholarship that should drive public policy. Here, I’d like to thank Thomas Carey, a leading driver of educational innovation for the Higher Education Quality Council of Ontario, the Los Angeles Community College District, the California State University Office of the Chancellor and the Carnegie Foundation for the Advancement of Teaching, for directing me to an important book chapter that policy makers ought to read.

Written by Michelle Lu Yin, the American Institutes for Research’s principal economist, “Rethinking Student Outcomes” offers a methodology that accreditors and others, including state higher education coordinating boards and public university systems, can use to compare actual and predicted graduation and retention rates at universities.

No one wants to compare apples and oranges. Some institutions enroll students with higher needs: more low-income, first-generation undergraduates who received an uneven high school education. Some institutions have more students who historically have much lower completion levels: more older students, more male students, more transfer students, more part-time students and more students from underrepresented backgrounds, especially from Indigenous communities.

But public and institutional policies and priorities also make a big difference. It turns out that expenditures on instruction, academic support and administration vary widely, and they have a sizable impact on graduation rates. Note, for example, that cost of attendance and instructional spending are, on average, less than half as much at a comprehensive university as at a public research university, while the proportion of Pell Grant recipients is about 50 percent at comprehensives, compared to 35 percent at their research-oriented counterparts. Even among public comprehensives, spending on instruction and support differs significantly.

When I was at the University of Texas system, it was widely recognized that UT campuses with similar demographics had radically different retention and graduation rates. Nor could the differences be attributed to location or different recruitment or economic markets. But in the absence of a rigorous, valid, reliable methodology, it was hard to hold institutional leadership to account. Yin’s book chapter spells out that methodology.

To understand which universities exceed expectations and which lag behind, Yin created a formula that compares expected and actual graduation and retention rates given the institution’s characteristics. Interestingly, Yin’s risk-adjusted predictive model does not incorporate admission rates or standardized test scores.
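The chapter’s exact specification isn’t reproduced here, but as a rough illustration, a risk-adjusted prediction of this kind can be built as an ordinary least-squares regression of actual rates on institutional characteristics, with the residual (actual minus predicted) serving as the performance signal. The sketch below assumes that general approach; the feature set and all figures are hypothetical stand-ins, not Yin’s actual variables or data:

```python
# Minimal sketch of a risk-adjusted predictive model in the spirit of Yin's
# approach: regress actual graduation rates on institutional characteristics,
# then treat the residual (actual minus predicted) as over-/under-performance.
# Features and figures are hypothetical, not Yin's specification.
import numpy as np

# Illustrative features per institution: Pell share, part-time share,
# instructional + support spending per student (in $000s).
X = np.array([
    [0.80, 0.35, 8.0],
    [0.22, 0.10, 19.5],
    [0.55, 0.25, 11.0],
    [0.35, 0.15, 16.0],
    [0.60, 0.30, 9.5],
    [0.28, 0.12, 18.0],
    [0.45, 0.20, 13.5],
    [0.50, 0.22, 12.5],
])
actual = np.array([0.41, 0.29, 0.44, 0.55, 0.38, 0.58, 0.47, 0.45])

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)

predicted = A @ coef
residuals = actual - predicted  # positive = exceeds expectations

for i, gap in enumerate(residuals):
    print(f"institution {i}: predicted {predicted[i]:.2f}, "
          f"actual {actual[i]:.2f}, gap {gap:+.3f}")
```

The same residual logic carries over to retention rates; only the outcome variable changes.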

At the time of her research, the predicted graduation rate for first-time, full-time students at public comprehensives was 42 percent, while the actual rate was about 7 percent lower (39 percent), suggesting that these institutions, as a whole, had significant room for improvement given their student bodies and resources. Public research campuses, by contrast, exceeded their predicted rate by almost 9 percent (an actual 58.3 percent versus a predicted 53.6 percent).
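To keep the arithmetic explicit, these gaps are relative differences between actual and predicted rates, not percentage-point differences. A quick check using the figures just cited:

```python
# Relative gap between actual and predicted graduation rates,
# using the sector-level figures cited above.
def relative_gap(actual, predicted):
    return (actual - predicted) / predicted

print(f"{relative_gap(39.0, 42.0):+.1%}")   # comprehensives: -7.1%
print(f"{relative_gap(58.3, 53.6):+.1%}")   # research campuses: +8.8%
```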

Some comprehensives do exceedingly well, according to Yin’s model. Albany State—where over 80 percent of the undergraduates receive Pell Grants—had a six-year graduation rate of 41 percent, despite a predicted rate of 24 percent. In stark contrast, Texas A&M Galveston, with just 22 percent Pell Grant students, had a 29 percent graduation rate, versus a predicted rate of 51 percent.

Kentucky State University had an actual retention rate for full-time students of 50 percent, versus a predicted rate of 65 percent, while Cal State San Bernardino was the inverse, with an actual retention rate of 89 percent versus a predicted rate of 72 percent.
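Putting the four campuses just named side by side makes the size and direction of each gap explicit; the figures come straight from the examples above, expressed in percentage points:

```python
# Percentage-point gaps (actual minus predicted) for the campuses named above.
campuses = {
    "Albany State (six-year graduation)": (41, 24),
    "Texas A&M Galveston (six-year graduation)": (29, 51),
    "Kentucky State (full-time retention)": (50, 65),
    "Cal State San Bernardino (full-time retention)": (89, 72),
}
for name, (actual, predicted) in campuses.items():
    print(f"{name}: {actual - predicted:+d} points")
```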

As Yin acknowledges, poorer-performing institutions may still add value. But these campuses need to demonstrate their value added. More than that, however, these campuses have much to learn from their more successful counterparts.

Here are my two takeaways. One, it is indeed possible to create models that can predict graduation rates drawing upon demographic data and instructional and support spending. And two, underperforming institutions need to be held to account.

Too often, the current discourse treats “accountability” as a four-letter word: as a way to shame and embarrass individuals, groups or institutions that suffer largely as a result of externally imposed inequities. I agree that we mustn’t do that. But the real shame is that accreditors, faculty and other stakeholders fail to insist that underperforming schools function at least as well as their institutional peers. I consider that the real “bigotry of low expectations.”

Steven Mintz is professor of history at the University of Texas at Austin.

Source: www.insidehighered.com