This Week In Student Success

Fragmented support, authority, and accountability

Was this forwarded to you by a friend? Sign up for the On Student Success newsletter and get your own copy of the news that matters sent to your inbox every week.

There are so many stressful things going on in the world, and especially in higher education, that even the folks on my pre-recorded meditation app are starting to sound tired. But what is happening in student success?

Much of what I read this week points to the same uncomfortable conclusion: our biggest student success problems are not about data, tools, or even resources, but about organizational capacity and institutional will.

Online students as afterthoughts

A white paper based on research by Don Hossler, J. T. Allen, and Luke Schultheis sheds some sobering light on the kinds of support provided to students studying online. It draws on a larger research study, which I will cover when it is released. The report is sponsored by MyFootpath, a vendor that provides enrollment and re-enrollment support, so its framing should be read with that in mind. Even so, it is solid research and worth reading, as it offers much-needed insight into the state of online student success, albeit based on a small, self-reported sample.

The study is exploratory in nature, relying on interviews with administrators at universities that met the following criteria:

[Image: the sample comprised 24 institutions with online enrollment below 33% and 70% retention.]

These thresholds meant that fully online institutions were excluded. At the same time, the institutions that were included represent a wide range of experience with online learning, from Stony Brook University and Illinois State University, with just 1% and 2% of students studying fully online, to online heavyweight Oregon State University, where 35% study fully at a distance (yes, that is more than 33%). The findings should therefore be read in light of this variation, since these institutions are at very different stages of online development.

Even with this caveat, the picture that emerges is not encouraging. In their interviews, the authors found that support for online student success is, for the most part, haphazard.

  • 83% of institutions reported having no well-defined organizational structure for online student success.

  • Institutions employed a range of organizational models, from more centralized approaches to highly distributed arrangements in which responsibility was split across central units and academic departments. Tracking of student success was far more systematic in the centralized models.

  • Only 42% of administrators believed their university had a strong institutional focus on online retention and graduation.

  • 69% did not regularly track online student retention, and staff often lacked independent access to these data.

Taken together, these findings point to a deeper problem: many institutions have expanded online learning faster than they have built the organizational capacity to support it. Online students generate enrollment and revenue, but not always institutional priority.

Early alerts, late responses

The organizational failures described in the online student study are not unique. A similar pattern appears in how institutions use—or fail to use—early alert systems. I have written before about the way higher education tends to be far more interested in collecting data than in acting on it. A recent post persuaded me that this is not just a US phenomenon.

In the UK, the higher education regulator, the Office for Students (OfS), has for several years required universities to meet benchmarks of at least 80% continuation (or persistence, as we would call it here—potato, potahto) and 75% completion for first-time degree students. In principle, such mandates should encourage widespread and systematic use of early alert systems that flag students at risk. Writing on Wonkhe, Carmen Miles argues that this has often not happened. Even where these systems are in place, universities frequently fail to act when students are flagged as at risk. She writes:

This implementation gap isn’t about technology or data quality. It’s an organisational challenge that exposes fundamental tensions between how universities are structured and what regulatory compliance now demands.

She argues that this is both a readiness and a governance issue:

The problem is organisational readiness: who has authority to act on probabilistic signals? What level of certainty justifies intervention? Which protocols govern the response? Most institutions lack clear answers, leaving staff paralysed between the imperative to support students and uncertainty about their authority to act.

I think this gets at part of the problem, but not all of it. There are also important weaknesses in early alert systems themselves. Some are calibrated to be overly sensitive, triggering flags so frequently that staff begin to ignore them. At the same time, students, especially first-generation students, may interpret “at risk” warnings as confirmation that they do not belong at university, turning what is meant to be support into a self-fulfilling prophecy.

Governance matters. But so does system design. Over-sensitive alerts, weak calibration, and poor communication can turn well-intentioned tools into sources of anxiety rather than support.

Three guys from Boston

After all that grim news, a little levity. For reasons I do not entirely understand, Massachusetts accents seem to provoke disproportionate hilarity. Although I am not from Boston, my own accent appears to have much the same effect. I once had a delightful colleague who, every time she passed my office, would stick her head through my doorway, say the word “banana” in my accent, roar with laughter, and disappear.

And now, back to our regularly scheduled depressing programming

But dashboards don’t fix structures

I have long threatened to write a book about the graduate school experience titled The Pedagogy of the Depressed. For many people, especially in PhD programs, it is a long, grinding process in which classmates gradually fall by the wayside.

Despite this, graduate persistence and completion remain surprisingly understudied. Much of higher education’s attention is focused on undergraduate student success—which is rightly seen as critical—but this emphasis leaves a significant gap in our understanding of the challenges facing graduate students.

A new report by Jeffrey Denning and Lesley Turner from the PEER Center begins to fill that gap. Drawing on administrative data from Texas for cohorts entering between 2003–04 and 2012–13, the authors document several important patterns.

Most strikingly, they find that only 62% of graduate students complete a degree within six years, with wide variation across fields of study.

[Chart: graduate program completion rates by field of study.]

All of this makes the report a useful start. But serious analysis of graduate completion needs to move well beyond simple breakdowns by discipline.

Master’s degrees and PhDs are fundamentally different enterprises and should not be analyzed together. The same is true of professional programs such as JDs and MBAs, which are typically more structured, time-bound, and better supported than disciplinary master’s programs. Treating these programs as interchangeable obscures important differences in risk, resources, and student experience.

At the same time, much graduate study now takes place online, making it essential to disaggregate outcomes by modality. Many online students, and some in-person students, also study part time, which complicates standard completion timelines and calls for more flexible benchmarks.

Without this level of disaggregation, we risk reproducing in graduate education the same blind spots we have long tolerated in undergraduate and online programs. Any serious effort to improve outcomes is likely to falter if it rests on such incomplete analysis.

I also have reservations about the authors’ policy recommendations, which I will explore in more detail in a separate post. They lean heavily on the familiar call for “more data,” while largely sidestepping the harder question of how graduate education is structured and supported.

Even so, it is encouraging to see sustained attention to this neglected area. This is a solid study and an important starting point.

Taken together, this week’s readings paint a sobering picture. Whether we are talking about online learners, undergraduates flagged as “at risk,” or graduate students struggling to finish, the same pattern appears again and again: institutions have invested heavily in data and systems, but far less in the organizational capacity needed to act on what those systems reveal. Student success remains too often a technical project layered on top of structures that were never designed to support it.

Musical coda

In keeping with the overall grim tone of this post, I am sharing my favorite cover of what I believe is a truly depressing song, delivered in the way I think it should be delivered (sorry, The Boss).

On Student Success is free, and you are welcome to share it (with attribution).

Last week I encouraged enthusiastic forwarding.

This week, I would like to retract that advice. Please do not forward this. I prefer to imagine On Student Success as a small, slightly secret society for people who enjoy thinking too much about higher education.