This Week in Student Success

Pulling students out of the river isn’t a strategy

Was this forwarded to you by a friend? Sign up for the On Student Success newsletter and get your own copy of the news that matters sent to your inbox every week.

It has been a hellish week of allergies, exhaustion, and frustration with datasets, balanced by a birthday celebration in the OSS household. But apart from that, what is the news in student success?

“There comes a point where we need to stop just pulling people out of the river.

We need to go upstream and find out why they’re falling in.”
― Archbishop Desmond Tutu

This week’s readings all point to a familiar pattern: we keep building better ways to pull students out of the river, but we are far less willing to redesign the system that keeps pushing them in.

It’s a design problem, again

The Trellis Strategies Some College, No Credential (SCNC) Survey Report describes a landscape familiar to many of us. There are 43.1 million people with some college but no credential, including 37.6 million working-age adults. Trellis’s large survey of former undergraduate students produces a set of insights that are both encouraging and frustrating.

On the encouraging side, a majority believe that higher education is a good investment, and nearly 73% say that returning to earn a credential would improve their career and earning potential. Sixty-three percent of respondents report intending to return to finish their studies.

On the frustrating side, the report presents a familiar list of reasons why students stop out: family responsibilities, finances, health issues, and employment demands.

Chart showing students’ reasons for leaving their former institution, by two-year and four-year institution

The report reinforces something we have seen repeatedly: students are not leaving because they cannot succeed academically. They are leaving because higher education is structured around assumptions of time and financial stability that no longer hold.

I found some of the differences between two- and four-year schools striking. On one level, the greater reported financial stress at four-year institutions makes some sense, given the often lower cost of two-year schools. Differences in the role of campus life also make sense, since more students at two-year institutions are commuters and may have less time for campus engagement. But the divergence in other areas—particularly the much higher share of students at four-year institutions citing health and academic reasons for leaving—is cause for concern.

But I was mostly frustrated by the implicit (and sometimes explicit) assumption that institutions can solve these issues simply by identifying which barriers affect their students and layering on more services. That assumption becomes even clearer when you look at the toolkit.

The recommended actions are largely reactive. For example, the report states that 71% of SCNC students never spoke to anyone before stopping out. The toolkit responds with recommendations around referral channels and scripts for advisors. Similarly, it recommends instituting exit surveys, tracking students who leave, and monitoring payment plans. These are symptom-detection mechanisms, not system redesign. Institutions are not redesigning the system to accommodate student reality; they are building increasingly complex processes to manage the consequences of a system that does not fit that reality.

This is less a critique of Trellis than a reflection of how the sector as a whole approaches student success. It is less a student problem than a system design problem. The result is a system that becomes increasingly complex without becoming more effective.

Managing around constraints

Reading the Trellis report, I kept thinking about this video of a man going to extraordinary lengths to park in an impossibly narrow garage. The garage was never designed for the car, so the driver invents an elaborate workaround, much as students do with institutions that were not designed around their lives.

Students are optimizing for reality

That mismatch between system design and student reality doesn’t just show up in retention. It may also explain another trend. The National Student Clearinghouse Research Center recently released its Undergraduate Degree Earners Report. Some coverage focused on the fact that, for the first time, students ages 18–20 comprise the largest share of first-time associate degree earners. But what caught my eye was the increase in students earning certificates rather than associate (two-year) or bachelor’s degrees.

Certificate completers are up 5.7% year over year, compared to more modest gains for bachelor’s degrees (2.8%) and associate degrees (2.6%).

Chart showing changes in the number of completers by award type

This year’s increase in certificate completers is less dramatic than last year’s—5.7% compared to 16.0%. But over the two-year period, certificates have grown by more than 21%, while bachelor’s and associate degrees have seen much more modest growth in the 2%–4% range. This may be less about increased demand for certificates and more about students seeking faster, more flexible pathways that better fit their lives.

Same problem, same playbook

That same pattern shows up in institutional responses as well. A news article in The Philadelphia Inquirer paints a picture of significant student success challenges at Temple. Based on an internal university report obtained by the newspaper (and not shared with readers), the university is experiencing notable declines in retention—not only from freshman to sophomore year, but also from fall to spring. The consequences are clear in terms of students not staying enrolled, but the report also frames the issue as a substantial loss of revenue.

Temple University is retaining fewer students from freshman to sophomore year than it did a decade ago, and the rate at which students progress from fall to spring semester has declined, too. The retention issue, a problem for many schools nationwide, comes as the North Philadelphia-based university faces increased budget pressures. Temple has lost 27% of its U.S. enrollment over the last eight years, amounting to an average of more than $200 million in lost revenue annually, according to an internal university report obtained by The Inquirer.

[snip]

A decade ago, 90% of Temple freshmen returned for their sophomore year. By 2024, that figure declined to 82%, and early projections show it likely will slide below 80% this fall, according to the report.

The news report does not give us much information about why the drop in retention happened. Officials attribute the enrollment decline to financial pressure as well as some academic challenges. In seeking to address these challenges, Temple is following a tried-and-true playbook, according to President John Fry.

Temple is using technology tools to identify students missing classes and exhibiting other warning signs, he said, and this month hired a new vice provost for undergraduate education from Purdue University in part because of that school's track record on improving student success. A new orientation and first-year support system will launch in the fall, he said.


And longer term, an audit underway by the National Institute for Student Success is expected to yield new recommendations, from recruitment to career planning.

"I'm sure there are a lot of things that we're doing which are not state of the art, which we will change as a result," Fry said.

Temple also plans to continue increasing its financial aid budget, likely to more than $200 million this year. Fry said a major fundraising campaign likely to begin its public phase within the next year will focus in part on funding financial aid.

The response is familiar: more tools, more tracking, more interventions. But note what is missing—any indication that the underlying structure might be part of the problem.

The story also surfaced something else that is easy to overlook but deeply related. The report included a study by NACUBO showing that the majority of graduates are concentrated in a small number of programs (the Pareto principle at work). At first glance, this might appear to have little to do with the retention challenge.

A Temple-commissioned study by the National Association of College and University Business Officers found that the vast majority of undergraduate students who get degrees are concentrated in about 20% of Temple's 197 programs.

Forty-one programs enroll more than 75% of students who get degrees, the NACUBO study found. Meanwhile, 120 programs serve about 10%. Thirty-four programs had no students graduating.

This raises an uncomfortable question: how much of the curriculum actually supports student pathways, and how much exists because of internal structures and incentives? Again, the response is to manage symptoms rather than redesign the system.

It’s a design problem—assessment edition

This question about what we are actually designed to deliver shows up in another domain as well: assessment. There has been no shortage of bad takes on higher ed this week, but there was at least one strong piece from Mark Bassett, a faculty member at Charles Sturt University (which I always think of as the Johns Hopkins of Australia—always one letter away from predictability).

Bassett writes about the impact of AI on assessment. Many of us have been stumbling toward the conclusions he reaches for some time, but he frames the issue in an especially cogent way, using Stephen Covey’s circles—control, influence, and concern.

I am going to quote from the piece more than I usually would, because he frames the issue so well. He begins by arguing that GenAI has thrown the question of assessment into sharp relief, laying bare a set of assumptions and weaknesses that were always there. The emphases are mine.

What makes the GenAI 'moment' unusual is that it has forced a reckoning with something that was already true long before large language models existed. As I and many others before me have argued, unsupervised written assessment, in isolation, has never provided a defensible evidentiary basis for student learning. GenAI has not created this problem, although it has removed the conditions that allowed institutions and academic staff to avoid confronting it.

But GenAI also raises questions about professional identity that are embedded in those practices and the role assessment plays in higher education.

The identity dimension of the disruption rarely gets named directly, and it makes the realisation of lost control considerably more fraught than it might otherwise be.

Academic identity is partly constructed around professional expertise in assessment. The ability to design tasks that genuinely distinguish understanding from its absence, to set the conditions under which learning can be demonstrated and evaluated, is not peripheral to what it means to be an academic. For many [faculty], it is central to it. [snip]


There is a second dimension to this, one that cuts more personally for those whose academic identity is bound up in the craft of writing itself. Many have spent years, often decades, developing a disciplinary voice, learning to construct arguments with precision and authority, to handle evidence with care, to produce prose that reflects genuine expertise. That craft is not incidental. For many it is the primary medium through which intellectual identity is expressed and recognised by peers.

All faculty face these challenges in assessment from GenAI, but how they respond depends on whether they operate within the circle of concern or the circle of control. Those in the concern camp see the impact of GenAI as something whose root causes they cannot influence. They respond by emphasizing detection and punishment.

The institutions that are struggling most with GenAI are those that have treated the whole problem as sitting in the concern circle and have responded accordingly. [snip]'AI detection' tools, tightened academic integrity procedures, process-tracking software, watermarking proposals, and [insert any other technical 'solution']. These responses do not address what assessment experts continue to flag as the key issue: validity. Unsupervised written work, regardless of who produced it, has a limited evidentiary relationship to learning. Knowing who wrote it does not change that relationship.

What these surveillance-led responses do achieve is to move responsibility from institutional assessment design onto individual students and [faculty]. [snip] The result is an escalation of procedural burden without a corresponding increase in confidence about what student work actually evidences.

For faculty or institutions in the control camp, GenAI is seen as something whose root causes and impacts can be understood and acted upon.

[Faculty] who experience the current GenAI 'moment' as a clarification rather than a loss tend to share a disposition. They have processed, rather than avoided, the professional cost [snip]. They have accepted that they cannot control what happens outside a supervised environment, and they have stopped treating that as a failure of their professional authority. They have accepted that AI-produced text can rival that of any expert, and they have stopped treating that as a verdict on the value of their expertise. They have moved their energy to the territory where genuine agency sits.

In the control circle, that territory is assessment design at the unit level. The design of what is being asked, how it is structured, and what it treats as evidence of learning are decisions an individual can make and implement without institutional permission.

It is not a perfect piece. It does not address how to solve these problems. But as an analysis, it surfaces issues I haven’t seen framed this clearly before, and I believe it will ultimately help us address one of the core challenges facing higher ed today.

What GenAI exposes in assessment is the same problem we see everywhere else: we have built systems that assume stability, and we are trying to patch them in a world where that assumption no longer holds.

I started with a quote from Desmond Tutu, whom I deeply admired. I have a funny story about the time I got to meet him. For the cost of a fake beer or a real coffee, I will share it with you the next time we meet.

Musical coda

To the firecracker, on the occasion of her birthday.

If you know someone who spends a lot of time pulling students out of the river, feel free to send this newsletter along to them. Or better yet, someone who might be willing to walk upstream.

Thanks for being a subscriber.