Go High or Go Low: Surviving AI in the College Classroom


It should be clear by now to people outside Higher Education in the US that Generative AI tools are becoming a problem, though perhaps not in the sense that you might think.

Are all college students using Generative AI tools like Gemini or Perplexity to do their work for them? No.

Are some college students using Generative AI on the regular to "cheat" on their schoolwork? Absolutely.

Are they doing it all the time on all their assignments? No (or at least the data we have available doesn't seem to support that).

Is the potential usage of AI by any student on any assignment creating problems? Absolutely. As Seva Gunitsky puts it, "At stake here is not just cheating but the total erosion of trust inside the university."

Trust is the problem. And it's a battle we're all losing.

"Nobody trusts anybody now...and we're all very tired."

We've been overwhelmed with anecdata lately about how AI is destroying higher education as we know it. In the absence of real data shared by universities, it has the feel of one of Higher Ed's recurring Moral Panics (though I won't deny that we're beginning to see problems emerge as adoption scales and OpenAI and Google push their tools on students in a way that looks a lot like promoting cheating). The buzz around this New York magazine piece was deafening for a while.

Everyone Is Cheating Their Way Through College
ChatGPT has unraveled the entire academic project.

The Fear is palpable in this one.

Why is cheating likely to occur with these tools? It isn't the fault of the AI. I mean, you had students using Chegg before that, hiring under-employed grad students to write papers for them, Cliff's Notes, printing test answers on coke bottle labels, etc. The current AI Trust Dilemma is really just the logical endpoint of a process that's been going on for 40 years (or longer), ever since we decided to turn higher ed into a gatekeeper of credentials for a job rather than helping students understand the purpose and value of becoming educated.

[Look, defunding higher education and burdening students with excessive debt didn't help. I mean, Neoliberalism, yeah? Like I said, it's the result of a system, not an accident.]

"At stake here is not just cheating but the total erosion of trust inside the university."

However, we still have the issue of students using AI to do the work for them. Generally, I recommend that clients do nothing. If an assignment is well-designed and the faculty member is grading closely, you can just give the AI-generated student submission the grade it deserves, which is generally not that great. That works in most cases, but not all.

But this week I've seen two recommended approaches to the AI Cheating Dilemma that I think are worth pointing out, mostly because they take two wildly different directions: Going High and Going Low. What do I mean by this?

Going High

The first approach, which I'll call Going High, consists of moving your assignments up Bloom's Taxonomy to the Synthesis (especially) and Evaluation levels, where Generative AI has a much more difficult time producing anything close to excellent student work.

When I was Associate Director of the Center for the Enhancement of Learning and Teaching (a looooong time ago), we constantly worked to help faculty understand the importance of moving their student work up the ladder of Bloom's Taxonomy as a key way to engage students and reduce their incentive and ability to cheat. Problem-Based and Project-Based learning, simulations, etc. were excellent ways to engage students as active learners, as owners of their own education.

As it turns out, this is a great way to deal with Generative AI usage as well.

Teaching in the Age of AI
how I avoided a shitty semester

Two examples of Going High in response to student AI usage from a UofT political scientist.

Seva Gunitsky, a political scientist at the University of Toronto, recently shared two examples of moving up Bloom's Taxonomy as an approach to concerns about student AI usage.

  • For a seminar on the Soviet collapse, Seva had their students build a video game "in which the player takes on the role of Gorbachev in 1985 and tries to avoid the collapse of the USSR."
  • For a seminar on the Global Politics of Science Fiction, Seva structured the course around in-class creative writing workshops and peer review to produce creative works of writing that analyzed, evaluated and synthesized course concepts in ways that LLMs still find difficult.

Seva acknowledges that these approaches are hard to scale (their seminars were about 23 students each), and required the assistance of outside coding resources funded by the department. When it's time to teach the 500-student Intro course next year? Seva says, "It's Bluebook Time."

Which leads us to the next (and increasingly popular option): Going Low with handwritten assignments.

Going Low

If you can't (or won't) find a way to rethink your assessments to move higher up Bloom's Taxonomy, then the next most likely strategy is to Go Low by shedding technology and reverting to in-class handwritten assignments, likely using the old-school bluebook, much like Bill Adama saving the human race by keeping networked computer systems off the Galactica.

#BillAdamaWasRight

Back in May, the Wall Street Journal reported that bluebook sales were on the rise at a number of more prestigious universities, from UC Berkeley to Texas A&M. https://www.wsj.com/business/chatgpt-ai-cheating-college-blue-books-5e3014a6

And now Inside Higher Ed, normally one of the foremost cheerleaders for technology in higher education, tells us that Bluebooks may be our only hope. But there's a good reason for this, and it relates to the comment above about Trust being the real issue here when it comes to increasing student AI usage. As Tricia Bertram Gallant (The Opposite of Cheating: Teaching for Integrity in the Age of AI) tells us:

“We’re not just in the business of facilitating learning; we’re in the business of certifying learning. There has to be secure assessments to say to the world, ‘This is a person who has the knowledge and abilities we are promising they have.’”

Handwritten assignments would appear to have two very important advantages, according to IHE:

  • They prevent the use of AI to cheat on those assessments if they are done as well-structured in-class writing assignments, and
  • By preventing the use of AI to remove friction in the learning process, students appear to place higher value on the process of learning - and report fewer distractions as a bonus!

Amid AI Plagiarism, More Professors Turn to Handwritten Work
Five semesters after ChatGPT changed education forever, some professors are taking their classes back to the pre-internet era.

Going lower-tech would appear to work, at least for assignments further down Bloom's Taxonomy, though the article does feature some faculty exploring ways to do more advanced work through in-class handwritten assignments.

So What Do We Do Now?

Well, that's the question, isn't it?

I would recommend to clients that they do the following:

  1. In the short term, encourage faculty to switch to in-class, handwritten assignments for high-stakes assessment. #BillAdamaWasRight
  2. If you haven't already, engage your faculty in a public discourse around producing a consistent AI policy to guide faculty and students alike. Make the process and results as open, transparent, and publicly available as possible. You're building Trust here. (Oh, and you should really include students - and not just the Usual Suspects from student leadership - in this process. Trust me - you'll all reap the benefits of this.)
  3. DON'T just spray and pray when it comes to being an AI-First University (or whatever FOMO term of art is going around now). Students need to understand AI critically in terms of its existence as a system rooted in social, political, and economic contexts, beyond just "It's inevitable so we need to teach our students how to use it." AI Literacy is more than some workshops on How to Prompt.
  4. Work with your Teaching and Learning Center and give them the resources to work with faculty to transform their assignments and move them up Bloom's taxonomy.
  5. Provide College and Department resources to aid creativity in this work. Publicize faculty who are brave enough to take the lead, and perhaps offer a moratorium on student evaluations for those faculty as they struggle to adapt.

As always, I'd be happy to talk with leadership at your institution to practice good strategic foresight around AI so that you can avoid the worst of the mistakes and put your institution in a place to better serve its students, faculty and community as this technology and its impacts shake out.

For a deeper conversation, please reach out for a phone or video call!

Let's Talk!