
There have been several points during this era of AI availability in education where I’ve been genuinely shocked that something that seems to me to be clearly out of bounds or incredibly rash is viewed by others as quite workable, or even desirable.

One of these is so-called AI peer review. Granted, academic research is not actually my thing, but I was under the impression that the goal of research and peer review is to deploy the reasoned judgment of subject matter experts in adjudicating whether or not a proposed new contribution is worthy of being heard and disseminated.

The key words there are “reasoned judgment,” something a large language model may be able to simulate but cannot actually do. I am aware the system of academic peer review has become strained to breaking for all kinds of reasons, but I cannot fathom how taking a system that’s predicated on reasoned judgment and outsourcing it to a simulation is acceptable, and yet I am aware some people believe this is a solution to the peer-review bottleneck.

Another no-go in my book that is being pursued with some measure of enthusiasm by others is outsourcing grading and response to student writing to generative AI. I do not know how to ask students to write something that is not going to be read, because I think even the most enthusiastic AI folks will admit that large language models do not read or communicate with intention the way humans do. It’s simply a betrayal of the student-instructor compact.

I had another moment of pause while reading a recent New York Times feature on OpenAI’s push onto college campuses, featuring the California State University system’s partnership, which will make ChatGPT available to its 460,000 students in pursuit of “the nation’s first and largest A.I.-empowered university system.”

I’ll tell you what gave me pause. For the last, what … 18 months … we’ve been receiving testimonies from many faculty across many disciplines declaring that ChatGPT (and its cousins) are essentially injecting poison into the classroom dynamics around learning, and here is one of the largest university systems in the country saying, “Let’s make sure every student gets a nice healthy dose of the stuff.”

I can testify firsthand from the talks and faculty development workshops I’ve been giving around preserving the experience of writing to communicate and learn that this worry is very real. While the people I’ve been interacting with are engaged and adaptable, and many of them are actively exploring how generative AI could aid their students in their learning, I have yet to meet the person who thinks they have it all figured out.

While I try not to be judgmental about these things, I can’t help but read what’s being described in that Times story and think, “That’s nuts.” This is why I’m thankful for reporting like what appears in the Times, because it gives me a chance to better understand the mindset of people who see the world so differently from me.

While there are several examples of faculty who make use of generative AI tools in their courses and one example of a student who uses ChatGPT as a study aid, the primary voice in the article is Leah Belsky, OpenAI’s vice president of education.

Formerly at Coursera, an early entrant that promised and failed to revolutionize education, Belsky is charged with creating “AI native universities.” How you feel about these initiatives may depend on how you reflexively respond to that phrase. My response is some mix of “ugh” and “yikes.”

One of the drier paragraphs in the entire article struck me as the most important thing we should be considering about these initiatives:

“OpenAI’s push to A.I.-ify college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits, and possible side effects, are not yet established.”

A national experiment on millions of students. I don’t know—to me, that sounds risky or reckless or heedless. I can’t quite decide which is the best descriptor.

Belsky says OpenAI is starting to look into these issues. At a conference late last year she remarked, “The challenge is, how do you actually identify what are the use cases for A.I. in the university that are most impactful? And then how do you replicate those best practices across the ecosystem?”

Good questions. Thank goodness we’re simultaneously experimenting on millions of students. This is a very good way to generate reliable data.

A large language model would have a hard time detecting the sarcasm in that previous sentence, but I hope it’s clear to my human readers.

For the privilege of making its 460,000 students available to OpenAI, the Cal State system is paying $17 million over 18 months. In the grand scheme of university budgets this does not sound like much, but for a perpetually strapped system like Cal State, every dollar counts. Martha Lincoln, an anthropology professor at San Francisco State, told a SiliconValley.com reporter in reaction to the announcement, “This is so deeply distressing. It’s absolutely shocking. For a while we didn’t even have regular paper in our copier: It was all three-hole punch. We don’t have enough counselors on our campus. When students have mental health concerns, they’re waitlisted for weeks if not months.”

All this is happening against a backdrop of AI companies that have overtly declared their goal is to subsume the vast majority of economic activity to their technology. Economic activity means jobs and labor, and here is a system that is supposed to empower people heading into the workforce, hastening their own obviation by partnering with the company that aims to subsume those jobs to its technology.

Personally, I think OpenAI CEO Sam Altman is well over his skis with AI hype, but he isn’t shy about his intentions.

Ohio State apparently looked at Cal State and said, “Hold my beer,” declaring that starting in the fall, using AI in class will be a requirement. Ravi V. Bellamkonda, executive vice president and provost, announced, “Through AI Fluency, Ohio State students will be ‘bilingual’—fluent in both their major field of study and the application of AI in that area.”

There are two important questions that go begging in this statement:

  1. Is working with AI in a field of study equivalent to learning a new language? And,
  2. If it is like a new language, what does fluency look like?

We don’t have answers to either of these questions. We don’t even know if they’re the right questions to ask because we don’t know if treating AI competency through the lens of fluency even makes sense!

Normally, I find the relatively slow pace of change in how higher ed institutions shift orientations frustrating, but in this case, it is the sudden lurch by some schools toward an AI-inevitable future that is baffling. It appears to be a by-product of swallowing AI hype whole. This is Ohio State president Ted Carter: “Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be impacted in some way by AI.”

Where is the evidence of this? For sure, we’ve seen signs of some impacts, particularly around entry-level jobs, but we also may be looking at a scenario where AI is, in the words of Arvind Narayanan and Sayash Kapoor (co-authors of AI Snake Oil), “normal technology,” where the diffusion of AI through industry and society is going to follow a similar timeline to other powerful general-purpose technologies like electricity and the internet.

I am a strong believer that we must be AI-aware while carefully and purposefully experimenting with this technology, keeping student learning at the center of the equation. The overwhelming preponderance of evidence rooted in both present and past experience suggests that if (or when) generative AI has a demonstrable positive effect on student learning, this positive effect will be apparent and unambiguous. If (or when) this happens, access to the benefit will not be scarce and institutions can adjust accordingly.

This leap into a future that does not yet exist, and of which we have only a limited idea, is beyond shortsighted; it has the potential to unnecessarily harm students while also delaying the adjustments that will ultimately be necessary for higher ed institutions to survive.

Partnering with or funneling customers to companies that aim to obviate your existence and exploit your work to develop their applications while paying them for the privilege—I know I said I was trying to not be too judgmental—but, honestly, that’s nuts.
