Unthinking Robots for the Man?
AI, of course, stands for artificial intelligence, and I’ve played with it here at Bracing Views. I’ve used ChatGPT and DeepSeek to write critical essays on the military-industrial complex, critiquing the results in my posts. Overall, I was impressed—and glad that I no longer have to wade through student essays completed outside of class.
I stopped teaching eleven years ago, before AI was available. Of course, the Internet was, and I did have students who cut and pasted from sources online. Usually, I could tell; I would do a search using a “student” passage that just sounded a bit too good, and often whole paragraphs would come up that the student had lazily cut and pasted into an assignment as their own work. Those were easy papers to grade. F!
Today’s AI programs make this more difficult. If I were teaching today, I’d assign fewer essays outside of class, and I’d probably bow to reality and allow students to use AI to help clarify their arguments.
The challenge remains: In this new world of AI, how do you evaluate student performance in a humanities course where research and writing skills are important, along with some command of the facts and an ability to think critically about them?
I’d likely employ a mix of the old and new. Standard exams—the usual multiple choice, short answer, written essay, all completed in the classroom—still have a role. But I’d incorporate AI too, especially for class discussion.
Consider, for example, debating the merits (and demerits) of the military-industrial complex (MIC). AI can easily write short essays both for and against (or even an essay that examines the pros and cons of the MIC). Those essays could then be used in class to tease out the complexities of the MIC, and how evidence can be used (manipulated?) to tell vastly different stories.
Another example: Should atomic bombs have been used at Hiroshima and Nagasaki? Again, AI can easily write essays in favor, or against, or “neutral” (pros and cons again). Those short essays could then form the basis for class discussion and further debate.
In a way, AI is a selective manifestation of evidence that is already out there. And there’s the rub. Who’s doing the selecting? Who’s writing the algorithms? Which evidence is being favored and which is being suppressed or disregarded?
AI, as I understand it, is built on training data and algorithms that favor certain kinds of evidence over others. Generally speaking, AI favors “official” sources, e.g., government documents, mainstream media reporting, credentialed scholarly think tanks, and so on.
Alternatively, it’s possible AI could gather information from less-than-reputable sources. Again, what algorithms are being used? What are the agendas of those behind the AI in question?
To students, AI is something of a black box. It spits out answers without a lot of sourcing (unless you specifically ask for it). Students in a hurry may not care; they just want answers. But as Tom Cruise demands in A Few Good Men: “I want the truth.” What happens when an AI Colonel Jessup decides, “You can’t handle the truth,” and feeds us convenient half-truths and propaganda? Will students even care? Will they have the skills to recognize they’re being misled? Or that they’re not getting the full story?
That’s what I worry about: students who simply accept what AI has to say. Not that they’d learn nothing, but that they’d learn exactly what they were programmed to learn. Strangely, in this scenario, the students themselves are reduced to automatons. And I don’t think most students want to be unthinking robots for the Man.
Or do they?
Postscript: Over at his new Substack site, Mike Neiberg is tackling AI and the humanities. Check it out at michaelneiberg.substack.com.



