Authored by Inma Naima Zanoguera
What will remain of the essence of the human as the unfolding (the inevitably, exponentially, vertiginously unfolding) world of Artificial Intelligence (henceforth referred to as AI, in case you’d been living in a bubble, kind of like the bubble I wish still existed for me, and find yourself unfamiliar with this two-letter acronym) gains more and more ground in more and more varied domains of human life? I start here because ultimately, for those of us resistant, skeptical, or flat-out scared of the rise of robot world, this question is at the core of our worries, the unseen portion of the iceberg.
And I start there also because the conversation about AI in education—the topic of this writing—is also, in the first instance, a conversation about the “human” in the humanities. Above the waterline of the AI conversation iceberg, we find reactionary measures to limit students’ use of AI all over the US. We find fatalistic headlines like “The College Essay Is Dead” and “The End of High-School English,” and while I have not read these essays because The Atlantic’s paywall is higher than my Graduate Assistant stipend, it’s obvious by now that there is a sector of educators for whom the disruptive power of ChatGPT is second only to that of ghostwriters in its capacity to shred the pedagogical process to pieces. (Or first, really, as even those unwilling or unable to pay somebody to write their essays can now afford not to write their essays, for free.)
Doomsday-inclined writings about AI in education abound, and so I will not repeat them here. I would perhaps take the other stance and defend AI as just another tool to be harnessed by educators to our and our students’ benefit, but I find this argument to be self-evidently true, at least for as long as “AI in the classroom” equates to students’ use of ChatGPT, as it does now. What I am interested in, as a person who has taught English 1010 for more semesters than I can count, are the questions that loom around—before and after—the pros-and-cons debate about AI as it stands in educational settings generally, and in writing in particular.
Grosso modo, what is considered worrisome about AI in education clusters around three epicenters, two of them very obvious and a third less often discussed: number one, an easy and effective tool for academic dishonesty; number two, the impoverishment of students’ writing skills (specifically coming out of high school); and number three, the fear of automation (that it may render our precious human skills obsolete) and the question of what makes us essentially human.
The first two, being more directly linked to the education system, prompt me to reflect on my experience teaching writing. As a graduate student in English, I’ve earned my funding packages by teaching writing to undergrads. In addition, for the last year and a half, I have worked in community colleges both as a Humanities Alliance Fellow and as a writing instructor at a CUNY writing center. Though I said I’m interested in questions—which implies a philosophical inclination—in truth my first and most pressing question is not philosophical but rhetorical: just what college essay do we think AI will kill? The five-paragraph essay? GOOD. I am not just being facetious; I am also genuinely concerned that we may pin the shortcomings of the education system (the true reason, I think, behind the seeming devolution of students’ writing) on our new friend AI.
Teaching writing has always been a difficult task in an educational environment that is not hermetically sealed from a society/marketplace where all the values we may attribute to the humanities are, to put it gracefully, unappreciated. Not to go all big-picture on you, but if we think in scales, we can draw connections between the decreasing quality of students’ writing and the fact that those who appear most successful to them—and most community college students in NYC are either first-gen, BIPOC, or working class, sometimes holding multiple jobs and counting the days till graduation so they can get a better job with better pay—those public figures modeling the “good life” (mostly men) thrive precisely on their lack of nuance, their utterly uncreative solutions to human problems, and generally their incompetence in the realms of empathy and compassion. Students, in my experience, are eager to work hard, to do the right thing and improve themselves in the process, but they’re also in a game of chess with an educational model with its idiosyncratic rules, rewards, strategies, and opportunity costs—and I am the first to tell them that the five-paragraph essay is a strategy like any other, that though it may not help them develop the most creative and innovative ideas, it may just get the job done, and that is alright.
I would love for my students—why, for myself, for all of us—to realize and be able to act upon the fact that creative and critical thinking are, in fact, what makes us human. That the well-thought-out college essay—and perhaps any one task premised, for its successful completion, upon the alchemical process of creating new connections in the articulation of new ideas—is valuable for its own sake should be common sense, and the writing of it a rewardable skill. The fact that no direct benefit seems to come from it in monetary terms, or in career advancement, is no measure of the incalculable loss of failing to cultivate these human(istic) skills.
What I mean is—I would love to live in a world where that is possible, which is to say, a world where such work is rewarded according to its value. This is not our world, nor our education system, and that is a much older debate and problem, one we are all too familiar with.
I offer my second question, then: is it possible that our fear of AI comes from realizing, anew, or from another angle, that our humanity is more fragile than we thought? That what makes us human cannot be taken for granted without consequence? In a world where we were always aware of the precious gift of human creative thought, ChatGPT would not be a threat, because most students would be too afraid to pass up the opportunity to develop an idea of their own and to discuss it with others (this, in its utopian iteration, is what college writing offers). If punitive systems of grading weren’t the order of the day, if standardized testing were just thrown into a pile of trash—its only rightful place—and if education funds weren’t cut as systematically and matter-of-factly as time passes, as they seem to be, then perhaps students’ writing would improve, ChatGPT as a threat to education would be a non-issue, and we could maybe start thinking about how to harness AI for our and our students’ benefit.