English professors weigh in on the 'death of the English major' and the future of the humanities

The English major is dead. Or at least that has become an increasingly common perception since the advent of artificial intelligence and the release of OpenAI’s chatbot, ChatGPT.

Much of this pessimism could be attributed to the sensationalism surrounding AI. Some AI advocates exalt it as a revolutionary force that is already helping us overcome our most challenging problems. Some skeptics denounce it as a harbinger of doom. Many who actually use the technology, however, disagree with these views, noting that the reality is much more nuanced.

Current AI applications can perform impressive, but limited, tasks. They rely on machine learning models based on training data and human input.

Ted Underwood, professor of English and information sciences, says that distrust of this technology may stem from a familiar source.

“A lot of the conversation about machine learning is shaped by science fiction, fantasies and fears which are — given the nature of fiction — fantasies and fears about independent agents,” he said, citing sci-fi antagonists and Frankenstein’s monster as examples.

“That’s not what we’re actually dealing with,” Underwood said. “These are models of culture, and, in a way, Barthes and Foucault are really a bit more relevant to what they are capable of than ‘2001: A Space Odyssey.’”

Melissa Littlefield, a professor of English who specializes in literature, science, and technology, notes that skepticism toward a new technology is not a novel phenomenon.

She compared the impulse to ban generative AI like chatbots in the classroom to initial reactions toward now commonplace tools, like grammar and spell-checking in word processors, which were also considered academically dishonest by some at one point.

John Gallagher, a professor of English who researches technical writing and machine learning, conducted an experiment asking participants to judge two scenarios involving academic integrity. The only difference between them was whether someone received help from another person or from AI. The study found that, on average, people saw using AI as more academically dishonest, even though the actions were the same in principle.

While certain critics may believe that people will become dependent on these technologies at the expense of their own abilities to think deeply and write well, Littlefield acknowledged that some might also fear being replaced altogether.

“I think certain people in certain professions are worried that jobs are going to be taken away,” she said. “That’s what happened with industrialization, and that's what happens when we get digital technologies.”

For her, AI is here to stay — something we need to adapt to, as we have done in the past. Machine learning tools that have been used for some time in the “hard sciences” have now made their way into the humanities as well. Since they can efficiently analyze large volumes of data, it makes sense that they would be used with digital texts and electronic media.

The digital humanities, as this field is called, can be controversial. Even the definition of the term “digital humanities” is debated. Implementing these applications of AI can be practically, legally, and ethically challenging. Some scholars question the academic rigor and merit of using these methods. However, digital humanities scholars in the Department of English recognize it as an opportunity to do things that would otherwise be impossible.

“Some of the first and most influential work in the digital humanities said, ‘You want to understand what literature was like in the 1890s? Well, why only read 12 books? Why not read 20,000 books? And because you can’t do that with your own eyes, why don’t you just train a computational model to analyze that text?’” said Tess McNulty, professor of English and digital humanities researcher.

Artificial intelligence may seem like the antithesis of writing and literature to some, but many in the digital humanities consider this a false dichotomy. They acknowledge that the lines between the humanities and tech are becoming blurred and endorse a paradigm shift so that scholars and students can work across disciplines to address the important issues facing us from a humanistic perspective. As AI becomes incorporated into all aspects of our lives, the skills taught in the humanities become more important.

Academic institutions have always recognized the value of what is taught in English and creative writing courses. That recognition, in part, explains why the Department of English provides one of the largest general education requirement courses at the University of Illinois. Few students will complete a degree without taking a class offered by the department.

Employers also consistently rate communication and critical thinking as two of the most desirable skills in employees. According to the university’s most recent data, 94% of English and Creative Writing majors secured employment, acceptance to graduate school, or volunteer work within six months of graduating. Still, the myth of obsolescence persists, and students are encouraged to choose STEM over the humanities.

“I would argue that moving away from the humanities because of AI is exactly the wrong move,” said Ryan Cordell, professor of English and information sciences and director of Skeuomorph Press.

“When a language model produces text, it's not producing text based on some perfect idea of what English is. It's based on the data that's been provided to the model, and it's subject to all of the same oversights and biases and flaws that human knowledge always is. And so if anything, I think the skills that we teach in the humanities — skills for critical reading, evaluation, understanding context — are more urgent.

“These are not technological problems; the technology is being developed. The problems that we really face are cultural problems and social problems that humanities students are going to be really well-equipped to help us deal with,” he continued.

For some, the teaching of English needs to evolve to keep up with a rapidly changing society. The AI tools that have been generally limited to graduate studies need to gradually filter down and become part of the undergraduate core curriculum, according to Underwood.

Students need to learn the principles of how these tools operate and how to interact with them, he says, because in the 2030s, the tools we know today will likely be replaced by something else. Littlefield, who edits Configurations: A Journal of Literature, Science and Technology, shares that view. The journal has started receiving many papers focusing on AI and chatbots, but she believes those research topics will quickly become dated.

“In five or 10 years, a lot of the questions we're asking now are going to be in the past tense,” she added, suggesting that this technology will be accepted as a standard rather than a novelty. Many students already express interest in studying digital media, for which AI can be a valuable asset.

“The new content we're seeing is just a new cultural form, like any form before it that we study in the humanities, from television to film to the novel,” Littlefield said.

“If we're not bringing a humanities perspective to it, I think we are doing a disservice to our students and what they want to think about.”

McNulty said that it will be crucial for students to develop both the technical skills needed to interact with these technologies and the critical, philosophical thinking that is a hallmark of the humanities.

“Even if we have a very traditional view of the fact that we study literature, at least some students in the English major have to understand digital textual generation and how it works,” she said.

“I'm all for that as long as the classic questions and approaches of the humanities remain in the forefront of the English major, and I think you can allow computation in without eroding those questions.”

Editor's note: This article was originally published in the Fall '24 Department of English newsletter.