HACKED: Rice’s humanities scholars explore why AI cannot be left to technologists alone in new discussion series


Artificial intelligence has become a familiar presence in higher education. It drafts papers, summarizes readings, generates images, assists research and increasingly influences how students learn and how faculty teach. Much of the conversation has focused on efficiency and capability. Less visible is a parallel conversation about cost, consequence and human impact.

Technologies that move quickly often acquire legitimacy before they acquire scrutiny, said Nicole Waligora-Davis, associate dean of undergraduate programs and special projects. (Photos by Ali Raza Sial)

This spring, Rice University’s School of Humanities and Arts is stepping into that gap with “HACKED: The HUMAN OS — Humanities and Arts perspectives on the ethics and costs of AI and the future of teaching,” a four-part series of discussions that reframes AI as a cultural, ethical and historical problem, not simply a computational one.

“This series responds to two prevailing headwinds: namely the university’s shared ambition to be the definitive voice and training ground for responsible AI and the accelerated proliferation and increasingly pervasive role of AI across every sector of human existence — personal, professional, educational, cultural, political, etc.,” said Nicole Waligora-Davis, associate dean of undergraduate programs and special projects.

Technologies that move quickly often acquire legitimacy before they acquire scrutiny, Waligora-Davis said. They become normalized before their consequences are fully legible. They slip into daily practice long before institutions decide what values should govern their use.

“Whenever we are confronted with technologies — whether built or in development — that have the capacity to alter and even substitute for human interaction; supplant, and for some replace the importance of, human judgment; mimic what might stand for creativity and imagination; reproduce and disseminate biased, flawed, erroneous data that is packaged as credible and authoritative; reify or proliferate inequities and thereby deepen inequalities; and risk resources essential to the survival of human and nonhuman life, you are confronting the very challenges at the core of humanistic research and creative work that we are disciplinarily trained to identify and address,” Waligora-Davis said.

Timothy Morton, the Rita Shea Guffey Professor of English and Creative Writing, discussed how faculty leverage AI to re-center the value of close reading, slow thinking and critical analysis in the Jan. 28 event titled "The Failure of Superintelligence: Activating AI to Teach the Value of Deep Deliberation."

The classroom has become one of the earliest and most intimate testing grounds for AI. Students encounter it not as an abstract future but as a daily option. Faculty encounter it not as a product demo but as a force quietly reshaping assignments, originality and assessment. Beneath logistical questions about detection and policy sits a more uncomfortable one: What is the purpose of education in an environment where generation is cheap but judgment is not?

“As we begin to integrate artificial intelligence into our lives, which is surely inevitable, we have to put the human side of things first and avoid the trap of letting excitement about what we can do distract us from what we should do or even what we want to do,” said Robert Howell, the Yasser El-Sayed Professor of Philosophy and department chair.

Universities have always absorbed new technologies, from calculators to word processors to the internet. Each arrival produces anxiety. Each also produces adaptation. But the series explores how AI differs in kind, not just degree. It does not just extend human capacity. It imitates human output.

“In many ways the humanities plumb the value of who we are as people, bringing to the fore both our triumphs and our missteps,” Howell said. “To develop technology without the humanities is to sail without a compass and arguably without an intelligible map.”

The speed of AI’s adoption can create the illusion that this moment is unprecedented. History complicates that assumption. Societies have repeatedly greeted new tools with both wonder and blindness. The patterns are rarely obvious in real time.

“Understanding our current moment in historical context can help us learn how people grappled with emerging technologies in the past,” said Kirsten Ostherr, director of the Medical Humanities Research Institute. “This can help us learn how people interact with technologies, perhaps in unintended ways, and it can also help us identify unintended consequences that may make us better prepared to anticipate and avoid harmful consequences in the present moment with AI.”

Associate professor of history Elizabeth Petrick (left) and Ian Schimmel, associate teaching professor of English and creative writing, joined Morton as part of the Jan. 28 discussion along with Rodrigo Ferreira, assistant teaching professor of computer science.

One of the most contested arenas in the AI conversation is creativity. Text generators can write poems. Image models can mimic artistic styles. The outputs can look convincing. What remains harder to measure is what disappears when creative practice becomes primarily automated.

“Understanding the ways that creative practices like writing and art-making help us understand the human experience can shed light on what domains of practice should be especially protected from automation to preserve the benefits of art as an integral aspect of our humanity,” Ostherr said.

For HACKED organizers, the series is meant to move beyond debate and into shared practice.

“These events lay the groundwork for educating ourselves and one another about AI and generative technologies in ways that attend to the artifice of artificial intelligence; that equip faculty to leverage these tools to train students to think critically, ask questions, make informed judgments; that make visible the costs both hidden and visible of these technologies; and that invite conversations on and propose pathways for what it might mean to ethically and responsibly use AI with all its transformative capacities,” Waligora-Davis said.

The structure of HACKED reflects that philosophy. The series moves between conceptual framing and applied experimentation, between large questions and close looking. The Feb. 10 session will bring together scholars from medical humanities, philosophy, history and science and technology studies to examine the ethical, social, political, legal, cultural and environmental challenges posed by AI.

Later sessions will push participants into direct engagement with the tools themselves. “The AI Sandbox” scheduled for March 11 is designed as a prompt lab where faculty work in small groups to critique large language model output and test what these systems produce when placed under human scrutiny. “Against the Algorithm: AI, Ethics and the Archive” set for April 2 will turn attention toward the datasets that quietly power contemporary AI systems and the historical records, omissions and biases embedded within them.

Taken together, the series treats AI not as a solved problem nor a passing trend but as an unfinished social experiment already underway. More information about upcoming HACKED events is available from the School of Humanities and Arts.
