‘A natural fit’: Rice philosophy professor explains relationship between philosophy, technology


When Robert Howell contemplates the future of artificial intelligence (AI), he foresees a world where an app might guide your moral decisions just as Google Maps helps you navigate a road trip. Though he explored the provocative idea in a 2014 paper titled “Google Morals, Virtue and the Asymmetry of Deference,” this isn’t just a thought experiment for Howell, chair of Rice University’s philosophy department and the Yasser El-Sayed Professor of Philosophy. Instead, it’s a harbinger of the pressing ethical dilemmas that AI presents and a perfect example of why philosophy has become integral to the conversation about technology.

“Imagine an app that tells you, ‘Here’s what you should do,’” Howell said. “Something uniquely human is being offloaded onto AI. We need to recognize the danger of sort of offloading important intellectual, emotional human work onto artificial intelligence in such a way that it actually vitiates our own humanity.”

This idea of “Google Morals” serves as a gateway into Howell’s broader exploration of AI and tech ethics.

“Philosophy is a natural fit in the conversation about technology ethics,” said Howell, who is spearheading Rice’s Ethics of Technology program. “Technology moves quicker than our reflection on its ethics, but the goal is to at least improve the response rate and do some anticipatory thinking.”

For Robert Howell, chair of Rice University’s philosophy department and the Yasser El-Sayed Professor of Philosophy, AI ethics can be broken down into three critical areas: the development of AI, its deployment and its use, with each presenting distinct ethical challenges that demand careful consideration. (Photo by Jeff Fitlow)

In addition to teaching, Howell fosters interdisciplinary collaborations on technology ethics. He works closely with Rice colleagues, including Moshe Vardi, the Karen Ostrum George Distinguished Service Professor in Computational Engineering, and Rodrigo Ferreira, assistant teaching professor of computer science, to address the ethical challenges posed by AI.

“Philosophers have spent a lot of time asking these questions and learning the false pathways,” Howell said, noting that this expertise allows philosophers to guide the conversation more effectively, helping society avoid unproductive or harmful directions. “Engineers know what the next product is, but philosophers think about ethics. It’s a pretty helpful partnership.”

For Howell, AI ethics can be broken down into three critical areas: the development of AI, its deployment and its use. Each presents distinct ethical challenges that demand careful consideration.

When it comes to the development of AI, Howell emphasized the importance of privacy and intellectual property considerations.

“One of the challenges we have is ensuring that our students can use AI tools without losing their intellectual property,” Howell said, explaining the concern that AI systems could absorb student input. “Suddenly, you’ve lost your property and it could reappear in someone else’s document.”

Beyond privacy, there’s the issue of bias in AI training data. Howell pointed out that AI can inadvertently perpetuate societal inequalities if it’s trained on biased or unrepresentative data sets.

“Making sure that AI isn’t trained on racist data or data that overrepresents one segment of the population is crucial,” Howell said, adding that ethical boundaries for AI development must include rigorous oversight to prevent such biases from taking root.

Once AI is developed, the next challenge is how to deploy it ethically. Howell raised concerns about the integration of AI into widely used platforms like Microsoft Word or Gmail.

“What are you doing to this population’s ability to express themselves if you integrate AI into Microsoft Word?” Howell asked.

“The deployment of AI requires a careful balancing act between innovation and ethical responsibility,” Howell said. (Photo by Jeff Fitlow)

The potential consequences of such integration include a diminishment of personal expression and creativity, particularly if users become overly reliant on AI for writing and other tasks.

Howell also highlighted the ethical implications of AI tools like ChatGPT, which have become ubiquitous despite their potential for misuse.

“The deployment of AI requires a careful balancing act between innovation and ethical responsibility,” Howell said.

Perhaps the most nuanced area of AI ethics is its use by individuals. Howell said he believes too much emphasis is often placed on companies and developers when, in fact, users also bear significant ethical responsibilities.

“We need to think about what our obligations are as users,” Howell said, falling back on his example of the potential danger that lies in offloading important intellectual and emotional work onto AI, which could, in turn, erode our humanity. “If we’re using AI for things that require deep thought, we might be offloading an important project of self-construction and development.”

The philosophy department is committed to exploring those questions and concerns via courses such as Technology, Society and Value, which is offered every semester, ensuring that students are equipped to navigate the complex ethical landscape of the future.

“We’re having really fun conversations,” Howell said. “It’s pretty rare that philosophy gets to deal with ripped-from-the-headlines topics. This gives us an opportunity to see where the rubber hits the road.”

Learn more about Ethics in Technology courses here. To schedule an interview with Howell, contact media relations specialist Brandi Smith at brandi.smith@rice.edu or 713-348-6769.
