Rice faculty experts discuss ethical guidelines for building responsible AI

As artificial intelligence (AI) rapidly evolves, Rice University experts in computer science, philosophy, business and psychology spoke to a crowd of around 100 about how AI is reshaping society, industry and human behavior, along with what must be done to ensure it benefits humanity.

Ferreira, Howell, Perley, and Oswald answer questions from the audience. Photo by Jeff Fitlow/Rice University.

Rice’s Office of Ethics, Compliance and Enterprise Risk hosted the third annual Ethics and Compliance Symposium April 29 at Farnsworth Pavilion. As associate vice president and chief compliance officer Chetna Koshy shared at the outset of the event, its theme, “Building a Responsible AI Future,” aligns with Rice’s Momentous: Personalized Scale for Global Impact strategic plan.

“We must ensure the development and deployment of AI is not only innovative but responsible,” Koshy said.

Several faculty members on the university’s AI advisory committee addressed various topics, including the implications of artificial general intelligence (AGI), workforce disruption, education, data transparency, chip supply vulnerabilities and the erosion of moral agency in an AI-driven world.

Paul Padley: How disruptive technologies have reshaped society

Paul Padley, vice president for IT and chief information officer, linked the rise of AI to previous technological revolutions, from the printing press to modern computing. In his opening remarks, he emphasized that while AI is rooted in rule-based logic and symbolic manipulation dating back to the 1960s, today’s neural networks present distinct ethical and educational challenges because they are far less transparent.

Padley acknowledged AI’s role in research breakthroughs such as the Higgs boson discovery but cautioned against overestimating its capabilities. “AI is here to stay, and its implications for education, research and leadership demand thoughtful engagement,” he said. “We need to carefully consider what this means.”

Fred Oswald: Responsible AI in education, work and the workforce

Fred Oswald, the Herbert S. Autrey Chair in Social Sciences and chair of Rice’s AI advisory committee, advocated for preserving critical thinking, empathy and judgment as AI becomes more embedded in the workplace. He underscored AI’s expansive role, from writing essays to evaluating job applicants.

“Students must understand how AI is redefining job requirements and their future careers,” he said. “As AI evolves, our understanding of what it means to be responsible will also evolve.”

While machines can optimize tasks, Oswald argued that we must maintain uniquely human qualities such as empathy and judgment. “You risk undermining critical thinking by overusing AI,” he said. “Responsible AI means balancing efficiency with preserving these essential human traits.”

Robert Howell: Avoiding the vices of AI

Robert Howell, the Yasser El-Sayed Professor of Philosophy, cautioned that core moral virtues may quietly erode as humans increasingly delegate decisions to machines. “In the process of gaining efficiency through artificial intelligence, we must not lose sight of the human cost,” he said.

Howell explained that offloading ethical choices risks weakening qualities such as generosity, courage and intellectual curiosity — traits involving behavior, reasoning, emotion and focused attention, which AI cannot develop or embody for us.

“Attention is a moral commodity,” Howell said. “We must be careful about artificial intelligence’s lower recognition threshold. Responsible AI requires us to remain vigilant about maintaining our moral and ethical standards.”

Rodrigo Ferreira: What we mean by responsible AI

Rodrigo Ferreira, assistant teaching professor of computer science, critiqued oversimplified views of fairness and transparency, advocating for nuanced, evolving dialogue. “Responsibility follows this response process and the ability to respond,” he said.

While tech companies promote ethical AI, Ferreira said the terms are often oversimplified. Transparency depends on the audience’s understanding, fairness varies across cultures, and accountability is muddied by competing interests among stakeholders and regulators.

“Responsible AI means engaging in continuous dialogue and adapting our frameworks to address these complexities,” Ferreira said.

Kathleen Perley: When technology outpaces ethics

Kathleen Perley, a Rice Business instructor and adviser on AI initiatives, discussed the rapid evolution of AI, particularly the prospect of AGI.

She advocated for global safeguards, the implementation of safety protocols and control over chip supply, citing an estimate that 92% of the advanced chips used in cutting-edge AI models are manufactured in Taiwan and South Korea. “We are one earthquake away from a single point of failure that could delay AI advancements,” she said.

Perley called for global regulations based on safety protocols, including red-teaming — using ethical hackers to simulate real-world cyberattacks — and kill switches. She emphasized the challenge of balancing innovation with responsibility as the global race intensifies.

“Ensuring responsible AI development means preparing for and mitigating these risks,” Perley said.

The speakers’ presentations were followed by a panel discussion and Q&A session moderated by Shawn Miller and featuring Oswald, Howell, Ferreira and Perley.

This year’s symposium was co-sponsored by Rice’s AI advisory committee and executive sponsors Amy Dittmar, the Howard R. Hughes Provost and executive vice president for academic affairs, and Padley. The annual event brings together faculty and administrators in a collaborative environment where the Rice community can engage in meaningful discussions, share insights and advance academic scholarship on ethics and compliance matters.
