Rice joins federal consortium on AI safety

Ken Kennedy Institute helping advance the development and deployment of safe, trustworthy AI under the new U.S. AI Safety Institute


Rice University has joined the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers and civil society organizations to meet this mission.


“Rice is thrilled to engage with the U.S. AI Safety Institute Consortium as an inaugural member,” said Ramamoorthy Ramesh, Rice’s executive vice president for research. “Aligning with the AISIC, we will work to define the best practices and standards to develop and deploy AI systems that will positively impact our society.”

At Rice, the effort is led by the Ken Kennedy Institute, which supports research on AI, data and computing to solve critical global challenges. The institute, comprising more than 250 renowned faculty and senior research members, enables new conversations that drive convergent research to impact the development of new technology and advance training and education.

“Our faculty are pushing the boundaries of AI and collaborating widely in conceptualizing AI in diverse applications, including health, energy and urban resilience,” said Lydia Kavraki, director of the Ken Kennedy Institute. “We are concerned about AI safety, trustworthiness, transparency and fairness in decision making. The need to pull together efforts from academia and industry is urgent, and we congratulate NIST on their initiative.”

The consortium includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as civil society and academic teams that are building the foundational understanding of how AI can and will transform our society. The consortium also includes state and local governments as well as nonprofits, and will work with organizations from like-minded nations that have a key role to play in setting interoperable and effective safety protocols around the world.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” U.S. Secretary of Commerce Gina Raimondo said. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do. Through President Biden’s landmark executive order, we will ensure America is at the front of the pack, and by working with this group of leaders from industry, civil society and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The full announcement of the creation of AISIC and the complete list of consortium participants are available from NIST.

Links:

The Ken Kennedy Institute: https://kenkennedy.rice.edu/

Department of Computer Science: https://csweb.rice.edu/

George R. Brown School of Engineering: https://engineering.rice.edu/

Kavraki Lab: https://www.kavrakilab.org/
