Oil and gas industry faces new reality with better computing

Jan Odegard, executive director of the Ken Kennedy Institute for Information Technology, opens the annual Rice Oil & Gas High-Performance Computing Workshop on March 2. Photo by Darryl Howard

Rice hosts industry professionals to discuss how high-performance computing can enhance production

By Patrick Kurp
Special to the Rice News

“We’re seeing increasing costs and decreasing returns,” said François Alabert, vice president for geotechnology solutions at Total S.A., the French multinational oil and gas giant. “The only way to improve performance is to improve technology and know-how.”

It was a message often reiterated at the ninth annual Rice Oil & Gas (O&G) High-Performance Computing (HPC) Conference. Hosted March 2-3 at the BioScience Research Collaborative auditorium by the Ken Kennedy Institute for Information Technology (K2I), the event drew more than 550 leaders from the oil and gas, high-performance computing and information technology industries, as well as academic researchers.

“It’s a new world,” said Jan Odegard, K2I executive director and associate vice president in Rice’s Office of Information Technology. “More than ever we are coming to understand the importance of high-performance computing in the gas and oil industry. Under the current conditions we must work smarter, do more with less and continue aiming for exascale.”

François Alabert addresses the Rice Oil & Gas High-Performance Computing Conference. Photo by Darryl Howard

At the time of the 2015 conference, the price of a barrel of oil had fallen to about $55 from the roughly $110 it commanded from 2012 through 2014. After a brief rebound, the price now hovers around $36.

In his opening keynote address about how high-performance computing is reshaping exploration and production, Alabert said his company is responding to the oil crash by investing in HPC as the volume of data resulting from simulations, lab measurements and field measurements grows exponentially.

“This huge quantity of data has opened an era of data analytics and deep learning,” he said. “By taking advantage of the next generation of high-performance computing capabilities, the driver for the evolution of our technology, we will remain competitive. The return on exploration is going down dramatically, and we will respond with better technology.”

Sverre Brandsberg-Dahl put it another way: “We are ‘Vikings,’ and we operate boats and need more computing power,” he said. “It’s a well-kept secret: The driver of our industry is HPC. We must work smarter and do more with less.” Brandsberg-Dahl is global chief geophysicist of the imaging and engineering division of PGS (Petroleum Geo-Services), which operates 12 offshore seismic vessels and 21 data-processing centers around the world.

Last year PGS installed a five-petaflop Cray XC40 supercomputer, among the most powerful in the commercial sector. “It gives us a large amount of memory, a strong interconnect and the ability to scale the problem size,” Brandsberg-Dahl said.

Douglas Kothe, deputy associate director in the Computing and Computational Sciences Directorate at the U.S. Department of Energy’s Oak Ridge National Laboratory, told the audience that “it’s a very exciting time, like the early ’90s. It’s an interesting time algorithmically speaking. We’re still in the early days of real-time imaging.”

He said Oak Ridge currently runs Titan, a Cray XK7 ranked by TOP500 as the second-most powerful supercomputer in the world; Titan is handling 40 oil industry projects, Kothe said. In 2018, a new hybrid CPU/GPU computing system called Summit, being built by IBM, will be installed and is expected to deliver at least five times the performance of Titan.

Brent Gorda, general manager of HPC storage for Intel, talked about the company’s open source Lustre clustered file system, used by nine of the world’s top-10 supercomputers.

“Lustre is still evolving and adapting,” he said. “Are the HPC Achilles heels going to drive the architecture? I see some interesting storage hardware on the horizon. There will be a converging architecture with HPC, the cloud and Big Data.”

Several speakers addressed job prospects for computer science majors.

“We need people with the skills to create large-scale applications and maintain large-scale facilities,” said Barbara Chapman, a professor of computer science at the University of Houston. “What we need in the large Department of Energy labs is a skilled computing workforce. The demand for computer science expertise far exceeds the supply of trained graduates.”

Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, said, “We have data everywhere but not sufficient analysis. I think machine learning and deep learning will be the great drivers for future systems.”

Paul Messina is a senior strategic adviser and Argonne Distinguished Fellow at Argonne National Laboratory, home of Mira, the 10-petaflop IBM Blue Gene/Q ranked fifth most powerful supercomputer in the world. Speaking about the path to capable exascale computing, he noted that President Barack Obama signed an executive order last year establishing the National Strategic Computing Initiative.

“This order accelerates delivery of a capable exascale computing system,” Messina said. “The goal is a computer with 100 times the performance of the current 10-petaflop systems, and we want to do it by 2023.” That target works out to roughly an exaflop, or a billion billion calculations per second. He said the project will foster U.S. economic competitiveness and scientific discovery.
