Kennedy Institute’s 10th Oil & Gas HPC Conference highlights

BY PATRICK KURP
Special to Rice News

“Deciding where to drill hasn’t been getting any easier. We still need accurate, detailed subsurface images, computed from seismic survey data.”

So said Alan Lee, corporate vice president of research and advanced development at AMD, stating the recurrent theme of the 10th annual Oil & Gas High-Performance Computing Conference at Rice University. Hosted March 15-16 by Rice’s Ken Kennedy Institute for Information Technology, the event drew more than 435 leaders from the oil and gas industry, the high-performance computing and information technology industries and academia.

Calling the 10th anniversary gathering “a family reunion,” Jan Odegard, executive director of the Kennedy Institute and associate vice president of information technology, said, “We’re not a fly-by-night operation. I have referred to 2015 and 2016 as the ‘Wile E. Coyote Years’ for the oil and gas industry, but we’re not plummeting.”

In his plenary talk, “Big Compute: Under the Hood,” Lee stressed the need for what he called “big compute architectures.”

“All the easy oil and gas has already been found,” Lee said. “We need much better velocity models, and we need to look at things more probabilistically.”

John Eastwood, geophysics manager at ExxonMobil for Seismic Imaging/Processing/FWI Research and Acquisition Research, echoed the theme in his opening keynote address, “High Performance Computing and Full Waveform Inversion.”

“We’re seeing a paradigm shift from the conventional processing we’ve used for building models of the subsurface,” he said. “We need to use the entire seismic wavefield to generate high-resolution velocity models for imaging.”

Greater accuracy of imaging, Eastwood said, reduces the expense and environmental cost of drilling additional, sometimes unproductive wells. ExxonMobil’s proprietary algorithms and use of supercomputers enable the company to exploit the promise of full wavefield inversion and reveal the actual geological and geophysical properties of subsurface rock layers.

“The trend, as we see it, is to use more bandwidth the more complicated the geology becomes,” Eastwood said. “This technology requires a collaboration between geophysical researchers, software engineers and systems engineers. We have to maximize HPC capabilities. Computing advances enable imaging technology to progress.”

In her plenary talk, “Things to Consider: The Changing Landscape of HPC and Data Center,” Debra Goldfarb, chief analyst and senior director of market intelligence for Intel’s Data Center Group, predicted that within two years as much as 60 percent of the world’s data may have migrated to the cloud. At present, the amount totals about 4 percent.

“The noncloud architecture is shrinking,” she said. “The industry must be ready for the shift. I can tell you the top companies are offering kids contracts in their sophomore and junior years. They want to be ready.”

In just the last year, Goldfarb noted, much progress has been made in machine learning (ML) and deep learning (DL), and operators of high-performance computing systems are now developing and running ML/DL workloads on them.

“Users and algorithm scientists are optimizing their codes and techniques that run their algorithms, and system architects are working out the challenges they’re facing on various system architectures,” she said.

Keynote speaker David Keyes, director of the Extreme Computing Research Center at King Abdullah University of Science and Technology in Saudi Arabia, spoke on “Algorithmic Adaptations to Extreme Scale.”

He said the U.S. Department of Energy’s Exascale Computing Project expects the first post-petascale system to be deployed by 2021. That’s sooner than the original timeline and will make the U.S. project more competitive with similar efforts underway in China and Japan.

“We must be ready for what is coming,” Keyes said. “We need the algorithms for where the new architectures are going to be. There will be more burdens on software than on hardware.

“Algorithms must span the widening gap between ambitious applications and austere architectures,” he added. “With great computing power comes great algorithmic responsibility.”

Jim Kahle, CTO and chief architect for IBM’s Data Centric Deep Computing Systems and an IBM Fellow at IBM Research in Austin, Texas, spoke on “Data Center Impacts From the Convergence of High Performance and Cognitive Computing.”

“Data is the new basis of computational value,” he said. “We have a lot of work to do. The new technologies may not be ready for high-end applications in time to meet the end of scaling.”

He added: “We have massive data requirements driving a composable architecture for big data, complex analytics, modeling and simulation. Cognitive solutions are getting high-performance computing to work smart, not hard. The fastest calculation is the one you don’t run.”

Plenary speaker Peter Ungaro, president and CEO of Cray Inc., spoke on “Supercomputing: Yesterday, Today and Tomorrow.” He characterized the industry’s present state as “transitional,” and said, “Consider the transition from a ‘lowest-cost’ perspective to a ‘competitive-edge and return-on-investment’ perspective.”

Ungaro said parallelism in computing is now considered mandatory to achieve processing efficiency. “It’s the new normal. Architectures will change to harness the power of ‘wider’ computers. Interconnects must improve rapidly to deal with congestion and throughput.”

Ungaro offered another series of predictions: “Exascale will be the end of the CMOS era. Within five years we will see 10-plus teraflops on a single node. The world is shifting.”

The Oil & Gas High-Performance Computing Conference closed with a panel discussion moderated by John Mellor-Crummey, professor of computer science at Rice. One of the panelists, Peter Braam, CEO of Campaign Storage, stressed the importance of introducing students to “big computing” as early as possible. “It has become the main thing that has happened in the last 30 years,” Braam said.

Odegard said, “Peter Braam really captured the program committee’s larger vision, one of bringing together the three communities — oil and gas, IT and academics — to address technology needs, build a community and support workforce development, the much-needed talent pipeline we will depend on over the next decade.”

The 10th annual workshop included three tutorial sessions, a mini-workshop, two keynotes, six plenary sessions, six “disruptive technology talks” and 25 student poster presentations.

 

–Patrick Kurp is a science writer in the George R. Brown School of Engineering.

 
