What is High Performance Computing?
High Performance Computing (HPC) is the use of highly parallel computer hardware, typically a cluster of interconnected processors, to execute applications at very high speeds. HPC architectures can be used both for supercomputing and for local, or desktop-scale, computing. The fastest supercomputers are based on massively parallel processing designs; they employ a very large number of processors working in parallel to perform calculations.
The major challenges in supercomputing include managing the hardware and software complexity of systems that scale to thousands of nodes and millions of processor cores, writing applications that can effectively exploit the available processing power, and sustaining high application performance across a variety of interconnects.
HPC applications need to be rewritten, or at least significantly modified, to run on a massively parallel machine, because the programming models differ from those used in small- and medium-scale computing environments. In most cases, applications were originally developed for high-performance serial processors, whereas massively parallel machines rely on message passing over fast point-to-point interconnects, with many nodes exchanging messages with each other concurrently.
Most distributed memory systems consist of a large number of identical processors connected by a network, over which they communicate using message-passing libraries such as Open MPI and MPICH2 (implementations of the MPI standard) or PVM (Parallel Virtual Machine). One can think of a cluster as a single computer made up of many individual computers working together.
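To make the message-passing model concrete, here is a minimal point-to-point exchange written with mpi4py, the Python bindings for MPI. This is an illustrative sketch only: production HPC codes are more commonly written in C, C++ or Fortran, and the ring pattern shown is just one simple communication scheme.

```python
# Minimal MPI point-to-point exchange; run with e.g.:  mpiexec -n 4 python ring.py
# Illustrative sketch only; assumes mpi4py is installed on the cluster.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of processes

# Each process sends to its right-hand neighbour and receives from its
# left-hand neighbour, so all processes communicate concurrently.
right = (rank + 1) % size
left = (rank - 1) % size

message = {"from": rank, "payload": rank * 100}
received = comm.sendrecv(message, dest=right, source=left)

print(f"rank {rank}: received {received} from rank {left}")
```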
HPC has been used by the oil and gas industry to optimize petroleum recovery, design new products and forecast market conditions. In these applications HPC has been applied at a very detailed level, typically down to individual production fields. This approach is being extended from optimizing single fields for single companies to optimizing portfolios of assets (fields) within an energy company, and across multiple companies in collaboration with organizations such as universities and research institutions. For example, synthetic oil well tests conducted with HPC resources can help our clients better understand reservoir performance under different future scenarios, involving changes such as technology improvements, changes in tax rates (for both exploration and development activities), shifts in the energy mix (coal, nuclear, renewables) and environmental policies.
HPC can be used to solve complex problems that are otherwise intractable due to their scale or complexity. The oil industry already uses HPC for reservoir modelling, certification of reserves, selection of drilling locations and evaluation of alternative development schemes. HPC also allows more accurate forecasts of how increased regulation will affect the production profile of individual assets within a portfolio, and helps us understand the impact that changes in technology may have on our clients’ business over time. For example, if we know which technologies are most likely to gain market acceptance over the next 20 years, we can use our HPC models to simulate those technologies under different assumptions and understand their impact on the industry.
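As a toy illustration of this kind of scenario analysis, the sketch below sweeps a simple exponential decline-curve production model over a handful of hypothetical technology scenarios. The model, scenario names and parameters are invented and far simpler than a real reservoir or market model.

```python
import numpy as np

def production_profile(initial_rate, decline, years=20):
    """Exponential decline curve: rate(t) = q0 * exp(-d * t). Toy model only."""
    t = np.arange(years)
    return initial_rate * np.exp(-decline * t)

# Hypothetical technology scenarios: each changes the assumed
# initial-rate uplift and decline rate of an asset.
scenarios = {
    "base case":           {"initial_rate": 1000.0, "decline": 0.10},
    "enhanced recovery":   {"initial_rate": 1150.0, "decline": 0.08},
    "aggressive drawdown": {"initial_rate": 1300.0, "decline": 0.14},
}

for name, params in scenarios.items():
    profile = production_profile(**params)
    print(f"{name:>20}: cumulative 20-yr production = {profile.sum():,.0f}")
```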
A typical oil reservoir simulation can take months (or even years) to perform because thousands of samples are typically needed for each 3D grid block in the model. Using HPC we can reduce this to days or hours, for example by accelerating the solution of the large sparse systems of linear equations at the core of the simulator with optimized direct or multigrid solvers. There are many other optimizations one could apply, but our main focus is on increasing productivity while ensuring that accuracy is not sacrificed. This requires knowledge of the physics involved and of where computational cost savings can be achieved without affecting accuracy, which is especially challenging with complex physical models such as non-Newtonian fluid flow and chemical reactions.
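To give a flavour of the computational kernel involved, the sketch below assembles a seven-point finite-difference operator that stands in for a pressure equation on a small 3D grid and solves it with SciPy's sparse direct solver. The grid size and right-hand side are placeholders; a production simulator would use a tuned parallel direct or multigrid solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson3d(n):
    """Seven-point finite-difference Laplacian on an n x n x n grid.
    Stands in for the pressure equation of a reservoir simulator."""
    one_d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    eye = sp.identity(n)
    # Kronecker products build the 3D operator from the 1D stencil.
    return (sp.kron(sp.kron(one_d, eye), eye)
            + sp.kron(sp.kron(eye, one_d), eye)
            + sp.kron(sp.kron(eye, eye), one_d)).tocsc()

n = 20                          # 20^3 = 8,000 grid blocks (toy size)
A = poisson3d(n)
b = np.ones(A.shape[0])         # placeholder source term
pressure = spsolve(A, b)        # sparse direct solve

print(f"solved {A.shape[0]:,} unknowns, max pressure = {pressure.max():.4f}")
```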
HPC also enables us to perform more detailed analysis on a particular asset or portfolio of assets. For example, we can consider the potential impact of a new regulation in a given country by taking into account both its direct and indirect consequences for related production fields in nearby countries. We can also provide our clients with better insight into how their technical spending decisions today will affect their business over time. In addition, HPC enables our clients to optimize portfolios not only across different companies but also across different end-markets such as oil exploration and development, chemical manufacturing, power generation and even agriculture. As mentioned earlier, this ideally requires access to tens of thousands of processors at the same time, so that the models can be run in parallel without impacting turnaround time. This capability is already being used by our clients for purposes such as geothermal reservoir simulation, helping them understand the long-term thermal performance of a reservoir and optimize extraction rates over several decades.
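Scenario studies of this kind are embarrassingly parallel: each model run is independent, so turnaround time shrinks almost linearly with the number of processors available. Below is a minimal single-machine sketch of the pattern using Python's multiprocessing module; evaluate_scenario is a hypothetical stand-in for a full simulation run, and on a real cluster the fan-out would be handled by MPI or a batch scheduler instead.

```python
import multiprocessing as mp
import random

def evaluate_scenario(seed):
    """Stand-in for one full simulation run; returns a toy value figure."""
    rng = random.Random(seed)
    return seed, sum(rng.gauss(100.0, 15.0) for _ in range(1000))

if __name__ == "__main__":
    scenario_ids = range(32)    # thousands of scenarios in a real study
    with mp.Pool() as pool:     # one worker per available core
        results = pool.map(evaluate_scenario, scenario_ids)
    best = max(results, key=lambda r: r[1])
    print(f"best scenario: {best[0]} with value {best[1]:,.0f}")
```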
Finally, HPC extends beyond traditional oil and gas models to create new opportunities across all sectors of the industry. Upstream, it supports activities such as development drilling and resource estimation. Midstream, it can be applied at both the planning and operational phases, for example to optimize a refinery’s throughput or a pipeline network’s design. Downstream, it enables improved models of current refining processes along with optimization of future production schemes. For more information on how HPC is being used in different areas of oil & gas, please refer to the article titled “HPC in Oil & Gas: a changing landscape” that appears elsewhere in this issue.
The Role of HPC Software Tools
In addition to hardware advances, software is playing an increasingly important role as we begin to use HPC for more advanced purposes such as optimization and uncertainty quantification. For example, classical reservoir simulators are often extended with optimization algorithms that allow the identification of optimal solutions across large solution spaces. Uncertainty quantification models can be developed independently or integrated into existing software to provide a more complete picture of model performance under different scenarios, and thus improve the interpretation of results.
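As a rough illustration of this coupling, the sketch below wraps a toy proxy simulator in SciPy's optimizer to find well-control settings that maximize recovery, then runs a Monte Carlo loop over an uncertain permeability multiplier to quantify how input uncertainty propagates into the result. The proxy objective, parameter names and distributions are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def proxy_simulator(controls, perm_multiplier=1.0):
    """Toy stand-in for a reservoir simulator: returns negative recovery
    so that SciPy's minimizer effectively maximizes recovery."""
    injection, drawdown = controls
    recovery = perm_multiplier * (10 * injection - injection**2
                                  + 6 * drawdown - 0.5 * drawdown**2)
    return -recovery

# Optimization: search for well controls that maximize recovery.
result = minimize(proxy_simulator, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimal controls:", result.x)

# Uncertainty quantification: propagate an uncertain permeability
# multiplier through the optimized design via Monte Carlo sampling.
rng = np.random.default_rng(42)
samples = rng.lognormal(mean=0.0, sigma=0.2, size=2000)
recoveries = [-proxy_simulator(result.x, perm_multiplier=m) for m in samples]
print(f"recovery: mean={np.mean(recoveries):.1f}, "
      f"P10={np.percentile(recoveries, 10):.1f}, "
      f"P90={np.percentile(recoveries, 90):.1f}")
```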
In addition to traditional domains such as the oil and gas industry, where there is a good deal of experience in developing modeling applications with high performance computing, HPC can also be used to solve big data problems in the broader energy sector. For example, our partners at IBM recently completed work on an exciting new project that computes the optimal paths for electric vehicle (EV) charging stations in the U.S., taking into account traffic patterns, population density and energy costs across the country. This allows utilities to plan for EV charging stations by understanding where they’re most needed, how much power they’ll require and what their associated costs might be.
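Purely as a much-simplified illustration of this kind of cost-weighted routing problem (and not IBM's actual method), the sketch below uses the networkx library to find the cheapest route across a toy road network whose edge costs combine distance, a traffic factor and a local energy price. All names and numbers are invented.

```python
import networkx as nx

# Toy road network: nodes are cities; edge attributes combine distance,
# a traffic factor and a local energy price. All values are invented.
G = nx.Graph()
edges = [
    ("A", "B", {"miles": 120, "traffic": 1.3, "energy": 0.14}),
    ("B", "C", {"miles": 90,  "traffic": 1.0, "energy": 0.11}),
    ("A", "D", {"miles": 150, "traffic": 0.9, "energy": 0.10}),
    ("D", "C", {"miles": 80,  "traffic": 1.1, "energy": 0.12}),
]
for u, v, attrs in edges:
    # Composite edge cost: travel effort times the price of the energy
    # an EV would draw along that leg.
    attrs["cost"] = attrs["miles"] * attrs["traffic"] * attrs["energy"]
    G.add_edge(u, v, **attrs)

path = nx.shortest_path(G, "A", "C", weight="cost")
cost = sum(G[u][v]["cost"] for u, v in zip(path, path[1:]))
print(f"cheapest route A -> C: {path} (cost {cost:.1f})")
```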
The Role of HPC in Big Data Analytics
Finally, it is worth mentioning that tools developed by companies such as IBM have been able to link modeling and simulation capabilities with Hadoop-based big data platforms to provide a more complete picture of a complex system or process. By using a combination of modeling and simulation along with big data analytics, it becomes possible to do things such as predict the impact of an upcoming regulation on a very complex process, as well as optimize the design of a refinery over time to meet changing market conditions.
In the next section, we’ll look at some actual examples from our experience with clients to understand how HPC is being used in oil and gas today.