
Reservoir modeling drives IT requirements

[Source: E&P, 2008-12-30]


Advances in reservoir modeling software require high-density server technology, hosted in sophisticated data centers. The image shows a simulation of airflow in a high-density data center running reservoir modeling software. (Image courtesy of Cyrus One)

As the performance capabilities of reservoir modeling software applications have increased over the last several years, translating into better reservoir visualization, enhanced production forecasting, and improved business decision-making, the gains have come at a price. The unparalleled productivity of today’s reservoir characterization software requires extensive upgrades in system hardware and data management architectures. These upgrades are often cost-prohibitive and can be short-lived if not properly configured. As a result, an increasing number of E&P companies are recognizing the significant capital investment required to keep this advanced hardware productive.

Advanced hardware required

Characterizing mature reservoirs, which typically have multiple wells and large volumes of production data, generally requires large datasets to capture minimally observable dynamics from a detailed analysis of historic flow and production. As the geostatistical software used to capture the data becomes more sophisticated, the datasets are becoming larger and more cumbersome to manage on conventional computing hardware. Early development and exploration projects rely more heavily on integrated seismic data that supplies accurate information regarding scope, continuity, and core dynamics. This volume of data is even larger than that used for mature reservoirs, often dozens of terabytes or more, and is channeled in parallel packets into visualization tools that must work at maximum speed to provide highly detailed real-time images that help upstream exploration units make accurate business decisions about whether or not to drill in a particular region.
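
As a rough illustration of how such volumes are channeled in parallel, the Python sketch below overlaps chunk reads of a large seismic volume with a thread pool and hands each chunk to a visualization consumer as it arrives. The chunk dimensions, worker count, and synthetic data are assumptions made for illustration, not figures from the article.

# Minimal sketch: stream a large seismic volume to a visualization
# consumer in parallel chunks. Chunk size, trace counts, and worker
# count are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

CHUNK_TRACES = 2_000           # traces per chunk (assumed)
SAMPLES_PER_TRACE = 2_000      # samples per trace (assumed)

def load_chunk(chunk_id: int) -> np.ndarray:
    """Stand-in for reading one chunk of a seismic volume from storage."""
    rng = np.random.default_rng(chunk_id)
    return rng.standard_normal((CHUNK_TRACES, SAMPLES_PER_TRACE), dtype=np.float32)

def render(chunk_id: int, data: np.ndarray) -> None:
    """Stand-in for pushing a chunk into the visualization pipeline."""
    print(f"chunk {chunk_id}: {data.nbytes / 1e6:.0f} MB, rms={np.sqrt((data ** 2).mean()):.3f}")

def stream_volume(n_chunks: int, workers: int = 8) -> None:
    # Overlap I/O-bound reads with a thread pool so the renderer never
    # waits on a single serial read.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for chunk_id, data in enumerate(pool.map(load_chunk, range(n_chunks))):
            render(chunk_id, data)

if __name__ == "__main__":
    stream_volume(n_chunks=4)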

Early-stage and mature characterizations represent varying levels of uncertainty. To mitigate this uncertainty, geoscience experts depend on the most advanced technology to achieve the fullest level of geostatistical performance and reservoir characterization. Optimizing business performance requires not only the most advanced software and tools, but also best-in-class servers and storage hardware coupled with data architectures that can support the growing dependence on vast datasets and increasingly complex applications. While this helps ensure maximum performance and exponential scalability, it can potentially require huge IT investments in computer hardware and back-office technology.

Today’s high-demand, real-time business and engineering applications, including reservoir modeling and seismic visualization, are powered by high-density hardware and data infrastructures that deliver exponentially superior performance over previous generations. High-density blade servers and storage equipment are designed to minimize floor space; improve system speed, redundancy, and scalability; and support the most demanding applications and data utilization protocols. However, this performance comes at a steep cost: high-density equipment has much higher power and cooling requirements, including power infrastructures exceeding 250 watts/sq ft and additional power distribution units and computer room air conditioning modules. Many companies looking for a solution that optimizes hardware and software performance find that they must retrofit an existing data center or build an entirely new facility to meet all of the power, cooling, and configuration requirements.
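
To put the 250 watts/sq ft figure in perspective, the short calculation below sizes the power, cooling load, and floor area of a hypothetical high-density hall. The rack count, per-rack draw, and cooling overhead are assumed values, not figures from the article.

# Back-of-the-envelope sizing for a high-density data hall. Only the
# 250 W/sq ft power density comes from the article; everything else
# is an illustrative assumption.
RACKS = 40                    # assumed number of blade-server racks
KW_PER_RACK = 12.0            # assumed IT load per high-density rack (kW)
POWER_DENSITY_W_SQFT = 250.0  # supported power density (article figure)
COOLING_OVERHEAD = 0.5        # assumed: ~0.5 W of cooling per W of IT load

it_load_kw = RACKS * KW_PER_RACK
floor_area_sqft = it_load_kw * 1_000 / POWER_DENSITY_W_SQFT
total_load_kw = it_load_kw * (1 + COOLING_OVERHEAD)

print(f"IT load:           {it_load_kw:.0f} kW")
print(f"Floor area needed: {floor_area_sqft:,.0f} sq ft at {POWER_DENSITY_W_SQFT:.0f} W/sq ft")
print(f"Power + cooling:   {total_load_kw:.0f} kW")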

Impact of high density

The alternative to which many E&P companies are turning is third-party or collocated data centers.

Building or retrofitting an in-house data center can cost more than US $2,500/sq ft, and the manpower it demands is far too high for agile E&P companies that need to dedicate all available time and resources to achieving business results in the field. Using a dedicated, hosted infrastructure provides cost-conscious E&P companies with an affordable alternative that delivers future-proof reliability in the form of cost predictability, maximum redundancy, and performance dependability.
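
A rough comparison of the two paths might look like the sketch below. The US $2,500/sq ft build-out cost comes from the article; the floor area, colocation rate, and planning horizon are assumptions, and the build figure ignores ongoing power and staffing costs.

# Simplistic capex-vs-opex comparison; assumed inputs, not a quote.
FLOOR_AREA_SQFT = 2_000          # assumed data hall size
BUILD_COST_PER_SQFT = 2_500      # in-house build/retrofit cost (article figure)
COLO_MONTHLY_PER_SQFT = 40       # assumed all-in colocation rate, $/sq ft/month
YEARS = 5                        # assumed planning horizon

build_capex = FLOOR_AREA_SQFT * BUILD_COST_PER_SQFT
colo_opex = FLOOR_AREA_SQFT * COLO_MONTHLY_PER_SQFT * 12 * YEARS

print(f"In-house build-out (capex):      ${build_capex:,.0f}")
print(f"Colocation over {YEARS} years (opex): ${colo_opex:,.0f}")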

Best-in-class data centers are rated at a Tier 3+ level and are built to provide scalable architectures designed for optimal high-density and dedicated server needs. This helps reduce the need for in-house IT staff to manage reservoir characterization and related modeling and visualization applications. Tier 3+ data centers are designed to meet each of the three principal criteria essential to running highly sophisticated, data-rich applications: speed, scalability, and accuracy.

Application speed is ensured through the use of dedicated, high-density servers. Although the datasets are enormous, the ability to host and run these applications on best-in-class servers allows access to real-time modeling with seamless updates. This means all back-office hardware (including storage, servers, and switches) is fully integrated and optimized for the highest speed and availability. In the case of reservoir characterization and related seismic visualization, this can require an exceptionally dense hardware footprint to run 10, 20, 30, or even more terabytes of data simultaneously.
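
For a sense of scale, the arithmetic below estimates how long a full pass over a multi-terabyte dataset takes when reads are spread across parallel storage nodes. The dataset size echoes the figures above; the node count and per-node bandwidth are assumed values.

# Illustrative throughput arithmetic; node count and bandwidth are assumptions.
DATASET_TB = 30            # e.g. a 30-TB characterization dataset
NODES = 16                 # assumed storage nodes read in parallel
GBPS_PER_NODE = 4.0        # assumed sustained read bandwidth per node (Gbit/s)

total_bits = DATASET_TB * 1e12 * 8
aggregate_gbps = NODES * GBPS_PER_NODE
seconds = total_bits / (aggregate_gbps * 1e9)

print(f"Aggregate bandwidth: {aggregate_gbps:.0f} Gbit/s")
print(f"Full pass over {DATASET_TB} TB: {seconds / 60:.0f} minutes")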

To address scalability, Tier 3+ data centers are designed using “application-agnostic” architectures. This enables a seamless fit for a wide variety of applications and system requirements with the full availability of any chosen operating system. It also gives three-dimensional applications full access to open libraries, which provide greater resource availability but are more demanding than conventional, non-open protocols. These architectures also facilitate process load-balancing rather than user load-balancing, a protocol that works better for three-dimensional modeling environments. This, in turn, provides real-time adaptability and load shifting to guarantee optimal performance at all times.
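
The sketch below illustrates process load-balancing in the simplest terms: each modeling job is dispatched to the currently least-loaded compute node, regardless of which user submitted it. Node names, load figures, and job costs are hypothetical.

# Minimal process-level load balancer: jobs go to the least-loaded node.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Node:
    load: float                       # current utilization, 0.0-1.0
    name: str = field(compare=False)

def dispatch(jobs, nodes):
    """Assign each job to the currently least-loaded node."""
    heap = list(nodes)
    heapq.heapify(heap)
    for job, cost in jobs:
        node = heapq.heappop(heap)    # node with the lowest load right now
        node.load += cost             # the job adds load while it runs
        heapq.heappush(heap, node)
        print(f"{job:>14} -> {node.name} (load now {node.load:.2f})")

if __name__ == "__main__":
    nodes = [Node(0.20, "blade-01"), Node(0.55, "blade-02"), Node(0.10, "blade-03")]
    jobs = [("history-match", 0.30), ("volume-render", 0.25), ("forecast", 0.15)]
    dispatch(jobs, nodes)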

Data accuracy and reliability are delivered through the deployment of 2N redundant architectures. Each system runs on a parallel architecture, providing full 1:1 data and application backup in the event of failure. Each dedicated server has a backup unit to allow immediate, seamless switchover. Using a 2N protocol guarantees business continuity; a system can remain up and running despite any adverse events, whether external or internal to the data center. This feature is essential to upstream exploration, where system failure, even for a brief time, could mean the loss of valuable business opportunities.
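
A minimal sketch of the 2N idea follows, with hypothetical server names and a stand-in health check: every primary has a dedicated mirror, and requests switch over the moment the primary stops responding.

# Minimal 2N-style pairing: a dedicated standby mirrors each primary and
# takes over transparently on failure. Names and the failure signal are
# hypothetical stand-ins for real monitoring.
class Server:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

class RedundantPair:
    """1:1 primary/standby pair; state is assumed to be mirrored."""
    def __init__(self, primary: Server, standby: Server):
        self.primary, self.standby = primary, standby

    def handle(self, request: str) -> str:
        try:
            return self.primary.handle(request)
        except ConnectionError:
            # Immediate switchover: the standby already holds mirrored state.
            self.primary, self.standby = self.standby, self.primary
            return self.primary.handle(request)

if __name__ == "__main__":
    pair = RedundantPair(Server("sim-a"), Server("sim-b"))
    print(pair.handle("timestep 1"))
    pair.primary.healthy = False          # simulate a failure on the primary
    print(pair.handle("timestep 2"))      # served transparently by the mirror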

As new applications and software become necessary, the role of the data center becomes even more critical. Under the supervision of best-in-class data center personnel, newer applications can be seamlessly uploaded and integrated with zero downtime. Some in-house data centers have these capabilities, but they are typically built out on an ad hoc basis or require retrofit architectures and physical relocation of data storage and system hardware.

As many E&P companies have noted, collocated data centers afford “future-proof peace of mind” and the ability to dedicate IT resources and staff to other customer- and business-facing projects and initiatives.

Keywords: reservoir modeling, IT