Simulation

Why are science and technology evolving faster than ever before? One of the many factors contributing to this vast and rapid expansion is the advent of computers. Computers enabled scientists to widen their research spectrum, easing earlier limitations on physical experimentation (the time required to perform an experiment, the availability of materials, the safety of operating the experiment, and so on). The branch of science devoted to studying systems through computational models is known as simulation. Surprisingly, few people know how computers evolved to become what they are today.

Computers as we know them today (digital, electronic, programmable machines) were born for military operations during the Second World War. Among the most notable applications of that era were the deciphering of the German Enigma codes and many uses in the Manhattan Project (estimating cross sections, i.e. the probability that atoms would interact in a certain reaction; critical mass calculations; and so on), as well as the computation of ballistic trajectory tables. All of these are examples of computer simulation!

Computer simulation has become a useful part of the mathematical modelling of many natural systems: weather forecasting, reservoir simulation, flight simulators, electrical circuit design, traffic engineering, fluid dynamics, structural engineering, and more. However, computer simulations are no longer restricted to mimicking physical phenomena; they also find constant use in financial applications such as risk forecasting and price modelling.
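To make the idea concrete, here is a minimal sketch of one of the oldest simulation tasks mentioned above, a ballistic trajectory, integrated numerically with the explicit Euler method. All parameter values (launch speed, angle, drag coefficient, step size) are illustrative and not tied to any real projectile.

```python
import numpy as np

def simulate_trajectory(v0, angle_deg, drag_coeff=0.01, dt=0.01, g=9.81):
    """Integrate a 2-D ballistic trajectory with quadratic air drag
    using the explicit Euler method. Returns arrays of x, y positions."""
    angle = np.radians(angle_deg)
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    x, y = 0.0, 0.0
    xs, ys = [x], [y]
    while y >= 0.0:
        speed = np.hypot(vx, vy)
        # Quadratic drag opposes the velocity vector.
        ax = -drag_coeff * speed * vx
        ay = -g - drag_coeff * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = simulate_trajectory(v0=100.0, angle_deg=45.0)
print(f"Approximate range: {xs[-1]:.1f} m")
```

Producing one such trajectory by hand took human "computers" hours during the war; a machine reproduces it in microseconds, which is precisely what made tabulating thousands of firing solutions feasible.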

Simulations are heavily structured around a mathematical model that aims to describe the phenomenon being simulated. However, there are several shortfalls in this process, most notably the incompleteness or inaccuracy of analytical models in representing real-world cases, whether due to uncertainties in the estimated initial conditions or due to improper approximations and simplifications of the problem. These challenges are tackled through a process known as data assimilation (in the oil industry commonly referred to as history matching). Calibration alone, however, is not sufficient to ensure consistent simulation models; two additional steps are required: verification and validation.
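As a minimal illustration of the data-assimilation idea, the sketch below blends a model forecast with a noisy measurement in a single scalar Kalman-style update. The pressure figures are hypothetical, and operational systems use far richer formulations (ensemble Kalman filters, variational methods), but the weighting principle is the same.

```python
import numpy as np

def assimilate(prior_mean, prior_var, obs, obs_var):
    """One scalar Kalman-style update: blend a model forecast (prior)
    with a noisy observation, weighting each by its confidence."""
    gain = prior_var / (prior_var + obs_var)          # Kalman gain
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Hypothetical example: the model forecasts a reservoir pressure of
# 210 bar (std 15); the gauge reads 198 bar (std 5). The update pulls
# the estimate toward the more precise observation.
mean, var = assimilate(prior_mean=210.0, prior_var=15.0**2,
                       obs=198.0, obs_var=5.0**2)
print(f"Assimilated estimate: {mean:.1f} bar (std {np.sqrt(var):.1f})")
```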

Model calibration is achieved by adjusting uncertain input parameters in order to better reproduce past events. In reservoir simulation, for example, typical input parameters are the distributions of rock quality, porosity, permeability, saturation, and net-to-gross ratio. These parameters influence field performance (oil production rate, water breakthrough, pressure distribution), so adjusting them has a direct impact on well performance.
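A minimal sketch of this calibration loop, assuming a toy exponential decline curve stands in for the full reservoir simulator; the production history below is invented purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    """Toy reservoir proxy: exponential production decline q(t) = q0 * exp(-D t)."""
    q0, decline = params
    return q0 * np.exp(-decline * t)

# Hypothetical observed production history (rates with measurement noise).
t_obs = np.arange(0, 10)
q_obs = np.array([1000, 890, 812, 730, 655, 601, 540, 492, 446, 405], dtype=float)

def residuals(params):
    # Calibration minimizes the mismatch between simulated and observed rates.
    return forward_model(params, t_obs) - q_obs

result = least_squares(residuals, x0=[900.0, 0.05])
q0_fit, decline_fit = result.x
print(f"Calibrated q0 = {q0_fit:.0f}, decline rate = {decline_fit:.3f}/yr")
```

In practice the forward model is the reservoir simulator itself and the parameter vector holds the property distributions listed above, which makes every residual evaluation vastly more expensive than in this toy case.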

Verification is typically performed by comparing the simulation outputs against measured historical values; in a reservoir simulation model, pressure surveys and field allocation data are reasonably well known. Model validation is achieved by comparing the simulated field performance to that of analogous fields produced under the same strategy. Three sources of error can lead to poor calibration results: noisy data, model errors, and parameter errors. In general, input data and parameters can be adjusted easily by the user. Model errors, however, are caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically crude approximations of reality and can produce poor results; some models are more generalized while others are more detailed. If model error occurs, it may be necessary to adjust the model methodology to make the results more consistent.
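A short sketch of how such a verification check might be quantified, using hypothetical pressure-survey values; treating a normalized misfit near 1 as acceptable is a common rule of thumb under these assumptions, not a universal standard.

```python
import numpy as np

def history_match_quality(simulated, observed, obs_std):
    """Quantify the match between simulated and measured histories.
    Returns the RMSE and a chi-square-like normalized misfit: values
    near 1 suggest the model fits to within measurement noise, while
    much larger values point to model or parameter errors."""
    residual = np.asarray(simulated) - np.asarray(observed)
    rmse = np.sqrt(np.mean(residual**2))
    normalized = np.mean((residual / obs_std) ** 2)
    return rmse, normalized

# Hypothetical pressure surveys (bar) vs. simulated pressures at the same dates.
observed  = [250.0, 243.5, 238.2, 230.9, 226.4]
simulated = [251.2, 241.8, 236.5, 232.3, 224.9]
rmse, chi2 = history_match_quality(simulated, observed, obs_std=2.0)
print(f"RMSE = {rmse:.2f} bar, normalized misfit = {chi2:.2f}")
```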

Simulation models can be used as tools to verify engineering theories, but they are valid only if calibrated properly. Once satisfactory estimates of the parameters of all models have been obtained, the models must be checked to ensure that they adequately perform their intended functions.

The importance of model validation underscores the need for careful planning, thoroughness, and accuracy in the input data collection program carried out for this purpose. Efforts should be made to ensure that the collected data are consistent with expected values: the resulting models and forecasts will be no better than the data used for their estimation and validation.