Over a five-week period beginning in late May 2020, John Doherty, Catherine Moore and Jeremy White delivered a series of five webinars on the theory and practice of decision-support groundwater modelling. This page directs you to videos of these webinars and describes their contents.
The webinar series was sponsored by GMDSI. “GMDSI” stands for “Groundwater Modelling Decision Support Initiative”. This is a 3.5-year project, funded by BHP and Rio Tinto, and conducted under the auspices of the National Centre for Groundwater Research and Training (NCGRT), Flinders University, Australia.
The GMDSI project seeks to promote discussion within the groundwater industry on how decision support groundwater modelling is best carried out. At the same time, it seeks to assist the industry in raising the standard for this kind of modelling through education, worked examples and strategic software development.
There is no “best” way to model. However, certain principles must prevail. These principles are based on the scientific method. In accordance with these principles, data assimilation and quantification of the uncertainties of decision-critical model predictions must be central to decision-support modelling; they should not be a modelling afterthought. Simulation should serve data assimilation, and not the other way around.
Visit the GMDSI web pages to find out more about this project.
Webinar 1: Concepts and Fundamentals
This webinar questions the notion that we are able to simulate groundwater systems, and the processes that operate within them, very well at any particular site. It also questions whether we need to. It examines the notion of the “conceptual model” that underpins any numerical groundwater model. It suggests that unless this conceptual model gives as much importance to the flow of information as it does to the flow of water, then it does not provide a solid foundation for a decision-support numerical model.
The webinar goes on to discuss Bayes equation, and the principles that it enshrines. It suggests that Bayes equation should be the guiding light of decision-support environmental modelling, especially groundwater modelling. Bayes equation makes it clear that decision-support groundwater modelling requires the blending of information of two very different types from two very different sources.
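For reference, Bayes equation can be written as follows. The notation here is generic: k stands for a model’s parameters and h for the observations against which it is history-matched.

```latex
\underbrace{P(k \mid h)}_{\text{posterior}} \;=\;
\frac{\overbrace{P(h \mid k)}^{\text{likelihood}}\;\overbrace{P(k)}^{\text{prior}}}{P(h)}
```

The prior encapsulates expert knowledge and site characterization, while the likelihood encapsulates the information content of field measurements; the posterior blends the two. These are the two very different types and sources of information to which the webinar refers.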
The webinar notes that while Bayes equation tells us what to do, it says little about how to do it. Attention is therefore drawn to subspace methods for the perspectives which they can provide on how information flows from observations to parameters, and then from parameters to predictions. This has some fundamental repercussions for the design of a decision-support model. Three categories of prediction are identified. These are:
- Those that are informed by the past behaviour of a system;
- Those that are uninformed by the past behaviour of a system;
- Those that are partially informed, but not entirely informed, by the past behaviour of a system.
The making of these different types of predictions calls for three different modelling strategies. A decision-support model should therefore be designed with a prediction in mind.
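The subspace perspective behind this classification can be sketched in a few lines of code. Everything below is invented for illustration: the Jacobian matrix is tiny (real models have far more parameters), and the prediction sensitivity vectors are chosen so that each falls into one of the three categories.

```python
import numpy as np

# Hypothetical Jacobian: 3 observations, 5 parameters (values are invented).
# Note that parameters 4 and 5 have no effect on any observation.
J = np.array([[1.0, 0.5, 0.0, 0.0, 0.0],
              [0.2, 1.0, 0.3, 0.0, 0.0],
              [0.0, 0.4, 1.0, 0.0, 0.0]])

# The SVD partitions parameter space into a "solution space" (parameter
# combinations that are informed by the observations) and a "null space"
# (combinations that the observations cannot see).
U, s, Vt = np.linalg.svd(J)
tol = s.max() * 1e-8
r = int((s > tol).sum())   # dimension of the solution space
V1 = Vt[:r].T              # orthonormal basis of the solution space
V2 = Vt[r:].T              # orthonormal basis of the null space

def informed_fraction(y):
    """Fraction of a prediction's sensitivity lying in the solution space."""
    y = np.asarray(y, dtype=float)
    return np.linalg.norm(V1.T @ y) ** 2 / np.linalg.norm(y) ** 2

print(informed_fraction([1, 0, 0, 0, 0]))  # ~1.0: informed by past behaviour
print(informed_fraction([0, 0, 0, 1, 0]))  # ~0.0: uninformed (null space)
print(informed_fraction([1, 0, 0, 1, 0]))  # ~0.5: partially informed
```

A prediction whose sensitivity vector lies mostly in the null space gains little from history matching; its uncertainty must come from the prior.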
Webinar 2: Repercussions for Decision-Support Modelling
Webinar 1 asserted that the focus of decision-support modelling should shift from attempting to simulate precisely what happens under the ground to employing simulation as a basis for data assimilation and uncertainty quantification. With this shift in focus, the metrics for what constitutes good decision-support modelling must also change. So too must the ways in which decision-support modelling is reported and reviewed.
In webinar 1, predictions were classified into three types. In webinar 2, repercussions for model design in each of these three predictive contexts are explored; examples are provided. It is pointed out that the third type of prediction – that which is partly informed by the historical behaviour of a system but not fully informed – poses the greatest challenges. This requires a highly parameterized approach to model construction and history matching in order to avoid predictive bias, and to ensure that the uncertainties of decision-critical model predictions are not under-stated.
Goodness-of-fit metrics are examined. The importance of processing field measurements and their model-calculated counterparts before matching one with the other is stressed; this can mitigate calibration-induced predictive bias at the same time as it may reduce the uncertainties of decision-critical model predictions. The benefits and drawbacks of model complexity are also discussed. It is pointed out that an abstract representation of system geometries, properties and processes may sometimes serve data assimilation and uncertainty quantification imperatives better than “realistic” representation of these things in a model.
The concept of model validation is discussed. The applicability of this concept to decision-support modelling is questioned.
Webinar 3: Model Calibration, Linear Analysis and Sensitivity Analysis
Webinar 3 begins by exploring the important role played by model “dancing partners” such as PEST and PEST++ in decision-support modelling. Their two basic design requirements are identified: a non-intrusive interface with a model, and an ability to parallelize model runs. Next, “model calibration” is given a precise definition. The reasons why one would seek a unique solution to a fundamentally nonunique, ill-posed inverse problem, instead of pursuing a strictly Bayesian approach to history matching, are discussed. The use of pilot points as a parameterization device is then briefly addressed.
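As a rough illustration of the pilot-point device: parameter values are estimated at a handful of locations, and the model grid receives spatially interpolated values. Real workflows normally interpolate by kriging (for example with utilities in the PEST suite or with pyEMU); the simple inverse-distance weighting below, and all numbers in it, are invented for the sketch.

```python
import numpy as np

# Invented pilot-point locations and log-conductivity values at each point.
pp_xy   = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
pp_logk = np.array([0.0, 1.0, 2.0])

def interpolate(cell_xy, power=2.0):
    """Inverse-distance-weighted value at a model cell (kriging stand-in)."""
    d = np.linalg.norm(pp_xy - cell_xy, axis=1)
    if d.min() < 1e-12:                  # cell coincides with a pilot point
        return float(pp_logk[d.argmin()])
    w = 1.0 / d ** power
    return float(w @ pp_logk / w.sum())

print(interpolate(np.array([5.0, 2.0])))  # blended from all three points
```

Adjusting the few pilot-point values during history matching then adjusts the whole spatial field, which is what makes the device attractive for highly parameterized inversion.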
The discussion then turns to linear analysis. Linearization of a model’s behaviour under both calibration and predictive conditions requires a Jacobian (i.e. a sensitivity) matrix. However, two other matrices are also needed before linear analysis can be implemented. These are the prior covariance matrix of parameter uncertainty, and a covariance matrix that expresses the statistics of “measurement noise”; the latter matrix is normally considered to be diagonal.
Armed with all of these matrices, it becomes an easy matter to calculate the prior and posterior uncertainties of model parameters and model predictions. Contributions to the pre- and post-calibration uncertainty of a particular prediction by different parameter groups are also easily enumerated. The worth of existing or imaginary data can also be readily assessed under the premise that the worth of data increases with its ability to reduce the uncertainties of decision-critical predictions. Examples of linear analysis in exploring all of these issues are provided.
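These calculations follow standard first-order, second-moment (FOSM) formulas. A minimal sketch follows, with every matrix invented for illustration; a real analysis would obtain the Jacobian from the model and the covariances from site knowledge and measurement statistics.

```python
import numpy as np

J  = np.array([[1.0, 0.5],
               [0.3, 1.2]])        # Jacobian: d(observation)/d(parameter)
C  = np.diag([1.0, 2.0])           # prior parameter covariance (invented)
Cd = np.diag([0.1, 0.1])           # measurement-noise covariance (diagonal)

# Posterior parameter covariance (Schur complement form):
#   C' = C - C J^T (J C J^T + Cd)^-1 J C
G  = J @ C @ J.T + Cd
Cp = C - C @ J.T @ np.linalg.solve(G, J @ C)

# Propagate to a prediction with sensitivity vector y = d(prediction)/d(parameter).
y = np.array([1.0, -0.5])
prior_var = y @ C @ y              # pre-calibration predictive variance
post_var  = y @ Cp @ y             # post-calibration predictive variance
print(prior_var, post_var)         # conditioning can only reduce the variance
```

Data worth can be assessed in the same framework by recomputing the posterior predictive variance with rows added to (or removed from) the Jacobian and noise matrices, and noting how much the variance of a decision-critical prediction falls.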
Finally, the webinar says a few words about global sensitivity analysis.
Webinar 4: Options for Nonlinear Uncertainty Analysis
The concepts enshrined in Bayes equation are revisited; its terms are explained. It is shown that quantification of parameter and predictive uncertainty requires that the posterior parameter probability distribution be sampled. Where a model has many parameters (as a groundwater model should), this poses some serious numerical problems, particularly if model run times are long and/or a model’s numerical stability is questionable.
A number of different methods for sampling the posterior parameter probability distribution are discussed. Rejection sampling is the easiest to understand and implement. However, this is only viable where decision-critical model predictions are uninformed by the past behaviour of a system. Pseudo-linear methods such as null space Monte Carlo following model calibration are much more numerically efficient. Even more efficient are ensemble methods, for the number of model runs required to explore the posterior parameter probability distribution can be far smaller than the number of parameters that a model actually employs. Examples are provided.
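Rejection sampling can be illustrated with a toy one-parameter “model”. Everything below is invented for the sketch: a real simulator would replace the model() function, and the prior, observation and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(k):
    return 2.0 * k                 # stand-in simulator (illustrative only)

h_obs, sigma = 3.0, 0.5            # "field measurement" and noise std. dev.

# Rejection sampling: draw parameters from the prior, then keep each sample
# with probability proportional to its likelihood. The survivors are samples
# of the posterior parameter probability distribution.
prior = rng.normal(loc=1.0, scale=1.0, size=20000)   # prior: N(1, 1)
like  = np.exp(-0.5 * ((model(prior) - h_obs) / sigma) ** 2)
keep  = rng.uniform(size=prior.size) < like / like.max()
posterior = prior[keep]

print(prior.mean(), posterior.mean())  # posterior pulls toward h_obs / 2
```

The inefficiency is plain to see: most prior samples are discarded, and the acceptance rate collapses as the number of parameters grows, which is why ensemble methods are preferred for highly parameterized models.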
The notion of compatibility between field observations of system state on the one hand, and assumptions underpinning model construction and definition of prior parameter probability distributions on the other hand is explored. The webinar finishes with a discussion of how model-based predictive hypothesis-testing can provide the means to directly implement the scientific method in the context of environmental management.
Webinar 5: Question and Answer
During this webinar John, Catherine and Jeremy join Blair Douglas (BHP) and Keith Brown (Rio Tinto) on a panel. The session is chaired by Craig Simmons (Flinders University). Listeners pose questions to the panellists.
Watch the video.