
Permafrost mapping
Inversions


 


This page concentrates on inversions of dipole-dipole data performed using UBC-GIF methods. Software available from the instrument manufacturer performs a similar inversion, but with some differences in the model objective function. For information on the commercial program, see AGI's web page describing the Res2DInv program. Results obtained in the field using Res2DInv are compared with UBC-GIF results on the interpretation page.

For an integrated interpretation of the geology, involving all three geophysical data sets, see the interpretation page.


Data and errors; inputs for inversion

Inversion is the process of estimating the true distribution of intrinsic electrical properties. Data plotted in pseudosections are apparent resistivities: "average" values that take into account all materials within range of each measurement. The UBC-GIF inversion process requires the original observed voltages and a corresponding reasonable estimate of each datum's reliability. An error estimate of 5% to 10% is usually adequate, but it is common to have occasional data points that are excessively noisy. In the data set shown below, severely noisy data points have been removed, other suspect points have been assigned large errors, and the remaining "good" data are assigned errors as a percentage plus a small constant value. The constant term ensures that data close to the noise level (especially at large electrode separations) receive proportionally larger errors. Data and corresponding errors are shown in the figures below.
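The percentage-plus-floor error scheme described above can be sketched as follows. This is a minimal illustration, not the UBC-GIF implementation; the default floor value (a small fraction of the largest datum) is an assumption, since the page does not state one.

```python
import numpy as np

def assign_errors(voltages, pct=0.10, floor=None):
    """Standard deviation for each datum: a percentage of its
    magnitude plus a small constant floor. The floor dominates for
    small voltages (large electrode separations), giving those data
    proportionally larger errors."""
    v = np.abs(np.asarray(voltages, dtype=float))
    if floor is None:
        # Assumed default: floor scaled from the largest datum.
        floor = 1e-3 * v.max()
    return pct * v + floor
```

Because the floor is constant, a small voltage receives a larger relative error than a large one, which is exactly the behaviour the text describes for data near the noise level.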


Raw data, with the worst data points removed. This is the same as the pseudosection
displayed on the previous page on pre-processing.
 
Errors assigned to each data point, plotted in the same way as a pseudosection.
Most data points have close to 10% error assigned, but data suspected of being noisy
have larger error factors assigned.

Default model

Once data and corresponding errors have been prepared, inversion can proceed. The inversion methodology must provide many options for controlling the severe non-uniqueness of the inversion problem. In this case history, various results are shown, all of which are consistent with the data gathered.

Using default parameters to control the outcome, the errors shown above were found to be too small. Rather than adjusting each data point individually, the global data misfit was increased so that the inversion could converge to a model consistent with the large-scale trends in the data. The figure below shows the complete model, including the "padding zones" outside the region of interest, which are required to allow for controlled boundary conditions. It illustrates how cells in the earth model that are not constrained by data (i.e. the padding zones) revert to the assigned reference model value, in this case an automatically chosen constant resistivity of 2184 Ohm-m. Recall that the surveyed line extends from 0 to 400 metres.

The two graphs illustrate convergence behaviour as the program works its way through a number of iterations. Usually, non-linear inverse problems must be solved iteratively. As inversion progresses, the program first adjusts the model to obtain the desired data misfit (blue curve). Then this value of the difference between predicted and observed data is held constant while the "size of the model" (or model norm, red curve) is adjusted to reach a minimum. Thus, the combined minimization of the model objective function and the data misfit is illustrated.
 

Quality control

How should quality of results be assessed? There are several aspects to consider.

  1. Convergence behaviour should be checked. The graphs of misfit and model norm above show that the algorithm behaved consistently with guidelines discussed in program documentation.
  2. The difference between observed data and data predicted by the resulting model should be examined. It is common to plot the observed and predicted data sets side by side, but it is usually more instructive to plot a map of the difference between predictions and observations. This misfit map should be essentially random and evenly distributed about zero. That is largely true here, although, as expected, the known "bad" data points dominate the misfit map. The next figure shows the misfit between observed and predicted data. Values are normalized by the assigned errors so that they can be compared regardless of position within the pseudosection.
  3. The model itself should be considered in terms of expectations. Were reasonable structures and physical property values recovered? Does the model appear more "chaotic" than expected (try loosening the misfit), or smoother than it should be (try tightening the misfit)?
  4. Alternate models should be generated so that features that are required by data can be more reliably interpreted.

Alternate models

Changing the background model
Several different inversions should be performed to help identify those features required by the data versus features that are artifacts of the inversion. For example, it is interesting to ask whether the conductive (blue) region stops at roughly 60 metres because the data could not see any deeper, or whether this conductor actually extends to greater depths. The two figures here show two valid models which help address this question.

The original model is shown first.

The recovered model from a second inversion which used an assigned background resistivity of 250 Ohm-m is shown next.  It appears that the zone under the conductor, where resistivities are around 500 Ohm-m, is required because it is present in both models. 

Adjusting errors to improve convergence
This would probably be a useful step for data such as these because convergence was only attained by setting a global criterion to be six times the theoretical optimum. Assigning errors is one of the trickier aspects of inversion because there is rarely any rigorous information about the reliability of each data point. Performing the identical survey a statistically reasonable number of times would provide this information, but that is clearly out of the question for practical reasons. For this case history, we have not taken the time to examine each datum carefully in the context of its neighbours, so we cannot say whether this would improve our understanding of the subsurface.

A "smaller" survey
For this survey, it is interesting to ask what results would have been obtained if the survey had incorporated only the shallower data. Since the "deeper" data are rather noisy, it is worth asking what can be recovered about the Earth's properties when only the best data points are used in the inversion. For the model below, all data plotted below n=6 in the pseudosection were eliminated. The inversion was run using all default parameters.

Convergence curves show behaviour in the default mode. Data misfit is reduced until it stabilizes, then the misfit is increased a small amount and the model norm is further reduced. This takes longer than when a specific target misfit is assigned, but it tends to minimize the amount of trial and error needed to find an appropriate misfit value, especially when data are noisy. Note that the colour scale is slightly different from the other models on this page.

Comparison of the earlier models with this result of inverting a reduced data set suggests that the extra data were indeed of great value. This reduced model has only "seen" down to roughly 60 metres rather than down to 120 metres, which is the depth of investigation we estimate for the complete data set in the following section. On the other hand, if the job had only required information about the top 50 metres or less, then the smaller scale (cheaper) survey would likely have been adequate.

Depth of investigation

A more rigorous way of defining the depth of investigation involves carrying out two inversions with very different background values. A modified ratio of these two models is then computed at each model cell, and the resulting value (between 0 and 1) is used to assign a level of confidence to each cell. A confidence level of 1 implies that the cell's value is primarily dependent on the data, and a confidence level of 0 implies that the cell's value is controlled primarily by the reference value. Those cells can be ignored in the interpretation, since the survey provided no constraints on that region of the earth. The result of this type of analysis can be conveniently displayed as an image of the model with a hatched region covering cells whose confidence level falls below an assigned threshold value.
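The ratio-based confidence measure can be sketched as follows. The exact UBC-GIF formula is not given on this page, so the normalization below is an assumption, loosely following the two-inversion comparison just described (models are taken as log10 resistivity, and the ratio is clipped to [0, 1]).

```python
import numpy as np

def doi_confidence(m1, m2, ref1, ref2):
    """m1, m2: models (e.g. log10 resistivity per cell) recovered from
    two inversions whose reference halfspace values are ref1 and ref2.
    Where the two recovered models agree, the data controlled the
    result (confidence near 1); where each model reverts to its own
    reference, the reference controlled it (confidence near 0)."""
    r = (np.asarray(m1, float) - np.asarray(m2, float)) / (ref1 - ref2)
    return 1.0 - np.clip(np.abs(r), 0.0, 1.0)

# Hatch (ignore) cells whose confidence falls below a threshold,
# e.g. the 0.3 used for the figure described below.
conf = doi_confidence([2.4, 3.0], [4.0, 3.0], 2.4, 4.0)
hatched = conf < 0.3
```

In the two-cell example, the first cell reverts entirely to the reference values (log10 of 250 and 10,000 Ohm-m) and is hatched, while the second cell recovers the same value in both inversions and is kept.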

To summarize, this image was generated by inverting the data twice: once with a reference model of a 250 Ohm-m halfspace, and a second time with a 10,000 Ohm-m reference model. The two models were compared cell by cell, and a confidence threshold of 0.3 was set. Finally, the original model was displayed with regions not constrained by the data shown as a hatched zone. The resulting depth of investigation appears to be near 120 m. Regions to the left and right of the survey line are also recognized to contain unreliable material property estimates.


© UBC-GIF  April 23, 2007  