Recommendations regarding problems with the use of MAG3D

1) We recommend using only real (observed) data. Interpolated data carry additional errors and add no new information. The data do not need to coincide exactly with the mesh. It is important to keep the total number of data points within reason (a few thousand); this can be done simply by removing data points until the number of data you invert is roughly 1 to 3 times the number of surface cells in your mesh (a short sketch of such thinning is given at the end of these notes). Upward continuation is also important, so that the data appear as if they were gathered roughly half a cell thickness above the topography.

2) Errors assigned to the data must be sensible. If you specify, say, 10 nT for every datum, which works out to roughly 0.3% of your maximum values, you will find that this is much too tight a tolerance for the program to try to fit, especially when point 3 (next) is considered. Errors should be specified as Error = (percent of datum) + (a minimum value); see the error-assignment sketch below.

3) Try to avoid working with anomalies that are characterized by only one or two data points. It is difficult for an algorithm to account for such "point" data using only normal-sized grid cells. Anomalies should be characterized by several data points if the program is to have any chance of recovering sensible models.

4) Several anomalies are rather close to the edges of the mesh. See our tutorial at http://www.eos.ubc.ca/research/ubcgif/tutorials/mag3d/practical/index.htm (especially slide 8) for some comments on this.

In general, I would recommend using only real data, specifying sensible errors, working with smaller regions, and keeping zones of interest well within the mesh boundaries. Add extra padding cells if the anomaly is not completely characterized by the available data.

Finally, for problem data sets it often helps to work with a smaller region of the area of interest to find parameters that produce sensible results; that then gives you a starting point for the larger problem. Some of our sponsors have managed huge inversion problems by "tiling" the data - that is, splitting it into overlapping regions that can be inverted individually, then concatenating the results at the end. This should be done with a bit of care, but it has been done. A rough tiling sketch is also given below.
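As a minimal sketch of the data thinning mentioned in point 1 (not part of MAG3D itself), the snippet below randomly subsamples a data array to roughly twice the number of surface cells; the factor of 2 and the use of random selection are illustrative assumptions, not program requirements.

import numpy as np

def decimate(data, n_surface_cells, factor=2, seed=0):
    """Randomly keep about factor * n_surface_cells data points."""
    target = min(len(data), factor * n_surface_cells)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=target, replace=False)
    # Return the kept points in their original order
    return data[np.sort(idx)]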
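The error-assignment rule in point 2, Error = (percent of datum) + (a minimum value), can be written out as a few lines of code. The 2% and 5 nT values used here are hypothetical; choose values appropriate to your survey.

import numpy as np

def assign_errors(data_nT, percent=0.02, floor_nT=5.0):
    """Standard deviation for each datum: (percent of |datum|) + floor."""
    return percent * np.abs(data_nT) + floor_nT

# Example with synthetic total-field anomaly values in nT
data = np.array([120.0, -45.0, 3000.0, 8.5])
print(assign_errors(data))  # [ 7.4   5.9  65.    5.17]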
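The following is a minimal sketch of the tiling idea, assuming the data are stored as easting, northing, and value arrays; the 5 km tile size and 1 km overlap are illustrative choices only, and stitching the inverted tiles back together is left out.

import numpy as np

def tile_data(x, y, values, tile_size=5000.0, overlap=1000.0):
    """Yield (x, y, values) subsets for overlapping square tiles."""
    step = tile_size - overlap
    for x0 in np.arange(x.min(), x.max(), step):
        for y0 in np.arange(y.min(), y.max(), step):
            mask = ((x >= x0) & (x < x0 + tile_size) &
                    (y >= y0) & (y < y0 + tile_size))
            if mask.any():
                yield x[mask], y[mask], values[mask]

Each tile produced this way could be inverted on its own mesh, and the resulting models merged afterwards, which is where the "bit of care" comes in.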