University of Texas at El Paso
Pan American Center for Earth and Environmental Studies
   
Using and Understanding Gravity Data    

The Role of Density

Knowledge of the density of various rock units is essential in gravity studies for several reasons. In fact, a major limitation in the quantitative interpretation of gravity data is the need to estimate density values and to make simplifying assumptions about the distribution of density within the Earth. The Earth is complex, and the variations in density in the upper few kilometers of the crust are large. The use of a single average density in Bouguer and terrain corrections is thus a major source of uncertainty in the calculation of these corrections. This fact is often overlooked as we worry about making very precise measurements of gravity and then calculate anomaly values whose accuracy is limited by our lack of detailed information on density.

A basic step in the reduction of gravity measurements to interpretable anomaly values is calculation of the Bouguer correction, which requires an estimate of density. At any specific gravity station, one can think of the rock mass whose density we seek as being a slab extending from the station down to the elevation of the lowest gravity reading in the study area (Figure 1). If the lowest station is above the datum (as is usually the case), each station shares a slab that extends from this lowest elevation down to the datum, so this portion of the Bouguer correction is a constant shared by all of the stations (Figure 1).

No one density value is truly appropriate, but when using the traditional approach it is necessary to use one value when calculating Bouguer anomaly values. When in doubt, the standard density value for upper crustal rocks is 2.67 gm/cc.
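
To make the role of this density concrete, the attraction of an infinite slab is 2πGρh, which for the standard 2.67 gm/cc works out to about 0.1119 mGal per meter of slab thickness. A minimal sketch of this calculation (plain Python, not code from any particular reduction package) is:

```python
# Attraction of an infinite horizontal slab (the Bouguer slab):
#   g = 2 * pi * G * rho * h
# rho in kg/m^3, h (thickness) in meters; 1 mGal = 1e-5 m/s^2.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(thickness_m, density_kg_m3=2670.0):
    """Slab attraction in milligals."""
    return 2.0 * math.pi * G * density_kg_m3 * thickness_m * 1e5

print(bouguer_slab_mgal(1.0))    # ~0.112 mGal per meter for 2.67 gm/cc
print(bouguer_slab_mgal(100.0))  # ~11.2 mGal for a 100 m slab
```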

In order to make terrain corrections, a similar density estimate is needed. However, in this case the value sought is the average density of the topography near a particular station. It is normal to use the same value as was used in the Bouguer correction, but this need not be the case when the topography and geology are complex.

As mentioned in the discussion of the Bouguer correction, modern digital elevation data are making it possible to construct realistic models of topography that include laterally varying density. Although preferable, this approach still requires an estimate of the density of the column of rock between the Earth's surface and the reduction datum. From a traditional point of view, this approach represents a merging of the Bouguer and terrain corrections, which are then applied to Free Air anomaly values. One can also extend this approach to greater depths, vary the density laterally, and consider the result a geologic model of the upper crust that attempts to predict Free Air anomaly values. The Bouguer and terrain corrections then become unnecessary since the topography simply becomes part of the geologic model being constructed.

When one begins to construct computer models based on gravity anomalies, densities must be assigned to all of the geologic bodies that make up the model. Here one needs to use all of the data at hand to come up with these density estimates. Geologic mapping, drill hole data, and measurements on samples from the field are examples of information one might use to estimate density.

Measurements of Density

Density can be measured (or estimated) in many ways. In general, in situ measurements are better because they produce average values for fairly large bodies of rock that are in place. With laboratory measurements, one must always worry about the effects of porosity, temperature, saturating fluids, pressure, and small sample size as factors that might make the values measured unrepresentative of rock in place.

Many tabulations of typical densities for various rock types have been compiled (e.g., Telford et al., 1990). Thus one can simply look up the density value expected for a particular rock type (Table 1).

Samples can be collected during field work and brought back to the laboratory for measurement. The density of cores and cuttings available from wells in the region of interest can also be measured.

Most wells that have been drilled during the exploration for petroleum, minerals, and water are surveyed by down-hole geophysical logging techniques, and these geophysical logs are a good source of density values. Density logs are often available and can be used directly to estimate the density of rock units encountered in the subsurface. However, in many areas, sonic logs (seismic velocity) are more common than density logs. In these areas, the Nafe-Drake or a similar relationship between seismic velocity and density (e.g., Barton, 1986) can be used to estimate density values.
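
As a hedged illustration, the sketch below converts sonic-log velocities to densities using a published polynomial fit to the Nafe-Drake curve; the coefficients are those of Brocher's (2005) fit, which is an assumption of this example rather than something specified here.

```python
# Sonic-log velocity (km/s) to density (g/cm^3) using a polynomial fit
# to the Nafe-Drake curve (coefficients from Brocher, 2005; roughly
# valid for Vp between about 1.5 and 8.5 km/s).

def nafe_drake_density(vp_km_s):
    return (1.6612 * vp_km_s
            - 0.4721 * vp_km_s**2
            + 0.0671 * vp_km_s**3
            - 0.0043 * vp_km_s**4
            + 0.000106 * vp_km_s**5)

print(round(nafe_drake_density(4.5), 2))  # ~2.46 g/cm^3 for Vp = 4.5 km/s
print(round(nafe_drake_density(6.0), 2))  # ~2.72 g/cm^3 for Vp = 6.0 km/s
```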

The borehole gravity meter is an excellent (but rare) source of density data. This approach is ideal because it infers density from down-hole measurements of gravity. These measurements are thus in situ averages based on a sizable volume of rock, not just a small sample.
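
The interval density between two down-hole gravity stations follows from the difference between the normal free-air gradient and the observed vertical gradient of gravity. A minimal sketch of that standard relation, assuming a free-air gradient of 0.3086 mGal/m, is:

```python
# Apparent (interval) density from borehole gravity:
#   rho = (F - dg/dz) / (4 * pi * G)
# F is the normal free-air gradient and dg/dz is the observed vertical
# gradient of gravity between two down-hole stations.
import math

G = 6.674e-11                                   # m^3 kg^-1 s^-2
FREE_AIR = 0.3086                               # mGal per meter (assumed normal value)
FOUR_PI_G = 4.0 * math.pi * G * 1000.0 * 1e5    # ~0.0839 mGal/m per g/cm^3

def interval_density(delta_g_mgal, delta_z_m):
    """Average density (g/cm^3) of the rock between two stations
    separated vertically by delta_z_m, where gravity increases by
    delta_g_mgal from the upper to the lower station."""
    return (FREE_AIR - delta_g_mgal / delta_z_m) / FOUR_PI_G

print(round(interval_density(2.2, 25.0), 2))    # ~2.63 g/cm^3
```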

The Nettleton technique (Nettleton, 1939) involves picking a place where the geology is simple and measuring gravity across a topographic feature. One then calculates the Bouguer gravity anomaly profile using a series of density values. If the geology is truly simple, the gravity profile will be flat when the right density value is used in the Bouguer and terrain corrections.

One can also use a group of gravity readings in an area and simply find the density value where the correlation between topography and Bouguer anomaly values disappears.
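
A minimal sketch of this correlation approach, assuming station elevations and Free Air anomaly values are already in hand and ignoring terrain corrections, sweeps a range of trial densities and keeps the one whose simple-slab Bouguer anomalies are least correlated with topography:

```python
# Choosing a reduction density by minimizing the correlation between
# topography and Bouguer anomaly values.  elev_m and free_air_mgal are
# NumPy arrays for a set of stations; terrain corrections are ignored.
import numpy as np

def best_reduction_density(elev_m, free_air_mgal,
                           densities=np.arange(2.0, 3.01, 0.01)):
    best_rho, best_r = None, np.inf
    for rho in densities:                                  # g/cm^3
        bouguer = free_air_mgal - 0.04193 * rho * elev_m   # simple-slab reduction
        r = abs(np.corrcoef(elev_m, bouguer)[0, 1])
        if r < best_r:
            best_rho, best_r = rho, r
    return best_rho

# Synthetic test: stations over topography made of 2.5 g/cm^3 rock.
elev = np.linspace(0.0, 300.0, 50)
fa = 0.04193 * 2.5 * elev + np.random.normal(0.0, 0.2, elev.size)
print(best_reduction_density(elev, fa))                    # close to 2.5
```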

Enhancement of Gravity Anomalies (Filtering)

Gravity and magnetic anomalies whose wavelengths are long relative to the dimensions of the geologic objectives of a particular investigation are called regional anomalies. Because shallow geologic features can have large lateral dimensions, one has to be careful, but regional anomalies are usually thought to reflect the effects of relatively deep features. Anomalies whose wavelengths are similar to the dimensions of the geologic objectives of a particular investigation are called local anomalies. In the processing of gravity data, it is usually preferable to separate the regional and local anomalies prior to interpretation. The regional anomaly can be estimated using a variety of analytical techniques. Once this is done, the difference between the observed gravity anomalies and the interpreted regional anomaly is called the residual anomaly.

The techniques used to separate regional and local gravity anomalies take many forms and can all be considered filtering in a general sense (e.g., Blakely, 1996). Many of these techniques are the same as those employed in enhancing traditional remote sensing imagery. The process usually begins with a data set consisting of Bouguer gravity anomaly or magnetic anomaly values, and the first step is to produce an anomaly map such as the one shown in Figure 3.

Gridding

The initial step in processing gravity and magnetic data is the creation of a regular grid from the irregularly spaced data points. This step is required even to create a simple contour map, yet in general-purpose software it may not receive the careful attention it deserves, since all subsequent results depend on the fidelity of this grid as a representation of the actual data. On land, gravity data tend to be very irregularly spaced, with areas of dense data and areas of sparse data. This irregularity is often due to topography in that mountainous areas generally have more difficult access than valleys and plains. It may also be due to difficulty in gaining access to private property and sensitive areas. In the case of marine data, the measurements are dense along the tracks that the ships follow, with relatively large gaps between tracks. Airborne and satellite gravity measurements involve complex processing that is beyond the scope of this discussion. However, once these data are processed, the remainder of the analysis is similar to that of land and marine data.

There are a number of software packages that have been designed for the processing of gravity data, and several gridding techniques are available in these packages. The minimum curvature technique works well and is illustrative of the desire to honor individual data points as much as possible while recognizing that gravity has an inherent smoothness due to the behavior of the Earth's gravity field. In this technique, the surface of minimum curvature is fitted to the data points surrounding a particular grid node, and the value on this surface at the node is determined. One can intuitively conclude that the proper grid interval is approximately the mean spacing between readings in an area. A good gridding routine should honor individual gravity values and not produce spurious values in areas of sparse data. Once the gridding is complete, the grid interval (usually hundreds of meters) can be thought of as analogous to the pixel interval in remote sensing imagery.
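
Dedicated potential-field packages implement minimum curvature gridding directly. As a rough stand-in, the sketch below grids scattered stations with a thin-plate-spline interpolant (which minimizes a closely related measure of curvature), assuming the stations are given as NumPy arrays of projected x/y coordinates in meters:

```python
# Gridding irregularly spaced stations onto a regular grid.  A
# thin-plate-spline interpolant stands in for a true minimum-curvature
# routine; x, y are projected coordinates (m), g is the anomaly (mGal).
import numpy as np
from scipy.interpolate import RBFInterpolator

def grid_anomalies(x, y, g, spacing=500.0, smoothing=0.0):
    interp = RBFInterpolator(np.column_stack([x, y]), g,
                             kernel="thin_plate_spline",
                             smoothing=smoothing)   # 0 honors the readings exactly
    xi = np.arange(x.min(), x.max() + spacing, spacing)
    yi = np.arange(y.min(), y.max() + spacing, spacing)
    gx, gy = np.meshgrid(xi, yi)
    grid = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return xi, yi, grid
```

Choosing the grid spacing near the mean station spacing, as suggested above, helps keep the interpolant from inventing detail in sparsely sampled areas.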

Filtering

The term filtering can be applied to any of the various techniques that attempt to separate anomalies on the basis of their wavelength and/or trend (e.g., Blakely, 1996). The term separate is a good intuitive one because the idea is to construct an image (anomaly map) and then use filtering to separate anomalies of interest to the interpreter from other interfering anomalies (see regional versus local anomalies above). In fact, fitting a low-order polynomial surface (3rd order is often used) to a grid to approximate the regional is a common practice. Subtracting the values representing this surface from the original grid values then creates a residual grid that represents the local anomalies.
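
A minimal sketch of this regional-residual separation, assuming the anomalies have already been gridded, fits a low-order polynomial surface by least squares and subtracts it:

```python
# Regional-residual separation: fit a low-order polynomial surface
# (3rd order here) to a gridded anomaly and subtract it.
import numpy as np

def polynomial_residual(x, y, grid, order=3):
    """x, y: 1-D coordinate arrays; grid: 2-D anomaly array of shape (len(y), len(x))."""
    gx, gy = np.meshgrid(x, y)
    gx = (gx - gx.mean()) / gx.std()     # normalize for a well-conditioned fit
    gy = (gy - gy.mean()) / gy.std()
    terms = [(gx**i * gy**j).ravel()     # all monomials with i + j <= order
             for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, grid.ravel(), rcond=None)
    regional = (A @ coeffs).reshape(grid.shape)
    return regional, grid - regional     # regional surface and residual grid
```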

In gravity studies, the familiar concepts of high-pass, low-pass, and band-pass filters are applied in either the frequency or spatial domain. In Figures 4 and 5, for example, successively longer wavelengths have been removed from the Bouguer anomaly map shown in Figure 3. At least to some extent, these maps enhance anomalies due to features in the upper crust at the expense of anomalies due to deep-seated features.

Directional filters are also used to select anomalies based on their trend. In addition, a number of specialized techniques have been developed for the enhancement of anomaly maps based on the physics of the gravity and magnetic fields; these are discussed below. The various approaches to filtering can be sophisticated mathematically, but the choice of filter parameters or the design of the convolution operator always involves a degree of subjectivity. It is useful to remember the basic steps in enhancing a map of gravity anomalies in order to emphasize features in the Earth's crust:

1) First remove a conservative regional trend from the data. The choice of regional is usually not critical but may greatly help in interpretation (e.g., Simpson et al., 1986). Because the goal is to remove long wavelength anomalies, this step consists of applying a gentle high-pass filter. Over most continental areas, Bouguer anomaly values are large negative numbers, so the usual practice of padding the edges of a grid with zeros prior to applying a Fourier transform and filtering will create large edge effects. One way to avoid this effect is to first remove the mean from the data and grid an area larger than the image to be displayed. However, in areas where large regional anomalies are present, it may be best to fit a low-order polynomial surface to the gridded values and then continue the processing with the residual values with respect to this surface.

2) Then apply additional filters as needed to remove unwanted wavelengths or trends.
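
A bare-bones sketch of wavelength filtering in the wavenumber domain, assuming a regularly gridded anomaly with a known cell size in meters, removes the mean and applies a gentle Gaussian high-pass (or low-pass) filter around a chosen cutoff wavelength:

```python
# Wavelength filtering of a gridded anomaly in the wavenumber domain.
# grid: 2-D array (mGal); spacing: grid interval (m); cutoff_m: the
# wavelength (m) near which the Gaussian filter rolls off.
import numpy as np

def wavelength_filter(grid, spacing, cutoff_m, mode="highpass"):
    data = grid - grid.mean()             # remove the large mean before transforming
    ny, nx = data.shape
    kx = np.fft.fftfreq(nx, d=spacing)    # cycles per meter
    ky = np.fft.fftfreq(ny, d=spacing)
    k = np.sqrt(kx[np.newaxis, :]**2 + ky[:, np.newaxis]**2)
    lowpass = np.exp(-(k * cutoff_m)**2)  # gentle Gaussian low-pass response
    response = 1.0 - lowpass if mode == "highpass" else lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(data) * response))
```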

In addition to the usual wavelength filters, a variety of specialized filters have been developed for gravity data; these include the following (a combined sketch of several of them follows the list):

Upward continuation - A low-pass filtering process that constructs the map that would have resulted had the survey been conducted on a plane at a higher elevation. This process is based on the physical fact that the farther the observation is from the body causing the anomaly, the broader the anomaly. It is mathematically stable because it attenuates short-wavelength anomalies rather than amplifying them.

Downward continuation - A high-pass filtering process that constructs the map that would have resulted had the survey been conducted on a plane at a lower elevation. In theory, this process enhances anomalies due to relatively shallow sources. However, because it amplifies short wavelengths, and thus noise, care should be taken when applying it to anything but very clean, densely sampled data sets.

Vertical derivatives - In this technique, the vertical rate of change of the gravity or magnetic field is estimated (usually the 1st or 2nd derivative). This is a specialized high-pass filter, but the units of the resulting image are no longer milligals or nanoteslas, and the result cannot be modeled without special manipulation of the modeling software. As in the case of downward continuation, care should be taken when applying this process to anything but very clean data sets because of the potential for amplifying noise. This process has some similarities to the non-directional edge enhancement techniques used in the analysis of remote sensing images.

Strike filtering - This technique is directly analogous to the directional filters used in the analysis of remote sensing images. In gravity processing, the goal is to remove the effects of some linear trend with a particular azimuth. For example, in much of the central U.S., the ancient processes that formed the Earth's crust created a northeast-trending structural fabric that is reflected in gravity and magnetic maps of the area and can obscure other anomalies. Thus, one might want to apply a strike-reject filter that deletes linear anomalies whose trends (azimuths) range from N30°E to N60°E.

Horizontal gradients - In this technique, faults and other abrupt geologic discontinuities (edges) are detected based on the high horizontal gradients that they produce. Simple difference equations are usually employed to calculate the gradients along the rows and columns of the grid. A linear maximum in the gradient is interpreted as a discontinuity such as a fault. These features are easy to extract graphically to be used as an overlay on the original gravity or magnetic map or on products such as Landsat images.
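
As a rough sketch of how several of these operators can be implemented (upward continuation and the first vertical derivative in the wavenumber domain, the horizontal gradient magnitude in the space domain), again assuming a regularly gridded anomaly:

```python
# Specialized operators for a regularly gridded anomaly
# (grid in mGal, spacing in meters).
import numpy as np

def _radial_wavenumber(shape, spacing):
    ny, nx = shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=spacing)   # radians per meter
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=spacing)
    return np.sqrt(kx[np.newaxis, :]**2 + ky[:, np.newaxis]**2)

def upward_continue(grid, spacing, height_m):
    """Field on a plane height_m above the observation surface (low pass)."""
    k = _radial_wavenumber(grid.shape, spacing)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-height_m * k)))

def first_vertical_derivative(grid, spacing):
    """dg/dz in mGal/m (high pass); note the units are no longer mGal."""
    k = _radial_wavenumber(grid.shape, spacing)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))

def horizontal_gradient_magnitude(grid, spacing):
    """Edges such as faults appear as ridges in this quantity (mGal/m)."""
    dgdy, dgdx = np.gradient(grid, spacing)
    return np.hypot(dgdx, dgdy)
```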

Computer Modeling

In most applications of gravity techniques, the data processing and qualitative interpretation of maps is followed by a quantitative interpretation in which a profile (or grid) of anomaly values is modeled by constructing an earth model whose calculated gravitational and/or magnetic effect closely approximates the observed profile (or grid). Modeling of profiles of anomaly values has become commonplace and should be considered a routine part of any investigation of the subsurface. For example, a model for a profile across Figure 3 is shown in Figure 6. In its simplest form, the process of constructing an earth model is one of trial-and-error iteration in which one's knowledge of the local geology, data from drill holes, and other data such as seismic surveys are valuable constraints. As the modeling proceeds, one must make choices concerning the density and geometry of the bodies of rock that make up the model. In the absence of any constraints (which is rare), the process is subject to considerable ambiguity since there will be many subsurface structural configurations that can fit the observed data. With some constraints, one can usually feel that the process has yielded a very useful interpretation of the subsurface. However, ambiguities will always remain, just as they do in all other geophysical techniques aimed at studying the structure of the subsurface.

There are countless published articles on possible mathematical approaches to the modeling. However, for the two-dimensional case (i.e., the modeling of profiles drawn perpendicular to the structural grain in the area of interest), a very flexible and easy approach is used almost universally. This technique is based on the work of Hubbert (1948), Talwani et al. (1959), and Cady (1980), although many groups have written their own versions of this software with increasingly effective graphical interfaces and output. The original computer program was published by Talwani et al. (1959), and Cady (1980) was among the first to introduce an approximation (called 2 1/2 D) that allows for a certain degree of three dimensionality. In the original formulation of Hubbert (1948), the earth model was composed of bodies of polygonal cross section that extended to infinity in and out of the plane of the profile of gravity readings. In the 2 1/2 D formulation, the bodies can be assigned finite strike lengths in both directions. Today, anyone can have a 2 1/2 D modeling program running on their PC.
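
A bare-bones sketch of the underlying two-dimensional calculation, the line-integral formula of Hubbert (1948) and Talwani et al. (1959) for a body of polygonal cross section (without the 2 1/2 D end corrections), might look like the following; the station position, coordinates, density contrast, and example body are all hypothetical:

```python
# Vertical gravity effect of a 2-D body of polygonal cross section
# (after Hubbert, 1948, and Talwani et al., 1959).  Coordinates in
# meters with z positive downward, density contrast in kg/m^3, result
# in mGal.  The sketch assumes the whole body lies below the station,
# the station is not on a vertex, and the vertices are listed in
# clockwise order as the section is drawn with depth increasing downward.
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2

def talwani_gz(x_station, xv, zv, density_contrast):
    xv = np.asarray(xv, dtype=float) - x_station    # shift origin to the station
    zv = np.asarray(zv, dtype=float)
    total = 0.0
    n = len(xv)
    for i in range(n):
        x1, z1 = xv[i], zv[i]
        x2, z2 = xv[(i + 1) % n], zv[(i + 1) % n]
        r1, r2 = np.hypot(x1, z1), np.hypot(x2, z2)
        theta1, theta2 = np.arctan2(z1, x1), np.arctan2(z2, x2)
        if np.isclose(z1, z2):                      # horizontal edge: z is constant
            total += z1 * (theta2 - theta1)
            continue
        alpha = (x2 - x1) / (z2 - z1)
        beta = (x1 * z2 - x2 * z1) / (z2 - z1)
        total += beta / (1.0 + alpha**2) * (np.log(r2 / r1)
                                            - alpha * (theta2 - theta1))
    return 2.0 * G * density_contrast * total * 1e5  # m/s^2 to mGal

# Hypothetical example: a 2 km wide, 1 km thick body buried 500 m deep
# with a +300 kg/m^3 density contrast, observed directly over its center.
xs = [-1000.0, 1000.0, 1000.0, -1000.0]              # clockwise, z down
zs = [500.0, 500.0, 1500.0, 1500.0]
print(round(talwani_gz(0.0, xs, zs, 300.0), 2))      # ~6.5 mGal
```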

The use of three-dimensional approaches is not as common as it should be because of the complexity of constructing and manipulating the earth model. However, there are many 3-D approaches available (e.g., Blakely, 1996). As discussed above, a full 3-D calculation of the gravitational attraction of the topography using a modern digital terrain model is the ultimate way to calculate Bouguer and terrain corrections as well as to construct earth models. This type of approach will be employed more often in the future as terrain data and the necessary computer software become more readily available.

Gravity modeling is an ideal field in which to apply formal inverse techniques. This is a fairly complex subject mathematically. However, the idea is to let the computer automatically make the changes to a starting earth model that the interpreter constructs. Thus, the interpreter is saved from tedious "tweaking" of the model to make the observed and calculated values match. In addition, the thinking is that the computer will be less biased than a human. The process can also provide formal estimates of the uncertainties in the interpretation. Inverse modeling packages are readily available and can also run on PCs.
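
Because the calculated anomaly is linear in the density contrasts of a fixed set of bodies, the simplest inverse step can be posed as an ordinary least-squares problem. A hypothetical sketch, assuming each column of A holds the anomaly that one body with unit density contrast would produce at the stations, is:

```python
# A minimal linear inverse step: with body geometries held fixed, the
# observed anomalies d (mGal) depend linearly on the density contrasts
# m, so d = A @ m, where column j of A is the anomaly that body j with
# unit density contrast produces at every station (A can be built by
# running a forward routine, such as the polygon calculation above,
# once per body).  Solving for geometry as well makes the problem
# nonlinear and is usually handled iteratively around a starting model.
import numpy as np

def invert_densities(A, d):
    m, *_ = np.linalg.lstsq(A, d, rcond=None)   # least-squares density contrasts
    return m, A @ m                             # contrasts and predicted anomalies
```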

