University of Texas at El Paso
Pan American Center for Earth and Environmental Studies
   
Getting Started with Remote Sensing    


Remote Sensing
The science of acquiring, processing, and interpreting measurements acquired from aircraft and satellites. (Sabins, 1997)

The Basic Steps to Get Started
Images constructed from remote sensing data are playing a constantly increasing role in our daily lives, scientific research, and a wide variety of applications to monitor and study the Earth. A vast array of data are available at little or no cost. Thus, it is important for more non-specialists to be able to understand and use these data. It is possible to get started using remote sensing data with only modest thought and effort.

  1. Know the physics of the problem!
    This is not all that complicated. We are dealing with electromagnetic energy (light, heat, radar), and we are all familiar with visible light. Much of the physical intuition needed comes naturally because we deal with light every day (i.e., bright means high energy). As you look at images constructed from remote sensing data, do not be afraid to use your physical intuition to understand terms such as:

    See -- Resolve -- Recognize -- Detect -- Signature -- Contrast -- Brightness -- Tone -- Texture

  2. Know the numerical aspects of the problem!
    Here again, the basics are not all that complicated. However, it is crucial to understand the digital nature of modern data. The digital values represent the average electromagnetic energy reflected or emitted from a pixel covering a small area of the Earth's surface (about 30 m by 30 m in the case of Landsat). The individual mathematical operations are not complex, but the overall numerical manipulation of these data can be. One has to beware of the "garbage in, garbage out" rule: be sure you have some idea of what a good result of a process would look like before applying it to your data.

  3. Get to know one of the popular software packages!
    Processing of digital data to make images is an essential part of remote sensing. The main processing steps are standard, and if you can execute them using one software package, you will be able to quickly learn another. Free software packages are available to get you started (Globe and MultiSpec). Getting the data into a processing package can be a challenge that involves moving files from one computer system to another. Bite the bullet and learn how to do this yourself.
  4. Know the problem you are trying to solve!
    Generic remote sensing means little. You have to be knowledgeable about a problem before you can do much to solve it. Ground truth is always important!

  5. Know the sources of data!
    There is a huge amount of free and low-cost data in the public domain. It is a challenge just to keep up with what is available. Those who know where to find data quickly and cheaply are always valuable to their organization.

  6. The web is your friend!
    The web is a major tool in remote sensing. It is the primary source of data, information on new developments, and educational materials.

Digital Image Processing
Digital processing involves many possible procedures in which the numbers that represent the image are manipulated. Many of these procedures involve resampling, which must respect the basic resolution of the data. Most modern data are processed so that geometric effects are corrected prior to delivery to the user. In general, the basic steps in processing are:

  1. Image restoration
    This step involves repairing flaws in the image, and most importantly, adjusting the image so that it is an accurate cartographic product.

  2. Image enhancement
    This step involves a wide variety of processes that make various aspects of the image clearer. Commonly applied examples are increasing the contrast ratio of the image (contrast stretch), mosaicking adjacent images, spatial filtering, and enhancing edges in the data.

  3. Information extraction
    This step generally consists of statistical processes which often automatically extract information and present it in either image or graphical form.

  4. Linkage to a database and a GIS
    The job is not really complete until the data are stored in a structured system and geographic information is added to the image to help the user locate features in a map sense.

Some Rules to Remember

  1. Garbage in; garbage out (i.e., always look at your data first).
  2. Never apply a process unless you are sure that you would recognize a good result if you saw one. If it does not look right, it probably isn't right.
  3. Do not hesitate to try different processes or several values of parameters required for a certain process. You cannot hurt anything and you will develop an experience base and learn what parameters are most important. Remember that, from an image analysis point of view, the processing steps that produce an image that provides the information you need are, by definition, the right steps.

Image Restoration

  • Noise removal
    There are several possible sources of noise in scanning systems. Since the data are so dense, simple moving-window operators can usually spot spurious values and replace them with some sort of average of neighboring values; a brief sketch of this idea appears after this list. Examples of such operations include restoring periodic line dropouts, removing banding, and removing random noise. They are generally foolproof, and simply viewing the image after the process is applied can confirm that it worked properly. Most modern data come with such problems already fixed.

  • Correction for atmospheric scattering
    The effect of haze is to reduce contrast. In a digital image, the effect is to make all the values in a band high relative to an IR band, where atmospheric effects are minimal. The cure is simply a DC shift (i.e., subtraction of a constant) applied to the hazy band; a dark-object subtraction sketch is given after this list. More sophisticated approaches exist that involve models of the atmosphere's response to the incoming EM energy from the Sun.

  • Geometric Restoration
    Images usually include geometric distortions due to changes in the attitude and altitude of the spacecraft. These distortions are not systematic and must be dealt with on an image-by-image basis. The procedure is to pick a series of ground control points, which are places on the ground that can be clearly seen on the image. The coordinates of these points are determined from maps or Global Positioning System (GPS) surveys. A surface is then fitted to these points mathematically, and this surface is used to predict corrected locations for the pixels in the distorted image. The process of moving the pixels to these new locations is often referred to as warping the image. It can also require resampling of the digital values, since a given pixel in the original image may overlap several pixel locations in the corrected image. The "nearest neighbor" approach avoids this by taking each new pixel location and assigning it the digital value of the pixel from the original image that comes closest to overlying it after correction. Sketches of these restoration operations follow.
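
For the noise-removal step described above, the moving-window idea can be sketched as below. This is a minimal illustration, assuming NumPy and SciPy are available; the array name `band`, the 3x3 window, and the 50-DN threshold are illustrative choices, not a prescribed recipe.

```python
# Sketch of moving-window noise removal (see "Noise removal" above).
import numpy as np
from scipy.ndimage import median_filter

def despike(band, threshold=50):
    """Replace pixels that differ sharply from their 3x3 neighborhood median."""
    local = median_filter(band.astype(float), size=3)  # moving-window median
    spikes = np.abs(band - local) > threshold          # flag spurious values
    cleaned = band.astype(float)
    cleaned[spikes] = local[spikes]                    # substitute a neighborhood value
    return cleaned
```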
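For the atmospheric-scattering correction, the "DC shift" amounts to a dark-object subtraction. A minimal sketch, assuming the hazy band is a NumPy array of digital numbers:

```python
# Sketch of the DC-shift haze correction (see "Correction for atmospheric
# scattering" above); assumes the darkest object in the scene should record near zero.
import numpy as np

def dc_shift(hazy_band):
    offset = hazy_band.min()                      # haze raises even the darkest pixels
    corrected = hazy_band.astype(float) - offset  # subtract a constant from the band
    return np.clip(corrected, 0, None)
```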
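For geometric restoration, the sketch below shows the warping and nearest-neighbor resampling steps. The first-order (affine) fit, the array names (`gcp_map`, `gcp_img`, `image`), and the output grid parameters are all illustrative assumptions; operational software uses higher-order polynomials and map projections.

```python
# Sketch of GCP-based warping with nearest-neighbor resampling (see
# "Geometric Restoration" above). gcp_map holds (x, y) map coordinates and
# gcp_img the matching (col, row) image positions.
import numpy as np

def fit_affine(gcp_map, gcp_img):
    """Least-squares fit of (col, row) as a first-order polynomial in (x, y)."""
    A = np.column_stack([gcp_map[:, 0], gcp_map[:, 1], np.ones(len(gcp_map))])
    coeffs, *_ = np.linalg.lstsq(A, gcp_img, rcond=None)  # 3 x 2 coefficient matrix
    return coeffs

def warp_nearest(image, coeffs, out_shape, x0, y0, dx, dy):
    """Fill a corrected (map-registered) grid from the nearest original pixel."""
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    x = x0 + cols * dx                             # map coordinates of output pixels
    y = y0 - rows * dy
    src = np.stack([x, y, np.ones_like(x)], axis=-1) @ coeffs
    c = np.rint(src[..., 0]).astype(int)           # nearest-neighbor column
    r = np.rint(src[..., 1]).astype(int)           # nearest-neighbor row
    ok = (r >= 0) & (r < image.shape[0]) & (c >= 0) & (c < image.shape[1])
    out = np.zeros(out_shape, dtype=image.dtype)
    out[ok] = image[r[ok], c[ok]]
    return out
```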

Image Enhancement

Contrast Enhancement
This is a very common procedure whose goal is to increase the contrast ratio of an image. The process is simply one of examining a histogram of the digital values for an image and rescaling them so that the values which make up the bulk of the variation in the image are stretched over the full range of possible values. There are a number of approaches available in most software packages, but a simple linear stretch is usually sufficient.

A major cause of low contrast in images is scattering, which is due to multiple collisions of the electromagnetic waves with particles and gases in the atmosphere. However, some terrains inherently have low contrast; for example, polar areas covered with snow are uniformly bright and a basaltic volcano is uniformly dark. The key control on the nature and amount of scattering is the relationship between the wavelength of the electromagnetic wave and the size of the particles and molecules encountered.

  • Selective Scattering
    Short wavelengths are scattered more strongly, producing the blue sky, for example.
  • Nonselective Scattering
    Due to large particles; all wavelengths are scattered equally, producing white light. For example, water droplets produce white clouds.

Scattering produces illumination but reduces the contrast ratio. In the case of film, contrast is increased by using filters (a haze filter or IR film). In the case of digital images, contrast stretching (transformation) is a basic step in image processing that is applied to almost every data set. The concept is simple: for most images, a histogram showing the number of pixels that take on each digital value reveals that the values span only a limited range of the radiometric resolution available (usually 0 to 255), so the contrast is reduced. The most common approach is the linear contrast stretch (often with the highest 1-2% and lowest 1-2% of the values excluded), in which the highest recorded value is transformed to the highest value possible and the lowest recorded value to the lowest value possible; everything in between is adjusted in a linear fashion. A Gaussian stretch is a similar transformation but assumes the digital values should have a bell-shaped distribution. A histogram equalization approach attempts to put a similar number of digital values in every portion of the distribution.
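
A minimal sketch of the linear contrast stretch just described, assuming an 8-bit band held in a NumPy array; the 2% clip points are the usual illustrative choice.

```python
# Sketch of a 2% linear contrast stretch (see the discussion above).
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    lo, hi = np.percentile(band, [low_pct, high_pct])  # exclude the extreme tails
    scaled = (band.astype(float) - lo) / (hi - lo)     # map [lo, hi] onto [0, 1]
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)
```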

    Density Slicing
    This process is nothing more than assigning a single color or shade of gray to a whole range (slice) of digital values. This makes the map appear terraced, and the idea is that each level of the slice corresponds to some aspect of the image.
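
A minimal sketch of density slicing, assuming a NumPy band and analyst-chosen slice boundaries (the values below are illustrative).

```python
# Sketch of density slicing: each range of digital values gets one level.
import numpy as np

def density_slice(band, boundaries=(64, 128, 192)):
    return np.digitize(band, bins=boundaries)  # class 0..len(boundaries) per pixel
```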

    Edge Enhancement
    The fact that a digital image is composed of resolution cells that have finite dimensions tends to blur edges (i.e., faults, roads, land use boundaries, etc.). Thus, edge enhancement is a process that attempts to sharpen the edges in an image. Mathematically, this amounts to taking spatial derivatives (gradients). Where there is only a small change in the image, there will be little effect; areas with changes will be sharpened. This approach can be applied so that edges with any geographic trend are enhanced (non-directional filter) or so that only edges with a certain trend are enhanced (directional filter).
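
A minimal sketch of edge enhancement by spatial differencing, assuming NumPy and SciPy; the Laplacian and directional kernels below are common textbook choices, and the `band` array and edge weight are illustrative assumptions.

```python
# Sketch of edge enhancement by spatial filtering (see above).
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]])  # non-directional: responds to edges of any trend

NS_EDGES = np.array([[-1, 0, 1],
                     [-1, 0, 1],
                     [-1, 0, 1]])     # directional: enhances north-south trending edges

def edge_enhance(band, kernel=LAPLACIAN, weight=1.0):
    edges = convolve(band.astype(float), kernel)  # spatial derivative of the image
    return band + weight * edges                  # add the edges back to sharpen
```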

    IHS Transformation
    The intensity, hue, and saturation approach is an alternative way to think of the colors in an image. This approach can be used to enhance an image by for instance contrast stretching the saturation component and then transforming back to the RGB system. (see Plate 11 of Sabins, 1987). Another reason to employ this simple transformation is to replace the intensity with some other sort of data such as SPOT or IRS-C and then do the inverse transformation to obtain the RGB color image at a higher resolution.

    Digital Mosaics
    It always seems that one’s area of interest lies near the boundary between two scenes. Thus, it is necessary to merge the data from these scenes into a mosaic. This process involves spatially merging the data sets and then matching their histograms so that the color schemes match and the seam between the two scenes is not visible.
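
A minimal sketch of the histogram-matching step, assuming the two overlapping 8-bit bands from the adjacent scenes are held in NumPy arrays named `scene` and `reference` (illustrative names).

```python
# Sketch of histogram matching so the seam between two scenes is less visible.
import numpy as np

def match_histogram(scene, reference):
    """Remap `scene` so its cumulative histogram follows that of `reference`."""
    s_vals, s_counts = np.unique(scene, return_counts=True)
    r_vals, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / scene.size
    r_cdf = np.cumsum(r_counts) / reference.size
    lookup = np.interp(s_cdf, r_cdf, r_vals)       # value-to-value lookup table
    return lookup[np.searchsorted(s_vals, scene)]  # apply it pixel by pixel
```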

    Stereo Pairs and Perspective Images
    With the emergence of readily available digital elevation data, many new approaches to enhancing images are emerging. For example, the digital elevation data can be used to create synthetic stereo pairs and to create perspective views by draping the image over the topography.

Information Extraction

    Principal Component Images
    In this approach, the goal is to make linear combinations of data from multiple bands so that most of the variation (i.e., independent data) can be expressed in a small number (ideally 3) of components of transformed data. The mathematics is messy but straightforward.

    Any three principal components can be displayed as an RGB image and interpreted. A subset of the n principal components can be thought of as reducing the dimensionality of the data set thus simplifying further analysis. The advantage of this approach is that all of the bands of the data set make some contribution to the principal components.

    A challenge with multispectral data is finding an efficient way to extract the information they contain. One consideration is that multispectral data sets usually contain bands that are highly correlated (Figure 1). This implies that the data from two such bands provide essentially the same information, and principal component analysis is a technique commonly employed to exploit this fact. The mathematics behind this approach is simple when the case of two or three bands is considered but becomes more abstract as one tries to imagine the process in n dimensions. In the case of two bands (Figure 1), one finds a linear transformation that defines a new set of axes, one of which displays the maximum variation. This first principal component thus becomes the basis for a new set of orthogonal axes, each of which in turn contains a decreasing amount of the variance.


    Figure 1
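
A minimal sketch of computing principal component images, assuming NumPy and a band stack held in an array of shape (n_bands, rows, cols); the function name and layout are illustrative.

```python
# Sketch of principal component images from a multiband data set.
import numpy as np

def principal_components(bands):
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(float)        # one row of pixel values per band
    X -= X.mean(axis=1, keepdims=True)            # remove the band means
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))  # eigenvectors of the band covariance
    order = np.argsort(eigvals)[::-1]             # sort by decreasing variance
    pcs = eigvecs[:, order].T @ X                 # project the pixels onto the PCs
    return pcs.reshape(n, rows, cols), eigvals[order]
```

Any three of the returned components can then be assigned to red, green, and blue for display, as described above.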

    Ratio Images
    Ratio images are derived by simple pixel-by-pixel division of the digital values from two bands of data. One advantage of ratio images is that they minimize differences in illumination (i.e., reduce the effects of shadows). Another advantage is that they emphasize differences in spectral reflectance (Figure 2).

    Figure 2. A ratio image tends to emphasize differences in reflectance at specific wavelengths.

    One disadvantage is that they lose the information contained in the absolute values of reflectance data, especially the overall reflectivity or albedo of a region of an image (Figure 3). Another disadvantage is that they tend to amplify noise (Figure 4).


    Figure 3. A ratio image tends to de-emphasize differences in overall reflectance (albedo).

    Figure 4. A ratio image tends to emphasize noise.
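
A minimal sketch of the pixel-by-pixel ratio described above, assuming two co-registered bands in NumPy arrays; the small epsilon and 2% display stretch are illustrative choices.

```python
# Sketch of a ratio image, stretched to 8 bits for display.
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    ratio = band_a.astype(float) / (band_b.astype(float) + eps)  # avoid divide-by-zero
    lo, hi = np.percentile(ratio, [2, 98])                       # stretch for display
    return np.clip((ratio - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```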

    Multispectral classification
    In general, classification schemes involve some form of pattern recognition in an effort to compress all of the information in a multispectral data set into a single image which depicts the major types of surface reflectivity in a manageable number of colors (shades of gray, symbols, etc.). For example, a few types of rocks and vegetation, urban areas, and water might be identified as representing the classes. The basis for the classification is the spectral signature of each class. This procedure takes many forms, but the goal is to extract from the image or measure in the field the spectral signature of a particular rock type, vegetation type, etc. and then search the image for this signature. These approaches can be supervised (a human defines the classes) or unsupervised (automatic).

    • Supervised
      In supervised classification schemes, a series of training sites representing known classes is selected (e.g., you know that a certain field recognizable on the image was planted with well-developed corn, or that a certain rock outcrop is composed of sandstone). The spectral signature of each class (a pattern) is then used as the basis for a search for pixels that have this signature. The process employs all of the bands (or principal components), so it is n-dimensional. The 2-D example given in Figure 5 shows what the process is generally like with idealized data. The process requires some work to choose classes that are distinct and meaningful. There are many different statistical approaches that can be used in classification schemes (a simple minimum-distance version is sketched after Figure 5), but one needs to consider basic aspects of the data, such as whether the classes are tightly grouped (A and D, Figure 5) or dispersed (B and C, Figure 5) and whether they are strongly correlated (A and B).
    • Unsupervised
      Unsupervised classification schemes proceed in a manner similar to supervised ones, but the classes are assigned automatically by the computer. This has the advantage of being unbiased by the interpreter and may turn up patterns that would otherwise have been missed. On the other hand, it may produce unlikely class groupings.

    Figure 5. Examples of classes having different statistical properties.
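
As noted above, many statistical approaches can be used; the sketch below uses a simple minimum distance to the class means, assuming NumPy, a band stack of shape (n_bands, rows, cols), and training sites given as lists of (row, col) pixels. The names and the distance rule are illustrative, not the only option.

```python
# Sketch of supervised classification by minimum distance to the class means.
import numpy as np

def classify_min_distance(bands, training):
    """training maps class name -> list of (row, col) training pixels."""
    n, rows, cols = bands.shape
    X = bands.reshape(n, -1).astype(float).T          # one spectral vector per pixel
    names = list(training)
    means = np.array([np.mean([bands[:, r, c] for r, c in training[name]], axis=0)
                      for name in names])             # mean signature of each class
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(rows, cols), names  # nearest class per pixel
```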

    Change Detection


    With today's emphasis on global change and environmental impacts, detecting change between images acquired at two different times is a common procedure. There are several approaches, but the simplest is to carefully correct two images for the effects of differences in sun angle, distance to the Sun (anniversary images are ideal), atmospheric conditions, and sensor variations. One then carefully georeferences the images and simply subtracts the pixel values of one from the other. The differences can then be contrast stretched and displayed.

    The accompanying figure illustrates urban sprawl in El Paso, Texas, from 1991 to 1997. Image by John Seeley.
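
A minimal sketch of change detection by differencing, assuming the two dates are already radiometrically corrected and georeferenced to each other and held in NumPy arrays; the 2% display stretch is an illustrative choice.

```python
# Sketch of change detection by pixel-by-pixel differencing of two dates.
import numpy as np

def change_image(older, newer):
    diff = newer.astype(float) - older.astype(float)  # positive where brightness increased
    lo, hi = np.percentile(diff, [2, 98])             # contrast stretch for display
    return np.clip((diff - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```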

    Color Representations

    We usually display a single band of an image as shades of gray (intensities of 0 to 255 for 8 bit data). However, color formed from combining three bands brings out the full richness of remote sensing data.

    RGB

    Red, green, and blue are called additive colors (think of mixing beams of colored light), and most displays are produced this way. In remote sensing, the colors are not truly added, in that the screen of the computer monitor is composed of pixels that consist of three segments illuminated by the red, green, and blue guns of the system. These segments are so small that they merge and give the full range of colors. For 8-bit data, the combinations of 256 intensity values for each of the three colors lead to 256 x 256 x 256, or about 16.7 million, possible colors.
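
A minimal sketch of assembling such a composite, assuming three already-stretched 8-bit bands in NumPy arrays; which bands go to which color gun is the analyst's choice.

```python
# Sketch of building an RGB composite from three 8-bit bands.
import numpy as np

def rgb_composite(red_band, green_band, blue_band):
    return np.dstack([red_band, green_band, blue_band])  # rows x cols x 3 display array
```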

    Subtractive colors
    Yellow, magenta, and cyan. These colors are mixtures of the additive colors (yellow - red & green, magenta - blue & red, cyan - blue & green). We use the term subtractive because we think of light passing through a filter such as a yellow one where blue is subtracted and red and green remain.

    IHS
    Intensity - brightness of the color; Hue - dominant wavelength; Saturation - purity of the color (pastels have intermediate saturation values). One of the several sets of transformation equations in use is as follows:
    I = R + G + B;   H = (G - B) / (I - 3B);   S = (I - 3B) / I

    A common procedure to increase the richness (saturation) of the color in an image is to apply the IHS transform to the data, stretch the saturation values, and return to RGB space and view the image.
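
A minimal sketch of that procedure, using Matplotlib's HSV transform as a stand-in for the IHS equations above; the input is assumed to be a rows x cols x 3 RGB array scaled to 0-1, and the gain value is illustrative.

```python
# Sketch of a saturation stretch: transform, stretch saturation, transform back.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def saturation_stretch(rgb, gain=1.5):
    hsv = rgb_to_hsv(rgb)                            # to an intensity/hue/saturation-like space
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 1)  # stretch the saturation component
    return hsv_to_rgb(hsv)                           # back to RGB for display
```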


    Formation of a Landsat (TM) False Color Composite Image - El Paso, Texas. Image by John Seeley.

    Image Analysis versus Spectral Analysis

    Two ways to employ remote sensing data

    1. The first step in analyzing remote sensing data is image analysis. This step involves some amount of data processing, but its initial goal is to produce an image of the Earth's surface that "shows" features of interest. One can then either directly or indirectly (e.g. image classification schemes) "map" the distribution and/or characteristics of these features. In this case, the desire is to have the smallest pixels possible so that the clearest and most detailed image possible is obtained. The analysis may stop here because the image or map provides the information needed.

    2. Another way to examine remote sensing data is to analyze the spectral reflectance of individual pixels or groups of pixels. Here the attempt is to identify the specific materials (minerals, pollutants, vegetation, etc.) that are present in the surface area represented by the pixels. This approach requires careful comparison with known reflectance spectra for the materials of interest. In this case, the desire is to have data from as many spectral bands as possible so that the material identification can be as exact as possible.

    References

    Lillesand, Thomas M., and R. W. Kiefer, 1987, Remote Sensing and Image Interpretation: John Wiley and Sons, New York, 721 pp.

    Sabins, Floyd F., 1997, Remote Sensing: Principles and Interpretation: W. H. Freeman and Company, New York, 494 pp.

    Vincent, Robert K., 1997, Fundamentals of Geological and Environmental Remote Sensing: Prentice Hall, Upper Saddle River, New Jersey, 366 pp.

