University of Texas at El Paso
Pan American Center for Earth and Environmental Studies
   
Getting Started with Remote Sensing    

Resolution

Resolution is a key term in remote sensing, but it has several meanings.

Introduction

Most modern instruments that acquire remote sensing data from satellites or aircraft work by scanning the Earth's surface as they travel along their orbital path or flight line. The Multispectral Scanner (MSS) is an example, as is the new Landsat 7 instrument. Cameras [which may be digital or analog (i.e., record on film)] and videocams (i.e., video cameras) are exceptions that capture data in frames. These framing systems work like an everyday camera and instantaneously record a "picture" of the entire frame.

Scanning systems acquire data for a given range (band) of wavelengths for each pixel (element) of the image separately, and an image consists of a display of these pixels with shades of gray representing the intensity of the energy recorded. This is the same basic process that operates in the small scanners attached to many personal computers. The number of levels of intensity at which the incoming data can be digitized varies but is often 256. This corresponds to what is called 8-bit recording, which means there are 2^8 = 256 discrete intensity levels that can be recorded.

In the vast majority of cases, the user of remote sensing data need not be concerned with the technical details of the scanning system or with how the initial processing of the data for effects such as geometric distortion is conducted, because one usually works with proven systems with well-established data collection and archival procedures. In other words, the user does not design the scanning system or the data handling system that gets the data out of the instrument, to a data center, and into a format organized for further processing and display with standard software packages.
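As a rough sketch of this digitization step, the following Python snippet maps a continuous measured intensity onto the 256 discrete levels of an 8-bit recording. The intensity values and the 0-100 scaling here are invented for illustration; real systems are calibrated against known radiances.

```python
N_BITS = 8
N_LEVELS = 2 ** N_BITS  # 256 discrete levels for 8-bit recording

def quantize(intensity, max_intensity):
    """Map a continuous intensity in [0, max_intensity] to an
    integer digital number (DN) in [0, N_LEVELS - 1]."""
    fraction = max(0.0, min(intensity / max_intensity, 1.0))
    return round(fraction * (N_LEVELS - 1))

# A few hypothetical measured intensities (arbitrary units):
for value in (0.0, 12.7, 100.0):
    print(value, "->", quantize(value, max_intensity=100.0))
```

A 16-bit system would work the same way with 2^16 = 65,536 levels; only `N_BITS` changes.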

Spatial Resolution

There are a variety of ways to approach the subjective issue of our ability to resolve objects on an image. Sabins (1997) defines spatial resolution as "the ability to distinguish between two closely spaced objects on an image" and points out that spatial resolution is "not the size of the smallest object that can be seen". He also points out that other terms such as detectability, recognizability, signature, and texture have a place in the qualitative analysis of images. His approach comes from the perspective of aerial photograph interpretation and is a valid way to approach this issue. However, one can also approach it from a purely digital perspective. In small scanners (and dot matrix printers), resolution is described in units of dots per inch. If an object on a diagram being scanned is smaller than a dot, it gets smeared out into adjacent objects. In similar fashion, on the Earth's surface we often think of spatial resolution in terms of the size of the ground resolution cell, which corresponds to a pixel in the digital image. The image of Washington, D.C. (next page) is from the Thematic Mapper (TM) instrument, and the 30 m pixels constituting these data are evident. Here, too, objects smaller than one of these cells (pixels) are smeared out.
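The smearing of sub-pixel detail can be illustrated with a toy example: averaging a fine grid into larger cells, much as a sensor effectively averages everything inside each ground resolution cell. The tiny 4 x 4 "image" below is invented for illustration.

```python
def block_average(image, factor):
    """Average non-overlapping factor x factor blocks of a 2-D list,
    simulating a coarser ground resolution cell."""
    rows = len(image) // factor
    cols = len(image[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

fine = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [255, 255, 0, 0],
        [255, 255, 0, 0]]
coarse = block_average(fine, 2)   # 2x2 image: the pattern survives
coarser = block_average(fine, 4)  # 1x1 image: all contrast is lost
print(coarse)   # [[0.0, 255.0], [255.0, 0.0]]
print(coarser)  # [[127.5]]
```

Once the cell is larger than the pattern, only the average brightness remains; the objects themselves can no longer be distinguished.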

If the pixel size is decreased, the spatial resolution increases. The image to the right is based on data with 5 m pixels.


(from Univ. of New Hampshire GLOBE site - Larry Ryan)

The digital approach is less formal than traditional photogrammetric approaches, but in today's digital world it is common. One must remember that the intensity values recorded represent the average value over the pixel. In the Landsat Thematic Mapper system, the size of this cell is 30 m, while the relatively new Indian satellite (IRS-1C) provides data (1 band only) with a resolution of 5 m. Spatial resolution is the key consideration that usually first comes to mind because it governs how well we can "see" things on an image. The larger the pixels, the poorer the resolution. High resolution (small ground resolution cells) is of course desirable, but no matter how fine the spatial resolution, one can always enlarge an image on a computer screen until the individual pixels can be seen, producing a checkerboard pattern. One practical limit to resolution that is often overlooked is the capacity of the recording system to handle the data. Higher resolution means a much larger amount of data. For example, a single band of a Thematic Mapper image of a square area 90 km x 90 km will contain about 9 million pixels, whereas an image of the same area from the new Indian satellite will contain about 324 million pixels.
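The pixel counts quoted above follow from simple arithmetic, which this short sketch reproduces:

```python
def pixel_count(scene_km, pixel_m):
    """Number of pixels in one band of a square scene of side
    scene_km kilometres, with square pixels of side pixel_m metres."""
    pixels_per_side = scene_km * 1000 // pixel_m
    return pixels_per_side ** 2

tm = pixel_count(90, 30)   # 3000 x 3000 = 9,000,000 pixels
irs = pixel_count(90, 5)   # 18000 x 18000 = 324,000,000 pixels
print(tm, irs, irs // tm)  # a 6x finer pixel means 36x more data
```

Halving the pixel size quadruples the data volume, which is why storage and transmission capacity is a real limit on resolution.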

Another consideration limiting how small a ground resolution cell can be is the strength of the signal that can be recorded. A number of factors control the signal strength, but the basic consideration is that there is always some background noise in the system (like hum in the speakers of your stereo system), and the sensor needs to record the signal (i.e., the incoming electromagnetic waves) so that it is always considerably stronger than this noise. A technical way of stating this is that the signal-to-noise ratio (S/N) needs to be >>1.
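A toy numerical illustration of this requirement, with invented noise and signal levels: as the recorded signal weakens toward the background noise, the measurement stops being meaningful.

```python
NOISE = 2.0  # fixed background noise level, arbitrary units

def snr(signal):
    """Signal-to-noise ratio for a given signal amplitude."""
    return signal / NOISE

for signal in (200.0, 20.0, 2.0):
    ratio = snr(signal)
    usable = ratio > 10  # a hypothetical threshold for "S/N >> 1"
    print(f"signal={signal:6.1f}  S/N={ratio:5.1f}  usable={usable}")
```

When S/N approaches 1, the recorded value is as likely to be noise as signal, so no valid reading is possible.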

    Factors determining the strength of the recorded signal:

    1. The most obvious point is that the more intense (high amplitude) the incoming waves are, the stronger the recorded signal will be. For example, we all realize that on a bright, cloudless day, the intensity of the Sun's radiation is greater than on a cloudy day. The technical term used to characterize the energy being reflected or radiated from the Earth's surface is energy flux.

    2. Another obvious point is that the higher the altitude, the weaker the incoming signal, because the instrument is simply farther away from the source. This is the result of spherical spreading, which means that the amplitude is inversely related to distance (A ~ 1/r). Thus energy is inversely related to the distance squared, since E ~ A^2.

    3. The spectral bandwidth (see right) of the sensor is a less obvious factor. A detector is sensitive to incoming energy over a range of wavelengths. Each of these wavelengths contributes to the signal recorded. Thus, the broader the range of wavelengths, the more energy there is available to be recorded. A broad bandwidth therefore produces a strong signal, but a broad range of wavelengths is undesirable in that it decreases the spectral resolution (i.e., the ability of the detector to assign the energy recorded to a specific wavelength).

    4. With a scanner, the instantaneous field of view (IFOV) is the angle over which a measurement is being made at any instant. This angle is usually measured in milliradians (mrad). The IFOV and the altitude of the sensor govern the size of the ground resolution cell, just as the size of a particular object one sees when looking into a camera varies with the distance from the object. As one zooms in with a telescopic lens, a close-up (high resolution) picture is obtained because the IFOV decreases, but the amount of light decreases. Thus, it is desirable for the detector to have the smallest IFOV possible. However, there must be enough energy available to make a valid measurement.

    5. Dwell time is a measure of how long the scanner is focused on a particular ground resolution cell (pixel) and is analogous to shutter speed in that it is the measure of how long the ground resolution cell is "exposed". If the ground resolution cell is small, a moving sensor will have a short dwell time relative to larger cells. The limiting factor on resolution is that the dwell time must be long enough to obtain a valid reading (i.e., good S/N).

      Excluding radar, most scanning systems are either rotating cross-track or along-track in nature. The advantages of along-track systems are increased dwell time and a lack of moving parts. However, multiple detectors are required. Thus, for a given image, the measurements in a particular band have not all been made with the same detector. This can result in variations that are due to the detectors, not the Earth's reflectivity.
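The small-angle relation between IFOV, altitude, and ground resolution cell can be sketched as follows. The Landsat-like altitude and IFOV figures used here are approximate values for illustration, not instrument specifications; the point is only that an IFOV of a few hundredths of a milliradian at orbital altitude yields a cell of a few tens of metres.

```python
def ground_cell_m(altitude_km, ifov_mrad):
    """Approximate ground resolution cell size in metres, using the
    small-angle relation: cell size = altitude x IFOV (radians)."""
    return altitude_km * 1000 * ifov_mrad * 1e-3

# Roughly Landsat-like numbers (assumed, for illustration only):
cell = ground_cell_m(altitude_km=705, ifov_mrad=0.043)
print(f"{cell:.1f} m")  # roughly 30 m
```

Halving the IFOV halves the cell size but quarters the energy collected per cell, which is the trade-off described in points 4 and 5 above.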

Spectral Resolution

Almost all modern remote sensing systems make measurements in more than one band of the electromagnetic spectrum. Spectral resolution can be thought of in two ways. Sabins (1997) discusses the first, which is the spectral bandwidth. It would be ideal to make thousands of measurements for single wavelengths, but this is not technically possible. Thus, any real measurement is the weighted average over some range of wavelengths (bandwidth). The narrower the bandwidth, the better the spectral resolution.

Another way to consider spectral resolution is simply in terms of the number of bands for which measurements are made. In this case, Thematic Mapper data with measurements in 7 bands clearly has less resolution than hyperspectral AVIRIS data with measurements in 224 bands. Again, data volume is a practical limit, but the basic concept is to make it possible to identify materials based on their spectral reflectance curves. Systems such as AVIRIS have been given the name hyperspectral, but they are like any other system except for the number of bands for which data are available.
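The effect of band count on data volume is easy to sketch. The pixel count below assumes the 9-million-pixel single-band scene from the earlier spatial-resolution example; the band counts (7 for Thematic Mapper, 224 for AVIRIS) are from the text.

```python
def scene_values(pixels_per_band, n_bands):
    """Total number of measurements stored for one scene."""
    return pixels_per_band * n_bands

pixels = 9_000_000  # e.g. one 90 km x 90 km band at TM resolution
print(scene_values(pixels, 7))    # Thematic Mapper, 7 bands
print(scene_values(pixels, 224))  # AVIRIS, 224 bands: 32x more data
```

For the same spatial grid, data volume scales linearly with the number of bands, so hyperspectral systems pay for their fine spectral resolution in storage and transmission.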
A SPOT multispectral image of Peace, New Hampshire - 20 m pixels
A SPOT panchromatic image of Peace, New Hampshire - 10 m pixels
(from Univ. of New Hampshire GLOBE site - Larry Ryan)
