Chapter 3. The Basics of Remote Sensing

Authors:
Marc Steininger, Conservation International
Ned Horning, Center for Biodiversity and Conservation of the American Museum of Natural History
Reviewers:
Eugene Fosnight, UNEP-Sioux Falls
James Strittholt, Conservation Biology Institute

3.1 Background

Remote sensing is well recognized for the integral role it plays in environmental assessment and monitoring. Field surveys provide higher levels of accuracy than remote sensing, but using remote sensing techniques makes it possible to increase the speed and frequency with which one can analyse a landscape. Therefore, remote sensing can aid in making quick and focused decisions. Furthermore, remote sensing contributes to the development of objective and comprehensive assessments over larger geographic extents than is possible with fieldwork alone. Remote sensing facilitates objective, repeatable analyses that can help detect and monitor change over time, which is critical to the development of indicators for the 2010 target.
As a rule, remote sensing studies require additional (ancillary) data to allow the imagery to be interpreted. Ground sampling, familiarity with land cover and land use of the area in question, and expert knowledge of species trends and habitat usage, ecological communities, and ecological systems are needed to form a solid basis for interpretation. Although remote sensing provides repeated observation of the Earth’s surface, it has limited spatial, temporal, and thematic resolution (defined later in this chapter). Field sampling provides detailed, local biological information for small areas, but it can be expensive. Used together, strategic ground sampling, expert knowledge, and interpretation of remote sensing imagery can form a reliable, repeatable, and cost-effective analytical framework for accurately assessing the rate of change in biological diversity.
There are two general approaches to using remote sensing in assessing biodiversity. One is direct remote sensing, which maps individual organisms, species assemblages, or ecological communities by use of airborne or satellite sensors. The other approach, indirect remote sensing, facilitates assessments of biodiversity elements through analysis of such environmental parameters as general land cover, geology, elevation, landform, human disturbance, and other surrogates for the actual features of interest. When mapping the distribution of species of interest (focal species), a common approach is to map specific habitat types by use of a combination of remote sensing and environmental data themes. For example, the woodland caribou (Rangifer tarandus), an at-risk species in the boreal zone of North America, is dependent on old-growth forest isolated from human disturbance. Woodland caribou distribution can therefore be mapped and monitored by using remote sensing to map the vegetation, combined with spatially explicit data on human disturbance, including roads, agricultural development, forestry activities, and energy development. Combining spatial data sets in this way is the most common means of assessing and tracking selected species. Additional examples are discussed in more detail in Chapter 4.

3.2 What exactly is remote sensing?

Any method of observing the Earth’s surface without being directly in contact with it falls under the definition of remote sensing. These methods allow us to obtain information about our planet and human activities from a distance, which can reveal features, patterns, and relationships that may not be possible or affordable to assess from ground level. Remote sensing provides an overview of the interactions among the components of our complex biosphere and is especially useful in monitoring landscape change.
The most common tools used for remote sensing are sensors carried on fixed-wing planes or helicopters and sensors on satellites orbiting the Earth. Airborne sensors are typically used to collect data on demand, when they are needed. Airborne data collection is most valuable for small areas, for specific events, to complement less detailed data, and for validation of data collected by satellite sensors over larger areas. Historical aerial photography archives can also be critical in change detection for periods before the widespread availability of satellite technology, which occurred in the mid-1970s.
Remote sensing satellite systems for land cover assessment are operated by a growing number of countries, including Brazil, Canada, China, France, India, Japan, Russia, and the United States. Many satellites monitor the Earth, with different sensors gathering different types of environmental data. The sensors acquire images of the Earth and transmit them to ground receiving stations located throughout the world. Once these raw images are processed and analysed, indicators of biodiversity change can be assessed.

3.3 Spectral images

Satellites view the planet with sensors that provide digital images of several discrete areas of the light spectrum. Human vision is limited to the visible wavelengths from blue to red, but satellite sensors are not so limited. Most satellite sensors also record images in wavelengths invisible to us, such as in the near-infrared and thermal regions (figure 3.1).



Some can detect even longer wavelengths, such as in the microwave region. Images from these different wavelength areas or bands can be combined in different ways to produce false-colour images for human viewing, interpretation, and analysis (figure 3.2).
Because the images are collected in a digital format, they can be analysed by computer. Just as different objects reflect different visible wavelengths, resulting in the colours the human eye can see, objects also reflect other non-visible wavelengths in different ways that are also important in classification. For example, geologists can distinguish rock and mineral types from space by using images taken in the middle-infrared, although most rocks and minerals look fairly similar in the visible wavelengths.
The most commonly used data in land cover mapping are from optical sensors—instruments that detect short-wave solar energy reflected off the Earth’s surface. These sensors analyse three main regions of the spectrum: the visible, near-infrared, and middle-infrared. Most remote sensing of land cover types is dependent on the fact that in these main regions, reflectance differs significantly for different surfaces such as soil, leaves, wood, ash, water, and snow (figure 3.3). In vegetation, the canopy structure affects shading, which, in turn, strongly affects reflectance from that canopy back to the sensor. For example, tall forests tend to appear darker than shorter forests because of greater canopy shading with increasing tree height. Variable tree height can also increase shadowing. The combination of reflectance properties allows for the differentiation between many vegetation types with satellite imagery, and these vegetation types are often good indicators of (or surrogates for) land cover or habitat type.
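The contrast between reflectance in the visible and near-infrared regions is the basis of widely used vegetation indices. The following is a minimal sketch of the Normalized Difference Vegetation Index (NDVI), a standard index not discussed in the text above; the reflectance values for the three surface types are hypothetical.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).

    Dense green vegetation reflects strongly in the near-infrared and
    absorbs red light, so it yields values near +1; bare soil yields low
    values and water is typically negative.
    """
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for three pixels: forest, soil, water.
red = np.array([0.04, 0.30, 0.05])
nir = np.array([0.45, 0.40, 0.02])
print(ndvi(red, nir))  # forest high, soil low, water negative
```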
The sensors discussed thus far are referred to as “passive” because they rely solely on capturing sunlight reflected from the Earth’s surface. There is another major class of sensors called “active”; these sensors emit a pulse of energy toward the surface and then record the signal returned to the sensor. Two increasingly popular active sensor types are radar and lidar. Radar sensors, which emit microwave pulses, are a more established technology than lidar sensors, which emit laser pulses. Both sensors are particularly useful in mapping forest characteristics, including age, density, and biomass. In radar images, different forest types also have different detailed textures (spatial patterns of variability produced by objects too small to be detected individually), which result from the amount of variation in canopy height as observed from above (Saatchi et al. 2000). Radar signals are sensitive to water and wet soils, thus radar is also valuable for tracking spatial and seasonal patterns of flooding.
Radar is even more valuable when more than one wavelength is used, because energy in shorter wavelengths tends to bounce off small and large branches and leaves, and energy in longer wavelengths tends to bounce off only larger branches. A major advantage of radar is that it can penetrate clouds. A major disadvantage is that it is very difficult to use in vegetation mapping in hilly or mountainous areas because the topography dominates the patterns observed. Also, radar satellites are currently not multispectral (they record in only one spectral region), and the data are relatively expensive compared to most optical satellite data. Although the use of both radar and lidar is increasingly common for specific mapping needs (such as mapping forest density and age or the use of radar in cloudy regions of the world such as the humid tropics), these technologies are still in development and products generated from passive optical sensors are often more appropriate for operational monitoring for the near future. For these reasons, the bulk of this book addresses applications with optical imagery.


3.4 Issues that affect selection of images

Remote imaging sensors can differ in many ways. Most satellite and airborne sensors produce digital images from several spectral bands, each band corresponding to a specific wavelength range within the electromagnetic spectrum. The sensors discussed in this chapter focus on the visible and infrared wavelengths.

The main characteristics of a sensor of importance to the user include:
  • image size or path width
  • region of the earth from which images are acquired
  • spatial resolution (the size of the unit at which data are collected)
  • number of bands and wavelengths detected
  • frequency of image acquisition
  • date of origin of the sensor.

3.4.1 Image size (path width)
The area covered by a single satellite image is defined by the path width and the distance the satellite travels along its path. The path width is limited by how far to either side of the sensor’s track reflected light from the Earth’s surface can be collected; it can vary from as little as 8 to more than 2,000 kilometres. Larger image sizes usually correspond to coarser spatial resolution, but fewer images are then needed to cover a given area, which is an advantage because mosaicking images (combining them into one large image) is time consuming and can introduce inconsistencies. These inconsistencies arise because each image is taken under specific environmental conditions, so the same feature on the ground can produce different reflectance values depending on such parameters as humidity, cloud cover, and time of day. Spatial resolution is discussed further in section 3.4.2.
Most Earth-observing satellites orbit from pole to pole in a sun-synchronous orbit, which means the sensor crosses the equator at the same local time on each pass, usually in the mid-morning for the daylight half of its orbit and in the evening for the other half. Because these orbits are near-polar rather than exactly polar, areas immediately around the poles may not be covered at all. Another class of satellite is geostationary; these satellites appear to remain stationary over a point on the ground because they track the Earth’s rotation, allowing their sensors to view the same large area of the surface continuously. This is a common orbit for weather satellites.

3.4.2 Resolution (spatial, temporal, spectral, and radiometric)
Several different characteristics affect the detail that can be resolved (seen) in an image. These are traditionally referred to as the four types of image resolution. Most people think of “resolution” as being synonymous with spatial resolution, but these other “resolution” terms are used in the formal literature and directly affect our ability to monitor any given object or phenomenon.
Spatial resolution, which is often referred to as simply “resolution,” is the size of a pixel (the smallest discrete scene element and image display unit) in ground dimensions. In most cases, an image’s resolution is labelled with a single number, such as 30 metres, which represents the length of a side of a square pixel if it were projected onto the Earth’s surface. If the pixel were rectangular, then both the height and width of the pixel would be provided. Along with spatial resolution, it is useful to be aware of the minimum mapping unit, which is the minimum patch size included in a map. This differs from the sensor resolution, because maps usually have been filtered after classification (discussed later in this section). For example, a map produced from 30-metre resolution data may have been filtered so that there are no patches in the map that are smaller than 5 hectares.
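The relationship between pixel size and minimum mapping unit is simple arithmetic; a short sketch, using the 30-metre, 5-hectare example above:

```python
def min_mapping_unit_pixels(pixel_size_m, mmu_hectares):
    """Number of pixels that make up the minimum mapping unit."""
    pixel_area_m2 = pixel_size_m ** 2
    mmu_m2 = mmu_hectares * 10_000  # 1 hectare = 10,000 square metres
    return mmu_m2 / pixel_area_m2

# A 5-hectare minimum mapping unit on 30-metre pixels:
print(round(min_mapping_unit_pixels(30, 5), 1))  # about 55.6 pixels
```

So a 5-hectare filter on Landsat-resolution data removes every patch smaller than roughly 56 contiguous pixels.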
Positional accuracy with reference to known locations is a concept related to spatial resolution. Are the estimated locations of vegetation boundaries accurate to within 1 kilometre, or to within 100 metres or better? For the GLC2000 global-scale data set, for example, the reported positional error was 300 metres (Mayaux et al. 2006). Positional or locational accuracy is generally better for recently acquired satellite images than for older ones, because the processing used to assign geographic position to raw satellite images (called ortho-rectification) has improved dramatically over time.
Repeat frequency, or temporal resolution, is the minimum interval at which a particular feature can be recorded twice. With Landsat, for example, the same image area can be recorded every 16 days, because this is the time the satellite takes to cover the entire Earth and return to the same ground path. Some sensors with a very wide field of view can acquire multiple images of the same area in the same day. Although most sensors point at a fixed angle relative to the satellite, some can be pointed (within limits) at particular features as needed, which can shorten the interval at which a feature is recorded. Ecosystem-specific regeneration rates are one of the important considerations when determining the desired temporal resolution of remote sensing data (Lunetta et al. 2004). Early warning systems for sudden loss of habitat, such as by fire or illegal logging, may require daily to weekly acquisitions.
Spectral characteristics include band width, band placement, and number of bands. Spectral bandwidth—or spectral resolution, as it is often called—is the range of wavelengths detected in a particular image band. Band placement defines the portion of the electromagnetic spectrum used for a particular image band. For example, one band might detect blue wavelengths and another might detect thermal wavelengths. The particular properties of the features of interest indicate which bands are relevant for a given application. The number of bands is generally less important for visual interpretation or viewing, because an analyst can view only three bands at a time. Multiple bands are more important for using automated classification approaches. Hyperspectral sensors slice the electromagnetic spectrum into many discrete spectral bands (usually more than 100). This can enable the detection of spectral signatures that are characteristic of certain plant species or communities. However, analysis of hyperspectral imagery is still a new, developing field.
A sensor records the intensity of a given wavelength as a single whole number within a fixed range. For example, Landsat TM sensors store values from 0 to 255 (8 bits), whereas IKONOS sensors store values from 0 to 2,047 (11 bits, or 2,048 levels). This potential range of values is often referred to as radiometric resolution. The number of values that can be stored limits the amount of variation within a wavelength band that can be detected, which is one aspect of sensitivity.
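Radiometric resolution is commonly expressed as bit depth, and the number of recordable intensity levels doubles with each additional bit; a short sketch:

```python
def intensity_levels(bits):
    """Number of discrete values an n-bit sensor can record (0 .. 2**bits - 1)."""
    return 2 ** bits

print(intensity_levels(8))   # 256 levels: Landsat TM, values 0 to 255
print(intensity_levels(11))  # 2,048 levels: IKONOS, values 0 to 2,047
```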
Sensitivity is also defined by the sensor’s dynamic range. Sensors of particular wavelength bands have extremes of sensitivity above and below which they cannot differentiate change in intensity. If the signal returned for a particular band is too faint, the sensor cannot record it; conversely, if the signal exceeds the maximum the sensor can record, the sensor saturates and cannot register any further increase above this level.
Monitoring or assessment programmes may require a certain level of thematic precision, which refers to how many specific categories (such as vegetation types) are represented. There is often a desire to understand trends for very specific vegetation cover types, for instance “seasonally inundated Mauritia palm forest,” but accuracy is usually higher when mapping broader types, such as “all humid forests.” It is sometimes desirable to distinguish between different stages of regeneration (or age), especially for forest ecosystems. The degree to which remote sensing data can be used to estimate these stages varies by biome, but in all cases estimation of the area within specific age classes is less accurate than estimation of the area of all ages combined. Therefore, before determining the level of precision required for indicators, the implications for expected accuracy in reported trends should be considered. The level of potential thematic precision is maximized when data of high spatial, temporal, and spectral resolution are used.

3.4.3 Image availability
A.3 and A.4 in the Appendix include links to available satellite imagery that could be used for monitoring land cover and habitat distribution. Of the large number of potential image types, only a few are practical for monitoring over entire countries, because most are either costly or have too small a data archive, especially outside the range of the country that launched the satellite. The most practical data sources for biodiversity monitoring are those that record data in the areas of the spectrum from infrared through the visible bands to ultraviolet. Landsat data have been the most heavily used because of their relatively fine spatial resolution (30 by 30 metres) and moderate cost. SPOT HRV images are of similar quality to Landsat, have a pixel size of 20 by 20 metres, and are fairly well archived; however, their cost can be prohibitive for many national assessments. Recent data collected from comparable Brazilian, Chinese, and Indian satellites at similar spatial resolutions are also useful when available, although these latter sources have smaller collections of archived data.
Other useful data are collected at coarser spatial resolutions, such as 250 metres to 1 kilometre. Some of the best-known sources of coarse-resolution data for habitat monitoring are NASA’s MODIS sensors and SPOT Vegetation data, coordinated by the Vegetation Programme. These data are usually free, well archived, and available soon after acquisition. However, they are not optimal for monitoring habitat extent, fragmentation, and rates of change. In most areas, changes occur in many small patches that are more detectable with finer-resolution imagery. Conversely, coarse-resolution imagery can be processed quickly and can therefore provide a valuable complement to finer-resolution data. The availability of both types of data means that countries can maintain a monitoring system in which coarse-resolution imagery can be used to estimate change as part of an early warning system, and finer-resolution data can be analysed less frequently to produce condition and change assessments of greater precision.
Imagery is also available at very high spatial resolutions—five metres or finer. These data are provided by private satellites and are usually very expensive. However, they are perhaps the only option, other than repeated aerial or field surveys, for monitoring certain small-size ecosystems or habitats. These may include waterbodies such as small rivers, lakes, wetlands, and some mangroves and coral reefs. It would be costly to conduct nationwide monitoring of these habitats with such data; however, these data may be useful in a sample-based approach complemented with field or aerial surveys.

3.4.4 Relationships among image size, resolution, and image availability
Wide paths tend to be associated with low spatial resolution but shorter repeat cycles, and thus higher temporal resolution. High spatial resolution is linked to large data volumes, which increase the time needed to manipulate and analyse the images. These issues lead to trade-offs between spatial, temporal, and spectral resolution. To achieve high spatial resolution, some sensors include a panchromatic band, a single band of great spectral width, covering much of the visible and near-infrared portion of the electromagnetic spectrum, recorded at a finer spatial resolution. For example, the panchromatic band on the Landsat 7 ETM+ sensor has a resolution of 15 metres, whereas the other bands are 30 metres or coarser. The panchromatic band can be combined with the other bands to enhance the visual sharpness of the image, which is valuable for mapping some important habitat types.
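Several pan-sharpening methods exist for combining the panchromatic band with the other bands; one simple, widely described approach (not named in the text above) is the Brovey transform, sketched here with hypothetical reflectance values. In practice the multispectral bands are first resampled to the panchromatic pixel size.

```python
import numpy as np

def brovey(red, green, blue, pan):
    """Brovey transform: rescale each multispectral band so the three
    sharpened bands sum to the panchromatic value, preserving the band
    ratios (the colour) while injecting the pan band's spatial detail."""
    total = red + green + blue
    return red * pan / total, green * pan / total, blue * pan / total

# One hypothetical pixel: multispectral values plus a brighter pan value.
r, g, b = brovey(np.array([0.2]), np.array([0.3]), np.array([0.5]),
                 np.array([1.2]))
print(r + g + b)  # sums to the pan value, 1.2
```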

3.5 Image classification

There are various approaches and quantitative methods for using remote sensing data to discriminate different types of habitat cover. These are broadly called classification methods, and some of the more common ones are presented as case studies in this book. Two broad types of classification method are supervised and unsupervised.
In supervised classification, the analyst defines areas where the land cover or habitat type is known to occur. Parameters and statistics are then derived from the satellite imagery for these defined areas (or training sites). These parameters and statistics relate to the spectral reflectance values of the particular land cover types and habitats and include the mean values and covariance (the extent to which two variables vary together) from different wavelength bands for each area. They are then used to estimate the most likely cover type for all parts of the image that have not been predefined. The process of defining the known areas of cover and calculating the spectral statistics is called training. It is usually carried out by drawing polygons over parts of the image for which the land cover type is known, labelling them accordingly, and letting the image processing software calculate the statistics automatically.
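As an illustration of how training statistics drive a supervised classification, here is a minimal minimum-distance classifier, one of the simplest supervised methods; it uses only the class means, whereas maximum-likelihood classifiers also use the covariance statistics mentioned above. The band values and class names are hypothetical.

```python
import numpy as np

# Mean spectral values per class for three bands (red, near-infrared,
# middle-infrared), as if derived from hypothetical training polygons.
class_means = {
    "forest": np.array([0.04, 0.45, 0.20]),
    "soil":   np.array([0.30, 0.40, 0.45]),
    "water":  np.array([0.05, 0.02, 0.01]),
}

def classify(pixel):
    """Assign the class whose training mean is spectrally closest."""
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(classify(np.array([0.06, 0.40, 0.22])))  # closest to the forest mean
```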
In an unsupervised classification, the analyst does not predefine the land cover or habitat types. The image processing software divides the image into a certain number of classes, based entirely on the spectral data and with no knowledge of what cover types are present in the image. The user can define limits to the number of output classes and spectral variance within each class. The resulting classes are identified by different numbers, and the analyst must then assign names to these classes with the support of field knowledge and an understanding of how different habitats should appear in these images.
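Unsupervised methods such as k-means clustering (a simple relative of the ISODATA algorithm common in image processing software) group pixels purely by spectral similarity; a minimal sketch with two well-separated, hypothetical spectral clusters:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Very small k-means: returns cluster centres and a label per pixel.

    The labels are arbitrary numbers; as in any unsupervised
    classification, the analyst must still name the resulting classes.
    """
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centre, then recompute centres.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centres = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])
    return centres, labels

# Hypothetical two-band pixels: a dark group and a bright group.
pixels = np.array([[0.10, 0.10], [0.12, 0.09], [0.80, 0.90], [0.82, 0.88]])
centres, labels = kmeans(pixels, k=2)
print(labels)  # the first two pixels share one label, the last two the other
```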
In both supervised and unsupervised approaches, several iterations are usually conducted before a classification is completed. In supervised classification, this usually involves modifying the training data. In unsupervised classification, it usually requires selecting any ambiguous classes and rerunning the analysis to split the classes into a larger number of subgroups that are then labelled separately with known cover types.
More advanced methods of classification include binary decision trees and neural networks, which are currently not available in most standard software packages. In these and other approaches, classification can be per pixel, in which only the spectral data for each individual pixel are used to classify it, or contextual, in which data from neighbouring cells are also used to assist the classification of each pixel. Some contextual classifiers use information about the texture (spectral variance) around a given pixel.
In all classification methods, the resulting image is often speckled, with individual, isolated pixels of one class surrounded by pixels of another. For this reason, filters are usually applied to smooth or generalize the final classification. Neighbourhood filters are often used; an example is the majority filter, which reassigns the value of a central pixel to the value held by the majority of the cells around it. This eliminates very small clusters of pixels of the same class while smoothing the edges between groups of pixels of differing classes. Other filters act as sieves: small patches of pixels of the same class are reclassified, but large ones are not, which eliminates small patches without modifying the edges between larger, unfiltered patches. The analyst must decide on the type and level of filtering to apply based on knowledge and the desired result, a decision that has consequences for the quality, accuracy, and applicability of the final product.
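The majority filter described above can be sketched in a few lines of Python (pure NumPy, a 3-by-3 neighbourhood, with image edges handled by padding with the edge values):

```python
import numpy as np

def majority_filter(classes, n_classes):
    """Reassign each pixel to the most common class in its 3x3 neighbourhood."""
    padded = np.pad(classes, 1, mode="edge")
    h, w = classes.shape
    # votes[c, i, j] counts neighbours of class c around pixel (i, j).
    votes = np.zeros((n_classes, h, w), dtype=int)
    for di in range(3):
        for dj in range(3):
            window = padded[di:di + h, dj:dj + w]
            for c in range(n_classes):
                votes[c] += (window == c)
    return votes.argmax(axis=0)

# A single isolated pixel of class 1 (speckle) inside class 0:
speckled = np.zeros((5, 5), dtype=int)
speckled[2, 2] = 1
print(majority_filter(speckled, n_classes=2)[2, 2])  # 0: the speckle is removed
```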
Change in land cover may be assessed by direct analysis of the raw satellite data from different dates or in a postclassification comparison in which two classified maps from different dates are compared to each other. Estimates of change from the latter technique tend to have greater errors because of differences in interpretation during the classification process for each image date. The better approach is to directly map changes from the raw satellite data collected from the different times. Precise coregistration of images (making sure the same pixels from different
dates overlay each other precisely) is necessary for accurate estimation of change when using this technique. It is usually possible to attain a precision of less than one pixel width. Postclassification comparisons based on supervised classification, unsupervised classification, decision trees, and neural networks have all been successfully applied to the process of mapping changes in vegetation cover over large areas, and all are capable of producing similar results. The key to conducting accurate postclassification comparisons is consistent interpretation of the images used. This is especially important if there are many images to analyse and more than one analyst involved. Care must be taken to ensure that the areas that have not changed over time are being interpreted consistently.
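A postclassification comparison amounts to cross-tabulating two coregistered class maps; a minimal sketch producing a change matrix from two small, hypothetical maps:

```python
import numpy as np

def change_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulation: entry (i, j) counts pixels that were class i at
    time 1 and class j at time 2. The maps must be precisely coregistered
    so the same cell indexes the same ground location in both dates."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (map_t1.ravel(), map_t2.ravel()), 1)
    return m

# Hypothetical maps: class 0 = forest, class 1 = cleared.
t1 = np.array([[0, 0], [0, 1]])
t2 = np.array([[0, 1], [0, 1]])
m = change_matrix(t1, t2, n_classes=2)
print(m)        # diagonal entries are unchanged pixels
print(m[0, 1])  # 1: one pixel of forest loss between the two dates
```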
Collection of independent information to assist interpretation of the images, and to conduct an error assessment of the final product, is also necessary. This information is increasingly being collected by aerial surveys, whose cost is not as prohibitive as that of field sampling and which can generate excellent observations over an entire country in a short time. Digital aerial photography and videography are very valuable, especially when the photos and video frames can be automatically linked geographically. Because small features can be easily seen in these images, they provide an excellent source of data to assist satellite image interpretation and validation of final classified maps.

3.6 Additional issues to consider

Like any monitoring tool, remote sensing has advantages and limitations. The main advantages are that large amounts of uniform data can be collected from a distance, remote sensing can cover extensive areas, and remote sensing is less expensive than field-based mapping efforts. Constraints include technical limits on feature discrimination; costs (although cheaper than field-based assessments, they can still be prohibitive); the requirement of high levels of technical expertise; and the need for information to calibrate and verify remote sensing results, which can be limiting (Turner et al. 2003).

3.6.1 Technical limits of remote sensing technology
For most sensors, remote sensing can monitor only features that can be viewed from above; characteristics of the understory must be inferred rather than directly observed. Lidar and radar sensors are exceptions, but as mentioned earlier, these technologies pose other constraints, including cost, lack of analytical monitoring standards, and data availability.
When classifying remote sensing data to produce a map of vegetation, the individual features belonging to a particular class of interest must be large with respect to the resolution of the imagery. For example, a stream that is 10 metres wide could not be detected in an image composed of cells of 1-kilometre spatial resolution. In addition, and crucially, the feature being observed must have a sufficiently unique spectral signature to be separated from other types of features. For example, it may be difficult to distinguish secondary from primary forest without additional supporting data.
Atmospheric phenomena, mechanical problems with the sensor, and numerous other effects can distort the input data and therefore the results, although algorithms and models to correct these distortions are improving continuously. Cloud cover is the most common impediment to seeing the earth’s surface with optical sensors and is particularly problematic in some regions of the world where cloud cover is common (for example, wet tropics). Haze and thin clouds are less problematic, but can result in distortions of feature spectral signatures, resulting in greater error or more expensive and complex processing.
Further reading on this topic can be found in a guide to “Myths & Misconceptions in remote sensing” published by the American Museum of Natural History.

3.6.2 Cost-effectiveness of remote sensing
Using remote sensing in combination with field surveys is likely to be the most cost-effective way to monitor the status of many aspects of biodiversity at regular intervals, especially over large areas. Because the total cost is likely to be considerable, there must be clearly defined needs and users for the resulting information. According to Mumby et al. (1999), four types of costs are encountered when undertaking remote sensing: (1) set-up costs, (2) field survey costs, (3) image acquisition costs, and (4) the time spent on analysis of field data and processing of imagery. The largest of these are set-up costs, such as the acquisition of hardware and software, which may make up 40 to 72 percent of the total cost of a project, depending on its specific objectives. Fortunately, increases in computational power are driving down the costs of the associated hardware and software. If set-up costs are fixed, that is, if remote sensing hardware and software already exist, then field survey costs dominate the project budget at approximately 80 percent of total costs (or about 25 percent when set-up costs are included). Field survey is a vital component of any habitat-mapping programme and may constitute approximately 70 percent of the time spent on a project. In general, the more time and effort spent on field surveying and ground-truthing, the higher the accuracy of the resulting product.
The third major cost is imagery. The selection of imagery is made considering the trade-offs between map accuracy and the cost of imagery; the latter depends on the size of the study area and the choice of sensor. SPOT XS is a cost-effective satellite sensor for mapping an area that does not exceed 60 kilometres in any direction (that is, it falls within a single SPOT scene). ASTER is also economical, but new acquisitions must be requested ahead of time online. For larger areas, Landsat TM is a cost-effective and accurate sensor. MODIS is free thus far, but it has a short historical record and is suitable only for coarse-scale regional monitoring. The relative cost-effectiveness of digital airborne scanners and aerial photography is more difficult to ascertain because it is case specific. Generally, the acquisition of digital airborne imagery is more expensive than the acquisition of colour aerial photography; however, this must be offset against the large investment in time required to create maps from aerial photograph interpretation. After data are acquired, technical expertise will be needed to process and analyse the imagery, and this is likely to cost more in salaries than the imagery or hardware. Finally, because a successful monitoring programme entails repeated measurements of both biodiversity indicators and habitat, all costs except set-up costs are likely to be recurrent (Turner et al. 2003).

3.6.3 Technical expertise
One challenge for ecologists and conservation biologists hoping to incorporate remote sensing technologies into their work is to acquire or co-opt the technical expertise required to handle and interpret the data. Managing even small quantities of satellite imagery requires specialized software, hardware, and training. The expertise and equipment often exist in-country, but not necessarily within the agencies with an interest in biodiversity monitoring. Fortunately, new software tools are making remote sensing data more accessible to nonspecialists, and the possibilities for training are growing rapidly. The Appendix includes a list of online tutorials covering specific areas.
Some remote sensing platforms (for example, hyperspectral, lidar, and radar) are largely or exclusively in the research phase of development and may not be in common use for some years. The number of experts who can work with these platforms is likely to grow in the future.

3.6.4 Requirement for calibration

A 2010 indicator monitoring plan should consider means of fostering collaboration among remote sensing researchers and fieldworkers in biodiversity science and conservation. Ecologists, evolutionary biologists, and conservation biologists may have useful data sets on distributions of individual species, species richness, and endemism that would help to derive biodiversity indicators. The expertise to link these biodiversity data with global, regional, and local data sets—such as land cover, primary productivity, and climate—may be found in remote sensing laboratories within universities, other institutions, or consultancy firms. These connections among disciplines are emerging around the world, with existing remote sensing laboratories taking an interest in biodiversity and biodiversity specialists beginning to include remote sensing and GIS expertise in their own professional toolkits.

3.7 References

  1. Lunetta, R. S., D. M. Johnson, J. G. Lyon, and J. Crotwell. 2004. Impacts of imagery temporal frequency on land-cover change detection monitoring. Remote Sensing of Environment 89(4):444–454.
  2. Mayaux, P., A. Strahler, H. Eva, M. Herold, A. Shefali, S. Naumov, S. Dorado, C. Di Bella, D. Johansson, C. Ordoyne, I. Kopin, L. Boschetti, and A. Belward. 2006. Validation of the Global Land Cover 2000 map. IEEE Transactions on Geoscience and Remote Sensing 44(7):1728–39.
  3. Mumby, P. J., E. P. Green, A. J. Edwards, and C. D. Clark. 1999. The cost-effectiveness of remote sensing for tropical coastal resources assessment and management. Journal of Environmental Management 55:157–166.
  4. Saatchi, S. S., B. Nelson, E. Podest, and J. Holt. 2000. Mapping land cover types in the Amazon Basin using 1 km JERS-1 mosaic. International Journal of Remote Sensing 21(6–7):1201–1234.
  5. Short, N. M. Sr. NASA remote sensing tutorial. http://rst.gsfc.nasa.gov (accessed April 12, 2007).
  6. Turner, W., S. Spector, N. Gardiner, M. Fladeland, E.J. Sterling and M. Steininger. 2003. Remote sensing for biodiversity science and conservation. Trends in Ecology and Evolution 18(6):306–314.