Tuesday, May 6, 2014

Lab 8: Spectral Signature Analysis

Introduction

In this lab, students will learn how to take, graph, and analyze spectral signatures from satellite imagery. This will build upon students' prior knowledge of spectral reflectance of Earth's surface features. Twelve spectral signatures will be collected from a Landsat ETM+ image that covers a portion of western Wisconsin and eastern Minnesota. Students will need to locate each of these surface features: standing water, moving water, vegetation (forest), riparian vegetation, crops, urban grass, dry soil (uncultivated), moist soil (uncultivated), rock, asphalt highway, airport runway, and a concrete surface.

Methods

ERDAS IMAGINE 2013 will be used to capture and analyze the spectral signatures.


Figure 1: With the proper image opened in ERDAS, Lake Wissota was picked as the location for standing water. The Drawing tab was selected, as seen above, and polygon, near the left end of the toolbar, was chosen. A small polygon was drawn and selected, making it the active area of interest.




Figure 2: Next, the Signature Editor was opened by navigating to the Raster tab > Supervised > Signature Editor, as seen above.



Figure 3: With the polygon still selected, the Create New Signature(s) from AOI icon (looks like a bent arrow next to the plus sign) was clicked. This created a new entry in the Signature Editor window. The name was changed to Standing Water. The same process continued for each feature until all twelve had been taken.



Figure 4: The signature mean plot for standing water. These graphs are generated by clicking the Display Mean Plot Window icon (looks like a zig-zag line) in the Signature Editor window. By holding Shift and clicking to select multiple signatures, then clicking the Switch Between Single and Multiple Signature Mode icon (looks like three zig-zag lines) in the Signature Mean Plot window, any number of signatures can be plotted on the same graph.
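The mean plot that ERDAS builds from an AOI is simply the per-band average of the pixels inside the polygon. A minimal numpy sketch of the same computation, using a hypothetical toy image and mask (the function and array names are illustrative, not the ERDAS API):

```python
import numpy as np

def mean_signature(image, aoi_mask):
    """Mean spectral signature of the pixels inside an AOI.

    image:    (bands, rows, cols) array of brightness values
    aoi_mask: (rows, cols) boolean array, True inside the polygon
    """
    bands = image.shape[0]
    return np.array([image[b][aoi_mask].mean() for b in range(bands)])

# Toy 3-band, 4x4 image: top half "water" (dark), bottom half "land".
img = np.zeros((3, 4, 4))
for b, val in enumerate([20, 15, 10]):
    img[b, :2, :] = val          # water brightness per band
for b, val in enumerate([60, 80, 90]):
    img[b, 2:, :] = val          # land brightness per band

mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True               # AOI polygon drawn over the water

sig = mean_signature(img, mask)
print(sig)                       # [20. 15. 10.]
```

Plotting the returned vector against band number reproduces a signature mean plot like the one in Figure 4.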


Results


Figure 5: All twelve signatures plotted together to help visualize trends. Three trends appeared to dominate the graph; these were broadly categorized as water, vegetation, and land.



Figure 6: Water features. Standing water has a higher reflectance than moving water across all spectral channels. This is most likely due to higher sediment or algae content in the standing water compared to the moving water.



Figure 7: Vegetation features. Crops and urban grass have higher reflectance in the visible red band and the mid-IR channel than forest vegetation and riparian vegetation. This suggests that the crops and urban grasses are under more stress or are less healthy than the other types of vegetation.




Figure 8: Land features. Rock has the highest reflectance across all spectral channels. Dry soil has substantially higher reflectance in the mid-IR channel than moist soil. As seen in Figure 6, water has low reflectance in the mid-IR channel because it absorbs most of this radiation. As such, the greater the water content in the soil, the lower its reflectance will be, especially in the mid-IR channel.




Data Sources

UWEC Department of Geography and Anthropology
 

Saturday, May 3, 2014

Lab 7: Photogrammetry

Introduction

In this lab, students will learn about and experiment with multiple photogrammetric operations. Students will calculate photographic scale and relief displacement, use Erdas Imagine to measure area and perimeter, experiment with stereoscopy by creating an anaglyph, and perform orthorectification on satellite imagery.

Methods

Part 1: Scales, measurements and relief displacement

Photographic scale
Photographic scale, the relationship between a distance observed on a vertical aerial image and its corresponding distance in the real world, can be determined in two different ways.

One way is to compare the size of objects measured in the real world with the size of the same objects measured on the image using the equation Scale = pd/gd, where pd is photo distance and gd is ground distance. To convert this fraction into a useful measure of scale, the numerator must be reduced to 1. Therefore, both the numerator (photo distance) and the denominator (ground distance) are divided by the photo distance, which yields a representative fraction of the form 1/x.

The other way to find photographic scale is to use the relationship between the focal length of the sensor lens and the flying height of the craft above ground level. This relationship is defined as Scale = f/H, where f is the focal length and H is the altitude above ground level. If needed, H can be broken down into (H' - h), where H' is the elevation of the craft above sea level and h is the elevation of the terrain.
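Both ways of computing scale reduce to a short calculation. The sketch below, with made-up measurements, converts each into a representative fraction (the helper names and the numbers are illustrative):

```python
from fractions import Fraction

def scale_from_distances(photo_dist, ground_dist):
    """Representative fraction from a measured photo distance and
    ground distance (both in the same units): Scale = pd/gd."""
    return Fraction(1, round(ground_dist / photo_dist))

def scale_from_geometry(focal_len, flying_height_asl, terrain_elev):
    """Representative fraction from focal length and flying height
    (all in the same units): Scale = f / (H' - h)."""
    return Fraction(1, round((flying_height_asl - terrain_elev) / focal_len))

# Hypothetical: 2.7 in on the photo covers 1 mile (63,360 in) on the ground.
print(scale_from_distances(2.7, 63360))        # 1/23467
# Hypothetical: 152 mm lens, craft at 7,500 m ASL, terrain at 500 m.
print(scale_from_geometry(0.152, 7500, 500))   # 1/46053
```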

Relief displacement
When objects are located at elevations greater than or less than the elevation of the photograph's principal point, displacement occurs. At greater elevations, features are displaced away from the principal point; at lower elevations, features are displaced towards it. The extent of the displacement is determined by three factors: (1) the greater the height of the feature, the larger the displacement; (2) the farther the feature is from the principal point, the greater the displacement; and (3) displacement is inversely related to the height of the sensor above the local datum.

Relief displacement (d) is defined as, d = (h*r)/H. Where h is the actual height of the feature, r is the radial distance from the top of the displaced object to the principal point, and H is the height of the sensor above the local datum.
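As a quick worked example of the formula, a hypothetical 150 ft feature imaged 2.0 in from the principal point by a sensor 3,000 ft above the local datum is displaced 0.1 in (the numbers are made up for illustration):

```python
def relief_displacement(obj_height, radial_dist, sensor_height):
    """d = (h * r) / H. Heights h and H share one unit (e.g. feet);
    r and the returned d share the photo unit (e.g. inches)."""
    return obj_height * radial_dist / sensor_height

# 150 ft feature, 2.0 in from the principal point, sensor at 3,000 ft
d = relief_displacement(150, 2.0, 3000)
print(d)   # 0.1
```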

Heads-up digitizing

Figure 1: Erdas Imagine can be used to measure perimeter and area of features on an image. This is done by clicking Measure in the Home tab of the main toolbar (Measure is between Metadata and Paste) to open the Measurement toolbar.



Figure 2: Changing the first button on the left from point to polygon (for area) and polyline (for perimeter), allowed for the proper measurements to be taken by tracing the outside of the desired feature. For this lab, a lagoon was measured.


Part 2: Stereoscopy
Stereoscopy is the science of creating the perception of depth in flat images. To analyze an area with a 3D perspective, Erdas Imagine can be used to create an anaglyph image. An anaglyph is two images of the same scene superimposed on each other in different colors, typically red and cyan. When viewed through red/cyan anaglyph glasses, the flat image will appear to have depth.
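Conceptually, an anaglyph is simple to build: one view drives the red channel and the other drives green and blue. A minimal numpy sketch, assuming two grayscale views of the same scene (this is an illustration of the idea, not the ERDAS implementation):

```python
import numpy as np

def make_anaglyph(left, right):
    """Red/cyan anaglyph from a grayscale stereo pair.

    left, right: (rows, cols) uint8 arrays (same scene, offset viewpoints)
    returns:     (rows, cols, 3) RGB array -- left eye in red,
                 right eye in green and blue (cyan)
    """
    return np.stack([left, right, right], axis=-1)

# Toy 2x2 "views": bright left image, dark right image
left = np.full((2, 2), 200, dtype=np.uint8)
right = np.full((2, 2), 50, dtype=np.uint8)
ana = make_anaglyph(left, right)
print(ana.shape)    # (2, 2, 3)
print(ana[0, 0])    # red from left, green/blue from right
```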


Figure 3: To create an anaglyph in Erdas Imagine, click Terrain and then choose Anaglyph.



Figure 4: The Anaglyph Generation window. Here the input images are added and other parameters can be adjusted. For this lab the Exaggeration was changed from 1 to 2.


Part 3: Orthorectification
Orthorectification is the process of removing positional and elevation (x, y, and z) error from an aerial photograph or satellite image. For this lab, a previously orthorectified image will be used as a reference image to create a planimetrically true orthoimage from an image with substantial error.



Figure 5: To begin, the LPS Project Manager was opened by clicking the Toolbox tab on the main toolbar then choosing the leftmost icon LPS.



Figure 6: A new project must be created by clicking the leftmost icon, Create New Block File (looks like a blank sheet of paper). Here the geometric model options can be modified. For this lab SPOT imagery is being used so the settings were changed accordingly. OK was clicked.



Figure 7: Next, the Block Property Setup window appeared (left). Set... was chosen in the Horizontal section and the Projected chooser window appeared (right). Here the projection was set up accordingly. OK was clicked.



Figure 8: Next, highlight Images in the left column underneath the main toolbar of the LPS Project Manager. Then click the Add frame to the list icon (looks like a piece of paper with an arrow).



Figure 9: Now there is new information in the bottom portion of the window. The red color seen in multiple fields indicates that the sensor settings need to be verified. The Show and Edit Frame Properties icon was clicked, next to the Add frame to the list icon, and Edit... was chosen near the bottom of the window. The Sensor Information window appeared, OK was clicked, and OK was clicked again in the Frame Editor.



Figure 10: The Int field changed to green indicating that the sensor has been verified and the internal orientation information has been supplied.


Next, ground control points (GCPs) can be established. The Start point measurement tool (looks like crosshairs) was clicked and the radio button for Classic Point Measurement Tool was filled. OK was clicked.


Figure 11: The resulting window. This image was taken from the Lab instructions.
A: Main View
B: Tool Palette
C: Detail View
D: Reference Cell Array
E: File Cell Array
F: Reset Horizontal reference source icon
G: Use Viewer As Reference radio button

Before GCPs can be taken, the Reset Horizontal reference source icon (Figure 11 arrow F) must be clicked. For this lab, in the GCP reference source dialog box, the radio button for Image Layer was filled. OK was clicked and the previously orthorectified image was chosen. OK was clicked. The radio button for Use Viewer As Reference (Figure 11 arrow G) was then filled to bring both images onto the screen.



Figure 12: To create a GCP, a suitable area on the reference image was found (left) by using the Select Point icon (looks like an arrow) in the uppermost left corner of the Tool palette (Figure 11 arrow B) to position the inquire boxes around the area of interest. Then using the Create Point icon (looks like crosshairs), next to the Select point icon, the desired position on the reference image was clicked. Then the inquire boxes on the distorted image were moved to the same area as in the reference image and using the Create Point tool a GCP was placed as close as possible to the same spot. Clicking Add, next to the Tool Palette, will create a new blank point.



Figure 13: After two GCPs have been established, the Set automatic (x,y) drive icon can be turned on. This will automatically place a GCP in the distorted image based on the reference. This automatic placement is not perfect and can be off by quite a bit. Therefore, repositioning of the GCP in the distorted image will be necessary but this is helpful in finding the general area of the GCP quickly rather than having to move the inquire boxes around.

As more GCPs are added, their locations can be seen at the bottom of the window. For this lab, nine GCPs were created this way and then the horizontal reference source was changed to another image that covered ground not available in the previous reference image. After 11 GCPs were established, the Reset Vertical Reference Source icon, next to the horizontal reference icon, was clicked. The radio button for DEM was filled and the appropriate DEM of the area was chosen. OK was clicked.



Figure 14: Then, by right-clicking the first point number in the table at the bottom of the window, choosing Select All, and clicking the Update Z Values on Selected Points icon (looks like a blue Z), Z values were generated for each GCP.

Then the Type field for each point was set to Full and the Usage field was set to Control. This was done by right-clicking each field's title and selecting Formula. In the Formula window, the word Full (for Type) or Control (for Usage) was typed into the blank section at the bottom to modify the records accordingly. At this point the Point Measurement tool was saved and closed.



Figure 15: What the LPS Project Manager window looks like after the first image has been rectified.

Back at the LPS Project Manager window, another distorted image was added through the same process, starting at Figure 8 and finishing before Figure 11.



Figure 16: Once the Start Point Measurement tool is selected, GCPs can be established by selecting the Point # field for GCPs that are located on both the first and second images. Points 3, 4, 7, and 11 were not located on the second image. When a point was highlighted, the rectified image on the right automatically shifted to the GCP. Using the inquire boxes, the same area was found manually for the image on the left. Then the Create Point tool was selected and the position on the left image closest to the GCP on the rectified image was clicked. This was repeated for the remainder of the available points.

Next, tie points can be generated. Tie points act like ground control points whose ground coordinates are unknown but which are visually recognizable in both images. They are generated by the computer and, as a result, finish the rectification process much more quickly than manual GCP designation.



Figure 17: The Automatic Tie Point Generation Properties icon (looks like a hand pointing to a blue cross) was clicked to open its associated dialog window. Here tie point generation properties can be modified. For this lab, the settings seen above were used, and the Intended Number of Points/Image, found in the Distribution tab, was set to 40. Run was clicked.

The tie points could then be checked for accuracy. Save was clicked and then the window was closed. Back at the LPS Project Manager window, triangulation could now be performed. 



Figure 18: The Triangulation dialog window was opened by navigating to Edit > Triangulation Properties... For this lab, the settings were changed as seen above in the General tab. In the Point tab, the Type was changed to Same weighted values and the X, Y, and Z values were changed to 15. Run was clicked.



Figure 19: After the triangulation was complete, a Triangulation Summary box (left) appeared. This report was saved as a text file by clicking the Report option and choosing File > Save As... in the resulting Editor window (right). Accept was clicked in the Triangulation Summary and OK was clicked in the Triangulation window. In the LPS Project Manager window, the Ext field is now green indicating that the external orientation information has now been supplied.


Figure 20: Next, orthorectified images will be created by clicking the Start Ortho Resampling Process icon (looks like a square divided into four smaller colored squares). In the Ortho Resampling window, the settings were changed to match the figure above.


Figure 21: Then the Advanced tab was clicked and Add..., near the bottom of the window, was selected. The second image was selected in the new Add Single Output window and the Use Current Cell Sizes option was checked. OK was clicked. Then OK was clicked in the Ortho Resampling window and the process ran to completion.



Figure 22: Here the two ortho images are viewed in the LPS Project Manager. The file name in the left column can be clicked and View selected to see the actual image.




Results

Figure 23: A sample of the anaglyph made in Part 2. Viewing this image through red/cyan anaglyph glasses highlights the differences in elevation around the Chippewa River as it runs through the UW-Eau Claire campus area.



Figure 24: The two orthorectified images overlaying each other. The boundaries of the images and features in the images match up extremely well.


Data Sources

UWEC Department of Geography and Anthropology

Figure 11 taken from lab 7 instructions

Friday, April 18, 2014

Lab 6: Geometric Correction

Introduction

In this lab, students will expand on their knowledge of geometric correction by performing image-to-map and image-to-image rectification on spatially distorted images. These processes are commonly applied to satellite imagery before data extraction or visual analysis is performed. First, a USGS 7.5 minute digital raster graphic (DRG) of Chicago, IL will be used as a reference image for a slightly distorted satellite image of the metropolitan area. Second, a previously rectified satellite image of Sierra Leone, Africa will be used as a reference image for a heavily distorted satellite image of the same area.

Methods

Image-to-Map Rectification
This method uses a map with an established coordinate system as a reference to modify the spatial location of features in a distorted image to match that of the map.


Figure 1: With the distorted image to be rectified in an active window, navigate to Multispectral > Control Points.


Figure 2: The Set Geometric Model window will appear. For this lab, Select Geometric Model is set to Polynomial. Click OK.



Figure 3: Next, the GCP Tool Reference Setup window will appear. For this lab, the default value of Image Layer (New Viewer) is kept. Click OK.



Figure 4: Next, the Reference Image Layer window will appear. Here the reference image will be chosen. For the first part of the lab, the Chicago USGS 7.5 minute DRG is selected. After clicking OK, a window will appear indicating the coordinate reference system of the image. Click OK. Then the Polynomial Model Properties (No File) window will appear. For the first part of this lab, the Polynomial Order is set to 1; all other default values were accepted and the window was closed.


Figure 5: The Multipoint Geometric Correction window will now appear with both the distorted image and the reference image located within. Each image is portrayed in three different scales. In the upper right, the image is at full scale. In the upper left, the image is zoomed to the extent of the inquire box. Below the two smaller upper windows is a larger window where the image can be manually zoomed and panned. This larger window is where the ground control points (GCPs) will be added.



Figure 6: To add a GCP, select the Create GCP tool on the Geometric Correction toolbar and click an area on the distorted image. Then select the Create GCP tool again and click the same area in the reference image. After the desired number of GCPs have been added, zoom in and reposition the pairs of points so they match spatially as closely as possible. Continue this process for all GCPs until the RMS error, read in the bottom right corner of the window, is less than 2 (a requirement for this lab). For the first part of this lab, four GCPs were collected and the RMS error was reduced to 0.124.
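The RMS error ERDAS reports is the root-mean-square distance between where the fitted polynomial places each source GCP and where its reference point actually sits. A numpy sketch of the first-order case, using hypothetical GCP pairs that happen to fit an affine transform exactly (so the RMS is essentially zero; real GCPs leave a residual like the 0.124 reported above):

```python
import numpy as np

def first_order_fit(src, ref):
    """Least-squares first-order (affine) polynomial mapping
    src (N, 2) pixel coords to ref (N, 2) map coords."""
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef, *_ = np.linalg.lstsq(A, ref, rcond=None)
    return coef                    # (3, 2): one column per output axis

def rms_error(src, ref, coef):
    """Root-mean-square residual of the fitted transform over the GCPs."""
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    resid = A @ coef - ref
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

# Hypothetical GCP pairs: ref is exactly x' = 1000 + x, y' = 5000 - y
src = np.array([[10., 10.], [400., 12.], [405., 390.], [8., 395.]])
ref = np.array([[1010., 4990.], [1400., 4988.], [1405., 4610.], [1008., 4605.]])
coef = first_order_fit(src, ref)
print(round(rms_error(src, ref, coef), 6))   # 0.0
```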



Figure 7: Once all GCPs are added and the RMS error is low enough, select the Multipoint Geometric Correction tool on the Geometric Correction toolbar. The Resample window will appear. For the first part of the lab, Nearest Neighbor is chosen as the Resample Method and all other default values were accepted. A rectified output image is generated.




Image-to-Image Rectification
This method uses a previously rectified image as a reference to modify the spatial location of features in a distorted image to match that of the rectified image.



Figure 8: The process for image-to-image rectification is the same as for image-to-map rectification. Because the image is so heavily distorted, a third degree polynomial will be used instead of the first degree polynomial used in the first part of the lab. In the Polynomial Model Properties, change the Polynomial Order to 3.



Figure 9: Because a third degree polynomial was used, the minimum number of control points increased from 3 (for first degree) to 10; in general, a polynomial of order t requires (t + 1)(t + 2)/2 control points. For the second part of this lab, 10 GCPs were added and the RMS error was reduced to 0.0916.
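The minimum GCP count follows from the number of coefficients in a bivariate polynomial of order t, which is (t + 1)(t + 2)/2:

```python
def min_gcps(order):
    """Minimum number of GCPs for a polynomial geometric model:
    one per coefficient, (t + 1)(t + 2) / 2 for order t."""
    return (order + 1) * (order + 2) // 2

print(min_gcps(1), min_gcps(2), min_gcps(3))   # 3 6 10
```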



Figure 10: After selecting the Multipoint Geometric Correction tool again, the Resample Method was set to Bilinear Interpolation. Click OK and a rectified output image is generated.




Results



Figure 11: The resulting image over the distorted image with the swipe tool activated enabling both images to be viewed and compared. The section of river highlighted within the yellow circle showcases the difference between the two images.




Figure 12: The resulting image is the lighter colored image. On the left is the output image over the distorted image, the difference between the two is evident. On the right is the output image over the reference image, the two seem to be spatially identical.



Data Sources
UWEC Department of Geography and Anthropology

Wednesday, April 16, 2014

Lab 5: Miscellaneous Image Functions 2 and Image Mosaic

Introduction

In this lab, students will build on their knowledge of analytical processes in remote sensing by exploring more image processing functions provided in ERDAS IMAGINE 2013. Students will experiment with spatial and spectral image enhancement, band ratio, binary change detection, and image mosaic.

Methods

Spatial Enhancement:
Spatial enhancement techniques improve the appearance of imagery, mainly for visual analysis purposes, by amplifying subtle differences in brightness that are not easily perceived by the human eye. This can be accomplished through spatial filtering, which adjusts the imagery's spatial frequency: the change in brightness value per unit of distance for any specific area in the imagery. A low frequency image has few changes in brightness values over a given area, while a high frequency image has significant changes. Spatial frequency can be increased or decreased depending on the nature of the analysis.
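A low pass (mean) filter is just a moving average over a kernel window. A simplified 3x3 sketch in numpy (ERDAS's 5x5 Low Pass kernel works the same way with a larger window; zero padding here stands in for its edge handling options):

```python
import numpy as np

def box_filter_3x3(img):
    """3x3 low-pass (mean) filter with zero-padded edges."""
    padded = np.pad(img.astype(float), 1)
    rows, cols = img.shape
    out = np.zeros((rows, cols))
    for dr in (-1, 0, 1):          # accumulate the 9 shifted copies
        for dc in (-1, 0, 1):
            out += padded[1 + dr : 1 + dr + rows, 1 + dc : 1 + dc + cols]
    return out / 9.0

# A single bright pixel gets smeared across its 3x3 neighborhood.
img = np.zeros((5, 5))
img[2, 2] = 90
low = box_filter_3x3(img)
print(low[2, 2])   # 10.0
print(low[0, 0])   # 0.0
```

A high pass result can be obtained by subtracting the low pass output from the original, which is why high pass filtering exaggerates local brightness changes.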


Figure 1: To apply a low pass filter, navigate to Raster > Spatial > Convolution. This opens the Convolution window seen above. Different filters can be applied to an input image by selecting the desired filter in the Kernel options. For this lab, a 5x5 Low Pass filter was applied first, followed by a 5x5 High Pass filter. Finally, a 3x3 Laplacian Edge Detection filter was applied with Fill checked under Handle Edges by and Normalize the Kernel unchecked.



Spectral Enhancement:
Spectral enhancement techniques improve the appearance of imagery, mainly for visual analysis purposes, by increasing the contrast in the image. Low contrast imagery can result from detector saturation or spectral similarity between features. There are two types of spectral enhancement: linear and non-linear. In this lab, the linear methods used are the minimum-maximum contrast stretch and the piecewise contrast stretch. Both of these methods stretch the histogram of the image from a low-contrast state to the entire range of brightness values (0-255 for 8-bit images); however, minimum-maximum should be used for Gaussian histograms and piecewise for non-Gaussian ones. One non-linear method is used in this lab: histogram equalization. Instead of stretching the histogram like the linear methods, histogram equalization redistributes pixel values so the pixels in the output image are spread evenly across the entire range of brightness values.
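Both enhancement types can be sketched in a few lines of numpy. Below, a min-max stretch and a basic histogram equalization over an 8-bit range, applied to a synthetic low-contrast image (an illustration of the algorithms, not the ERDAS implementation):

```python
import numpy as np

def minmax_stretch(img, out_max=255):
    """Linear min-max contrast stretch to the full 0..out_max range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * out_max

def hist_equalize(img, levels=256):
    """Histogram equalization: map each value through the normalized
    cumulative histogram so output values spread over the full range."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / img.size
    return np.round(cdf[img.astype(int)] * (levels - 1))

# Synthetic low-contrast image occupying only brightness values 100..140
img = np.random.default_rng(0).integers(100, 141, size=(50, 50))
stretched = minmax_stretch(img)
equalized = hist_equalize(img)
print(stretched.min(), stretched.max())   # 0.0 255.0
```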


Figure 2: To apply a minimum-maximum contrast stretch, navigate to Panchromatic > General > General Contrast > General Contrast. The Contrast Adjust window will appear as seen above. Selecting Gaussian as the method and clicking Apply will apply the min-max contrast stretch.



Figure 3: To apply a piecewise contrast stretch, navigate to Panchromatic > General > General Contrast > Piecewise Contrast. The Contrast Tool window will appear as seen above. For this lab, the From: and To: values for low and middle were determined from the image's histogram and the To: value for high was set to 180.



Figure 4: To apply histogram equalization, navigate to Raster > Radiometric > Histogram Equalization. The Histogram Equalization window will appear as seen above. For this lab, all default values were accepted.



Band Ratioing:
Band ratioing is considered a non-linear spectral enhancement technique. By applying different ratios to an image, environmental factors can be reduced, unique information can be obtained, and features and objects can be distinguished in new ways. One commonly used ratio is the normalized difference vegetation index (NDVI), defined as (NIR - Red)/(NIR + Red), which reveals unique information about vegetation.
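NDVI is a one-line ratio of the near-infrared and visible red bands. A small numpy sketch with made-up band values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red). Values near +1 indicate dense
    healthy vegetation, near 0 bare soil, and negative values water."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero

# Hypothetical 2x2 red and NIR bands
red = np.array([[30., 30.], [80., 10.]])
nir = np.array([[120., 120.], [90., 5.]])
print(np.round(ndvi(nir, red), 2))
# [[ 0.6   0.6 ]
#  [ 0.06 -0.33]]
```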



Figure 5: To apply the NDVI band ratio, navigate to Raster > Unsupervised > NDVI. The Indices window will appear as seen above. For this lab, Sensor was set to Landsat TM and Select Function was set to NDVI.



Binary Change Detection (Image Differencing)
Image differencing is used to analyze land cover change between images taken on different dates by subtracting the brightness values of pixels in one image from those in the other. To perform image differencing, both images must have nearly identical radiometric characteristics, identical spatial and spectral resolutions, and must be geometrically rectified.


Figure 6: To apply binary change detection, navigate to Raster > Functions > Two Image Functions. The Two Input Operators window will appear as seen above. In this lab, the operator was changed to minus (-) and Layer was changed to Layer 4 instead of All.


Figure 7: Model Maker can also be used to perform image differencing. Navigate to Toolbox > Model Maker > Model Maker. A window with a new blank model and a smaller window with modeler tools will appear as seen above.



Figure 8: For this lab, two input raster objects were selected and connected to a function, which in turn is connected to an output raster object, as seen above. A simplified version of the function created is (the 2011 image - the 1991 image + 127). The constant is added so that the resulting image's histogram contains only positive numbers.
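The model's arithmetic can be sketched directly in numpy; the +127 offset shifts an 8-bit difference so it stays non-negative (the toy arrays below are illustrative, not the lab's imagery):

```python
import numpy as np

def difference_image(later, earlier, offset=127):
    """Band-for-band image differencing with a constant offset,
    matching the model's (later - earlier + 127) for 8-bit data."""
    diff = later.astype(int) - earlier.astype(int) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

img_1991 = np.array([[100, 100], [50, 200]], dtype=np.uint8)
img_2011 = np.array([[100, 160], [20, 200]], dtype=np.uint8)
d = difference_image(img_2011, img_1991)
print(d)
# [[127 187]
#  [ 97 127]]
```

Unchanged pixels land at 127, brightening pixels above it, and darkening pixels below it.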



Figure 9: Next, Model Maker was used again to showcase just the areas that changed over the 20 year period. A raster object was connected to a function, which was connected to a raster object, as seen above.



Figure 10: The function for this model is more complicated. Functions was set to Conditional and Either () IF () OR () OTHERWISE was chosen. The change/no change threshold was calculated by adding three standard deviations to the mean of the difference image's histogram. Essentially, this function displays values of change and masks values of no change.
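A sketch of that conditional in numpy, assuming a mean ± 3 standard deviation threshold: pixels whose difference value falls outside the threshold are flagged as change (the synthetic difference image below is illustrative):

```python
import numpy as np

def change_mask(diff, n_std=3.0):
    """Binary change mask: flag pixels whose difference value lies
    more than n_std standard deviations from the histogram mean."""
    mean, std = diff.mean(), diff.std()
    return np.abs(diff - mean) > n_std * std

rng = np.random.default_rng(1)
diff = rng.normal(127, 5, size=(100, 100))   # mostly "no change" around 127
diff[0, 0] = 250                             # one strongly changed pixel
mask = change_mask(diff)
print(bool(mask[0, 0]))   # True
```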



Image Mosaic:
Image mosaic is used to combine individual images into one seamless image. This is necessary when an area of interest is large enough to span multiple images or crosses the boundary of two or more images. When mosaicking images, each image must have the same projected coordinate system and an identical number of layers.

ERDAS offers two options for mosaicking images: Mosaic Express and MosaicPro. For this lab, the first image file was opened in a new view, but before adding the image, Multiple Images in Virtual Mosaic was checked in the Multiple section, and Background Transparent and Fit to Frame were checked in the Raster Options section. The same procedure was used for the following image files.
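The histogram matching that MosaicPro applies during color correction can be approximated with the classic CDF-matching algorithm: remap each brightness value in one image to the value with the same cumulative frequency in the other. A numpy sketch on synthetic images (an illustration of the algorithm, not the ERDAS implementation):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source brightness values so its histogram approximates
    the reference image's, via the two cumulative distributions."""
    src_vals, src_counts = np.unique(source, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source value, find the reference value at the same CDF level
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lookup = dict(zip(src_vals, mapped))
    return np.vectorize(lookup.get)(source)

rng = np.random.default_rng(2)
dark = rng.integers(0, 100, size=(64, 64))      # darker scene
bright = rng.integers(100, 200, size=(64, 64))  # brighter neighboring scene
matched = match_histogram(dark, bright)
print(matched.mean() > dark.mean())   # True -- pulled toward the reference
```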


Figure 11: To use Mosaic Express, navigate to Raster > Mosaic > Mosaic Express. The Mosaic Express window will appear as seen above. For this lab, the images to be mosaicked were added in the Input section and all default values for each other section were kept.




Figure 12: To use MosaicPro, navigate to Raster > Mosaic > MosaicPro. The MosaicPro window will appear as seen above. To add images, click the Add Images icon near the save icon. For this lab, Image Area Options was selected before the images were added and Compute Active Area was checked.



Figure 13: Now that the images are added and their outlines are visible in the MosaicPro window, their radiometric properties were synchronized by selecting the Color Corrections icon near the Set Overlap Function icon. The Color Corrections window will appear as seen above. For this lab, Use Histogram Matching was selected and Overlap Areas was set as the matching method by first selecting Set in the Color Corrections window. Next, click on the Set Overlap Function icon and check Overlay. To finish the mosaic, click Process in the MosaicPro window and then Run Mosaic.



Results

Figure 14: Result of the 5x5 Low Pass filter. Using this filter decreased the contrast in the image. As a result, when zoomed in to a large extent, features are blurrier and harder to distinguish in the output image (on the right).


Figure 15: Result of the 5x5 High Pass filter. Using this filter increased the contrast in the image. As a result, the output image has a greater range of brightness values specifically darker tones.



Figure 16: Result of the 3x3 Laplacian Edge filter. Using this filter highlighted areas of sharp contrast and deemphasized areas of less contrast. Rivers and roads become more pronounced.



Figure 17: Result of the histogram equalization. With the histogram equalized, the output image has a greater range of brightness values, stretching across the entire available range, giving it more contrast.



Figure 18: Result of the NDVI band ratio. Applying this band ratio to the image emphasized vegetation over other land covers.


Figure 19: Result of the first binary change method, Two Image Functions. The resulting image's histogram has positive and negative values and is marked with its appropriate change thresholds in red.



Figure 20: Result of running the first model. The resulting image's histogram has only positive values because of the constant that was added to the function equation. Here the change threshold is only located on the upper end.



Figure 21: Map made in ArcMap using the result from the second model. Running the second model resulted in the red areas shown on the map. By bringing that image and the original image into ArcMap and changing the symbology accordingly, the map seen above was created. When comparing this map to the original imagery, it can be inferred that the areas of change correspond to changes in land use, specifically agriculture.



Figure 22: Result of Mosaic Express. This result is not good. The boundary between the images is very evident when ideally it would be seamless.



Figure 23: Result of MosaicPro. This result is much better than the one generated through Mosaic Express. The boundary between the images is much more seamless, and in much of the output image it is difficult to tell that a boundary exists.



Data Sources
UWEC Department of Geography and Anthropology