Friday, December 6, 2013

Spectral Signature Analysis

Goal
Background: The goal of this lab was to gain experience in the measurement and interpretation of spectral reflectance.

Methods
In order to collect the necessary data for this assessment, an image of Eau Claire was brought into Erdas. The "Home" tab was selected, followed by "Drawing", and then the "Polygon" tool was selected in order to create various Areas of Interest (AOIs). After each AOI was drawn, the "Raster" tab was chosen, followed by "Supervised" and then "Signature Editor". This opened the "Signature Editor" window, where the option to create a new signature from the AOI was selected, which brought in the AOI that had just been created with the polygon tool. After each AOI was brought in, its name was changed to its respective feature. The twelve features examined were: 1) Standing Water; 2) Moving Water; 3) Vegetation; 4) Riparian Vegetation; 5) Crops; 6) Urban Grass; 7) Dry Soil (uncultivated); 8) Wet Soil (uncultivated); 9) Rock; 10) Asphalt Highway; 11) Airport Runway; 12) Concrete Surface (Parking Lot).

After the AOI was brought into Signature Editor, the "Display Mean Plot Window" option was selected, which allowed the spectral signature to be observed. Conceptually, each mean plot is simply the per-band average of the pixel values inside the AOI, as sketched below. Following the sketch are Figures 1-13, which show the spectral signatures of the twelve features, plus one figure showing all of them on the same plot.
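A minimal NumPy version of that calculation (the array, band count, and AOI mask below are hypothetical stand-ins, not the lab data):

    import numpy as np

    # image: (bands, rows, cols) pixel values; mask: booleans marking one AOI.
    rng = np.random.default_rng(0)
    image = rng.integers(0, 255, size=(6, 100, 100)).astype(float)  # six TM bands
    mask = np.zeros((100, 100), dtype=bool)
    mask[40:60, 40:60] = True  # stand-in for a digitized AOI polygon

    # The mean spectral signature is the per-band average of the masked pixels.
    signature = image[:, mask].mean(axis=1)
    for band, value in enumerate(signature, start=1):
        print(f"Band {band}: mean value = {value:.1f}")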

Figure 1: This plot shows the spectral signature for Standing Water. The most reflectance is seen in the blue and green bands and the least reflectance is seen in the red and infrared bands.

Figure 2: This plot shows the spectral signature for Moving Water. The most reflectance is seen in the blue band and the least reflectance is seen in the infrared band.

Figure 3: This plot shows the spectral signature for Vegetation. The most reflectance is seen in the red and infrared bands and the least reflectance is seen in the blue and green bands.

Figure 4: This plot shows the spectral signature for Riparian Vegetation. The most reflectance is seen in the red and infrared bands and the least reflectance is seen in the blue and green bands.

Figure 5: This plot shows the spectral signature for Crops. The most reflectance is seen in the red and infrared bands and the least reflectance is seen in the blue and green bands.

Figure 6: This plot shows the spectral signature for Urban Grass. The most reflectance is seen in the red and infrared bands and the least reflectance is seen in the blue and green bands. The background was changed for this plot to make the spectral signature line more visible.

Figure 7: This plot shows the spectral signature for Dry Soil. The most reflectance is seen in the red and mid infrared bands and the least reflectance is seen in the blue band.

Figure 8: This plot shows the spectral signature for Wet Soil. The most reflectance is seen in the red and infrared bands and the least reflectance is seen in the blue and green bands. The background was changed for this plot to make the spectral signature line more visible.

Figure 9: This plot shows the spectral signature for Rock. The most reflectance is seen in the green and infrared bands and the least reflectance is seen in the red band. 

Figure 10: This plot shows the spectral signature for Asphalt Highway. The most reflectance is seen in the green and infrared bands and the least reflectance is seen in the red band. 

Figure 11: This plot shows the spectral signature for Airport Runway. The most reflectance is seen in the green and infrared bands and the least reflectance is seen in the red band. 

Figure 12: This plot shows the spectral signature for Concrete Surface. The most reflectance is seen in the blue, green, and infrared bands and the least reflectance is seen in the red band. 

Figure 13: This shows all of the spectral signatures on the same plot. Trends can be seen between water surfaces, vegetated surfaces, and non-vegetated surfaces in relation to their spectral signatures.

Wednesday, December 4, 2013

Photogrammetry

Goal
Background: The goal of this lab was to develop skills to perform photogrammetric tasks on aerial photographs and satellite images.

Methods
Scales, Measurements, and Relief Displacement:
Section 1: Determining scale is an essential part of interpreting distance on maps. For the first portion of this assignment, real-world points and their distances were provided and the scale had to be determined from them. The distance from point to point was measured with a ruler, and the real-world distance was multiplied by 12 to convert it to inches and then divided by the number of inches measured with the ruler. The resulting number is the denominator of the representative fraction, or the scale, of the map (Figure 1).

Figure 1: For this method, the distance from Point A to Point B was measured with a ruler and, using the given real-world distance, a scale was determined for this image.


Scale can also be determined using the focal length of the camera lens, the altitude of the camera above sea level, and the elevation of the terrain above sea level: scale = focal length / (flying height - terrain elevation).
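Both calculations reduce to simple ratios, as the short sketch below shows (every number is a placeholder, not a value from the lab):

    # Method 1: scale from a measured photo distance and a known ground distance.
    photo_in = 2.5                  # distance measured on the photo with a ruler (inches)
    ground_ft = 8333.0              # given real-world distance (feet)
    scale_denom = ground_ft * 12 / photo_in  # feet to inches, then divide
    print(f"Scale is roughly 1:{scale_denom:,.0f}")

    # Method 2: scale from focal length and flying height above the terrain.
    focal_ft = 152.0 / 304.8        # a 152 mm lens expressed in feet
    altitude_ft = 20000.0           # camera altitude above sea level
    elevation_ft = 796.0            # terrain elevation above sea level
    scale_denom2 = (altitude_ft - elevation_ft) / focal_ft
    print(f"Scale is roughly 1:{scale_denom2:,.0f}")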

Section 2: Another way to determine the area of a given feature in Erdas is to use the "Measure" tool. This tool is found under the "Home" tab and allows the analyst to measure lengths, perimeters, and areas. The polygon tool was selected and a lagoon to the west-southwest of Eau Claire was digitized. This was used to determine the area in hectares and acres, as well as the perimeter in meters and miles (Figure 2).

Figure 2: This shows the lagoon, denoted by the X, that was digitized in order to determine area in hectares and acres, and perimeter in meters and miles.
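What the Measure tool reports can be reproduced for any digitized polygon with the shoelace formula and a few unit conversions; a sketch with made-up vertex coordinates in meters:

    import numpy as np

    # Hypothetical polygon vertices in projected map coordinates (meters).
    x = np.array([0.0, 400.0, 450.0, 60.0])
    y = np.array([0.0, 30.0, 380.0, 350.0])

    # Shoelace formula gives the area; summed segment lengths give the perimeter.
    area_m2 = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim_m = np.hypot(np.diff(np.append(x, x[0])), np.diff(np.append(y, y[0]))).sum()

    print(f"Area: {area_m2 / 10_000:.2f} ha, {area_m2 / 4046.856:.2f} acres")
    print(f"Perimeter: {perim_m:.1f} m, {perim_m / 1609.344:.3f} miles")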


Section 3: Determining relief displacement allows the analyst to determine how far an object appears from its true planimetric location on the ground through the use of aerial photographs. The scale of the image was given, as was the camera height above the datum. The height of the smokestack was measured on the photo and the scale was used to convert it to the smokestack's approximate real-world height. Then, the distance from the top of the smokestack to the principal point was measured and converted to an approximate real-world distance. The numbers were plugged into the relief displacement equation and the approximate displacement was determined (Figure 3).

Figure 3: The smokestack and Point X were used to determine relief displacement in the image. The principal point is located in the upper left corner. The line of flight is denoted by the blue line.
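The relief displacement equation itself is d = r x h / H, where d is the displacement on the photo, r is the radial distance from the principal point to the top of the feature, h is the feature's height, and H is the camera height above the datum. As a sketch (placeholder numbers, not the lab's):

    # Relief displacement: d = r * h / H, with everything in consistent units.
    H = 3980.0   # camera height above the datum (feet) - placeholder
    h = 380.0    # object height, estimated from the photo via the scale - placeholder
    r = 10.5     # radial distance from principal point to the object's top - placeholder

    d = r * h / H
    print(f"Relief displacement: {d:.3f} (same units as r)")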


Stereoscopy:
In order to allow for three-dimensional viewing of an image, which is what stereoscopy provides, multiple images were brought in for this exercise. The "Terrain" tab was selected, followed by "Anaglyph", in order to open the "Anaglyph Generation" window. The correct input and output images were set, the vertical exaggeration of the image was increased to 2, and the model was run (Figure 4).

Figure 4: In order to properly see the resulting image, red/blue anaglyph glasses were worn, which allowed the analyst to see the image in three dimensions.
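An anaglyph works by putting the left-eye view into the red channel and the right-eye view into the green and blue channels, so the tinted lenses deliver a different image to each eye. A minimal grayscale sketch (the random arrays stand in for two overlapping aerial photos):

    import numpy as np

    rng = np.random.default_rng(1)
    left = rng.integers(0, 255, size=(512, 512), dtype=np.uint8)   # left-eye photo
    right = rng.integers(0, 255, size=(512, 512), dtype=np.uint8)  # right-eye photo

    # Red channel from the left view; green and blue from the right view.
    anaglyph = np.dstack([left, right, right])
    print(anaglyph.shape)  # (512, 512, 3), ready to save or display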

Orthorectification:
Section 1: The final portion of this lab allows the simultaneous rectification of positional and elevation errors across multiple images. To begin the process of orthorectification, Leica Photogrammetry Suite (LPS) was opened through Erdas. This program is used for a variety of purposes, one being the orthorectification of images collected by various sensors. A new block file was created and the "Model Setup" window was opened. The correct geometric model category was selected, the correct projection was applied, and the model was prepared for orthorectification.

Section 2: An image was brought in and "Show and Edit Frame Properties" was selected, followed by "Edit", then "OK" twice. This process specifies the sensor model that applies to the image.

Section 3: Now the main process of recording ground control points was carried out. The GCP icon was selected and "Classic Point Measurement Tool" was chosen as the method. "Reset Horizontal Reference Source" was selected, which opens the GCP Reference Source dialog. "Image Layer" was checked, "OK" was selected, and "Use Viewer As Reference" was selected. The GCPs were then collected in much the same manner as they had been in the previous labs: points were added, the GCP was selected, and the corresponding GCP was selected on the referencing image. After the second point, "Automatic (x,y) Drive" was selected, which allows LPS to approximate where each GCP falls on the other image for quicker GCP collection. This was done for nine ground control points. The points were then saved and the last two points were ready to be added. Again, the "Reset Horizontal Reference Source" icon was selected, a new image was brought in, and the last two points were collected.

Now that the Horizontal Reference Source was set, the Vertical Reference Source needed to be set as well. The "Reset Vertical Reference Source" icon was selected and a Digital Elevation Model was brought in to supply elevation data. The "Update Z Values on Selected Points" icon was selected and all of the Z values (elevation data) were updated.

Section 4: The "Type" column was selected and "Formula" was opened in order to set the type. In this case the type was set to "Full" and the change was applied. The same process was carried out for "Usage", which was set to "Control". The data was saved and the Point Measurement Tool was closed. Next, a second image was brought in and the same workflow was carried out to prepare it as had been done for the first image. The GCPs for this image were added as instructed by the guidelines, since some of the points collected on the previous image were not present in this one (Figure 5).

Figure 5: This shows the status of orthorectification, thus far. The image is still somewhat tilted, as it has not been fully
rectified yet.

Section 5: "Automatic Tie Point Generation Properties" was selected, opening a dialog. The necessary characteristics were input, the "Intended Number of Points/Images" was set to forty, and the process was run. Building on the GCPs already collected, this step automatically places many more tie points and pins the image down even more accurately, allowing for triangulation of the various components. "Edit", then "Triangulation Properties", was selected to open a dialog; the necessary characteristics were input and a report for the data was generated.

This finally prepared the data to be resampled. "Start Ortho Resampling Process" was selected and the correct characteristics were input. Bilinear interpolation was the resampling method used for this process. After all of the settings were specified, the model was run and the orthorectified image was produced.
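Bilinear interpolation estimates each output pixel from its four nearest input pixels, weighted by distance. A toy version of the calculation for a single point (libraries such as scipy apply the same rule across a whole grid):

    import numpy as np

    def bilinear(img, x, y):
        """Sample img at fractional coordinates (x = column, y = row)."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        # Distance-weighted average of the four surrounding pixels.
        return (img[y0, x0] * (1 - dx) * (1 - dy)
                + img[y0, x0 + 1] * dx * (1 - dy)
                + img[y0 + 1, x0] * (1 - dx) * dy
                + img[y0 + 1, x0 + 1] * dx * dy)

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(bilinear(img, 1.5, 2.25))  # 10.5, a blend of the four neighbors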

Section 6: After orthorectification, the images match up properly and can be compared (Figure 6). The images were brought into the same viewer and examined to ensure that the various features match (Figures 7 and 8).

Figure 6: After the orthorectification process was complete, the image is rectified to the image above. This shows that the image is properly rectified and is not improperly referenced.

Figure 7: This image is the result of the orthorectification process. After properly analyzing and assigning values to the images, they blend together very accurately.


Figure 8: This zoomed in view of the orthorectified image
shows that, while there are very small details that may not
match up perfectly, the orthorectification process leads to
a very accurate image mosaic.

Results
This lab provided an extremely in-depth look at photogrammetric processes. An understanding was gained of how scales are formulated for aerial imagery, how stereoscopy and stereograms are created in order to show three-dimensional models, and how multiple photographs can be seamlessly mosaicked through orthorectification.

Thursday, November 21, 2013

Geometric Correction

Goal
Background: The goal of this lab was to gain experience in the preprocessing exercise of geometric correction.

Methods
Image-to-Map:
Method: For this method, an aerial image and a map image of Chicago were used. In order to properly begin rectifying, the second viewer must contain the image to be rectified. After this, the Multispectral tab was opened and the Control Points button was selected from the toolbar. Polynomial was selected from the newly opened Set Geometric Model window, and the default settings were accepted in the windows that followed. The second image was then brought in for rectification, again accepting the defaults. As this is a first-order polynomial, only a few points are needed to gain ample rectification of the imagery. In order to start entering ground control points (GCPs), the default GCPs were deleted, the images were fitted to screen, and the Create GCP tool was selected. After placing the first point, the button was clicked again and a control point was placed on the second image in roughly the same spot. This was done four times, with a change occurring after the third GCP placement: the bar on the bottom of the window read "Model solution is current", and when the next point was plotted, it was plotted on both images at once. After the GCPs were placed, the accuracy was refined. The Root Mean Square (RMS) error shows how accurate the rectification is. For this image, the analyst was able to get the RMS error down to 0.4850; the recommended RMS value for first-order polynomials is 2.0. The result can be seen in Figure 1.

Figure 1: Image-to-map rectification. This image shows the rectified image, on the right, and the image that was used to rectify it on the left.

Once the RMS value had been lowered, the geometric correction was carried out by selecting Multipoint Geometric Correction and then Display Resample Image Dialog.
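The reported RMS error is just the root mean square of the GCP residuals after the polynomial is fit. A sketch with a least-squares first-order (affine) fit over made-up coordinate pairs:

    import numpy as np

    # Hypothetical GCPs: source (pixel) and reference (map) coordinates.
    src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 175.0]])
    ref = np.array([[100.0, 900.0], [480.0, 910.0], [470.0, 580.0], [110.0, 575.0]])

    # First-order polynomial: ref = A @ [x, y, 1], solved by least squares.
    G = np.column_stack([src, np.ones(len(src))])
    coeffs, *_ = np.linalg.lstsq(G, ref, rcond=None)

    residuals = G @ coeffs - ref
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    print(f"Total RMS error: {rms:.4f}")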

Image-to-Image:
Method: This method was carried out in much the same way as the image-to-map rectification. An image of Sierra Leone was brought in for this model. The difference with this method was that, instead of a first-order polynomial, a third-order polynomial was used. This was chosen for greater accuracy, and it requires at least ten points instead of just three (the count rule is sketched after Figure 2). All of the points were scattered throughout the map to minimize distortion, and the RMS value was lowered below 0.5; in this case, the analyst was able to lower the RMS error to 0.2805. This can be seen in Figure 2.


Figure 2: Image-to-image rectification. This image shows the rectified image, on the right, and the image that was used to rectify it on the left.
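The point counts follow from the number of coefficients: an order-t polynomial in x and y has (t + 1)(t + 2) / 2 terms, so a first-order fit needs at least 3 GCPs and a third-order fit at least 10. A one-liner makes the pattern clear:

    # Minimum GCPs for a 2-D polynomial transformation of order t.
    for t in (1, 2, 3):
        print(f"Order {t}: at least {(t + 1) * (t + 2) // 2} GCPs")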

Results
This lab was a great start to understanding geometric correction. A few steps were repeated in order to gain a better understanding, but in the end the initial processes of geometric correction were understood.

Tuesday, November 12, 2013

Image Mosaicking and Miscellaneous Image Functions 2

Goal
Background: The goal of this assignment was to familiarize the user with RGB to IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. 

Methods
RGB to IHS Transformations:
Purpose: Transforming red, green, and blue (RGB) to intensity, hue, and saturation (IHS) allows for the transformation from IHS back to RGB, which stretches the image and displays colors more closely to how they are perceived by the human eye.
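ERDAS's IHS math differs in its details, but the closely related HSV transform in matplotlib shows the round-trip idea; the fact that the inverse recovers the original is what makes the stretch-and-return workflow possible:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    # A tiny stand-in image: (rows, cols, 3) RGB values scaled to 0-1.
    rng = np.random.default_rng(2)
    rgb = rng.random((4, 4, 3))

    hsv = rgb_to_hsv(rgb)          # forward transform (hue, saturation, value)
    back = hsv_to_rgb(hsv)         # inverse transform recovers the original
    print(np.allclose(rgb, back))  # True: the round trip is lossless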

Method 1: An image of Eau Claire from 2000 was used to demonstrate RGB to IHS transformation. The Raster tab was selected, followed by Multispectral. The color bands were set: Band 3 on the Red Color Gun, Band 2 on the Green Color Gun, and Band 1 on the Blue Color Gun (Figure 1).

Figure 1: An image of the Eau Claire area taken in 2000 with Band 3 set to
the red color gun, Band 2 set to the green color gun, and Band 1
set to the blue color gun.

This combination gave the image a blue hue throughout the entirety of the image. Still under the Raster tab, Spectral was selected, followed by RGB to IHS. The correct input and output values were selected, the model was run, and the image was created. This new image was a variation of very bright red and green colors. This image can be seen in Figure 2.

Figure 2: An image of the Eau Claire area taken in 2000 after an RGB to IHS transformation has been applied.

Method 2: Using the image from the RGB to IHS transformation, the next step was to transform it back using an IHS to RGB transformation. This is done by clicking on Raster, then Spectral, then IHS to RGB. The correct input and output values were selected, the model was run, and a new image was created. This image can be seen in Figure 3.

Figure 3: An image of the Eau Claire area taken in 2000 after an IHS to RGB transformation has been applied.

The next step was to stretch this image to make it more conducive to viewing as the human eye would. This was done by opening the newly transformed image, clicking Raster, then Spectral, and then Stretch I&S. The stretched image appears much like the original image with the color guns switched; however, when looking at the histograms of these images, the values are distributed much more appropriately for the spectrum that the human eye sees. This image can be seen in Figure 4.

Figure 4: An image of the Eau Claire area taken in 2000 after the image has been stretched.

Image Mosaicking:
Purpose: Image mosaicking allows a larger study area to be viewed than is available from the spatial extent of just one satellite image.

Method 1: In order to begin this process, the images had to be brought in a certain way. First the "Open" menu was selected and the first image was chosen, but not yet loaded into the viewer. With the Open menu still open, the "Multiple" tab was selected, then "Multiple Images in Virtual Mosaic". Then "Raster Options" was selected, and both "Background Transparent" and "Fit to Frame" were checked. From here the image was loaded into the viewer, and the same process was carried out with the second image. This image can be seen in Figure 5.

Figure 5: This is the image formed after the initial setup has been completed for the image mosaic methods.

After this initial setup was carried out, the first image mosaicking method could be performed. The Raster tab was selected, then Mosaic, and MosaicExpress from the drop-down menu. When the MosaicExpress window appeared, the folder icon was selected in order to input the images that were already open in the viewer, and they were loaded in the specified order. Then the correct input and output values were selected, the model was run, and the new image was created. That image can be seen in Figure 6. A clear difference exists between the left image and the right image: the color difference does not allow the images to blend together.

Figure 6: This is the image created after MosaicExpress has been used to create a mosaicked image.

Method 2: The images were brought in using the same virtual-mosaic setup described in Method 1, ending with both images loaded in the viewer. This can be seen again in Figure 5.

After this initial setup, the second image mosaicking method could be performed. The Raster tab was selected, then Mosaic, and MosaicPro from the drop-down menu. This brings up the MosaicPro window, as seen in Figure 7.

Figure 7: This image shows the MosaicPro window and the presence of the two images that were brought into the
viewer. This image was extracted from the assignment handout.

From this window, after experimenting with some of the buttons in order to better understand their functions, the next step was carried out. "Color Corrections" was selected from the top icon bar, as well as "Use Histogram Matching". Then "Set" was selected and "Overlap Areas" was chosen from the drop-down menu. After ensuring that the Overlap Function was correct by opening it and selecting "Overlay", the image was ready to be mosaicked. "Process" was selected in the MosaicPro window, then "Run Mosaic". The correct input and output values were selected, the model was run, and the new image was created. This image can be seen in Figure 8; it is much more smoothly blended than the mosaic created using MosaicExpress.

Figure 8: This is the image created after MosaicPro has been used to create a mosaicked image. 

Using Figure 9, we can see that instead of the very noticeable difference in color in the MosaicExpress image, located on the right, the MosaicPro image, located on the left, is much more cleanly blended: the overlap between the images transitions smoothly, which is not the case in the MosaicExpress image.


Figure 9: The image on the left is the MosaicPro image, while the image on the right is the MosaicExpress image.
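The "Use Histogram Matching" option is what makes the MosaicPro seam disappear: one image's brightness values are remapped so that its cumulative histogram lines up with its neighbor's. A compact NumPy sketch with stand-in images:

    import numpy as np

    rng = np.random.default_rng(6)
    source = rng.normal(100, 20, (128, 128)).clip(0, 255)    # darker scene
    template = rng.normal(140, 30, (128, 128)).clip(0, 255)  # brighter neighbor

    # Line up the source CDF with the template CDF, then remap pixel values.
    s_values, s_counts = np.unique(source, return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size

    matched_values = np.interp(s_cdf, t_cdf, t_values)
    matched = np.interp(source.ravel(), s_values, matched_values).reshape(source.shape)
    print(matched.mean())  # now close to the template's mean of ~140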

Band Ratioing:
Purpose: Band ratioing is used to highlight subtle variations in spectral responses of various surface covers.

Method: An image of the Eau Claire area from 2000 was used to demonstrate band ratioing. First, the Raster tab was selected, followed by Unsupervised, and NDVI (Normalized Difference Vegetation Index) from the drop-down menu. The correct input and output values were selected, as was the correct Landsat TM option. The model was run and the new image was created; it can be seen in Figure 10. In this image, the very light areas represent vegetation, the very dark areas represent water, and the medium gray areas denote urbanization.

Figure 10: This image of Eau Claire from 2000 shows the normalized
difference vegetation index (NDVI).
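NDVI is the normalized ratio (NIR - Red) / (NIR + Red): healthy vegetation reflects strongly in the near infrared and absorbs red, so vegetated pixels land near +1 while water lands near -1. A short NumPy version with stand-in bands:

    import numpy as np

    rng = np.random.default_rng(3)
    red = rng.random((100, 100)) * 255  # stand-in for Landsat TM band 3
    nir = rng.random((100, 100)) * 255  # stand-in for Landsat TM band 4

    # Normalized Difference Vegetation Index, guarding against divide-by-zero.
    denom = np.where(nir + red == 0, 1, nir + red)
    ndvi = (nir - red) / denom
    print(ndvi.min(), ndvi.max())  # values fall between -1 and +1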

Spatial and Spectral Image Enhancement:
Purpose: Spatial and spectral image enhancement is used to sharpen and clarify images using a variety of different methods, including high and low pass filters, edge enhancement, piecewise contrast adjustments, and histogram equalizations.

Method 1: This method uses a low-pass filter. First, an image of Chicago was brought into the viewer. Then the Raster tab was selected, followed by Spatial, and Convolution from the drop-down menu. Under "Kernel", the option "5x5 Low Pass" was selected as the filter to be applied. The correct input and output values were selected and the model was run. This image can be seen in Figure 11. There does not appear to be much difference between the images at the full-extent view, except that the new image (shown on the right) seems marginally brighter.

Figure 11: This image shows the full extent of the Chicago area after low pass filtering has been administered. The image on the left is the original image and the image on the right is the image that has been filtered. 

After zooming in, it is apparent that there is a substantial difference between the images (Figure 12). The images were synced and zoomed in until the difference was apparent. Once again, the image on the left is the original image, while the image on the right is the low-pass filtered image. The apparent detail is substantially lower in the low-pass filtered image than in the unfiltered original.

Figure 12: This image shows the zoomed in view of the Chicago area after low pass filtering has been administered. The image on the left is the original image and the image on the right is the image that has been filtered. 

Following the application of the low-pass filter, the images were cleared and another image, this time of Sierra Leone, was brought in. The next step was to demonstrate the use of a high-pass filter. For this, the same process was carried out as for the low-pass filter, except for the choice of "Kernel": in this case, "5x5 High Pass" was selected, the model was run, and the image was created. This image can be seen in Figure 13. The new image is much darker overall, while the lighter portions of the image were brightened.

Figure 13: This image shows the full extent of the Sierra Leone area after high pass filtering has been administered. The image on the left is the original image and the image on the right is the image that has been filtered. 

After zooming in, it is apparent that there is a substantial difference between the images (Figure 14). The images were synced and zoomed in until the difference was apparent. Once again, the image on the left is the original image, while the image on the right is the high-pass filtered image. With the high-pass filter, the darker parts were darkened and the lighter parts were lightened, giving the entire image a much crisper look, which is apparent when comparing the images while zoomed in.

Figure 14: This image shows the zoomed in view of the Sierra Leone area after high pass filtering has been administered. The image on the left is the original image and the image on the right is the image that has been filtered. 
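Both filters are ordinary convolutions; only the kernel differs. A 5x5 low-pass kernel replaces each pixel with its neighborhood mean (smoothing), while a high-pass kernel boosts the center pixel against that mean (sharpening). A sketch using textbook kernel weights (not necessarily ERDAS's exact coefficients):

    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(4)
    img = rng.random((64, 64)) * 255  # stand-in single-band image

    # 5x5 low pass: every weight is 1/25, so each pixel becomes the local mean.
    low_kernel = np.full((5, 5), 1 / 25)
    low = convolve(img, low_kernel, mode="nearest")

    # 5x5 high pass: center-heavy kernel (weights sum to 1) amplifies local contrast.
    high_kernel = np.full((5, 5), -1 / 25)
    high_kernel[2, 2] += 2.0
    high = convolve(img, high_kernel, mode="nearest")

    print(low.std() < img.std() < high.std())  # smoothing lowers contrast; sharpening raises it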

Method 2: This method applies edge enhancement to an image. First, an image of Sierra Leone was brought into the viewer. Then the Raster tab was selected, followed by Spatial, and Convolution from the drop-down menu. Under "Kernel", the option "3x3 Laplacian Edge Detection" was selected as the filter to be applied. The "Fill" option was checked under "Handle Edges by" and "Normalize the Kernel" was unchecked. The correct input and output values were selected and the model was run. The new image can be seen in the right panel of Figure 15. Zoomed out, it does not seem to reveal much.

Figure 15: This image of Sierra Leone has had edge enhancement applied to it. The original image is on the left and the edge-enhanced image is on the right.

Zoomed in, much more is apparent in the image (Figure 16). There is not as much blurring along the edges of features, such as the lake in the middle of the image, and the color difference makes the edges of features much more apparent.

Figure 16: This is a zoomed in view of the Sierra Leone image after edge enhancement has been applied.
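Laplacian edge detection is also a convolution, but with a kernel whose weights sum to zero, so flat areas map to zero and abrupt brightness changes stand out. A common 3x3 version (ERDAS's exact coefficients may differ):

    import numpy as np
    from scipy.ndimage import convolve

    laplacian = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)

    rng = np.random.default_rng(5)
    img = rng.random((64, 64)) * 255
    edges = convolve(img, laplacian, mode="nearest")
    print(abs(edges).mean())  # near zero in flat areas, large along edges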

Method 3: This method applies a minimum-maximum contrast stretch and a piecewise contrast stretch to an image. The minimum-maximum contrast stretch is initiated by selecting the Panchromatic tab, then General Contrast, and then General Contrast again from the drop-down menu. When the Contrast Adjust window opens, the Method tab is selected and "Gaussian" is chosen.

The piecewise contrast stretch is initiated by selecting the Panchromatic tab, then General Contrast, and then Piecewise Contrast. Middle is selected under Range Specifications and then the last mode is changed to 180. Figure 17 shows the piecewise contrasted image. There is not much of a difference from the full extent view.

Figure 17: This is an image of the Eau Claire area after a piecewise contrast stretch has been applied to it.

After zooming in it is apparent that the stretched image has been darkened slightly. The image on the left panel of Figure 18 shows this stretch. 

Figure 18: The image on the left shows the stretched image, while the image on the right shows the original image.


Method 4: This method is an example of histogram equalization. The image used is of the Eau Claire area and is the red band of a Landsat TM image captured in 2011. The Raster tab is selected, followed by Radiometric, then Histogram Equalization from the drop-down menu. The correct input and output values were selected, the function was run, and the image was created. This image can be seen in Figure 19. The image on the left is the original image, while the image on the right is the newly created image.

Figure 19: This image of the Eau Claire area is an example of the effects of Histogram Equalization.

The original image has very low contrast, with most of the image being a gray shade. The newly equalized image has a number of variations in tone, as seen in Figure 20. Many of the shades have been grouped and, as seen in the histogram on the right of Figure 20, the contrast is much more evenly distributed, and rather blocky.

Figure 20: This figure shows the histogram of the image that has been subjected to histogram equalization. The image on the left is the original image's histogram, while the image on the right is the new image's histogram.
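Histogram equalization maps each pixel through the image's own cumulative distribution, which is why the output histogram looks spread out but blocky. A short NumPy version:

    import numpy as np

    rng = np.random.default_rng(7)
    img = rng.normal(128, 10, (128, 128)).clip(0, 255).astype(np.uint8)  # low contrast

    # Map each brightness value through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    equalized = (cdf[img] * 255).astype(np.uint8)

    print(img.std(), equalized.std())  # contrast increases after equalization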

Binary Change Detection (Image Differencing):
Purpose: Binary change detection, or image differencing, is the process of estimating and mapping brightness values of pixels that have changed over a duration of time. 

Method 1: To initiate this method, two viewers were opened, with an image of Eau Claire County from 1991 in one viewer and an image of Eau Claire County from 2011 in the other. Both images were Fit to Frame and synced, then zoomed in, and differences were noted before moving on. The Raster tab was selected, followed by Functions, and then Two Image Functions. In the window that opened, the new image was input into the first input file and the old image into the second input file. The "Operator" was changed from (+) to (-) and the Layer under the first input file was changed to Layer 4. The function was run and the new image was created; it can be seen in Figure 21. This image does not show where the change took place, however. That will be determined in the next method.


Figure 21: This is an image of Eau Claire County in which the 1991 image has been subtracted from the 2011 image. This method does not show what change has occurred between the two images.

The next step was to estimate a change-no change threshold. To do this, the Image Metadata was opened. First, the General tab was examined and the Mean and Standard Deviation were noted. In this case, as seen in Figure 22, the Mean is 12.253 and the Standard Deviation is 23.946. The change-no change threshold was computed with the equation Mean + (1.5 x Standard Deviation), which here gives 12.253 + 1.5 x 23.946 ≈ 48.17.

Figure 22: This shows the Statistics of the Image Metadata window. This is used to acquire the Mean and Standard
Deviation figures for this image.

To determine the starting value, the histogram is opened, the cursor is placed at the center of the histogram, and the displayed number is recorded. The histogram used in this example can be seen below in Figure 23. After the number is recorded, the result of the equation above is added to the value at the center of the histogram to give the upper threshold; the same amount is subtracted to obtain the lower threshold.

Figure 23: This image shows the histogram used to determine the change-no change threshold for the previous section.
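The arithmetic itself is quick to sketch; the histogram-center value below is a placeholder, not the number read from Figure 23:

    # Change/no-change thresholds from the Figure 22 statistics.
    mean = 12.253
    std_dev = 23.946
    center = 127.0  # value read at the center of the difference histogram (placeholder)

    offset = mean + 1.5 * std_dev            # 48.172
    upper = center + offset
    lower = center - offset
    print(f"offset = {offset:.3f}, thresholds = ({lower:.2f}, {upper:.2f})")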

Method 2: This method practices the use of Model Maker to create and analyze images. The Toolbox tab is selected first, followed by Model Maker, and Model Maker again from the drop-down menu. First, two raster objects are added to the Model Maker, followed by a Function, followed by another raster object; they are connected by arrows that represent the flow of the work. The 2011 Eau Claire County Band 4 image was loaded into the top-left raster and the 1991 Eau Claire County Band 4 image into the top-right raster. A function was written to subtract the 1991 image from the 2011 image, with a constant of 127 added to keep the differences from going negative: "$n1_ec_envs_2011_b4 - $n2_ec_1991_b4 + 127". The result is loaded into the output raster at the bottom of the Model Maker. This progression is seen in Figure 24.


Figure 24: This image shows the Model Maker for the previous section.

Next, the image that was created in the previous step is opened and the workflow carried out to determine the change-no change threshold is repeated. This time, the equation to determine the threshold is Mean + (3 x Standard Deviation). The Statistics used in this example can be seen below in Figure 25 and the Histogram can be seen in Figure 26.


Figure 25: The Statistics panel, showing the Mean and Standard Deviation for the raster created in the previous step.


Figure 26: This image shows the histogram that was used to determine the change-no change threshold for the
previous step. The center of the histogram was determined in order to input the value.

Another Model Maker window was opened and a raster, a function, and another raster were added to the window, connected by arrows to show the work flow. The first raster object contained the raster created in the previous step. The function window was opened and the function category was changed from Analysis to Conditional. From here the "Either IF OR" option was chosen and added to the function-building window at the bottom of the Function Definition window. The change-no change threshold value was input into the function, which reads: EITHER 1 IF ( $n1_ec_91 > change/no change threshold value) OR 0 OTHERWISE. This acts as a basic binary mask: the 0's are turned off and the 1's are turned on, so features that registered as "Changed" appear while features that registered as "Not Changed" do not. Then the output raster was added to the bottom raster object, the function was run, and a new image was created. The Model Maker can be seen in Figure 27.

Figure 27: This image shows the Model Maker for the
previous workflow.
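In NumPy terms, the whole Model Maker chain comes down to a few lines: difference the two bands with a +127 offset to keep values positive, then keep only the pixels above the change threshold. A sketch with stand-in arrays:

    import numpy as np

    rng = np.random.default_rng(8)
    b4_2011 = rng.integers(0, 128, (256, 256)).astype(float)  # stand-in 2011 band 4
    b4_1991 = rng.integers(0, 128, (256, 256)).astype(float)  # stand-in 1991 band 4

    diff = b4_2011 - b4_1991 + 127               # offset keeps differences non-negative
    threshold = diff.mean() + 3 * diff.std()     # Mean + (3 x Standard Deviation)
    changed = np.where(diff > threshold, 1, 0)   # EITHER 1 IF (...) OR 0 OTHERWISE

    print(changed.sum(), "pixels register as changed")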

The image below, Figure 28, shows the result of the binary raster output. In this image, the white areas represent the 1's (turned on), while the black areas represent the 0's (turned off). 

Figure 28: This figure shows the raster output from the binary change function. The white areas represent the changed
features while the black areas represent features that did not register as changed.


Finally, the last part of this process was carried out. ArcMap 10.2 was opened and the rasters were brought into a new map. The red areas show the areas that registered as "Changed"; the areas that are not red did not exhibit any measurable change. This can be seen in Figure 29.

Figure 29: This image shows the rasters brought into ArcMap. A legend, scale, and north arrow were added to make the map easier to read.

Results:
This assignment was very useful for beginning to understand the various methods of enhancing images to make them easier to analyze. Through RGB to IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection, the analyst is able to expand their toolkit to tackle many different issues that may be faced.