Sunday, December 11, 2016

Lab 8: Spectral Signature Analysis and Resource Monitoring

Background

The objective of this lab is to gain experience in measuring and interpreting the spectral reflectance of various Earth surface and near-surface materials using satellite images.

Methods

Images were collected, graphed, and analyzed to verify whether they pass the spectral separability test discussed in lecture.

Part 1: Spectral signature analysis

The first step is to open Eau_claire_2000.img to measure and plot the spectral reflectance of 12 materials and surfaces from the image. For this image the analysis will include:

1. Standing water
2. Moving water
3. Forest
4. Riparian vegetation
5. Crops
6. Urban grass
7. Dry soil (uncultivated)
8. Moist soil (uncultivated)
9. Rock
10. Asphalt highway
11. Airport runway
12. Concrete surface (bridge, parking lot, or any type of concrete surface)

To take a spectral signature, click on the Home tab and then Drawing. Then click on the Polygon tool and digitize an area, in this case within Lake Wissota, to plot the spectral signature. Now click the Raster tab, then Supervised, followed by Signature Editor.

Now that the Signature Editor is open, create a new signature from the AOI and change the default name to something meaningful; for this example 'Standing Water' was used.

Next, click on the Display Mean Plot Window to display the graph.

Finally, collect spectral signatures for materials 2-12 in the same way, and add all of the signatures to the same plot.

The final graph looks like this:

Figure 1:  This is a graph showing the spectral analysis of the 12 objects defined above. The y-axis shows the reflectance and the x-axis is the wavelength.
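What the Signature Editor's mean plot computes for each class can be sketched in Python. This is a minimal illustration with a hypothetical array layout (bands, rows, cols) and an AOI mask; ERDAS does the equivalent internally from the digitized polygon:

```python
import numpy as np

def mean_signature(image, mask):
    """Mean DN in each band for the pixels inside an AOI mask.

    image: (bands, rows, cols) array of digital numbers
    mask:  (rows, cols) boolean array, True inside the digitized polygon
    """
    return image[:, mask].mean(axis=1)

# Tiny synthetic 3-band image and a 2x2 AOI in its corner
img = np.arange(3 * 4 * 4).reshape(3, 4, 4).astype(float)
aoi = np.zeros((4, 4), dtype=bool)
aoi[:2, :2] = True
sig = mean_signature(img, aoi)   # one mean value per band
print(sig)
```

Plotting each class's `sig` against band wavelength reproduces the kind of graph shown in Figure 1.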
Part 2:

In the second part of this lab the goal is to perform a simple band ratio by implementing the normalized difference vegetation index (NDVI) on a selected image.

NDVI = (NIR - Red) / (NIR + Red)

To start, click on Raster - Unsupervised - NDVI.

This opens the Indices interface. For this example, Landsat 7 Multispectral was used for the sensor.

Now, under Select Function make sure NDVI is highlighted, and run the program.

Finally, open the output in ArcMap and create a map with a classification scheme to interpret the colors.
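Per pixel, the NDVI function amounts to the band math below. This is a sketch assuming the red and NIR bands are already loaded as NumPy arrays (for Landsat 7 ETM+, red is band 3 and NIR is band 4); the zero-denominator guard is an assumption about how no-data pixels should be handled:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); guard against divide-by-zero
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Healthy vegetation reflects strongly in NIR, so NDVI approaches 1
nir_band = np.array([[0.5, 0.3], [0.1, 0.0]])
red_band = np.array([[0.1, 0.3], [0.3, 0.0]])
print(ndvi(nir_band, red_band))
```

Values near 1 map to the white (high vegetation) end of Figure 2, and values near 0 or below to the dark end.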

Figure 2: Vegetation Health Monitoring Map: Black - Low Vegetation, White - High Vegetation
Section 2:

Soil Health Monitoring

Ferrous Minerals = MIR/NIR

Open an image to study, and click on Raster - Unsupervised - Indices.

The Indices interface opens. Fill out the fields according to the parameters needed, and under Select Function choose Ferrous Minerals.

Figure 3: Ferrous Mineral Monitoring. Black: poor exposure; White: high ferrous minerals
Conclusion

Spectral analysis can tell a lot about an area without an actual visit. When looking for places with high ferrous mineral content, a map like the one in Figure 3 can cut down the time spent finding well-exposed areas. Monitoring vegetation quality can likewise be helpful for farmers. Signature analysis, then, can reveal a lot about an area from just a computer screen.

Monday, December 5, 2016

Lab 7: Photogrammetry

Background

This lab's objectives were to explore the basics of photogrammetry. The specific topics included calculating the scale of digital images, measuring perimeters and areas, an introduction to stereoscopy, and orthorectification.

Methods

Scales, Measurements, and Relief Displacement 

Vertical Aerial Photographs

To calculate the scale of an aerial photograph we use the equation S = Pd/Gd,
where,
S = Scale
Pd = Photo Distance
Gd = Ground Distance

The following is an example of calculating the scale on an image of Eau Claire, Wisconsin.

S = Pd/Gd
S = 2.7'' / 8822.47'

In this case, the 2.7 inches was the photo distance, measured with a ruler on the computer monitor. The next step is to convert both measurements to the same units to make the math easier: 8,822.47 feet is 105,869.64 inches, so the equation becomes S = 2.7/105869.64. To get the scale, divide the numerator and denominator by 2.7'', which gives the result 1/39211.
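The arithmetic can be checked quickly in Python:

```python
# Representative fraction from photo distance and ground distance
pd_in = 2.7            # photo distance, inches (ruler measurement)
gd_ft = 8822.47        # ground distance, feet
gd_in = gd_ft * 12     # same units: 105,869.64 inches
scale_denominator = round(gd_in / pd_in)
print(f"S = 1/{scale_denominator}")   # S = 1/39211
```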

In the next example, an aircraft acquired a photograph at an altitude of 20,000 ft above sea level with a camera that had a 152 mm focal length lens; the goal is to figure out the scale of the photograph from this information.

The equation used for this example is S = f/(H-h)
S = Scale
f = focal length
H = altitude above sea level
h = elevation of the terrain

so,

f = 152mm
h = 796 ft
H = 20,000 ft
Altitude above ground level H': 20,000 ft - 796 ft = 19,204 ft

Therefore,
S = 0.152m/19204ft = 0.499'/19204'

Divide both the numerator and denominator by 0.499'

S = 1/38485
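The same arithmetic checked in Python (the focal length is converted to feet and rounded to 0.499', as above):

```python
# Scale from focal length and flying height above ground
f_mm = 152.0
H_ft = 20000.0                    # flying height above sea level
h_ft = 796.0                      # terrain elevation
f_ft = round(f_mm / 304.8, 3)     # 152 mm = 0.499 ft (rounded as in the lab)
H_prime = H_ft - h_ft             # 19,204 ft above ground level
scale_denominator = round(H_prime / f_ft)
print(f"S = 1/{scale_denominator}")   # S = 1/38485
```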

Measurement of areas of features on Aerial Photographs: 

The next section shows a lagoon, labeled with an X, whose area needs to be calculated.

  • Select the Measure tool from the Home tab on the ERDAS Imagine interface
  • Click "Point" from the dropdown arrow and select the Polygon tool to measure the area of the lagoon
  • Trace the lagoon, creating a polygon, and double-click when finished
  • In the Measurement window, change the defaults to hectares and the other needed units
Area: 39.5517 hectares (97.7345 acres).
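The hectares-to-acres conversion can be verified directly (1 hectare ≈ 2.47105 acres):

```python
# Cross-check the two area readouts from the Measurement window
hectares = 39.5517
acres = hectares * 2.47105   # standard hectare-to-acre factor
print(round(acres, 2))       # ≈ 97.73, matching the tool's 97.7345 acres
```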

Figure 1: Map of Eau Claire showing the Perimeter/Area of the Lagoon. 

To obtain the perimeter:
  • To measure the perimeter of the feature, select the Polyline tool
  • Repeat the outline of the lagoon and double-click to finish
  • The information will appear in the Measurement window
Calculating Relief displacement from Object Height

Relief displacement happens when an object is not represented in its correct planimetric location because of its distance from the principal point and the height of the object itself.

The taller the object the more displacement it will have, and the farther away the object is from the principal point the more displacement it will have.  

The equation for relief displacement is
d = (h*r)/H
d = relief displacement
h = height of the object (real world)
r = radial distance from the top of the object to the principal point
H = height of the camera above the local datum

In the example given, the photo shows a smoke stack identified by the letter 'A' on the photograph.

H = 3980' = 47,760''
h = 1204.5'' (the object's real-world height; 0.5'' was the photo distance)
r = 10.5'' (measurement taken with a ruler)

d = (1204.5'' * 10.5'') / 47,760'' = +0.265'' (above the principal point of elevation)

Since the displacement is positive, the tower should be plotted inward, toward the principal point.
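The displacement arithmetic checks out in Python:

```python
# Relief displacement d = (h * r) / H, everything in inches
h = 1204.5       # object height
r = 10.5         # radial distance from the principal point
H = 3980 * 12    # flying height, 3980 ft converted to 47,760 inches
d = (h * r) / H
print(round(d, 3))   # ≈ 0.265 inches, positive (away from the principal point)
```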





Stereoscopy

Stereoscopy is a method used to enhance an image to display a 3-D view showing variation in elevation. This part of the lab looks at the elevation of Eau Claire, Wisconsin, analyzed by creating anaglyph images using a DEM and a DSM.

Creating an Anaglyph image with the use of a DEM

The first step is to bring two photographs into separate viewers. The first had a one-meter spatial resolution and the other was a DEM of Eau Claire with 10 m spatial resolution.

The steps are: 

  • Click the Terrain tab in ERDAS and select Anaglyph to open the Anaglyph Generation window
  • Input the DEM in the DEM field and the photograph as the input image
  • Name the output file and save it to the correct location
  • Set the vertical exaggeration to 1 and leave the rest at the defaults
Once the process is complete, bring the image into a new viewer and use the Polaroid lenses to look at the 3-D image created.

Creating an anaglyph image with the use of LiDAR derived surface model

Bring the two images into separate viewers. The first has a one-meter spatial resolution and the other a 2-meter spatial resolution. Next, use the same workflow as above to create the new image, which can be viewed with the Polaroid lenses.


Orthorectification

Creating a new project:

Open LPS Project Manager by clicking Imagine Photogrammetry within the Toolbox tab.
Create a new block file.
Within the Model Setup window, the polynomial-based pushbroom category and SPOT Pushbroom were selected.

Horizontal Reference Source

To set the horizontal reference source, click the "Set" button for the Horizontal Reference Coordinate System.

The following were used for this lab:

Projection: UTM
Spheroid name: Clarke 1866
Datum: NAD27 (CONUS)
UTM Zone 11 North
Meters for Units

Adding Images

In the Photogrammetry Project Manager window, click Images on the left side, then Add Frame.
Add the SPOT image, edit the pushbroom settings, then click OK to accept the defaults.

Collecting GCPs

To start collecting GCPs, select the Classic Point Measurement Tool option; the Point Measurement window will then appear.

Now, reset the horizontal reference to the image layer, add the orthorectified image, and check the box to use the viewer as reference.

Figure 2: Showing the GCPs on the image

Figure 3: Table Showing Data entered to collect points. 
Now, to add the points to both images, click the Add GCP button and add the point to the pan image; click Add and continue until the desired number of points is reached.

Then add a second image to the block file, collect the GCPs for that image, and perform automatic tie point collection. Then triangulate the images and orthorectify them. Save the block file; the result should be a very accurate-looking pair of images, as they are lined up.
Figure 4: Images matched together.  

Conclusion:

Creating stereoscopic images to use in orthorectification can be a very involved and detailed process, producing an image that matches up nearly 100%. This is a great tool when multiple images need to be attached together.


Sources
Digital Elevation Model (DEM) for Eau Claire, WI
United States Department of Agriculture Natural Resources Conservation Service, 2010.

Digital elevation model (DEM) for Palm Spring, CA
 Erdas Imagine, 2009.

Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa
 Eau Claire County and Chippewa County governments respectively.

National Agriculture Imagery Program (NAIP)
 United States Department of Agriculture, 2005.

National Aerial Photography Program (NAPP) 2 meter images
Erdas Imagine, 2009.

Spot satellite images
Erdas Imagine, 2009.

Monday, November 14, 2016

Lab 6: Geometric Correction

Background

Geometric correction is essential to creating accurate images, which in turn produce accurate results when the images are used scientifically. Geometric correction places pixels in their correct X and Y locations. In this lab there are two basic ways to correct an image to accurately represent the real world: image-to-map rectification and image-to-image rectification. An image can only be rectified against a previously corrected image; therefore, if a chain of corrections is needed, the reference image itself must already have been corrected. If it has not, the accuracy will be very low.

Goals:

To geometrically correct images using image-to-map rectification and image-to-image rectification.
Methods

Image to map rectification

Given an area of interest around Chicago, there were two images to use for image-to-map rectification. This is done by adding ground control points (GCPs) in ERDAS Imagine using the Multipoint Geometric Correction window.

This first part was done using a first-degree polynomial formula, which requires only 3 GCPs. The first step was to add the two images into two different viewers within the program, then select the Multispectral tab and select Control Points. A new window appears which allows the polynomial selection to take place. Next, select New Layer, input the DRG (Digital Raster Graphic) as the reference image, and accept the default model properties.

Now add the GCPs to the map. These can be at splits in rivers or other features that are more permanent. In this lab we were given places to put the GCPs. Once the requirement of three is met, the program will automatically place GCPs added to one image onto the other.

Now that all of the GCPs are placed, the next step is to get the root mean square (RMS) error below 2. I was lucky enough to get mine down to 0.4.
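The total RMS error that the correction window reports can be sketched as below. The GCP coordinates here are hypothetical; the idea is the root of the mean squared residual distance between where the model places each GCP and where the reference says it should be:

```python
import numpy as np

def rms_error(predicted, reference):
    """Total RMS error of GCP residuals, in map units."""
    resid = np.asarray(predicted, float) - np.asarray(reference, float)
    # squared residual distance per GCP, then mean, then root
    return np.sqrt((resid ** 2).sum(axis=1).mean())

# Hypothetical model-predicted vs. reference GCP coordinates (map units)
pred = [[100.2, 200.1], [150.0, 250.3], [199.8, 300.0]]
ref  = [[100.0, 200.0], [150.0, 250.0], [200.0, 300.0]]
print(rms_error(pred, ref))
```

Values below the lab's tolerance (2 here, and under 1 in the next part) mean the fitted polynomial places the GCPs close to their reference locations.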

Finally, save the output image to a folder and bring it into ERDAS to view the final product.

Figure 1: Image to Map Rectification (RMS: 0.4411)
Image to Image Rectification

Image to image rectification is done by using the same processes as above.  This time the images were from Sierra Leone, Africa.

The difference in the directions is that this uses a third-order polynomial, which requires ten ground control points instead of three.

This image also required an RMS error of less than 1; then the Display Resample Image tool was used again to get the results.
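The minimum GCP counts follow from the number of coefficients in the transformation polynomial. A sketch of the standard count of 2-D polynomial terms:

```python
def min_gcps(order):
    # A 2-D polynomial of order t has (t+1)(t+2)/2 terms per axis,
    # so at least that many GCPs are needed to solve for them
    return (order + 1) * (order + 2) // 2

print(min_gcps(1), min_gcps(3))   # first order needs 3, third order needs 10
```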

Figure 2: Image to Image Rectification. The ground control points are in white. An RMS error of 0.012 was achieved.


Results

Geometric correction is a very important step when working with aerial images, or any images that need to be spatially correct; it makes them more accurately represent the Earth's features. If this process were skipped, the images people work with would be so far off that none of the data would be accurate enough to use. It is also important to understand the RMS error, which describes the accuracy of the control points.

Sources

Satellite images
Earth Resources Observation and Science Center, United States Geological Survey
Digital raster graphic (DRG)

Illinois Geospatial Data Clearing House


Wednesday, November 9, 2016

Lab 5: LiDAR Remote Sensing

Background

LiDAR data collection has become very popular due to the many different kinds of analysis its data supports.

Goals

  • Produce Surface and Terrain Models
  • Create an intensity image and other rasters (DSM and DTM) from a point cloud dataset in LAS format

Methods

Creation of Surface and Terrain Models

1. Copy the LAS files into a personal folder and create a new LAS dataset.
2. Add the LAS files into the LAS dataset by clicking "Add Files," making sure to use all of the files from the folder.
3. Use quality control to make sure the dataset makes sense.
4. Add the correct coordinate systems for XY and Z. These are found in the file's properties under XY Coordinate System and Z Coordinate System.
For this example, XY was set to NAD 1983 HARN Wisconsin (US Feet) and Z was set to NAVD 1988 (US Feet).

5. The next step is to add the LAS data to ArcMap, where only a grid will appear at first; add a basemap of the area in question to make sure the data is in the correct spot. This checks the coordinate system.
6. Turn on the LAS Dataset toolbar in ArcMap, and turn on the 3D Analyst extension from the Customize dropdown tab to activate the toolbar.
7. This gives access to the different options in the LAS toolbar and the various types of models that can be used with this type of data.
8. Choose which type of model to create from the surface symbology render options, which include elevation, aspect, slope, and contour. Figure 1 displays aspect, slope, and contour.
Another important option is the LAS Dataset Profile View, which allows the user to view the LiDAR point cloud in a 2-D pop-up window for easy viewing. The next important option is the LAS Dataset 3D View, used for viewing the Z-values of the aspect image.
Figure 1: Aspect, slope, and contour symbolized LAS datasets from a section of Eau Claire, Wisconsin

There is also a button within the LAS Dataset toolbar that allows the user to filter the point cloud by elevation, class, or return, based on the classification code or the LiDAR pulse return number.

Using the LAS dataset Profile View, a cross section can be drawn to see the Z-values of the point cloud, as seen in Figure 2. The 3-D view can then turn that cross section into a 3-D scene.

Figure 2:  Cross section dataview of the point cloud. This is a 2-D view.
Creating an intensity image


This part of the lab focused on deriving DSM and DTM products from point clouds. The average nominal pulse spacing is critical to deciding what the spatial resolution of the DSM and DTM output images should be.

Digital surface model (DSM) with First return

To start, the LiDAR points need to be symbolized by elevation. Then use the LAS Dataset to Raster tool to create the DSM. Input the LAS file and set the value field to elevation. Set the interpolation type to binning, with maximum cell assignment and nearest-neighbor void filling. The sampling value was set to 6.56168, which is about 2 m in feet, because the data has roughly 2 m point spacing. The rest of the values are left at their defaults.
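The "binning, maximum" interpolation can be sketched like this: group points into grid cells of the chosen sampling size and keep the highest elevation in each. The points below are synthetic; real data would come from a LAS reader such as laspy, and the tool's void filling is omitted:

```python
import numpy as np

def bin_dsm(x, y, z, cell=6.56168):
    """Grid elevations by taking the max z per cell (binning, maximum)."""
    col = ((x - x.min()) // cell).astype(int)
    row = ((y.max() - y) // cell).astype(int)   # row 0 at the top
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, elev in zip(row, col, z):
        if np.isnan(dsm[r, c]) or elev > dsm[r, c]:
            dsm[r, c] = elev
    return dsm

# Four synthetic first-return points falling into two ~2 m cells
x = np.array([0.0, 1.0, 7.0, 8.0])
y = np.array([0.0, 1.0, 0.0, 1.0])
z = np.array([10.0, 12.0, 20.0, 18.0])
print(bin_dsm(x, y, z))   # one max elevation per cell
```

Swapping the `>` comparison for `<` gives the minimum-based binning used for the DTM below.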

Digital Terrain Model (DTM)

To create the DTM, the same first steps were used. The only change was from maximum to minimum in the interpolation; all other settings stayed the same.

Hillshade of DSM and DTM

Creating a hillshade for the DSM and DTM helps visualize the relief of the land surface. To accomplish this, 3D Analyst needs to be turned on. Now, to start the process, search for the Hillshade (3D Analyst) tool. Just enter the input raster and name the output to get a hillshade of the selected raster. The process is the same for both the DSM and the DTM.
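The hillshade computation itself can be sketched as below. This is the standard slope/aspect illumination formula; the ArcGIS tool works along these lines, though its implementation details may differ:

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Illumination (0-255) of each cell for a given sun position."""
    az = np.radians(360.0 - azimuth + 90.0)   # compass to math convention
    alt = np.radians(altitude)
    dzdy, dzdx = np.gradient(dem, cellsize)   # surface slope components
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(255 * shaded, 0, 255)

# A flat surface is uniformly lit: every cell gets 255*sin(45 deg)
flat = np.zeros((4, 4))
print(hillshade(flat).round()[0, 0])
```

Running it on the DSM grid highlights canopy and buildings; on the DTM it shows the bare terrain, matching the contrast described in Figure 3.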

Figure 3: The left image is the hillshade output derived from the DSM; the image on the right was derived from the DTM. The difference is that the DSM uses the first-return points while the DTM uses the ground-return points, so the right image is a smoother surface representing the ground.

Intensity Image

To create the final maps, the LAS Dataset to Raster tool was used again, this time to create the intensity image. This was done by changing the data back to point symbology and setting the filter to First Return. In the tool, the value field was set to intensity instead of elevation, and the interpolation was set to average instead of maximum or minimum. The sampling value once again stayed the same to keep the 2 m by 2 m cell size.

After saving this raster, it can be brought into ERDAS, because ArcGIS displays it as a nearly black image. Once it is exported as a .TIF and opened in ERDAS, the intensity can be seen clearly.

Figure 4: The image on the left is the final image in ERDAS, where the contrast makes details visible. The image on the right was created in ArcGIS, which produces a dark image in which it is very hard to make out any details.
Conclusion

Raster data is very useful for understanding LiDAR data; it can help with elevation analysis and raster analysis in general. LiDAR is constantly increasing in popularity due to its potential and the number of possibilities it offers. This lab was a helpful first step in understanding how to manipulate and use point clouds.

Sources:

Eau Claire County. (2013).


Price, M. (2014). Mastering ArcGIS 6th Edition. Mastering ArcGIS 6th Edition Dataset [shapefile]. New York: McGraw Hill.


Monday, October 31, 2016

Lab 4: Miscellaneous Image Functions

Background

This lab was meant to introduce and explore different functions and tools within ERDAS Imagine that help enhance images for interpretation. There are seven parts: subsetting images, image fusion, radiometric enhancement, resampling, linking the image viewer to Google Earth, binary change, and image mosaicking.

Objective/Goals
The goal is to build a base of understanding of the many different ways to enhance images, using proper techniques, in order to create a better image to interpret.

Methods

Subsetting Images

Using the Inquire Box tool, it becomes possible to create an image from the AOI. First, right-click the image in ERDAS Imagine and click Inquire Box. Next, set the location and size of the AOI. The subset tool is located in the Raster tab: click Subset & Chip, then Create Subset Image, and a window will pop up. To use the inquire box coordinates for the subset image, click "From Inquire Box," which copies the coordinates from the AOI into this window. Finally, choose a place to save the output; it will be processed after hitting OK.

Figure 1: Subsetting using a shapefile. The image on the left shows the shapefile used, and the right shows the created subset image.
Image Fusion

First, open the coarser image in ERDAS. Next, open a second viewer and open the finer image. Then click on Raster to activate the raster tools. Next, click on Pan Sharpen to find the Resolution Merge tool. In the window that pops up, input the high-resolution file in the first dropdown and the low-resolution file in the next dropdown, and create an output file name in the correct folder. Also, under the Method area, click Multiplicative, which selects the correct algorithm for this example. Then, under Resampling Techniques, choose Nearest Neighbor. Now hit OK to run the tool.
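The multiplicative merge can be sketched roughly as below. This is a simplified version that assumes the multispectral band has already been resampled to the panchromatic grid; ERDAS's exact rescaling may differ:

```python
import numpy as np

def multiplicative_merge(ms_band, pan):
    """Multiplicative resolution merge: multiply each multispectral
    band by the panchromatic band, then rescale back to 8-bit."""
    fused = ms_band.astype(float) * pan.astype(float)
    return (255 * fused / fused.max()).astype(np.uint8)

# Tiny synthetic example: spatial detail from pan modulates the MS band
ms = np.array([[10, 20], [30, 40]])
pan = np.array([[100, 100], [200, 200]])
print(multiplicative_merge(ms, pan))
```

The product keeps the multispectral brightness pattern while injecting the pan band's spatial detail, which is why the fused image on the right of Figure 2 looks sharper.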

Figure 2: This is an image fusion of the Eau Claire area. The left is the lower resolution, and the right has been pan-sharpened to create a better quality image from the multispectral image.
Radiometric Enhancing Techniques

This is used to reduce haze shown on the image. The easiest way to remove haze is the Radiometric tab and then the Haze Reduction tool. The process was nearly the same: create an output file and run the tool.
Figure 3: This image shows haze reduction on the reflective band image on the left. The right is what was created using the haze tool.
Linking to Google Earth

First, go to the Google Earth Tab.  Then click the connect to Google Earth button and then once that opens up click the match GE to view button.  This will match the view of the image to Google Earth.  Then to make things easier click the Link GE to View and Sync GE to View.  As long as GE is updated this could be useful to interpret the image.

Resampling

From the raster toolset, once again select Spatial and then Resample Pixel Size. Use the working image and check the metadata to choose the correct pixel size. To change it, edit the X Cell and Y Cell values. For this lab the size went from 30x30 to 15x15, and the Square Cells box needed to be checked. Both the nearest neighbor and bilinear interpolation methods were run.
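Nearest-neighbor resampling, the simpler of the two methods, just copies the closest input pixel into each output pixel. A minimal sketch for an integer scale factor (going from 30 m to 15 m doubles the grid):

```python
import numpy as np

def resample_nearest(img, factor):
    """Nearest-neighbour upsample by an integer factor: each output
    pixel copies the closest (here, enclosing) input pixel."""
    rows = np.arange(img.shape[0] * factor) // factor
    cols = np.arange(img.shape[1] * factor) // factor
    return img[np.ix_(rows, cols)]

grid = np.array([[1, 2], [3, 4]])
print(resample_nearest(grid, 2))   # each cell becomes a 2x2 block
```

Bilinear interpolation instead blends the four surrounding input pixels, which smooths the output at the cost of altering the original values.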

Image Mosaicking

Image mosaicking is combining two images seamlessly. This may be done because the AOI is larger than the area of a single satellite image, so more than one must be put together. There are two major tools for this: Mosaic Express and Mosaic Pro. The first step is to add the images to the viewer, but a few steps are needed before adding the second image. First, highlight one of the image tabs and make sure Multiple Images in Virtual Mosaic is selected. Then, back in the Raster tab, check that Background Transparency and Fit to Frame are checked. Then add the second image to the viewer. Mosaic Express is under the raster toolset: select the images to be put together and hit Run.
Figure 4: This shows what happens between just adding two images (left) and using the Mosaic Express tool (right) to create a "seamless" transition.

Mosaic Pro: The other way to do this is to choose the Mosaic Pro tool from the mosaic tool list found within the Raster tab. First add the images, but before adding them click Image Area Options and select the Compute Active Area button. Then set the settings and finish adding the images. Next is to match the colors, which is different from Mosaic Express: choose the Color Corrections tool in the Mosaic Pro window and use histogram matching, with the matching method set to Overlap Areas. The final step is to set the Overlap Function in the Output Options dialog to Overlay, and finally run and process the image.

Figure 5: This is the result of  running Mosaic Pro.  It produces two images that are closer in color and more blend-able at the seam.  

Binary Change (Image Differentiation) 

To figure out the difference between Eau Claire 1991 and Eau Claire 2011, the change in brightness of the pixels is compared. Under the Raster tab, open the Two Image Functions tool and load the 2011 and 1991 images. To obtain the difference in pixels, use the subtraction operator, and under the layer scroll bar select only band 4 instead of all. Then run the image differencing and open the metadata to view the histogram created.

Figure 6: Histogram of image differencing from Eau Claire 1991 to Eau Claire 2011
The second part was to use Model Maker to build the functions that run the image differencing. The first model used the equation:
I2011 – I1991 + C
which means (+127 makes all the values positive):
$n1_ec_envs_2011_b4 - $n2_ec_envs_1991_b4 + 127

A second model then classifies the difference image into change/no change using the equation: EITHER 1 IF ($n1_ec_91 > change/no change threshold value) OR 0 OTHERWISE.
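Outside Model Maker, the same two steps can be sketched in Python. The 1.5-standard-deviation threshold below is an assumed choice for the change/no-change cutoff; the lab's actual threshold value may differ:

```python
import numpy as np

def change_mask(band_new, band_old, offset=127, n_std=1.5):
    """Image differencing with an offset (keeps values positive),
    then flag pixels beyond mean +/- n_std standard deviations."""
    diff = band_new.astype(float) - band_old.astype(float) + offset
    mu, sd = diff.mean(), diff.std()
    changed = (diff > mu + n_std * sd) | (diff < mu - n_std * sd)
    return changed.astype(np.uint8)   # 1 = change, 0 = no change

# Synthetic band-4 values: only the last pixel brightened significantly
old = np.array([[100, 100, 100, 100]])
new = np.array([[100, 101, 99, 200]])
print(change_mask(new, old))
```

Pixels flagged 1 correspond to the changed areas highlighted in the result map.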
  


Figure 7: Pixel Brightness difference output from 1991 to 2011 in Eau Claire, Wi

Result


Figure 9: Result shows the changed areas based on the difference of pixel brightness. 


This lab was a thorough walk through the different tools used to enhance images for better interpretation.
Some methods may be better than others, but they are all useful in their own way, depending on what is to be achieved. These tools are important to understand, and they create a foundation for studying and interpreting images.

Sources


Satellite images
 Earth Resources Observation and Science Center, United States Geological Survey.

Shapefile of the counties
Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill. 2014.