Sunday, February 21, 2016

Processing Imagery with Pix4D

Introduction:

Pix4D is a software package that performs automated bundle-block adjustment on UAS imagery. The software is extremely powerful and can generate point clouds and orthomosaics from overlapping images without the aid of a human technician.


The program is powerful, but the data must be collected within certain parameters. For typical imagery, Pix4D needs at least 75% frontal overlap and at least 60% side overlap to derive useful results. The necessary overlap percentages change depending on the surface: when capturing surfaces covered in sand or snow, Pix4D needs at least 85% frontal overlap and at least 70% side overlap. Flights over fields require the same overlap percentages as snow or sand, and should also be flown at lower altitudes so that each image contains more visual content. Pix4D's rapid check feature can be used to assess the quality of collected imagery while still in the field.

When a study area requires multiple flights, it is important to ensure there is enough overlap between flight plans and that the images are captured under a similar sun direction. For Pix4D to process oblique images, the first set of images should be captured at a 45-degree angle, with additional sets captured at increased flight heights and decreased camera angles. Ground Control Points (GCPs) are points within the area of interest with known coordinates; they increase the accuracy of Pix4D's results by tying the model to its exact position on the Earth's surface. Once the data are brought into Pix4D, the "Initial Processing" step can be performed. Initial processing generates a quality report, defining the accuracy criteria and average ground sampling distance (GSD) of the project.
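
To make those overlap figures concrete, here is a minimal sketch of the arithmetic a flight plan is built on: the ground footprint of a single nadir image, the photo and flight-line spacing that produce a chosen overlap, and the resulting GSD. This is my own illustration, not Pix4D code, and every camera and altitude value below is a made-up example.

# Illustrative flight-planning arithmetic (not Pix4D code). All camera and
# flight numbers below are hypothetical examples.

def ground_footprint(sensor_w_mm, sensor_h_mm, focal_mm, altitude_m):
    """Width and height (m) of the ground area one nadir image covers."""
    return (altitude_m * sensor_w_mm / focal_mm,
            altitude_m * sensor_h_mm / focal_mm)

def spacing(footprint_m, overlap):
    """Distance between exposures (or flight lines) for a given overlap fraction."""
    return footprint_m * (1.0 - overlap)

def gsd_cm(sensor_w_mm, image_w_px, focal_mm, altitude_m):
    """Approximate ground sampling distance in centimetres per pixel."""
    return (altitude_m * 100.0) * (sensor_w_mm / image_w_px) / focal_mm

w, h = ground_footprint(6.2, 4.6, 4.5, 60.0)      # hypothetical small sensor at 60 m
print(f"footprint: {w:.1f} m x {h:.1f} m")
print(f"photo spacing for 75% frontal overlap: {spacing(h, 0.75):.1f} m")
print(f"line spacing for 60% side overlap:     {spacing(w, 0.60):.1f} m")
print(f"GSD: {gsd_cm(6.2, 4000, 4.5, 60.0):.2f} cm/px")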

Methods:

I used Pix4D to process imagery captured with two different sensors, a Canon SX260 and the Sentek GEMs. The SX260 has a built-in GPS, meaning it records the camera's spatial location in the metadata of each image as it is captured. The GEMs records its locational information in a slightly different manner, requiring the images to be processed with Sentek's software in order to obtain their locational information. To process the images, I first needed to create a new project and specify the project's file location. Next, I added images to the project from their folder locations. When processing GEMs imagery, it is important not to accidentally include photos from both the visible spectrum and the NIR spectrum; the two sets should be processed individually. After the images have been selected, the next step is to make sure their geolocation information is correct and that the correct camera model is selected. As I said earlier, the SX260's geolocation information is saved within each image, so the screen should show a green check mark next to the geolocation and orientation status. The GEMs requires its geolocation information to be imported from a spreadsheet before it can be processed, and its camera characteristics (focal length, sensor size) must be entered manually before any processing can occur. After all of the parameters have been entered, the project is created.
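
For the SX260, that green check mark simply reflects the GPS position stored in each image's EXIF header. The sketch below is my own illustration of one way to inspect those positions before building a project; it is not Pix4D functionality, it assumes the Pillow library is installed, and the folder name is hypothetical.

# Read the GPS position the Canon SX260 writes into each image's EXIF header.
# Illustrative only; assumes Pillow is installed and a hypothetical folder name.
from pathlib import Path
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 34853  # standard EXIF tag id for the GPSInfo block

def dms_to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds to signed decimal degrees."""
    d, m, s = (float(v) for v in dms)
    value = d + m / 60.0 + s / 3600.0
    return -value if ref in ("S", "W") else value

def read_gps(image_path):
    exif = Image.open(image_path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(GPSINFO_TAG, {}).items()}
    lat = dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    alt = float(gps.get("GPSAltitude", 0.0))
    return lat, lon, alt

for jpg in sorted(Path("sx260_flight").glob("*.JPG")):   # hypothetical folder
    lat, lon, alt = read_gps(jpg)
    print(f"{jpg.name}, {lat:.6f}, {lon:.6f}, {alt:.1f} m")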

After the projects were created, I performed initial processing to determine the quality of the collected imagery. The SX260 imagery consisted of 108 total images, of which 105 were used. The three removed images appear to have been captured as the UAS was ascending, leading to pixel distortion and less accurate geolocation (Figure 1). 
Figure 1: Images removed when processing the Canon SX260 flight

The areas between flight lines had slightly less overlap than the areas where the UAS turned from one line to the next (Figure 2).
Figure 2: Number of overlapping images taken with the SX260 

The GEMs imagery consisted of 146 total images, of which 142 were used. The four removed images appear to have been captured over a line of trees with intense shadows, which likely caused the software to have difficulty identifying matching pixels (Figure 3).
Figure 3: Images removed when processing the GEMs flight.

The area covered by the removed images is where the flight plan had the lowest overlap (Figure 4).
 
Figure 4: Number of overlapping images taken with the GEMs

After analyzing the quality report, I continued processing the data through the "Point Cloud Densification" and "DSM and Orthomosaic Generation" steps. Once the point clouds were generated, it was possible to perform 2D and 3D calculations from the point cloud, as well as to create a 3D fly-by animation (Figures 5 and 6).
Figure 5: Calculating the total area of the community garden from the GEMs point cloud.

Figure 6: Calculating the volume of a shed at the community garden from the GEMs point cloud
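
The volume measurement in Figure 6 was made with Pix4D's own tools, but the underlying idea can be sketched in a few lines: grid the densified point cloud, take the top-of-surface height of each cell above a base elevation, and sum cell area times height. The code below is a rough illustration using synthetic points, not Pix4D's actual algorithm.

# Rough volume-from-point-cloud sketch (not Pix4D's algorithm); synthetic data.
import numpy as np

def volume_above_base(points, base_z, cell=0.10):
    """Approximate volume (m^3) of a surface above base_z using a square grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    top = {}                                   # highest point found in each cell
    for i, j, zk in zip(ix, iy, z):
        top[(i, j)] = max(top.get((i, j), zk), zk)
    return sum(max(h - base_z, 0.0) for h in top.values()) * cell * cell

# toy "shed": a flat 2 m x 3 m roof 2.5 m above the ground (true volume 15 m^3)
rng = np.random.default_rng(0)
roof = np.column_stack([rng.uniform(0, 2, 5000),
                        rng.uniform(0, 3, 5000),
                        np.full(5000, 2.5)])
print(f"estimated volume: {volume_above_base(roof, base_z=0.0):.1f} m^3")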

Results:

The orthomosaics and DSMs were generated for the SX260 and the GEMs at 1.44 cm GSD and 2.62 cm GSD, respectively. To perform a basic assessment of spatial error, I measured a known distance (the 100-yard section of the running track) on the SX260 point cloud. Pix4D measured the track's length as 100.68 meters, or about 110.1 yards, roughly 10% greater than its actual length (Figure 7).

Figure 7: When measured in Pix4D, the track is 100.68m long.
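
The check in Figure 7 is simple arithmetic, shown below for clarity; the only inputs are the length Pix4D reported and the track section's nominal length.

# Convert the Pix4D measurement to yards and compare it with the known length.
measured_m = 100.68                     # length reported by Pix4D
known_yd = 100.0                        # nominal length of the marked section
measured_yd = measured_m / 0.9144       # metres -> yards
error_pct = (measured_yd - known_yd) / known_yd * 100.0
print(f"{measured_yd:.1f} yd measured, {error_pct:.1f}% longer than expected")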

It is currently impossible for me to ascertain the spatial accuracy of other parts of the imagery, as I don't have any GCPs or other known distances. The DSM generated from the GEMs had elevation values 30 meters lower than the ground surface. This error is due to the internal GPS recording elevation in a way that does not match the parameter I set when entering the images into Pix4D.
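
A likely explanation (my assumption, not something verified with survey data) is that the camera GPS logs height above the WGS84 ellipsoid while the project expected height above mean sea level; the two differ by roughly the local geoid separation. The sketch below shows the conversion, using an assumed separation value.

# Assumed-value sketch of an ellipsoidal-to-orthometric height conversion:
# H (above mean sea level) = h (above ellipsoid) - N (geoid undulation).
GEOID_UNDULATION_M = -33.0   # assumed local value; the geoid sits below the ellipsoid here

def ellipsoidal_to_orthometric(h_ellipsoid_m, undulation_m=GEOID_UNDULATION_M):
    return h_ellipsoid_m - undulation_m

print(ellipsoidal_to_orthometric(250.0))   # 283.0 m above mean sea level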

Overall, Pix4D generated fantastic results, far better than any mosaic generated from Sentek's GEMs software or Microsoft's Image Composite Editor. The SX260's orthomosaic and DSM were both generated at the fine GSD noted above, and as such have extremely high detail (Figure 8).

Figure 8: The orthomosaic and DSM generated from the SX260
In spite of the vertical accuracy errors and larger GSD, the orthomosaic and DSM generated from the GEMs were also quite good (Figure 9). 
Figure 9: The orthomosaic and DSM generated from the GEMs


Something to note when creating maps from very high-resolution imagery, such as the orthomosaic generated from the SX260, is that the computer is almost always resampling the image to a coarser resolution for display unless you are zoomed in extremely far. Resampling can cause certain details to appear unnecessarily pixelated, so it is important to specify the proper method. The Nearest Neighbor method preserves the original pixel values, but causes linear features to appear jagged (Figures 10a, 11a). Bilinear Interpolation averages neighboring pixel values, which makes most features appear smoother and less distracting to the eye (Figures 10b, 11b).
Figure 10: a. This image was resampled using the Nearest Neighbor method (left).
b. This image was resampled using Bilinear Interpolation (right).
Figure 11: a. This image was resampled using the Nearest Neighbor method (left).
b. This image was resampled using Bilinear Interpolation (right).
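
As a small, self-contained illustration of the difference (using synthetic data rather than the orthomosaic, and SciPy's zoom function rather than the GIS software's own resampler), nearest-neighbor downsampling keeps only the original pixel values, while bilinear downsampling introduces averaged, intermediate values:

# Synthetic demonstration of nearest-neighbor vs. bilinear resampling.
import numpy as np
from scipy.ndimage import zoom

# a sharp diagonal edge, standing in for a linear feature in the orthomosaic
img = np.fromfunction(lambda r, c: (r > c).astype(float), (20, 20))

nearest = zoom(img, 0.25, order=0)    # order=0 -> nearest neighbor
bilinear = zoom(img, 0.25, order=1)   # order=1 -> bilinear interpolation

print("nearest keeps only the original values:", np.unique(nearest))
print("bilinear adds averaged values:", np.unique(bilinear))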

Pix4D automates the photogrammetric process, allowing for relatively fast processing of UAS imagery, and it does so rather well. The derivatives generated without GCPs appeared to be true to reality and didn't feature any major distortion. The program has incredible capabilities; however, attention should still be paid to accuracy.

