Sunday, April 8, 2007

Into the gory details

I thought about including only the "cool stuff", but when some people started asking me whether this whole thing was just a paid vacation, I had to step back and say, hey, it wasn't all play and no work. This is where I pay the bill, in cash, for the "paid vacation" part of the trip. And, as stated in most textbooks, this post can be safely skipped without any break in continuity... :-)

When we look at the use of satellite imagery, one thing that stands out is that these images are typically used in a "post processing" context rather than as a "real time" tool. During a field campaign, researchers deploy a number of sensors at the scene, and during the post processing stage the data from the sensors and the satellite images are analyzed to characterize the changes occurring at the location. The use of the imagery as a near real time tool is limited mainly for two reasons. The first is the low repeat rate of polar-orbiting satellites, which determines how often the satellite covers approximately the same position twice. This is usually around 1 to 3 days, which means that any deformation occurring on a time scale shorter than that will not be observed between an image pair. The second obstacle lies in the computational and storage requirements needed to handle the images (a typical image from the SHEBA data set is approximately 250 MB: 15000x15000 pixels at 50 m/pixel resolution). This combination of high spatial resolution but low temporal resolution makes any form of real time processing difficult.

The deformation, or motion, that takes place in sea ice has certain unique characteristics, such as the presence of large discontinuities, where ice floes (individual pieces of ice formed on the ocean surface) move apart creating "leads" or crash into each other creating "ridges". This kind of motion can be classified as piecewise linear: each individual piece moves rigidly, but the overall picture is that of a non-rigid deformation. It is much like observing a human limb in motion, where each segment undergoes rigid motion but the limb as a whole deforms non-rigidly. The presence of this large, discontinuous, non-rigid motion causes many algorithms to fail when applied to images of sea ice.

To handle this kind of motion, I developed a robust motion estimation algorithm that proved to be extremely efficient and accurate. Unlike the typical products currently available, which provide a resolution of around 5 km, this algorithm captures motion at 400 m resolution, an order of magnitude finer than what is currently available. A preliminary prototype of the algorithm came out of my Masters research; it was significantly modified to handle high noise and discontinuous motion efficiently and accurately. This was the algorithm applied to compute a near real time data product for the APLIS ice camp.

The primary goal of the data product was to identify regions of activity (leads/ridges) and to transmit this information to the researchers stationed on the ice so they could perform ground truth stress measurements. With this localization, researchers could deploy stress buoys at the locations with maximum activity, thereby obtaining ground truth measurements of the stress in the ice for model validation.
The activity was estimated from the high resolution motion field by computing invariant characteristics, such as shear. This allowed us to observe the deformations taking place in a 200 km x 200 km region around the camp and to pinpoint the locations where activity was significant.
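To give a flavor of what "computing invariant characteristics" means, here is a minimal sketch of deriving the divergence and shear invariants from a gridded motion field (u, v) with finite differences. This is a generic numpy illustration, not the project's actual C/C++ implementation:

```python
import numpy as np

def deformation_invariants(u, v, dx=1.0):
    """Divergence and shear invariants of a 2D motion field.

    u, v: 2D arrays of x- and y-displacement (rows = y, cols = x).
    dx: grid spacing. Returns (divergence, shear) arrays.
    """
    # spatial derivatives of each velocity component
    du_dy, du_dx = np.gradient(u, dx)
    dv_dy, dv_dx = np.gradient(v, dx)
    # divergence measures opening (leads) / closing (ridging)
    divergence = du_dx + dv_dy
    # shear invariant: magnitude of the shearing part of the strain tensor
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    return divergence, shear
```

Regions where shear (or the total deformation combining both invariants) is large are the "active" regions where leads and ridges are forming.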

Description of the project

The original prototype was developed in Matlab; it was not computationally efficient, but it gave us the means to test and debug the entire estimation procedure. To address the computational efficiency of the algorithm, we developed the entire system in C/C++. One of the important things we did was to provide a high level design description of the system using UML. This, I believe, was an important step, since it gave us a picture of the various interactions between the modules and a high level understanding of the whole project. The UML modeling was done using the StarUML toolkit, and the use case description for the project is shown below.

Most of the pieces were built in a bottom-up fashion, starting from the motion estimation module, which was essentially translated from my Matlab implementation to C/C++ using the OpenCV library. One of the biggest difficulties was in handling map projections (how do we represent a point on the planet as a point in an image?). After considerable struggle, I managed to get the mapx library to perform vector projection of the buoy positions. Raster projection was performed using the ASF convert tool, which took the CEOS Level 1 satellite images and produced GeoTIFF images under the Polar Stereographic projection (the map projection used for this project). The meta-information from the GeoTIFF images, together with the projected buoy positions, was used to extract the image region at the camp location.
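For readers unfamiliar with map projections, the forward north polar stereographic projection can be sketched in a few lines. This uses the spherical approximation (the real pipeline used the mapx library, which handles the ellipsoidal case); the true-scale latitude of 70 N and central meridian of -45 are the conventional parameters for Arctic products, assumed here for illustration:

```python
import math

R = 6371.0  # mean Earth radius in km (spherical approximation)

def polar_stereo(lat, lon, lat_ts=70.0, lon0=-45.0):
    """North polar stereographic forward projection (sphere).

    lat, lon in degrees; returns (x, y) in km on the projection plane.
    lat_ts: latitude of true scale; lon0: central meridian.
    """
    phi = math.radians(lat)
    lam = math.radians(lon - lon0)
    # distance from the pole on the projection plane
    rho = R * (1 + math.sin(math.radians(lat_ts))) * math.tan(math.pi / 4 - phi / 2)
    return rho * math.sin(lam), -rho * math.cos(lam)
```

With a projection like this in hand, a (lat, lon) buoy position and a georeferenced image pixel live in the same (x, y) coordinate system, which is what makes it possible to cut out the image region around the camp.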

To provide for interaction between the various modules, I used Python and Matlab scripts. The GPS positions from the buoys arrived by email, and Python was used to parse them and to project them into the Polar Stereographic coordinate system. Python was also used to pull images from the ASF ftp site, to perform map projection of the images, and to compute motion. This processing was scheduled to run at specific times throughout the day using Pycron, a cron substitute for Windows. Once the processing was completed, Matlab was used to draw the maps and to generate the HTML/Javascript web pages that could then be sent to the researchers.
(The results can be seen at http://vims.cis.udel.edu/~mani/SEDNA)
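As a flavor of that glue scripting, here is a minimal sketch of parsing a buoy position line out of an email body. The line format, buoy ID, and coordinates below are hypothetical (the actual email format isn't shown in this post); the point is simply that a short regular expression turns free text into numeric lat/lon:

```python
import re

# Hypothetical report line, e.g.:
#   "BUOY 07  2007-04-08 12:00 UTC  78.532N 165.214W"
LINE = re.compile(
    r"BUOY\s+(\d+)\s+(\S+ \S+) UTC\s+"
    r"(\d+\.\d+)([NS])\s+(\d+\.\d+)([EW])"
)

def parse_buoy_line(line):
    """Return (buoy_id, timestamp, lat, lon) or None if no match.

    Latitude/longitude are signed decimal degrees (S and W negative),
    ready to feed into a map projection.
    """
    m = LINE.match(line)
    if m is None:
        return None
    buoy_id = int(m.group(1))
    lat = float(m.group(3)) * (1 if m.group(4) == "N" else -1)
    lon = float(m.group(5)) * (1 if m.group(6) == "E" else -1)
    return buoy_id, m.group(2), lat, lon
```

The parsed positions would then be projected and overlaid on the motion maps for the web pages.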
