
Waveform Lidar Simulation

One of the more interesting areas of development right now for laser radar is "waveform" LIDAR systems. Unlike Linear-mode (Lm) or Geiger-mode (Gm) systems that collect a relatively small number of height measurements per GSD, a "waveform" LIDAR (wLIDAR) essentially digitizes the returned flux during the time window at some time resolution. That means you have intensity information at all of the ranges rather than just a discrete set of them. This allows for some very complex post-processing of the data. The "Airborne Taxonomic Mapping System" (AToMS) operated by the Carnegie Airborne Observatory (CAO) is being used to better quantify the biomass in complex tree canopies and grasslands. The remote sensing arm of the National Ecological Observatory Network (NEON) plans on operating a wLIDAR on its Airborne Observation Platform (AOP).

Most wLIDAR systems feature a high performance, single-pixel detector coupled to a high performance digitizer. To obtain area coverage, that detection system is coupled to a scanner that provides across-track scanning while the airborne platform provides the along-track scanning via platform motion. Note that along-track pointing isn't out of the question, but let's keep it simple for now. As people become more familiar with these types of systems, we have been getting questions such as "Can DIRSIG model a wLIDAR system?"

Most users familiar with the DIRSIG Lidar modality are familiar with the two-stage approach we employ. Stage 1 is the radiometry simulation that (for each pulse) computes the temporal flux received by each pixel during a user-specified listening window. That result is then fed into Stage 2: the detector model and point generation. The standard detector model supports both generic Linear-mode and Geiger-mode detection schemes. This two-stage approach provides a lot of flexibility for doing detector performance studies because the detection and point generation can be performed very rapidly once the radiometry is precomputed.

Many users have asked "When will a waveform detector model join the existing Linear-mode and Geiger-mode detector models?" The answer is "We don't need to!" Let me explain ...

The .BIN file created in "Stage 1" by DIRSIG contains, for each pulse, the temporal flux for each pixel in the receiver array during the period that the array is armed (defined by a range gate or listening window). It is accompanied by a lot of meta-data, including the precise absolute time of the current pulse, ephemeris such as the platform location, platform orientation and platform-relative pointing, and information about the source and receiver. But the temporal flux for each pixel is what I want to focus on. That portion of the data is basically what a wLIDAR system collects. The major difference is that the data produced by DIRSIG does not include the impulse response of the detector, A/D system, etc.

Unlike the point clouds generated by conventional systems that can use the standardized LAS format, there has not yet been widespread adoption of a file format for waveform LIDAR data. Note that the LAS 1.3 specification released in 2009 does have support for waveform data, but most operating sensors are still using home-grown data formats that were created before the recent LAS additions. For example, the data we have been getting from CAO is formatted as an ENVI image file (see this previous post in addition to other resources) employing the "spectral" dimension as a temporal dimension. Additional files carry important meta-data for the pulse, including the location of the area imaged, the source pulse profile, etc.

So is there any easy way to convert a DIRSIG .BIN file into an ENVI image file so that the waveform can be visualized and processed? The answer is "Yes" and it doesn't even require conversion. The .BIN file is basically a 3D cube of data with some metadata. The ENVI image header file can be used to directly import the .BIN file as long as:
  • The pulse data is not compressed using the lossless compression option
  • The file contains only a single pulse
The uncompressed data requirement is because ENVI cannot read the compressed data. Pulse compression can be turned off in the "Output" tab of the "Receiver" configuration. The single-pulse requirement is because ENVI cannot skip over the meta-data between pulses in a single, multi-pulse file. If you have a multi-pulse simulation, you can tell DIRSIG to generate a separate file for each pulse using the "Generate files per capture" option on the "Output" tab of the "Receiver" configuration.

The required ENVI header file can be created by hand using the following template:
ENVI
description = {DIRSIG .bin file}
samples = 128
lines   = 128
bands   = 302
header offset = 777
file type = ENVI Standard
data type = 5
interleave = bip
sensor type = Unknown
byte order = 0
wavelength units = Unknown

Let me explain the various variables:
  • For this quick demo, I actually modeled a 128x128 array rather than scanning a single pixel. Hence, the "samples" and "lines" are both 128.
  • My listening window opened at 1.31e-05 seconds, closed at 1.34e-05 seconds, and had a time resolution of 1e-09 seconds. That window spans (1.34e-05 - 1.31e-05)/1e-09 = 300 sampling intervals, which results in 301 time bins being "digitized". The DIRSIG .BIN file also includes the passive flux (Sun, Moon, etc.) as the first "band" in the cube. This results in 302 "bands" total.
  • The various meta-data headers in the .BIN file account for 777 bytes at the front of the file.
  • The spatial/temporal data is "band interleaved by pixel".
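If you would rather skip ENVI entirely, the same header values can be used to read the cube directly. Below is a minimal sketch in Python/NumPy that assumes the layout described above (a 777-byte meta-data header, 128x128 pixels, 302 bands, 64-bit floating point, BIP interleave); the file name is just a placeholder for your single-pulse .bin file.

import numpy as np

# Layout taken from the ENVI header above (values are specific to this demo)
header_offset = 777                    # bytes of DIRSIG meta-data at the front of the file
samples, lines, bands = 128, 128, 302
dtype = np.dtype("<f8")                # ENVI data type 5 (64-bit double), byte order 0 = little-endian

# Read the raw samples; "pulse_0000.bin" is a placeholder name
raw = np.fromfile("pulse_0000.bin", dtype=dtype, offset=header_offset,
                  count=lines * samples * bands)

# BIP interleave means the band index varies fastest, so reshape to (lines, samples, bands)
cube = raw.reshape(lines, samples, bands)

passive = cube[:, :, 0]      # band 0: the passive (Sun/Moon) flux image
waveform = cube[64, 64, 1:]  # bands 1-301: the temporal waveform for the center pixel

From there, the passive image or any per-pixel waveform can be plotted with your favorite tool, just like the band images shown below.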
Below is the data in band #0 (or #1 depending on how you count), which is the passive flux from the Moon during my night simulation (note there is a shadow on the ground that is hard to see on some monitors). You are looking down at a tree planted on some flat ground.
The following is what band #250 looks like, which corresponds to a range about halfway down the tree. At this range, we are mostly seeing returns from around the "waist" of the tree.
Below is a temporal plot of a pixel in the cube, showing the waveform profile created by the various leaves and branches within the IFOV of the pixel. Note that some of that shape is from direct returns off the structure of the tree being imaged and some portion is most likely multiple-bounce returns that were scattered by or transmitted through other leaves.
The goal of this article was to show how the existing outputs of DIRSIG can be used to approximate a waveform LIDAR (wLIDAR) system. What is currently missing from the DIRSIG simulation is the impulse response of the receiving hardware (detector, digitizer, etc.), but for now that could be incorporated by users reading in the data and performing the convolution themselves. Until widely accepted visualization tools for wLIDAR are available, we showed a simple trick that allows users to visualize the data using multi- or hyper-spectral image approaches in ENVI (or an ENVI-compatible image viewer like the one provided with DIRSIG and used here).
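To make that last point a little more concrete, here is a hedged sketch (again in Python/NumPy) of how a user might fold a notional receiver impulse response into one of the per-pixel waveforms read from the .bin cube. The Gaussian shape, the 2 ns FWHM and the 1 ns bin width are assumptions made for this example, not values taken from any particular sensor.

import numpy as np

def apply_impulse_response(waveform, fwhm_ns=2.0, bin_ns=1.0):
    # waveform : 1-D array of flux samples for one pixel (passive band removed)
    # fwhm_ns  : assumed full-width-at-half-maximum of the receiver response, in ns
    # bin_ns   : temporal resolution of the listening window, in ns
    sigma = fwhm_ns / (2.0 * np.sqrt(2.0 * np.log(2.0))) / bin_ns
    half = int(np.ceil(4.0 * sigma))
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()               # unit area, so the total returned energy is preserved
    return np.convolve(waveform, kernel, mode="same")

# Example usage with the cube read earlier:
# smoothed = apply_impulse_response(cube[64, 64, 1:])

A measured impulse response (or one that also includes the digitizer electronics) could be substituted for the Gaussian kernel in exactly the same way.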
