Over the past few years, we have been doing more simulations of video systems. Perhaps you have seen the video simulation of MegaScene1, with cars driving down the streets as they would be acquired from a circling UAV:
So how do we make these videos? It is really a simple tool chain involving a tool that is distributed with DIRSIG and another tool that can be downloaded from the web. For this example, we are going to deal with a 2D framing array type system that has a red, green and blue (RGB) channel set being captured (read out) at a regular interval:
DIRSIG itself doesn't generate video files directly, but you can employ DIRSIG to generate the individual frames for a video and then combine them. This workflow breaks down into a three (3) step process:
- Instruct DIRSIG to output each capture (or "frame") of the focal plane to an individual file.
- Convert each floating-point radiance image file to a 24-bit (8 bits per channel) RGB image file.
- Encode the RGB image files into a video format.
The first step is to get DIRSIG to write each capture of the 2D array to a separate file. In our platform setup, we want to enable the option to generate "an individual file for each capture in each task" on the "Output" tab in the Simple/Basic capture method editor:
After the DIRSIG simulation runs, you will have a series of files that are named with a very specific pattern. For this example, let's assume we have a camera with a 24 Hz frame rate and we capture data for 1 second. We should get 24 individual files (24 frames/second x 1 second = 24 frames) and the filenames will be:
rgb-t0000-c0000.img
rgb-t0000-c0001.img
...
rgb-t0000-c0023.img

The naming scheme is rather simple. The "rgb" portion of the filename is the "File Basename" (see the screen shot above). The "t0000" portion of the filename indicates the task number the file was created during, and the "c0000" portion is the capture number within that task. The ".img" is the "File Extension" (see the screen shot above). So "rgb-t0000-c0000.img" is the first capture (frame) in the first task and "rgb-t0000-c0023.img" is the 24th capture in the first task. Likewise, "rgb-t0001-c0012.img" would be the 13th capture in the second task.
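As a quick sanity check, you can count the frames generated for the first task from the command line; with the 24 Hz, 1 second setup above, this should report 24:

prompt%] ls rgb-t0000-c*.img | wc -l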
Now that we have a sequence of files, we need to convert them from the standard DIRSIG radiance image format (see this previous article for more info on the DIRSIG radiance image file format) to a 24-bit image format that can be easily encoded into a video. The DIRSIG image viewer has a way to save a loaded image as a PNG, JPEG, etc. file. But, you don't want to manually load a large number of images into the viewer and save out each frame by hand. This problem screams for a scripting solution, but we need a scriptable tool to handle the multi-band, floating-point radiance image to 24-bit image conversion. Beginning with DIRSIG 4.3, we started to distribute a simple command-line tool called "envitopnm" that handles this very task. The tool produces images in the PNM image format compatible with the NetPBM image processing toolbox. If you have this toolbox installed, then you can easily convert a DIRSIG image to a PNG image using the following syntax:
prompt%] envitopnm --autoscale=true rgb-t0000-c0000.img | pnmtopng > rgb-t0000-c0000.png

There are many options to the envitopnm tool, but in this example the DIRSIG radiance image file has three bands and the tool assumes they correspond to the red, green and blue channels (note, you can change this association with a command-line option). The "autoscale" option was used here to automatically scale the floating-point radiances in each band to the 8-bit range of the corresponding channel (another command-line option allows the user to specify the linear gain and bias for this scaling). Using this tool, you can automate the image conversion with a simple shell script, as sketched below.
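Such a script is not part of DIRSIG, but a minimal sketch (assuming a Bourne-style shell, that the Netpbm tools are on your path, and that all the frames live in the current directory) might look like:

#!/bin/sh
# Convert every DIRSIG radiance image in the current directory
# to an 8-bit RGB PNG using envitopnm and the Netpbm pnmtopng tool.
for img in rgb-t*-c*.img; do
    png=`basename $img .img`.png
    envitopnm --autoscale=true $img | pnmtopng > $png
done

With all the radiance images converted to RGB, we can encode them into a video.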
The video encoder we are currently using is FFmpeg, which is an extremely powerful, open source video encoder. But, there are a myriad of other video encoders you could employ at this point. On the Mac, QuickTime can make a video from a sequence of images. On Windows, Microsoft offers a variety of tools (depending on which version of Windows you have) that can encode to the Windows Media file format.
The documentation for FFmpeg reveals the wealth of options available, including the frame rate, the bit rate, etc. For example, the following syntax will create a Windows Media file with a frame rate of 24 Hz and a maximum bitrate of 4096 kb/s from a sequence of image files that are named 1.png, 2.png, etc.:
prompt%] ffmpeg -i %d.png -r 24 -b 2048k -minrate 2048k -maxrate 4096k -bufsize 16000k -vcodec wmv2 -an my_video.wmv
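Note that our converted frames are named rgb-t0000-c0000.png and so on, not 1.png, 2.png, so they need to be renamed (or linked) to match the "%d.png" input pattern. A minimal sketch of one way to do this, again assuming a Bourne-style shell (the zero-padded task and capture numbers mean the shell glob already sorts the frames in time order):

#!/bin/sh
# Link the converted frames to the 1.png, 2.png, ... sequence
# that the ffmpeg "%d.png" input pattern expects.
n=1
for png in rgb-t*-c*.png; do
    ln -s $png $n.png
    n=`expr $n + 1`
done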
Those are the basics, but let's look at an example. In the circling UAV video of MegaScene1, the time scale for the video was on the order of a few minutes and the dynamic component of the video was the motion of the platform and the motion of the cars in the scene. For this example, we will do a time-lapse of a scene. The platform will be statically positioned and the scene content will be static. The dynamic aspect is the sun position and the atmospherics related to the sun position. To make a video that captures the scene every 5 minutes, we compute the clock rate corresponding to 5 minutes (1 / (5 min * 60 sec/min) = 1/300 Hz). To have the sensor capture for the duration of the day, we set up a single task that starts at 5 AM and has a duration of 17 hours (17 hrs * 3600 sec/hr = 61,200 sec), which at one capture every 300 seconds yields 61,200 / 300 = 204 frames. To have MODTRAN-based atmospherics for a simulation spanning such a long period, you need to use the "MODTRAN Threshold" atmosphere model, which calls MODTRAN on-the-fly and can continually update the current sun geometry. The power of this advanced atmospheric model will be discussed in a future post.
The video below was created using this setup (only 5 AM to 4 PM is shown). You can see the sun rise, the sky color change and the shadows move throughout the day.
This article isn't intended to be a fully detailed, step-by-step tutorial because every application will be different. But we hope we have shown you the basics and you can adapt these ideas to suit your needs.