As many of you may have read in past posts by Niek, we have been using the Blender software heavily in the construction of scenes so they can be ingested into DIRSIG. While we have many Blender tools for editing and exporting existing scenes, they utilize only a small subset of the features that exist in Blender. Blender can be viewed as the vi (or emacs) of the 3D animation world, which means it can be very powerful but at times opaque to the newly initiated. After paying our dues in the apprenticeship of Blender knowledgedom, we wanted to make accessible to DIRSIG users the capabilities that have made Blender the de facto standard of the 3D animation domain.
In an ongoing project that addresses the need for data sets for training machine learning algorithms and assessing the performance of target detection/exploitation algorithms, DIRS has focused on implementing tools for producing numerous instantiations of simulated scenes. Blender has been an integral part of this tool development because of its scriptability and tight integration with visualization, which provides the user with immediate feedback on object placement.
It also brings a scene-centric perspective at the input end of the image and data simulation chain independent of the modality that may be rendered in DIRSIG.
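To give a flavor of what "producing numerous instantiations of simulated scenes" via scripting looks like, here is a minimal, self-contained sketch of randomized object placement with overlap rejection. It is only an illustration of the concept; the actual DIRS tools run inside Blender against its Python API, and every name below (`place_objects`, its parameters) is hypothetical.

```python
import random

def place_objects(n, extent, min_sep, seed=None):
    """Randomly place n objects on a square ground plane of side `extent`,
    rejecting any candidate closer than `min_sep` to an already placed
    object. Returns a list of (x, y) positions."""
    rng = random.Random(seed)
    placed = []
    attempts = 0
    while len(placed) < n and attempts < 10000:
        attempts += 1
        x, y = rng.uniform(0, extent), rng.uniform(0, extent)
        # Keep the candidate only if it clears every existing object.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_sep ** 2
               for px, py in placed):
            placed.append((x, y))
    return placed

# One "instantiation" of a scene layout; re-seeding yields another.
positions = place_objects(5, extent=100.0, min_sep=10.0, seed=42)
```

In the real workflow each placement would drive a Blender object duplication, so the layout can be inspected visually before export.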
One of the major realizations we encountered in addressing this problem was the need for the role of a Scene Librarian to assess the quality of the scene components. The construction of scene elements varies greatly depending on the application and the facilities available to the creator. It became obvious to us that a guideline of best practices was necessary to vet any shortcomings in the geometric construction or materials. This will be discussed in a future posting, but we wanted to give some context on how these tools came about.
The one tool that we feel a Scene Librarian would need is a visualization and setup tool that not only lets them view the scene in question, but also provides facilities for animating camera views and geometries. While this does not directly help with the geometric construction process, it does provide the librarian/user a rapid means of generating the necessary input files for different imaging modalities (VNIR, LWIR, and Hyperspectral). Shortcomings of the scene geometry will hopefully become apparent in one of these modalities. The resulting simulation files are compatible with dirsig_edit and can be modified from that interface. In past demonstrations of DIRSIG, vehicle and sensor motion was painstakingly specified by hand. While it can be rightly argued that Blender 2.49 is an equally painstaking environment (which we will not contest), we went to great lengths to make invoking the tool a simple action. The tool takes advantage of Blender animation concepts and translates those parameters into DIRSIG-specific files.
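The core of that translation step is flattening sparse animation keyframes into the dense, per-time-step motion records a simulation input file needs. The sketch below shows that idea with simple linear interpolation on (time, value) pairs; it is a generic illustration only, not the actual DIRSIG file syntax or the tool's implementation, and `resample_keyframes` is a hypothetical name.

```python
def resample_keyframes(keyframes, dt):
    """Linearly resample sparse (time, value) keyframes onto a uniform
    time grid with step dt, the way an exporter might flatten a Blender
    animation curve into per-frame motion records."""
    keyframes = sorted(keyframes)
    t0, t1 = keyframes[0][0], keyframes[-1][0]
    samples = []
    i = 0
    t = t0
    while t <= t1 + 1e-9:
        # Advance to the keyframe segment containing time t.
        while i + 1 < len(keyframes) and keyframes[i + 1][0] < t:
            i += 1
        ta, va = keyframes[i]
        tb, vb = keyframes[min(i + 1, len(keyframes) - 1)]
        frac = 0.0 if tb == ta else (t - ta) / (tb - ta)
        samples.append((round(t, 6), va + frac * (vb - va)))
        t += dt
    return samples

# Two keyframes of, say, an x-coordinate become three per-second records.
records = resample_keyframes([(0.0, 0.0), (2.0, 10.0)], dt=1.0)
```

Blender itself interpolates with Bezier curves by default; a real exporter would evaluate those curves per frame rather than assume linear segments.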
A screencast showing some features of that tool can be viewed below.
Additional details regarding this routine are described on the DIRS/CIS wiki at
The above screencast shows the process under Blender 2.49, which was the stable release when we started our development with Blender. We hope to update these and other routines for the Blender 2.5 series as documentation for its API becomes available (which we believe is slated for the 2.58 release).
Enjoy,
Rolando
Comments
At the time I left RIT, our Blender scripts only worked with the older version (2.49). The Blender guys changed the underlying interface in the newer releases and I'm guessing the RIT scripts haven't been updated yet.
For instructions on using the scripts with the older version of Blender, see these posts.