Building an Immersive Training Platform for Infantry Soldiers
- Created: Friday, 01 March 2013
Regardless of the tracker used, the key performance metric that can make or break the simulation experience is latency: the delay between an action in the physical world and the corresponding reaction in the virtual world. Latency is influenced by the response time of the tracker, the communication method (wireless, USB, or serial), and the CPU processing of the new data. Long latency gives the soldier a poor experience. If the soldier's head turns and the avatar's head lags by half a second, for example, the simulation will not mimic real life, immediately rendering it ineffective.
The maximum desired latency can be calculated from the frame rate. If the visual system is running at 30 Hz, or 30 frames per second, a frame is drawn every 33 milliseconds. A lag of two frames between physical action and avatar reaction is therefore about 66 to 80 milliseconds. A lag of three or four frames or more is noticeable and will make the simulation system useless. ExpeditionDI uses a wired MEMS-based tracker for body and head tracking, and achieves a latency of about 66-80 milliseconds when running at 30 Hz.
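The arithmetic above can be captured in a few lines. This is only an illustrative sketch of the frame-rate calculation described in the text; the function names are invented for this example and are not part of any ExpeditionDI software.

```python
def frame_time_ms(frame_rate_hz: float) -> float:
    """Time to draw one frame, in milliseconds (33.3 ms at 30 Hz)."""
    return 1000.0 / frame_rate_hz


def latency_budget_ms(frame_rate_hz: float, max_frames_of_lag: int = 2) -> float:
    """Maximum tolerable action-to-reaction latency for a given frame lag.

    A two-frame budget at 30 Hz works out to roughly 66 ms, matching
    the figure quoted in the article.
    """
    return max_frames_of_lag * frame_time_ms(frame_rate_hz)


budget = latency_budget_ms(30)  # two-frame budget at 30 Hz, ~66.7 ms
```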
Touch — Weapons
The nine-soldier US Army squad uses an assortment of weapons, including an M4, an M249, and an M4 with an M320 grenade launcher. Soldiers receive extensive training on the use of these weapons, and the muscle memory for removing and replacing a magazine to reload, or for moving the safety switch, is ingrained from basic training. A soldier simulation system needs to integrate the size, weight, balance, and functionality of the weapon to reinforce those basic weapons skills in the simulated environment. Training equipment that uses arrow keys to move the weapon does not reinforce critical muscle memory for using the weapon in a combat environment.
Integrating a weapon into a simulation system begins with an understanding of how the weapon is used and of its key components. When the trigger of the simulated weapon is pressed, it must fire the virtual weapon in the simulation. When the safety switch is moved from safe to semi, the weapon must behave accordingly in the simulator. Bringing the weapon up to the shoulder and looking through the optics must change the avatar's view. When the soldier runs out of ammunition, the magazine must be removed and a fresh magazine inserted to reload. The weapon's functions in ExpeditionDI, for example, are electronically instrumented to ensure the form and function of the real weapon are accurately represented in the virtual environment (see Figure 4).
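The trigger, safety, and magazine behavior described above amounts to a small state machine. The sketch below is a hypothetical illustration of that mapping, not ExpeditionDI's actual instrumentation software; all class and method names are assumptions made for this example.

```python
from enum import Enum


class Safety(Enum):
    SAFE = "safe"
    SEMI = "semi"


class SimWeapon:
    """Minimal virtual-weapon state machine: trigger, safety, magazine."""

    def __init__(self, magazine_capacity: int = 30):
        self.capacity = magazine_capacity
        self.rounds = magazine_capacity
        self.magazine_inserted = True
        self.safety = Safety.SAFE

    def set_safety(self, position: Safety) -> None:
        self.safety = position

    def pull_trigger(self) -> bool:
        """Fire one virtual round; returns True only if the weapon fired."""
        if self.safety is Safety.SAFE:
            return False  # safety on: trigger does nothing
        if not self.magazine_inserted or self.rounds == 0:
            return False  # no magazine, or magazine empty
        self.rounds -= 1
        return True

    def remove_magazine(self) -> None:
        self.magazine_inserted = False
        self.rounds = 0

    def insert_magazine(self) -> None:
        """Insert a full magazine, restoring the round count."""
        self.magazine_inserted = True
        self.rounds = self.capacity
```

The point of the sketch is that each physical control maps to exactly one state transition, so the instrumented weapon and the virtual weapon can never disagree about whether a trigger pull should fire.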
The man-worn system is the image generator (IG) for the ExpeditionDI simulator. The single-channel IG is the hub: it runs the simulation software, processes all the inputs from trackers and weapons, generates the virtual image for display on the HMD, and then wirelessly communicates the information to a central manager's station to ensure that all the man-worn systems stay in sync during training. Placing the processing on the soldier maximizes GPU performance while minimizing visual latency (see Figure 5).
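The per-frame pipeline that paragraph describes (read trackers, poll the weapon, update the scene, render, broadcast state) can be sketched as a simple frame loop. This is a hypothetical illustration of the architecture only; none of these class or method names come from Quantum3D's software.

```python
class FrameLoop:
    """One illustrative tick of a man-worn image generator's pipeline."""

    def __init__(self, tracker, weapon, renderer, network):
        self.tracker = tracker
        self.weapon = weapon
        self.renderer = renderer
        self.network = network

    def tick(self, scene):
        pose = self.tracker.read_pose()        # head/body tracker input
        events = self.weapon.poll_events()     # trigger/safety/magazine events
        scene.update(pose, events)             # advance local simulation state
        frame = self.renderer.draw(scene)      # render the image for the HMD
        self.network.broadcast(scene.state())  # keep squad systems in sync
        return frame
```

Keeping every step of the loop on the man-worn computer is what lets the rendered frame follow the tracker input within the two-frame latency budget; only the much smaller state summary crosses the wireless link to the manager's station.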
An immersive, squad-level simulator is a balance of leading-edge technology and human factors. Simulators like ExpeditionDI help squads develop better team cohesion, improve communication, build leadership skills, and rehearse specific missions. The main goal of any simulation and training system, however, is to give aviators, commanders, and infantry soldiers the training they need to gain confidence and, ultimately, save lives.
This article was written by Pratish Shah, Vice President of Marketing and Sales, Quantum3D (San Jose, CA). For more information, visit http://info.hotims.com/45601-141.