
29 November 2015

I/ITSEC 2015: Visualisation Systems – Current Technology and Applications Under Review

Modern visualisation systems can produce almost anything that is required; the trick is how to display it. Ian Strachan looks at the technology and how it contributes to military training.


In the past, for visualising the battlefield or preparing for detailed action, there were maps, models, photographs, and live rehearsals carried out over similar terrain. Perhaps their most significant use was in June 1944, when hundreds of thousands of soldiers, sailors and airmen were trained and briefed before the D-Day landings in Normandy. How would we use visualisation systems in such military operations today? In addition to maps and photography, the key word is simulation, which has almost entirely replaced the construction of models and is more versatile. Critical factors such as different weather conditions and likely enemy action can now be simulated before the operation, and images can easily be switched from day to night. First-generation simulator visual systems consisted of “model boards,” over which small cameras travelled at heights and speeds appropriate to the vehicle being simulated. When models were detailed enough and high-resolution colour cameras were used, these were very effective, and such systems had their devotees well into the era of computer-generated imagery. In the Soviet era, targets for NATO’s long-range ground-attack aircraft were far into Warsaw Pact territory, and some very large model boards were constructed so that low-level “war routes” could be practised. But in the light of computing developments, model boards for simulator visuals have largely fallen out of use.

The first computer-generated imagery (CGI) used in simulators of the 1970s was crude and of little use for tactical training. However, during the 1980s the application of Moore’s Law resulted in imagery that was of genuine tactical use, and now we see CGI that approaches the real world in fidelity. One of the breakthroughs that made this possible was the use of “texture” within the polygons that make up the computer-generated scene. The first textures allowed a reticulated pattern to be applied to each polygon instead of an otherwise plain surface. This reduced the need for more and more polygons, particularly in a rapidly moving scene such as in an aircraft simulator. The extra points of contrast provided by texture patterns increased the magnitude of the “picture flow” or “change of perspective” cue that allows the user to sense both height and speed. Increases in computing power allowed more polygons in the scene, and texture technology quickly improved with more realistic patterns, for instance simulating grassland, cornfields and the like. Finally, “photographic texture” was developed. This allowed small but real images to be inserted into each polygon without the need to re-process the image each time, the “texture map” being simply inserted into a polygon as a single element after it had been created the first time. A large, visually repetitive area such as field patterns, woodland, rough sea, or a skyscraper can be created using very few polygons and one or two photo texture maps that are repeated over and over again. Turning to night imagery, vision devices include light intensifiers such as NVGs working in the near-IR at a wavelength of about one micron, and the more expensive passive FLIR, which depends only on thermal contrast within the scene and works on the blackest of nights. Both NVGs and FLIR are easy to simulate by simply adding monochrome colours and appropriate texture to each polygon. In the case of NVGs, monochrome green is normally used. With FLIR the thermal image can be presented as either “white hot” or “black hot”: white-hot may be the obvious choice, but black-hot may produce a picture that the viewer considers more realistic. In any case, the picture can be changed from white- to black-hot at the touch of a switch.
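
As a rough illustration of how simple that polarity switch is in software, the sketch below (Python and NumPy, purely for illustration; no particular image generator is implied) maps a normalised thermal-contrast array to white-hot or black-hot, and tints a near-IR intensity array monochrome green for NVG-style imagery:

```python
import numpy as np

def flir_display(thermal, black_hot=False):
    """Map a normalised thermal-contrast array (0 = coldest, 1 = hottest)
    to a monochrome display image. White-hot shows hot objects bright;
    black-hot simply inverts the polarity."""
    img = np.clip(thermal, 0.0, 1.0)
    return 1.0 - img if black_hot else img

def nvg_display(near_ir, tint=(0.1, 1.0, 0.2)):
    """Tint a normalised near-IR intensity array monochrome green,
    the convention noted above for simulated NVG imagery."""
    i = np.clip(near_ir, 0.0, 1.0)
    return np.stack([i * c for c in tint], axis=-1)  # H x W x RGB

# Switching polarity "at the touch of a switch":
scene = np.random.rand(4, 4)                 # stand-in thermal-contrast values
white_hot = flir_display(scene)
black_hot = flir_display(scene, black_hot=True)
```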

Techniques like the above have allowed the incredibly realistic computer-generated scenes that we see today. Both real and artificial worlds can be created, but for military training there is no substitute for imagery of the real world. Since the corners of each polygon are defined as three-dimensional (x, y, z) co-ordinates, the stored database is three-dimensional and can be called up to display scenes that can be viewed from any angle. Up-to-date photographic and mapping data can be semi-automatically transformed into computer-generated imagery, and in areas without direct or aerial photography, imagery and terrain data from satellites can be used. The results today are stunning. To the real-world visual scene can be added other images needed for training, such as vehicles, artillery, ships, aircraft, and personnel. It is also possible to combine virtual and real worlds in what is called mixed- or hybrid-reality, where physical and virtual objects co-exist and interact in real time. So in terms of generating scenes for visualisation, systems are now in place to produce whatever is required. The trick is how to display the imagery.
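
The following minimal sketch (Python, with made-up coordinates; it is an illustration of the principle, not any particular image generator) shows why a database of (x, y, z) polygon corners can be rendered from any chosen eye-point:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a simple view matrix for an arbitrary eye-point, so the same
    stored (x, y, z) database can be rendered from any angle."""
    eye, target, up = map(lambda v: np.asarray(v, float), (eye, target, up))
    f = target - eye; f /= np.linalg.norm(f)          # forward
    r = np.cross(f, up); r /= np.linalg.norm(r)       # right
    u = np.cross(r, f)                                # true up
    rot = np.stack([r, u, -f])                        # world -> camera axes
    view = np.eye(4)
    view[:3, :3] = rot
    view[:3, 3] = -rot @ eye
    return view

def project(points_xyz, view, focal=1.0):
    """Project world-space polygon corners into normalised image coordinates."""
    pts = np.c_[points_xyz, np.ones(len(points_xyz))] @ view.T
    return focal * pts[:, :2] / -pts[:, 2:3]          # simple pinhole divide

# Hypothetical polygon corners viewed from a hypothetical eye-point.
corners = np.array([[100.0, 50.0, 0.0], [110.0, 50.0, 0.0], [110.0, 60.0, 5.0]])
print(project(corners, look_at(eye=[0, 0, 300], target=[105, 55, 0])))
```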


State of the Art 3D Models for Real-Time Visual Systems

The demand for Serious Gaming-based visual system databases and photorealistic 3D content for professional training simulators requires a competent and reliable partner. RUAG Defence Simulation and Training, with its extensive knowledge in simulator construction, is now offering its know-how in creating 3D content and databases as a separate service and integrative solution for simulators.


Realistic, high-quality 3D real-time visualisation for training simulators has become an important element of significant value in professional training and education. The learning effect for trainees and their adaptation to the virtual environment can be improved, as they will not be distracted by any seemingly unrealistic influences. Integrating serious gaming technology into existing simulators does not necessarily require replacing the entire visual system. RUAG 3D content models can be cost-effectively integrated into many of the common 3D real-time visual systems, such as VEGA PRIME, VBS2, VR VANTAGE, or OpenSceneGraph.

RUAG Defence creates terrain databases covering all of the known terrain types, such as mountains, hillsides, flat areas, and urban or desert areas. Depending on the training purpose and situation, the database can be built either from an existing terrain or designed according to the customer’s requirements. The requirements and the size of a database can change significantly depending on whether the need is for a flight, driving, infantry or ATC simulator.

Static 3D content such as bridges, tunnels and streets, including traffic signs, as well as buildings and vegetation, is an integral part of the terrain databases. Any 3D content contained within the terrain database is optimised to guarantee real-time performance of the visual system.


Display Systems

There is enormous variation in how imagery can be displayed, and the rather illiterate saying, “you pays your money and makes your choice,” applies. The choice includes TV monitors, projected displays, head-mounted systems, and distant-focus or “collimated” displays. Projected displays vary from small areas using one projector, to partial and full domes with many projectors. There are also so-called Cave Automatic Virtual Environment (CAVE) displays, in which the subject is surrounded by large screens left, right, centre and above, giving close to total immersion in the visual scene. The subject or subjects stand and move in the CAVE and use trackers and sensors to manipulate the visual scene. In large-area displays, most projectors are used for terrain and objects on the terrain, but extra “target projectors” can be used for specific aircraft or ground targets. Both forward- and back-projection can be used, and a dome can have an array of between 10 and 20 projectors. An example of back-projection is the SimuSphere display system by Link USA, in which a pilot’s cockpit is surrounded by a number of flat “facets” on which outside-world imagery is back-projected. The field-of-view depends on the number of facets and can be up to 360°, the equivalent of a dome.


Systems like this work well for single-pilot simulators, such as for fighter aircraft, as the visual perspective can be optimised for a single pilot’s eye-point. However, where two crew are seated side-by-side in a simulator with a directly-projected display, the perspective of the scene cannot be correct for both. If the eye-point for the visual display is selected to give the correct perspective for one of the crew, the other crew member will see some objects in the scene at incorrect angles. For game-type simulation this may not matter, but if the simulator is being used for critical tasks such as landing an aircraft or tracking a target, such errors need to be corrected.
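
A rough back-of-the-envelope sketch (Python; the screen distance and seat offset are assumed values, not taken from any particular simulator) shows how large these angular errors can become on a flat, directly-projected display:

```python
import math

def perceived_bearing_error(true_bearing_deg, screen_dist_m, lateral_offset_m):
    """Angular error seen by a crew member seated off the design eye-point of a
    flat, directly-projected display. The scene is rendered for an eye-point at
    the origin; the second crew member sits lateral_offset_m to one side."""
    x = screen_dist_m * math.tan(math.radians(true_bearing_deg))   # where the object is drawn
    seen = math.degrees(math.atan2(x - lateral_offset_m, screen_dist_m))
    return seen - true_bearing_deg

# Assumed geometry: screen 3 m ahead, second pilot offset 0.6 m. An object
# drawn dead ahead appears roughly 11 degrees off to that pilot.
print(round(perceived_bearing_error(0.0, 3.0, 0.6), 1))
```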

To eliminate these errors in simulators for large transport aircraft and multi-crew helicopters, the Cross-Cockpit Collimated Display (CCCD) system was developed. Here, the two pilots view the outside-world imagery in a large curved mirror rather than on a screen. A screen carrying the outside-world imagery sits above the pilots’ compartment and its image is reflected in the mirror, which is what the pilots see. The screen and mirror are of wide horizontal extent, typically between 150 and 220 degrees. The secret that enables an undistorted view from both crew seats is that the mirror has a small vertical curvature, so that the image appears to the crew at a distant focus. The mirror may be 2 to 3m in front of the crew, but the perceived focus of the image can be 100m or more, depending on the amount of vertical curvature. This allows both crew to see the scene with the correct perspective, with distant objects at the correct angles from both pilots’ seats. The word “collimated” is used for such a display because the reflected light rays are close to parallel, implying an image at or near infinity focus. Mirror surfaces in CCC Displays use lightweight materials, such as Mylar instead of heavy glass, and must be rigid enough to be compatible with movement of the 6-axis motion system that is required in civil Full Flight Simulators (FFS). Military transport aircraft and multi-pilot helicopters usually use a similar simulator design, but because of their more complex roles compared with civil airliners, training on the aircraft backs up the FFS. With multi-pilot helicopter simulators, in addition to the main display there may also be lower “chin windows” that give a downward view for hovering. It is developments such as these, combined with high-resolution imagery and well-matched motion, that have allowed virtually all civil airline training worldwide to take place on an FFS rather than on the aircraft itself, with immense financial savings and less wear-and-tear on these expensive aircraft. The next time you are in an airliner, it is possible that the landing may be the pilot’s first on that type. However, he or she will have just completed intensive training on an FFS and will be supervised by a Training Captain in the other seat for several passenger flights before being allowed to carry on without supervision.
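
The distant-focus effect follows from simple mirror optics. The sketch below uses the standard concave-mirror equation with purely illustrative numbers; the radius of curvature and screen distance are assumptions for the example, not figures from the article or any specific CCCD design:

```python
def perceived_image_distance(radius_of_curvature_m, screen_dist_m):
    """Concave-mirror equation 1/f = 1/d_o + 1/d_i with f = R/2. Placing the
    screen (the 'object') just inside the focal plane gives a large negative
    d_i, i.e. a virtual image far behind the mirror: the distant-focus effect
    a collimated display relies on."""
    f = radius_of_curvature_m / 2.0
    d_i = 1.0 / (1.0 / f - 1.0 / screen_dist_m)
    return abs(d_i)   # perceived distance of the virtual image

# Illustrative numbers only: a mirror with a 4.0 m radius of curvature
# (f = 2.0 m) and a screen 1.96 m from it puts the perceived image roughly
# 98 m away; the closer the screen sits to the focal plane, the further out
# the image appears.
print(round(perceived_image_distance(4.0, 1.96), 1))
```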


Augmented Reality Update – A Significant Impact in the Battlespace

Augmented Reality (AR) involves projecting computer-generated information onto the user's view of the real world. The past year has witnessed a number of significant developments in the military's exploration of the technology, both for training and operational uses.

AR has been routinely used in military aircraft for decades in Head-Up Displays (HUDs) and more recently in Helmet-Mounted Displays (HMDs). From weapon-aiming symbols to artificial horizons to velocity-vector bugs and obstacle warnings, computer-generated information overlaid on the real world is a proven means of making military aircraft safer and more effective in combat. Inevitably, AR is moving into ships and ground vehicles, while powerful handheld and wearable computers combined with innovations in lightweight wearable displays are making inroads into the world of the dismounted soldier, which can be the most stressful, confusing and lethal of all. In all cases, both operational and training-focused systems are under development, and in all of these environments the AR system must enhance situational awareness (SA) and avoid information overload.
AR's effectiveness depends on many factors, including the timeliness of the information; the robustness of the communications networks that provide input from multiple external sources; clear, comprehensible symbols accurately registered with the real-world view; zero or near-zero latency; accurate tracking of the user's movements; and the quality of the display.


Training Maintainers, JTACs

The US Navy is continuing to evaluate the use of AR for training and assisting maintainers aboard ship through an initiative led by Lt. Josh Steinman that secured US$100,000 from the Chief of Naval Operations Rapid Innovation Cell (CRIC). The team used Google Glass AR glasses and developed smartphone applications for equipment maintenance that incorporated manuals and videos of real maintenance procedures. The system is not tied to the Google hardware, which is being discontinued.

Due to complete later this year, the five-year Augmented Immersive Team Trainer (AITT) programme conducted under the auspices of the US Office of Naval Research (ONR) is intended to provide a “live simulated” training tool for ground-based fire support teams such as artillery observers, Joint Terminal Attack Controllers (JTAC) and Forward Air Controllers (FAC): a system that can turn any environment into a training range. AITT took a step closer to fruition on 21 May, when Marines used it on a Quantico golf course on which only they could see computer-generated tanks, mortar fire and battlefield smoke.

Tracking users' head movements is more challenging in outdoor environments, particularly ones without pre-surveyed datum points. According to ONR, advanced software algorithms and multiple sensors enable AITT to determine the user's viewpoint accurately, while virtual aircraft, targets and munitions effects are inserted into the real view via the head-worn display. An enhanced instructor station drives training content, while performance assessment, scenario generation and scenario adaptation strategies are rooted in the latest scientific research, says ONR.

Combined inputs from video cameras, inertial measurement units, GPS receivers, magnetometers and air pressure sensors track the user's head movements. Virtual elements are then added to the real-world scene viewed through the headset and through simulated tactical equipment including binoculars and the Vector 21B laser rangefinder.
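
As a hedged illustration of this kind of sensor fusion, the sketch below blends a gyro-derived heading with a magnetometer heading using a simple complementary filter; the numbers are invented, and a real system such as AITT would fuse many more sources (camera, GPS, barometer), typically with a Kalman-type filter:

```python
def fuse_heading(prev_heading_deg, gyro_rate_dps, mag_heading_deg, dt_s, alpha=0.98):
    """Minimal complementary-filter sketch: the gyro (IMU) supplies smooth
    short-term motion, the magnetometer corrects its long-term drift.
    alpha weights the gyro path."""
    gyro_estimate = prev_heading_deg + gyro_rate_dps * dt_s
    # wrap the magnetometer correction onto the nearest equivalent angle
    error = ((mag_heading_deg - gyro_estimate + 180.0) % 360.0) - 180.0
    return (gyro_estimate + (1.0 - alpha) * error) % 360.0

# A stationary user facing 092 deg, with a gyro that drifts by 0.5 deg/s:
heading = 92.0
for _ in range(1000):                       # 10 s of updates at 100 Hz
    heading = fuse_heading(heading, gyro_rate_dps=0.5,
                           mag_heading_deg=92.0, dt_s=0.01)
print(round(heading, 2))                    # stays close to 92 despite the drift
```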

A large-scale demonstration at Quantico late this year is set to bring the programme to an end so that it can make the transition to the Marine Corps Program Manager for Training Systems for further testing and development.


BAE Systems, Augmenti Team for Vehicle AR

There is a more operational focus to an agreement announced on 15 February by BAE Systems Hägglunds and Norwegian AR specialist Augmenti. Their letter of intent (LoI) covers technical cooperation on the development and implementation of AR in the CV90 and BvS10 vehicles, for upgrades to in-service platforms and implementation in new production vehicles for future projects. The LoI also covers the development and implementation of AR in a future Intelligent SA System (ISAS) for combat vehicles.

ISAS is to provide vehicle crews with better all-round vision by night and day, enhance their SA and combat effectiveness through the integration of information overlays, improve platform survivability and reduce crew workload.

The proposed ISAS solution includes multiple HD video and IR cameras with overlapping fields of view positioned around the vehicle, an HMD each for the vehicle commander and driver, and peripheral devices such as tablets for the rest of the crew showing camera imagery and AR overlays. According to Michael Karlsson, an AR researcher at Sweden's Umeå University, ISAS represents a particularly demanding kind of solution, as the system has to track the commander's and driver's head movements independently, present the appropriate sections of the camera imagery to their HMDs and present appropriate, geo-registered symbology to each crew member, who are likely to have different priorities.

Augmenti is building a track record in military AR, having integrated it into Kongsberg's PROTECTOR Nordic, a variant of the market-leading RWS developed for the Norwegian and Swedish armed forces. Video of a test conducted in March shows a laser rangefinder used to point out targets that are then sent to a BMS for intelligence about them to be added, after which AR symbols appear on the RWS operator's screen. The symbols used are NATO-standard ones, first for an unknown contact and then for hostile infantry.

Hardware-Agnostic ARC4

Applied Research Associates (ARA) seems to have a well-developed product in its ARC4 software and is looking to partner with see-through display and mobile computing manufacturers to develop the system for a variety of military, government and commercial applications. ARC4 emerged from DARPA's Ultra-Vis programme, an effort to develop head-up AR for dismounted soldiers on which ARA was prime contractor, and has been tested with BAE Systems' Q-WARRIOR optical waveguide display and devices from Lumus and Vuzix as well as the Exelis (now Harris) Enhanced Night Vision Goggle (ENVG).

The core of the visual interface is a ring set low in the field of view that shows the user his or her position and heading and the relative positions of surrounding objects of interest. Additionally, those objects have icons overlaid on their real-world positions, icons that the soldier can interrogate for further information by looking at them. He or she can also add markers to new objects, which can then be shared with team mates' ARC4 displays over the network.
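
As an illustration of the geometry behind such a heading ring, the sketch below computes where an object of interest should sit relative to the user's heading; the positions and the flat-earth approximation are assumptions for the example, not details of ARC4 itself:

```python
import math

def relative_bearing(user_lat, user_lon, user_heading_deg, obj_lat, obj_lon):
    """Where an object of interest sits on a heading ring of this kind:
    0 = dead ahead, positive = to the right. Flat-earth approximation,
    adequate only for short tactical ranges."""
    d_north = (obj_lat - user_lat) * 111_320.0                      # metres per degree of latitude
    d_east = (obj_lon - user_lon) * 111_320.0 * math.cos(math.radians(user_lat))
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0     # true bearing to the object
    return ((bearing - user_heading_deg + 180.0) % 360.0) - 180.0   # relative, -180..+180

# Hypothetical positions: an object about 1 km north-east of a user facing north
print(round(relative_bearing(51.000, -1.000, 0.0, 51.006, -0.990), 1))   # ~46 deg right
```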

With the kudos of a successful DARPA programme behind it, slick performance and the ability to work with any mobile computing and display platforms, ARC4 seems set to have a significant impact in the land warfare domain.

With progress in computing, stored databases can now be very large, the appropriate element being called up for display at any one time and then returned to store when the trainees’ eye-point has moved on. Where large-area imagery has to be produced, such as for fast-jet flight simulators, a technique called “level of detail scheduling” is used to avoid processing unnecessary data for display. Here, distant objects and terrain are deliberately extracted from store at low resolution and displayed as such, the resolution automatically increasing as objects and terrain get closer to the subject. If this is done properly, the differing levels of detail in the overall scene are not discernible to the user, but with simpler systems some discontinuities such as “feature popping” can occur, where features suddenly appear in the scene as they get closer instead of growing gradually in size.
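
A toy sketch of the level-of-detail idea (Python; the distance thresholds are invented purely for illustration) might look like this:

```python
def select_lod(distance_m, thresholds_m=(500.0, 2000.0, 8000.0)):
    """Toy level-of-detail schedule: nearby terrain tiles are drawn at full
    resolution, distant ones at progressively coarser levels so they need
    not be processed in detail."""
    for lod, limit in enumerate(thresholds_m):
        if distance_m < limit:
            return lod          # 0 = highest detail
    return len(thresholds_m)    # beyond the last threshold: lowest detail

# As the eye-point closes on a feature its LOD index drops (detail rises);
# blending between levels is what avoids visible "feature popping".
for d in (12000.0, 6000.0, 1500.0, 300.0):
    print(d, "->", select_lod(d))
```
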
Because the stored data is three-dimensional, each polygon corner being plotted as (x, y, z) co-ordinates, it can be exploited in a number of ways. In a simulator the crew is presented with imagery from an “eye point” from which the scene is displayed with the correct perspective. However, at the instructor operating station (IOS) or exercise control (ExCon), different eye-points can be selected during the exercise and afterwards for debrief. The view from more than one eye-point can be shown, such as those from the various entities in the exercise. A so-called “God’s Eye View” is where the whole database is viewed from above, so that instructors and umpires can visualise tactical activities as they develop and introduce opposing forces, electronic warfare and so forth as required. Exercises are not limited to one site; wide-area network (WAN) links can be used over many thousands of kilometres, and multi-Service and multi-national exercises can be carried out after appropriate preparation. Visual presentations at ExCon can include video from crew stations, maps, montages of the tactical situation, in fact anything that might be required both during and after the exercise for analysis and debrief.

The versatility of computer-generated imagery is truly amazing. In fighter aircraft simulators, when the simulator computer senses that the pilot is pulling high G, the field of view of the outside-world display can be automatically contracted to show the "tunnel vision" that, in the real world, precedes the highly dangerous condition known as G-induced loss of consciousness (G-LOC), so that fighter pilots can be prepared for it beforehand.
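
A simple illustrative mapping from sensed G-load to a contracted field of view might look like the sketch below; the onset and blackout thresholds are assumptions for the example, not physiological data or figures from any real simulator:

```python
def tunnel_fov(g_load, full_fov_deg=120.0, onset_g=4.0, blackout_g=9.0):
    """Illustrative mapping from sensed G-load to a contracted field of view,
    simulating the tunnel vision that precedes G-LOC. All thresholds here
    are assumed values."""
    if g_load <= onset_g:
        return full_fov_deg
    fraction = max(0.0, 1.0 - (g_load - onset_g) / (blackout_g - onset_g))
    return full_fov_deg * fraction

for g in (1.0, 5.0, 7.0, 8.5):
    print(g, "G ->", round(tunnel_fov(g), 1), "deg")
```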

So, visualisation systems used in modern simulators are very effective, and in the military there has been a major change in attitude to training by simulation. For example, in the last UK defence review one overall aim was to achieve roughly a 50:50 balance between training by simulation and training using the real equipment. This 50:50 figure is not untypical of training today in many areas of the military, particularly in aviation.

3D Visualisation Systems

Geoscience visualisation is a fast-growing area, and visualisation systems enable geoscientists to communicate with each other and with end users from diverse disciplines to better understand complex and varied datasets.

3D visualisation provides a mechanism for communication. For example, the British Geological Survey (BGS) uses the latest software and hardware to visualise geoscience data in 3D and provide a mechanism for effective communication of BGS science. By using dedicated 3D visualisation facilities to run software such as GeoVisionary, geological understanding and risk/confidence are more easily conveyed. The 3DVS team has been involved in a number of high-profile projects, ranging from communicating the geological confidence of storing radioactive waste to visualising shale gas/oil rocks and their proximity to aquifers.

BGS has developed GeoVisionary in partnership with Virtalis. GeoVisionary is a geoscientific information system for visualisation and interpretation of geoscience datasets in a virtual reality environment.

GeoVisionary is software offering the ability to visualise all possible elements together comprehensively in a single, immersive 3D stereoscopic environment, as well as on desktop PCs and laptops. GeoVisionary's powerful graphics rendering engine gives seamless, real-time access to the entire data resource. BGS has created an add-in for ArcGIS that links the GIS with GeoVisionary, connecting traditional GIS with the 3D virtual landscape. GeoVisionary also provides simultaneous high-resolution 3D visualisation of city models and geoscientific models.

By using Virtalis MaxExchange software (a plug-in for Autodesk 3DS Max), CAD models can be easily imported into GeoVisionary.

For added realism in GeoVisionary projects, BGS can incorporate simple animations created in 3DS Max, such as flying aircraft or vehicles moving along roads.

Visualisation Systems in the Real World

Turning to real military hardware, visualisation systems are now providing vital additional information. For instance, imagery on an aircraft Head-Up Display (HUD) is a form of visualisation, where basic features such as attitude, airspeed and altitude can be supplemented with weapon state and target data, including the optimum flight path to engage a target or evade a threat. First-generation symbology was basic, but now almost anything can be added, including outside-world pictures such as those from night vision devices. Night low flying can now take place without the need for complex and expensive terrain-following radar (TFR), which may give away the aircraft's position to an enemy. Miniaturisation has led to Helmet-Mounted Display (HMD) systems that can enable the helmet of an aircraft pilot, tank commander, or soldier to be just as capable as a separate display unit, the display remaining visible when the user scans left, right, and upwards.

Symbology and imagery can match the role, and the challenge is to filter information so that what is displayed is relevant to the task rather than saturating the display with non-essential data. As the phase of an operation changes, the displayed data can be changed, then changed again, to what is relevant. The BAE Systems STRIKER series of HMDs are examples, fitted to some Eurofighter TYPHOONs and Saab GRIPENs. The STRIKER II includes night vision cameras so that separate NVGs do not have to be worn. The challenge with HMD systems is to lower weight and rotational inertia, to reduce loads on the pilot’s neck under high G loadings or when scanning rapidly. BAE Systems is currently working with the University of Birmingham in the UK to develop lightweight systems. Certainly, some sort of light eyeglasses may be possible, and some people are even forecasting contact lenses with an imaging capability.
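
One simple way such phase-dependent filtering could be expressed in software is sketched below; the phases and symbol categories are invented for illustration and are not taken from any real HMD symbology set:

```python
# Hypothetical phase-to-symbology mapping, purely illustrative.
PHASE_SYMBOLS = {
    "transit": {"attitude", "airspeed", "altitude", "navigation"},
    "ingress": {"attitude", "airspeed", "altitude", "threats", "terrain"},
    "attack":  {"attitude", "weapon_state", "target", "threats"},
    "egress":  {"attitude", "airspeed", "fuel", "threats", "navigation"},
}

def declutter(all_symbols, phase):
    """Keep only the symbology relevant to the current phase of the operation,
    the filtering idea described above."""
    wanted = PHASE_SYMBOLS.get(phase, set())
    return {name: data for name, data in all_symbols.items() if name in wanted}

symbols = {"attitude": "...", "weapon_state": "...", "fuel": "...", "target": "..."}
print(sorted(declutter(symbols, "attack")))   # ['attitude', 'target', 'weapon_state']
```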

A major problem in aircraft operation is returning to base to find conditions of low cloud, poor visibility, or both. To a certain extent FLIR will help, because a sensor working in the far-IR at wavelengths near 10µm can penetrate poor visibility and even a small amount of cloud. However, a much better picture can be produced for pilots using GPS position combined with stored synthetic imagery of the local terrain that can be called up when required and matched to the GPS latitude and longitude as it changes with time. Accurate aircraft altitude is required for such a system to be safe, and a combination of pressure and GPS altitude with a good model of local terrain and obstructions can provide this data. Synthetic imagery can be displayed either on an HUD or on an HMD. Such systems have been trialled, but there are obvious problems in certification for live use for landings, particularly in the Commercial Air Transport sector. GPS co-ordinates used must be as accurate as possible, so a Satellite-Based Augmentation System (SBAS) for the area should be used, such as BeiDou 1 (China), EGNOS (Europe), GAGAN (India), MSAS and QZSS (Japan), and WAAS (North America). Clearly such synthetic visualisation systems can be used for an aircraft approach down to altitudes of 100, even 50m, but what if the real runway still does not appear through the synthetic picture? Will they ever be cleared for the landing itself, or for taxying in fog? With such a system, taxying is likely to be more hazardous than landing, because Air Traffic Control will (probably) be able to guarantee that the landing runway is clear of other aircraft, but taxiways and dispersal areas are another matter entirely!
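
As a purely illustrative sketch (not a certified or real avionics scheme), combining the two altitude sources and checking clearance against a stored terrain model might look like this; the weighting, altitudes and terrain elevation are all assumed values:

```python
def blended_altitude(baro_alt_m, gps_alt_m, gps_weight=0.05):
    """One simple, illustrative way of combining smooth but drift-prone
    pressure altitude with noisier but absolute GPS/SBAS altitude: let the
    GPS reading slowly pull the baro value onto the right datum."""
    return (1.0 - gps_weight) * baro_alt_m + gps_weight * gps_alt_m

def terrain_clearance(blended_alt_m, terrain_elevation_m):
    """Height above the stored terrain model at the current GPS position."""
    return blended_alt_m - terrain_elevation_m

# Illustrative values only: baro reads 520 m, GPS/SBAS says 545 m,
# the terrain database gives 430 m elevation at this latitude/longitude.
alt = blended_altitude(520.0, 545.0)
print(round(alt, 1), round(terrain_clearance(alt, 430.0), 1))
```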

This update of visualisation systems has covered some of their technology and current applications. CGI is now available with nearly real-world resolution, but that is only the start. The question is not so much the imagery, but how it is displayed. Displays vary from simple TV screens, through forward- and back-projected displays, to those at a distant focus. The distant-focus systems apply to large and capable FFS and, at a smaller size, to aircraft HUDs and HMDs. Then there is the use of visualisation systems in real vehicles rather than in simulators, such as for night vision, targeting, or as a landing aid in poor visibility. Overall, modern visualisation systems are in widespread use, are very capable, and contribute in a major way to both civil and military training.

Ian W. Strachan is an expert on simulation and training and a regular contributor to MT.
Peter Donaldson, with 25 years of experience as a journalist and writer covering aerospace and defence technology and operations, is a regular contributor to MT. 
