By James D. Miller, The Johns Hopkins University Applied Physics Laboratory, Submarine Warfare Program Area

Recent technology advances in high-resolution displays, motion sensing, and compact computing have changed the way people interact with computers. Immersive environments can now be delivered inexpensively to anyone who owns a smartphone, and for the small additional cost of a head-mounted display, that immersive presentation can be taken to the next level. This class of immersive computing technology is referred to as Mixed Reality (MR).

Mixed Reality covers a spectrum of technologies that have been maturing rapidly over the last decade. The MR continuum spans from the physical (real) world to the fully virtual and includes Augmented Reality (AR) and Virtual Reality (VR). MR adds computer-generated objects and environments, to varying degrees, to enhance a user's knowledge or understanding and to enable interactive behaviors within the MR experience. The Department of Defense has long invested in technologies that enable the range of MR experiences. Initial investment produced lab-based prototypes that supported expensive training and held the promise of future operational use. With the commercialization of key components significantly decreasing cost, however, this R&D investment is on the verge of bearing real operational fruit.

Virtual Reality (VR) is a fully immersive synthetic 3D environment in which users can explore and interact with simulated entities. VR offers key advantages in military applications. VR experiences are impactful and memorable for end users, making them excellent opportunities for training and mission rehearsal/exercise. VR can take people to places that are difficult to access because of cost or physical travel limitations, and it imposes fewer resource constraints than real-world exercises. Its synthetic worlds provide a safe, analytic environment for testing relationships or procedural interactions. In short, VR provides very high fidelity worlds that are immersive, impactful, cost-effective, accessible, and safe to use.

Technology continues to advance the state of the art; however, several limitations stand between VR and mainstream use, even in applications for which it is well suited. Not all human senses are fully immersed: tactile feedback is not typically available and is challenging to integrate, and for some applications, like damage control response, smell is also critical yet not fully integrated into VR applications. Mobility is limited by the need to be tethered to hardware that can support high-end graphics processing. It is also difficult to fully suspend disbelief when a user's hands and body are not natively represented in the VR world. These last two limitations are being tackled and may soon be overcome by programs like Intel's Project Alloy. And while the technology continues to advance, its impact on people and the effects of extended use are not yet fully tested. Finally, one of the biggest challenges to deploying the technology is the burden of developing quality 3D content.

Another immersive technology, Augmented Reality (AR), retains the physical world and blends new information into the user's field of view. One of the first commercial applications of AR technology was the yellow first-down line in televised football games. AR affords much more mobility than fully immersive VR and does not always require a head-mounted display; many AR applications run on mobile devices like smartphones and tablets. AR can present information quickly and can highlight or discriminate among objects in the real world, which makes it an excellent fit for navigation applications. AR also allows a degree of tactile feedback, since it can work with real-world objects. Additionally, AR generally does not require the visual fidelity of VR to recreate the environment and thus needs less graphics processing to be effective.

Like VR, AR is not without its own limitations and drawbacks. Many AR devices are less capable graphically, resulting in lower-fidelity, lower-quality 3D imaging, and AR mobile devices are more limited in processing power and battery life. AR devices sometimes have difficulty synchronizing blended objects with the real world: they rely on multiple sensor inputs (e.g., motion, range, and light) to register content correctly, and distortion in any of them degrades the display. AR applications are also prone to oversaturation or information overload, which can distract the user from core objectives, so a user-centered design approach is key to successful implementation. As with VR, the impact on the user is not yet fully researched and tested, and the biggest hurdle remains the ability to develop and map quality content.
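To make the registration challenge concrete, the Python sketch below shows one common fusion approach, a complementary filter that combines a gyroscope (fast but drifting) with an accelerometer (noisy but drift-free) into a single tilt estimate; residual error in that estimate is what the user perceives as blended objects sliding off their real-world anchors. The filter gain and sensor values are illustrative assumptions, not taken from any particular AR device.

    def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
        """Fuse a gyroscope rate with an accelerometer tilt estimate.
        AR overlays are drawn relative to this estimate, so error here
        shows up as blended objects drifting off their anchors."""
        gyro_angle = angle + gyro_rate * dt          # integrate angular rate
        return alpha * gyro_angle + (1 - alpha) * accel_angle

    # Head held at a 10-degree tilt; the gyro reports only its 0.1 deg/s bias.
    angle = 0.0
    for _ in range(500):
        angle = complementary_filter(angle, gyro_rate=0.1, accel_angle=10.0, dt=0.01)
    print(f"estimated tilt: {angle:.2f} deg")  # settles near 10 despite gyro drift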

The ability to enhance the physical world and simulate real-world effects lends MR to things that are seldom practiced in the real world, things that are complicated and hard to understand, novel experiences that are difficult to practice (AR for maintenance), and things that require suspension of disbelief to be effective (VR for fire drills). MR has proven effective in improving human performance through immersive training, realistic mission rehearsal, and enhanced information presentation.

Many years of MR research and development have shown great promise in improving mission performance. Research has demonstrated the effectiveness of AR and VR in applications directly related to submarine operations. Education studies have found that learning time decreases with virtual simulations and that AR is an effective educational medium. These studies found that AR allowed mechanics to locate tasks more quickly and, in some instances, with less overall head movement than when using current maintenance aid systems or an enhanced version of the system currently used by U.S. Marine Corps mechanics. Applying MR capabilities to future submarine fleet applications has the potential to improve operator effectiveness, situational awareness, training, and mission rehearsals.

Columbia University's Augmented Reality for Maintenance and Repair (ARMAR)

Operator Effectiveness

AR enables overlaying information on a user's environment according to that user's access level. This can enhance team collaboration and communication by enabling teams with different levels of security clearance to be co-located, reducing information-flow bottlenecks. Additionally, as seen in maintenance applications, AR allows overlay of task steps that are complicated, rarely performed, and difficult to recall.
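A minimal sketch of how such access-based filtering might work is shown below; the clearance ordering, the Annotation class, and the visible_annotations function are hypothetical illustrations, not drawn from any fielded system.

    from dataclasses import dataclass

    # Hypothetical clearance ordering, lowest to highest.
    CLEARANCE_ORDER = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2}

    @dataclass
    class Annotation:
        """One AR overlay element anchored in the shared environment."""
        label: str
        position: tuple          # (x, y, z) anchor in the world frame
        classification: str

    def visible_annotations(annotations, user_clearance):
        """Return only the overlay elements this user is cleared to see."""
        level = CLEARANCE_ORDER[user_clearance]
        return [a for a in annotations
                if CLEARANCE_ORDER[a.classification] <= level]

    # Two co-located users receive different overlays from the same feed.
    feed = [
        Annotation("Valve V-12: next maintenance step", (1.0, 0.3, 2.1), "UNCLASSIFIED"),
        Annotation("Contact bearing 045", (4.0, 1.2, 0.5), "SECRET"),
    ]
    print([a.label for a in visible_annotations(feed, "UNCLASSIFIED")])
    print([a.label for a in visible_annotations(feed, "SECRET")])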

VR allows immersive collaborative environments for individuals who are not co-located. A shared environment extends the Common Operating Picture concept and would allow collaborative interaction with the environment: when one individual interacts with an entity in the environment, the other participants experience the change as well.
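That shared-state behavior can be pictured as a simple publish/subscribe pattern, sketched below; the SharedEnvironment class and its methods are illustrative and not part of any specific VR framework.

    class SharedEnvironment:
        """Toy model of a replicated VR scene: all participants observe the
        same entity state, so one user's change reaches every other user."""

        def __init__(self):
            self.entities = {}       # entity_id -> state dict
            self.participants = []   # one update callback per connected user

        def join(self, on_update):
            self.participants.append(on_update)

        def interact(self, entity_id, new_state):
            # One participant moves or modifies an entity...
            self.entities[entity_id] = new_state
            # ...and the change is broadcast to every participant.
            for notify in self.participants:
                notify(entity_id, new_state)

    # Two analysts in different locations share one environment.
    env = SharedEnvironment()
    env.join(lambda eid, state: print(f"User A sees {eid} -> {state}"))
    env.join(lambda eid, state: print(f"User B sees {eid} -> {state}"))
    env.interact("contact-7", {"position": (10.0, 2.0, -3.5), "flagged": True})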

Situational Awareness (SA)

Extensive design and development have gone into submarine and submersible SA displays, both for post-mission analysis and for real-time tactical decision aids. Although some 3D technologies are beginning to be used, most of these displays are still 2D. As fleet submarines are required to conduct more operations in concert with other manned and unmanned systems, the opportunity to use VR and AR in multi-source intelligence and other SA tools will improve operational effectiveness. VR and AR have the potential to improve SA across the operational functions of a submarine and other platforms. One example of using AR in operations for enhanced SA is the Navy's Divers Augmented Vision Display (DAVD) research program. DAVD is a high-resolution, see-through, head-up display embedded directly inside a diving helmet. This NSWC Panama City-developed prototype can provide divers with a real-time visual display of sonar, text messages, diagrams, photographs, and video.

VR hardware development is much further along than the research into new ways of presenting SA data on these devices. The cost-effectiveness of emerging hardware allows for more research into the best ways to leverage AR and VR technologies to support SA in submarine operations.

Lt. Jeff Kee explores the Office of Naval Research (ONR)-sponsored Battlespace Exploitation
of Mixed Reality (BEMR) lab located at Space and Naval Warfare Systems Center Pacific.
BEMR is designed to showcase and demonstrate cutting-edge, low-cost commercial mixed
reality, virtual reality, and augmented reality technologies and to provide a facility where
warfighters, researchers, government, industry, and academia can collaborate.

Training and Mission Rehearsal

Research in learning demonstrates that the ideal learning medium depends on the learning objective. For example, you might want supplementary information to hover over a physical system component; AR is well suited for this application. The requirement may instead be to practice a given physical task to a specified level of proficiency; that kind of training suits VR, where the task can be simulated in an immersive environment in which risk is present but actual missteps pose no real danger. A current Navy example is the Virtual Environment for Submarine Shiphandling Trainer (VESUB). VESUB is a VR-based computer system using virtual-environment and head-mounted display technology. The trainer provides the Officer of the Deck (OOD) trainee with individual instruction in the knowledge and skills necessary to pilot and maneuver a surfaced submarine safely through restricted waterways, avoiding collisions and groundings.

All MR technologies are potentially useful for training and mission rehearsal; the issue is determining where to apply them. The commoditization of these technologies offers the Navy an opportunity to match the right training environment (AR, VR, or a blend) to the training objectives, facilitating development and deployment of effective training environments that leverage the best of all the technologies.
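One way to picture that matching is as a rules-of-thumb lookup from learning objective to medium, as in the sketch below; the objective categories and recommendations are illustrative assumptions, not official selection guidance.

    # Illustrative rules of thumb for matching a training medium to a
    # learning objective; hypothetical examples, not official guidance.
    MEDIUM_BY_OBJECTIVE = {
        "recall_procedure_on_real_equipment": "AR",  # overlay steps on the physical system
        "practice_high_risk_physical_task":   "VR",  # immerse without real-world danger
        "rehearse_full_mission_as_a_team":    "VR",  # shared synthetic environment
        "reference_data_while_hands_busy":    "AR",  # heads-up, mobile presentation
    }

    def recommend_medium(objective: str) -> str:
        """Return a recommended training medium for a learning objective."""
        return MEDIUM_BY_OBJECTIVE.get(objective, "conventional/classroom")

    print(recommend_medium("practice_high_risk_physical_task"))  # -> VR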

These technologies are mature and, when coupled with physics-based modeling and simulation capability, have proven to be a very effective way of delivering training for a modest investment. MR-based training is software-intensive, which allows new training to be rolled out quickly. An incremental, agile strategy of focusing on the most critical (or most deficient) learning objectives has worked well for these systems. This approach is also synergistic with existing but more expensive training assets (e.g., hardware trainers and real platforms): VR cost-effectively reaches a large audience and prepares students to get the most out of their time in expensive or unique training assets.

VR technologies have the potential to improve the suspension of disbelief required for engaging training products. Several VR trainers already exist; however, additional research is needed to determine how these technologies could best be applied to training in the naval special operations community. VR training is far more feasible now, given the significant reduction in hardware cost. Experiential learning is known to improve skill acquisition, and these environments enable low-risk experiences that may be applicable to operations. VR would provide an opportunity to rapidly and inexpensively research, develop, and test training technologies that can be deployed across the service, supporting skill retention and career progression. Sailors could take tests in a low-risk, distributed learning environment that provides experiences applicable to high-risk operational activities, enabling a research initiative aimed at transitioning to operational training.

The Navy and the submarine community recognize that MR has great potential to impact operator effectiveness and mission readiness. The Office of Naval Research, in collaboration with Space and Naval Warfare Systems Center (SPAWAR) in San Diego, has created the Battlespace Exploitation of Mixed Reality (BEMR) Lab to showcase cutting-edge technology for warfighters, researchers, government, industry, and academia. This Navy research partnership expanded recently when Rear Adm. Frederick "Fritz" J. Roegge, commander of Submarine Force, U.S. Pacific Fleet (COMSUBPAC), officially opened the COMSUBPAC Innovation Lab (iLab) on November 7, 2016. In addition to SPAWAR, the iLab partners with the Naval Sea Systems Command (NAVSEA) New Training Technologies Program Office. The facility allows Submariners to rapidly prototype with commercial visualization technologies.

MR capabilities have reached a technological point where they can significantly impact operations. Near-term and longer-term research into operational effectiveness, and into how human performance changes as a function of the technology, is still needed to achieve that full potential.

References
Morie, J. F. (2006). Virtual reality, immersion, and the unforgettable experience. Stereoscopic Displays and Virtual Reality Systems XIII, Proc. SPIE 6055, 60551X. doi:10.1117/12.660290

Haque, S., & Srinivasan, S. (2006). A meta-analysis of the training effectiveness of virtual reality surgical simulators. IEEE Transactions on Information Technology in Biomedicine, 10(1), 51-58.

Bacca, J., Baldiris, S., Fabregat, R., Graf, S., & Kinshuk. (2014). Augmented Reality Trends in Education: A Systematic Review of Research and Applications. Journal of Educational Technology & Society, 17(4), 133-149.

Henderson, S. J., & Feiner, S. (2011). Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Transactions on Visualization and Computer Graphics, 17(10), 1355-1368.