by Lt. Andrew Tresansky, USN and Dr. David Segala



SSNs are most stealthy and least detectable when operating highly independently and without significant off-hull communication. However, the independence of these operations requires a highly capable and intelligent crew. As UUVs start supplementing the SSN force, increased UUV independence will be pivotal. Improved UUV ability for online decision making in complex environments will drastically reduce the need for communications and therefore increase UUV independence.

We characterize both SSN and UUV intelligence with an OODA (Observe, Orient, Decide, Act)-loop style framework of perception, decision making, and action. Perception is the process by which data are collected and turned into information. Decision making is the use of information to determine the best available action based on a set of constraints. Action is the projection of the decision into the world external to the vehicle. A vehicle's independence is determined by how much of the perception, decision making, and action loop must be accomplished by an external actor. These functions apply to both near-term employment (e.g., UUV path planning or the on-watch members of an SSN crew) and far-term employment (e.g., UUV mission planning or the off-watch operations planning on an SSN). How intelligent an SSN or UUV is determines how independently it can operate.
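To make the framework concrete, the sketch below shows one way a perceive-decide-act loop could be organized in software. The function names, sensor fields, goals, and actions are illustrative assumptions, not a description of any fielded SSN or UUV system.

```python
# Minimal sketch of a perceive-decide-act loop over a notional sensor frame.
# All names, thresholds, and actions are illustrative assumptions only.

def perceive(frame):
    """Turn raw sensor data into information: is there a contact, and where?"""
    detected = frame["level_db"] > frame["threshold_db"]
    return {"bearing": frame["bearing"]} if detected else None

def decide(contact, goals):
    """Use information to pick the best available action under the stated goals."""
    if contact and contact["bearing"] in goals["avoid_bearings"]:
        return "maneuver_to_open_range"
    if contact:
        return "continue_tracking"
    return "continue_transit"

def act(order):
    """Project the decision into the external world (here, just report it)."""
    print(f"ordered action: {order}")

# One pass through the loop on a notional acoustic frame.
goals = {"avoid_bearings": range(170, 191)}                    # notional constraint
frame = {"level_db": 62.0, "threshold_db": 55.0, "bearing": 180}
act(decide(perceive(frame), goals))                            # -> maneuver_to_open_range
```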

Perception
Neither manned nor unmanned platforms can rely on a single sensor or sensor array to provide a complete picture of the environment. Various arrays and sensors provide different data (both acoustic and electromagnetic), which are used in concert to build the most accurate and cohesive understanding of the environment.
On an SSN, perception occurs as the contact management party takes raw sensor data displayed on a console and turns it into usable information for the Officer of the Deck (OOD). On a UUV, perception is achieved through three functions: detection, localization, and classification.1 Detection is the process of deciding that an event has occurred, in this case that acoustic levels have exceeded a threshold value. Localization is the process of refining bearing and bracketing range, while classification is the process of matching the acoustic characteristics of the source against reference characteristics to determine the source’s type. On the UUV, each of these functions uses advanced signal processing techniques and algorithms instead of the human-machine mix used on the SSN.
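As a rough illustration of the detection step only, the toy sketch below flags time windows whose broadband energy exceeds a threshold. The sample rate, threshold, and synthetic data are invented for illustration; real sonar detection chains are far more sophisticated than this.

```python
import numpy as np

def energy_detector(samples, fs, threshold_db, window_s=1.0):
    """Flag windows whose mean-square level exceeds a threshold (dB, arbitrary reference).

    A toy broadband energy detector used only to illustrate threshold detection.
    """
    window = int(window_s * fs)
    detections = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        level_db = 10.0 * np.log10(np.mean(chunk ** 2) + 1e-12)
        if level_db > threshold_db:
            detections.append((start / fs, level_db))
    return detections

# Synthetic example: quiet background noise with a louder burst in the middle.
rng = np.random.default_rng(1)
fs = 1000
signal = rng.normal(0.0, 0.1, 10 * fs)
signal[4 * fs:6 * fs] += rng.normal(0.0, 1.0, 2 * fs)   # notional loud source
print(energy_detector(signal, fs, threshold_db=-10.0))   # detections near t = 4 s and 5 s
```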

Decision Making
Intelligence is defined as the ability of a system to act appropriately in an uncertain environment where an appropriate action is one that increases the probability of success, and success is the achievement of behavioral subgoals that support the system’s ultimate goal.2 In the perceive-decide-act paradigm, intelligent decision making is the process of taking information and determining the best action given a set of physical and environmental constraints.

The maritime domain presents unique challenges to communications, sensing, and real-time signal processing, which affects decision making. On an SSN, the OOD is making decisions about course, speed, depth, and arrays or masts to use based on knowledge, experience, and a set of established rules and goals. These rules and goals take the form of mission accomplishment, detection avoidance, and safe navigation. The difference in decision making between a UUV and an SSN is more than just a technical matter. Submarines carry lethal force into forward areas and operate highly independently because the boat’s decision making is trusted and predictable. Developing artificial decision making that is similarly trusted will be key to trusting UUVs with higher consequence tasking and greater degrees of interaction with SSNs.

Generally speaking, computer-based decision making can be achieved by rules and heuristics, learning-based techniques, equation-based optimization, probabilistic models, and game theoretic approaches.3 Each of these approaches is a well-established technical area that considers different inputs/outputs and constraints and offers varying levels of sophistication and complexity.

Rules-based methods, which leverage finite-state machines and heuristics, are widely used in the automobile industry. These methods work well when there are specific conditions and actions that are repeatable and the operating conditions are controlled. In the maritime domain, this is typically not the case, and developing a robust set of rules may be intractable.
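A minimal sketch of a rules-based controller appears below: a small finite-state machine whose states, conditions, and actions are hypothetical, chosen only to show how explicit rules map repeatable conditions to actions.

```python
# Hypothetical finite-state machine for a survey UUV. States and transition
# rules are illustrative only; a real rules-based controller would be far
# larger and validated against its intended operating conditions.

RULES = {
    "transit": lambda s: "survey"  if s["on_station"]       else "transit",
    "survey":  lambda s: "return"  if s["battery_pct"] < 30 else "survey",
    "return":  lambda s: "surface" if s["at_recovery_pt"]   else "return",
    "surface": lambda s: "surface",
}

def step(state, sensors):
    """Apply the transition rule for the current state to the latest sensor snapshot."""
    return RULES[state](sensors)

# Example: low battery while surveying forces the vehicle to head home.
print(step("survey", {"on_station": True, "battery_pct": 25, "at_recovery_pt": False}))
# -> "return"
```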

Learning-based methods, including artificial neural networks and machine learning, are very data dependent and are used throughout many domains due to their versatility and adaptability. These methods learn relationships, structure, and causal effects from sample training sets and scenarios. Two areas that have shown reliable results in the complex maritime domain are image processing and classification. With improvements in sensor quality, processing ability, and available training sets, current learning-based algorithms are showing tremendous progress in practical applications.
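As a toy illustration of the learning-based approach, the sketch below uses scikit-learn to train a small neural network on synthetic two-feature "contacts" and then classify a new one. The features, classes, and network size are invented for illustration and say nothing about operational classifiers or their training data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic training set: two invented contact classes, each described by two
# made-up features (e.g., a tonal frequency and a broadband level). Real
# classifiers learn from large measured data sets, not toy Gaussians.
class_a = rng.normal(loc=[60.0, 120.0], scale=5.0, size=(200, 2))
class_b = rng.normal(loc=[300.0, 95.0], scale=5.0, size=(200, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)

# Classify a new feature vector that resembles class 0.
print(clf.predict([[70.0, 118.0]]))   # expected: [0]
```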

Methods that are more equation-based, such as optimization, probabilistic models, and game theoretic approaches, require models that capture both spatial and temporal characteristics. These models may be challenging to develop or implement if they span multiple temporal and spatial scales. However, when correctly formulated, they produce trustworthy and repeatable results. Platform route planning and task allocation are two areas that successfully use these methods and are showing promising results in in-water demonstrations.
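The sketch below gives a minimal flavor of the route-planning use case: Dijkstra's algorithm over a small grid whose per-cell costs are notional stand-ins for something like detection risk. The grid and costs are assumptions used only to illustrate the optimization formulation.

```python
import heapq

def plan_route(cost_grid, start, goal):
    """Dijkstra's algorithm over a 2-D grid of non-negative per-cell traversal costs.

    Returns the list of cells on a minimum-cost route, or None if unreachable.
    """
    rows, cols = len(cost_grid), len(cost_grid[0])
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                heapq.heappush(frontier, (cost + cost_grid[nr][nc], (nr, nc), path + [(nr, nc)]))
    return None

# Toy example: a 3x3 grid where the center cell is "risky" and worth routing around.
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(plan_route(grid, (0, 0), (2, 2)))   # route skirts the high-cost center cell
```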

Independence
Independence is the degree of external involvement in the local perceive-decide-act loop. Lower independence requires communications that are more frequent, lower in latency, and higher in information content, while higher independence is characterized by the opposite. Fundamentally, a more intelligent vehicle needs less external involvement in its perceive-decide-act loop and is therefore capable of greater independence.

For example, an SSN in Emissions Control is wholly independent and limited in communication but has weapons release authority under rules of engagement. Conversely, a remotely operated vehicle (ROV) operating on a tether has near-zero independence. In between are human-in-the-loop and human-on-the-loop control. In human-in-the-loop control, a higher authority has functional responsibilities inside the perceive-decide-act loop. Human-on-the-loop control allows the perceive-decide-act loop to operate quasi-independently, but certain high-consequence actions require approval.
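One hypothetical way to picture the distinction in software is sketched below: routine actions execute immediately, while high-consequence actions are gated behind operator approval (human-on-the-loop). The action names and approval callback are assumptions for illustration only.

```python
# Hypothetical human-on-the-loop gate: routine actions execute immediately,
# while high-consequence actions are held until an external operator approves.

HIGH_CONSEQUENCE = {"release_payload", "enter_restricted_area"}

def execute(action, operator_approves):
    """Run an action, deferring to the operator only for high-consequence ones."""
    if action in HIGH_CONSEQUENCE and not operator_approves(action):
        return f"{action}: held pending approval"
    return f"{action}: executed"

# A routine action runs on its own; the gated one waits for a human decision.
print(execute("adjust_depth", operator_approves=lambda a: False))      # executed
print(execute("release_payload", operator_approves=lambda a: False))   # held
```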

As on an SSN, UUV communications present a potential for detection and consequently should be minimized. At the same time, low levels of UUV intelligence drive low independence and require communications, reducing the UUV’s effectiveness. Forthcoming advances in UUV intelligence and the ability to handle more complex decisions and environments will allow more independent operations and, consequently, the transfer of less difficult tasking from SSNs to UUVs.

There has been significant work over several decades in developing the technical areas of computer-based perception and decision making, as well as efforts to improve the actions UUVs are physically capable of performing. Current UUVs range from tethered ROVs to complex UUVs capable of moderate levels of independence under benign conditions. As the latter become more widespread and their intelligence more robust, more difficult tasks that demand independence will be achievable with UUVs.

Over the past 20 years, there has been significant work in the technical areas of perception, decision making, and action that has increased UUV intelligence to a level where UUVs will soon operate with SSN-like independence. This promises to be a force multiplier in the realm of undersea warfare by outsourcing mundane tasking from SSNs to UUVs. As UUVs proliferate and grow smarter, thinking of their intelligence in terms of our own may make them seem less arcane. Understanding UUVs will allow us to design them more effectively, employ them more proficiently, and ultimately maximize our undersea advantage.

References
1 Hodges, Richard P. Underwater Acoustics: Analysis, Design and Performance of Sonar. John Wiley & Sons, 2011.
2 Albus, James S. “Outline for a Theory of Intelligence.” IEEE Transactions on Systems, Man, and Cybernetics 21.3 (1991): 473-509.
3 LaValle, Steven M. Planning Algorithms. Cambridge University Press, 2006.