Written By: Richard Domikis, Chief Technology Officer, Cornerstone Defense

What is to be done when new technologies meet old problems?

Whether you are at home, in a hotel, or on a battlefield, odds are you have encountered new technologies that challenge our current ways of thinking and acting. How many of us have had a conversation with another person, only to have Siri think we were talking to her and suddenly interrupt? A reminder, of course, that our phones are always listening to us.

While we choose to invent and use these technologies, we don’t always consciously choose the complications that come with them.

A recent United Nations report investigated a Libyan Unmanned Combat Air Vehicle (UCAV) and its autonomous operations. The report suggests that this may be the first AI-guided machine to kill humans without human guidance and specific approval. For most unmanned systems, there is still a person behind the scenes who makes that critical “fire” decision; to a degree, we can consider these systems remote-controlled or semi-autonomous. Often, these limitations stem not from the technology but from our desire to maintain some level of human control over an autonomous system. For example, we might want a UAV that can fly and look for targets based on human-provided criteria, but we still want a final HUMAN decision before a serious action.

The UN report (https://undocs.org/S/2021/229) explores a Libyan rotary-wing UCAV that apparently operated, hunted, and killed human targets autonomously. Now, to say that this machine operates completely without human input is a bit disingenuous. After all, it was humans who built the machine, the software, and the rules of targeting and engagement by which the machine would operate. The report nevertheless suggests that this UCAV did target and kill humans without additional human approvals. That’s a large and scary step in UAV operations – think about a few scenes from the Terminator movies, for example.

There is, of course, an important lesson to derive from the findings of the UN report. As we become more confident in and dependent on AI/ML and autonomous systems, we also must decide whether or not to relinquish some controls and decision-making abilities to these systems. When we tell our smart washing machine, “more clean” or “more dry,” we’re also letting the machine decide what “more” means. When we invite Siri into our houses and our lives, to a degree, we give up some privacy to have her listen and be ready to set a timer or quickly answer a question. While it’s unclear what checks and balances the Libyan UCAV had in place to ensure proper operation, it is certainly a scary evolution of combat.

Cornerstone works to operationally use AI/ML/DL, and a frequent challenge is accepting what an autonomous system decides to do once it has been programmed. Billy Beane, of Major League Baseball data-analytics fame, once said, “It’s easy to do what analytics tell you to do when that’s what you want to do…the challenge is doing what it tells you to do when you don’t agree.”

As Cornerstone creates more effective AI/ML/DL systems, we often put their planned actions into a human decision loop to build human confidence that the autonomous system is planning and acting in a manner consistent with the mission, relevant laws, and rules of engagement. When new technologies meet old problems – it’s important to have a conscientious approach to the solution.
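The human decision loop described above can be sketched in code. This is a minimal, hypothetical illustration (the names `PlannedAction`, `human_gate`, and the `"engage"`/`"observe"` action types are assumptions for the example, not Cornerstone's actual system): low-consequence actions proceed automatically, while any serious action blocks until an explicit human approval is returned.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlannedAction:
    """A hypothetical action proposed by an autonomous system."""
    target_id: str
    action: str        # e.g. "observe" or "engage" (illustrative labels)
    confidence: float  # the system's own confidence in its plan

def human_gate(plan: PlannedAction,
               approve: Callable[[PlannedAction], bool]) -> bool:
    """Decide whether a planned action may execute.

    `approve` stands in for the human operator in the loop.
    Low-consequence actions pass through; serious actions
    require an explicit True from the human -- the default
    answer is always "do not act."
    """
    if plan.action == "observe":
        return True              # benign action: no approval needed
    return bool(approve(plan))   # serious action: human sign-off required

# Usage: an operator callback that denies everything by default.
plan = PlannedAction(target_id="T-42", action="engage", confidence=0.97)
print(human_gate(plan, approve=lambda p: False))  # False: never fires unapproved
```

The key design choice is that the machine's confidence score plays no role in bypassing the gate: however certain the system is, the serious-action path still terminates in a human decision.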

From the Tech Corner,

Richard Domikis, Chief Technology Officer, Cornerstone Defense

About Rich Domikis:

With over 30 years of experience as a senior technologist, Mr. Domikis has guided critical solutions for the DoD and Intelligence Community. Rich previously served as the Chief Engineer of Harris/Peraton. Prior to that, he served as Chief Technology Officer (CTO) of ManTech’s Mission Solutions & Services group. Over his long career, Rich has served as a senior technologist with companies such as Boeing, Raytheon, and General Dynamics. He has served as direct support to technology leadership at DIA, ODNI, NRO, NGA, and others, often working on large cross-agency programs.

Rich has a bachelor’s degree in Intelligence and a Master of Science in Space Systems from American Military University and is a Ph.D. candidate at Walden University in Applied Management and Decision Sciences. Rich maintains CISSP, PMP, International Law/Law of Armed Conflict and Use of Humans in Research Certifications.