This presentation discusses the development of test methods to establish acceptable levels of confidence in Artificial Intelligence and Machine Learning (AI/ML) enabled military systems. It focuses on the nature of AI/ML and on how traditional test and evaluation (T&E) approaches must evolve to address this technology.
AI/ML technologies have the potential to greatly expand the use of autonomous military systems and accelerate decision-making during military operations. The use of AI/ML-enabled military systems, particularly those with lethal capabilities, requires high confidence in the AI/ML agents employed and in the data on which their decisions are based. The T&E community must therefore develop methods to establish acceptable levels of confidence in AI/ML-enabled systems.
The underlying principles of test and evaluation are independent of the technology being tested, but AI/ML introduces new challenges because the response of the system under test may vary across successive trials. The presentation discusses the nature of AI/ML technology and potential testing approaches that remain true to the underlying principles of T&E.
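To illustrate this point, the short Python sketch below (an illustration added to this summary, not material from the presentation) treats each test trial of an AI/ML-enabled system as a pass/fail outcome and reports a confidence interval on the observed success rate over repeated trials; the functions run_trial and wilson_interval are hypothetical placeholders for a real test harness and its analysis step.

    import math
    import random

    def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score interval for a binomial success proportion (95% by default)."""
        if trials == 0:
            return (0.0, 1.0)
        p_hat = successes / trials
        denom = 1 + z**2 / trials
        centre = (p_hat + z**2 / (2 * trials)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
        return (max(0.0, centre - half), min(1.0, centre + half))

    def run_trial() -> bool:
        """Placeholder for one trial of the system under test.
        A stochastic stand-in here; a real harness would execute the
        system against a scenario and score the outcome."""
        return random.random() < 0.9  # assumed 90% underlying success rate

    if __name__ == "__main__":
        n = 200
        successes = sum(run_trial() for _ in range(n))
        low, high = wilson_interval(successes, n)
        print(f"Observed success rate: {successes / n:.3f}")
        print(f"95% confidence interval: [{low:.3f}, {high:.3f}]")

Because successive trials of an AI/ML-enabled system may not produce identical responses, statements of confidence of this statistical form, rather than single pass/fail results, are one way such variability might be accommodated within a traditional T&E framework.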
The approach taken considered broad classes of AI/ML and analysed testing methods that leverage traditional T&E frameworks. Current ML use cases and projected future scenarios informed the investigation.
The presentation proposes possible general approaches to T&E of AI/ML-enabled systems in order to stimulate discussion and innovation.