Jun 27, 2022
Last week, the Defence Artificial Intelligence Strategy was published, setting out how the UK will 'adopt and exploit AI at pace and scale' to transform Defence into an 'AI ready' organisation and deliver cutting-edge capability.
This new paper aims to trigger a broader debate about the cultural and organisational changes required within the UK defence enterprise to become genuinely ‘AI ready’. It considers this in the context of AI-enabled decision-support, and its impact on the role of command and commanders.
Trust in AI: Rethinking Future Command builds on the premise that trust at all levels (operators, commanders, political leaders and the public) is essential to the effective adoption of AI for military decision-making, and explores key related questions.
The paper follows an earlier report produced by QinetiQ, which examined trust as a fundamental component of military capability and an essential requirement for military adaptability. Like its predecessor, it is theoretical but has practical application.
The paper considers the concepts of AI and trust, the role of human agency, and AI’s impact on humans’ cognitive capacity to make choices and decisions. It proposes a five-dimensional framework for developing trust in AI-enabled military decision-making and examines the implications of AI on people and institutional structures that have traditionally underpinned the exercise of authority and direction of armed forces.
In seeking to answer how trust affects the evolving human–AI relationship in military decision-making, the paper exposes several key issues requiring further research.
Paul O’Neill, RUSI Director of Military Sciences, said:
"Much of the discussion about the use of AI focuses on the technology. What our report seeks to do is balance the discussion to take account of the human and organisational impacts and implications of the technology. This is a symbiotic relationship in which the greatest value derives from considering the needs of the whole human–machine team."
Christina Balis, QinetiQ Campaign Director for Training and Mission Rehearsal, said:
"The growing military use of AI for operations and mission support will transform the character of warfare. This is not just a question of adapting our armed forces' tactics; we need to fundamentally rethink the role of humans in future military decision-making across the spectrum of 'operate' and 'warfight', and reform the institutions and teams within which they operate. It requires that we rethink the notion of trust in human–machine decision-making."