Artificial Intelligence workshop 2020
27 February 2020, BluePoint Brussels, Belgium
Event in cooperation with the NATO Command and Control Centre of Excellence (C2COE) 
“I very much welcome this debate as AFCEA discusses how we can leverage emerging Quantum computing and Artificial Intelligence in order to solve 21st century challenges. These technologies have the potential to enhance how NATO provides collective deterrence and defence in today’s dynamic and interconnected world.”
General Markus Kneip, DEU A, Chief of Staff SHAPE 

> I. The advent of the New Age of Computing: Quantum Computing and Artificial Intelligence
Recent claims of a breakthrough in Quantum Computing indicate that the New Age of Computing, with a “quantum leap” in the speed of processing and managing data, is closer than expected. This will provide a basis for promising developments in deep or “real” Artificial Intelligence, with sophisticated algorithms capable of learning and developing themselves further into high-precision predictive models beyond human analytical capacity. One set of applications, already existing but still rudimentary, will be able to support sense-making and decision-making in an unprecedented way. Leaders might want to resort to this capability for advice and as a basis for decisions in complex situations, which until today were mastered by relying on human experience, history, and gut feeling, with all the human errors that can result.

The purpose of this session is to describe the state of the art in QC and AI, their foreseeable development into practical tools, and their interrelationship, and to set the scene for the following sessions.

Key questions that should be addressed are:
o How mature is QC?
o What are potential applications with relevance to governmental activities beyond encryption?
o What factors will determine the path of AI?
o How fast will advanced analytics develop into every-day usability?
o In what areas will “real” AI enter military and governmental decision-making processes?

> II. The impact on Command and Control: Processes, Organisation, Skills
AI is expected to enter hierarchical structures such as the military, resulting in speedier, less risky, and therefore better decisions. Implications for the relevant processes and organisational structures seem unavoidable. Analytical steps, the factors to be included, and even the final decision point in the OODA (Observe-Orient-Decide-Act) loop will be revolutionised and run more and more autonomously. This influences the whole Command and Control structure, from the strategic to the tactical level. The existing balance of centralised and de-centralised decision-making might also shift, depending on the availability of analytical tools. Finally, AI will only unfold its potential for efficiency and speed if decision-makers and their staffs are capable and willing to use it. This requires not only trust and a new skill set, but also the development of AI in an ethical way.
The purpose of this session is to discuss the effects of AI, especially in the field of C2; to understand the scope of the changes; and to foster open-mindedness in adopting them. Industry and military organisations already using AI may share their initial experiences. Specific requirements for the application of AI in future military operations should also be addressed.
Key questions in this session should be:
o What are the predominant areas of implementation in military decision-making processes?
o Are there mature models and sufficient experience from industry?
o How should analytical tools be used: centralised and/or decentralised?
o What is the likely impact on hierarchical organisations?
o Do data need to be pre-processed (labeled) in order to maintain the advantage AI gives in accelerated, complex decision-making?
o How to train decision-makers?
o How far away is autonomous decision-making?
o Does a trusted, “ethical” AI deliver the optimal service?

> III. Security challenges within Artificial Intelligence: How to overcome them?
If the decisive advantage in decision-making and C2 is increasingly based on AI, the supporting tools need to work flawlessly and securely. Challenges for today’s analytical tools lie mainly in the proper training of algorithms: using the largest data pool available is necessary in order to avoid bias. Lesser-known threats come with attacks from cyber space, aimed at corrupting such data sets or even changing algorithms. Other vulnerabilities of AI are its availability under battlefield conditions (robustness and resilience), its limited flexibility in adapting to a changing environment (brittleness), and the potential scarcity of labeled data. Acceptance comes with trust. Trust comes with understanding. Understanding deep AI is a challenge, if it is possible at all. Finally, the negative impact of AI on traditional skills and doctrines needs to be analysed, in case redundancy needs to be preserved.
The purpose of this session is to raise awareness about concrete threats and challenges to AI, and to discuss technological and other means to mitigate such risks.
Key questions that should be addressed are:
o What endangers the proper use of AI in decision making?
o Can AI be hampered by cyber attacks?
o Is AI itself the technological solution to such attacks from cyber space?
o Will there be an arms race in AI?
o Will using AI make us dependent?
o Will using AI lead to better, more reliable decisions?
o How to ensure that AI is available and usable everywhere it is needed?
o How can resilience be built into AI systems?
o How to develop trust in advanced AI solutions?
o How to train the workforce?
o Will there be a cost issue in the future?