Analysis


Overview

Whereas data processing refers to administrative or technical data management practices, in the analysis phase data becomes information that is relevant for political decision-making. Different automated data mining methods serve different purposes and are governed by their own specific rules. Large datasets are used both to identify links between already known individuals or organizations and to “search for traces of activity by individuals who may not yet be known but who surface in the course of an investigation, or to identify patterns of activity that might indicate a threat.” For example, contact chaining is one common method used for target discovery: “Starting from a seed selector (perhaps obtained from HUMINT), by looking at the people whom the seed communicates with, and the people they in turn communicate with (the 2-out neighbourhood from the seed), the analyst begins a painstaking process of assembling information about a terrorist cell or network.”
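To make the contact-chaining idea concrete, the sketch below walks outward from a seed selector to its 2-out neighbourhood in a small, invented communications graph. It is a minimal illustration of the quoted description only: the graph data, selector names, and the `contact_chain` function are assumptions made for the example and do not describe any agency's actual tooling.

```python
def contact_chain(comm_graph, seed, max_hops=2):
    """Return every selector reachable from `seed` within `max_hops`
    communication links (the 2-out neighbourhood when max_hops=2),
    mapped to the hop count at which it was first reached."""
    frontier = {seed}
    seen = {seed: 0}
    for hop in range(1, max_hops + 1):
        next_frontier = set()
        for selector in frontier:
            for contact in comm_graph.get(selector, ()):
                if contact not in seen:      # record only the shortest hop distance
                    seen[contact] = hop
                    next_frontier.add(contact)
        frontier = next_frontier
    return seen

# Hypothetical call records: selector -> set of selectors it contacted.
comm_graph = {
    "seed@example.org": {"a@example.org", "b@example.org"},
    "a@example.org": {"c@example.org"},
    "b@example.org": {"a@example.org", "d@example.org"},
}

print(contact_chain(comm_graph, "seed@example.org"))
# e.g. {'seed@example.org': 0, 'a@example.org': 1, 'b@example.org': 1,
#       'c@example.org': 2, 'd@example.org': 2}  (order may vary)
```

Even this toy version shows why the method scales so quickly: the number of selectors swept up grows with each additional hop, which is one reason hop limits and minimization rules matter for oversight.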


Many intelligence agencies embrace new analytical tools to cope with the information overload of digitally connected societies. For example, pattern analysis and anomaly detection increasingly rely on self-learning algorithms, commonly referred to as artificial intelligence (AI). AI is expected to be particularly useful for signals intelligence (SIGINT) agencies because of the vast and rapidly expanding datasets at their disposal. However, the risks and benefits generally associated with AI challenge existing oversight methods and legal safeguards, and they push legislators as well as oversight practitioners to engage creatively with AI as a dual-use technology. At the same time, malicious use of AI creates new security threats that must be mitigated.
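As a loose illustration of what "anomaly detection with self-learning algorithms" can mean in practice, the sketch below fits an unsupervised Isolation Forest (from scikit-learn, chosen here as a generic example rather than anything named in the sources) to invented per-account traffic features and flags statistical outliers. The feature names, numbers, and threshold are assumptions for the example; real SIGINT pipelines are far more complex.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix: one row per account, columns e.g.
# [messages per day, distinct contacts, share of night-time traffic].
rng = np.random.default_rng(0)
normal = rng.normal(loc=[20, 15, 0.1], scale=[5, 4, 0.05], size=(500, 3))
unusual = np.array([[300, 190, 0.9], [250, 2, 0.95]])   # injected outliers
features = np.vstack([normal, unusual])

# The model learns what "typical" rows look like without labelled examples,
# then scores every row; predict() returns -1 for rows it deems anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
labels = model.predict(features)

print("flagged rows:", np.where(labels == -1)[0])
```

The oversight-relevant point is that such a model flags deviations from learned behaviour, not proven wrongdoing, which is why questions about false positives, explainability, and human review arise as soon as these outputs feed into targeting decisions.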


Nerd Corner

The Central Intelligence Agency (CIA) has 137 projects in development that leverage AI in some capacity, for example: incorporating computer vision and machine learning algorithms into intelligence collection cells that would comb through footage and automatically identify hostile activity for targeting; image recognition or labeling to predict future events such as terrorist attacks or civil unrest based on wide-ranging analysis of open source information; developing algorithms to accomplish multilingual speech recognition and translation in noisy environments; geo-locating images with no associated metadata; fusing 2-D images to create 3-D models; and developing tools to infer a building’s function based on pattern of life analysis.

Babuta, Alexander, Marion Oswald, and Ardi Janjeva. 2020. “Artificial Intelligence and UK National Security: Policy Considerations.” London: Royal United Services Institute (RUSI). April 2020. https://rusi.org/publication/occasional-papers/artificial-intelligence-and-uk-national-security-policy-considerations.

Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, et al. 2018. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” February 2018. https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.

Cavan, Jo, and Paul Killworth. 2019. “GCHQ Embraces AI, but Not As a Black Box.” About:intel. October 8, 2019. https://aboutintel.eu/gchq-embraces-ai/.

Cranor, Lorrie Faith. 2008. “A Framework for Reasoning about the Human in the Loop.” In Proceedings of the 1st Conference on Usability, Psychology, and Security. UPSEC’08. Berkeley, CA: USENIX Association. http://dl.acm.org/citation.cfm?id=1387649.1387650.

Eijkman, Quirine, Nico van Eijk, and Robert van Schaik. 2018. “Dutch National Security Reform Under Review: Sufficient Checks and Balances in the Intelligence and Security Services Act 2017?” Amsterdam: Institute for Information Law (IViR), University of Amsterdam. https://www.ivir.nl/publicaties/download/Wiv_2017.pdf.

Government Communications Headquarters. 2011. “HIMR Data Mining Research Problem Book.” London: GCHQ. September 20, 2011. https://www.documentcloud.org/documents/2702948-Problem-Book-Redacted.html.

Herpig, Sven. 2019. “Securing Artificial Intelligence: Policy Brief.” Berlin: Stiftung Neue Verantwortung. October 2019. https://www.stiftung-nv.de/sites/default/files/securing_artificial_intelligence.pdf.

Hoadley, Daniel S., and Nathan J. Lucas. 2018. “Artificial Intelligence and National Security.” Washington, DC: Congressional Research Service. April 26, 2018. https://fas.org/sgp/crs/natsec/R45178.pdf.

Office of the Director of National Intelligence. n.d. The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines. Washington, DC: DNI. https://www.dni.gov/files/ODNI/documents/AIM-Strategy.pdf.

UK Home Office. 2017. Interception of Communications. Pursuant to Schedule 7 to the Investigatory Powers Act 2016. Draft Code of Practice. December 2017. London. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/668941/Draft_code_-_Interception_of_Communications.pdf.