GCHQ has published a paper, titled “Ethics of AI: Pioneering a New National Security”, in which it sets out examples of how it could use AI.
British intelligence agency GCHQ has laid out its plans for using artificial intelligence in national security.
GCHQ is the signals intelligence arm of the UK, responsible for gathering information as well as securing UK communications.
The organisation has published a paper, titled “Ethics of AI: Pioneering a New National Security”, in which it sets out examples of how it could use AI going forward.
AI in National Security
Potential uses include fact checking and the detection of deep fake media, which has been mooted as a threat to democracy. Other potential uses include mapping international trafficking networks, analysing chat rooms for evidence of child grooming and identifying potentially malicious software for cybersecurity purposes.
Director GCHQ Jeremy Fleming said: “AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people and way of life. It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.”
An Eye on Ethics
The paper also sets out how the organisation intends to use AI ethically, fairly and transparently. It highlights GCHQ’s support for the UK’s AI sector, including an AI lab in its Manchester office, mentoring AI startups through accelerator schemes, and supporting the creation of the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, in 2015.
“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ,” said Fleming. “Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”