UofG researcher co-authors report on privacy intrusion and national security

Published: 7 June 2023

The head of the University of Glasgow’s College of Science & Engineering is one of three authors of a new report on privacy intrusion and national security in the age of AI.

Professor Dame Muffy Calder was senior research consultant on the report, prepared by The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS).
 
The Centre’s mission is to inform UK security policy through evidence-based, interdisciplinary research on emerging technology issues.
 
Recommendations from the Centre’s report will help security and law enforcement agencies assess the level of privacy intrusion from automated analytics.
 
The use of automated analytics – using computers to analyse data and provide insight – is crucial to enabling national security and law enforcement agencies to operate in a world where technology is changing rapidly.
 
However, the increased use of automated analytics also raises privacy concerns. National security and law enforcement agencies exist to protect people and institutions from harm.
 
But to do this effectively, their work involves a degree of monitoring and surveillance which, in some cases, can intrude into people’s private lives.
 
This research offers recommendations to help better understand and assess the level of privacy intrusion from automated analytics.
 
The research is based on interviews and focus groups with stakeholders across the UK government, national security and law enforcement, and legal experts outside government.
 
As part of the research, the authors examined the obligations of national security and law enforcement agencies to keep citizens safe in a challenging operational environment. A crucial part of this is minimising intrusiveness and adhering to the legal principle of proportionality which regulates surveillance activity.
 
The report offers a new framework to help with this. It focuses on six key factors relevant to proportionality judgements that will help individuals and organisations assess how automated analytics affect privacy intrusion. These are: datasets; results; human inspection; tool design; data management; and timeliness and resources.
 
The authors anticipate the proposed framework will add another guarantee to existing authorisation and compliance processes. The framework will help provide assurances that all relevant factors have been considered at every stage in the automated analytics life cycle.
 
Beyond the national security community, the report seeks to further contribute to the public privacy debate, while highlighting that public expectations of privacy are dynamic.
Left to right: Marion Oswald - Senior Research Associate, Muffy Calder - Senior Research Consultant, Ardi Janjeva - Research Associate 

Professor Calder, who is also Professor of Formal Methods in the School of Computing Science, said: “Artificial intelligence provides us with new ways of gathering and filtering vast amounts of data, with the potential to provide transformative benefits across society. However, its power also brings with it new challenges, particularly in national security and law enforcement. AI’s potential to improve monitoring and surveillance to help keep people safe must be balanced with minimising intrusion into their private lives and respecting their human rights.”
 
Dr Marion Oswald, lawyer and Senior Research Associate at The Alan Turing Institute, said: “We need to better understand, map and monitor the risk of multiple, connected, automated systems feeding into each other over an extended period. We hope this framework will be adopted by people across the national security and law enforcement communities, such as analysts, investigators, legal advisers, oversight bodies and judicial commissioners.”
 
Ardi Janjeva, lead author and Research Associate at CETaS, said: “Big data analytics and automated systems are becoming much more widespread in society. This means that changing expectations of privacy need to be understood in a more rigorous way, to promote transparency and public trust. That’s why we need more public perceptions surveys of intrusion from automated analytics in different scenarios.”

