Facial recognition technology has become increasingly common in our daily lives. However, its use raises questions about privacy, security, and human rights. Dr Kerem Öge at the University of Warwick and Manuel Quintin at Université Laval examine how public discussions about facial recognition technology have changed over time in Europe and the United States. They explore how the shifts in narrative have shaped policy.
The researchers analysed thousands of statements about facial recognition from various sources between 2000 and 2022. They found that early discussions mostly focused on facial recognition as a security tool. After the 9/11 attacks in the US, the technology was promoted as a way to keep the public safe and fight terrorism. Although civil rights groups raised concerns, security considerations overshadowed them.
However, as facial recognition became more widespread, the discourse began to change. Urgent security concerns gave way to questions about privacy. By the 2010s, more people were questioning whether this technology was being used responsibly and ethically.
Öge and Quintin found that Europe and the US had similar overall trends in their facial recognition debates, but with some key differences. In Europe, concerns about transparency and regulation emerged earlier than in the US. This may help explain why Europe has been quicker to implement regulations such as GDPR.
In the US, tech companies played a more central role in the debate. Initially, they promoted the benefits of the technology, but in recent years, some major companies have become more cautious. For example, IBM, Microsoft, and Amazon have limited or stopped selling facial recognition to police forces, citing ethical concerns about the practice.
The researchers also noticed that, as the debate over the technology evolved, different groups began to use similar language. To visualise this, the researchers used a method called Discourse Network Analysis. They created visual maps of the debate, showing how different actors are connected based on their statements about facial recognition. In these visualisations, each actor is represented by a dot, and lines between them show when actors use similar arguments.
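The core idea behind such a network, linking actors who take the same stance on the same concept and then looking for clusters, can be sketched in a few lines of code. The following is a minimal illustration only: the actor names and coded statements are hypothetical placeholders, not data from the study, and real Discourse Network Analysis uses dedicated software and far richer coding schemes.

```python
from itertools import combinations

# Hypothetical, simplified data: each actor's coded statements as
# (concept, stance) pairs. Names and codes are illustrative only.
statements = {
    "Police agency":    {("security", "pro"), ("surveillance", "pro")},
    "Tech vendor":      {("security", "pro"), ("innovation", "pro")},
    "Civil rights NGO": {("privacy", "con"), ("surveillance", "con")},
    "Privacy NGO":      {("privacy", "con"), ("transparency", "con")},
}

# Draw a line between two actors whenever they take the same stance
# on the same concept (their statement sets intersect).
edges = {
    frozenset((a, b))
    for a, b in combinations(statements, 2)
    if statements[a] & statements[b]
}

def clusters(nodes, edges):
    """Group actors into connected components via a simple traversal."""
    seen, groups = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            # Follow every edge that touches n to its other endpoint.
            stack.extend(m for e in edges if n in e for m in e if m != n)
        seen |= group
        groups.append(group)
    return groups

for g in clusters(statements, edges):
    print(sorted(g))
```

With this toy data the traversal recovers two camps, one pro-security and one privacy-focused, mirroring the two-cluster structure the researchers describe in their early networks.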
Early network visualisations showed two main clusters: a large group of government agencies and businesses promoting facial recognition for security, and a smaller group of NGOs raising concerns.
Over time, these networks became more complex. By 2020, the visualisations revealed a more diverse debate, with some tech companies joining civil rights groups in expressing caution. This ‘harmonisation’ has made it easier for people with different viewpoints to engage in meaningful dialogue.
The changing nature of the facial recognition debate has had real-world impacts on policy. For example, the researchers suggest that the growing focus on ethics and transparency in Europe has contributed to the development of EU proposals to regulate AI.
Öge and Quintin’s study shows how public discussions about new technologies can evolve over time and influence policy decisions. By understanding these patterns, we can better navigate the complex relationship between technological innovation, public opinion, and regulation.