Signal’s president warns of the dangers of AI and surveillance

Signal, the not-for-profit secure messaging app, has a president who is not afraid to speak her mind about the risks and harms of artificial intelligence (AI) and surveillance. Meredith Whittaker, who joined Signal in 2022 after leaving Google, has been vocal about her views on how AI is fundamentally “a surveillance technology” that threatens human rights and democracy.

9/26/2023 · 2 min read

Whittaker, who is also a co-founder of NYU’s AI Now Institute and an adviser to the Federal Trade Commission, shared her perspective at TechCrunch Disrupt 2023, where she explained why she believes AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies.

“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.

She further argued that the use of AI is itself a form of surveillance, as it produces data about people that can be used to make predictions and decisions affecting their lives and opportunities.

“You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc.,” she said.

Whittaker also pointed out that AI systems are not as intelligent or autonomous as they are often portrayed, but rather rely on human labor and intervention at various stages of their development and deployment.

“There’s no way to make these systems without human labor at the level of informing the ground truth of the data — reinforcement learning with human feedback, which again is just kind of tech-washing precarious human labor. It’s thousands and thousands of workers paid very little, though en masse it’s very expensive, and there’s no other way to create these systems, full stop,” she said.

Whittaker contrasted Signal’s approach to AI and surveillance with that of other tech companies. She said that Signal does not collect or monetize user data, nor does it use any AI tools or processes in its app or development work, except for a small on-device model that powers the face blur feature in its media editing toolset.

She said that Signal’s mission is to provide secure and private communication for everyone, without compromising on human rights or democracy.

“We won’t participate in the surveillance business model. We won’t participate in building technologies that are designed to surveil people. We won’t participate in building technologies that are designed to manipulate people,” she said.

Whittaker also criticized the UK government’s Online Safety Bill, which would require scanning users’ messages for harmful content. She said that this would violate users’ privacy and security, as well as create technical and legal challenges for end-to-end encrypted services like Signal.

She said that Signal would “walk” from the UK if the bill undermined the app’s core privacy guarantees.

“We’re not going to compromise on our values. We’re not going to compromise on our users’ security. We’re not going to compromise on our users’ privacy,” she said.