«When AI systems are used, they are usually used for surveillance»
Meredith Whittaker, photographed by Florian Hetz.


The President of the messaging app Signal, Meredith Whittaker, warns about the application of Artificial Intelligence. It’s important not to give big corporations linked to governments a free pass, she says.

Read the German version here.

The Signal messaging app promises secure communication; it claims that not even Signal itself can find out what two people are saying or writing in the app. Do governments still push you to hand over information about your users and ask you to build in backdoors?

Like any other tech company, we frequently receive requests from governments for information. The best way to meaningfully protect communication privacy in the digital space is to ensure that we don’t have that information. We encrypt not only the contents of people’s messages, the «what you say», but also the information about «who you are», the metadata – which is your name, your profile info, your contact list, who’s in your groups. We’re really not able to provide any of that information as we don’t have it. If the government puts a gun to my head, they would have to shoot. That’s how end-to-end encryption works.
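To make that principle concrete, here is a minimal sketch of end-to-end encryption in Python using the third-party cryptography package. It illustrates the general idea only, not the actual Signal protocol (which adds double-ratchet key rotation, sealed sender and more): the two endpoints derive a shared key from their own key pairs, so a server in the middle only ever relays ciphertext and has nothing to hand over.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
import os

# Each party generates a key pair on their own device; private keys never leave it.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only public keys are exchanged; both sides derive the same shared secret.
shared_alice = alice_private.exchange(bob_private.public_key())
shared_bob = bob_private.exchange(alice_private.public_key())
assert shared_alice == shared_bob

# Turn the shared secret into a symmetric key for an authenticated cipher.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo-session").derive(shared_alice)

# Alice encrypts; a relaying server would only ever see this ciphertext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"see you at noon", None)

# Bob decrypts locally with the same derived key.
print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))  # b'see you at noon'
```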

 

Signal is often blocked by authoritarian governments. Which countries are blocking it at the moment?

At the moment it’s blocked in China and in Iran at least, but I don’t have an up-to-date list of all countries right now. Wherever a central power fears that dissenting opinions or honest expression pose a threat to it, it tries to crack down on dissenters. And of course, dissent requires freedom of speech and expression, and spaces where people can honestly discuss issues with each other. So in their attempt to suppress opinions and expressions they perceive as threatening, these governments attack the instruments that enable their dissemination.

 

In China the trend is toward handling everything on the smartphone without ever leaving a single super app, WeChat. Do you see this trend also coming to the West, now that Elon Musk is trying to develop Twitter into a super app called X?

People have been talking about it for years, and we still don’t have a super app. There are precedents like WeChat – which of course is very closely linked to Beijing – and in Japan there’s Line, in Korea Kakao. They are apps that grew up at a specific time, when there wasn’t a lot of competition in the market, and they are integrated into government services. They didn’t grow up in the particular context in which the technology industry that predominates in the U.S. developed. I don’t think there’s much hope for a super app that suddenly appears, displaces the competition, and then dominates multiple markets in the U.S.

 

Why should I use Signal and not WhatsApp or Telegram?

Telegram does a lot of posturing and marketing around its privacy promises, but ultimately it’s not a secure app: almost everything on Telegram is sent in the clear, which means that if they are forced to, they will share your data with governments. Privacy violations can therefore happen easily.

WhatsApp does use the Signal protocol to encrypt message contents, but they do not encrypt metadata. And as we know, metadata is extraordinarily revealing. And let’s be real, they’re owned by Meta. So it’s not inconceivable that they could combine the metadata and other information they have with the extraordinarily invasive surveillance data collected by other Meta properties like Facebook or Instagram. The big difference is that we are a nonprofit company that goes out of its way to have no data at all.

 

Is Signal using Artificial Intelligence (AI)?

Signal does use one small machine learning model, which is part of our media editing suite of tools. It lets people click a button to automatically make faces unrecognizable in a photo, and it runs locally on your phone. For example, if you take a photo at a party where you don’t know everyone and you don’t have consent to share facial biometric data, you can click a button and this model will detect the faces and blur them so you can ensure privacy. That’s a nice and useful application of AI, and it doesn’t send any data to an app company.
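As a rough illustration of this kind of on-device face blurring, the sketch below uses OpenCV’s bundled Haar cascade detector; the detector, the parameters and the file name «party.jpg» are assumptions for the example, not Signal’s actual model or code. The point is that everything happens locally and only the blurred result is ever shared.

```python
import cv2

# Load the photo locally (hypothetical file name for this sketch).
image = cv2.imread("party.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces on the device with a classic, lightweight detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur each detected face region so biometric detail is unrecoverable.
for (x, y, w, h) in faces:
    image[y:y + h, x:x + w] = cv2.GaussianBlur(
        image[y:y + h, x:x + w], (51, 51), 0
    )

# Only the blurred copy is written out and shared; nothing is uploaded.
cv2.imwrite("party_blurred.jpg", image)
```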

 

What, then, is the problem with AI?

When AI systems are used, they are usually used for surveillance. They profile people’s faces and generate data about whose face it is or what kind of person that face supposedly indicates. They’re used for ratings or for other purposes that are themselves forms of surveillance. To create these AI systems, you first have to have huge amounts of data to train and calibrate them. The metastasis of AI as a dominant and very hyped form of technology is antithetical to ensuring real privacy. It entrenches and expands the business model of surveillance, because its insatiable demand for data naturally leads to more surveillance, more collection and generation of data.

 

What problems does AI have besides surveillance?

The things we’re calling AI now are corporate technologies: they rely on extraordinarily concentrated resources that only a handful of corporations, based largely in the US and China, have access to. They rely on high-powered computational systems and huge amounts of data, which isn’t just taken off the shelf or scraped from a database but meticulously and laboriously organized and labeled. That data is then assessed by huge numbers of human workers, which is itself very expensive. Hence, the barrier to entry for large-scale AI is very high, and only a handful of companies can clear it. This is why OpenAI, which started off as a nonprofit, is now effectively part of Microsoft, and why Anthropic is effectively part of Google. Only the few companies that do have these resources are able to create these models from scratch.

 

Why is that dangerous?

Because we have handed the keys for incredibly important societal decision-making and direction-setting to a handful of surveillance companies that are ultimately driven by profit and growth, not by social benefit. And we know that these systems will be used to achieve those ends, even if they undermine important social values. That danger could play out in AI systems that increasingly surveil and penalize workers, extracting wages from them. The use of generative AI systems will undermine creative professions like journalism, art and writing. In policing and in military conflict, AI will be used to shut down protest and dissent and to further aims of social control. We could go through a litany of systems that are calibrated to serve the interests of the powerful at the expense of those who have less power. We should be very wary about the kind of turnkey trust we’re putting in the hands of those corporate actors.

 

Big Tech and governments are calling for regulation of AI. But isn’t that also to suppress possible competition coming from open source AI projects?

There’s no clear definition of what «open source AI» even means. For example, Meta released its large language model LLaMA and labeled it open source, when very little information about the model was actually released. Open source AI doesn’t give everyone the resources necessary to develop AI from scratch. And the incredibly important decisions about model weights, about data, and about how those models are calibrated and trained, which are currently in the hands of big companies, are never open source.

 

How important is open source to Signal?

Open source provides two things that are generally good. First, it allows transparency: you can review the code and potentially other documentation and resources, which is really helpful for accountability. In Signal’s case that matters, because it means you don’t have to take our word for our privacy promises; a whole community of people scrutinizes our code and tests it, and if they find a bug, they report it and we fix it. It creates a kind of immune system that keeps us honest and makes sure we benefit from a lot of eyes. That’s the classic value of open source. Second, open source is ultimately a legal license agreement, and depending on the license, certain forms of reuse are allowed. You can take the code and fork it; in the case of Signal, you can take all of our code and reuse it, you just can’t call it Signal. Open source is an essential pillar of doing what we do well, rigorously, and honestly. But in the case of AI, it doesn’t solve the problems of competition, centralized control, or resource scarcity in the AI market.

 

Very generally, there is a growing asymmetry between the government and the individual. The government knows more and more about me, and I know less and less about the government. How did the free West come to this situation and how does it get out of it?

I see, at least in the U.S., that we cannot meaningfully separate the surveillance companies and the surveillance industry from the government. We saw that with the Snowden files and the wiretapping: We now recognize that the surveillance companies’ business model has been allowed to exist in part because it benefits the government, including the intelligence agencies and others who, at least in the U.S., are not allowed to spy on citizens to the same extent that private industry can. These are enmeshed.

«I see, at least in the U.S., that we cannot meaningfully separate the surveillance companies and the surveillance industry from the government.»

 

Do you have an example?

Facebook recently turned over private messages between a mother and her daughter in Nebraska that were used to convict them of felonies and misdemeanors in a case of illegal abortion – now they face jail time. Even if companies sometimes resist giving the data to the government, ultimately, if they have it, they will hand it over.

 

How can I best take responsibility for my own digital security?

Use Signal. Also, people can apply pressure on a local level. If they see their schools implementing facial recognition, as is happening in the U.S., they can go to a meeting and oppose it. If they see their government pushing for audit checks or client-side scanning in the name of child welfare, they can say, this is an excuse for surveillance and we don’t accept this. It’s really important to protect your privacy before you need it. Not in a moment of crisis.

«It’s really important to protect your privacy before you need it. Not in a moment of crisis.»

What about using a VPN or the Tor browser?

Tor is great and it does help protect privacy, but it’s hard to use. And with a VPN, you just never know who the VPN provider is. It’s a really scammy market. You need to find a trusted provider.

 

If I am targeted by a government and I use a smartphone, won’t they find out anything they want about me anyway? Spyware like Pegasus only needs a phone number to get total access.

Pegasus is a really dangerous piece of spyware, and it should be banned. But it’s not as if I could enter your phone number into a web page and have Pegasus attack you. It targets a single person; it’s not some kind of massive dragnet system. And it’s expensive. I think it’s absolutely despicable that governments around the world, including the U.S. government, are licensing these spying technologies. We have seen Mexico use this technology to spy on journalists.

 

You said that if the EU Commission does enforce the chat control law, Signal will leave Europe for good. Is this still your plan?

We’re never going to leave willingly. But if we had to choose between adulterating our encryption and undermining the privacy promises we make, or leaving, then we would leave. Because we will not betray the people who rely on us. But we are confident that this terribly misguided law will be remedied before it comes down to that.

 

How do you see the risk that chat control will be implemented or required at the operating-system level?

This would mean that the operating systems everyone relies on have a significant vulnerability. No one could trust these systems anymore, including the intelligence services, the financial sector, the government, and so on. It would be a disaster.

 

Are there other dangerous government policies that liberty-loving people in Europe should be afraid of?

The spying clause in the UK Online Safety Bill is another example. In general, I see a kind of religious fervor right now in many countries in Europe, in the UK, in the U.S., and to some extent beyond, to push through very dangerous laws that would undermine end-to-end encryption. And of course, end-to-end encryption is the name for the only technology we have that actually provides privacy for interpersonal communications. So that would be a disaster and would ultimately mean the elimination of interpersonal privacy in the digital realm. Let’s not forget: we don’t really have a choice about whether we use online tools and services. We are born into a world where, yes, we could go live in the woods, eat berries, and have no friends; but otherwise we are forced to use these services. Our lives, our societies, our governments, our workplaces are structured around them. It is not a matter of individual choice. Not using social media is even cited as a risk factor in many AI assessment programs that evaluate people for social and other benefits.

 

Many people say: «If you have nothing to hide, you have nothing to fear.» What’s your answer to that?

I don’t think a lot of people actually believe that, because a lot of people really value their privacy. When they’re talking to their partner at night, they don’t want that witnessed. If they imagined that everything they ever said near their Alexa device was suddenly blasted out of the speakers at their workplace, they would probably melt into a puddle and cry. People are concerned, but it’s not an individual decision they can make when they need directions to work, when they need to log into their workplace computer, and so on. When the products and services of the Internet were introduced in the 1990s and became the infrastructure of our world in the 2000s, the terms were never made clear: people were not told that databases of their locations, their friends and their purchases were being kept – all of which will be used in the future to train AI systems that judge, for example, whether they are a good worker or not. There has been a lot of trickery and misdirection in the introduction of online technologies into our daily lives, and we are only beginning to see the harmful consequences.
