AI expert on data and power: "The industry is a monoculture"


Meredith Whittaker is an artificial intelligence expert who left Google and now leads the Signal Foundation. A conversation about the surveillance business model.

A cat image created by AI. The DALL-E software generates images from text descriptions. Photo: OpenAI/afp

taz: On a normal day – we take local public transport, go shopping, use apps and maybe the smartphone's voice assistant – how often do we come into contact with artificial intelligence (AI)?

Meredith Whittaker: It's very, very difficult to say, because in most cases we don't even notice it. There is not even an obligation to disclose to users that they are dealing with an AI. So you apply for an installment plan, report an insurance claim, walk past a surveillance camera or send off a job application – and you don't even realize that a decision is being made in the background using AI.

Meredith Whittaker started her career in the tech business at Google – and soon became one of the industry's harshest critics. She organized protests by Google workers, left the tech giant and co-founded the AI Now Institute at New York University, an interdisciplinary institution that researches artificial intelligence. As of September 12, she is President of the Signal Foundation.

How, then, can we know how big a role this technology is playing in our society?

That's the problem: we can't say for sure. But we can make assumptions based on the applications that are already on the market. And we know that AI affects, in many ways, the choices we have in life and our access to resources – for example, whether you have a chance of getting a job, or of getting a loan to build a house or start a business.

When it comes to AI, many people are afraid of cars running amok or robots turning on humans. Are these fears justified?

I wouldn't say those fears are completely unfounded. Self-driving cars, for example, currently have a rather dubious track record. But other aspects worry me more. Above all this one: AI applications lie in the hands of just a few corporations. Only these few have the financial and human resources to build the large models that AI requires. So we have an immense market concentration. And what they program forms a center of power that transcends our social and political institutions.

What do you mean?

For the corporations, AI serves to monetize the surveillance business model – and thus to further increase market power and profits. Historically, it is also easy to see that large corporations like Google or Facebook jumped in at the very moment they realized that AI was a great way to exploit their surveillance data and market it even more profitably. This accumulation of data gives the corporations a unique power that goes beyond anything we have previously known from government institutions.

When we talk about power, we are also talking about the people who hold that power. Who are they?

In the US tech industry, we have nothing but white men who studied at Stanford. This reflects the power dynamics within society, including its racist and sexist exclusions. In this monoculture with its limited horizon, the diversity of the real world is overlooked, and instead a narrow horizon of experience is reproduced over and over in technical development.

A well-known example is the automatic soap dispenser that dispensed no soap when a person of color held their hand under it.

And such cases will keep occurring as long as companies do not face penalties that seriously hurt their profits. So, for many reasons, we also need to talk about the capitalist structures. Because if a for-profit company gets a contract offer from the US military and the CEO declines, then he won't be CEO for much longer.

In this context, can there still be AI applications that are beneficial to society?

That's conceivable, yes. The question is: if we come up with such an application, will it be profitable enough to sustain a business model? I have my doubts about that.

In France, the tax authorities use an AI application to detect undeclared private pools in aerial photographs. In times of water scarcity, that is relevant.

Hm, I don't know. Wouldn't it be better to put the money into water treatment? Into working greywater systems? I understand the approach. But instead of ending our global dependence on fossil fuels, for example, we train a gigantic model to punish private pool owners. That sounds more like theater to me.

How do we get out of the situation of surveillance and concentration of power?

I wish I had a good answer to this question. I have no ready-made regulatory structure and no technical idea for how to untie this knot of insanely complex problems. I do think it is important that an organization like Signal and its messenger app exist. It gives people the ability to communicate outside of the surveillance apparatus. But of course that is only a small piece of the big picture.

Do you have the impression that politicians grasp the big picture?

My impression is: it is getting better. But IT systems remain something of a foreign language for most people, and that will not change overnight. Education is therefore important – an aspect that is systematically underestimated.

Now you are President of the Signal Foundation. What are your plans?

I think, first of all, I will listen and learn a lot. And then my focus will be on developing some sort of guiding strategy: Where are we going? How do we get there? How can we establish a communication app that works like the other messenger apps but does not take part in the surveillance business model?

So is it about money?

Yes, the financial question is existentially important. We have recently been experimenting with a small voluntary donation model in the Signal app. The feedback is encouraging, and perhaps this is a path we can pursue further. But the question of how, within this corset of surveillance and profit, software can be built that escapes these structures has not yet been answered.

But there are other companies that do something similar.

Yes? Maybe I'm missing something, but I don't see any other massively used software that meets user expectations and isn't funded in some way by a traditional tech industry business model.

Mozilla with its Firefox browser?

They are heavily funded by Google. And we don't want that.

There are people who argue that responsibility should not be shifted to the users, but that a political solution to the problem is needed. How do you see it?

I think it's important that users have a choice. After all, how do we seriously intend to regulate a company like Facebook in its current form? If we take away its data, the business model collapses. No one has yet answered how a Facebook could work without surveillance.

In the EU, large messengers like Whatsapp will in future have to offer interfaces to smaller messenger services – so that users of different apps can exchange messages with each other. Is that progress?

Basically, I think interoperability is good. But what doesn't work is Signal lowering its standards in order to be compatible with messengers like Whatsapp that have lower standards. Whatsapp uses the same content encryption as Signal, but Whatsapp collects metadata: who communicated with whom and when, who is in which groups with whom, profile pictures and much more. Signal doesn't do that. And do we want to hand Whatsapp the metadata of Signal users? Whatsapp, which belongs to Facebook? Absolutely not.


