«Everybody today is surely smarter than Aristotle»
Sendhil Mullainathan, photographed by Lukas Leuzinger.

People are too quick to attribute intelligence to AI, says Sendhil Mullainathan. At the same time, we are only using a tiny part of its potential.


Sendhil Mullainathan, could artificial intelligence come up with a better first question for the interview than this one?

I tend to differentiate between two types of activities. One type is where the algorithm generates and the human sifts through and picks. Those activities are pretty interesting, because even if the algorithm provides no value half the time, you find something useful. Many of the areas where large language models (LLMs) like ChatGPT have been successful have actually been contexts of this type. For example, people know that it helps with coding. But if you look at the data, most of the code suggestions that it makes are rejected. Some of the suggestions don’t even make any sense. Still, coders love it, because every useful suggestion saves them 10 minutes. That’s worth it. Similarly, the model could give you suggestions for questions, and some of them might be helpful.
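The economics of this generate-and-filter pattern can be made concrete with a back-of-the-envelope sketch; the acceptance rate and review cost below are assumptions, and only the ten-minute figure comes from the interview:

```python
# Illustrative arithmetic with assumed numbers: even when most
# suggestions are rejected, the occasional accepted one can dominate.
ACCEPT_RATE = 0.2              # assumption: 1 in 5 suggestions is useful
MINUTES_SAVED_PER_ACCEPT = 10  # the figure quoted above
SECONDS_TO_REVIEW = 15         # assumption: cost of reading a suggestion

def expected_net_minutes(n_suggestions: int) -> float:
    """Expected minutes saved, net of review time, over n suggestions."""
    saved = n_suggestions * ACCEPT_RATE * MINUTES_SAVED_PER_ACCEPT
    spent = n_suggestions * SECONDS_TO_REVIEW / 60
    return saved - spent

print(expected_net_minutes(100))  # 200 saved - 25 spent = 175.0
```

Even at a 20 percent acceptance rate, the time saved swamps the cost of reviewing rejected suggestions, which is why a mostly wrong generator can still be worth using.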

 

What’s the second type of activity?

The second category is where humans not only take suggestions from the model but let it do things automatically. That’s a much higher bar. To come back to your example, the model would decide on its own which questions to ask. And half the time, it’s going to produce garbage, or at least very banal questions.

 

You have founded Dandelion, a company that aims to improve healthcare using AI. Where do you see the potential in this area?

I think that AI is going to be very transformative. Most of the AI startups in healthcare are about automation. They develop an algorithm that does what a radiologist can do, for example. That’s valuable, but the value is not that high, because we’re limited by our current understanding. AI in healthcare has unbounded potential when algorithms can start helping us make better discoveries and understand diseases better. That’s what we’re trying to do with our company.

«AI in healthcare has unbounded potential when algorithms can start helping us make better discoveries and understand diseases better.»

 

How?

Let’s assume that some people who had a tumor responded very well to treatment, while others didn’t. I have no idea why, but predicting it from MRI images could be a task for an algorithm. If the algorithm can tell the two groups apart 60 percent of the time, that’s already great, because humans can only guess. Solving discovery problems like this is going to be the future of healthcare.
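A minimal sketch of what such a discovery task could look like, assuming features have already been extracted from the MRI images. The data below is synthetic; the point is only that any cross-validated accuracy reliably above the 0.5 chance level would be a lead worth pursuing:

```python
# Synthetic stand-in for the discovery task: predict treatment response
# from MRI-derived features. Real features would come from the images;
# here one feature carries a weak signal and the rest are noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 200, 20
X = rng.normal(size=(n_patients, n_features))    # stand-in image features
y = (0.8 * X[:, 0] + rng.normal(size=n_patients) > 0).astype(int)  # responded?

# Cross-validated accuracy: anything reliably above 0.5 (guessing)
# would already be a lead, per the 60-percent point above.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"accuracy: {scores.mean():.2f} (chance is 0.50)")
```

With real imaging data, the accuracy itself would matter less than what the model attends to, since that is where the discovery lies.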

 

How successful has your company been so far?

As we started to do research projects, we quickly realized a challenge. If you want to automate a radiologist, it’s very easy to get the data: here’s the X-ray, here’s what the radiologist said. If you want to predict something, you need the X-ray and you need to be able to follow the patient over time. That data was very hard to get. It sits inside health systems, but those systems are very hard to work with. This is a problem for the entire sector. So we started to form partnerships with several health systems to get all their data. We think a lot of people are going to try to do AI in healthcare, and the one thing they will all need is data.

 

This data, however, is quite sensitive. In Switzerland, we don’t even have electronic patient records yet, partly because of concerns about privacy. Enthusiasm for digital technologies seems to be limited in this area.

Privacy is probably the first concern. So we put a lot of resources into anonymization. What’s more, we don’t actually give data to anybody; the data will always sit with us. People will be able to train algorithms on it, and the only thing they’ll be able to take away is the algorithm.
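A toy sketch of that access model, with hypothetical names: researcher-supplied training code runs against data that never leaves the host, and only the fitted model comes back out:

```python
# Toy sketch with hypothetical names: training code runs inside the
# boundary; only the fitted model crosses it, never the raw records.
import numpy as np
from sklearn.linear_model import LogisticRegression

def _load_private_records():
    """Stand-in for the real patient data; never exposed directly."""
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] > 0).astype(int)
    return X, y

def train_inside_enclave(train_fn):
    """Run the submitted training function; return only the fitted model."""
    X, y = _load_private_records()   # data stays inside this boundary
    return train_fn(X, y)            # weights leave, raw records do not

# What an outside researcher would submit:
model = train_inside_enclave(lambda X, y: LogisticRegression().fit(X, y))
print(model.coef_)                   # the only artifact that leaves
```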

 

Another concern is ethics. If I die of a cancer that AI failed to identify, who’s to blame?

We have to differentiate between the training of the algorithm and its deployment. We’re only involved in the training. The only thing the training does is generate new information that we didn’t have before. We have to be super careful about how we deploy that information. As long as the algorithm only gives suggestions, we’ll all feel on safer ground, because we know who’s responsible: the doctor has to make the final call. All of the concern comes with automation. The more the field is obsessed with automation, the worse off it will be. The reason the field is obsessed with automation is pure economics: automation has an immediate effect on the bottom line. That’s the thing we should resist, because it’s kind of a stupid, short-term activity. It’s just a very unambitious view of AI.

 

How much of AI’s potential are we using right now, in percent?

I’ll give two numbers. Right now, we’re probably only using 10 percent of AI’s current technical capacity. But that capacity is probably only 10 percent of the full potential, so overall we’re at something like 1 percent. Because of ChatGPT, people imagine that the development part is done. We’re just getting started. It’s comparable to the internet: in 1998, people said the internet bubble was over. In fact, Web 2.0 hadn’t even arrived yet.

 

So, you would say that in the hype cycle, AI has peaked.

Yes, we’re about to enter the trough before it takes off. People are so fixated on automation. But that’s not the upside – the upside is enhancing human capability. In every endeavor, the biggest gains in society happen when we can push the frontier of what we can do. Automation just allows us to do what we’re already doing more cheaply. That’s fine, but pushing the frontier of what we can do is transformative. Enhancing human capability has got to be the main thing that we focus on.

«People are so fixated on automation. But that’s not the upside – the upside is enhancing human capability.»

 

Couldn’t AI also diminish human capabilities? Just as the spread of navigation apps like Google Maps has led to more people getting lost without them, do you see a danger of people getting more stupid because of AI?

Absolutely. We’re already seeing it. Think of all these AI-generated articles on online news sites. They publish some crappy articles; readers then don’t read as carefully; they get a little dumber; the demand for well-written articles goes down. That’s definitely a tendency. This is why, when people ask me about the future of AI, I reply: it’s not a single future – we get to choose the future. We decide as a society: are we going to use these algorithms in ways that make us dumber, or are we going to build them in ways that make us smarter? There’s nothing about social media, for example, that meant it had to end up in the bad place it did. When I was a kid, my dad was in the US, and we were in a village in India, and we didn’t have a phone or anything. He would send little cassette tapes, and we would play them and listen to his voice. Now, my mom calls my grandmother on FaceTime, and they get to talk face to face. It’s fantastic! At the same time, you see what happens on social media. The same technology can realize itself in many ways.

 

What has your research on AI taught you about intelligence?

Good question. One thing I’ve learned from earlier AI research, going back to the 1970s, is that the stuff we thought was the highest kind of intelligence very often turned out to be the easiest to automate, and the stuff we thought was the lowest kind of intelligence turned out to be the hardest. If someone had asked in 1980: what’s more indicative of intelligence, being able to play chess at a Grandmaster level, or looking at a photo and telling me if there’s a dog in it? We all would have said: anyone can tell you if there’s a dog in the photo; very few people can play chess at a Grandmaster level. Flash forward to 2000: algorithms destroyed Grandmasters in chess, but we were no closer to telling whether there’s a dog in the photo.

 

What’s your explanation for this?

There are higher-level processes that are close enough to formal systems that we can write down the rules. Chess has rules, and because we can write the rules, we could program it. Then there are lower-level processes that are just intuitive and instinctive. That inversion turned out to be quite fundamental. And if you think about it, that story has been happening for a long time. From the Byzantine Empire all the way until 1900, being an accountant was considered among the most skilled, intelligent activities. Now that we have Excel, everybody can be an accountant. In my own research, the thing I’ve learned is that people are too quick to attribute intelligence. When you use ChatGPT, it’s very hard not to imagine that it is intelligent. When you see it talking, putting words together, you think: «Oh, that thing must be smart.» Sure it is. But it’s not at all smart in the way that a human mind is smart.

 

What do you mean?

This morning, I asked ChatGPT this classic problem: there are six glasses. Glasses 1, 2 and 3 are filled with water. Glasses 4, 5 and 6 are completely empty. If you can only move one glass, how do you make them alternate between full and empty? ChatGPT immediately told me the solution: you pick up glass 2, pour the water into glass 5 and put it back down. Then I asked it the same question, but this time I said that the glasses were filled with pudding. And it just went off into la-la land. That is actually the hardest part of intelligence.
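The state logic of the puzzle is trivial to check mechanically, which is what makes the pudding variant telling: the hard part is not the bookkeeping but knowing that pudding does not pour like water. A small sketch of the water case:

```python
# The water version, checked mechanically: True = full, False = empty.
glasses = [True, True, True, False, False, False]  # glasses 1..6

# The move ChatGPT got right: pour glass 2 into glass 5.
glasses[1], glasses[4] = False, True

def alternates(states):
    """Do full and empty glasses strictly alternate?"""
    return all(states[i] != states[i + 1] for i in range(len(states) - 1))

print(glasses)              # [True, False, True, False, True, False]
print(alternates(glasses))  # True
```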

 

How smart are you?

I have a different view of intelligence than most people. Intelligence is not about the hardware. It’s about the software that you upload to your mind over time. I feel I’m much more effectively intelligent today at 52 than I was at 42 or 32. I constantly try to upload and increase the software I have, like a carpenter who has more and more powerful equipment. I think of intelligence as a constructed object, not as a given object.

 

So you would say you’re not naturally born intelligent or dumb?

Probably the single most important characteristic is your capacity to recognize your own thoughts and be willing to improve them – some people call this metacognition. Many people, however smart they were born, are held back by the fact that they take their own thoughts too seriously. If you realize that your thoughts are things you don’t really know and can improve, anyone can become super intelligent. If you think about it, everybody today is surely smarter than Aristotle. Aristotle didn’t have access to training in probability, so he just couldn’t reason probabilistically – probability wasn’t invented until almost two millennia later. Also, Aristotle didn’t have access to the scientific method.

 

But he was probably quite good at metacognition.

He probably was. But he just didn’t have many tools. We live in an era where all I have to do is be born, and for 12 years society instills in me tools that Aristotle couldn’t dream of. He didn’t know what chemistry was. Think of how much chemistry a baker today knows that helps him bake. You could bake a better cake than Aristotle ever could.

 

How did you decide to become an academic?

I’m an intellectual fiddler. I have a lot of curiosity, but it’s a very impatient curiosity. I enjoy learning in a way that lets me fiddle, push the frontier and explore the unknown. I enjoy not understanding something and then seeing if we can chip away at the confusion and get somewhere. That’s one of the frustrations I have with a lot of my students. Academia has become so professionalized. People just write papers. You can write many papers and be successful without ever confronting confusion. But I think as a field and as a society, we move forward when academics go into the confusion and take the risk of confronting it, knowing that in the end, the odds are you will stay confused. I like being in the confused area.

 

Is this frustration with academia part of the reason why you started some projects outside of it?

Yes. Academia right now is not balanced. At its best, it has porous walls with the world: it informs how businesses are run, but it is also informed by them. Physics was developed when Archimedes was building catapults and working out the principle of the lever. Thermodynamics was invented when people were building engines. Almost all breakthroughs happen through a back and forth with the world. Today, academia has gotten a little too insular. People want their colleagues to think: «Oh, they’re very smart.» But our colleagues don’t dictate whether we’ve done something useful – the world does. In too many disciplines, we’ve decided it’s okay if we all agree. That’s part of why I’d like to get more experience doing other things: to understand stuff.

 

Do we actually need prestigious universities anymore? With AI, remote work and so on, that knowledge could be gathered and distributed in totally different ways.

I do think we need universities, though maybe not at the scale we have. When I got tenure, I spent a long time thinking: «Why do I have tenure?» It’s not because I work hard. I do not work as hard as my parents did. In my view, tenure is not a privilege, it’s a responsibility. The deal academics have with society is this: most people deal with everyday life, but we need a group of people with a time horizon of 100 years. Those people cannot have incentives in the moment. The minute you have incentives as an academic, you ruin the contract. You can’t be worried about what your colleagues think of you; you can’t be doing the safe thing. If you look at the history of humanity, pretty much everything that’s happened is the result of some research that then sparked something in the world. We cannot give up this part of research. I don’t love that some of my colleagues don’t explore as much as they should. But the contract is a good contract.
