'It is the first technology in history which is not just a tool, it is an agent.'
Yuval Noah Harari on the dangers of AI.
On a warm evening over the weekend, a mix of people, from school students and AI enthusiasts to entrepreneurs and actors, assembled to hear superstar writer Yuval Noah Harari on the lawns of the five-star The Lalit near Mumbai's airport.
Harari was in the city to promote his latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI, which is already a bestseller, a fact one could sense from the copies of the book being carried by people in the audience.
The person sitting beside me had two.
Those in attendance had paid a minimum of Rs 1,250 to hear the Israeli writer whose books have catapulted him to international stardom.
Seated on stage, surrounded by a fawning audience, the unnaturally lean Harari, who makes annual vipassana trips to Mumbai, was in conversation with neuropsychiatrist Dr Rajesh M Parikh and actor Aamir Khan.
Khan said he had bought many copies of Harari's first book, Sapiens, and gifted them to friends and visitors to his home.
"If humans are so smart, why are we so stupid?" was among the first of Harari's many provocative and make-you-think-kind of questions which lie at the heart of his new book.
Here are some illuminating takeaways from Harari on the dangers of AI and why an information diet might do us some good.
1. 'AI might escape our control and enslave or annihilate us'
We have conquered this planet. We are the dominant species. We have reached the moon, split the atom and yet time and again, we are making dumb, self-destructive decisions.
We are on the verge of ecological collapse because of mismanagement, and we are now creating an extremely powerful new technology, AI, which might escape our control and enslave or annihilate us.
It is the first technology in history which is not just a tool, it is an agent. It can act, it can learn, it can make decisions and invent new things.
And instead of uniting to deal with it, we are in the process of destroying the global order. International tensions are rising fast and we are on the verge perhaps of a third world war.
Why are we so self-destructive?
The problem is in our information.
If you give good people bad information, they will make bad decisions.
Why do we have so much bad information when we have spent thousands of years developing sophisticated information technology?
Modern societies are as likely as Stone Age tribes to fall for mass delusions and psychosis. Millions of people believe harmful stories about the world and themselves.
In the beginning of the Internet age, the main metaphor was the world wide web which was meant to connect us and spread truth, freedom and democracy. But in 2024, the main metaphor is the cocoon.
The web closed in on us individually and collectively, so each person or each group of people is enclosed inside an information cocoon and doesn't see reality.
2. 'This is the end of the age of self because it has been hacked'
The self is disintegrating. Networks have learnt how to hack the human self. On social media, algorithms have learnt how to capture human attention. They have hacked the self to manipulate human beings.
Throughout history, outside forces have wanted to know us better -- dictators or the church or the merchant.
But nobody outside could get to know us inside because most of the time they did not even get to see us.
Even in totalitarian States like the Soviet Union, it was technically impossible for the KGB to follow each person all the time, but today it is easy.
You don't need agents; you have smartphones, cameras, drones, microphones and computers, and you can place an entire population under surveillance all the time.
And you no longer need human analysts to analyse secret files; you have AIs. It is becoming technically possible to do this all the time.
For the first time in history we are in a situation where an external authority -- a government, a religious organisation or a corporation -- knows us better than we know ourselves.
This is the end of the age of self because it has been hacked.
3. 'AI is taking over bureaucracy'
Millions and billions of AI agents will increasingly replace humans in the bureaucracy.
AI will decide whether you get a loan from a bank or admission to a university.
In the war between Israel and Hamas, AI increasingly decided which places to bomb in Gaza. The shooting is still done by human beings, but the decisions on what to shoot are made by AIs.
4. 'AI poses an enormous danger to dictators'
It is difficult for AI to take control of a democratic country because there are many different centres of power and checks and balances.
In fact, AI poses an enormous danger to dictators. The greatest danger for every dictator in history wasn't a democratic revolution, but that one of their subordinates might kill them or turn them into puppets.
Dictators are always paranoid.
5. 'These are warnings, but there are enormous opportunities'
Nexus is not a prophecy of doom. I emphasise again and again that the future is not written anywhere. The future is the outcome of the decisions that all of us are making today. I issue a lot of warnings in the book on the assumption that if people heed these warnings and make good decisions, we can avoid the worst-case scenarios and develop AI to create a wonderful world.
There are enormous opportunities in AI. Take healthcare, where millions of people who cannot access a doctor could have AI doctors diagnose diseases, potentially even better than human doctors.
At present, the decisions about the development of AI are made by a tiny percentage of humanity who mostly live in two countries, the United States and China.
We need to get more people into the conversation and this is what I'm trying to do with the book as a kind of introduction to the AI revolution.
People must understand the magnitude of the change and the dangers. We are flooding the world with millions and billions of new inorganic agents that in certain fields will be more intelligent than us, but we can't predict how they will develop, what they will learn, what books they will write.
This year, the Nobel Prizes in Physics and Chemistry basically went to AI.
How long before the Nobel Prize in Literature goes to AI? What would be the meaning of that? It's an open question to which I don't know the answer.
For thousands of years, we lived in a culture created by human minds. Poetry, mythology, movies came out of the human mind, but very soon we will live in a culture where much of it is the creation of a non-human intelligence.
I don't talk about AI's positive potential because we have enough entrepreneurs and investors who do that, and they control immense resources.
Therefore, it is the job of historians, philosophers and artists to issue some warnings and say: We understand the enormous positive potential, but don't forget about the dangers.
The intention is to make sure that we are aware of the dangers and avoid them.
6. 'People need to go on an information diet'
Just as with food, people need to go on an information diet. As they are mindful about what they put into their bodies, they should be equally mindful of what they put into their minds.
Information is the food of the mind. If you eat too much, you don't have time to digest.
Similarly, if you flood the mind with more and more information, and especially junk information, you don't give the mind any time to think, to meditate, to process.
Feature Presentation: Aslam Hunani/Rediff.com