'TCS Hiring 2 Bus Loads Of Kids Every Year Won't Happen'

Last updated on: December 17, 2025 14:24 IST

'4 lakh to 5 lakh people graduating in computer science getting jobs in software companies will not happen.'


Sundar Pichai: There are elements of irrationality in the AI boom.

Jared Kaplan: AI autonomy could spark a beneficial 'intelligence explosion' or be the moment humans lose control.

Geoffrey Hinton: AI may be our successor, not just a tool.

Dario Amodei: AI could eliminate half of all entry-level white-collar jobs.

Bill Gates: Tons of investments in AI will be dead ends.

These are some of the observations by tech leaders who have invested heavily in Artificial Intelligence.

How accurate their forecasts prove to be, only time will tell.

Professor B Ravindran is the head of the Department of Data Science and Artificial Intelligence (DSAI) (external link), the Wadhwani School of Data Science and Artificial Intelligence (WSAI) (external link), the Robert Bosch Centre for Data Science & Artificial Intelligence (RBCDSAI) (external link) and the Centre for Responsible AI (CeRAI) (external link) at IIT Madras.

"The hype today is very strong on the Frontier models; AI is going to be super intelligent and things like that. That hype will slow down because I don't think they will be able to deliver to the extent of the hype," Professor Ravindran tells Shobha Warrier/Rediff in a must-read interview.

"But so many other frontiers like AI augmented search or coding are delivering on the ground. We are seeing products of value across the board, and not in one niche area. That's why I feel there is no way this bubble will completely burst."

As a professor of AI, how do you look at the obsession of tech companies all over the world with investing unrealistic amounts of money and effort in AI?

I am not sure about the unrealistic part of it. But there is hype around the drive towards AI Super Intelligence or Artificial General Intelligence.

Somehow, they feel that is the way to raise more and more money and race towards more capable AI products.

Given the current state of technology and the target they want to reach, I think the investment they are doing is required because the current technology is not that efficient.

To reach the super intelligence level they are talking about, I don't think the technology we are following right now is the best way.

The real question is, is this truly the most effective way to generate value out of AI? I have serious doubts about it.

I am not sure how much true value you are going to create by building one super intelligent AI model.

There are enough AI technologies right now. For example, if you want to build an AI technology that will help every student in the country, we already have 95 per cent of the AI capability needed to do so.

What needs to be done is investing in engineering and last-mile connectivity.

But pushing the boundary to get more and more powerful AI? That is probably not where we should be investing, if you are looking at generating true value from building products with AI.

I see this as the problem right now.

The mad rush by these big companies to build more powerful AI may not be warranted, especially if you want to build solutions that will work on the ground today.

Nvidia CEO Jensen Huang said tech magnates are pouring in trillions of dollars not really knowing what lies at the end of the AI marathon.

That's what I said earlier. We are not even sure whether the technology will take us to Super Intelligence.

If we want to attempt it, yes, we need all the investment. But you don't know whether this will succeed.

As an AI scientist, what do you see at the end of this AI marathon? Where will it lead to?

All my adult life, I have been an AI researcher, trying to make machines that are more and more capable of solving problems we normally think require human intelligence.

I believe that if we continue to push the boundaries, we will get machines that are more and more capable of solving things.

If you ask me what will happen 100 to 200 years from now, I am not sure.

Not that far! Just a decade from now, or even after five years?

I was just thinking...

Decades from now, we will have more efficient AI models.

I am not completely convinced that we would have cracked the human intelligence levels uniformly.

What is going to happen is, we have to start asking harder and harder questions about ourselves as to what human intelligence is.

For example, the Turing Test (a method to evaluate a machine's ability to exhibit human-like intelligence through conversation) is obviously a very weak test.

Alan Turing proposed the test in 1950 to see whether machines can imitate human beings in the course of a conversation.

Many machines can do it today.

Take ChatGPT, for example. It can be your friend, give you emotional support, answer your questions, and so on.

Human beings treat them as buddies and start confiding in them.

So, that boundary has been crossed.

Do these machines have emotional intelligence?

They are connecting emotionally with human beings. That's what is worrying me. We are investing our emotions in them.

We are kind of reflecting ourselves on the other side.

We are able to somehow read empathy into an algorithm that is merely producing words.

This emotional connect with an AI model is very scary.

The next question is, are human beings adept at solving every problem? You require lots of experience, practice, teaching, training and so on before you become good at solving problems.

What we are looking to build in AI is not one model that is good at everything, but different AI models that are good at different things. For example, one model may do mathematics very well but not be good at politics.

It is like human beings. Some human beings may be artistic while others may be good at sports. Some people are good at mathematics. Some are good at politics. Some people are extroverts while others are introverts.

Similarly, you need a variety of AI models, good at different things, and not one that is good at everything.

Jared Kaplan, the chief scientist at Anthropic, said that by 2030, human beings must decide whether they are willing to take the risk of AI systems growing more powerful than humans, and of losing control of the technology.
Do you see the risk of AI systems improving and becoming more powerful than human beings? Will it not be catastrophic?

Well, AI systems are already more powerful than humans.

For example, no human being can beat AI at chess. AI beat Garry Kasparov in the 1990s! For the last 30 years, no human being has been able to beat the best AI at chess.

No human being can digest the amount of knowledge that AI can digest today.

There are many, many tasks across the board, like recognising images.

So, can AI become better than humans? Yes.

AI is better than humans, but the improvement is very uneven.

Jared Kaplan also spoke about humans losing control of the technology.

That's a different matter. We still have control over the technology.

We also can still decide whether we want to give away the control to AI or not.

He was talking about human beings making a decision on whether they are willing to take the risk of losing control of the technology...

I will give you an example. Would you give an automated software system -- not necessarily AI -- the power to make a decision on whether to press the nuclear button? No.

Whenever I give full power to an automated system to take a decision, I run the risk of something catastrophic happening.

The same thing can happen with AI too, and the chances of something like that happening with AI are higher.

I don't think we should ever give up our agency where the risk is high.

Most tech leaders of major AI companies are painting such scary scenarios for the future. What kind of future do you see?

I would say AI is going to help us. There is no question about it.

AI is a wonderful technology. And it is not a single technology. You should understand that AI doesn't mean LLMs. AI doesn't mean ChatGPT.

AI as a technology has a lot of different things, and the technology has changed our lives significantly already.

On a lighter note, the quality of e-mails I get from students these days for internships has improved significantly because they are using the help of AI!

I am not saying it is bad. If you have a tool, you use it.

It is like moving from writing using a fountain pen to tapping a keyboard.

You know, this technology has been around since the 1950s. Back then, people used to write stories in the newspapers about AI going and exploring other worlds on behalf of humans. They said it would be AI that would go to other planets, not humans.

This was not science fiction. These were articles that appeared in the newspapers.

So, the hype has been around for a few decades.

But the future is going to depend on how effectively we are going to use AI to improve our lives. It will have a very positive impact.

AI is not going to go away anywhere.

As with any technology, there will be some negative impact also.

You said that with any technology there would be some negative impact. It is said that AI is already costing many jobs and, as per reports, it will wipe out 50% of all entry-level white-collar jobs. Will this not be disruptive for society?

People are only talking about one side of the issue.

Yes, jobs are going away. But new jobs are also being created.

I will give an analogy. TikTok created a new category of celebrities and influencers even in rural India. Not just that, a lot of people started making their livelihoods out of it. They are not a small number, perhaps a couple of million people.

Similarly, AI will create new opportunities which you cannot even imagine now.

AI will put power in the hands of people, and they will start creating new jobs.

Whenever this kind of enabling technology comes up, there will be job displacement, that is, jobs moving from one sector to another.

Job loss also is a part of it.

Are you not worried about the job losses that are happening now?

I am worried for the people who are in their forties as they really have to re-train themselves. They are not yet at a level of expertise in their careers where they can say they are indispensable and cannot be replaced. But those at the very top will stay.

I am not worried for those who are growing up in this age as they will be the ones who will lead the new economy.

There will be job displacements, of course.

Programming as a job is not going to go away.

But TCS coming to hire two bus loads of kids every year will not happen.

The kind of expertise that will be taught in college also will change.

It will be more about how you structure programs, and how you lead a project.

This will be a small number of people, maybe a few lakhs.

4 lakh to 5 lakh people graduating in computer science getting jobs in software companies will not happen.

At the same time, what will happen is the bar to setting up a software company will come down. You don't need the size of a TCS or a Cognizant to be providing this kind of service with AI.

You will see more small companies providing niche coding services.

IMAGE: Professor B Ravindran. Photograph: Kind courtesy Professor B Ravindran

You mean, you see people becoming more entrepreneurial?

Yes, they can actually build products at a much lower scale. They will not need the kind of investment you see today.

But you cannot fully anticipate the new ecosystem that will get created. A lot of changes are taking place.

And these changes are happening very rapidly...

Yes, very, very rapidly. We are already seeing new kinds of business ideas and new business models coming up.

You will see a big shakeup happening. The next few years are going to be very interesting.

My job is the one that is severely under threat today! The kids may not need a teacher anymore. They can go online and ask an AI model about any subject.

Teaching as a whole is going to change drastically.

At the same time, tech majors are losing sleep over the AI bubble bursting. Do you see the AI bubble bursting like the dotcom burst?

I tell people who talk about the bubble that AI has gone through many 'winters'. They call them AI winters.

In the late 1950s and early 1960s, the AI bubble 'burst'. And for the next 10 years, nobody was investing in AI. Those were bad times to be working in AI because there was very little money.

Again, in the mid-1980s, AI picked up. But by the 1990s, the AI bubble burst.

This has been happening every 10 years.

My guess is we will not have an AI bubble burst like that now.

We will not have another AI winter. But I wouldn't mind an AI autumn though!

Why are you so optimistic that the disruptions we saw every decade will not happen now, and there will not be another AI winter?

The hype today is very strong on the Frontier models; AI is going to be super intelligent and things like that.

That hype will slow down because I don't think they will be able to deliver to the extent of the hype.

But so many other frontiers like AI augmented search or coding are delivering on the ground.

We are seeing products of value across the board, and not in one niche area.

That's why I feel there is no way this bubble will completely burst. I don't see an AI winter now.

Feature Presentation: Ashish Narsale/Rediff
