An AI pioneer says the technology is 'limited' and won't replace humans anytime soon
Jared Perlo, December 27, 2025 at 3:00 AM
Andrew Ng delivers a keynote address at the AI Developer Conference in New York in mid-November. (Joe Jenkins)
NEW YORK — When Andrew Ng talks about AI, people listen — in classrooms, boardrooms and Silicon Valley.
The researcher-turned-educator-turned-investor has become an AI statesman of sorts: he co-founded Google Brain, which was later absorbed into Google’s flagship DeepMind division, now the source of some of the world’s best AI systems, and served as chief scientist of Chinese tech titan Baidu.
In today’s influencer-obsessed information landscape, Ng’s biggest claim to fame might be his status as a “Top Voice” on LinkedIn, an honor the platform gives to a select few handpicked experts; he has more than 2.3 million followers there.
Armed with decades of AI experience, Ng says he remains clear-eyed about AI’s abilities. “The tricky thing about AI is that it is amazing and it is also highly limited,” Ng told NBC News in an interview on the sidelines of his AI Developers Conference in November. “And understanding that balance of how amazing and how limited it is, that’s difficult.”
Over the past few years, generative AI has attracted hundreds of billions of dollars in investment as nearly every major tech company has pivoted towards the industry’s hottest topic. But in the last several months, many have questioned whether the surging investment has created a bubble now at risk of bursting due to persistent issues like hallucinations, AI’s involvement in mental health crises and increased regulatory scrutiny.
Ng is broadly bullish about AI’s upward trajectory, though he is quick to cast doubt on AI systems’ potential to broadly displace humans in the near future. He has repeatedly argued that artificial general intelligence (AGI), roughly defined as AI systems that can match human performance on all meaningful tasks, is a distant possibility — contrary to other AI luminaries who envision AGI emerging in the next few years.
“I look at how complex the training recipes are and how manual AI training and development is today, and there’s no way this is going to take us all the way to AGI just by itself,” Ng said.
“When someone uses AI and the system knows some language, it took much more work to prepare the data, to train the AI, to learn that one set of things than is widely appreciated,” he added.
Ng also has stellar bona fides in the education world. Besides serving as a computer science professor at Stanford University, Ng founded Coursera — one of the world’s largest online learning platforms — and oversees one of the most popular AI-focused education platforms, DeepLearning.AI.
With over a decade of success at the intersection of AI and education, Ng takes a Chef Gusteau approach to AI education, and to coding in particular: much as the fictional chef insisted that anyone can cook, Ng argues that anybody and everybody should code, given advancements in coding tools.
“Some senior business leaders were recently advising others to not learn to code on the grounds that AI will automate coding,” Ng said. “We’ll look back on that as some of the worst career advice ever given. Because as coding becomes easier, as it has for decades, as technology has improved, more people should code, not fewer.”
Many experts have recently asserted that coding is the “epicenter of AI progress” and that AI’s shocking capabilities only become apparent when people use AI tools to code. Those developments have led some to theorize that traditional coding-only jobs will wither with the rise of AI, and early evidence backs up those claims.
“It’s true that I don’t want to write code by hand anymore. I want AI to do it for me. But as the barriers become lower and lower, more people should do it. For example, my best recruiters don’t screen resumes by hand. They write prompts or write code to screen resumes,” Ng said.
“People that use AI to write code will just be more productive, and I think have more fun than people that don’t. There will be a big societal shift towards people who code,” Ng added.
As AI systems become more powerful, Ng is aware that real downsides are emerging — but he thinks today’s risks pale in comparison to AI’s potential upside.
“I think for a lot of AI models, the benefit is so much greater than the harm,” he said.
“The death of any single person is absolutely tragic,” Ng added, referencing recent suicides that allegedly involved the use of AI. “At the same time, I am nervous about one or two anecdotes leading to stifling regulations. That means it doesn’t help save 10 lives, right? It’s a very difficult calculus for the number of people that are getting good mental health support from these systems.”
Instead of what he describes as suffocating regulation, Ng is a strong proponent of laws that demand transparency from leading AI companies, like the recently passed SB 53 in California and the RAISE Act in New York.
“If I had my druthers, if I were a regulator, transparency of large platforms is what I will push for, because that gives us a much better chance of being able to clearly see what problems there are, if any, and then work for their solution,” Ng said.
Ng is also intimately connected with many of today’s private-sector AI leaders: he oversaw Dario Amodei, co-founder and CEO of Anthropic, when Amodei worked at Baidu; briefly taught Sam Altman, co-founder and CEO of OpenAI, at Stanford; and served as a postdoctoral advisor to Ilya Sutskever, a co-founder of OpenAI who left to create a competing, early-stage organization called Safe Superintelligence.
Despite his connections to this Silicon Valley cadre that has announced trillions of dollars of AI infrastructure investment, Ng is quick to contend that part of today’s AI landscape looks like a bubble.
Ng said the earliest stage of creating AI models, referred to as “training” or “pre-training,” “is where a lot of the questions are, where the very real questions are. When will the payoff for all of the capital expenses going into this training, when will they pay off?”
“Whatever happens, it will be good for the industry, but certain businesses might do poorly,” Ng said, referring to the possibility of a bubble collapse.
Instead, Ng sees steady and rising demand for the later stage of AI computation, referred to as “inference,” when users query already-trained AI systems. “Inference demand is massive, and I’m very confident inference demand will continue to grow.”
“We need to build a lot more data centers to serve this demand,” Ng added.
Ng was an early advocate of the now-dominant approach to building advanced AI models, in which AI companies train them on powerful computer hardware called graphics processing units (GPUs), chips once used mainly by video-game enthusiasts. NVIDIA’s status as the world’s most valuable company stems almost entirely from its market-leading GPUs, which now power many of the largest AI data centers.
Looking ahead to other areas of AI, Ng said the public should be paying more attention to voice-related AI. “I think people underestimate how big voice AI will get. If you look at Star Trek movies, no one envisioned everyone typing on the keyboard, right?”
Ng also said the public should expect continued rapid progress in the field of “agentic AI,” a term he helped popularize that refers to AI systems capable of performing many actions autonomously.
“During the summer of last year, a bunch of marketers got hold of the ‘agentic AI’ term and slapped it as a sticker on everything in sight, which caused the hype to just take off incredibly rapidly,” Ng said.
“I’m very confident that the field of agentic AI will keep on growing and rising in value. I don’t know what the hype will do. That’s hard to predict. But the actual commercial value will keep rapidly rising,” he added.