I’ve written several blogs about AI, and looking back at them I notice a running theme: it can be very useful, but at an increasingly high cost in the resources it requires. I think a lot about this contradiction.
One view of AI is that it should keep growing so that it will ultimately be as smart as or even smarter than humans — what’s been called artificial general intelligence (AGI). In that scenario, AI will become an “everything machine,” able to solve any problem better than a human. The definition in Wikipedia is that an “AGI system can generalize knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.” The largest AI companies in the U.S. are committing vast amounts of resources to the goal of achieving AGI. They believe they will ultimately provide an AI infrastructure that everybody will use for everything.
I believe that the quest for AGI is not only foolish but ultimately harmful. Not harmful in the way the so-called AGI doomers believe — that computers will become so superior to humans that they will destroy us as unnecessary impediments. As I wrote in a recent blog, the biggest harm is AGI’s insatiable need for land, water, energy, and capital, which is already hurting localities where new AI data centers are being built.
AI tech journalist and author Karen Hao favors a different approach. “When you have highly curated small data sets, you create very powerful AI models … My vision for AI development in the future is to have more small task-specific AI models that are not trained on mass datasets … and therefore need only small amounts of computational power and can be deployed in challenges we actually need to tackle: for example, mitigating climate change by integrating more renewables into the grid; improving healthcare; and doing more drug discovery,” said Hao.
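A task-specific model’s computational footprint can be strikingly small. As a deliberately tiny stand-in for the kind of model Hao describes, here is a least-squares fit written in plain Python; the dataset, afternoon temperature versus peak feeder load on a distribution circuit, is invented for illustration, but the pattern of small curated data, small model, useful answer is the point.

```python
from statistics import fmean

# A small, curated dataset (invented for illustration):
# afternoon temperature in °C vs. peak feeder load in kW.
temps = [18, 21, 24, 27, 30, 33]
loads = [210, 228, 251, 270, 294, 312]

# Ordinary least squares for load = a * temp + b.
# No mass dataset, no GPU, no data center required.
mx, my = fmean(temps), fmean(loads)
a = sum((x - mx) * (y - my) for x, y in zip(temps, loads)) / sum((x - mx) ** 2 for x in temps)
b = my - a * mx

print(f"predicted load at 29 °C: {a * 29 + b:.0f} kW")
```

Real grid and healthcare models are of course far richer than a one-variable regression, but they sit much closer to this end of the spectrum than to a frontier-scale everything machine.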
Getting Real
A panel discussion, “AI and the Future of Work,” was held at the City University of New York Graduate Center, featuring two Nobel laureate economists, Daron Acemoglu of MIT and Paul Krugman of the CUNY Graduate Center; Danielle Li, Professor of Management at MIT; and Zeynep Tufekci, Professor of Sociology and Public Policy at Princeton. This panel, with its mix of specialties, was able to offer a meaningful high-level overview of some of the technical and social issues surrounding AI, specifically its impact on work.
Acemoglu started the discussion by saying, “AI is everywhere, deservedly, because it's going to be a transformative technology … but the future of AI is shaped by the decisions that we make. There are very different directions of AI, and which direction we choose is going to have great consequences. The issue is about the direction of technology, and for that, you need a prospective way of thinking about policy, which is where do we want this technology to go and how can we ensure it?”
Looking back on my engineering days, and on all of the latest and greatest I have watched as an editor, I have a lot of thoughts about what is happening now, and I’d like to share some of them along the way.
Tufekci provided useful guidance about where we want the technology to go. “So, look at where it really does work, coding, for example. Coding is a verifiable domain; when you use the AI to create code, you can check if it works.” On the other hand, there are areas, such as medicine and law, where a wrong result can create serious liability.
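Tufekci’s point is worth making concrete. Unlike a medical diagnosis or a legal opinion, a piece of AI-generated code can be checked mechanically. In the minimal sketch below, the helper function and its checks are hypothetical, invented for illustration: suppose an assistant produced this utility; a few assertions tell you whether it actually works.

```python
# Suppose an AI assistant produced this helper (hypothetical example).
def dedupe_preserving_order(items):
    """Return items with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Verifiability in practice: we don't have to trust the model's answer,
# we can test it directly.
assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserving_order([]) == []
assert dedupe_preserving_order(["a", "a", "a"]) == ["a"]
print("all checks passed")
```

That mechanical check is exactly what medicine and law lack; a confident wrong answer there cannot be caught by an assert statement.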
AI, Automation, and Workers
As an engineer, I love delving into automation technology. It can be a valuable aid to workers, handling many of the most burdensome and dangerous tasks and improving both the quantity and the quality of the output. At the same time, I worry that the level of sophistication AI brings to automation will eliminate more and more jobs.
Acemoglu, however, suggested that “there is a future where we use AI with a very different architecture — what I call pro worker AI — and that could actually amplify worker capabilities.”
He offered an example of how AI could amplify a worker’s abilities rather than replace the worker. “Imagine an electrician, an occupation that's very important and will become even more so. An electrician needs to know a lot of practical electrical engineering — a huge amount of information about all sorts of electrical machinery.” Although automation will make things run more smoothly in general, unexpected problems will always pop up, and the more highly automated the processes become, the harder it will be for an average electrician to have the detailed knowledge to deal with them; it takes a long time to accumulate the necessary experience. “Most electricians are too novice to be able to deal with this at an expert level,” said Acemoglu. “But imagine you have an AI tool that is trained on all the relevant electrical engineering, is knowledgeable about all the electrical machinery, and has been trained on high-quality data from the best electricians around the world troubleshooting the most difficult problems. This is a very easy tool to develop, and for a few million dollars, some companies have already done prototypes of it.” In that scenario, AI would be amplifying workers’ capabilities rather than replacing them.
“The same thing could work for nurses, educators, plumbers — all sorts of workers. That's the kind of pro-worker AI we're talking about.”
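At its core, the tool Acemoglu describes is a matching problem: take a worker’s description of a fault and surface the most relevant guidance from a curated body of expert experience. Here is a deliberately toy sketch in Python; the corpus of expert notes and the keyword-overlap scoring are invented for illustration, and a real pro-worker tool would use a trained model over far larger, higher-quality data.

```python
import re

# A toy "expert notes" corpus (invented for illustration). A real tool
# would be trained on curated data from the best electricians.
EXPERT_NOTES = [
    "Breaker trips immediately on reset: look for a dead short in the branch circuit.",
    "Breaker trips after several minutes under load: suspect an overloaded circuit or a failing breaker.",
    "Lights flicker when a large appliance starts: check for a loose neutral at the panel.",
    "Outlet is warm to the touch: check for a loose termination or a back-stabbed connection.",
]

def tokens(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_matches(problem, notes, top_k=2):
    """Rank expert notes by crude keyword overlap with the reported problem."""
    query = tokens(problem)
    return sorted(notes, key=lambda note: len(query & tokens(note)), reverse=True)[:top_k]

for note in best_matches("breaker keeps tripping under load", EXPERT_NOTES):
    print(note)
```

The design choice matters: the model amplifies the electrician, who still does the physical diagnosis and repair, rather than attempting to replace the trade.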
The moderator then asked Li what she saw as beneficial uses for AI. “I guess the piece of advice I would give is we should go for the lowest-hanging AI fruit, which is, we should think about socially valuable problems that humans have sucked at. In my opinion, we should be putting a lot of effort into AI for better science — thinking about ways to speed up innovation, drug discovery, to have better energy technologies.”
Final Thoughts
I love the following critique of AGI by Paul Krugman: “As more and more of the data that these things are being trained on is itself AI-generated, there is this slop apocalypse story, where basically AI just chokes on its own waste products.”
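You can caricature that feedback loop in a few lines. In the toy below, entirely invented and not a model of any real system, each generation fits a simple distribution to its data and the next generation trains only on samples drawn from that fit; run it and the spread of the data tends to decay across generations, which is the collapse Krugman is gesturing at.

```python
import random
import statistics

# Toy caricature of the "slop" feedback loop: fit a model to data,
# generate synthetic data from the fit, refit on the synthetic data, repeat.
random.seed(1)
data = [random.gauss(0, 1) for _ in range(25)]  # the small "real" dataset

for generation in range(41):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread = {sigma:.3f}")
    # The next generation trains only on the previous generation's output.
    data = [random.gauss(mu, sigma) for _ in range(25)]
```

Each refit discards a little of the original diversity, and there is no mechanism to get it back.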
“We should not buy into the idea that we should be creating AGI in the first place. Ultimately, we should be trying to figure out what problems do we actually need solved? Where are the leverage points for AI? And how can AI serve humans in solving those problems, rather than just completely replicating humans,” said Karen Hao.
I agree with AI’s supporters that it is extremely useful and that its growth will have significant consequences. But I strongly disagree that its data infrastructure should be exponentially expanded to solve anything and everything. We don’t need, and shouldn’t use, AI to generate memos or term papers. The best use of AI is to create different models to tackle specific problems.

