(Image: phonlamaiphoto/Adobe Stock)

With all the talk about AI these days, I no longer hear much about that other intelligence: human intelligence (HI). Except from those who are predicting that someday in the not-too-distant future AI will surpass HI. I think that belief is not only wrong but detrimental to the design of AI for practical applications, where it can contribute a great deal, not by trying to build a general intelligence machine but by multiplying human capabilities.

I think the difference between the two approaches can be highlighted by viewing AI as an assistant vs. viewing it as an “agent.” Assistant I understand, but I was not sure of the exact meaning of agent, so I asked AI and it replied:

An AI agent is an autonomous software system that perceives its environment, reasons to set goals, plans actions, and executes them with minimal human input, using technologies like LLMs and tools to achieve complex objectives, learn from outcomes, and adapt over time, unlike simpler bots or assistants. They act on behalf of users, handling tasks from scheduling and research to complex workflows by integrating data, decision-making, and tool usage.

This answer is one example of the limitations of generative AI. I think it not only has an overblown view of its own capabilities but states that view with a sense of certainty. If AI sets goals, plans actions, and executes them “with minimal human input,” we’re in trouble. When generative AI works cooperatively with HI, it can be incredibly useful. AI excels at analyzing far larger amounts of data far more quickly and accurately than humans can. So, AI can free humans to do more of what humans are best at: creativity, intuition, and making unexpected connections. Professor Boyuan Chen studies the way humans and AI agents work together and views them as teammates. The greatest benefit from generative AI, he says, “will not be AI alone or humans alone, it’s the collective intelligence between humans and AI.”

So, I’ve been thinking about some of the uses for generative AI where it can enable its human partners to do more and better work. In general, it works best in areas that are built on sets of rules or rule-based structures, such as language, law, or software engineering. But even in those areas, there are limitations. “With the current architecture, the best that we can do is to copy human decision makers, so we can load in a lot of data from doctors making diagnoses or reading radiology reports, or from financial planners. And then generative AI has a great way of imitating these human decision makers. But if you do that, you're not going to get much better than the human decision makers,” said MIT economist Daron Acemoglu.

But there is a way to get better results: have the human decision makers and the generative AI carry on a dialog with each other at key points throughout the course of the project.

After decades of work as an EE, SAE Media Group’s Ed Brown is well into his second career: Tech Editor. “I realized, looking back to my engineering days and watching all of the latest and greatest as an editor, I have a lot of thoughts about what’s happening now in light of my engineering experiences, and I’d like to share some of them now.”

For an illustration of how AI and HI can beneficially work together, I again turned to my software engineer son Jeremy.

On one occasion, he told me, his team leader wanted to demonstrate what agentic AI could do, so during a meeting he gave the agent the task of explaining how to write a piece of software that would run particular mathematical equations in a certain computer language on a certain platform in the cloud. Although none of this was new, it is tedious and time-consuming, so the team leader thought he could just give the agent the task, say go, and the work would be done. The result was unwieldy and almost chaotic. One of the other team members then suggested that a better way to proceed would be in smaller steps with limited, concrete goals like: “Give me an algorithm that could solve this particular math problem.”

Jeremy follows that advice, but in a way that is more like a collaboration between him and the AI agent. He begins by advising the agent of the overall task, but only to provide context, not as permission to do the whole job. Then he gives the agent the first task leading toward the ultimate goal.

An example might be the following sequence of prompts (a rough sketch of what the first one might return appears after the list):

  1. Give me an example of code in the R programming language that would do this mathematical calculation.
  2. Now what do I need to do to incorporate that program into the specific cloud hosting service where I have all my other work?
  3. Once I have the math, how do I plug it into the cloud?
  4. Once it’s in the cloud, how do I connect it to the proper database?
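To make the scale of that first step concrete, here is the kind of small, self-contained answer it might produce. The calculation itself is my own stand-in (numerically integrating a function and solving for an upper limit), since the column does not say what math Jeremy’s team was actually running; the point is only to show what a limited, concrete goal looks like.

    # Hypothetical stand-in for "this mathematical calculation":
    # integrate a function numerically and find the upper limit at
    # which the running integral reaches a target value.
    f <- function(x) exp(-x^2)   # the integrand (invented for illustration)

    # Integral of f from 0 up to a given limit
    running_integral <- function(upper) {
      integrate(f, lower = 0, upper = upper)$value
    }

    # Find the upper limit where the integral hits a target value
    solve_for_upper <- function(target) {
      uniroot(function(u) running_integral(u) - target,
              interval = c(0, 10))$root
    }

    solve_for_upper(0.5)   # roughly 0.55 for this particular integrand

A chunk of this size is easy to read, test, and take responsibility for before moving on to the cloud-hosting and database steps that follow.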

In addition to breaking the overall task into smaller steps, the agentic AI that Jeremy uses reports at every step what it wants to do but waits for permission to go ahead. “I sort of keep it on a short leash,” said Jeremy.

Even if agentic AI someday improves enough to do more on its own, Jeremy would still keep it on that short leash. He explained that the temptation might be to say, OK, it’s done, I’ll just use it without understanding exactly how it did the job. “The temptation will be to just put it behind you and move on and get more work done. But I think it's the developer's responsibility to know precisely what the AI has built, because you're really taking responsibility for it. You're not going to get away with saying, oh, I didn't know that dangerous security loophole was in there, because it was the AI. Blaming the AI is not going to be a good excuse,” said Jeremy.

The Bottom Line

I think the best way to sum up the essence of my thinking about all of this comes from Acemoglu: “My recommendation to business leaders would be, don't be taken in by the hype. Instead, think where your most important resource, which is your human resource, can be better deployed, and how to leverage that human resource together with technology, together with data, to increase people's efficiency and enable them to create better and newer goods and services.”