Tagging a fish to track its movements. (Image: PINA/Adobe Stock)

In my last blog, “AI is Great — Is There Enough Electricity to Feed It?” I discussed a serious problem: the exponential growth of generative AI requires data centers that consume unsustainable amounts of electricity and water, and I didn’t see much hope of solving it. But then I ran across an article in the Harvard Business Review that got me thinking in a new direction: rather than trying to feed that exponential growth, how about reducing it?

According to the article, a report from the MIT Media Lab found that 95 percent of companies that have incorporated generative AI tools into their workplace “see no measurable return on their investment in these technologies.” The article says that one possible reason is that employees are using AI tools to create low-effort, passable-looking work that ends up creating more work for their co-workers. “We refer to this phenomenon as ‘workslop.’ We define workslop as AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task.” And that requires receivers of this work to waste time “interpreting, correcting, or redoing it.”

That leads me to think about the applications for which generative AI does indeed make sense, so we can make smart decisions about where it is useful and where it is a waste. To gain some insight, I asked a software engineer I know, who happens to be my son Jeremy, how he uses AI in his work. The following is his answer:

“I think the key to finding good uses for AI is to understand how badly the technology is named. It has nothing to do with intelligence. I think it's much more accurate to think of it as speculative logic or automated guesswork. We all know that computers excel at doing calculations at incredibly high speeds. A computer is a calculator that uses binary logic to do its calculations. Binary logic is based on a huge matrix of individual forks in the road, or 'logic gates,' where, for two inputs A and B, there are four possible values: only A is true, only B is true, both are true, or neither is true.

“There is a fifth possible value, which is that there is insufficient certainty to determine which of those four states is true. This points to one of the weaknesses of traditional computer logic, namely that it has quite reasonably been relied upon to treat such moments of uncertainty, or insufficient data, as deal breakers. If you don't have enough data then stop, beep, throw an error.

“One simple strategy to overcome this weakness is to design computer programs that can use statistical probability to supply missing values. If X has been true in 99 percent of thousands of similar calculations, then perhaps X is true now, even though its value is not known. So, your program can return a result, perhaps with a value representing a margin of error.
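
To make that contrast concrete, here is a rough TypeScript sketch of the two approaches Jeremy describes. The function names and the fallback rule are mine, purely for illustration, and not taken from his code.

    // Traditional logic: missing data is a deal breaker.
    function lookupX(value: boolean | undefined): boolean {
      if (value === undefined) {
        throw new Error("Insufficient data: cannot determine X");
      }
      return value;
    }

    // Probabilistic fallback: guess from how often X was true in similar
    // cases, and report how confident that guess is.
    function guessX(
      value: boolean | undefined,
      priorTrueRate: number // e.g., 0.99 if X was true in 99 percent of similar calculations
    ): { x: boolean; confidence: number } {
      if (value !== undefined) {
        return { x: value, confidence: 1.0 };
      }
      return { x: priorTrueRate >= 0.5, confidence: Math.max(priorTrueRate, 1 - priorTrueRate) };
    }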

“Although getting binary logic to take chances is not terribly new, what is new, the thing we're calling AI, is that we now have computer programs that deal with uncertainty in a much bolder, and much riskier, way. AI is basically a shift in the way software engineers think, rather than how computers function. The point of what we're calling AI is that it detects patterns, devises algorithms to simulate those patterns, calculates the statistical probabilities that govern the likely combinations of those patterns, and then returns results that are abstracted from the hard limits of the binary logic gates that are the building blocks of computer logic.

“For example, my primary use of AI, as a software developer, is in the form of tools that can actually generate code. One of the fundamental building blocks of a computer program is a function (or 'method'). A function is just a logical block of code that has a definable job, for instance to calculate the area of a triangle based on the length of its sides.
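
As a concrete example of the kind of small, well-defined function he means, here is a minimal TypeScript sketch using Heron's formula; the name and signature are my own, not taken from his codebase.

    // A small, self-contained function: the area of a triangle from the
    // lengths of its three sides, via Heron's formula.
    function triangleArea(a: number, b: number, c: number): number {
      if (a <= 0 || b <= 0 || c <= 0 || a + b <= c || b + c <= a || a + c <= b) {
        throw new Error("Side lengths do not form a valid triangle");
      }
      const s = (a + b + c) / 2; // semi-perimeter
      return Math.sqrt(s * (s - a) * (s - b) * (s - c));
    }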

“In my case, since I write applications for fish biologists, my functions tend to do things like calculating the distance between where a fish was caught and tagged off the coast of Virginia, and where it was later caught with that tag in some other part of the ocean. Let's say that function already exists. But next I need to write a simple function that calculates the time between when the fish was tagged and when it was re-caught. I want to call this function ‘timeAtLiberty,’ so I start typing that into my AI-powered code editor. As soon as I type that name, the AI tool suggests the code for the entire function. All I have to do is click to accept it or ignore it. This is a simple example of a function that is easy to guess at, but I am amazed at how often these AI-generated functions are right on the money.
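
Something along these lines is plausibly what the tool suggests. The following TypeScript sketch is my own guess at such a function, with illustrative parameter names rather than anything from Jeremy's application.

    // Hypothetical version of a "timeAtLiberty" function an AI code editor
    // might suggest: days elapsed between tagging and recapture.
    function timeAtLiberty(tagDate: Date, recaptureDate: Date): number {
      const msPerDay = 1000 * 60 * 60 * 24;
      return (recaptureDate.getTime() - tagDate.getTime()) / msPerDay;
    }

    // Example: a fish tagged off Virginia on June 1 and re-caught on
    // August 15 has been at liberty for 75 days.
    const daysAtLiberty = timeAtLiberty(new Date("2024-06-01"), new Date("2024-08-15"));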

“Part of the reason this generated code is so good is that the AI tool is able to 'read' all of the code in the application, across many files, and it can look for comparable work across the internet, so it has ample data on which to base a solid guess.

“But sometimes, while the code is almost perfectly correct, there will be errors that make it nonfunctional, so I really have to look very carefully. For example, because I wanted to know the distance between the two points and the time between the sightings, the AI tool tried to ‘perfect’ my code by using those two numbers to calculate a shark’s speed. That would be completely wrong, because there’s no way of knowing the shark’s path. And I never wrote that I was particularly interested in sharks, although in the past I was. And why, when I have used a variable ‘tagLocation,’ is the AI tool calling it ‘fishTagLocation’? I have to be very careful.
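
To show what that kind of unwanted ‘improvement’ looks like, here is a hypothetical sketch. Dividing straight-line distance by elapsed time gives, at best, a minimum average speed, because the animal's actual path between the two points is unknown.

    // The kind of unrequested "improvement" Jeremy describes, in sketch form
    // (hypothetical names). Straight-line distance divided by elapsed time is
    // only a lower bound on average speed, since the shark's real path between
    // the two points is unknown; presenting it as the animal's speed is wrong.
    function impliedMinimumSpeed(distanceKm: number, daysAtLiberty: number): number {
      return distanceKm / daysAtLiberty; // km per day along a straight line only
    }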

“Sometimes there are absolute howlers, like the time when I was writing an application entirely devoted to the scientific study of sharks, and the AI tool wrote a long function for me that was clearly meant for the management of clients at a veterinary clinic. I declined that one.

“Where these code-writing tools really shine, in my experience, is in two areas. First, when I have to write code that is obvious and tedious, where I am simply coding to well-established standards with a very specific goal in mind, AI jumps in and completes the job for me instantly. I cannot even begin to calculate the hundreds of hours a year of busy-work this must be saving me.

“The second, and more profound, way in which I think AI might be revolutionizing software development is in greatly reducing the cost of trying things that might not work, or even things that you know are possible but might not be worth the time it would take to do them. Should I even bother trying to write a function that will take into account whether a shark had to swim around Florida to get from point A to point B, and to include that in the distance travelled? An algorithm already exists for that, AI tells me.

“Of course, I know by now that I will have to test this code, remove any faulty assumptions, excise bits of code that are just bonkers ... but it does make innovation so much easier and more efficient. It makes me want to think outside the box, to ask the AI tool to try something that might even turn out to be impossible, while I grab a cup of coffee.”

My Bottom Line

Jeremy describes a considerably more useful application for generative AI than churning out a passable-looking report for your boss. Thoughtfully separating the wheat from the chaff, deciding where generative AI genuinely earns its keep and where it merely produces workslop, would go a long way toward reining in the growing demand for data centers and the electricity and water they consume.