Why AI is Inefficient Even If It’s Fast

When I ask people to define efficiency, I usually get a definition that’s something like: 

Efficiency is completing a job faster or cheaper. 

Albeit somewhat silly, here is an example of why that answer isn’t quite accurate.

Let’s say you walked into a car factory with a proposal.

You tell the executives that you have a plan to increase profit margins by 60% while reducing costs by 30%. They’ll probably want to know more.

You suggest they stop installing engines.

No engines means fewer parts, and that reduces expenses and increases production speed. The factory keeps building cars the same way it always has, except for the most complicated part, which disappears.

I like to imagine that after a slow, incredulous blink, they say… “no thanks.”

The idea fails because it does not matter how much time or money you save if the result is something no one wants.

Defining efficiency 

My trade is writing, so I spend quite a bit of time with words. Sometimes that means retracing the steps we took to reach our current definition of a word. Efficiency was originally more of a philosophical concept about capability. An efficient cause was the force that made something happen. So your goal might be to make two dozen cookies, and you need to know if two eggs will be enough.

During industrialization, we started developing our current understanding of the word. 

This is about the time Frederick Winslow Taylor got the idea to time workers with stopwatches and study the smallest motions of their tasks. His goal was to eliminate wasted movement and standardize production. Efficiency, in this industrial sense, meant producing the maximum output with the minimum input. So, it’s like how many cookies can I make with two eggs?

But in knowledge work, a system can become very good at producing the wrong thing.

I think it helps to stop thinking of efficiency as a straight-line path to an objective and to start treating it as a trade-off between competing inputs.


The counter definition: Efficiency is the relationship between inputs and outputs.


Inputs are the resources we spend, like time, labor, money, and materials. Outputs are the results we produce. Most companies want to increase profits. So if AI magically makes the input cost zero, but the results (profit) are also zero, then efficiency gains have no value. 

AI can be technically efficient yet still yield a zero or negative value. 
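To make that concrete, here’s a back-of-the-napkin sketch in Python. Every number below is invented for illustration; the only point is that shrinking the input side of the ratio doesn’t help when the output side is zero.

```python
# Efficiency as a ratio: value produced per dollar of input.
# All figures below are hypothetical, purely for illustration.

def efficiency(output_value: float, input_cost: float) -> float:
    """Value produced per dollar spent."""
    return output_value / input_cost

# Before AI: $10,000 of inputs produces $15,000 of profit.
print(efficiency(15_000, 10_000))  # 1.5 -- every dollar in returns $1.50

# After "magic" AI: inputs drop to $4,000, but no one wants
# the output, so it earns $0 of profit.
print(efficiency(0, 4_000))        # 0.0 -- cheaper, and still worthless
```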

It’s easier to measure the value of efficiency when you break it down into its different types: technical efficiency, productive efficiency, and allocative efficiency. (Sometimes technical and productive efficiency are categorized together because they both focus on adjustments to the inputs.)

Technical efficiency means using the same inputs in ways that improve the output.

In healthcare, researchers sometimes find that a medication works just as well at 10 milligrams as at 20 milligrams. The dosage becomes more precise, but the treatment is the same, so patients get the same outcome while using fewer resources.

As a writer, I’m going to use time as one of those resources that can be difficult to modify. 

If an afternoon contains two hours, those hours must be divided somehow. If my options are writing a draft or attending a meeting, doing more of one means doing less of another.

A meeting might improve the final article by clarifying the topic or uncovering useful information, increasing the quality of the output. But a meeting can also crowd out the time needed to produce the article.

In economic diagrams, this idea often appears as a production frontier. One end might represent spending all your time in meetings. The other end represents spending all your time writing. Every point along the curve shows a possible balance between the two.

Points inside the curve mean you are wasting resources. Something in the system is inefficient. Points beyond the curve are impossible without changing the inputs, like working longer days.
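If it helps to see that frontier as code, here’s a minimal sketch using the two-hour afternoon from above; the hour splits are arbitrary examples.

```python
# A toy production frontier: a two-hour afternoon split between
# meetings and writing. The numbers are made up for illustration.

BUDGET = 2.0  # hours available in the afternoon

def classify(meeting_hours: float, writing_hours: float) -> str:
    used = meeting_hours + writing_hours
    if used < BUDGET:
        return "inside the curve: idle time, something is inefficient"
    if used == BUDGET:
        return "on the frontier: every available hour is spent"
    return "beyond the curve: impossible without changing the inputs"

print(classify(0.5, 1.0))  # inside the curve
print(classify(0.5, 1.5))  # on the frontier
print(classify(1.5, 1.5))  # beyond the curve
```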

Productive efficiency focuses on adjusting inputs.

Historically, if a pregnant woman was above a certain age, she was more likely to be offered an expensive diagnostic test to screen for Down syndrome.

Many people who met the age threshold turned out not to need the test. Later, more advanced screening methods were introduced. These screenings cost more than an age verification, but they were far more accurate.

The result was fewer people needing the expensive diagnostic test. Even though the system spent more on screening, it saved money overall.

The inputs changed, but the outcome improved relative to the total resources used.
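With invented numbers, that math might look like this (none of these costs come from a real healthcare system):

```python
# Hypothetical screening costs, only to show how spending more on
# screening can lower total spending.

PATIENTS = 1_000
DIAGNOSTIC_COST = 1_500  # per expensive diagnostic test

# Old system: a nearly free age check flags many people who
# turn out not to need the diagnostic test.
old_total = (1 * PATIENTS) + (300 * DIAGNOSTIC_COST)

# New system: a pricier screen flags far fewer people.
new_total = (100 * PATIENTS) + (50 * DIAGNOSTIC_COST)

print(old_total)  # 451000
print(new_total)  # 175000 -- more spent per screen, less spent overall
```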

Allocative efficiency focuses on outputs rather than inputs. 

My favorite example is: what if you discover a cure for hiccups that’s 100% effective and costs one penny per dose?

From a narrow standpoint, this is incredibly efficient. The cost is tiny, and the outcome is perfect.

But suppose society is facing a problem like … I don’t know… a novel virus, and every available researcher is needed to develop a vaccine.

Even if the hiccup cure were flawless, allocating resources to hiccups rather than the virus would be allocatively inefficient. 

This is where AI often fails. 

Generative systems can produce enormous volumes of content very quickly. 

But if much of that output is average or low quality, and people do not want to read it, the system may not be efficient in any meaningful sense. The system has shifted its inputs toward speed and volume while ignoring what the marketplace will value.

The result can look a little like the engine-less car. 

I can sometimes make the mistake of assuming higher quality automatically leads to better outcomes.

But sometimes the extra hour spent perfecting the content does not create meaningful value for the reader, so that approach isn’t allocatively efficient either. 

It Only Costs $94 to Not Talk to Each Other

But let’s look at an example from the standpoint of allocative efficiency: using email to make sales.

In this example, we’ve spent slightly more to incorporate AI, but we save a significant amount of time.

We've increased volume, so the cost per email goes down, but we only have one sale, so the cost per sale goes up. 
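Here’s that arithmetic with made-up numbers (nothing below comes from a real campaign):

```python
# A hypothetical email campaign, before and after adding AI tools.

# Before: $500 of labor produces 500 emails and 5 sales.
before_cost, before_emails, before_sales = 500, 500, 5

# After: $540 (slightly more, to cover the tools) produces
# 5,000 emails but only 1 sale.
after_cost, after_emails, after_sales = 540, 5_000, 1

print(before_cost / before_emails)  # $1.00 per email
print(after_cost / after_emails)    # $0.108 per email -- way down
print(before_cost / before_sales)   # $100 per sale
print(after_cost / after_sales)     # $540 per sale -- way up
```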

The reason sales might go down could be that, without the feeling that time or volume is a constraint, the segmentation strategy became less sophisticated, so more people received messages that were irrelevant to them.

Maybe people just know when content is AI, so they ignore the copy or are less likely to engage with it. 

Or maybe the recipient has detection software, so it’s filtered out of their main inbox. 

Here is my favorite example: 

  • If you are on a mission to fully automate your email with AI, you might purchase software to help you write the messages for $20. 

  • Then you purchase software that helps you sort emails and clean your inbox for another $20. 

  • But then people tell you that the emails sound too robotic, so you pay $7 for a tool that humanizes your AI copy. 

Let’s say that in this perfect world, you successfully automated your email management for $47. It’s unlikely that you are the only person who has solved this problem, so the recipient is probably doing the same, which brings the combined spend to $94.

In a business communication course I taught at a university, we would always start the semester with a lesson on the communication process that looked quite a bit like a standard workflow.

The sender has an idea for a message. Then they have to encode it to get it out of their brain and into a shareable format. Next, they have to formulate that message and choose the best channel. 

The recipient then has to decode it, and feedback tells us whether communication was successful. And there's a lot of noise along the way, which is where we focus on communication failures. 

When AI is implemented solely to help create the message, the risk is pretty low. The potential failures are there, but they’re not detrimental. Where we really start to see efficiency and communication failures is when AI is introduced into the encoding and decoding portions of the communication process. What could be hilarious is that these two people are collectively paying $94 to not talk to each other.
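As a caricature of that failure mode, here’s a sketch of the round trip; both functions are hypothetical stand-ins, not real tools.

```python
# A caricature of the $94 round trip. Both "AI" functions are
# hypothetical stand-ins, not real products.

def sender_ai_encode(idea: str) -> str:
    # The sender's tools inflate a one-line idea into polished copy.
    return f"I hope this email finds you well. {idea} Best regards."

def recipient_ai_decode(email: str) -> str:
    # The recipient's tools summarize the copy back down to the idea.
    email = email.removeprefix("I hope this email finds you well. ")
    return email.removesuffix(" Best regards.")

idea = "Can we meet Tuesday at 2?"
received = recipient_ai_decode(sender_ai_encode(idea))
print(received == idea)  # True -- $94 spent to land where we started
```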

Oh Look, a CTA…

In the past, inbound marketing taught the idea of “they ask, you answer” to find leads and earn their business. The call to action was the bread and butter of content marketing for the better part of a decade. I think that’s going to be wildly difficult when the new status quo is “They ask. AI answers.” However, I have no data to support that hunch, so here’s a CTA.

What action am I asking you to take? Well, I want to work on AI deployment with companies. Mostly because I think it’s a creativity and incentive challenge more than a tech and skill challenge.

The instinct might be to bring on the best technical engineer to focus on AI deployment, and that might be the right person.

However, I think the best person for the job can spot when a new tool might not be the right answer even if it’s the shiniest (but they’ll definitely be really, really good friends with that highly knowledgeable engineer… and the IT team… and legal counsel… and accounting… etc.).

So leave your contact information, and we can at least talk about the work that needs to be done and whether I’m the right person to do it.
