What’s Up With AI?

As if life wasn’t complex enough, we now must make sense of AI. It hasn’t been easy, though three words may help.

An MIT study found 95% of investments in Gen AI have produced zero returns, while 90% of workers surveyed reported daily use of personal Gen AI tools like ChatGPT or Claude for job tasks.

A study from software company Atlassian found daily usage of AI among individual workers has doubled over the past year, while 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.”

A survey of 3,700 business executives found 87% said AI will “completely transform roles and responsibilities” within their organizations over the next twelve months, while only 29% said their workforces are equipped with the skills and training necessary to leverage the technology.

A Harvard economist found 92% of U.S. GDP growth in the first half of 2025 came from AI investments, yet a Center for Economic Studies (CES) paper found a 1.3% drop in productivity after firms implemented AI, though the authors expect productivity gains later.

It seems clear AI “is” and “will be” transformational, though it is hard to distinguish what “is” versus “will be” or whether we are in an AI bubble. OpenAI CEO Sam Altman, Amazon founder Jeff Bezos, and 54% of fund managers recently indicated that AI stocks were in bubble territory.

Railroads, electricity, and the internet were transformational innovations that created bubbles, went bust, and then faded into the background as normal infrastructure. When the internet moved past boom and bust, new business moats emerged, such as Google (Search, Android, Chrome, Cloud), Meta (social media), TikTok, Amazon (eCommerce, Cloud), Microsoft (Windows, Office, Cloud), and Apple (macOS and iOS). The announced massive AI investments, including over one trillion dollars by OpenAI, indicate investors expect AI to become more than companions, coders, and search tools: new moats once AI fades into the background.

To help make sense of AI, we may think in terms of advising, assisting, and doing. We must also be clear about what AI “is” versus “will be.” Today, AI “is” mostly “advising,” with some exciting new “assisting.” The hype is mostly about what AI “will be,” which is “doing.”

1. Advising – most AI use cases are advising. The AI takes inputs and creates inferences, such as predicting email spam, loan worthiness, what to wear to a party, or the content (a TikTok video) that will maximize your engagement. The inferences feed deterministically programmed actions: “if this, then do that.” Advising helps us figure out how to do things and answers our questions. A human-in-the-loop decides what to do next, or the inference result powers explicit programming such as maximizing user engagement. AI is not replacing humans here, though it should help us become more efficient and effective. The productivity of advising is hard to measure, though if the resulting strategies require fewer actions (efficiency) and produce better outcomes (effectiveness), they must be more productive.
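The advising pattern above can be sketched in a few lines of code. This is a hypothetical, minimal illustration: the `spam_score` function stands in for a trained model’s inference, and the routing rule is the explicitly programmed “if this, then do that” action a human wrote.

```python
def spam_score(email_text: str) -> float:
    """Hypothetical stand-in for a trained spam model's inference.

    A real system would call a machine-learned classifier; here we
    just count suspicious phrases to produce a score in [0, 1].
    """
    suspicious = ("free money", "act now", "wire transfer")
    hits = sum(phrase in email_text.lower() for phrase in suspicious)
    return min(1.0, hits / len(suspicious))

def route_email(email_text: str) -> str:
    """Deterministic, human-written rule acting on the AI's inference."""
    score = spam_score(email_text)
    if score >= 0.5:        # "if this..."
        return "spam"       # "...then do that"
    return "inbox"

print(route_email("FREE MONEY -- act now!"))  # spam
print(route_email("Lunch at noon?"))          # inbox
```

The AI contributes only the score; what happens next is decided entirely by explicit logic a human programmed in advance, which is what distinguishes “advising” from “doing.”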

2. Assisting – this is essentially a tool that does work in the digital world, like tools (e.g., a shovel, a washing machine) in the physical world. It is often called Gen AI. Based on user prompts, it creates videos and software code, summarizes content, drafts letters, or does homework assignments. It makes us more efficient and effective at creating digital content within individual tasks. It requires a human-in-the-loop to judge the content created and avoid adverse outcomes, like the chatbot that accepted $1 for a new Chevy Tahoe with an MSRP of $58,195. While the terms “AI agents” and “Agentic AI” are used to describe AI that extracts data from documents, engages customers, and curates and summarizes content, the next actions are either determined by the human-in-the-loop or predetermined and executed with explicit software logic (like Siri or Alexa). It is logical to assume that creating digital content, like using tools in the physical world, will help us become more efficient and effective.

3. Doing – this is goal achievement without a human-in-the-loop. “Doing” is typically a highly efficient, tightly synchronized flywheel of a few to millions of “inferences” and “actions” in which the actions are not predetermined. Humans have a tight integration of neurons, synapses, and well-tuned perceptual, motor, learning, memory, and executive neurocognitive functions. This enables flexibility in novel environments, based on mentally imagined models of the world. “Doing” is an autonomous vehicle without a human driver; as we have learned, addressing the last 1% to 5% of autonomous-driving edge cases may take a decade or longer. “Doing” is the human immune system, which makes inferences from inputs and uses its agency to make decisions and take actions to destroy pathogens. A thermostat that automatically turns on the heat is not “doing” but “advising,” because a human explicitly programmed the next actions based on its inferences. Doing is difficult for AI because, according to Turing Award winner Yann LeCun, today’s systems lack an understanding of the physical world, persistent memory, the ability to reason, and the ability to plan. While there is no doubt these challenges will be addressed, AI is not “doing” much today.

“Advising” and “assisting” are today’s AI reality. “Doing” gets the AI hype, with alarming headlines about how it will replace our jobs and how superintelligence will rapidly, irreversibly, and uncontrollably take over the world, rendering humans subservient.

When trying to make sense of AI, begin with who decides and performs the next best actions: a human-in-the-loop, logic predetermined by humans, or AI with agency?
