ai-learning · leadership · workplace-ai · getting-started

AI Learning Is Not a Straight Path

Camila Lima · April 30, 2026 · 6 min read

What the HBR article found about AI skill

I recently read the Harvard Business Review article What the Best AI Users Do Differently—and How to Level Up All of Your Employees, written by Nick Hallman, Zach Kowaleski, Anu Puvvada, and Jaime J. Schmidt.

The article looks at what separates people who simply use AI from people who use it well. The researchers worked with KPMG and studied a large set of AI interactions from employees over eight months. Their main point is simple but important: measuring AI use is not the same as measuring AI skill.

That stood out to me because many teams are still focused on access. Do employees have the tool? Are they using it? How many prompts are they writing? Those questions matter, but they do not tell the full story. The better question is: are people learning how to use AI to improve the quality of their work?

Using AI often is not the same as using AI well

One of the strongest findings from the article is that leaders should not only look at how much people use AI. A person can open an AI tool every day and still use it in a very basic way. They may ask one short question, accept the first answer, and move on.

The article explains that stronger AI users tend to work differently. They make clearer requests. They ask AI to help with bigger tasks. They refine the answer. They use AI to think through a problem, not only to finish a small task faster.

This is an important difference. AI skill is not about typing more prompts. It is about learning how to ask for better help, review the answer, improve the result, and apply it to real work.

The four behaviours common in sophisticated AI users

One of the most useful parts of the article is the way it describes four behaviours that were common among more sophisticated AI users.

First, they were more ambitious with AI. They did not only use it for quick edits or simple summaries. They brought larger, more meaningful tasks to the tool and expected AI to help with work that required thought.

Second, they treated AI as a reasoning partner. Instead of asking for a final answer and stopping there, they used AI to test ideas, compare options, challenge assumptions, and improve their thinking.

Third, they delegated complex tasks with clear objectives. They did not just say "help me with this." They explained what they needed, gave context, and made the expected result clearer.

Fourth, they used AI as a general thinking tool, not only as a shortcut for productivity. In other words, AI was not just there to make a task faster. It was there to help them plan, learn, analyze, review, and make better decisions.

This is the part I think many teams should pay attention to. Sophisticated AI use is not about being fancy or technical. It is about being intentional. The user stays involved, gives direction, reviews the output, and keeps improving the work.

The best users treat AI as part of the thinking process

The article describes strong AI users as people who use AI for more than simple productivity. They do not only ask it to rewrite a sentence or summarize a document. They also use it to explore ideas, compare options, structure a problem, test their thinking, and improve a piece of work over several rounds.

That matches what I see in practice. The people who get the most value from AI are not always the most technical people. They are often the people who are willing to try, adjust, and keep going. They bring a real task to the tool and stay involved in the process.

This is where the learning happens. Not by watching a long presentation about AI. Not by memorizing the perfect prompt. The learning happens when someone uses AI on a task they already understand, sees what works, notices what does not work, and tries again.

My opinion: AI learning is not a straight path

My biggest takeaway is that AI learning is not a straight path. People do not move neatly from beginner to advanced by following one fixed set of steps. They learn by trying different things, making small mistakes, seeing better examples, and slowly building confidence.

This matters for leaders. If leaders expect everyone to become good at AI just because the company bought a tool, they will be disappointed. Access is only the beginning. People need support, time, examples, and permission to experiment.

They also need clear expectations. Experimenting with AI does not mean using it without judgment. Teams should know when AI is appropriate, when human review is required, what information should not be shared, and what quality standard the final work must meet.

Leaders need to make good AI use visible

The article points to a practical leadership lesson: teams need to know what good AI use looks like. If the only message is "use AI more," people may use it in shallow ways. If the message is "use AI to improve your work, and here are examples," the team has a clearer path.

Leaders can support this by sharing simple examples from real work. For example, how someone used AI to prepare for a meeting, improve a customer email, analyze feedback, or draft a first version of a report. These examples do not need to be complex. They need to be clear and useful.

This also helps reduce fear. Many people hesitate because they think everyone else already knows what they are doing. When leaders make learning visible, it becomes normal to ask questions, test ideas, and improve over time.

Learning should happen inside real work

The best way to build AI skill is to learn something and immediately put it into action. This is why I believe AI training should be practical. People need to use AI on the kinds of tasks they already handle: emails, planning, research, documents, meetings, analysis, and decision support.

Theory has a place, especially for safety and responsible use. But theory alone is not enough. Someone can understand what AI is and still not know how to use it well on Monday morning.

A better approach is to combine clear guidance with hands-on practice. Give people a small task. Let them try it with AI. Ask them to compare the first result with the improved result. Show them how to refine the request. Then let them apply the same process to another task.

What this means for teams

For teams, the message is not that everyone needs to become an AI expert overnight. The message is that AI skill grows through repeated, supported practice.

Start small, but do not stay small forever. Use AI for a simple task first. Then try using it to plan something more complex. Then ask it to help you review your thinking. Then ask it to create options and explain the tradeoffs. Each step builds confidence.

For leaders, the role is to create the conditions for this learning. Give people room to experiment. Set clear rules. Share examples. Reward thoughtful use, not just frequent use. Make it clear that the goal is better work, not more prompts.

Read the full article

The full HBR article is worth reading because it gives a deeper view into what strong AI users do differently and how companies can help more employees build those habits.

You can read it here: What the Best AI Users Do Differently—and How to Level Up All of Your Employees.

My view is simple: AI learning works best when people are supported, encouraged to experiment, and expected to apply what they learn. The more people try AI on real work, the faster they understand what it can and cannot do. That is how confidence grows. That is how skill grows.

Join the AI at Work Academy Telegram chat

AI news, practical tips, and recommended readings to help you use AI better at work.

Join the Telegram chat

Ready to take the next step?

AI at Work Academy gives you a structured, step-by-step path from beginner to confident AI user. Module 1 is free.

Start Module 1 Free →