
I recently spent a few days updating my AI workshop materials for a ~50-person training. The last time I refreshed this content was only three months ago, yet many of the technical details in my slides required substantial changes.

The last few months brought a lot of change: agents and agentic AI expanded significantly. MCP (Model Context Protocol) is now a thing. OpenAI’s Deep Research can produce comprehensive reports that would have been extremely time-consuming to compile manually. OpenAI rolled out persistent memory (which sometimes helps and sometimes makes things worse). Tool integrations got deeper across all the major model providers. Plus a dozen other significant capability jumps.

As an example: my slides that used to say “treat these models like a smart but forgetful coworker” suddenly felt wrong. That mental model breaks down when the model remembers things from other conversations. And when it remembers stuff you don’t want carried into the current conversation, you now have to turn on temporary chat. That’s a lot of new nuance to explain!

Making all these nuanced updates was tedious, and at some points it felt a bit useless to be spending my time on it. During the workshop, the team leader said something that crystallized what I’d been feeling while updating all those slides: “What you’re learning right now is likely to be obsolete in 6 to 12 months.” Yup, a lot of it will be.

Teaching specific tools and techniques is becoming less valuable when the tools change this fast. But there’s still something durable to teach.

The Shift from Tools to Learning Capability

I’ve been shifting my workshops more towards teaching an “AI mindset” rather than teaching specific AI capabilities. At a high level, “AI mindset” includes: treating AI as an iterative thought partner, not a search engine; being curious and willing to experiment; starting with small problems where failure doesn’t matter much.

Most importantly: just getting started, and expecting to iterate. Your first attempt won’t be perfect. That’s normal. The people who get good at this are comfortable refining their approach through multiple rounds.

The technical stuff changes too fast. The mindset for working with these tools is more durable. But even with the right mindset, people still hit the same organizational walls. Individual capability is just the first hurdle. The bigger bottleneck is organizational structure.

Organizational Barriers That Block Learning

Across organizations, I’ve seen a few consistent patterns emerge:

People don’t have time to experiment. Most teams are already at capacity. AI experimentation requires time to try things, fail, and iterate. But organizations give people one or two training days per year and treat it like an event, not ongoing practice. There’s simply no time to learn how to change how you do the work. People are too busy doing the work itself.

If you lead a team, this is your first problem to solve. Are you actually willing to tell people to spend a portion of their time experimenting with these tools to find out how they can be applied in their day-to-day tasks? Because that’s what it takes. And it’s not easy, because you can say it all you want, but it takes a lot more than that to actually create the space for it to happen.

Organizational policy is ambiguous. People don’t know when they’re allowed or expected to use AI. They don’t know what they can use it for, what tools they can use, and what they need to do differently when they use it to produce a deliverable. There’s usually a gap between what leadership thinks they communicated and what employees actually understand.

This isn’t just security policies. It’s clarity about when using AI is expected versus optional, which tasks are appropriate, and how to handle weird edge cases. It’s absolutely critical to be clear about this, and to repeat it over and over again. If you’re a leader, find where the policy is ambiguous and clarify it. A good way to do this is to talk to the people doing the work and ask them about it directly. Ask them what they think the policy is. Ask them where they wish they could use AI but don’t feel empowered to. Ask them what questions they’re afraid to ask. As with anything, the best leaders know how to ask the right questions of the right teammates at the right moment.

People underestimate what’s possible. At every workshop I do, I show capabilities that surprise people. A few recent examples: NotebookLM can focus on your specific documentation and create presentation materials from it, with citations. ChatGPT’s image creation can build custom iconography for your presentations in seconds. OpenAI’s o3 model can do some things a lot better than 4o (don’t just choose the default model).

This gap can be closed by creating space to experiment and exposing people to good training opportunities. But I think the gap might be rooted in something more implicit.

Even when people see these capabilities, they struggle to connect them to their actual work. Part of this is the time problem — without space to experiment, you can’t discover the connections. Part of it is an “I’ve always done it this way” problem. It’s hard to adjust a workflow that works.

Even for me, it’s hard to maintain that healthy background thread of “How might I use AI to make this task I’m currently working on easier?”

Part of it might also be implicit fear or cynicism that these capabilities will replace someone’s core value proposition in the organization. If you want to understand the behavior, look at the incentive structure. If people think adopting AI tools might threaten their role, they’ll resist regardless of potential benefits, even if the resistance is indirect.

What Actually Works

So with the above factors blocking adoption, what does an organization do? The companies that figure this out aren’t just providing better training. They’re solving structural problems.

Create actual time for experimentation. Not “find time when you can.” Real calendar time. Some teams block Friday afternoons. Others do monthly experiment sessions. The format matters less than making it consistent and protected.

Set clear expectations. Look at Shopify’s approach. They didn’t just say “AI is important.” They said: build your prototype with AI; your performance will reflect how well you use these tools; you can’t add headcount until you’ve shown AI can’t handle it. They made AI usage a requirement for promotion discussions and team planning meetings.

Those are concrete expectations tied to real outcomes and regular processes people already follow.

Build systems for sharing knowledge. The teams that compound their learning have ways to share what works. Prompt playbooks. Regular sessions to discuss AI wins and failures. Make learning visible and collaborative.

Back your early adopters. Find the people already experimenting and support them. Give them resources, platform their successes, help them teach others. Organic adoption spreads faster than top-down mandates.

Leadership needs to do the work. You can’t delegate this. If you’re a leader asking your team to experiment with AI but you haven’t spent real time using these tools yourself, your people will notice. You need to understand what AI is actually good and bad at, not just what you’ve heard in briefings.

Put in your own hours. Try building something. Get frustrated with the limitations. Only then can you set realistic expectations and spot the difference between genuine innovation and AI theater.

AI theater is everywhere. Teams that demo impressive-looking outputs but can’t explain how they’d actually integrate into workflows. People who talk about AI transformation but have never tried to get a model to do anything useful. Leaders love it because it feels like progress without requiring real change.

Not every organization will pull this off. Some cultures are too rigid. Some leadership teams won’t commit the time or resources. That’s fine. But the ones that do will build a meaningful advantage.

The Learning Advantage Compounds

Some companies are slowly circling AI, waiting for best practices to stabilize. Others are building with these tools every day and getting better at learning new capabilities as they emerge.

The difference won’t be obvious until suddenly it is. Teams that have been experimenting for months will have intuitions about what works, systems for sharing knowledge, and comfort with rapid iteration. They’ll adapt faster when the next wave of capabilities hits.

Teams that waited will be trying to catch up to a moving target while also building the organizational muscle to learn continuously.

You can’t fast-follow this. The advantage comes from months of small experiments, failed attempts, and gradual skill building across your team. You can’t compress that timeline.

If you’re leading a team, the question isn’t whether AI matters. It’s whether your people are actively building the muscle to use it well and learn what’s next.

That takes real time, clear expectations, and systems that support continuous learning. The technology will keep changing. Organizations that solve the structural problems will be ready for whatever comes next.

Start now or watch others pull ahead. Those are your options.
