My Takes On AI
AI is evolving quickly, with Generative AI and Large Language Models drastically expanding what’s possible and who can tap into its potential. I’m often asked where I see AI heading and how to approach it, but I hadn’t taken the time to organize my thoughts until now. Below, I’ve distilled some of my perspectives. While this isn’t comprehensive, it helped me clarify my thinking and focus on what feels most important today. I hope you find value in these insights, and I’d love to hear your perspectives on this ever-changing and exciting space.
Writing and communication matter more than ever.
Learning how to write clearly has immense leverage. AI tools amplify the results of clear thinking and precise communication. Those who can express themselves effectively unlock better outcomes.
Self-guided learning is here, in a big way.
You can now learn on your own terms, in a way that aligns with how you uniquely think and communicate. Motivated, curious people can unlock vast amounts of knowledge without waiting for structured programs or external permission.
There’s still so much to build with what’s already here.
Even if AI progress stopped today, the tools we already have would be transformational. We haven’t scratched the surface of what can be built on top of them. The opportunity lies in applying these tools creatively and thoughtfully.
Multimodal AI capabilities are the most creative frontier.
If you’re curious, take a video and upload it to Gemini. You’ll be amazed by what it can do with it. Just try it. These capabilities open up entirely new possibilities we’re just beginning to explore.
Different adoption rates are creating significant and compounding productivity gaps.
Teams that embrace tools like Cursor and Copilot are seeing meaningful productivity gains, especially in certain verticals like coding. These advantages compound over time, creating a snowball effect. Late adopters risk falling far behind.
Domain expertise + cross-functional collaboration is the winning formula.
The most differentiated skill set today belongs to domain experts who are willing to learn and apply these tools. The reverse is equally true: if you know how to use these tools, finding domain experts and bringing them on board creates a moat. There’s never been a better time to be an expert in a domain.
The skill set that matters is changing – it’s not just about tech anymore.
The ability to connect dots across disciplines and find patterns will outpace the value of pure technical skills. Humanities, history, philosophy, and anything involving lots of reading and writing will become even more valuable as we navigate this new era.
Evaluating AI outputs is both old and new – and more important than ever.
These new models are astonishingly capable but still require rigorous, automated testing. Having robust testing in place allows you to adopt new models faster, ensuring they still produce ‘correct’ output for your use cases. Upskilling newcomers on testing hygiene is also critical. Building with LLMs is accessible to a much wider audience, but many lack the testing experience gained from classical software engineering or ML. Addressing this gap is essential.
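As a small illustration, here is a minimal sketch of what such an automated check could look like, using pytest. The `summarize` function and the tickets are hypothetical placeholders; the point is that the assertions encode what ‘correct’ means for your use case, so you can rerun them whenever you swap models.

```python
# Minimal sketch of an automated check for an LLM-backed function (pytest).
# `summarize` is a hypothetical placeholder: swap the stub body for a real
# model call; the assertions stay the same when you change models.
import pytest


def summarize(ticket_text: str) -> str:
    """Stand-in for a model call; replace with your actual LLM wrapper."""
    return ticket_text  # trivial echo so the sketch runs end to end


@pytest.mark.parametrize("ticket,keyword", [
    ("Customer cannot reset password after the latest release.", "password"),
    ("Refund requested for duplicate charge on invoice #1042.", "refund"),
])
def test_summary_is_short_and_keeps_the_key_fact(ticket, keyword):
    summary = summarize(ticket)
    assert len(summary.split()) <= 30   # stays concise
    assert keyword in summary.lower()   # preserves the essential detail
```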
AI wrappers solve the hard problem of adoption, and there’s no reason to be cynical about them.
By “AI wrappers,” I mean a very light layer of logic and orchestration on top of foundation models. These tools might look simple — easily reproducible within the foundation model’s UI — but they solve the challenge of getting users to adopt and integrate the technology into their workflows. Channel delivery and customer acquisition are hard problems to solve, a lot harder than prompting.
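For a sense of how light that layer can be, here is a sketch assuming the OpenAI Python SDK (v1-style client) with an API key in the environment; the model name and prompt are illustrative. The code is almost trivially thin, which is exactly the point: the hard work is getting this into a user’s daily workflow.

```python
# Sketch of an "AI wrapper": a thin prompt-and-call layer over a foundation
# model. Assumes the openai Python SDK (v1+ style) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "You are drafting a reply to a customer support ticket.\n"
    "Tone: {tone}\n"
    "Ticket:\n{ticket}\n\n"
    "Reply:"
)


def draft_reply(ticket: str, tone: str = "friendly") -> str:
    """Fill the template, call the model, return plain text for the workflow."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whatever fits
        messages=[{"role": "user", "content": TEMPLATE.format(tone=tone, ticket=ticket)}],
    )
    return response.choices[0].message.content.strip()
```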
User experience is everything.
Solving how people interact with AI tools is far more critical than just improving prompts. “Human in the loop” and how that human is integrated into the process define the success of these tools. This is the real challenge — and the teams that solve it will win.
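One simple pattern, sketched below with a stubbed-out `draft_reply` helper, is to let the model propose while the person decides what actually ships; the interesting design work is in where and how that decision point sits.

```python
# Sketch of one human-in-the-loop pattern: the model proposes a draft,
# the person accepts, edits, or rejects it. `draft_reply` is a stub here;
# in practice it would wrap a real model call.
def draft_reply(ticket: str) -> str:
    return f"Thanks for reaching out about: {ticket}"  # placeholder draft


def review_and_send(ticket: str) -> str:
    draft = draft_reply(ticket)
    print("Suggested reply:\n" + draft)
    choice = input("[a]ccept / [e]dit / [r]eject: ").strip().lower()
    if choice == "a":
        return draft
    if choice == "e":
        return input("Your edited reply: ")
    return ""  # rejected: the person writes it from scratch
```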
Generative AI is just one tool in the toolbox.
Use it where it makes sense. The old tools still work — don’t throw out your entire toolbox. For example, if you’re solving a linear regression problem, use a linear regression model — not generative AI.
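To make that concrete, here is a toy sketch with scikit-learn and made-up numbers: a plain linear regression handles this kind of problem directly, cheaply, and reproducibly, with no language model anywhere in the loop.

```python
# Toy sketch: a classic linear regression on made-up data, no LLM involved.
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: monthly ad spend (in $k) vs. units sold.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([12.1, 19.8, 31.0, 40.2, 49.9])

model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("forecast at $6k spend:", model.predict(np.array([[6.0]]))[0])
```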
AI adoption works best from the ground up.
Identify the handful of people in your organization who are excited about AI, let them explore and experiment, and then empower them to teach others. If you don’t have one of these people yet, grow one from within.
The data flywheel drives compounding improvement.
Designing your product to collect the right data at the right time is critical, and you need an automated way to feed that data back into how you use and improve your models. Don’t ignore this aspect — it makes the compounding effect happen so much faster.
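As one small illustration of the “collect the right data” piece, here is a sketch of logging each interaction together with the user’s verdict; the field names and file path are illustrative, and in a real product this would feed your eval sets or fine-tuning pipeline rather than a local file.

```python
# Sketch of the feedback leg of a data flywheel: record each model
# interaction plus what the user did with it. Names and path are illustrative.
import json
import time
from pathlib import Path

LOG_PATH = Path("feedback_log.jsonl")


def log_interaction(prompt: str, output: str, accepted: bool,
                    edited_output: str | None = None) -> None:
    """Append one interaction record as a JSON line for later evals or fine-tuning."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "output": output,
        "accepted": accepted,            # did the user keep the suggestion?
        "edited_output": edited_output,  # their correction, if any
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```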
The future of AI lies in discovering new form factors.
The ways we interact with AI are rapidly changing, and many are yet to be discovered. This is one of the most interesting areas to experiment with. For example, I used ChatGPT’s voice mode while on a run to think through and refine the core ideas in this post. I couldn’t imagine getting value out of this form factor even a few months ago – so what will the next one be? (NotebookLM is another great example.)
Conclusion
Some of these takes are going to be wrong, and that’s part of the fun. AI is evolving quickly, and everyone is figuring it out as they go. If you don’t have a perspective yet, it might be worth diving in and exploring these tools for yourself. There’s no better way to form a view than by experimenting firsthand. What are your takes? Send me a note – I’d love to hear them.