Getting Started with AI & LLMs (For Technical Folks)
Recently, someone asked me for recommendations on how to get started with AI product building, and I realized I had a lot of thoughts here, mainly because I’m way down the rabbit hole. I figured this would be useful for others too, so I’m sharing my structured thoughts here.
If you’re somewhat technical and trying to get your head around AI — whether you’re a software engineer, data scientist, or product builder — this is a mix of foundational and hands-on technical resources to help you get started. I’ve included core learning materials, a few blog posts I’ve written, and links to other useful reads that helped me make sense of things.
There’s a lot of information out there, but the most important thing is to just start somewhere and experiment. The best insights come from hands-on learning rather than just reading. Don’t worry: if you dig in somewhere and stay curious, you’ll figure it out.
📚 Core Learning Resources
🎓 Generative AI for Everyone (DeepLearning.AI)
Andrew Ng is one of the best teachers out there — he has a way of explaining things clearly and concisely without unnecessary fluff. This course is a broad but accessible introduction to generative AI — what it is, how it works, and how it’s being applied in real-world scenarios. It’s geared towards technical professionals who want a solid conceptual foundation before diving into hands-on work.
💡 ChatGPT Prompt Engineering for Developers
This 1.5-hour course, co-created by Andrew Ng and OpenAI’s Isa Fulford, is one of the fastest ways to learn prompt engineering. It’s practical, with hands-on examples and interactive Jupyter notebooks. If you’re wondering how to get better outputs from models like GPT-4 or Claude, this will immediately level up your ability to craft effective prompts.
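To give a flavor of what the course covers, here is a small sketch of two habits it drills: state the task explicitly, and wrap the input text in delimiters so instructions and data can't get confused. The function and the delimiter choice here are my own, not from the course; the resulting string is what you'd send to a model API.

```python
def build_prompt(task: str, text: str) -> str:
    """Assemble a prompt: explicit instructions first, then the
    untrusted input wrapped in delimiter tags."""
    return (
        f"{task}\n"
        "The input is delimited by <input> tags.\n"
        f"<input>{text}</input>"
    )

prompt = build_prompt(
    "Summarize the review below in one sentence.",
    "The battery lasts two days, but the charger died in a week.",
)
print(prompt)
```

Delimiters matter because without them, a sentence inside the input like "ignore the above" can read as an instruction rather than data.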
🛠️ A Hacker’s Guide to Language Models (Jeremy Howard)
This one-hour video by Jeremy Howard is a great technical breakdown of how LLMs work. He explains the core concepts behind modern language models and how they can be applied effectively. If you’re a software engineer or data scientist who wants a deeper, under-the-hood understanding, this is worth watching.
🔍 Understanding and Applying Text Embeddings (DeepLearning.AI)
This is a bit more specialized, but the person who originally asked me about this had a classification problem in mind, so I’m including it here. Embeddings are a key concept in modern NLP — they let AI models capture relationships between words, phrases, and documents. If you’re working on clustering, classification, or search, learning about embeddings is worth your time.
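To make the idea concrete: an embedding is just a vector, and "related" means "close", usually measured with cosine similarity. The toy sketch below is my own, not from the course — the three-dimensional vectors are made up for illustration (a real embedding model returns hundreds of dimensions), but the nearest-neighbor classification step is exactly what you'd do with real ones.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-d "embeddings" standing in for real model output.
labels = {
    "refund request":   [0.9, 0.1, 0.0],
    "billing question": [0.7, 0.3, 0.2],
    "bug report":       [0.1, 0.9, 0.3],
}
query = [0.85, 0.15, 0.05]  # e.g. the embedding of a new support ticket

# Classify by picking the label whose embedding is closest to the query.
best = max(labels, key=lambda name: cosine_similarity(query, labels[name]))
print(best)
```

That `max` over similarities is a one-line nearest-neighbor classifier — often a surprisingly strong baseline before reaching for anything fancier.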
📝 More Useful Reads
These are all personal reflections on resources and ideas that I found interesting. I read and flag tons of material, but when something really stands out, I’ll write a blog post to memorialize it.
Here are some relevant ones, but my blog has 50+ other posts if you want to dig in.
- Prompt Engineering Quick Start – A practical overview of techniques for crafting effective AI prompts.
- A Brain Dump on How I Use AI Tools – A walkthrough of the different ways I get value from AI tools in my workflow.
- Anthropic’s Cookbook – A great collection of examples and demos for using AI APIs effectively.
- Using AI Without Overthinking It – A practical guide by Ethan Mollick on integrating AI effectively into workflows.
- How Large Language Models Work (3Blue1Brown Video) – A high-level yet visually intuitive breakdown of LLMs.
- Key Takeaways from Jason Liu’s Podcast on RAG Pipelines – Insights on retrieval-augmented generation (RAG), a method for improving AI retrieval accuracy.
- Anthropic Demo for LLM Text Classification – How Anthropic used large language models for classification and how that compares to classical approaches.
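Since RAG comes up in a couple of the links above, here is a deliberately tiny sketch of the pattern: retrieve the most relevant documents, then augment the prompt with them before generation. Everything here is my own toy construction — retrieval is naive word overlap rather than the embedding search a real pipeline would use, and the final string would be sent to a model API.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared words with the query (a crude stand-in
    for embedding-based search) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Stuff the retrieved context into the prompt before generation."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
]
print(build_rag_prompt("How long do refunds take?", docs))
```

Swapping the word-overlap ranking for embedding similarity (as in the embeddings course above) is most of the distance between this toy and a basic production RAG setup.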
👥 People Worth Following
One of the best ways to stay current is by following practitioners who are actively experimenting with and writing about AI. Many of them share insights, patterns, and experiments before they make it into courses or formal publications.
Twitter/X is probably the best place to track AI developments. There are even curated lists of AI experts worth following.
👉 Here’s a good one.
Even just clicking into these links and seeing what people are discussing can help guide your learning path. Engaging with content there is valuable, but it’s also a firehose of information, so you have to be deliberate about where you spend your time.
Here are some good blogs from expert practitioners as well:
- Eugene Yan – Writes about AI systems, machine learning deployment, and applied ML strategies.
- Jason Liu – Specializes in RAG pipelines and retrieval-based AI systems.
- Simon Willison – Shares deep insights on AI tooling, prompt engineering, and real-world applications.
- Ethan Mollick – Writes about AI’s impact on business, work, and education.
- Developer Relations from OpenAI, Anthropic, and Google DeepMind – Many DevRel engineers at these companies post cutting-edge AI techniques, prompt engineering insights, and hands-on demos.
🎯 Final Thoughts
There’s a ton of material out there, but the most important thing is to just start experimenting. AI isn’t something you learn by only reading about it — you learn by applying it to real problems, testing ideas, and iterating.
✅ A Few Good Ways to Get Started:
- Try out some of the hands-on courses linked above.
- Play around with an AI model, whether through OpenAI’s API, Anthropic’s Claude, or Google’s AI tools.
- Follow a few people in the space and see what insights they’re sharing.
- Run a small experiment — build a quick classifier, use an AI coding tool, set up a summarization pipeline, or just open a chat interface in the browser and play around a bit.
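For the summarization-pipeline idea in that list, here is the non-API half sketched out, under my own assumptions: long documents get split into chunks that fit a model's context budget, each chunk gets summarized separately, and the summaries get combined (the "map-reduce" pattern). Only the chunking step is shown; the per-chunk model calls are left out.

```python
def chunk_text(text: str, max_words: int = 50) -> list[str]:
    """Split text into word-bounded chunks; each chunk would be
    summarized by a model separately, then the partial summaries
    combined in a final pass."""
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

article = "word " * 120  # stand-in for a long document
chunks = chunk_text(article, max_words=50)
print(len(chunks))  # 120 words at 50 per chunk -> 3 chunks
```

Even a throwaway script like this teaches you the practical constraints (context windows, chunk boundaries) faster than reading about them.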
The field is changing fast, but that also means there’s a huge advantage for those who just dive in and start learning by doing.
If you have any questions or want to bounce around ideas, reach out! Always happy to chat.