Common Maxims - LLM Remix
Today I got access to OpenAI’s o1 pro, a specialized model within the ChatGPT Pro lineup, designed for complex reasoning tasks. It’s particularly effective f...
One corner of AI Twitter that’s been catching my attention is where people share prompts designed to get ChatGPT to reflect on specific things about you. The...
I took the Reasoning with o1 short course led by Colin Jarvis at OpenAI. At just over an hour long, it’s a concise introduction to a few things that o1 is pa...
OpenAI recently released their o1 model, and I’ve been following the early feedback closely. A tweet from Deedy Das caught my attention, stating that o1 Prev...
Andrej Karpathy had a question: What would the Founding Fathers think about America today? He couldn’t find a book that explored it, so he built one himself ...
I love finding content that’s easy to share with people curious about AI but unsure where to start. Ethan Mollick’s post, Good Enough Prompting – Don’t Make ...
Steven Johnson’s essay - You Exist in the Long Context - opens with something incredible: he took his 400-page book The Infernal Machine, wrote a 400-word pr...
One of my main beliefs about large language models is that they enable people to learn interesting concepts on their own terms, in their own ways, unlocking ...
This week, Google released Gemini 2.0 Flash, which introduces two standout features. You can share your screen or a video in real time and interact with the ...
Recently, I watched a fantastic 3Blue1Brown video on large language models that does an excellent job of demystifying the technology. The video is under 10 m...
In my last post, I shared a detailed plan for swapping assets between a taxable and a tax-deferred account to optimize tax efficiency. I used an LLM to buil...
The other day at the playground, I got into a conversation with my friend Nick, who knows a lot about investing. We started talking about our portfolios, and...
I recently listened to Stanley Druckenmiller recount his British Pound trade on a podcast, and it really stuck with me. It’s a great example of how clear evi...
A recent episode of the How AI Is Built podcast, hosted by Nicolay Gerold and featuring Max Buckley from Google, explored how to improve Retrieval-Augmented ...
I spend a lot of time thinking about how to apply ML and LLMs to real-world problems. A recent episode of the Dwarkesh Podcast featuring Gwern Branwen pushed...
The recent Latent Space podcast with Erik Schluntz, a member of the technical staff at Anthropic, explored agents — programs powered by Large Language Models...
Today, I came across a tweet by Alex McCaw asking what topics young adults should learn but aren’t typically taught. This sparked an idea: what could large l...
I’m excited about using AI tools for self-guided learning — it’s one of my highest-conviction use cases for this technology. I wrote about this before, but t...
In my previous post, I shared how I use LLMs for self-directed learning. Here’s an example that shows a different approach: using rough notes to guide an LLM...
I’ve typically leaned on classical machine learning tools like scikit-learn and XGBoost for classification tasks. I recently found a demo notebook from Anthr...
Anthropic’s Cookbook is a collection of Jupyter notebooks that teach foundational techniques for working with LLMs. These examples go beyond just using Claud...
AI is evolving quickly, with Generative AI and Large Language Models drastically expanding what’s possible and who can tap into its potential. I’m often aske...
This article from Hamel Husain is the highest-signal post I’ve read on building evaluations for LLM-based applications. I encourage you to spend 20 minutes r...
Twitter is the absolute highest-signal source for learning about leading-edge LLM advancements – if you follow the right people. That said, it’s also overwhelming a...
Before we dive in, take 10 minutes to watch Anthropic’s demo video. Seeing the tool in action is way more valuable than any written summary — mine included.
Varun Godbole and his team at Google recently released an excellent prompt tuning playbook, packed with practical advice. Varun, a core contributor to Gemini...
Jason Liu creates incredibly high-signal AI and LLM content online. I try to read or listen to everything he creates. His recent podcast appearance on TwiML ...
Clear communication is critical for effective teams. Using simple, precise language helps bridge gaps across roles. Clear language enables everyone to contri...
I’ve written before about some key themes I see in building with AI: the importance of creativity and curiosity, the increasing accessibility of tools, and m...
Check out my Halloween Candy Calculator!
If you have an hour, click the link below and then just close out my website. If you don’t have an hour, you can read on below, but it won’t be as good.
Anthropic created a set of Python notebooks detailing how to create prompt evaluations. The GitHub repo is here. If you’re using a language model in your app...
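The core idea behind a prompt evaluation is simple: run each test case through the model and grade the output against an expected answer. Here is a minimal sketch of that loop — the `run_prompt` stand-in and the exact-match grading rule are my own illustrative assumptions, not Anthropic’s implementation:

```python
def run_prompt(question: str) -> str:
    # Stand-in for a real model call; returns canned answers for the demo.
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
        "Largest planet?": "Saturn",  # deliberately wrong, to show a failing case
    }
    return canned[question]

def exact_match_eval(cases: list[dict]) -> float:
    """Grade each case by case-insensitive exact match; return the pass rate."""
    passed = 0
    for case in cases:
        output = run_prompt(case["input"])
        if output.strip().lower() == case["expected"].strip().lower():
            passed += 1
    return passed / len(cases)

cases = [
    {"input": "What is 2 + 2?", "expected": "4"},
    {"input": "Capital of France?", "expected": "Paris"},
    {"input": "Largest planet?", "expected": "Jupiter"},
]
print(f"pass rate: {exact_match_eval(cases):.0%}")  # 2 of 3 pass
```

In practice the grading step is where the real work lives — exact match only works for short, deterministic answers; fuzzier tasks need rubric- or model-based graders.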
I’ve been pretty far down the rabbit hole of Large Language Models for a while, yet there’s still so much for me to learn. I feel so early in the journey. At...
The cost to prototype custom software that solves niche problems can be substantially lower with LLM capabilities. Aspects of a software system that previous...
I’ve written before that self-guided learning is an area where LLM-based applications excel. It’s never been easier to simply open up ChatGPT and have a conv...
In software engineering, we have the tendency to invent complex terminology to describe relatively simple concepts. I’ve experienced that as I’ve learned mor...
There’s a video making waves this week in the AI rabbit hole community. If you haven’t yet taken the time to fully understand what’s become possible in the l...
Note: I’m trying something new in this post. I’ve included a PDF document at the bottom that contains some full input and output prompts from the experiments...
How we can use information found in unstructured data to drive investment ideas is an area I’m really interested in. When used appropriately, language models...
I’ve written in the past on how I love Ethan Mollick’s idea of “always inviting AI to the table”. He wrote the following in this post:
Yesterday I wrote about how long context windows are game-changers, enabling us to solve problems that were essentially unsolvable before.
Google built an LLM with a large input context window named Gemini. A context window is the amount of text an LLM can consider at once when generating a resp...
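To make the context-window idea concrete, you can roughly estimate whether a piece of text fits before sending it to a model. The sketch below uses a crude words-to-tokens heuristic — the ~0.75 words-per-token ratio and the window size are illustrative assumptions, not exact figures for any specific model; real counts require the model’s tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: English prose averages roughly 0.75 words per token,
    # so tokens ≈ words / 0.75. Use the model's tokenizer for exact counts.
    words = len(text.split())
    return int(words / 0.75)

def fits_in_context(text: str, window_tokens: int) -> bool:
    """Check whether text plausibly fits in a window of the given size."""
    return estimate_tokens(text) <= window_tokens

# A 400-page book at ~300 words/page is ~120,000 words, ~160,000 tokens,
# which fits comfortably in a hypothetical 1M-token window:
book = "word " * (400 * 300)
print(estimate_tokens(book))               # roughly 160,000
print(fits_in_context(book, 1_000_000))    # True
```

This is why large windows are a step change: whole books, codebases, or document sets can be considered at once instead of being chopped into fragments.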
a photo realistic image of a dad playing soccer with his daughter on a lake michigan beach
When I’m trying to get people excited about AI and LLMs, showing them audio and video stuff usually does the trick. A demo I saw early on that blew me away w...
Building Custom GPTs is an interesting way to create customized LLM chat functionality with limited effort. I’ve observed that most people using LLMs tend to...
Multi-modal LLMs – and specifically the incorporation of vision models – are such an amazing unlock for what is possible. With limited prompting and no custom...
When ChatGPT first came out, I remember showing my family funny poems that it could write. I remember the novelty of telling ChatGPT to make the poem rhyme, ...
This post was written primarily with ChatGPT. Using ChatGPT was part of the experiment. Can I build something moderately interesting and write a post about i...
I previously wrote about the traits of a good leader in a startup. I spent some time reflecting on the key traits I value when hiring new teammates. This isn...
I like to read books. I’m always looking for ways to discover new titles. I mainly use Goodreads and Twitter as a discovery mechanism, and I use Goodreads to...
I want to build my Prompt Engineering skills. As I highlighted in a previous post, I’m convinced that LLMs will be a huge part of the future of work. Underst...
I finished reading Shane Parrish’s new book Clear Thinking. It has quite a few nuggets of wisdom that I’d like to reference in the future. I’m trying to get ...
What is this? I used ChatGPT to generate a children’s book about a worm learning a valuable life lesson. This was just a fun way to learn more about the tech...
I love Ethan Mollick’s ‘Jagged Frontier’ concept with LLMs. We don’t exactly know what LLMs are great at, and what they are bad at — yet. To figure this out,...
I recently finished reading Co-Intelligence by Ethan Mollick. I was very eager to read this book, as Ethan continually posts fantastic LLM content on Twitter...
I’ve spent most of my career in small-ish VC-backed startup companies, with between 10 and 150 people. Things tend to move and change fast in small companies...
I’ve recently developed a desire to write more, and wanted to dig into why I am feeling this way. What are my reasons and goals? A big part of this desire is roo...
There is an endless amount of stuff in the world to learn, and not enough time to learn it. The demands of modern work nudge us to build skills in a very spe...
Explaining how things work to different audiences is a topic that interests me. Can you put yourself in the mind-space of the person you’re speaking with — a...
Last month, during a visit to the Apple Store, I experienced an unexpected nudge towards being present — a concept I encounter frequently in books I’ve read....
I want to be a great dad. It’s not easy. Being a dad is the biggest privilege and opportunity in my life. As a parent, you have huge responsibility in the ac...
Everything competes for your attention. The most recent thing seems like the most important. Without a framework for what you pay attention to, what hap...
At the beginning of 2013, I made a resolution to learn more things outside of Technology. In the past, I’ve been a semi-active reader, finishing about 15 boo...
Why default-resiliency is not the best option
‘What should we tell them?’ How about the truth.
Thoughts on ‘Product strategy means saying NO’
Thoughts on being malleable instead of magnetic
Thoughts on doing ‘things about the thing’ instead of the thing itself
… can be a dangerous attitude to have when trying to solve a problem.
Yesterday, I called Fidelity to get help with my account. Before I was connected to a human, I was asked to enter my username, and then my password using the...
I often cringe when I hear people say they are ‘getting out of the building’ to test their product idea. At the core, ‘getting out of the building’ is a prox...
When you work at a big company, your role is specialized. On a day-to-day basis, you don’t have to venture far from your ‘comfort zone’ of core skills to acc...
There is not a linear relationship between the complexity of a product’s features and the product’s value to an end user. The graph of complexity versus valu...
I am a new student of the Business Model Generation. I’m working to understand and apply the tools and techniques outlined by key influencers such as Steve B...
I’ve always appreciated a good non-fiction book. My preferred reading ‘categories’ are behavioral economics (e.g. Ariely, Thaler, Dubner, Gladwell), analyses...
People have told me that it isn’t easy to keep up a blog. Now I know what they mean! Somehow, I’ve gone an entire month without posting anything — time seems...
Ok, I admit it…this may qualify as the nerdiest / lamest name for a blog post…ever…in the history of the blogosphere…but hear me out on this one, because I’m...
Over the last 5 years since I began working full-time, I have developed a strong interest in investing in the stock market. I started investing in mutual fun...
The bloggers over at Newly Corporate are asking “What 1 or 2 CORE traits get you noticed at work or help you succeed in your day-to-day operations” Here is t...
Every day, a software project dies. Some die a slow, painful, expensive, death. Others die a quick, not painless, and relatively embarrassing death. As Sof...