Building a Game From a 400-Page Book in 400 Words
Steven Johnson’s essay “You Exist in the Long Context” opens with something incredible: he took his 400-page book The Infernal Machine, wrote a 400-word prompt, and used a large language model to turn it into an interactive detective game. Players step into the role of a forensic detective, explore the story’s details, and solve the mystery, all within a web browser. The AI not only retained all the facts from the book but also guided players with historical context, improvised new scenes, and kept the experience grounded in the book’s timeline.
If you don’t do anything else with this post, go try the game. It’s remarkable to see what’s possible with just a prompt and the right data.
While many people still use LLMs like search engines — asking one-off questions — they miss the bigger potential. With the right data and format, these models can tackle much larger, more complex problems. Instead of just answering questions, they can reason across large datasets, make connections, and simulate scenarios in ways that were unthinkable a few years ago.
Here are my key takeaways from the essay:
Context windows expanded faster than people realize
- A few years ago, models like GPT-3 had tiny context windows (~1,500 words), making it hard for them to remember earlier inputs or reason across large datasets.
- Now, with models like Gemini, context windows are roughly a thousand times larger, on the order of millions of tokens. That means they can process entire books, archives, or massive datasets at once (the sketch below gives a rough sense of the scale). This shift happened so quickly that most people haven’t caught up to what’s now possible.
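To make that scale concrete, here is a minimal Python sketch that counts the tokens in a full manuscript and compares the count against a few representative context windows. It assumes the tiktoken package and its cl100k_base encoding; Gemini uses its own tokenizer and the window sizes below are approximate, but the orders of magnitude hold.

```python
# Count the tokens in a manuscript and check which context windows it fits in.
# Assumes: pip install tiktoken. The file path is a hypothetical placeholder.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("manuscript.txt", encoding="utf-8") as f:
    text = f.read()

n_tokens = len(enc.encode(text))
print(f"Manuscript: {n_tokens:,} tokens")

# Approximate context windows, in tokens, across model generations.
windows = {
    "GPT-3 (2020)": 2_048,
    "GPT-4 (2023)": 8_192,
    "Gemini 1.5 Pro (2024)": 1_000_000,
}
for name, size in windows.items():
    verdict = "fits" if n_tokens <= size else "does not fit"
    print(f"{name}: {size:,} tokens -> manuscript {verdict}")
```

A 400-page book runs to very roughly 150,000 tokens, so it comes nowhere near fitting in GPT-3’s window yet occupies only a fraction of Gemini’s.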
These tools don’t just retrieve information — they draw connections
- Johnson asked the model to analyze his book for how he used suspense. The model didn’t just highlight passages; it explained the techniques, cited direct quotes, and connected them to events later in the story.
- This shows that with the right inputs, LLMs can reason and make connections across large datasets, going far beyond simple search or summarization.
A practical example: Johnson uses LLMs as a personal archive
- Johnson uploaded years of his writing, including books, articles, and notes, into a long-context model. This allowed him to interact with his work in new ways.
- For example, while thinking about the theme of memory, he asked the model for relevant ideas from his past writing. It surfaced the story of patient H.M., which became a key part of his essay.
- By turning decades of material into an accessible archive, he’s able to ask questions, find patterns, and explore connections he might have forgotten; a minimal version of this workflow is sketched below.
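Johnson doesn’t publish his exact setup, but the underlying pattern is simple: concatenate the archive into one long prompt and ask questions against it. Here is a minimal sketch of that workflow using Google’s google-generativeai Python client; the model name, directory layout, and question are illustrative assumptions, not details from the essay.

```python
# Minimal "personal archive" pattern: load a directory of writing into a
# single long-context prompt and query it.
# Assumes: pip install google-generativeai, GOOGLE_API_KEY set in the
# environment, and plain-text files under my_archive/ (hypothetical layout).
import os
from pathlib import Path

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

# Label each document with its filename so the model can say where an
# idea came from when it answers.
archive = "\n\n".join(
    f"--- {path.name} ---\n{path.read_text(encoding='utf-8')}"
    for path in sorted(Path("my_archive").glob("*.txt"))
)

question = "Where have I written about memory? Cite the source files."
response = model.generate_content(
    f"Here is an archive of my writing:\n\n{archive}\n\n{question}"
)
print(response.text)
```

The same pattern scales from a personal archive to the organizational corpora discussed below; only the inputs change.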
For organizations: long-context models offer a new kind of intelligence
- Johnson envisions how long-context models could transform decision-making within companies by processing thousands of internal documents — reports, meeting notes, customer feedback — at once.
- These tools can act like an additional, incredibly well-informed voice at the table, helping organizations see patterns or connections that might otherwise go unnoticed.
- For example, Johnson suggests these tools could support scenario planning by simulating the downstream consequences of decisions. By grounding the model in detailed organizational data, companies can use it to explore complex outcomes and plan more effectively.
- The effectiveness of these models depends on the quality of inputs. Johnson highlights that well-curated and annotated archives will be critical for organizations to maximize their value.
If you’re curious about what’s possible, I highly recommend reading the essay and trying the game Johnson created. It’s a striking example of how these tools can move beyond answering questions to solving problems and exploring ideas in ways that feel like magic. More importantly, it gives you a useful lens on context windows: why they matter and how you might use them effectively.