Co-Intelligence: Takeaways
I recently finished reading Co-Intelligence by Ethan Mollick. I was very eager to read this book, as Ethan continually posts fantastic LLM content on Twitter. The content is a really good mix — links to technical papers, links to new tools, insights he’s had through his work, etc. — overall just a fantastic way to stay informed in this incredibly fast-moving space.
I enjoyed the book, and wanted to share a few notes / insights / ideas that I found interesting:
- There is understandable concern and active discussion around LLM hallucination, but hallucination is essentially the way LLMs can create new ideas and find new connections. Without hallucination, they wouldn’t be able to ‘create new ideas’.
‘This is the paradox of AI creativity. The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful.’
- When prompting an LLM, giving it some detailed information about the desired persona is an effective prompting technique to generate better results. Example:
‘You are an expert at marketing. When asked to generate slogan ideas you come up with ideas that are different from each other, clever, and interesting. You use clever wordplay. You try not to repeat themes or ideas’
- LLMs can help with creative tasks, but by default they tend to give the ‘average answer’. You can shift this by giving the LLM detailed information about its desired persona. Tell the LLM exactly what you want it to be — don’t be generic.
‘We need to push the AI from an average answer to a high-variance, weird one. We can do this again by telling the AI who it is. Force it to give you less likely answers, and you will find more original combinations.’
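In chat-style LLM APIs, the persona technique above is usually implemented by placing the persona text in a system message ahead of the actual request. Here is a minimal sketch, using the book’s marketing-expert persona verbatim; the message format shown is the common OpenAI-style structure, used here purely as an illustration rather than a call to any specific API.

```python
# Sketch of persona prompting: the persona is front-loaded as a "system"
# message, and the actual task follows as a "user" message. Any chat-based
# LLM API accepts an equivalent structure.

PERSONA = (
    "You are an expert at marketing. When asked to generate slogan ideas "
    "you come up with ideas that are different from each other, clever, and "
    "interesting. You use clever wordplay. You try not to repeat themes or ideas."
)

def build_persona_messages(persona: str, request: str) -> list[dict]:
    """Return a chat-message list that front-loads a detailed persona."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": request},
    ]

# Hypothetical request, just to show the shape of the resulting messages.
messages = build_persona_messages(
    PERSONA, "Generate five slogans for a coffee shop."
)
```

The point is simply that the persona lives in its own message, separate from the task, so the same detailed persona can be reused across many requests.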
- A different set of human knowledge / expertise may emerge as the most valuable for effectively using LLMs (e.g. a humanities background instead of a science or technology background).
‘We need people who have deep or broad knowledge of unusual fields to use AI in ways that others cannot, developing unexpected and valuable prompts and testing the limits of how they work.’
- The value of having an openness to bringing AI to ‘all of our work’, so that we can determine where this emergent technology has the highest impact. This connects to the concept of the ‘Jagged Frontier’ — the idea that we need to actively discover what LLMs are good at and what they are not good at. We need to build this value map by experimenting with usage across many of our jobs and tasks.
‘That takes time and experience, which is why it is important to stick with the principle of inviting AI to everything, letting us learn the shape of the Jagged Frontier and how it maps onto the unique complex of tasks that comprise our individual jobs’
- Jobs are going to change meaningfully as we figure out how to integrate LLMs — and companies need to have an explicit plan to take advantage of this change, or they will get left behind. This requires behavior, policy, and incentive changes.
‘First, they need to recognize that the employees who are figuring out how best to use AI might be at any level of the organization, with any sort of history or past performance record. No company hired employees based on their AI skills, so AI skills might be anywhere.’
‘Organizations should highly incentivize AI users to come forward, and expand the number of people using AI overall. That means not just permitting AI use but also offering substantial rewards to people finding substantial opportunities for AI to help.’
- LLMs could have a very positive impact on how we teach and provide education, but it might require a dramatic shift in how we structure learning, classroom time utilization, etc. The author referenced a very interesting paper showing differences in student performance when given one-to-one tutoring. Building custom tutors is well within the realm of what LLMs can do now, so how might we replicate this success with LLMs at scale?
‘Benjamin Bloom, an educational psychologist, published a paper in 1984 called “The 2 Sigma Problem.” In this paper, Bloom reported that the average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment.’
- Treating an LLM as a conversational partner, and giving it explicit, detailed instructions, produces much better results than just asking general questions. One such technique is chain-of-thought prompting, which the author describes in the book:
‘One approach, called chain-of-thought prompting, gives the AI an example of how you want it to reason, before you make your request. Even more usefully, you can also provide step-by-step instructions that build on each other, making it easier to check the output of each step (letting you refine the prompt later), and which will tend to make the output of your prompts more accurate.’
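The two ingredients the quote describes — a worked example of the desired reasoning, followed by explicit numbered steps — can be composed into a single prompt string. Here is a minimal sketch; the shipping-cost task, the pen-pack example, and the step wording are all hypothetical illustrations, not anything from the book.

```python
# Sketch of chain-of-thought prompting: show the model a worked example of
# the reasoning style you want, then break your own request into numbered
# steps so each one can be checked (and refined) independently.

REASONING_EXAMPLE = (
    "Q: A store sells pens in packs of 12. How many packs for 30 pens?\n"
    "Reasoning: 30 / 12 = 2.5, and partial packs can't be bought, "
    "so round up to 3 packs.\n"
    "A: 3"
)

def build_cot_prompt(example: str, task: str, steps: list[str]) -> str:
    """Compose a prompt: worked reasoning example first, then numbered steps."""
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, start=1))
    return (
        f"{example}\n\n"
        f"Now, {task}\n"
        f"Work through these steps, showing each one:\n{numbered}"
    )

prompt = build_cot_prompt(
    REASONING_EXAMPLE,
    "estimate shipping cost for a 7 kg parcel.",
    [
        "Identify the weight bracket.",
        "Look up the rate for that bracket.",
        "Multiply and state the total.",
    ],
)
```

Because each step appears explicitly in the output, you can see exactly where the model went wrong and adjust that one step in a later prompt, which is the refinement loop the quote mentions.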
- The possibility of increased LLM utilization creating a ‘training gap’ in the workforce for new employees. Many new employees learn on the job from other experts in their field (apprenticeships, internships, etc.). This mentorship allows for the transfer of knowledge and skills, but also provides other real benefits (leadership experience for more senior employees, team-building opportunities, company-specific knowledge transfer, etc.). If more of this learning pivots to individual usage of LLMs, how do these residual benefits to organizations degrade, and what impact does that degradation have?
- The real impact LLMs will have on content. It’ll be incredibly easy to fabricate content (images, videos, sound bites), and incredibly hard to distinguish real from fake. It’s going to be a weird few years.
‘It is already impossible to tell AI-generated images from real ones, and that is simply using the tools available to anyone today. Video and voice are also trivially easy to fake. The online information environment is going to become completely unmanageable, with fact-checkers overwhelmed by the flood.’
Finally, I loved this quote from the book. It forces you to step back and consider how much amazing stuff has been built in the past, and is being built right now. It’s only accelerating. Somewhat frightening, but incredibly awe-inspiring.
‘Humans, walking and talking bags of water and trace chemicals that we are, have managed to convince well-organized sand to pretend to think like us’