Trying Gemini Flash 2.0’s Screen Interaction
One of my main beliefs about large language models is that they enable people to learn interesting concepts on their own terms, in their own ways, unlocking entirely new paths toward knowledge. The tools are evolving quickly, with better and better user experiences making this possible. Gemini Flash 2.0’s new screen-sharing feature feels like a step in that direction, and I wanted to see how well it worked in practice.
To test it, I set up a simple experiment:
- I started by using ChatGPT in voice mode to create a hypothetical mutual fund and ETF portfolio, complete with tickers and percentages.
- Next, I entered the portfolio into Portfolio Visualizer, a free tool my friend Nick pointed out to me. It lets you input a portfolio and build detailed reports about its composition, market factors, and performance metrics. It’s an awesome resource for anyone looking to dive deeper into their finances.
- With the Portfolio Visualizer report ready, I opened a screen-sharing session with Gemini Flash 2.0.
- I asked the model questions about the report, like “What does the Sharpe ratio mean?” and “How do these market exposures affect the portfolio?” (There’s a quick sketch of the Sharpe ratio calculation after this list.)
- As I got responses, I refined my questions to explore specific parts of the data in more detail.
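Since the Sharpe ratio came up, here’s a minimal sketch of how it’s calculated: the average excess return over a risk-free rate, divided by the volatility of those excess returns. The returns and risk-free rate below are made-up placeholders for illustration, not figures from my actual report:

```python
import numpy as np

# Hypothetical annual portfolio returns (placeholders, not the numbers
# from my Portfolio Visualizer report).
returns = np.array([0.12, -0.04, 0.09, 0.15, 0.03])
risk_free_rate = 0.02  # assumed annual risk-free rate

# Sharpe ratio: mean excess return divided by the sample standard
# deviation of those excess returns.
excess = returns - risk_free_rate
sharpe = excess.mean() / excess.std(ddof=1)
print(f"Sharpe ratio: {sharpe:.2f}")
```

Higher is generally better: it means the portfolio earned more return per unit of risk taken.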
What stood out to me was how much this approach reduces friction. Instead of jumping between tabs, typing out questions, or copying and pasting data, I could just flow naturally. It felt like having someone sitting next to me, pointing things out and guiding me through the details. Some of the answers weren’t perfect and could have used more depth, but several of the insights the tool offered were still genuinely impressive. I’m confident that this kind of interaction will only get better over time.
Of course, sharing your screen with an AI might feel strange at first. There’s an initial discomfort in giving it access to what you’re working on. But once you’re in, the experience is very natural. For people who are open to trying this kind of interaction, I think it offers a real edge — whether you’re learning new concepts, iterating on ideas, or just trying to understand something quickly.
This experiment left me impressed with the potential of tools like this for self-directed learning. If you’re curious, I’ve included a recording of my session so you can see exactly how it worked. It’s free to try out right now, so it’s worth experimenting with if this kind of feature fits into your workflow.
Here’s the video walking through the above: