09-23-2025
Agentic Interfaces
We can build agents that write code, browse the web, and manage our calendars. But the question remains: how do we speak to them, and how should they speak back?
Earlier this September, dozens of curious minds filled Paradigm's San Francisco living room to share lessons, doubts, and hopes for the interfaces shaping our future.
Text hits a ceiling
"Text is very expressive. You can actually get a lot in and out. But it's also quite limiting."
Ilan Bigio, who works on applied research at OpenAI, puts it simply. Text gave us the first breakthroughs in talking with machines, but falls short when structured workflows require complex states and changing dependencies.
This issue goes beyond AI. In 1979, the linguist Michael Reddy described the "conduit metaphor": we talk as if language carries meaning intact from speaker to listener, but meaning is always reconstructed on the other end, and some of it is lost in translation.
Once we acknowledged the limits of text, the next question emerged: where else should agents live?
Context is the interface
The best agents move beyond isolated chat dialogues into surfaces we already trust.
"Taking the chatbot but putting it into another interface (like a spreadsheet) then becomes really powerful." — Anna Monaco, Paradigm
"I feel like a browser [is the future] … because you can do an infinite number of things on a browser and we're all using browsers." — Liam Matteson, Browserbase
We're already seeing this shift: Perplexity's Comet lives in your browser, while OpenAI's Operator spins up its own browsers—performing work you'd normally do yourself.
The browser matters for another reason: much of the internet still runs on legacy software without APIs, MCP endpoints, or direct integrations.
Hosted browsers let agents authenticate, click, and complete tasks where crawlers or connectors fail. Context is the bridge to the parts of the internet we'd otherwise leave behind.
Less prompting, more iterating
Once agents inhabit these trusted spaces, how do we direct them? The answer surprised many in the room:
"You actually can get better results just prompting less… half the people at Cursor are just not overthinking these super long prompts." — Jason Ginsberg, Cursor
"Iteration is key. None of the current interfaces fully solve the problem of specifying your intent to the model in the most efficient way." — Silas Alberti, Cognition
This echoes Bret Victor's vision of learnable programming: immediate feedback beats elaborate specification. Let agents fill in details, search, and experiment. Then refine. Quick loops trump long instructions.
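One way to picture that quick loop, with the model call stubbed out by a toy function (`refine_loop` and `toy_agent` are hypothetical names invented for this sketch, not anyone's real API):

```python
from typing import Callable

def refine_loop(agent: Callable[[str], str],
                accept: Callable[[str], bool],
                prompt: str,
                feedback: Callable[[str], str],
                max_rounds: int = 5) -> str:
    """Run a short prompt, inspect the result, and append feedback,
    rather than front-loading one elaborate specification."""
    result = agent(prompt)
    for _ in range(max_rounds - 1):
        if accept(result):
            break
        # Keep the prompt short: add only the feedback this round needs.
        prompt = f"{prompt}\n# feedback: {feedback(result)}"
        result = agent(prompt)
    return result

# Toy stand-in for a real model call: reports how many feedback lines it saw.
def toy_agent(prompt: str) -> str:
    return f"draft-{prompt.count('# feedback:')}"

print(refine_loop(toy_agent,
                  accept=lambda r: r == "draft-2",
                  prompt="summarize the report",
                  feedback=lambda r: "shorter, please"))
```

The point of the shape is that specification lives in the loop, not the opening prompt: each round carries only the correction the last result needed.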
Safety shapes everything
Rapid iteration only works if you can trust what's happening.
Safety hit home when we recalled stories of a Replit agent that wiped a production database, then invented tests to cover its tracks.
"I blame the user in that case… treat agents like you would a new engineer on your team." — Jason Ginsberg
"At a big company with 50,000 engineers, they already don't trust their own. Agents are often onboarded the same way — with minimal privileges." — Silas Alberti
"Nothing is safe right now… assume the agent is a malicious hacker." — Ilan Bigio
Constitutional thinking about AI systems reminds us that trust requires infrastructure: undo modes, minimal permissions, transparency about actions, and visible error states.
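As one possible shape for those guardrails, here's a small Python sketch of an agent wrapper with minimal permissions, a visible action log, and an undo stack. All names (`GuardedAgent`, `act`, `undo_last`) are illustrative, not any real framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardedAgent:
    """An allowlist of permitted actions, a visible log of everything
    attempted, and an undo stack for reversible actions."""
    allowed: set[str]
    log: list[str] = field(default_factory=list)
    _undo: list[Callable[[], None]] = field(default_factory=list)

    def act(self, name: str, do: Callable[[], None], undo: Callable[[], None]) -> bool:
        if name not in self.allowed:
            self.log.append(f"DENIED {name}")  # visible error state
            return False
        do()
        self._undo.append(undo)
        self.log.append(f"OK {name}")          # transparency about actions
        return True

    def undo_last(self) -> None:
        if self._undo:
            self._undo.pop()()
            self.log.append("UNDO")

# Example: the agent may edit a draft but not touch production.
doc = {"draft": "v1"}
agent = GuardedAgent(allowed={"edit_draft"})
agent.act("edit_draft", do=lambda: doc.update(draft="v2"),
          undo=lambda: doc.update(draft="v1"))
agent.act("drop_prod_db", do=lambda: None, undo=lambda: None)  # denied
agent.undo_last()
print(doc, agent.log)
```

The onboarding analogy from the discussion maps directly: the allowlist starts near-empty, like a new engineer's permissions, and the log is what lets a reviewer trust what happened.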
Show the work
Trust is more than just permissions and guardrails. We also build it by showing users the agent at work. This led to one of the evening's most interesting insights:
"We thought people only cared about the final result. But actually part of what's fun about using an agent is the dopamine hit." — Silas Alberti
"There's an entertainment factor, but also an education factor. If people can see what the agent is doing, they can shape it differently the next time." — Anna Monaco
HCI research has shown this for decades: people want to see work in motion. Not log dumps or token streams, but enough signal to build trust and catch errors early. Partial outputs, milestones, "thinking" paths. The process teaches the outcome.
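A minimal sketch of the idea, assuming the agent is written as a generator so a UI can render milestones as they arrive (the steps and names here are invented for illustration):

```python
from typing import Iterator

def run_with_milestones(task: str) -> Iterator[str]:
    """Instead of returning only a final answer, yield milestones
    the interface can show while the agent works."""
    yield f"planning: {task}"
    yield "searching: sources found"
    yield "drafting: partial output ready"
    yield f"done: {task} complete"

for event in run_with_milestones("summarize Q3 report"):
    print(event)
```

The generator shape matters more than the content: partial outputs become first-class values the UI can stream, rather than log lines scraped after the fact.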
The divide widens
Then came a harder question: what happens when some people become fluent with these tools and others don't?
"I'm scared about how big the divide will be between power users and not… agents can already have this effect where the people that know how to use them can just do an insane amount more." — Anna Monaco
"This is probably the most valuable time ever to learn deep skill sets. Above-the-waterline expertise can be immensely leveraged." — Silas Alberti
"With AI, you can learn anything you want as fast as you want." — Ilan Bigio
We're entering an era where the floor is rising for everyone, but so is the gap between fluency levels. Breadth matters more than ever, yet there's still value in going deep.
One path forward is to use agents to scaffold breadth, then choose specific domains where you dig deep.
Agents as mirrors
As the evening wound down, we offered one last thought:
"When we think about the context the agent is collecting, the fallacies it falls into and its chain of thought — in which ways am I the agent? After all, they were modeled after our minds."
Building agents goes beyond directing our machines: it also expands how we see ourselves think, decide, and grow.
To recap our conversation
- Text has limits: Chat is powerful but falls short when you need structure, history, and guardrails.
- Context is the interface: Agents work best when embedded in environments we already trust—browsers, editors, calendars.
- Iterate, don't over-prompt: Quick feedback loops beat elaborate instructions.
- Safety isn't optional: Treat agents like junior engineers—minimal permissions, visible actions, undo modes.
- Show the work: Users want to see agents thinking and acting, not just final results.
- The divide widens: Fluency with AI tools will separate power users from everyone else. Breadth matters; depth still pays.
- Agents are mirrors: Building them teaches us how we ourselves think and decide.
This evening reminded me why I love our work: we're crafting new systems, new ways of spelling out intent with tools capable of reason. Not because we have all the answers, but because we're willing to gather in living rooms and ask better questions together.
A special thanks
To our speakers who were so generous with their time, expertise, curiosity, and laughter: Ilan Bigio (OpenAI), Liam Matteson (Browserbase), Anna Monaco (Paradigm), Jason Ginsberg (Cursor), and Silas Alberti (Cognition).
And to everyone who filled Paradigm's living room that September evening: your questions and insights made this conversation possible.
Until next time,
Flo ᢉ𐭩