Parth Patil on Coding Agents, Building Reid AI, and What It Takes to Operate at the Frontier

"I early withdrew my 401k and just kept burning it on OpenAI API calls," says Parth Patil, AI specialist in Reid Hoffman's office. After getting laid off from Clubhouse in early 2023, Patil spent four months doing nothing but running experiments on GPT-4 — no salary, no plan, just a growing conviction that directing AI systems was a skill the market would eventually pay for. Hoffman hired him to build Reid AI, a functioning digital twin capable of conducting interviews on Hoffman's behalf, built entirely without a software engineering background through vibe coding.

This episode of the Village Global Podcast, hosted by Sam Kirschner, VP at Village Global, covers the evolution of coding agents from the first chatbots to today's multi-agent orchestration, and the specific tactics Patil uses to push these tools past their default ceiling, including how context engineering separates practitioners from casual users.

Listen to the full episode on Apple Podcasts, Spotify, YouTube, or wherever you like to listen.

Follow us on X, LinkedIn, YouTube, Instagram, and TikTok.

Key Insights

The IDE was built for humans. The AI doesn't need it.

Patil lived in Cursor for years. He introduced it to everyone he knew. Then he used Claude Code for the first time and stopped using Cursor the same week.

"What if we don't need the IDE?" he says. "When you're talking to Claude, it's writing all this code — you're not really looking at the code." Cursor is a VS Code fork: 12 years of tooling designed around the assumption that a human would be doing the coding. Reading diffs, navigating file trees, watching the cursor blink. Claude Code and Codex operate through the terminal, not through a GUI. The IDE was built for human eyes; the terminal is native to the machine.

The interface optimized for human coding and the interface optimized for agentic coding are increasingly diverging. Teams building on the former assumption inherit a constraint the technology no longer has.

Your typing speed should be dropping

Patil clocked 85 words per minute a year ago. He's at 70 now and expects the number to keep falling.

He uses Whisper Flow to dictate at 140 words per minute, double his current typing speed. But the case for voice isn't just throughput. Typing caps thinking speed in a way speech doesn't. "Every time you have a typo, you hit the brakes on thinking to fix it," he says. "Speech is closer to thinking speed than typing is."

The binding constraint in any coding agent workflow is the bandwidth at which a human can describe a problem fully. A spoken description of a complex task arrives faster and often richer than a typed one. The LLM cleans up whatever garbled transcript comes out. Patil lets the model handle structure and focuses on coverage and completeness.
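A minimal sketch of that cleanup step, assuming an OpenAI-style chat API; the client, model name, and prompt here are illustrative, not Patil's actual setup:

```python
from openai import OpenAI

client = OpenAI()

# Raw dictation, complete with filler words and transcription noise.
raw = ("okay so the the bug is when users log in with google the session "
       "um expires immediately i think its the the token refresh logic")

# The model handles structure; the speaker only worries about coverage.
cleaned = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite this dictated note as a clear, structured task "
                    "description. Fix transcription errors; keep every detail."},
        {"role": "user", "content": raw},
    ],
)

print(cleaned.choices[0].message.content)
```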

Go to AI with the problem. Not the answer.

The most common mistake Patil sees is people arriving at an AI coding agent with a predetermined solution. They describe what they want built rather than what they're trying to solve. The model obliges and delivers exactly what they asked for — which is often not what they actually needed.

"The shape of the solution might be different from how you've already predefined it," he says. His practice is to describe the problem space first and let the model surface approaches he wouldn't have considered. This is how he ended up using tmux (a terminal multiplexer from 2007) to manage dozens of concurrent AI coding agent sessions. He described his orchestration problem to Codex. The model pulled a solution from its pretraining that Patil had never heard of. It was the right call.

He spent three hours in that conversation before choosing tmux over Zellij, a newer alternative. Choosing the right tool to work with an army of agents was worth the investment.
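A rough sketch of what tmux-based orchestration can look like; the `codex <prompt>` invocation, session names, and tasks are assumptions for illustration, not Patil's actual commands:

```python
import subprocess

def spawn_agent(session: str, prompt: str) -> None:
    """Launch a coding agent in a detached tmux session."""
    # Each agent gets its own named session; tmux keeps it running
    # in the background whether or not anyone is watching.
    subprocess.run(
        ["tmux", "new-session", "-d", "-s", session, f"codex {prompt!r}"],
        check=True,
    )

def list_agents() -> str:
    """List running sessions so any agent can be inspected on demand."""
    result = subprocess.run(
        ["tmux", "list-sessions"], capture_output=True, text=True
    )
    return result.stdout

for i, task in enumerate([
    "fix the flaky auth test",
    "add pagination to the users endpoint",
    "write docs for the export CLI",
]):
    spawn_agent(f"agent-{i}", task)

print(list_agents())
# Check in on one: tmux attach -t agent-1  (detach again with Ctrl-b d)
```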

When your agent hits a wall, the context is polluted

The failure mode is predictable: 25 exchanges into a session, progress has stalled, and the agent is moving in circles. Everyone he talks to tries the same fix first: refine the prompt, try different phrasing, escalate to a smarter model. Patil's diagnosis is different.

"That is a sign that the context window is now polluted with irrelevant information," he says. The agent is holding too much. Every failed attempt, every detour, every correction is still in the context window, diluting the bandwidth available for actual problem-solving. His fix: spawn a fresh agent with a clean context and have it attack the problem from scratch with only the relevant information loaded.

This is the operational core of context engineering. Where prompt engineering focused on the quality of an individual instruction, context engineering is about the composition of everything the model is holding at once. "Models do better when they're singularly focused and everything they need to solve the problem is right there or easily retrievable," he says. The context window is finite. Dead weight in it costs capacity.
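In practice the reset can be as simple as distilling the stalled session into a short handoff note and seeding a brand-new one with nothing else. A sketch under the same OpenAI-style API assumption as above, with invented handoff contents:

```python
from openai import OpenAI

client = OpenAI()

# Distill the polluted session down to only what the next agent needs:
# the goal, confirmed facts, constraints, and dead ends to avoid.
handoff = """\
Goal: make tests/test_sessions.py pass.
Confirmed: failures trace to a timezone bug in session-expiry math.
Constraint: the public API of Session cannot change.
Dead ends (do not retry): mocking datetime.now, widening the expiry window.
"""

# A fresh context holds nothing but the handoff, so the whole window
# goes to the problem instead of to 25 exchanges of failed attempts.
fresh = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": handoff}],
)
```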

Context pollution also explains his preference for CLI tools over MCP servers. Load 25 MCP servers into an agent and it's burning context memorizing tool descriptions for integrations it rarely uses. CLI tools with --help flags solve this with progressive disclosure: the model reads the docs when it needs the tool, not before. More free context means more capacity to think about the actual problem.
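A toy illustration of progressive disclosure: rather than preloading every tool's schema, the agent pulls a tool's documentation into context only at the moment it reaches for that tool. The helper below is hypothetical glue, not part of any agent framework:

```python
import subprocess

def tool_help(tool: str) -> str:
    """Fetch a CLI tool's own docs at the moment of use."""
    result = subprocess.run([tool, "--help"], capture_output=True, text=True)
    return result.stdout or result.stderr

# Zero tokens are spent on tools the agent never touches; when it does
# decide it needs one, the docs are a single subprocess call away.
print(tool_help("git")[:300])
```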

Don't let the corporation cut your learning rate

Patil's advice for getting started isn't "use it for work." Work carries the wrong incentives — your company may not survive, your employer might ban the tools, and if the problem isn't one you intrinsically care about, you won't go fast enough to build real fluency.

"If you waited for someone to write the book, it would already be outdated by the time it's published," he says. His mother, a career software engineer whose entire team was laid off, now uses multimodal models to pre-visualize art projects and explore watercolor techniques. You get good at these tools through something you'd do without being asked.

The harder version is aimed at people inside large organizations where AI tool adoption moves slowly. "If it takes four months to approve Codex at a massive company, the person who spent four months in Codex would never imagine working for that company." The real risk is falling behind so fast the culture gap becomes unbridgeable. "The most important thing is your learning rate."

AI is a power user technology masquerading as a universal one

The advantage AI generates is not distributed evenly, Patil argues.

"The metacognition — thinking about how to apply intelligence — is a compounding advantage that only goes to power users," he says. The floor of AI capability is rising fast; the ceiling is rising faster. Better models amplify the strategies of people who've already developed sophisticated approaches to using them. The gap between a top-tier practitioner and a casual user widens with each release.

The top 200 players in StarCraft and the 98th percentile are playing a cognitively different game. The same divergence is opening up between practitioners who've built real context engineering workflows and those still treating these tools as smart autocomplete. "The top 1% experience of wielding AI is still unfolding," he says. "There's no textbook."

The constraint in 2023 was runway. He spent his savings on tokens and rebuilt from there. Today, managing 45 agents across projects, he says the only thing slowing him down is the need to sleep.

About the Host

Sam Kirschner is Vice President at Village Global, investing in ambitious early-stage founders building at the frontiers of science and tech. Village Global is a first-check investor in AI startups.

Are you an amazing entrepreneur working on a big idea?