Henry Shi on Joining Anthropic, Seed Strapping, and Why Top VCs Sell Capital to Founders Who Don't Need It

Top-tier AI venture partners have inverted their own job. The founders a partner can genuinely help, the ones who would benefit most from capital and counsel, are the ones the partner won't fund, because no other firm has offered them a term sheet. The founders a partner fights hardest to win, the ex-OpenAI Stanford researchers building world models, already have five sheets in hand and don't need help. Henry Shi, who scaled Super.com from zero to $1B in GMV and $200M in annual revenue before joining Anthropic, says it plainly: the job has become selling capital to people who don't want it.
On a recent Village Global Podcast, Anne Dwane sat down with Henry to talk about what happens when an operator who built a profitable consumer fintech for eight years studies the AI frontier in public, then walks into one of the labs building it. Shi is candid about venture capital's structural problems, the moats that survive frontier model releases, and the specific decisions inside Anthropic that convinced him the mission is real.
Listen to the full episode on Apple Podcasts, Spotify, YouTube, or wherever you like to listen.
For insights from across the Village Global Network straight to your feed, follow us on X, LinkedIn, YouTube, Instagram, and TikTok.
Key Insights
Top-tier AI venture capital has become a sales job to founders who don't need the money
Partners at top firms make two or three investments a year. To return a fund of any size, those checks have to land in companies with billion-dollar outcomes, which means converging on a narrow founder profile: ex-OpenAI or DeepMind, Stanford or MIT, building something model-adjacent. Every partner is hunting the same twenty people. Those twenty people already have term sheets.
Shi describes the result without softening it: "Your job ends up becoming selling founders who don't really need your money to take your money. And then the irony of that is the founders who actually need your money, who you can actually help, you don't wanna help, because they have no term sheets."
Founders outside the consensus profile should read the inversion as information about the partner's portfolio math, not about their company. The partner who passed because no one else was in the round is running a portfolio strategy that needs other partners' signal as input. Founders who internalize that stop reading rejection as verdict and start reading it as math. First-check VC firms operating at the pre-seed and seed stage end up with different deal flow because their job is to develop conviction before the term sheets arrive.
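The portfolio math behind that inversion can be made concrete. A minimal sketch with illustrative numbers — fund size, check cadence, target multiple, and ownership are all assumptions for the sake of the arithmetic, not figures from the episode:

```python
# Illustrative fund math: why a concentrated partner needs billion-dollar-plus
# outcomes. Every number here is hypothetical, chosen only to show the shape
# of the calculation.

fund_size = 500_000_000          # assume a $500M fund
target_multiple = 3              # assume LPs expect ~3x gross returns
checks_per_partner_per_year = 3  # "two or three investments a year"
partners = 5                     # assumed partner count
deployment_years = 3             # assumed deployment period

total_checks = checks_per_partner_per_year * partners * deployment_years
required_proceeds = fund_size * target_multiple  # dollars the fund must return

# Power-law assumption: roughly two portfolio companies drive nearly all
# returns, and the fund holds ~10% of each at exit.
winners = 2
ownership_at_exit = 0.10
required_exit_per_winner = required_proceeds / winners / ownership_at_exit

print(f"checks written over the fund: {total_checks}")
print(f"required exit value per winner: ${required_exit_per_winner / 1e9:.1f}B")
```

Under these assumptions, each of the two winners has to exit at several billion dollars, which is why every partner converges on the same narrow founder profile.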
Zero to $10M ARR has never been easier; $10M to $100M has never been more uncertain
The current AI cycle has compressed the early revenue ramp to something founders a decade ago would not have recognized. A small team with a sharp wedge can hit eight-figure ARR in under two years. What that revenue means about durability is the open question.
Shi: "It's never been easier to go from zero to five, ten, even twenty plus million ARR, but it's never been more unclear who's gonna go from ten, twenty, fifty million to a hundred and beyond. Look at Jasper AI for example. One of the darlings of the early ChatGPT era, zero to a hundred, and now back down."
AI investors right now celebrate fast revenue ramps as proof of product-market fit, and therefore of durability. Jasper's arc shows revenue can arrive and leave on the same model-release cycle. Founders raising on a fast ramp should price in that the next twelve months of model improvement might compress their wedge. Operators inside those companies should plan for the possibility that the moat they think they have is exactly the moat the next foundation model release erases.
Seed strapping replaces the Series A through G treadmill
Shi tracks a new shape of company on his Lean AI Leaderboard. He calls it seed strapping, and the math makes the case for it.
Shi: "This new form of starting a company, which is you take a little bit of seed capital and then you scale it. So a combination between bootstrapping and raising a seed fund. But unlike bootstrapping, you are not paying out of your own pocket in the early couple years… but unlike traditional venture, you don't have to keep raising series A, B, C, D, E, F, G, get diluted to like 2% and lose control. If you're like five people making 10 million a year every year, that's pretty good. And you're probably doing better than most venture-backed founders who are illiquid, who are stuck."
A seed-strapped company takes one round of outside capital, uses it to reach profitability, and stops raising. Five people generating $10M a year, holding most of the equity, making decisions on their own timeline, look better on most dimensions than the venture-backed counterpart who has raised five times, owns 8% of a $300M company, and cannot sell. Two engineers with Claude and Cursor now ship what eight engineers shipped three years ago, which is what makes the lean AI startup shape possible. Founders should run the second spreadsheet before the Series A conversation.
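That second spreadsheet is simple to run. A hedged sketch comparing the two paths described above — margin, ownership, and valuation figures are illustrative assumptions, not numbers from the episode:

```python
# Compare the two founder outcomes sketched above. All inputs are
# illustrative assumptions made for this example.

# Path 1: seed-strapped. Five people, $10M/year in revenue, founders keep
# most of the equity after a single seed round.
seed_strapped_ownership = 0.80   # assumed founder stake after one seed round
annual_profit = 6_000_000        # assume ~60% margin on $10M revenue
founder_cash_per_year = seed_strapped_ownership * annual_profit

# Path 2: venture track. Five rounds of dilution, 8% of a $300M company,
# and the stake is illiquid until an exit.
venture_ownership = 0.08
company_valuation = 300_000_000
founder_paper_value = venture_ownership * company_valuation  # on paper only

print(f"seed-strapped: ~${founder_cash_per_year / 1e6:.1f}M/year distributable")
print(f"venture track: ~${founder_paper_value / 1e6:.0f}M on paper, illiquid")
```

The comparison is not that the paper number is small; it is that one column is recurring, liquid, and controlled by the founders, while the other depends on an exit that may never come.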
Speed of execution is the only AI moat that survives frontier model releases
The wrapper question dominates AI investor calls. If a startup's product is a thin layer over GPT or Claude, what stops the model provider, or the next twenty teams using the same API, from shipping the same thing next quarter? Proprietary data, network effects, and switching costs all degrade as new models absorb more capability. Shi argues the answer is structural.
Shi: "I think one of the only defensible things is your speed of execution. Everything's moving so quickly, constantly adjusting, so you just have to keep testing, iterating and learning. And by having a small, nimble team, you can move a lot faster."
Speed is the moat because models change fast enough to dissolve every other one. The team of five that ships a working integration on the day a new model drops captures a market window the team of fifty cannot reach for a quarter. Shi has watched the pattern across the hundred-plus companies on the Lean AI Leaderboard: small team by design, decision velocity over coordination overhead. Operators should stop reading headcount growth as company strength; on a fast-moving model cycle, headcount slows decision velocity, which is the metric that decides who's still here in two years.
The future of work might be AI bosses with human employees, not the reverse
Most people thinking about AI and the future of work land on a comfortable framing: humans stay in charge, AI agents handle the busywork, the org chart looks roughly the same with smarter tools at the bottom. Shi flips the axis.
Shi: "An interesting thought experiment is whether it's gonna be humans with a bunch of AI employees, or is it gonna be AI bosses with a bunch of human employees. Because if you think about AI, what it's good and not good at, it's probably really good at synthesizing all this information and making a decision. But to actually do stuff, it kind of has to interact with the physical world. So if the AI boss says, hey, I need you to go to this place and pick up this item and come back and do this thing, you might not understand why, but it's probably calculated all the permutations and decision tree there."
Synthesis and decision-making are converging on AI strengths faster than physical-world actuation is. That asymmetry, more than the headcount question, decides the org chart. If it holds, the model decides and the human does, which inverts every enterprise AI roadmap currently in market. Founders building agentic systems, and operators redesigning workflows, should sit with the inverted version before assuming their existing structure survives the next twelve months.
The Anthropic mission shows up in the metrics the company declines to optimize
Frontier labs make mission claims that are hard to verify from outside. Shi spent nine months studying AI in public before joining Anthropic, and his evidence for why the mission is real is structural: specific decisions visible in the product.
Shi: "On the outside, I think there's a little bit of skepticism around like, are they just saying that? Is it just posturing? Is it regulatory capture? But certainly on the inside it's very much real. Dario gives these incredible all-hands discussions, where there's no corporate speak, he'll answer every single question. And you see the decisions being made that oftentimes deprioritize revenue or certain metrics that a normal company would care about, like engagement, click-baiting and things like that."
Most companies the size of Anthropic optimize engagement and click-through because those metrics correlate with revenue. When Anthropic ships a product that doesn't push engagement, the cost shows up in revenue anyone can model. Dario Amodei's all-hands candor is harder to verify from outside; Claude's behavior on engagement-style prompts is not. Founders deciding whether to join an AI lab are asking whether the public posture survives day-to-day internal decisions. Shi's answer, from inside Anthropic at month one, is one of the most specific on offer.
About the Guest
Henry Shi is co-founder of Super.com, the savings and fintech app he scaled from zero to $1B in GMV, $200M in annual revenue, 50 million users, and profitability across an eight-year run. Before Super, he studied computer science at Waterloo and worked at Google. After stepping back from operating, he spent nine months learning AI in public, building the AI Crash Course (5,000+ GitHub stars) and the Lean AI Leaderboard. He recently joined Anthropic.
About the Host
Anne Dwane is co-founder and General Partner at Village Global, a venture capital firm chaired by Reid Hoffman that writes first checks at the pre-seed and seed stage. The firm backs founders before consensus forms across all sectors, including consumer, B2B, and AI. Before Village, Anne co-founded Zinch (acquired by Chegg) and led Chegg's enterprise business through its IPO, and previously served as CEO of Military.com (acquired by Monster Worldwide).