Evan Conrad on the AI Compute Bubble, GPU Offtake, and Why SF Compute Looks Like Marriott

The AI bubble, if it exists, sits in the equity of the labs that prepaid the GPU clouds. That is the inversion Evan Conrad, founder and CEO of San Francisco Compute Company, has been pressure-testing since the company's first month in business, when his two-person audio-model startup signed a year-long GPU lease, used the cluster for thirty days, and spent the other eleven months subleasing the capacity to dig out from under the bill, building what became the first compute exchange in the process.
Worldbuilders is a Village Global Podcast subseries hosted by Sumeet Singh, founder and General Partner of Worldbuild and a Village Global Network Investor. Sumeet's first episode with Evan Conrad reframes almost every assumption a generalist investor brings to AI infrastructure: where the risk actually lives, why GPU clouds run on commodity-thin margins, and what an honest neocloud business model looks like when the asset on your balance sheet behaves like a fund.
Listen to the full episode on Apple Podcasts, Spotify, YouTube, or wherever you like to listen.
For insights from across the Village Global Network straight to your feed, follow us on X, LinkedIn, YouTube, Instagram, and TikTok.
Key Insights
The AI compute bubble lives in the buyers' equity, not the GPU clouds' P&L.
Most analysts point at Nvidia, the hyperscalers, or the neoclouds piling H100s into data centers. In Evan's read, the GPU cloud operators that survived the H100 cycle structured the next one to shift risk onto buyers: deposits before customers ever spin up a job, and multi-year contracts that lock them in. When a cluster sits idle, the customer's deposit burns down before the cloud's loan does.
"If there is a bubble, here is where the bubble is," Evan says. "Whoever owns the equity of those companies that just raised a whole big round." A pre-revenue lab raises a monster round at a stretched mark, prepays a multi-year compute contract, and ships nothing. The preference stack rules out an acquihire. The valuation rules out another round. The cluster keeps drawing down the money.
For investors writing those rounds, the diligence question shifts: does the offtake contract have any escape hatch at all if demand projections miss?
GPU cloud margins are thin by physics, and the public markets are pricing them as if they will eventually behave like SaaS.
The bitter lesson usually shows up as a research observation about scaling. Evan turns it into a margin argument. CPU clouds sell to SaaS companies. Gusto and Rippling write software once and sell it many times, and their CPU bill barely moves with revenue. Seventy-percent margins fall out of the structure of the business.
AI runs on different physics. "Your customer is paying you for the video," Evan says, "and then you turn around and you use compute for that video, and you do that every time." Every generation costs compute in proportion to the revenue it produces. GPU buyers are structurally price-sensitive, and that pressure propagates straight through to the cloud.
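The margin gap Evan describes can be made concrete with toy numbers. The figures below are illustrative assumptions, not from the episode: a SaaS product whose compute bill stays roughly fixed as revenue grows, versus an inference product that burns GPU-hours on every sale.

```python
# Illustrative unit economics (all numbers are made-up assumptions):
# why per-unit compute cost caps inference margins in a way it never
# caps SaaS margins.

def gross_margin(revenue, variable_cost, fixed_cost):
    """Gross margin as a fraction of revenue."""
    return (revenue - variable_cost - fixed_cost) / revenue

# SaaS: write the software once; the CPU bill barely moves with revenue.
saas_revenue = 1_000_000          # annual revenue, USD (assumed)
saas_compute = 50_000             # mostly fixed hosting cost (assumed)
print(f"SaaS margin:      {gross_margin(saas_revenue, 0, saas_compute):.0%}")

# Inference: every video generated costs GPU time proportional to revenue.
price_per_video = 1.00            # what the customer pays (assumed)
gpu_cost_per_video = 0.60         # compute burned per generation (assumed)
videos_sold = 1_000_000
inf_revenue = price_per_video * videos_sold
inf_compute = gpu_cost_per_video * videos_sold
print(f"Inference margin: {gross_margin(inf_revenue, inf_compute, 0):.0%}")

# Doubling sales doubles the GPU bill too, so scale never lifts the margin:
print(f"At 2x volume:     {gross_margin(2 * inf_revenue, 2 * inf_compute, 0):.0%}")
```

Under these assumed numbers the SaaS business clears a ninety-plus-percent margin while the inference business stays pinned at forty percent no matter how much volume it adds, which is the structural pressure Evan says propagates back onto GPU pricing.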
"Almost every GPU cloud in existence is positioning themselves to the public markets as we are the premium experience," Evan says. "Meanwhile, customers don't agree." Buyers want reliability, security, price, and shorter contracts, in that order. The right comp for a GPU cloud is a property manager.
Offtake is the load-bearing contract in AI capital markets.
"Offtake is like the most important word in all of AI," Evan says. He defines it cleanly. Offtake is the long-term purchase contract a builder takes to a financier so capital flows toward construction. Evan argues AI infrastructure is project finance with a tech layer on top.
Many of the H100-era operators Evan watched as a broker built speculative clusters and got crushed when demand softened. He saw the dump in real time: H100s trading on his order book at forty cents an hour because operators who built without contracts had to clear capacity off their books. Blackwell deployments at scale invert the order: customers sign, financing closes against the contract, then operators build the cluster.
Founders trying to start a neocloud and venture investors underwriting compute-heavy startups sit on the same side of this equation. The structure of the contract is the company. SF Compute Company built its order book to manufacture offtake on the demand side, designing every clause around how a buyer of GPU offtake actually wants to be able to exit.
Neoclouds have two viable paths. Picking both fragments the company.
Evan sees a clean split in the neocloud business model. Path one: deploy large clusters with thin managed services, sell offtake to OpenAI and Anthropic, accept that those customers run their own teams and are mostly buying volume at a reasonable price. Path two: build managed services that engineers love, in the shape of Modal or something near it, and skip cluster ownership entirely.
"Modal's product is amazing," Evan says. "They do not build clusters. Meanwhile, Modal cannot sell like a big GPU cluster to OpenAI." The expertise to site, build, and land an OpenAI-grade cluster is its own discipline. The expertise to build a managed-service product engineers prefer over the alternatives is also its own discipline. Companies that try both end up resourcing both halfway.
Evan wants SF Compute to be the cloud the path-two companies sit on top of. The biggest clouds carry competing incentives, because the public markets reward managed services while OpenAI shows up with a contract that wants raw capacity at the lowest possible price. Most of them will keep splitting their attention. Evan's read: the neoclouds that survive will commit to one path.
SF Compute looks like Marriott. That is why the math works.
Almost every GPU cloud finances its own clusters. SF Compute hands the financing to outside capital. "I've currently stumbled upon just calling it like Marriott," Evan says. "The hotels are owned by somebody else, but Marriott manages them." Outside investors own the cluster. SF Compute designs the bill of materials, runs the operating layer, lists capacity on its order book, and takes a smaller cut.
The structural reason matters more than the analogy. "All the clouds have set themselves up and positioned themselves to investors as companies," Evan says. "Meanwhile, their assets under their balance sheet are growing and growing and growing, and actually they look like funds." A fund returning twenty percent on capital is doing well. A startup running on twenty percent margins is dead. Neoclouds raise from venture investors who underwrite for seventy-percent margins, then operate an asset that behaves like a fund. The investor base never matches the asset.
Sovereign wealth funds and infrastructure funds carry a cheaper cost of capital than most venture firms or even the hyperscalers. SF Compute hands them the asset and keeps the operating layer. Cheaper capital lets SF Compute price GPUs lower. Evan thinks price is the variable every AI compute investor's model eventually bends toward.
When the underlying thing is genuinely large, hype is the wrong response.
The current GPU build-out runs multiple times the cost of the Apollo Project and roughly thirty times the cost of the Manhattan Project. The number exceeds the GDP of several countries. Evan treats anti-hype branding as an epistemic discipline more than a marketing posture.
"If you actually think that the current moment is really important, and we do, you actually shouldn't be very hypey," Evan says. "It makes it hard for you to do anything real." Founders who buy into their own hype make worse calls right when the stakes are highest. The bigger the opportunity, the more expensive a fogged judgment becomes.
The brand discipline follows from the epistemic one. "We are the company that takes no bullshit," Evan says. "You will never see us say, we're gonna democratize compute, or powering the AI revolution. We sell GPUs. We rent them to you." For founders building inside a market this large, the temptation runs toward scaling the rhetoric to match the size of the opportunity. Refusing that rhetoric is the same discipline that lets SF Compute plan across a ten-year horizon, a deliberate contrast to the month-by-month survival mode of its first year.
About the Guest
Evan Conrad is the founder and CEO of San Francisco Compute Company (SF Compute). Evan started SF Compute as an audio-model lab and pivoted into infrastructure after signing a year-long lease on a GPU cluster the team needed for only a month, building the first compute exchange in the process to sublease the rest. SF Compute Company now operates as a market and operator for GPU clusters financed by outside capital, focused on the lowest possible price for AI labs that need compute on contract terms they can actually manage. Evan Conrad and SF Compute are building the offtake engine for the largest infrastructure project in human history.
About the Host
Sumeet Singh is the founder and General Partner of Worldbuild, a thesis-driven investment firm backing creative technologists across AI infrastructure, the model economy, and adjacent frontier categories. Sumeet is also a Village Global Network Investor and host of the Worldbuilders podcast. Together, Sumeet and Village Global write pre-seed and first checks into AI founders across the model economy and AI infrastructure.