From the lab
Writing on AI ownership
Published on Founder Reality. We don't host the essays here — we link to them. One blog, one source, no duplicate canonicals.

Fine-tuning your own AI doesn't cost $35,000. It cost us about $50.
Two A100 GPUs spinning quietly in a Google datacenter. Five hours of training. About $50 in compute. That's what it cost us to fine-tune our own 4-billion-parameter A...

Your ChatGPT and Claude Conversations Are Court Evidence
Greg Brockman's journal became Exhibit 161 this week. The next chapter writes itself: someone's ChatGPT history becomes Exhibit 162. That sentence sounds like speculation. It i...

One Rack Is a Cloud
What colocation is, and why most AI founders have never heard of it

You Want Out of OpenAI. Here's Where to Actually Start.
A week ago, I published AI Real Estate. The framing was simple: the AI you use today is rented — like an apartment. There's a ladder above it. Most people don't know the ladder ex...

Three Kinds of Cloud (and Why Two of Them Keep Getting Confused)
I sat down with a Canadian university last week. They were trying to articulate to industry partners what their compute offering would be. They knew "sovereign" was the right word...

GPU Cloud Shopping in Canada: Three Weeks Later
Three weeks ago I wrote a post called GPU Cloud Shopping in Canada: What's Actually Available. The short version: I checked every major cloud provider with a Canadian data center,...

What fine-tuning actually costs (it's not what you think)
Training an AI model is widely assumed to cost millions of dollars. It's the single most common misconception in the space, and it's wrong by roughly two orders of magnitude for the acti...

Why I chose Unsloth (before training a single token)
Honest note up front: I have not yet fine-tuned anything with Unsloth. I have not run a single training job. What I did is spend three weeks researching fine-tuning frameworks bef...