
2026

The founder's AI stack: how I actually use it

A detailed look at the skills, integrations, and workflow I use to get real leverage from AI as a founder.

A lot of founders say they "use AI." Most of them mean they open ChatGPT a few times a day and paste things in. That isn't what I mean.

I treat AI the way a software engineer treats a compiler. The compiler doesn't write the code. I write the code. The compiler executes it against inputs, at volume, without getting tired. My job is to be precise enough that the output is useful. Claude, or any other model, is my compiler for judgment.

Here is what that looks like in practice.

Skills, not prompts

I don't use prompts. I use skills. The distinction matters.

A prompt is a thing you type. A skill is a file, versioned, with rules, reference documents, and workflow steps. I have about seventeen of them. Each one encodes a decision I've already made about how a specific job should be done.

I have a skill for cold outreach that includes persona classification, pain framings by buyer type, an explicit ban list of phrases that don't work, and writing rules (no em dashes, no "I'm Aoi" openers, under a hundred words). I have a skill for discovery calls that walks through a seven-step framework. I have skills for demos, for closing deals, for writing blog posts in my voice, for resume building, for job research, for humanizing AI-generated writing so it doesn't sound like AI-generated writing.

When I ask the model to help me write a sales email, it's not starting from a blank page. It's starting from a document that describes what a good email looks like for the specific persona I'm writing to. The output arrives already aligned to rules I've already debated with myself.
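The mechanics behind this are deliberately boring. A minimal sketch of the idea, with an invented file path and a stand-in for the actual model call (my real skills and tooling are more elaborate):

```python
from pathlib import Path

def build_request(skill_path: str, task: str) -> str:
    """Prepend a versioned skill file to the task, so the model
    starts from written-down rules rather than a blank page."""
    skill = Path(skill_path).read_text()
    return f"{skill}\n\n---\n\nTask: {task}"

# Usage: the model sees the rules before it sees the ask.
# request = build_request("skills/cold-outreach.md",
#                         "Draft a first-touch email to a CFO persona")
```

The skill file is the asset here, not the function. The function is three lines; the file is seventeen debates with myself, written down once.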

This is the most important thing I can tell another founder about using AI: the leverage isn't in the model. It's in how much of your taste you've written down.

Grounding in real conversations

Every sales email I send out referencing a prior conversation gets drafted against the actual transcript of that conversation. Not my memory of it. The transcript.

I have Fireflies and Granola recording my calls. When I draft a follow-up, the model pulls the transcript, finds the exact language the prospect used, and writes around it. If I'm referencing a pain point, I'm referencing their pain point, in their words, not my approximation of it.

This solves a problem I didn't realize I had. My memory of calls is generous to me. I remember myself as more persuasive and the prospect as more enthusiastic than the tape shows. The transcript corrects me. The email I send is less flattering to my own performance, and more faithful to what the prospect actually said.

The rule in my playbook is simple: never invent quotes. Pull transcripts first, write second. If the claim can't be sourced, the claim doesn't go in the email.
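The "never invent quotes" rule is mechanically checkable. A sketch of the check, assuming quotes are marked with double quotation marks in the draft (the sample transcript and draft below are made up for illustration):

```python
import re

def unsourced_quotes(draft: str, transcript: str) -> list[str]:
    """Return every quoted span in the draft that does not appear
    verbatim in the call transcript. Empty list = safe to send."""
    quotes = re.findall(r'"([^"]+)"', draft)
    return [q for q in quotes if q not in transcript]

# A claim that can't be sourced doesn't go in the email:
transcript = "We keep losing deals because onboarding takes six weeks."
draft = 'You said "onboarding takes six weeks" and that "pricing is fine".'
print(unsourced_quotes(draft, transcript))  # → ['pricing is fine']
```

Anything the function returns either gets rewritten as a paraphrase or comes out of the email.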

The humanizer pass

Every email, every blog post, every document that gets drafted with AI assistance gets run through a skill called the humanizer before I send it. The humanizer looks for twenty-four specific patterns that are dead giveaways for AI writing. Inflated significance. Superficial "-ing" analyses. Rule of three. Vocabulary words like delve, tapestry, pivotal, landscape. Em dashes. Negative parallelisms ("not only X, but Y"). Sycophantic openers.

It's a lint pass. The model wrote the first draft. The humanizer rewrites the parts that sound like the model wrote them. Then I read it one more time and ship.
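Because it really is a lint pass, a few of the patterns reduce to regular expressions. This is an illustrative subset, not my actual humanizer or the full list of twenty-four:

```python
import re

# A handful of the giveaway patterns, expressed as lint rules.
PATTERNS = {
    "em dash": re.compile(r"\u2014"),
    "AI vocabulary": re.compile(r"\b(delve|tapestry|pivotal|landscape)\b", re.I),
    "negative parallelism": re.compile(r"\bnot only\b.*\bbut\b", re.I | re.S),
    "sycophantic opener": re.compile(r"^(great question|what a )", re.I),
}

def lint(text: str) -> list[str]:
    """Return the names of AI-writing patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(lint("Let's delve into this \u2014 not only fast, but cheap."))
# → ['em dash', 'AI vocabulary', 'negative parallelism']
```

The harder patterns, like inflated significance, still need a model pass to catch. The cheap ones get caught for free.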

I trust AI to draft. I don't trust it to finish. That distinction shows up everywhere in my workflow.

Connected tools, not isolated chats

My Claude session has access to my Gmail, my Google Calendar, my Google Drive, my Slack, my Linear, my Fireflies, my Granola, and a handful of other tools. When I ask a question, Claude can answer by pulling from any of them.

This turns the model from a writer into a researcher. When I ask "when did we first pitch Blue Owl," I get an answer grounded in actual email and meeting data, not a hallucination. When I ask "what did we decide about pricing for Sikich," the model pulls the email thread and the transcript of the call where we decided it, and synthesizes from those sources.

The setup work to connect these was real. Each integration, on its own, added only a fraction of a percent of value. Stacked together, they make the difference between an assistant that guesses and an assistant that knows.

Building skills that build skills

I have a skill called autoresearch. Its job is to improve my other skills. It runs a skill on a set of test inputs, scores the outputs against success criteria, mutates the skill's instructions, runs it again, and keeps the changes that improve the score.

I have another skill called skill-creator. Its job is to help me create new skills. When I notice I'm doing the same kind of task more than twice, I open skill-creator and turn that task into a new skill.

This is the recursive part. The compiler has access to itself. I spend a few hours setting up the system, and then the system helps me maintain and improve itself. That's where the real leverage lives.
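The autoresearch loop is, structurally, hill climbing. A toy sketch under loose assumptions (the real thing scores model outputs against success criteria, not word counts, and `score` and `mutate` here are stand-ins):

```python
import random

def autoresearch(skill: str, score, mutate, rounds: int = 20, seed: int = 0):
    """Hill-climb a skill's instructions: mutate, re-score on test
    inputs, and keep a change only if the score improves."""
    rng = random.Random(seed)
    best, best_score = skill, score(skill)
    for _ in range(rounds):
        candidate = mutate(best, rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy stand-ins: "score" rewards shorter instructions,
# "mutate" drops one random word.
def score(skill):
    return -len(skill.split())

def mutate(skill, rng):
    words = skill.split()
    if len(words) > 1:
        words.pop(rng.randrange(len(words)))
    return " ".join(words)
```

The design choice that matters is "keep only what improves the score." Without a scoring function, mutation is just drift.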

What this costs

I'll be honest about the investment. Setting all of this up took weeks of real work, and it's still evolving. I'm editing my skills every few weeks. I'm adding new integrations every month. I'm reading research papers on prompt caching and context engineering because the frontier keeps moving.

The people who tell you AI is plug and play are either selling you something or haven't actually tried to get serious leverage out of it. Plug and play gets you a slightly better version of ChatGPT. Getting to where I am today requires treating your own workflow like a product that deserves engineering attention.

Why I think most founders are underusing AI

The average founder I talk to uses AI like a search engine. They open a chat window, ask a question, get an answer, close the window. Everything is thrown away at the end of the session.

That approach wastes the compounding. The value of AI at the founder level comes from the compounding. Every skill I write makes every future task cheaper. Every transcript I ground my writing in makes every future email more specific. Every integration I add expands the surface area of problems the model can help me solve.

If you're using AI and you don't feel like it's changing how you work, that's a signal. You're probably using it as a smart search bar. The compounding isn't on.

The argument I'd make to a skeptical founder

I'm not an AI maximalist. I don't think the model is going to replace me. I think the model is going to replace the version of me that writes worse emails, does shallower research, drafts messier pitches, and forgets what the prospect said on the last call.

That version of me was already bad at my job. I don't want him back. The AI-assisted version is better at every part of founding that involves writing, research, and specification. Those happen to be most of the parts.

The question I'd leave you with is this: what have you written down about how you want your work to be done? Because that document, whatever form it takes, is the actual unlock. The model is just the thing that executes against it.