
2026 - Colabra

What I was wrong about

Eight bets I got wrong building Colabra: Benchling, alliance managers, raise size, interaction models, sales language, pain framing, and more.

The previous three posts in this series make Colabra's arc look cleaner than it was. I want to correct that. Here are the bets I got wrong, in rough order, and what each one cost me.

1. I thought we were going to beat Benchling

We started Colabra in 2020 as an AI-powered electronic lab notebook for R&D teams. The thesis was that scientists were drowning in documentation work and an LLM-native product could absorb most of it.

The thesis wasn't wrong. The market was. Benchling owned the pre-clinical software space, had deep integration into every biotech's workflow, and had sales muscle we couldn't match. We spent most of 2021, 2022, and 2023 trying to find a wedge against an incumbent that didn't need to move fast because it was already entrenched.

The lesson I learned too slowly is that being better isn't enough if the customer's switching cost is higher than the value you're adding. Benchling was good enough. Scientists didn't want to retrain. IT didn't want to re-approve. We lost deals to inertia more than to features.

2. I thought alliance managers would save us

When it was clear the core ELN business was stuck, we went looking for an adjacent wedge. In early 2024 we ran discovery calls with alliance managers at biotechs like Vertex and Scribe Therapeutics. The idea was that Colabra could help them write joint steering committee (JSC) reports and manage partnership deliverables.

The quote that killed the thesis came from a former Vertex alliance manager. He said Vertex wouldn't use our kind of software because they weren't doing the research. They didn't want to force their biotech partner to adopt new tools. They were relationship-centric, which meant not dictating. They just wanted their partner to upload the deck to SharePoint.

That call was the moment I realized alliance management wasn't a wedge; it was a smaller version of the same problem. The big company had no incentive. The small company didn't have budget. I kept doing a few more discovery calls to be sure, and every one confirmed it.

We burned about four months on that direction.

3. I raised too little

When we raised our seed, we closed $1.4M. In hindsight I wish we'd raised $2-3M.

My mistake was being proud of running a tight process. I compressed meetings into three weeks, created an echo chamber of interest, and told investors the round would close "whenever our deal fills up." It worked. The round filled up fast. But I left money on the table that would have given us a longer runway through the R&D-to-M&A pivot.

The founder lesson here isn't "raise more." It's "raise enough to fail at one thing and still get to try the next thing." I sized my round for the plan I believed in. I should have sized it for the plan I actually ended up executing, which included a full product category change.

4. I thought "Assign to AI" was the right interaction model

After the pivot to M&A, our first product instinct was to keep the existing project-and-task structure and add an "Assign to AI" button on each task. The user would upload documents, create tasks, and click the button to get AI analysis per task.

Buyers rejected it after seven months of active development.

What we missed is that M&A buyers don't want to click a button on every task. They want to drop 300 documents into a room and get an issues list back. The AI isn't a tool the user invokes. The AI is the thing that runs automatically and hands you the output. The user interaction happens afterward, when they review the findings, not before, when they request them.

The fix was conceptually small and practically huge. We killed the per-task trigger and made the analysis run automatically across the whole data room. That's the product we sell today.
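
To make the shift concrete, here's a minimal sketch of the two interaction models. The names are hypothetical and this is not Colabra's actual code; it only illustrates the difference between a per-task trigger the user has to click and analysis that runs on its own when documents land in the data room.

```python
# Illustrative sketch only; hypothetical names, not Colabra's real implementation.
from dataclasses import dataclass, field

@dataclass
class Finding:
    document: str
    issue: str

@dataclass
class DataRoom:
    documents: list[str] = field(default_factory=list)
    findings: list[Finding] = field(default_factory=list)

# Old model: the user creates a task, then explicitly clicks "Assign to AI".
def assign_task_to_ai(task_document: str) -> Finding:
    # One document, one manual trigger, one result.
    return Finding(document=task_document, issue="<analysis of this one document>")

# New model: analysis runs automatically as documents are uploaded.
def upload_documents(room: DataRoom, new_documents: list[str]) -> None:
    room.documents.extend(new_documents)
    # The user never asks for analysis; it happens on ingest,
    # and they interact with the findings afterward.
    for doc in new_documents:
        room.findings.append(Finding(document=doc, issue="<auto-generated issue>"))

room = DataRoom()
upload_documents(room, [f"contract_{i}.pdf" for i in range(300)])
print(len(room.findings))  # the buyer reviews an issues list, not 300 buttons
```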

The lesson: when you're applying a product pattern from your old market to your new one, you should assume that pattern is wrong until proven otherwise. Our old users were scientists who liked composing tasks. Our new users are deal teams who want output.

5. I led with "AI-powered" in my sales pitch for too long

Through most of 2024, my cold emails and demos led with "AI-powered." It felt like a feature. It was actually noise.

Every prospect I sent those emails to had already received five similar pitches that week. The phrase "AI-powered" carried no information. It signaled "I'm one of the tools you're already ignoring."

What finally broke the pattern was a series of prospect conversations where I started describing the customer's Tuesday instead of the product's features. The email reply rate roughly doubled once I cut "AI" from the subject line and first sentence.

The lesson: if the word describes you and also describes 200 other vendors chasing the same buyer, the word is not doing work for you. Cut it.

6. I thought senders should introduce themselves

For a long time my cold emails started with "Hi [Name], I'm Aoi." I thought it was polite. A friend pointed out that the sender line already said Aoi, so the first three words of every email were redundant. I was spending attention I didn't have to spare.

Small, but the kind of mistake that multiplies across thousands of emails sent.

7. I thought the pain was "takes too long"

Early in selling M&A diligence, I framed the buyer's pain as time. Diligence takes six weeks. We can get you to two. That framing worked a little, but not as much as I expected.

The framing that actually worked, which I learned from a conversation with a PE managing director in August 2025, was about missing things. Buyers aren't scared of slow diligence. They're scared of blind diligence. They've lived through deals that closed with undiscovered risks and had to answer to their LPs about it.

I was selling time savings to people who wanted risk coverage. I changed the pitch. Reply rates went up again.

8. I thought research operational efficiency was a budget line

This one cost me most of 2023 and early 2024, so it's worth naming even though it overlaps with some of the earlier mistakes.

In the R&D era, my thesis was that teams would pay to make their operational work more efficient. Scientists spent too much time writing up experiments, assembling reports, and reconciling data across tools. Our software made all of that faster. The value was obvious to me.

The problem was that it wasn't obvious to anyone with a budget. Research operational efficiency, as a category, sits below the line where enterprise budget decisions get made. VPs of R&D nodded politely when I described it. They did not fund it. The work was real. The pain was real. The spending priority was not.

It took me longer than it should have to accept that. A workflow can be painful and still not be paid for. The question that matters isn't "does this hurt." It's "does this hurt enough that someone with a budget will move money to fix it."

When I asked that question about M&A diligence instead, the answer changed completely. A PE firm spending $250,000 on quality of earnings analysis alone, with 0.5 to 2 percent of deal value going to diligence overall (on a $100 million deal, that's $500,000 to $2 million), has a budget for this problem. They're already spending. Our job is to redirect a fraction of it, not to create a new category of spend.

That's the test I now run on every product idea before I get excited about it. Is there already a budget for this, and who controls it? If the answer is "no one directly," the idea probably doesn't work no matter how elegant the solution is.

What I'd tell a younger founder

You will be wrong about more things than you expect. You will also be right about a few things you didn't realize were the actual drivers of your success. The trick is to pay more attention to which outputs your inputs actually produce, and less to the story you told yourself before you started.

Every wrong bet I listed above taught me something I now use every week. The ELN era taught me that incumbents don't lose to marginally better tools. The alliance management detour taught me to test the economic buyer before falling in love with the use case. The "Assign to AI" failure taught me that pattern-matching from an old market to a new one is expensive. The "AI-powered" phase taught me that commodity language is not differentiation.

The current version of Colabra is the product of those mistakes, compressed into a shape that works. If I hadn't made them, I wouldn't have gotten here. That doesn't make them less embarrassing to admit. It just makes them useful.

If you're a founder reading this and you're embarrassed about a bet that didn't work, my advice is simple. Write it down. Name the lesson. Put it next to the next decision you make. That's how the wrong turn stops being wasted and starts being tuition.