AI and data teams do, in a sense, the same thing: make decisions based on data. So how do you build AI that helps data teams do their best work?
Hex was one of the first companies in its space to embrace language models and build code generation features into its data workspace. In this episode of Barrchives, I went deep with Hex’s co-founder and CEO, Barry McCardel, about Hex’s journey towards becoming an AI company.
Here are a few highlights from our conversation:
“It never felt right for us to charge for our AI features for a few principled reasons. One, the cost is pretty low and it’s getting cheaper. But more importantly, having an AI add-on presupposes that there’s a version of the product you’d want to sell that doesn’t have AI. That’s true today, but I don’t think it’s going to be true in a couple of years.
It’s like charging for cloud back in 2014. It would have made sense for it to be an add-on, but eventually that just became how software works.”
“The first version of our Magic features took one month to build. One engineer, one designer, and me. It was pretty simple, you gave a prompt and it generated code. We weren’t passing in context, we didn’t have a vector DB, or anything like that. And then over the years we scaled it up and invested in it a lot more.
A lot of founders are putting a lot of effort into AI stuff, and it’s an interesting question of whether they’re consistently getting product out. In a lot of cases, these things end up like research projects where you put energy in, and it’s unclear if you get value out. AI confounds a lot of your product intuitions and it’s not always clear if things are going to work or why.”
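A v1 like the one Barry describes above, a prompt in and generated code out, with no context injection or vector DB, really can be a single chat-completion call. The sketch below is an assumption about what such a v1 might look like, not Hex’s actual code; the model name and system instruction are placeholders.

```python
# Minimal sketch of a "prompt in, code out" v1 -- no retrieval,
# no vector DB, no workspace context. Assumptions, not Hex's code.

def build_request(user_prompt: str) -> dict:
    """Assemble the only payload a v1 needs: a fixed system
    instruction plus the user's raw prompt."""
    return {
        "model": "gpt-3.5-turbo",  # placeholder model name
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a data assistant. Reply with a single "
                    "SQL or Python code block and nothing else."
                ),
            },
            {"role": "user", "content": user_prompt},
        ],
    }

# With a real client (e.g. the openai package), generation is one call:
# code = client.chat.completions.create(
#     **build_request("top 10 customers by revenue")
# ).choices[0].message.content
```

Everything after that first month, as Barry notes, was scaling this loop up: richer context, better prompts, more model-side investment.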
“We spent a bunch of time building this feature where you could have our AI edit your code. And the experience wasn’t quite there, since you’d have to wait for this thing to stream edits for all of the lines of code in a block, even if it only wanted to change one. We put all of this engineering time into figuring out how to do this faster…and then GPT-3.5 Turbo came out, and the faster model with the old approach worked just fine.
So to some degree, you need to try to extrapolate where progress is going and where models are going to be in a few months, or you’ll waste time on problems that someone else will eventually solve for you.”