Well, this was (ahem) unexpected.
Seriously now, early commentators seem so narrowly focused on the chain-of-thought demos that they completely miss the point that o1
has the potential to bring to the table one thing that most AI-driven systems lack: explainability, i.e., the ability to lay out its internal reasoning in a way that end users can understand.
I’m not really very impressed with the chain-of-thought part by itself (that’s been pieced together with agent frameworks and multi-shot approaches for a while now), and I think that not making it available to users is a mistake. I do think it has to be exposable in some way, because otherwise OpenAI is locking itself out of a lot of potential applications.
One of the key blockers in many customer discussions I have is that AI models are not really auditable, and that automating complex processes with them (let alone debugging things when “reasoning” goes awry) is difficult if not impossible unless you do multi-shot and keep track of all the intermediate outputs.
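To make that concrete, here is a minimal sketch of the multi-shot workaround I mean: break the task into explicit steps, call the model once per step, and keep every intermediate output as an audit record. `call_model` is a stand-in for whatever completion API you actually use; the rest is just plumbing, not any particular vendor's SDK.

```python
# Minimal sketch of a multi-shot pipeline that keeps an audit trail.
# `call_model` is a placeholder, not a real API call.
import json
import time
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class StepRecord:
    step: str           # what we asked the model to do
    prompt: str         # the exact prompt that was sent
    output: str         # the raw model output
    timestamp: float    # when the step completed


def call_model(prompt: str) -> str:
    """Placeholder for a real completion call (hosted or local model)."""
    return f"<model output for: {prompt[:40]}...>"


def run_audited_pipeline(task: str, steps: List[str]) -> List[StepRecord]:
    """Run a task as explicit steps, keeping every intermediate output."""
    records: List[StepRecord] = []
    context = task
    for step in steps:
        prompt = f"{step}\n\nContext so far:\n{context}"
        output = call_model(prompt)
        records.append(StepRecord(step, prompt, output, time.time()))
        context = output  # feed each step's output into the next step
    return records


if __name__ == "__main__":
    trail = run_audited_pipeline(
        "Summarise this contract and flag unusual clauses.",
        ["Extract the key clauses.", "Flag anything atypical.", "Write a summary."],
    )
    # The audit trail is plain JSON: easy to store, diff, and review later.
    print(json.dumps([asdict(r) for r in trail], indent=2))
```

It’s clunky, but it is the only way today to answer “why did the system decide that?” after the fact.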
But I get OpenAI’s point, and (as usual) I’m waiting for the other shoe to drop when it comes to failure modes (and even adversarial attacks, to which agent frameworks are particularly susceptible).
I’m also curious about how this will be used in practice, since (unless, again, there is a structured output of the chain itself, something like a response structure tree) AI will still be tricky to integrate into auditable automation, or even more mundane things like code generation, but I’m pretty sure that will be sorted out.
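For the sake of argument, this is the kind of “response structure tree” I’d like to see exposed. To be clear, this is not something the o1 API actually returns; it’s a hypothetical shape for a reasoning trace that downstream tooling could traverse and audit.

```python
# Hypothetical "response structure tree" for a reasoning trace.
# Not an existing API: a sketch of what an auditable chain could look like.
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple


@dataclass
class ReasoningNode:
    claim: str                          # a single step or intermediate conclusion
    evidence: str = ""                  # what the model says supports it
    children: List["ReasoningNode"] = field(default_factory=list)

    def walk(self, depth: int = 0) -> Iterator[Tuple[int, "ReasoningNode"]]:
        """Yield (depth, node) pairs so an auditor can traverse the whole tree."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)


if __name__ == "__main__":
    trace = ReasoningNode(
        claim="The invoice total is inconsistent with the line items.",
        evidence="Sum of line items is 1,240; stated total is 1,420.",
        children=[
            ReasoningNode("Line items sum to 1,240.", "Added the amounts in rows 1-7."),
            ReasoningNode("Stated total is 1,420.", "Read from the footer field."),
        ],
    )
    for depth, node in trace.walk():
        print("  " * depth + f"- {node.claim} ({node.evidence})")
```

Something like that, emitted alongside the final answer, is what would turn a “reasoning” model into something you can actually plug into an audited workflow.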
Buckle up, the generative AI merry-go-round/hype cycle is going for round two (or three?).