From Experiments to Advantage: A CTO Lens on AI Strategy

I like building things, not just because I enjoy the craft, but because building has a way of forcing clarity. In the physical world, assumptions get tested quickly. Tolerances matter. Decisions compound. And when something goes wrong, you don’t get to debate it. You adapt. That mindset has been useful lately as I’ve been thinking about how organizations should approach AI in 2026.

This post is inspired by a recent video from my LinkedIn series.

[Video thumbnail: "AI Lessons from Chess"]

A quick story, and the strategic point behind it

I recently built a wall-mounted chessboard as a birthday gift for my son. It's designed for long-running games, the kind where moves happen over days or even weeks. I designed the board in CAD, cut and engraved most of it with a laser, and framed it with reclaimed barnwood that had spent over a century as part of a Midwestern barn.

The build itself isn’t the point. The process is. When you work on a project like that, you’re constantly balancing three things: a clear end goal, the reality of constraints, and the need to adjust as new information shows up. That’s also what good AI leadership looks like.

Why chess is a better AI metaphor than “innovation lab”

Chess became the symbol for strategy because it rewards disciplined thinking: planning multiple moves ahead, evaluating tradeoffs, and staying aware of the state of the board as it evolves.

AI initiatives work the same way, especially the ones that matter. The “state of the board” in an organization includes data quality, risk tolerance, regulatory realities, security posture, workforce adoption, and the operational maturity required to keep systems reliable after launch. Those variables change over time, and they change unevenly. Strategy has to account for that.

Experimentation is necessary, until it becomes the plan

Before I committed to the full chessboard, I built a small prototype. I needed to test spacing, engraving depth, fit, and laser settings. That prototype wasn’t “the product.” It was a deliberate step to reduce risk and validate assumptions before scaling up.

That’s the right role for experimentation in AI, too.

Over the last six months, many organizations have been running pilots and proofs of concept, often in pockets: one team testing a tool, another building a workflow, another exploring data access. That early activity has value. It builds fluency and confidence.

But here’s the leadership shift I think matters now:
In 2026, experimentation should support a longer-term AI strategy, not substitute for one.

If you can't describe what the pilots are in service of (what capability you're trying to build, what operating model you're targeting, what risks you're prepared to manage), then you're not experimenting. You're sampling.

Sampling feels busy. Strategy creates advantage.

Expect failure. Design for recovery.

Any meaningful build includes moments where something goes wrong.

In my project, the laser settings were off at one point, and the board literally caught fire. Later, after the board was assembled, I realized I’d misspelled “queen.”

Neither one was fun. But neither one changed the goal. They changed the plan.

That’s a useful way to frame AI efforts. Models change. Vendors change. Costs shift. Capabilities improve. Security risks evolve. Data assumptions break. Teams discover edge cases too late. The organizations that win aren’t the ones that avoid surprises; they’re the ones that can adapt quickly without losing alignment.

If your AI initiative can’t absorb setbacks, it isn’t ready to scale.

The “jig” matters more than the demo

One of the most important parts of my chessboard build was also the least impressive: I made a jig. It held each chess piece in the right position so engraving would be consistent. Once I decided what “good” looked like, the jig made it repeatable.

That’s the part many organizations underestimate with AI.

The competitive advantage rarely comes from the demo. It comes from what makes outcomes reliable:

  • consistent decision-making about what “good” is
  • repeatable workflows and controls
  • evaluation and monitoring
  • clear ownership and accountability
  • governance that enables speed without creating risk

Call it a jig, call it a system, call it an operating model. Either way, it’s what turns experimentation into production value.
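To make the idea concrete, here is a minimal sketch of what a "jig" for AI outputs might look like in code: a small evaluation harness that runs the same checks against every model response before it counts as production-ready. The specific check functions, the banned-terms list, and the length limit are illustrative assumptions, not a real framework.

```python
# A minimal "jig" for AI outputs: every response passes through the same
# repeatable checks, so "good" is defined once and applied consistently.
# The specific checks and thresholds below are illustrative assumptions.

def within_length(response: str, max_chars: int = 500) -> bool:
    """Consistency check: responses stay within an agreed size."""
    return len(response) <= max_chars

def no_banned_terms(response: str, banned=("guarantee", "always")) -> bool:
    """Policy check: avoid overcommitting language in responses."""
    lowered = response.lower()
    return not any(term in lowered for term in banned)

CHECKS = [within_length, no_banned_terms]

def evaluate(response: str) -> dict:
    """Run every check and report pass/fail per check, plus an overall verdict."""
    results = {check.__name__: check(response) for check in CHECKS}
    results["passed"] = all(results.values())
    return results

if __name__ == "__main__":
    print(evaluate("We can likely reduce turnaround time this quarter."))
```

The point isn't the checks themselves; it's that they run the same way every time, which is what separates a repeatable workflow from a one-off demo.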

Keep humans in the loop, especially where strategy and risk live

I don’t think AI replaces judgment. I think it scales it. AI can handle repetitive work, accelerate drafts, and surface options. But humans should remain responsible for the decisions that carry real risk, particularly in long-running systems where context changes and consequences compound.

If you want durable value from AI, you need a clear division of labor: where automation is appropriate, where review is required, and where accountability sits.
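One way to encode that division of labor is an explicit risk-routing rule: low-risk actions run automatically, and anything at or above a review threshold is queued for a person. The tier names and the threshold here are hypothetical, a sketch of the pattern rather than a prescription.

```python
# Sketch of a human-in-the-loop routing rule: each action carries a risk
# tier, and anything at or above the review threshold goes to a person.
# Tier names and the threshold are illustrative assumptions.

from enum import IntEnum

class Risk(IntEnum):
    LOW = 1     # e.g., drafting boilerplate text
    MEDIUM = 2  # e.g., customer-facing replies
    HIGH = 3    # e.g., pricing or contract decisions

REVIEW_THRESHOLD = Risk.MEDIUM

def route(action: str, risk: Risk) -> str:
    """Return who handles the action: automation or a human reviewer."""
    if risk >= REVIEW_THRESHOLD:
        return f"human_review: {action}"
    return f"automated: {action}"
```

Writing the rule down, even this simply, forces the conversation about where accountability sits before an incident makes it urgent.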

What I’d emphasize to leaders in 2026

If you’re leading AI efforts this year, I’d focus on three questions:

  1. What are we building toward?
    Not “what tools are we testing,” but what enduring capability do we want: faster delivery, better customer experience, lower operational cost, higher quality, new product lines?
  2. What’s our system for making AI reliable?
    Your “jig”: evaluation, governance, monitoring, and a path from prototype to production.
  3. How will we adapt as the board changes?
    Models, costs, risk, regulation, and user expectations will keep moving. Plan for iteration without losing strategic coherence.

That’s the long game: strategy first, experimentation with intent, and an operating model that makes results repeatable.

Watch the video from my LinkedIn series

If you’re navigating the shift from pilots to production AI, that’s work we do every day at SPR, helping teams move from “interesting experiments” to systems that create real advantage. Be sure to watch the video that inspired this post and share your thoughts with me on LinkedIn.