Remember the first time you heard your company was going AI-first?
Maybe it came through an all-hands that felt different from the others. The CEO said, “By Q3, every team should have integrated AI into their core workflows,” and the energy in the room (or on the Zoom) shifted. You saw a mix of excitement and anxiety ripple through the crowd.
Maybe you were one of the curious ones. Maybe you’d already built a Python script that summarized customer feedback, saving your team three hours every week. Or maybe you’d stayed late one night just to see what would happen if you combined a dataset with a large language model (LLM) prompt. Either way, curiosity had already led you somewhere unexpected.
But this announcement felt different because suddenly, what had been a quiet act of curiosity was now a line in a corporate OKR. Maybe you didn’t know it yet, but something fundamental had shifted in how innovation would happen inside your company.
How innovation happens
Real transformation rarely looks like the PowerPoint version, and almost never follows the org chart.
Think about the last time something genuinely useful spread at work. It wasn’t because of a vendor pitch or a strategic initiative, was it? More likely, someone stayed late one night, when no one was watching, found something that cut hours of busywork, and mentioned it at lunch the next day. “Hey, try this.” They shared it in a Slack thread and, in a week, half the team was using it.
The developer who used GPT to debug code wasn’t trying to make a strategic impact. She just needed to get home earlier to her kids. The ops manager who automated his spreadsheet didn’t need permission. He just needed more sleep.
This is the invisible architecture of progress — these informal networks where curiosity flows like water through concrete… finding every crack, every opening.
But watch what happens when leadership notices. What used to be effortless and organic becomes mandated. And the thing that once worked because it was free suddenly stops being as effective the moment it’s measured.
The great reversal
It usually begins quietly, often when a competitor announces new AI features — like AI-powered onboarding or end-to-end support automation — claiming 40% efficiency gains.
The next morning, your CEO calls an emergency meeting. The room gets still. Someone clears their throat. And you can feel everyone doing mental math about their job security. “If they’re that far ahead, what does that mean for us?”
That afternoon, your company has a new priority. Your CEO says, “We need an AI strategy. Yesterday.”
Here’s how that message usually ripples down the org chart:
- At the C-suite: “We need an AI strategy to stay competitive.”
- At the VP level: “Every team needs an AI initiative.”
- At the manager level: “We need a plan by Friday.”
- At your level: “I just need to find something that looks like AI.”
Each translation adds pressure while subtracting understanding. Everyone still cares, but that translation changes intent. What begins as a question worth asking becomes a script everyone follows blindly.
Eventually, the performance of innovation replaces the thing itself. There’s a strange pressure to look like you’re moving fast, even when you’re not sure where you’re actually going.
This repeats across industries
A competitor declares they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.”
So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives.
But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. You might know someone who was on one of those teams, or you might’ve even been on one yourself.
These aren’t failures of technology or intent. ChatGPT works fine. And teams want to automate their tasks. These failures are organizational, and they happen when we try to imitate outcomes without understanding what created them in the first place.
And so when everyone performs innovation, it becomes almost impossible to tell who’s actually doing it.
Two kinds of leaders
You’ve probably seen both, and it’s very easy to tell which kind you’re working with.
One spends an entire weekend prototyping. They try something new, fail at half of it, and still show up Monday saying, “I built this thing with Claude. It crashed after two hours, but I learned a lot. Wanna see? It’s very basic, but it might solve that thing we talked about.”
They try to build understanding. You can tell they’ve actually spent time with AI, and struggled with prompts and hallucinations. Instead of trying to sound certain, they talk about what broke, what almost worked and what they’re still figuring out. They invite you to try something new, because it feels like there’s room to learn. That’s what leading by participation looks like.
The other sends you a directive in Slack: “Leadership wants every team using AI by the end of the quarter. Plans are due by Friday.” They enforce compliance with a decision that’s already been made. You can even hear it in their language, and how certain they sound.
The curious leader builds momentum. The performative one builds resentment.
What actually works
You probably don’t need someone to tell you where AI works. You already know because you’ve seen it.
- Customer support: LLMs genuinely help with Tier 1 tickets. They understand intent, draft simple responses and route complexity. Not perfectly, of course — I’m sure you’ve seen the failures — but well enough to matter.
- Code assistance: At 2 a.m., when you’re half-delirious and your AI assistant suggests exactly what you need, it feels like having an over-caffeinated junior programmer who never judges your forgotten semicolons. You save minutes at first, then hours, then days.
These small wins compound over time. They aren’t the impressive transformations promised in decks, but they’re the kind of improvements you can rely on.
But outside these zones, things get murky. AI-driven revops? Fully automated forecasting? You’ve sat through those demos, and you’ve also seen the enthusiasm fade once the pilot actually begins.
Have the builders of these AI tools failed? Hardly. The technology is evolving, and the products built on top of it are still learning how to walk.
So how can you tell if your company’s AI adoption is real? Simple. Just ask someone in finance or ops. Ask what AI tools they use daily. You might get a slight pause or an apologetic smile. “Honestly? Just ChatGPT.” That’s it. Not the $50k enterprise-grade platform from last quarter’s demo or the expensive software suite in the board deck. Just a browser tab, same as any college student writing an essay.
You might make this same confession yourself. Despite all the mandates and initiatives, your most powerful AI tool is probably the same one everyone else uses. So what does this tell us about the gap between what we’re supposed to be doing and what we’re actually doing?
How to drive change at your company
You’ve probably discovered this yourself, even if no one’s ever put it into words:
- Model what you mean: Remember that engineering director who screen-shared her messy, live coding session with Cursor? You learned more from watching her debug in real time than from any polished presentation, because vulnerability travels farther than directives.
- Listen to the edges: You know who’s actually using AI effectively in your organization, and they’re not always the ones with “AI” in their title. They’re the curious ones who’ve been quietly experimenting, finding what works through trial and error. And that knowledge is worth more than any analyst report.
- Create permission (not pressure): The people inclined to experiment will always find a way, and the rest won’t be moved by force. The best thing you can do is make the curious feel safe to stay curious.
We’re living in this strange moment, caught between the AI that vendors promise and the AI that actually exists on our screens, and it’s deeply uncomfortable. The gap between product and promise is wide.
But what I’ve learned from sitting in that discomfort is that companies that will thrive aren’t the ones that adopted AI first, but the ones that learned through trial and error. They stayed with the discomfort long enough for it to teach them something.
Where will you be six months from now?
By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI.
But in the quiet spaces where your actual work happens, what will have meaningfully changed?
Maybe you’ll be like the teams that never stopped their quiet experiments. Your customer feedback system might catch the patterns humans miss. Your documentation might update itself. Chances are, if you were building before the mandate, you’ll be building after it fades.
That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn’t make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last.
Every organization is standing at the same crossroads right now: Look like you’re innovating, or create a culture that fosters real innovation.
The pressure to perform innovation is real, and it’s growing. Most companies will give in and join the theater. But some understand that curiosity can’t be forced, and progress can’t be performed. Because real transformation happens when no one’s watching, in the hands of the people still experimenting, still learning. That’s where the future begins.
Siqi Chen is co-founder and CEO of Runway.
