The seventy percent
When Brice Challamel arrived at Moderna to lead its AI transformation, he did what consultants are supposed to do: he talked to people. Over the course of several months he conducted 270 interviews across the company, asking each person the same question: “What would make you say that you have succeeded beyond your wildest expectations?” The answers were revealing, but not in the way he expected. Before anyone could talk about AI, they talked about basics. There were conference rooms without working plugs to charge laptops. Internal systems didn’t talk to each other. The IT infrastructure that would need to support any AI initiative was, in Challamel’s assessment, not ready.
This was Moderna, the company that had used mRNA technology to develop a COVID vaccine in record time. A company with genuine scientific sophistication and a CEO, Stéphane Bancel, who was already talking publicly about AI as central to Moderna’s future. And the starting point was rooms without plugs.
Within six months of launching its AI program, Moderna had achieved something almost no large company has managed: 100 percent companywide AI adoption. Every employee, from bench scientists to administrative staff, was actively using AI tools. The company had identified 180 game-changing use cases. A self-selected team of over a hundred AI champions was meeting biweekly. And Bancel was describing a vision where a few thousand people, armed with AI, could do the work that would have required a hundred thousand under conventional methods. How Moderna covered that distance says something about what AI transformation actually requires.
The previous chapter laid out the structural requirements for AI innovation: domain ownership, shared platforms, automated governance. Get the architecture right and AI can reach the people who need it. But architecture is the destination. Most organizations are staring at the gap between where they are and where the architecture says they should be, and that gap feels enormous. The answer is counterintuitive: the transformation is 70 percent cultural and only 10 percent about the algorithms themselves. The companies that crossed the gap fastest didn’t start with better models. They started with better questions about their own organizations.
In 2024, BCG published the results of a global study on AI adoption. The headline finding was a ratio: 10-20-70. Ten percent of the value from AI comes from algorithms. Twenty percent comes from technology infrastructure. Seventy percent comes from people, process, and culture. The ratio has held across industries, geographies, and company sizes, and it reframes the entire question. Most organizations spend their energy debating which model to deploy, which vendor to choose, which use cases to pilot. Those decisions account for a third of the outcome. The other two-thirds are about the humans.
Challamel reached the same conclusion independently. His framework for Moderna’s transformation was three words: culture, business, technology, in that order. Not the conventional “people, process, technology” that management consultants have been reciting for decades. Challamel was specific about the difference: “I’m not an HR person. I’m a transformation person. I need to change culture, not people, to drive behaviors that lead to business value.” The distinction matters. Changing people means training programs, competency frameworks, and skill assessments. Changing culture means changing what’s normal, what gets celebrated, what’s expected. One is a curriculum. The other is an environment.
The data supports the sequence. BCG found that in companies they classified as “future-built,” 88 percent of managers actively role-modeled AI use. In AI laggard companies, the number was 25 percent. The gap wasn’t in the tools available or the budgets allocated. It was in whether the people with organizational influence treated AI as something they personally used and visibly valued, or as something they delegated to a task force. Companies led by what BCG called “trailblazing CEOs” allocated 60 percent of their AI budgets to upskilling. Followers allocated 24 percent. The trailblazers matched the followers on technology spending but allocated proportionally more to the seventy percent.
The technology is the easy part. Not because it’s simple, but because the market is commoditizing much of what organizations need. Models are available from multiple vendors. Inference costs are falling. Open-source alternatives exist for many enterprise use cases. What remains scarce is the organizational capacity to use any of it well. That capacity is cultural before it is technical.
Understanding where an organization actually stands in its AI journey requires honest assessment, and most leaders lack the frame of reference to make one. Brotman and Sack, who spent years working with C-suite leaders on AI adoption, identified a structural mismatch they call the proficiency gap: the distance between what AI can do and what most leaders know it can do. The gap widens with each model generation because AI capabilities advance faster than most executives can track.
In their field research with over a hundred leaders, Brotman and Sack found that 80 percent either didn’t use ChatGPT regularly or used only the free version of an older model. Their usage was limited to occasional emails or job descriptions. These were not technophobic executives. They wanted to learn more. They sensed AI “must be able to do so much more for their businesses.” They just didn’t know where to start. Deloitte’s 2023 survey of 2,800 C-suite executives confirmed the pattern at scale: only one in five believed their organization was “highly prepared” to address AI skill needs. BCG’s concurrent survey of 1,400 C-suite leaders found that 90 percent were what BCG labeled “observers,” experimenting in small ways but “ambivalent or dissatisfied” with their progress.
Brotman and Sack frame the gap through three stages. At literacy, individuals use AI for simple search queries and basic content creation, and organizations deploy chatbots and experiment with cost-cutting. At proficiency, individuals build custom applications and tackle complex tasks, while organizations integrate AI across departments and automate workflows. At fluency, individuals have internalized AI’s capabilities and limits well enough to find novel solutions, and organizations use AI for strategic decisions, new business models, and companywide transformation.
Most organizations are stuck between literacy and proficiency. And the gap compounds: organizations that reach fluency learn faster, extend their lead, and attract the talent that widens the advantage further. The proficiency gap is not a temporary condition. Left unaddressed, it becomes a permanent competitive disadvantage.
The cultural explanation is real, but it’s incomplete. There’s a structural reason enterprises move slowly that has nothing to do with whether the CEO uses ChatGPT. Real companies have budgets, annual planning cycles, and EPS numbers they’ve committed to Wall Street. Every new spend category must survive a budget cycle, get approved through procurement, and justify itself against committed earnings targets. As Levie puts it: “You don’t get to just be like, oh, we’re going to token max across the enterprise where everybody gets unlimited token budgets because obviously then that company would just miss their earnings throughout the year.” A startup can decide on Monday to give every employee unlimited API access. A public company with quarterly guidance can’t. The money has already been allocated. The earnings have already been promised. New AI spending has to come from somewhere, and “somewhere” means displacing a line item that someone else is already counting on. The proficiency gap isn’t just a knowledge problem. It’s also a fiscal-machinery problem, and the machinery of public companies moves in annual cycles regardless of how fast the technology moves.
Part of the problem is hiding in the simplicity of the interface. The AI chat box is, as Brotman and Sack put it, “hauntingly simple — just a blank input chat box, with a blinking cursor, staring at you.” There is no learning curve to start typing. There is also no user manual for software whose capabilities most users haven’t come close to discovering. Most executives treat it like a search engine: type a question, get an answer, move on. They never discover that they could upload a quarterly financial report, ask for a competitive analysis compared to three peers, request a draft board presentation, and have the AI produce something genuinely useful in minutes. The gap between what people do with AI and what they could do with AI is where most of the unrealized value sits.
How they did it
If the gap is cultural, closing it is a cultural project. The companies that have moved fastest share a common pattern: they treated AI adoption as an organizational behavior problem, not a technology deployment problem. But they approached the behavior change in strikingly different ways.
Eric Vaughan, CEO of IgniteTech, chose maximum intensity. He mandated that every employee spend 20 percent of their work hours learning and using AI tools, a block the company called “AI Mondays.” He built a gamified scoring system: employees earned points for submitting AI tips and use cases, with $2,500 prizes for top contributors. The program was not optional. Employees who consistently failed to participate, those Vaughan placed in “the basement” of the leaderboard, faced termination. Across the company, 1,186 AI tips were submitted, with a projected value of $1.8 million. The approach was polarizing by design. Vaughan’s theory was that the proficiency gap couldn’t be closed by invitation. It had to be closed by immersion.
Alicia Parker, Chief Marketing Officer at Tishman Speyer, the real estate firm, took a different approach entirely. Rather than mandating adoption across the company, she adopted AI deeply within her own department first. She ran a structured bootcamp: education on capabilities, protected time for experimentation, shared notes on what worked and what didn’t. Her team became visibly more productive. Other departments noticed. Parker then shared her playbook laterally, department by department, with each team adapting it to their own context. The transformation spread through demonstration rather than decree. It was slower than IgniteTech’s blitz but carried less organizational trauma and more durable buy-in.
Sal Khan took a third path. When Khan Academy received early access to GPT-4 in the summer of 2022, months before ChatGPT launched publicly, Khan didn’t commission a strategy review or convene a governance committee. He built a working prototype. Within two weeks, Khan Academy had Khanmigo, a tutoring AI that could guide students through problems without giving away answers. The prototype was rough. It hallucinated. It sometimes veered off topic. Khan shipped it anyway, because the prototype did something no strategy deck could: it showed people what was possible. Forty employees were under NDA within two months. The entire team had access within three months. Skeptics who had worried about AI replacing teachers watched Khanmigo help a student work through a math problem and changed their minds. The formal governance structures, the training programs, the ethical guidelines, all came later. They were built around something that already worked, not around a theoretical plan for something that might.
Khan’s one regret: “I wish I was even more aggressive in terms of the pace of our internal AI adoption in our everyday processes.” The person who moved fastest still wishes he’d moved faster. That instinct, the sense that urgency is always underestimated, recurs in every successful transformation case. Nobody on the other side of the journey says they should have spent more time planning.
JPMorgan Chase demonstrated that the same principles work at massive scale. The bank, with its $18 billion annual technology budget and hundreds of thousands of employees, built an internal tool called LLM Suite and made it available to everyone. Within eight months, 200,000 employees were using it daily. Investment bankers were creating five-page presentation decks in thirty seconds. Contract review time dropped 40 percent in commercial lending. Derek Waldron, the bank’s Chief Analytics Officer, described the goal as building the world’s first “fully AI-connected enterprise.”
What made JPMorgan’s adoption viral rather than mandated was competition. Teams that adopted LLM Suite became visibly faster. Other teams wanted the same advantage. The adoption spread through what Waldron called “healthy competition, driving viral adoption.” The bank combined this organic bottom-up momentum with deliberate top-down transformation of high-impact functions. Jamie Dimon was candid about the workforce implications: operations staff would fall by at least 10 percent. But the bank was simultaneously expanding what it could do — analytical capabilities and client services that hadn’t been economically feasible before.
These four cases represent three distinct approaches to the same problem: bottom-up gamification, middle-out departmental expansion, and top-down demonstration. Each worked. Each produced measurably different organizations within months, not years. The common thread was commitment: in every case, leadership treated AI proficiency as a non-negotiable organizational capability. The mandate looked different at each company. IgniteTech enforced it with consequences. At Khan Academy it was implicit — the founder built the prototype himself. JPMorgan embedded it in the tooling and the competitive culture that made non-adoption feel like falling behind. At Tishman Speyer it spread through visible success rather than any directive. But none of them left AI adoption to individual curiosity, and none waited for a perfect plan before starting.
Moderna combined elements of all three. Challamel’s sequence was deliberate, and the sequencing is where most organizations go wrong.
The foundation phase ran from 2021 through 2022. Those 270 interviews were about understanding the organization, not AI strategy. What were the actual pain points? What would success look like in each person’s terms? What infrastructure gaps would block any transformation before it started? The discovery that conference rooms lacked working power outlets wasn’t trivial. It told Challamel that the organization’s physical and digital infrastructure had to be fixed before any AI initiative could land. He spent months on this. No press releases, no AI announcements, just quietly building the foundation.
The launch phase came in mid-2023. After a second round of 150 interviews, the team built mChat, an internal version of ChatGPT running on the API, giving every employee a sanctioned, secure AI tool. Then came the prompt contest. It was modeled on Xprize, with CEO Bancel’s podcast as the kickoff. The design was specific: employees submitted prompts that solved real business problems, and the entire company could vote on which ones were most valuable. Three thousand people participated, which at Moderna meant essentially everyone. Over 400 prompts were submitted. A hundred and eighty were identified as game-changing use cases. The contest generated a library of prompts the company could actually deploy, and it turned AI adoption into a social event — something people talked about and competed over, not a compliance box to check. The voting mechanism also surfaced the people behind the best ideas, which mattered more than the ideas themselves.
Those top performers self-selected into the Gen AI Champions Team, a group of over a hundred people who now meet biweekly with 60 to 70 percent attendance. Challamel: “If you’re good at finding a great prompt and you’re popular so people want to vote for you, then you’re our champion.” Most organizations pick their AI advocates differently. They appoint them: the CTO nominates someone, HR selects people with the right job titles, or a committee picks based on seniority. Moderna’s approach selected for demonstrated competence and social credibility simultaneously, two qualities that predict actual influence over colleagues far better than title or tenure.
The champions became the distributed teaching infrastructure that no top-down training program could replicate. They weren’t appointed by management. They earned their credibility by demonstrating competence in front of their peers. They spread AI fluency laterally, team by team, use case by use case, in the language of their own domains. The prompt contest cost almost nothing compared to a formal training program. It produced better results because it selected for the two qualities that matter most in organizational change: competence and social influence.
The target that keeps moving
Every prior technology transformation had a destination. You migrated to cloud. You implemented ERP. You went mobile. There was a state B that was different from state A, and the project was done when you arrived. AI transformation has no destination, because the technology’s capabilities shift every few months.
Challamel saw this pattern before most people had language for it. He observed that every major technology revolution follows a democratization arc: the Turing machine was invented in 1936 and personal computers arrived in 1979, a forty-three-year gap. ARPANET launched in 1969 and the consumer web browser arrived in 1996, twenty-seven years. Neural networks have existed for decades, but ChatGPT compressed the democratization lag to almost nothing. “AI itself is not a technology revolution,” Challamel argued. “The technology has been there for a while. But transformer models and gen AI — now that’s a democratization revolution.”
Most organizations haven’t internalized the consequence. McKinsey’s 2025 research on what they call “the agentic organization” found that the length of tasks AI can reliably complete has been doubling on a regular cadence: every seven months since 2019, every four months since the launch of GPT-4. At the time of their research, AI could reliably complete tasks taking roughly two hours. Their projection: by 2027, AI could complete four full days of autonomous work without supervision. Whether that projection proves exact matters less than the direction. The frontier of what AI can do is advancing faster than most organizations’ planning cycles. A pilot approved in January may be obsolete by June, not because it failed but because the model’s capabilities moved past it.
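McKinsey’s projection is just compounding arithmetic, and it is worth seeing how short the runway is. A minimal sketch, assuming an eight-hour workday and the four-month doubling cadence cited above (the function name and the exact starting point are illustrative, not McKinsey’s published model):

```python
import math

def months_until(target_hours, current_hours=2.0, doubling_months=4.0):
    """Months until AI-completable task length reaches target_hours,
    assuming the task-length frontier doubles every doubling_months."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months

# "Four full days of autonomous work": 4 workdays x 8 hours = 32 hours.
# From a 2-hour frontier, that is four doublings: 16 months away.
print(months_until(4 * 8))  # 16.0
```

Sixteen months from a two-hour frontier is less than two annual planning cycles, which is the whole point: a capability roadmap written once a year will be lapped before it is revised.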
Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI, has named this the coming “middle era” between current AI and artificial general intelligence: a period when models become dramatically more capable but organizations haven’t yet restructured around what’s newly possible. He calls it ACI, Artificial Capable Intelligence: models capable enough to handle almost any task competently, enabled by a hundredfold increase in available compute. The proficiency gap will widen fastest during this middle era, because the technology will leap ahead while organizational adaptation remains linear.
A one-time transformation program, no matter how well executed, is insufficient. The organizations that sustain their advantage build continuous learning into their operating rhythm. One effective mechanism: a weekly demo where someone shows something they’ve built or a new way they’re doing their job with AI. That cadence keeps the organization’s shared understanding of AI current and surfaces use cases before anyone writes a strategy document about them.
Measurement is inseparable from the moving-frontier problem, because traditional metrics can’t capture what’s actually changing. David Gallacher at UC Berkeley’s Haas School of Business argued in 2025 that applying traditional ROI to AI transformation “misapplies industrial-era metrics to a cognitive-era transformation.” Measuring AI’s value on a six-month financial return, he wrote, is like “judging the internet’s value in 1995 by immediate profit increases.” The value is real but it doesn’t fit in the usual spreadsheet.
McKinsey’s own research supports a different approach. When they examined twenty-five attributes that correlate with AI’s impact on earnings, the single strongest predictor was whether the organization had redesigned its workflows around AI, not which model it used or how much data it had. Workflow redesign beat model selection, data quality, and technical infrastructure. This aligns with what Challamel discovered at Moderna and what Paul David documented about electricity a century ago: the value comes from reorganization, not from the technology itself.
Bill Gates proposed a framework that captures what traditional ROI misses. AI improves three dimensions of work simultaneously: quantity (more output), quality (better output), and efficiency (less resource per unit of output). But the most important dimension is the fourth, the unlock: AI makes things possible in a given timeframe, with a given set of resources, that were previously impossible. The legal ops team reviewing forty thousand contracts was doing something that had never been feasible. That unlock doesn’t show up in a productivity ratio. It shows up in new capabilities and new markets.
But the Gates framework still has to survive the budget meeting, and the budget meeting has a structural problem. Token budgets can’t stay classified as IT spend. IT typically runs 10 to 12 percent of revenue at large enterprises, and that number is tightly managed. If AI compute stays in the IT budget, it competes with Salesforce licenses and server maintenance for a fixed slice of revenue. That’s the wrong frame. Token spend has to move into regular operating expenditure, because what it replaces isn’t a software license. It’s labor and process cost. Trading a marketing campaign for more automation in the marketing engine is an OpEx decision, and OpEx isn’t capped the way IT spend is. When AI budgets reclassify from IT to OpEx, enterprise tech spend could roughly double, from the traditional 10 to 12 percent of revenue to something closer to 20 percent. That sounds dramatic until you consider what the spend buys: work that used to require headcount now runs on compute. The budget grows because the cost it displaces is no longer a software line item but a payroll one.
Some companies aren’t waiting for the reclassification to sort itself out. One ran a “shark tank” pitchathon where internal teams competed for token budgets, then reviewed ROI at three-to-six-month intervals. Another created tiered allocation: the top 5 percent of users get frontier models with unlimited capacity, the next 20 percent get some limits on a more efficient model, and everyone else gets the cheapest option. These are practical responses to a real constraint. You can’t give everyone unlimited tokens without blowing through your earnings guidance. But you also can’t ration tokens so tightly that nobody discovers the high-value use cases. The companies figuring this out are building the budget architecture for a new category of spend that didn’t exist two years ago.
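The tiered scheme described above is easy to picture as code. A minimal sketch, keeping the 5 percent / 20 percent / remainder split from the example; the tier names and token caps are hypothetical placeholders, not any company’s actual allocation:

```python
def assign_tier(usage_percentile):
    """Map a user's usage percentile (0 = lightest user, 100 = heaviest)
    to a model tier and a monthly token cap (None = uncapped)."""
    if usage_percentile >= 95:    # top 5 percent: frontier model, no cap
        return {"model": "frontier", "monthly_token_cap": None}
    if usage_percentile >= 75:    # next 20 percent: efficient model, soft cap
        return {"model": "efficient", "monthly_token_cap": 5_000_000}
    # everyone else: cheapest option, tight cap
    return {"model": "budget", "monthly_token_cap": 500_000}

print(assign_tier(99))  # {'model': 'frontier', 'monthly_token_cap': None}
```

The design choice worth noticing is that the tiers are driven by observed usage, not title: heavy users earn headroom, which preserves the discovery dynamic while keeping aggregate spend inside guidance.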
Google Cloud’s 2025 data provides some grounding: among organizations that had deployed AI agents in production, 74 percent reported achieving ROI within the first year, with cost savings of 26 to 31 percent across supply chain, finance, and customer operations. But even that framing undersells the story, because cost savings are the visible part. The invisible part, the work that’s newly possible, is where the compounding value lives.
Consider what Moderna’s scientists do with their reclaimed time. Before AI handled the literature reviews, the regulatory cross-referencing, and the routine data analysis, those tasks consumed the majority of a researcher’s week. With AI absorbing that overhead, the same scientists run more experiments, explore more hypotheses, and evaluate more candidates in their drug pipeline. What emerges is a different kind of organization, one where human judgment goes to its highest-value use because AI handles the surrounding work. The measurement framework that captures this is a capability inventory, not a cost ratio: what can this team do today that it couldn’t do six months ago?
What becomes possible
Transformation pays off in what organizations become capable of once they close the proficiency gap and sustain the learning. Reid Hoffman argues that AI is to the twenty-first century what steam power was to the Industrial Revolution: synthetic intelligence, paralleling synthetic energy. Before the steam engine, productivity was constrained by available muscle, human and animal. Before AI, productivity was constrained by available cognition. In both cases, making the scarce resource abundant created industries that hadn’t existed before.
Structurally, the parallel holds. Synthetic energy enabled one worker to do the physical work of fifty. Synthetic intelligence enables one knowledge worker to do the cognitive work of many. Besides speeding up existing work, the steam engine made entirely new categories feasible: railways, factories, mass production, global trade. AI is following the same trajectory. The forty thousand contracts reviewed in eleven days were new work, something that had never been feasible before.
Moderna provides the most concrete evidence of what the force multiplier looks like in practice. Bancel’s target is explicit: the same impact on patients from a few thousand people that would have required a hundred thousand under conventional pharmaceutical methods. That is roughly a thirty-times multiplier. Bancel: “If we had to do it the old biopharmaceutical ways, we might need a hundred thousand people today. We really believe we can maximize our impact on patients with a few thousand people, using technology and AI to scale the company.” The multiplier works because AI absorbs the cognitive overhead that consumed most of a scientist’s time, leaving the human to focus on experimental design, clinical interpretation, strategic decisions about which molecules to pursue.
That expansion of capability shows up at every scale. A Head of Finance at a Series B startup automated reporting workflows that previously consumed days of analyst time. The real gain was the strategic decisions that became possible when financial analysis arrived in hours rather than weeks. A three-person content team at a mid-size company produces the output that once required twelve people, not by working harder but by using AI to handle research, first drafts, and distribution while the humans focus on voice, judgment, and editorial quality. In each case, the force multiplier amplifies what the organization already does well. The team with strong domain expertise gets more leverage than the team without it. The organization with clear strategic direction points the multiplier at the right targets. The one without clear direction wastes the amplification on confusion.
Hoffman calls the societal expression of this multiplier superagency: when a critical mass of individuals are personally empowered by AI, the effects compound through society and reach even non-users. A doctor diagnoses more precisely; a child’s tutor adjusts to their pace in real time. Services that were once too labor-intensive to personalize become viable. The effects ripple outward from the individuals and organizations that adopt AI to everyone they serve.
This is why broad access matters for the transformation playbook. Concentrating AI capability in a few expert hands captures some value. Distributing it across the entire organization captures compounding value, because every employee who reaches proficiency becomes a source of new use cases, new efficiencies, and new capabilities that no central team would have imagined. Moderna’s 180 game-changing use cases didn’t come from a strategy team. They came from bench scientists, administrative staff, and marketing coordinators who knew their own work better than any consultant ever could. The prompt contest doubled as a discovery mechanism for where AI creates the most value in the organization, powered by the people closest to the work.
Hoffman sorts AI perspectives into four camps. Doomers expect catastrophe and want to stop; Gloomers expect mostly harm and want to slow down; Zoomers want speed with minimal constraints. Hoffman is a Bloomer: optimistic but participatory, favoring iterative deployment and broad access over retreat. “The freedom to innovate and the obligation to regulate are both important.” The Bloomer stance maps onto the transformation playbook: deploy early, deploy broadly, learn from what happens, iterate.
The companies in this chapter that succeeded, Moderna, Khan Academy, JPMorgan, even IgniteTech with its aggressive methods, all followed the Bloomer pattern without necessarily using the word. They deployed AI tools to everyone, not just a pilot team. They iterated based on what employees actually did with the tools, not on what a strategy document predicted they would do. They accepted imperfection in early versions because they understood that organizational learning requires contact with the real thing. Khan shipped Khanmigo knowing it hallucinated. Moderna launched mChat before the prompt contest had identified the best use cases. JPMorgan made LLM Suite available to 200,000 people before optimizing every workflow it would touch.
Carl Benedikt Frey’s framework from The Technology Trap distinguishes between enabling technologies that augment human capability and replacing technologies that substitute for it. The distinction determines social acceptance, political resistance, and ultimately whether the technology produces shared prosperity or concentrated gains. Hoffman adds a civilizational lens to this distinction: the Industrial Revolution’s early decades were “violently dehumanizing” even though steam power was, in aggregate, an enabling technology. Whether synthetic intelligence proves enabling or replacing depends on the same institutional and organizational choices that determined the outcome for steam.
The transformation playbook is the organizational expression of the enabling path. When Moderna builds mChat and distributes it to every employee, that’s an enabling deployment: AI augments what each person can do. When a company deploys AI purely to cut headcount, measuring success by positions eliminated, that’s a replacing deployment, and the historical evidence suggests it produces organizational backlash, quality degradation, and ultimately the kind of reversal Klarna experienced. The choice shows up in the design of every AI program, the metrics chosen to evaluate it, and the story leadership tells about why they’re doing it.
Acemoglu and Johnson warn about a third possibility: so-so technologies that replace workers without meaningfully improving output. Self-checkout kiosks that are slower than cashiers. Automated call centers that frustrate customers into hanging up. Basic AI deployments that cut costs on paper while degrading the actual service. The question for any AI program is whether it creates new value or merely shifts costs from wages to shareholders. The transformation playbook, executed well, avoids the so-so trap by starting with the question Challamel asked in those 270 interviews: what would wild success look like? Organizations that define success as headcount reduction tend to build so-so deployments. Organizations that define success as new capability tend to build enabling ones. The question you ask at the beginning shapes the technology you build at the end. Moderna asked “what would make you say you’ve succeeded beyond your wildest expectations?” and built an enabling deployment. An organization that asks “where can we cut 20 percent of costs?” will build something different, and the history of technology suggests it will be worse for everyone involved, including the shareholders.
The cultural project
Challamel’s “culture, business, technology” framework inverts the order that most enterprises follow. The typical sequence is technology first: select a model, build a platform, then figure out the business case, and hope the culture follows. Moderna reversed it. Culture first: make AI adoption normal, visible, and celebrated. Business second: identify the use cases that create genuine value. Technology third: build the infrastructure to support what people are already doing.
The reversal works because it solves the proficiency gap at the root. When leaders personally use AI and visibly value it, when prompt contests turn experimentation into a social event rather than a private risk, when champions earn credibility through demonstrated competence rather than appointed authority, the culture shifts before the formal programs arrive. The formal programs then codify what's already happening rather than trying to create behavior from scratch. This is why Khan Academy's retroactive playbook worked: the culture of experimentation preceded the governance framework. The governance legitimized and protected a transformation that was already underway.
The five-step methodology that Brotman and Sack distill from their research follows this same cultural logic. Start with AI education and proficiency, building baseline understanding so that downstream decisions are informed by direct experience. Establish an AI council, a cross-functional group of AI-literate leaders, not a committee of executives who delegate the learning to their direct reports. Define an AI use policy, and frame it as a permission structure rather than a restriction. A well-designed policy increases freedom by providing the guardrails that let people experiment without fear. Build a road map of pilots, with clear owners, ninety-day limits, and before-and-after benchmarks. And conduct regular AGI-horizon assessments, updating the organization’s understanding of where AI capabilities are heading so that pilots and investments stay aligned with a target that moves.
The sequence matters because each step depends on the one before it. Council members who haven't personally used AI make bad decisions about it; policy without informed judgment produces paralysis or theater; pilots without guardrails produce the shadow AI problem. Skip a step and the one after it fails.
Moderna ran through this sequence in months. Most organizations take years, if they finish at all. The difference was Challamel’s decision to make culture the first priority and technology the last. By the time Moderna needed sophisticated AI infrastructure, thousands of employees already knew what they wanted to build with it. The infrastructure had a customer base before it had a platform team.
The transformation playbook is available to every organization willing to treat AI adoption as a cultural project rather than a technology deployment. The tools are commoditized, the case studies exist, and the research on what works is accumulating fast. What remains scarce is the willingness to start with culture, to invest in the seventy percent, and to accept that the target will keep moving long after the first program ends.
The compression Challamel described earlier — decades between invention and mass adoption narrowing to almost nothing — leaves less runway with each cycle to close the proficiency gap before it becomes unbridgeable. The organizations that start now have an advantage measured in months. The ones that wait for the perfect strategy will find, as they always have, that the strategy arrived too late to matter.
From rooms without plugs to thirtyfold leverage: the distance turns out to be organizational, not technological.
But the playbook, however well executed, leaves a question unanswered. The organizations that transform successfully are asking their people to work differently, to develop new instincts, to exercise judgment in domains where they used to follow procedures. That demands something from the humans inside these organizations that no cultural program can simply install. It demands sustained attention, and attention is getting harder to find.