Every satisfying transformation begins with one intentional decision: the decision to stop waiting for the perfect moment and start moving.
In March 2026 — which, depending on when you're reading this, was either last week or already feels like ancient history — two research breakthroughs landed that would have dominated headlines for months in any prior decade. Google published TurboQuant, a KV cache compression technique that delivers a 6x reduction in memory footprint and an 8x inference speedup on H100 GPUs, with zero accuracy loss. Zero. Not "negligible" or "within acceptable margins" — zero. The same week, Moonshot AI's Kimi lab released their work on Attention Residuals, replacing the traditional residual connections in transformer architectures with attention mechanisms. The result: a 1.25x improvement in compute efficiency. A free gain, essentially — better results for the same electricity bill.
Both of these are genuinely impressive. Both will matter. And both will be footnotes by the time you finish implementing whatever you're implementing right now.
This is the treadmill. Not a treadmill you're running on — one that's running under you whether you've laced up your shoes or not. The pace of foundational AI research has passed the point where any single breakthrough deserves a strategic pause. There is no "let's wait for the dust to settle" because the dust is volcanic and the eruption is continuous. If you're building your agentic strategy around the capabilities of a specific model released in a specific month, you're building on sand. Impressive sand, perhaps, but sand.
Consider the tools your development teams are already using — the ones they chose themselves, often before anyone in leadership approved a budget line. First came GitHub Copilot, the autocomplete that made developers feel like they had a very keen intern reading over their shoulder. Then Cursor arrived, wrapping the entire development environment in an AI-native experience. Then Claude Code, bringing agentic coding to the terminal — not just suggesting the next line, but planning, executing, debugging, and iterating across entire codebases. Your developers have opinions about these tools. Strong opinions. The kind of opinions that, in an earlier technological era, fuelled the Star Wars versus Star Trek debates — each faction genuinely convinced that their preference is objectively, measurably, cosmically superior. (It's worth noting that in the Star Wars vs Star Trek debate, the correct answer is obviously Babylon 5, but that's a digression for another paper.)
The developer community has taken to calling these tools "clankers" — a term borrowed, with varying degrees of affection, from Star Wars, where it's the clone troopers' slang for battle droids. Every developer has their favourite clanker right now. Every team lead has a preferred stack. Every CTO has a slide deck justifying one over another. And every single one of them is correct, temporarily.
Here's what matters: there is no single winner yet. We haven't reached the Excel moment — that point where one tool becomes so ubiquitous that competence in it is simply assumed, like literacy or the ability to operate a coffee machine. We haven't reached the Spotify moment, where one platform absorbs enough of the ecosystem that alternatives become niche. Both moments will come. The agentic tooling landscape will consolidate. But we're not there, and anyone telling you they know which tool will win is selling something.
When that consolidation does happen — and it will, with the same inevitability that gave us Microsoft Office and Google Search — the competitive advantage of using the tool will evaporate. The advantage will belong to those who started early enough to build context, processes, and institutional muscle around agentic ways of working. The tool itself will be a commodity. The ability to wield it will not.
So here's the message, stripped to its studs: technology is not your constraint. The models are good enough. The tools are good enough. They'll be better next month, and better still the month after that. If you're waiting for the right model, the right tool, the right moment — you're waiting for a coin to land on its edge. It happens, technically. But you wouldn't bet your company on it.
Don't obsess over which model is best today. Obsess over whether your organisation is ready to use any of them well.
The intentional organisation doesn't just adopt AI — it feeds AI what it needs to be useful.
Let me ask you a simple question: what will you prepare for dinner tonight?
If you're a reasonably functional adult, your brain just ran a rapid calculation involving the contents of your fridge, your energy level, the time available, dietary preferences, and perhaps the memory of that leftover chicken that really needs to be used today. You arrived at an answer — maybe pasta, maybe a stir-fry, maybe "order pizza and pretend it counts as cooking."
Now let me ask the same question to three different people:
Your eighteen-year-old son hears it and answers "pizza" before you've finished the sentence. Your vegan wife considers the seasonal vegetables available at the market and suggests a roasted cauliflower with tahini. Your traditional Polish mother-in-law — who is, let's be honest, an absolutely magnificent cook and the correct answer to most dinner-related questions is "whatever she decides" — starts mentally assembling a three-course meal involving at least one thing wrapped in dough. Each answer is completely valid. Each is completely different. The question was identical.
Now add five words: "I want to impress my guests."
Every answer changes. The pizza disappears (or becomes artisanal, wood-fired, with burrata). The cauliflower gets promoted to a centrepiece with pomegranate. Your mother-in-law smiles, because she was going to make it impressive anyway, but now she adds the special napkins.
Context is everything.
When you ask a large language model a question without context, it has to produce an answer anyway. That's its job — predict the next token, then the next, then the next, until a coherent response emerges. And it will produce something. The problem is that "something" without context is the AI equivalent of your eighteen-year-old saying "pizza" to every dinner question. Technically an answer. Rarely the right one.
This is where hallucinations come from — not from some fundamental flaw in the technology, but from the model doing exactly what it was designed to do: generate the most probable continuation given the input. When the input is thin, the output is generic at best and fabricated at worst. The model isn't lying to you. It's confabulating, the way a polite dinner guest makes up an opinion about wine when they don't actually know anything about wine. The solution isn't to distrust the guest — it's to give them better information.
Here's where the damage compounds: most people's first experience with generative AI is context-free. They type a question, get a mediocre answer, and conclude the technology doesn't work. There are no second first impressions. The executive who tried ChatGPT once, got a vague response, and filed it under "overhyped" is not coming back voluntarily. The supply chain planner who asked an AI tool to help with demand forecasting without providing any historical data, constraints, or business rules — and received beautifully formatted nonsense — has a story they'll tell for years. "I tried AI. It didn't work."
It did work. It worked exactly as designed. It just wasn't given anything to work with.
For the agentic enterprise, context means knowledge management. And knowledge management is, without any close competitor, the most undervalued process in enterprise. It sits somewhere between "we should really do that" and "we'll get to it after this quarter's priorities," which is to say it sits in purgatory, dressed in good intentions and a layer of dust.
Let me describe what knowledge debt looks like, because you'll recognise it instantly: it's twenty slide decks explaining a business process that has never been captured in a single BPMN diagram. It's the tribal knowledge that means a new hire becomes productive in month four, not day four, because the things they need to know live exclusively in the heads of people who are too busy to write them down. It's "let's schedule a meeting" as the universal remedy for "nobody documented this properly and now we need an answer." It's the person who knows being the person who is always the bottleneck, because knowing has become their job rather than a thing that's been captured and shared.
Knowledge debt accrues interest, just like financial debt. Every month you don't capture, structure, and make accessible the knowledge your organisation runs on, the cost of doing so later increases. And unlike financial debt, there's no restructuring option. The person who retires takes their knowledge with them. The team that gets reorganised scatters their context across a dozen new reporting lines. The process that worked fine when three people understood it breaks catastrophically when those three people are on holiday simultaneously, which will happen, because the universe has a scheduling sense of humour.
You must invest in context before you get returns from AI. And you have accrued enormous knowledge debt. This is not a criticism — it's a diagnosis. Every enterprise has it. The question is whether you'll address it intentionally or continue to pay the interest while wondering why your AI initiatives deliver mediocre results.
The coin doesn't land on its edge by accident. It lands on its edge because someone placed it there, carefully, with intent.
The intentional leader doesn't replace people — they give people a reason to transform.
The steam engine was invented in 1712. It took until the 1830s — over a century — before railways fundamentally transformed commerce and industry. Electricity was harnessed in the 1830s, but electrification of factories didn't truly peak until the 1920s, nearly a hundred years later. Personal computers emerged in the mid-1970s. Enterprise adoption didn't become universal until the late 1990s — two decades of awkward transition, resistance, pilot programs, and a truly staggering number of terrible training videos featuring people in business casual pointing at cathode ray monitors.
In each case, the technology existed long before the transformation happened. The delay was never technical. It was organisational. It was human. It was the entirely reasonable challenge of taking a system that works — imperfectly, sure, but works — and rebuilding it around a capability that most people in the system haven't internalised yet.
Changing one person's habits is hard. Ask anyone who's tried to start exercising regularly or stop checking their phone before breakfast. Changing a department's habits is harder — you're now dealing with shared routines, mutual dependencies, and the deeply human tendency to revert to familiar patterns the moment pressure increases. Changing an entire enterprise? You'd better go one cup at a time, because boiling the ocean in one go will burn everyone and warm nothing.
Your organisation is full of protein-based intelligence. That's not a metaphor I use dismissively — these are dedicated, experienced, valuable people who have spent years or decades building the knowledge, relationships, and judgment that keep your business running. They are the context. They are the institutional memory. They are the ones who know why that process has an exception clause for the third Tuesday of every quarter (there's a reason, and it probably involves a specific customer, a specific compliance requirement, and a very specific person who was very insistent in 2014).
AI should not be implemented as a mechanism to harvest the benefits of these people's experience for the exclusive advantage of those at the top. If the value created by agentic transformation flows only upward — into executive dashboards, shareholder returns, and efficiency metrics that translate to headcount reduction — you will face resistance. And that resistance will be entirely justified.
Don't implement the AI stick. Don't build systems that monitor how many tokens each employee consumes and rank them on a leaderboard, as if AI usage were a fitness challenge. "Congratulations, Sarah from Accounting used 47% more AI prompts this month!" This is not the way. Tracking adoption to understand reality? Yes, absolutely. You need to know what's being used, by whom, and whether it's working. But the metric that matters is impact, not activity. Reward what was previously impossible, not the process of getting there.
If your best demand planner can now run scenarios in an afternoon that previously took a week, the win isn't "they used the AI tool" — the win is "we can respond to market shifts four times faster." If your procurement team uses an agentic workflow to analyse supplier risk across six dimensions simultaneously, the win isn't the workflow — it's the million-dollar disruption you avoided. Measure that. Celebrate that. Compensate for that.
Here's the framing that works: the competitive zero-sum game is external. There are competitors in your market who will lose customers, market share, and talent if you get this right. That's the game. Internally, within your own walls, it must be win-win. The person who builds a brilliant prompt library that saves their team ten hours a week should benefit from that contribution, not watch as the ten hours get absorbed into higher targets with no recognition.
Think of it this way: you're asking protein brains to help you train silicon brains. The protein brains need a reason to cooperate. "Because I said so" works in parenting, occasionally. In enterprise transformation, it produces compliance at best and sabotage at worst. Give people a reason to participate that serves them, not just you.
The alternative — coercion, mandate, top-down enforcement — produces what every enterprise transformation expert has seen a hundred times: enthusiastic compliance in meetings, creative avoidance in practice, and a transformation that looks great in quarterly reports and terrible in operational reality.
Intentionality isn't about grand plans — it's about where you point your attention first.
We are, as a species, literally boiling the ocean. The global mean sea surface temperature has been rising steadily, and while I don't intend to turn this paper into a climate treatise, the metaphor is worth noting: humanity is, in fact, demonstrating that you can boil an ocean. It just takes a very, very long time, involves enormous amounts of energy, and the results are catastrophic for everyone who lives in or near the ocean, which is most of us.
Trying to turn your company agentic in one go is the enterprise equivalent. You'll generate enormous heat, consume staggering resources, and the results will be disastrous for everyone in and near the organisation. Which is everyone.
The alternative is not inaction. The alternative is deliberate, incremental, well-directed warmth.
Every organisation has pockets of excellence — teams or individuals who are already experimenting, already building, already pushing at the boundaries of what's possible. They're the ones who set up an AI tool on their own time, built a workflow nobody asked them to build, or quietly automated a reporting process that everyone else still does manually while complaining about it. These people exist. They're in your organisation right now. Some of them are frustrated, because they can see the potential and nobody's listening. Some of them are energised, because they've already tasted what's possible. All of them are your kindling.
Leaders must find these pockets and tend to them. Not with bureaucracy — not with steering committees and governance frameworks and stage-gate processes that turn a weekend project into a six-month initiative. Tend to them with air and fuel: give them permission, give them resources, give them cover. Let them try. Let them fail. Let them try again.
Here's a paradox you'll encounter: a catalogue of 200 AI agents is a terrible destination but a great waypoint. If your agentic transformation ends with 200 agents listed in a SharePoint site, each doing one narrow thing, each maintained by one person, each fragile — you've built a zoo, not an enterprise capability. But if those 200 agents represent 200 experiments, 200 learning opportunities, 200 seeds planted by people across your organisation — you've created an ecosystem. Some seeds won't germinate. Some will grow into weeds. A few will become trees. And the orchestration layers of tomorrow — the platforms and frameworks that will coordinate these agents into coherent workflows — will be dramatically better than what exists today. Your job isn't to build the perfect orchestration layer now. Your job is to ensure there's something worth orchestrating.
This links directly to the context challenge. You can approach knowledge management as one Herculean effort — the Augean stables, cleaned in a single diversion of a river. And like that myth, it sounds impressive but requires a demigod. Alternatively, you can grow your knowledge base through pockets of excellence. Each team that builds an agent documents their process. Each agent that succeeds or fails generates learnings. Each experiment adds to the organisational context. Incremental wins, a few instructive failures, evolutionary success. The knowledge grows because the practice grows. The context improves because people are doing the work.
Think of the difference between kitesurfers and container ships. A kitesurfer is fast, agile, responsive to conditions. They can change direction in seconds, exploit a wind shift, try a new approach. If they fall, they're back up quickly. A container ship is slow, steady, and nearly impossible to redirect once it's moving. It carries enormous value, but it needs a harbour built before it arrives.
Your pockets of excellence are kitesurfers. Scale what works. Kill what doesn't. Let them move fast, learn fast, fail fast. When something proves itself — when an agent or workflow demonstrates real, measurable value — then you build the harbour. Then you containerise it. Then you scale it properly, with the governance and infrastructure it now deserves.
But if you start by building the harbour for a container ship that doesn't exist yet, you'll spend two years and a considerable fortune on a facility that may not be in the right port.
Intentionality includes the intent to learn, the intent to correct, and the courage to admit both are necessary.
Generative AI is non-deterministic. This is not a limitation — it's a fundamental characteristic. The same prompt, given to the same model, with the same parameters, will produce different outputs on different runs. For people accustomed to deterministic systems — where the same input reliably produces the same output, where 2 + 2 is always 4 and a SQL query returns the same results every time (assuming nobody's been creative with the database overnight) — this is deeply uncomfortable.
Your first version will probably be wrong. Not catastrophically wrong, not "we've made a terrible mistake" wrong, but wrong in the way that a first draft of anything is wrong. The prompt that seemed perfect in testing produces unexpected results in production. The agent that worked beautifully on ten examples stumbles on the eleventh. The workflow that handled standard cases elegantly chokes on the exception that, it turns out, occurs 30% of the time. (There's a rule in enterprise software: the "exception" is always more common than anyone admits. If someone tells you "that almost never happens," what they mean is "I don't have data on how often it happens, but I'd prefer not to deal with it.")
This is why feedback loops are not optional — they're foundational. Build them from day one. Not "we'll add monitoring later" — later never comes, and by the time you realise you need monitoring, you've accumulated weeks of blind operation. Every agentic workflow should have a mechanism for capturing what went right, what went wrong, and what was surprising. Every prompt should be treated as a living document, not a finished product. Prompt engineering is not a one-time activity. It's gardening. You plant, you water, you prune, you observe, you adjust. And sometimes you realise you planted something in entirely the wrong bed and you need to move it.
Don't zero-shot everything. The temptation is enormous — throw a task at the model, see what happens, declare victory or defeat. But the difference between a zero-shot prompt and a well-engineered prompt with examples, constraints, and context is the difference between asking a stranger on the street for directions and asking a local who's lived there for thirty years. Both will give you an answer. One will get you where you're going.
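To make the contrast concrete, here is a minimal sketch of the difference between a zero-shot prompt and an engineered few-shot prompt. The triage task, the examples, and the wording are all illustrative assumptions, and the actual model call (whichever LLM API you use) is deliberately left out:

```python
# The zero-shot version: a bare question, no examples, no constraints.
ZERO_SHOT = "Classify this supplier email as URGENT or ROUTINE: {email}"

def build_few_shot_prompt(email: str) -> str:
    """Assemble a prompt with a role, constraints, and worked examples."""
    examples = [
        ("Shipment 4417 delayed at customs, customer production line "
         "stops Friday.", "URGENT"),
        ("Please confirm receipt of our updated price list for Q3.",
         "ROUTINE"),
    ]
    lines = [
        "You are a logistics triage assistant.",
        "Classify each email as URGENT or ROUTINE.",
        "URGENT means a customer-facing deadline is at risk.",
        "Answer with exactly one word.",
        "",
    ]
    for text, label in examples:            # worked examples anchor the model
        lines.append(f"Email: {text}\nLabel: {label}\n")
    lines.append(f"Email: {email}\nLabel:")  # the actual task comes last
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Carrier reports truck breakdown, ETA slips past the promised window."
)
print(prompt)
```

The few-shot version costs a few more tokens per call and repays them in consistency: the role, the constraint, and the examples are the "local who's lived there for thirty years."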
Track, monitor, optimise. Not as bureaucracy — please, not as bureaucracy — but as survival. The organisations that succeed with agentic AI will be the ones that treat every deployment as a hypothesis, not a conclusion. "We believe this agent will improve X by Y" is a hypothesis. Test it. Measure it. If it's wrong, that's not a failure — it's data. Adjust and try again.
Human-in-the-loop is a conscious design decision, not an afterthought. For every agentic workflow, ask: where does a human need to review, approve, or redirect? The answer is not "nowhere" — at least not yet, and not for anything that matters. The answer is also not "everywhere" — that defeats the purpose. The answer is specific, considered, and based on the consequences of getting it wrong. An agent that summarises meeting notes can probably run unsupervised. An agent that approves purchase orders above a certain threshold should probably not. The line isn't about trusting AI or not trusting AI — it's about understanding where the cost of error justifies human oversight and where it doesn't.
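The meeting-notes-versus-purchase-orders distinction can be written down as code. This is a sketch only: the action kinds and the 10,000 threshold are hypothetical placeholders for whatever consequence analysis your organisation actually does.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str          # e.g. "summarise_notes", "approve_po"
    amount: float = 0.0

def route(action: AgentAction, po_threshold: float = 10_000.0) -> str:
    """Decide whether an agent action runs unsupervised or waits for a human."""
    if action.kind == "summarise_notes":
        return "auto"                      # low consequence: run unsupervised
    if action.kind == "approve_po":
        # purchase orders above the threshold always need human sign-off
        return "auto" if action.amount < po_threshold else "human_review"
    return "human_review"                  # default to oversight for unknown kinds

print(route(AgentAction("summarise_notes")))            # auto
print(route(AgentAction("approve_po", amount=25_000)))  # human_review
```

The design choice worth copying is the last line: anything the gate doesn't recognise defaults to human review, so new agent capabilities start supervised and earn autonomy, not the other way around.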
Expect to be wrong. Build for it. The coin lands on its edge only when you're willing to flip it enough times.
Every intentional journey has a first step that costs less than you fear and teaches more than you expect.
There are three things you can do in the next ninety days that will cost relatively little, teach you an enormous amount, and set the foundation for everything that follows. Let me be specific, because "start somewhere" is advice so vague it belongs on a motivational poster next to a photograph of a sunset.
Run a hackathon. Before you build a platform.
Passion exists in your team. It coexists, peacefully enough, with mediocrity and go-with-the-flow inertia. That's not cynicism — that's the normal distribution of engagement in any organisation larger than a football team. Some people are excited about what's possible. Some people are doing their job competently and would prefer not to change anything, thank you very much. And some people are actively frustrated — they can see how things could be better and they're stuck in processes designed for a world that no longer exists.
Focus on the passion. More specifically, focus on the frustration — because frustration is passion with nowhere to go. The person who complains that the weekly reporting process is absurd is the person who will build an agent to fix it, if you let them. Turn dissatisfaction with the status quo into a vehicle for change.
A hackathon does this naturally. Give people two days. Give them access to tools. Give them a theme — "automate something that annoys you" is often more productive than any carefully crafted brief. Set stewardship limits — data security, no production systems, nothing customer-facing without review — but within those limits, let the principle be "what is not prohibited is allowed," not "you must ask permission for everything." The first principle produces innovation. The second produces compliance documentation.
After the hackathon, don't file the results in a shared drive and move on. This is where most hackathons die — in the graveyard of good ideas that never got follow-through. Instead: cure or kill. Every project gets a 30-day trial. Monitor it. Does it work? Does anyone use it? Does it save time, reduce errors, improve decisions? If yes, iterate, improve, invest. If no, kill it with gratitude and learn from it. This is not a platform. This is evolution. The platform comes later, grown from the roots of what actually worked.
Give people an AI allowance. Then watch what happens.
SaaS revenue officers — the business development professionals at every software company on Earth — love adding tools to your enterprise. Copilot at $20 per user per month. Office 365 at $18. Claude Pro at $20. Copilot Studio at $400. Each tool is individually justifiable. Collectively, your CFO is developing a twitch.
Here's an alternative: give each team member an AI allowance. A hundred dollars a month, for example. Let them choose what to spend it on. The developer who gets enormous value from Claude Code uses it for that. The marketing analyst who lives in Copilot uses it for that. The supply chain planner who's found an obscure but brilliant niche tool uses it for that. The person who doesn't find any AI tool useful yet doesn't spend it — and that's information too.
Monitor the spend. Understand what's being used, by whom, and what value it creates. You'll see patterns emerge — tools that cluster, tools that nobody uses after the first week, tools that a few people swear by. Kill the long tail of subscriptions nobody touches. Double down on what's working. And where you see heavy usage of external tools, ask whether local inference or internal alternatives could do the same job at a fraction of the cost. You're not penny-pinching — you're learning what your organisation actually needs by letting the people who need it choose.
This approach respects the intelligence of your workforce. It says "you understand your job better than a procurement committee does" — which, in fairness, is almost always true.
Build a New Hire Onboarding Agent. Right now. Today if possible.
Onboarding is the second most overlooked process in enterprise, right after knowledge management — and the two are cousins. Every organisation has a version of the same problem: new hires spend their first weeks lost, overwhelmed, and dependent on the goodwill of colleagues who have their own work to do. The information they need exists, scattered across an intranet that was last redesigned when the company logo was different, a series of slide decks of varying vintage and accuracy, and the memories of people who are usually too busy to sit down and explain things properly.
An onboarding agent — built on your existing documentation, however imperfect — can answer the questions that every new hire asks. Where do I find the expense policy? How do I set up my development environment? Who do I talk to about X? What's the process for Y? These questions are repetitive, predictable, and answerable from existing knowledge. A human answering them for the fifteenth time is a human not doing the work they were hired for.
The beautiful thing about an onboarding agent is the feedback loop. New hires are uniquely positioned to identify what's missing, what's wrong, and what's confusing — because they're experiencing it with fresh eyes. Build the mechanism for them to flag gaps. "I asked the agent about the VPN setup and it didn't know" becomes a signal to improve the knowledge base. Over time, the agent improves because the people using it improve it. This is evolutionary knowledge management — context growing through use.
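The gap-flagging mechanism can start this simple. The topic-matching heuristic and the knowledge entries below are illustrative assumptions; in production the gap list might be a ticket queue rather than an in-memory list, and retrieval would be more than substring matching.

```python
gaps = []  # questions the agent couldn't answer: the improvement backlog

def answer_or_flag(question: str, knowledge: dict[str, str]) -> str:
    """Answer from the knowledge base, or flag the question as a gap."""
    for topic, text in knowledge.items():
        if topic.lower() in question.lower():
            return text
    gaps.append(question)  # the signal that the knowledge base has a hole
    return "I don't know yet. I've flagged this so the docs get updated."

kb = {"expense": "Expense claims go through the Finance portal.",
      "laptop": "Laptops are ordered through the IT service desk."}

print(answer_or_flag("What is the expense policy?", kb))
print(answer_or_flag("How do I set up the VPN?", kb))
print(gaps)  # the VPN question lands here, ready for a human to document
```

The point is the loop, not the lookup: every unanswered question becomes a work item, so the agent's coverage grows exactly where new hires need it to.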
Stop wasting your ninjas' time. The brilliant, experienced, highly paid experts in your organisation should not be spending their Tuesdays re-delivering the same introductory presentation to each new cohort. That's not a good use of anyone — neither the ninja who could be solving real problems nor the new hire who'd benefit more from being able to ask questions at 10 PM when they're reviewing materials at home.
Intentional scaling means amplifying what works, not replicating what's easy.
Once you've started — hackathons running, AI allowance revealing patterns, onboarding agent teaching you about your own knowledge gaps — three initiatives will carry you from experimentation to capability.
Build an Internal Skills Market. Treat prompts like intellectual property, because they are.
A skill, in the agentic context, is a document that tells a language model how to do something well. It's a structured set of instructions, examples, and constraints that transforms a general-purpose AI into a specialist. This very paper was built using two of them — one for writing long-form content with a specific voice and structure, another for converting that content into clean, responsive HTML. Skills are the new currency of the agentic enterprise.
Don't force your employees to reinvent the wheel. If someone in your logistics team has built a prompt that reliably extracts delivery exceptions from carrier emails with 95% accuracy, that prompt has value beyond their team. If someone in finance has engineered a skill that reconciles invoices across three systems in a way that used to take four hours, that skill is an asset.
Build a gallery. Make it searchable. Make it easy to contribute and easy to consume. This is hackathonable in a weekend — it doesn't need to be a billion-dollar platform. A curated collection, reviewed for quality, accessible to all.
But here's the human wrinkle: prompts are "secret sauce." Not everyone shares willingly. The person who spent three weeks perfecting a prompt may not be eager to hand it to colleagues who'll use it without understanding the effort behind it. This is natural. Play on vanity — recognition works. Credit the creator. Track usage. Create a leaderboard of most-used skills. Let people build reputations as the person who wrote the skill that saved the procurement team fifteen hours a week. In an economy of knowledge workers, being known as someone who creates valuable knowledge is its own reward. For the rest, there's always the annual review.
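"Hackathonable in a weekend" is not an exaggeration. Here is a sketch of the core of such a gallery: contribution, usage tracking, and the recognition leaderboard. Class and field names are illustrative; a real version would add search, review, and access control.

```python
from collections import Counter

class SkillGallery:
    def __init__(self):
        self.skills = {}        # name -> {"author": ..., "prompt": ...}
        self.usage = Counter()  # name -> times used

    def contribute(self, name, author, prompt):
        self.skills[name] = {"author": author, "prompt": prompt}

    def use(self, name):
        self.usage[name] += 1   # track consumption, credit the creator
        return self.skills[name]["prompt"]

    def leaderboard(self, top=3):
        """Most-used skills with their creators: the recognition mechanism."""
        return [(name, self.skills[name]["author"], count)
                for name, count in self.usage.most_common(top)]

gallery = SkillGallery()
gallery.contribute("delivery-exceptions", "logistics-team",
                   "Extract delivery exceptions from carrier emails...")
gallery.contribute("invoice-reconcile", "finance-team",
                   "Reconcile invoices across three systems...")
for _ in range(5):
    gallery.use("delivery-exceptions")
gallery.use("invoice-reconcile")
print(gallery.leaderboard())
```

Note that usage is counted per skill, not per person: the leaderboard rewards creating something others reach for, not raw activity.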
Build One Brain. Don't overthink it.
Your organisation needs a corporate memory — a single, searchable, structured repository of what the company knows. Not what it's filed (that's your document management system, and we both know what state that's in). What it knows.
In the agentic world, this is the equivalent of what OpenClaw calls MEMORY.md — a living document that an AI system can reference for context. At enterprise scale, it's a RAG (Retrieval-Augmented Generation) system: a knowledge base that the AI can search and reference when answering questions or completing tasks.
RAG is well-established technology. It works. You don't need to listen to a hundred startups pitching their perfect memory system, each claiming to have solved knowledge management in a way that nobody else has. Start with what exists. Capture your knowledge in interoperable formats — Markdown is ideal: lightweight, readable by humans and machines alike, portable, and free of vendor lock-in. Not PDFs (search-hostile). Not PowerPoints (where knowledge goes to become bullet points and die). Not proprietary formats that hold you hostage.
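The whole RAG pattern fits in a page. This sketch uses naive keyword overlap as the retriever so it runs anywhere; a production system would swap in embeddings and a vector store, but the shape (chunk, retrieve, prepend to the prompt) is the same. The Markdown content is a made-up example.

```python
def chunk_markdown(doc: str) -> list[str]:
    """Split a Markdown document into paragraph-sized chunks."""
    return [p.strip() for p in doc.split("\n\n") if p.strip()]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by how many query words they share (a stand-in for embeddings)."""
    words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend the retrieved context so the model answers from your knowledge."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = chunk_markdown(
    "## Expenses\n\nExpense claims go through the Finance portal within 30 days.\n\n"
    "## VPN\n\nNew hires request VPN access via the IT service desk."
)
print(build_prompt("How do I set up VPN access?", kb))
```

Notice what made this trivial: the knowledge was already in Markdown. The same two documents trapped in a PDF or a slide deck would need an extraction pipeline before the first line of this sketch could run.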
Some knowledge is tiered — not everyone should have access to everything. Financial projections, personnel data, competitive intelligence — these need access controls. Build them in from the start, but don't let the perfect be the enemy of the good. A corporate brain with imperfect access controls and genuine knowledge is infinitely more valuable than a perfectly secured system with nothing in it.
Start capturing today. Every meeting that produces a decision should produce a record. Every process that's documented in someone's head should be documented somewhere accessible. Every piece of tribal knowledge that currently lives in the space between someone's ears should have a copy somewhere more permanent. The best memory system is the one that exists.
Embrace Microsoft Agentic. You're already paying for it.
Your organisation is almost certainly on Office 365. You're paying for it. Microsoft is investing billions in agentic and generative AI capabilities within that ecosystem — Teams integration, Excel copilot, draw-not-code interfaces, agent builders. These aren't future promises. They're available now. Some of them are rough. Some of them are genuinely useful. All of them are part of a platform you already own.
I'm not suggesting Microsoft is the only answer — that would be as foolish as declaring a winner in the clanker wars. But ignoring the agentic capabilities built into the platform your entire workforce already uses every day is like owning a Swiss Army knife and hiring a separate contractor for each blade. Open it. Try it. Monitor what works. Build on the successes. Report the failures (Microsoft actually listens, occasionally, when enough enterprise customers say the same thing).
The advantage of starting with Microsoft's agentic tools is adoption friction. Your people already know Teams. They already know Excel. An agent that works within those familiar environments faces a dramatically lower barrier than one that requires a new tool, a new login, and a new set of habits.
Sustaining transformation is an intentional act — entropy is the default, and entropy always wins unless you fight it.
Starting is one thing. Sustaining is everything. Here are four investments that compound over time.
Let Them Claw. Then Claw-In.
OpenClaw, Hermes, and the wave of agentic tools arriving in the first half of 2026 represent something new: personal AI systems that individuals can configure, train, and deploy for their own workflows. Not corporate tools deployed by IT. Personal agents, shaped by personal context, serving personal productivity.
This matters because personal context engineering — the art of giving an AI system enough about you, your work, your preferences, and your goals to be genuinely useful — is a skill that can only be learned by doing it. You can't teach it in a workshop. You can't deploy it top-down. It has to be practised, refined, and internalised by each person.
The cost is remarkably low. An Ollama Cloud Pro subscription and an OVH VPS together run about $30 a month. Give people domains. Let them build. Let them experiment with agents that manage their email, organise their notes, draft their reports, track their tasks. Some will build brilliant workflows. Some will build things that barely work. All of them will be learning the most important skill of the agentic era: how to communicate effectively with AI systems in the context of real work.
Then claw-in: take the best ideas — the workflows that individuals have built and proven — and insource them. Turn personal innovations into team capabilities. Turn team capabilities into enterprise assets. This is bottom-up innovation with top-down amplification, and it works because the innovation has already been tested in the harshest environment possible: daily use by someone who doesn't have time for things that don't work.
Invest in Local Inference and API-First Orchestration.
Local inference is ready for prime time. Models running on consumer hardware — a gaming PC with a decent GPU, around $1,499 — can now handle 256k context windows and deliver results above 2024 state-of-the-art benchmarks. That's not a typo. A machine sitting under someone's desk can now outperform what was cutting-edge cloud AI eighteen months ago.
The implications are significant. Privacy-first: data never leaves the building. Offline-capable: no dependency on external services. Cheaper for recurring work: once you've bought the hardware, the marginal cost of each inference is electricity. For tasks that involve sensitive data, high-volume repetitive processing, or environments where internet connectivity is unreliable, local inference isn't a compromise — it's the optimal choice.
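The "marginal cost is electricity" argument can be made concrete with a back-of-envelope break-even calculation. All figures below (hardware price, token volume, API rate, power cost) are illustrative assumptions, not quotes; substitute your own.

```python
# Back-of-envelope break-even between cloud API inference and a
# local box. Every number used below is an illustrative assumption.

def breakeven_months(hardware_cost: float,
                     monthly_tokens: float,
                     api_price_per_mtok: float,
                     power_cost_per_month: float) -> float:
    """Months until the local box is cheaper than paying per token."""
    api_monthly = monthly_tokens / 1_000_000 * api_price_per_mtok
    saving = api_monthly - power_cost_per_month
    if saving <= 0:
        return float("inf")  # at this volume, local never pays off
    return hardware_cost / saving

# A $1,499 machine, 200M tokens/month of repetitive work,
# $3 per million tokens via API, roughly $20/month electricity:
months = breakeven_months(1_499, 200_000_000, 3.0, 20.0)
```

At those assumed figures the machine pays for itself in under three months; at a tenth of the volume it takes years. The calculation, not the hardware, is what tells you whether local inference belongs in your stack.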
On the tooling side, prefer CLI-based API orchestration over MCP middleware where possible. Build your own facades for your enterprise tools — Workday, Jira, Clockify, Concur, whatever your stack includes. An API facade gives you full control over what data flows where, keeps context windows smaller (and therefore cheaper), and doesn't depend on a middleware layer that may or may not survive the next consolidation cycle. This isn't anti-platform — it's pro-resilience. Build what you control. Integrate what you must.
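As a sketch of what "full control over what data flows where" can look like in practice, here is a facade that whitelists which fields of a raw payload ever reach the model's context window. The Jira-style field names and the sample payload are hypothetical stand-ins for your real enterprise API client.

```python
# Sketch of an API facade controlling what reaches the model's
# context window. Field names and payload are hypothetical.

ALLOWED_FIELDS = {"key", "summary", "status", "assignee"}  # no notes, no attachments

def to_context(raw_issue: dict) -> dict:
    """Strip a raw API payload down to the fields the agent may see."""
    return {k: v for k, v in raw_issue.items() if k in ALLOWED_FIELDS}

# A raw payload typically carries far more than the agent needs:
raw = {
    "key": "OPS-42",
    "summary": "Renew TLS certificates",
    "status": "In Progress",
    "assignee": "j.doe",
    "internal_notes": "salary discussion -- do not share",
    "watchers": ["a", "b", "c"],
}
context = to_context(raw)  # only the four allowed fields survive
```

The allowlist does double duty: it is a data-governance control and a cost control, because every field you strip is tokens you never pay to process.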
Make Content Immersive. It's No Longer Text Only.
Generative AI has democratised multimodal content creation in a way that would have seemed fantastical three years ago. Video generation, text-to-speech, podcast creation, interactive presentations — these are no longer the province of specialists with expensive tools. They're accessible to anyone with the right prompts and a few minutes.
Consider the weekly team update. Traditionally, this is either a meeting (consuming thirty minutes of everyone's time, half of which is logistics) or a newsletter (which roughly 11% of recipients read, and that's being generous). Now imagine a ten-minute podcast, generated from the same content, consumable during a commute. Or a five-minute video summary with visualisations. Or both.
Voxtral, released in March 2026, brings text-to-speech to a quality level where generated audio is genuinely pleasant to listen to — not the robotic monotone of earlier systems, but natural, expressive, and available in multiple languages. On-device TTS means no data leaves the organisation. A weekly podcast update is easier to produce than a hand-crafted newsletter, reaches people who prefer audio, and can be consumed in contexts where reading isn't practical.
This builds on the "make content interoperable" principle: capture knowledge in structured, machine-readable formats (Markdown), then deliver it through whatever medium best serves the audience. The knowledge is the asset. The delivery format is a choice. Make the choice that maximises engagement.
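One way to see "the knowledge is the asset, the delivery format is a choice" in code: a small sketch that splits a Markdown update into spoken-style segments a TTS engine could read aloud. The heading convention and the phrasing are illustrative assumptions, not a prescribed pipeline.

```python
# Sketch: one Markdown source, many delivery formats. Splitting on
# '## ' headings yields segments a TTS engine could voice. The
# intro phrasing is an illustrative choice.
import re

def to_script(markdown: str) -> list:
    """Turn '## Heading' sections into spoken-style segments."""
    segments = []
    for block in re.split(r"^## +", markdown, flags=re.M):
        block = block.strip()
        if not block:
            continue
        title, _, body = block.partition("\n")
        segments.append(f"Next up: {title}. {body.strip()}")
    return segments
```

The same source file can feed this script generator, a newsletter renderer, and a slide builder; only the last step differs per medium.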
Recognise Your Agentic Talent. Or Watch Them Walk.
Here's a truth that will become increasingly uncomfortable over the next twelve to eighteen months: as your organisation masters agentic capabilities, the people leading that mastery become the most sought-after talent in the market. The prompt engineer who built your skills library, the team lead who ran your first successful agentic workflow, the developer who figured out how to integrate local inference with your enterprise systems — these people are developing capabilities that every other company in your sector wants.
If you don't recognise them, someone else will. And that someone else will welcome them with open arms and a compensation package that reflects the value they've already demonstrated.
This isn't a threat — it's arithmetic. The demand for agentic-era talent is growing faster than the supply. The people who understand both the technology and the organisational change it requires are rare. If they're in your organisation, they're there because of some combination of compensation, mission, culture, and opportunity. Make sure those factors are deliberately calibrated, or the combination will tip.
Win-win, again. The person who delivers 10x impact through agentic capabilities should see that reflected in their recognition, their growth trajectory, and yes, their compensation. Not because it's generous, but because it's accurate. They are worth more than they were before they developed these skills, and the market knows it even if their current performance review doesn't.
Build a talent retention plan that accounts for the new reality. This isn't HR's problem. It's a leadership responsibility.
This chapter doesn't start with an intentionality sentence. It is one.
Strip away the technology. Set aside the models, the tools, the frameworks, the acronyms. What remains in every chapter of this paper is a choice. Not a technical choice — a leadership choice. A choice that only business leadership can make, because it involves direction, resources, risk tolerance, and the willingness to change how things have always been done.
The technology will sort itself out. It always does. The steam engine didn't need a CEO to decide which metallurgical technique was optimal — it needed someone to decide to build a railway. Electricity didn't need a board to choose between AC and DC (well, it did, actually, and that was a messy affair involving elephants, but the point stands) — it needed someone to decide to wire the factory. AI doesn't need your leadership team to choose between GPT and Claude and Gemini. It needs them to decide that the organisation will become agentic, and then to make the hundred smaller decisions that turn intention into reality.
Here's a framework that works, described not as a methodology — the world has enough of those — but as a set of commitments.
Set a north star. It can be vague. "We will be an AI-native organisation within three years." That's directional, not prescriptive. It gives permission without mandating a path. It says "this matters" without saying "do it exactly this way." The specificity comes from the incremental plans beneath it — the hackathons, the allowances, the onboarding agents, the skills markets, the corporate brain. Each of these is concrete, measurable, and achievable within a quarter. Together, over time, they add up to the north star.
The mountain-top metaphor is useful: the summit is visible, but you don't climb it by helicopter. You climb it through a series of camps, each one higher than the last. At each camp, you acclimatise, assess conditions, adjust your plan. Sometimes you discover that the route you planned is impassable and you need to find another way. Sometimes you discover a route that's better than anything on the map. The summit is the same. The path is discovered through climbing.
Being intentional means several things, each of which is a decision:
Name it. Give the transformation a name, a sponsor, a home. Not a department — a mandate. Something that says "this is real, this is funded, this is happening." The things that aren't named don't get done. They float in the space between "somebody should" and "nobody did."
Fund it. Not lavishly — transformation doesn't require a massive budget. It requires a deliberate one. The AI allowances, the hackathon time, the infrastructure for local inference, the headcount for the skills market — these are not enormous costs. They are intentional investments. The organisations that budget nothing for agentic transformation are not saving money. They're accruing debt of a different kind.
Measure the right things. Not token consumption. Not the number of agents deployed. Not the percentage of employees who've completed the AI awareness training. Measure impact: time saved, decisions improved, errors avoided, revenue enabled, knowledge captured. The metrics should answer the question "are we better at our jobs?" not "are we using the tools?"
Protect the pockets of excellence. The kitesurfers will face headwinds. Not from the market — from inside. From the colleague who thinks this is a fad. From the middle manager who sees AI as a threat to their relevance. From the procurement process that wasn't designed for $30 VPS subscriptions. From the security team that hasn't yet developed a policy for local inference. Protect the innovators. Give them cover. Shield them from the organisational antibodies that attack anything unfamiliar.
Choose win-win deliberately. It won't happen by default. The default is that the benefits of AI accrue to the organisation and the costs are borne by individuals — in disrupted routines, obsoleted skills, and increased anxiety about relevance. Win-win requires active design: shared benefits, transparent communication, retraining opportunities, and recognition that the value created by AI is value created by the people who made it work.
The paper you're reading — or the presentation you're about to attend — is common sense. I'm aware of that. Nothing in these ten chapters requires a breakthrough insight or a radical departure from what thoughtful business leaders already know. The challenge is not understanding. The challenge is doing.
Common sense that stays in the inbox is indistinguishable from no sense at all. Like a coin landing on its edge — technically possible, frequently discussed, almost never observed in practice. The difference between the enterprises that transform and those that don't is not intelligence, or resources, or technology. It's the decision to pick up the coin and place it deliberately where you want it to stand.
If you take nothing else from these pages, take these. One idea per sentence. All of them yours to use.
Technology is a treadmill — don't wait for the perfect model; start running with what exists.
Context is everything — without it, AI is an expensive random number generator with excellent grammar.
Knowledge debt is your real liability — every undocumented process is a future failure you can't predict.
Your people are not the obstacle — they're the asset, and they need a reason to participate, not a mandate to comply.
Boil the ocean one cup at a time — find your pockets of excellence and give them fuel, not governance.
Expect to be wrong — build feedback loops from day one, because your first version is a hypothesis, not a solution.
Start with a hackathon, an AI allowance, and an onboarding agent — ninety days, manageable cost, enormous learning.
Build a skills market, a corporate brain, and embrace the tools you already own — scaling is amplification, not reinvention.
Let people build personal agents, invest in local inference, make content immersive, and for the love of everything strategic — recognise your agentic talent before someone else does.
Lead intentionally — name it, fund it, measure impact not activity, protect your innovators, and choose win-win every single time.
Common sense that stays in the inbox is worthless. The coin doesn't land on its edge by accident. Someone placed it there.
This paper is an invitation. Not to agree with everything in it — agreement without challenge is just another form of inertia. But to have the conversation. To make the decision. To start.
The ocean won't boil itself. Not usefully, anyway.
Let's begin.