On Why the Future Doesn't Wait
Meet Gibson
"The future is already here. It's just not evenly distributed yet." – William Gibson
Proof of Concept
- Farmers in Kenya leapfrogged credit cards entirely – they went straight to mobile money.
- M-Pesa launched in 2007. By 2013, 43% of Kenya's GDP flowed through it. Rural Kenyans had a more functional financial future than most Americans with legacy banking infrastructure. The future arrived in Nairobi before it did in Mississippi.
- Estonia ran a fully digital government – including legal identity, taxes, and voting – while the U.S. still mailed paper census forms.
- By 2007, 98% of Estonians filed taxes online in under five minutes. The e-governance future existed in Tallinn for two decades before it was a policy aspiration in Washington.
The Argument
For many Gibson disciples, the future doesn't arrive randomly – it arrives where the cost of not changing is highest.
Kenya had no legacy banking infrastructure to protect, so mobile money filled the void.
Estonia was a tiny post-Soviet nation that needed to build government from scratch and couldn't afford bureaucracy, so it went digital.
The places that adopt the future first aren't the most innovative – they're the most exposed. They don't have the luxury of the old system. Privilege, in this frame, is just another word for insulation from urgency.
The American with a Wells Fargo account doesn't need M-Pesa. The U.S. federal government with its entrenched paper workflows doesn't need to become Estonia.
The future pools where the old world offers no shelter. Which means the last people to see the future coming are usually the ones most convinced they're in the middle of it.
- The U.S. Army learned what Gibson implied: potential isn't the variable. Urgency is.
The Other Side of the Chasm
No, not a Gilbert & Sullivan knock-off.
Everett Rogers' Diffusion of Innovations (1962) found that adoption follows a predictable bell curve: risk-tolerant Innovators move first, visionary Early Adopters follow – betting on promise before proof – then a cautious Early Majority that won't move until it works reliably.
Geoffrey Moore's Crossing the Chasm (1991) identified the fault line between those last two groups: Early Adopters tolerate friction; the Early Majority demands a finished, de-risked product. That gap โ the Chasm โ is where most technologies quietly die.
Together, Rogers and Moore describe the future as something that spreads by persuasion, moving person by person, conditional on psychology and appetite for uncertainty. Mass adoption, in their model, is the finish line โ proof that the future has been made safe enough for everyone.
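Rogers' bell curve has a standard mathematical formalization: the Bass diffusion model, in which adoption is driven partly by external influence (Rogers' Innovators) and partly by imitation of existing adopters (the social proof the Early Majority waits for). The sketch below is illustrative only – the parameter values are invented for demonstration, not drawn from the essay:

```python
# Bass diffusion model: new adoption each period depends on external
# influence (p, "innovation") plus imitation of prior adopters (q).
# Parameter values here are conventional illustrations, not data.
def bass_adoption(p=0.03, q=0.38, m=1.0, steps=40):
    adopted = 0.0       # cumulative fraction of the population that has adopted
    per_period = []     # new adopters in each period
    for _ in range(steps):
        new = (p + q * adopted / m) * (m - adopted)
        adopted += new
        per_period.append(new)
    return per_period

rates = bass_adoption()
peak = rates.index(max(rates))
# Adoption per period rises, peaks mid-curve, then falls away:
# Rogers' bell curve. Moore's Chasm sits on the rising side of the
# peak, where imitation (q) must take over from raw innovation (p).
```

The key design point the model makes explicit: early adoption is carried almost entirely by the `p` term, and mass adoption only arrives once the `q * adopted` imitation term dominates – which is precisely the handoff Moore argued most technologies fail to make.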
The Through-line
Gibson's insight and Rogers/Moore's framework circle the same argument from opposite ends.
- Rogers and Moore describe how the future travels through a population that has choices.
- Gibson describes what happens when a population doesn't.
- Kenya didn't cross the Chasm – it was pushed across by a banking system that had already written off 80% of the population. Estonia didn't weigh the risks of going digital; it was a newly sovereign nation that couldn't afford the alternative.
One Chasm is about who you are. The other is about where you're standing.
The Context
Both Gibson and Rogers/Moore assume the future is something you can orient to. It pools where context demands it (Gibson), or it spreads through populations as psychology permits (Rogers/Moore).
In either case, there's a map. You can be early or late. You can be exposed or insulated. You can (mostly) see the Chasm and decide whether to cross it.
At McChrystal Group, we draw a distinction between complicated and complex environments. Knowing which one you're in dramatically affects how you should orient to a chasm and to the future:
- Complicated problems have knowable parts – engineering challenges, legal contracts, supply chains. Dave Snowden's Cynefin framework puts them in the domain of "good practice": the relationship between cause and effect is discoverable, experts can analyze it, and the right answer exists. You can solve your way out.
- Complex problems are categorically different. Snowden calls this the domain of "emergent practice" – cause and effect are only visible in retrospect, and the system responds to your interventions in ways you can't predict in advance. Ralph Stacey adds that complex environments involve agents who learn and adapt, meaning the ground shifts underfoot as you move. You don't solve a complex problem. You probe, sense, and respond – what McChrystal calls building the awareness to adapt faster than the environment changes.
Gibson's world is largely complex – emergent, context-shaped, resistant to clean prediction. Kenya and Estonia sit at the sharper end of that spectrum: environments where complexity, combined with urgency, forces change because no alternative exists.
Rogers and Moore were mapping the softer version – how technologies that could be ignored often aren't, because psychology and social proof eventually pull a population across the Chasm, even when the old world is still technically functional.
The Amplifier
Exponential – most precisely articulated by Azeem Azhar – is a different lens from Cynefin.
Snowden asks: given the nature of this problem, how should you decide?
Azhar asks something prior: how fast is the environment itself changing?
The exponential gap opens when a technology improves faster than the institutions built to govern it.
These aren't competing frameworks โ Cynefin still applies inside an exponential environment.
But the speed changes the stakes.
Smartphones rewired commerce, attention, and democracy before anyone thought to ask who owned your data – GDPR arrived eleven years after the iPhone. Solar dropped 90% in cost in a decade while net metering policy was still being debated by committees that barely understood it. CRISPR edited human embryos before the ethics boards finished their first draft – and the WHO framework came after the scandal, not before. The technology outran the room. The room caught up – awkwardly, expensively, sometimes only after someone got hurt. But it caught up.
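The structure of that gap can be made concrete with a little arithmetic: capability that compounds versus institutions that adapt additively. The numbers below are invented for illustration – nothing in the essay specifies growth rates – but the shape of the result doesn't depend on them:

```python
# Illustrative sketch of an "exponential gap": technology capability
# compounding each year vs. institutions improving by a fixed increment.
# Both growth parameters are assumptions for demonstration purposes.
def exponential_gap(years=12, tech_growth=1.35, inst_increment=0.35):
    tech, inst = 1.0, 1.0       # both start at the same baseline
    gaps = []
    for _ in range(years):
        tech *= tech_growth     # multiplicative, compounding improvement
        inst += inst_increment  # additive, committee-paced change
        gaps.append(tech - inst)
    return gaps

gaps = exponential_gap()
# The gap widens every single year: each period's shortfall exceeds
# the last, no matter how diligently the linear side keeps working.
```

The point of the sketch is that the institutions never stop improving – they fall behind anyway, because a constant effort cannot track a compounding curve.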
So, exponential change is disorienting.
It produces winners and losers faster than policy can referee. But it is still, at its core, a human-authored system. The gap between technology and orientation is uncomfortable – even punishing. And the honest asterisk is this: closing it requires not running faster, but rebuilding the institutions doing the running. That's hard. But it has been done before.
The Future?
Does AI unsettle all of these frames?
The recursive possibility at the frontier of AI – where each generation of the system helps architect the next, better generation – doesn't give you emergent behavior to observe and adapt to. Or rather, it could, but it will likely outrun us.
Why? Because AI can rewrite the context before we've finished reading the room.
Every prior technology โ even complex ones โ still had humans as the authors of context.
- We built the banking system Kenya worked around.
- We influenced the Soviet legacy Estonia had to reject.
- Even in exponential growth, humans were still deciding what was growing and why.
The alignment problem introduces something genuinely different: a system that can improve its own objectives, its own training, its own frame – without waiting for human consensus on whether that's the right direction.
That's not just fast – it's a different category of thing. Every framework we've used to navigate change assumes humans are the agents doing the orienting. At sufficient capability, AI removes that assumption.
The fear of AI is the fear of being on the wrong side of Gibson's distribution and not knowing it. But every prior technology had a fixed distribution. Someone drew it; someone could redraw it. The future pooled somewhere, and you could – in principle – find it.
AI introduces the possibility of a technology that redraws its own map, faster than we can read the last one.
We are all the American with the Wells Fargo account: certain the infrastructure is sufficient because nothing has yet required us to imagine otherwise. But AI doesn't need our banking system to fail. It doesn't need our urgency. It doesn't need our consent. It improves without permission, toward ends we haven't named, at a pace we didn't set.
Gibson was right: the future is already here. He just didn't say what happens when the future starts writing itself.