The Gap — Geopolitical reading and organisational exposure
There is something that invites humility in watching a geopolitical situation unfold in real time. Not so much because the events themselves are surprising, but because of the gap between what was visible and what was seen. Between what the signals were pointing to and what was concluded from them.
It is worth pausing on that word: surprise. The feeling of being surprised by an event is rarely a simple reaction to something genuinely unforeseeable. It tends to be proportional to our distance from the situation — to what we did not know, to the uncertainty we were operating in, to the gaps we had filled with comfortable assumptions. Surprise says something about the observer as much as it says something about the event. And the observer is never truly outside the system they are trying to read — they are part of it.
What systems thinking and cybernetics point to here — and I am reading these works freely, not restating them faithfully — is that how we look shapes what we see. The analyst’s position, their access, their prior categories, their blind spots: all of this conditions what becomes visible and what does not. There is no neutral vantage point. No reading from nowhere.
Surprise, then, is not only a signal about the event. It is also a signal about the observer.
But there is something else worth saying before going further. Even the most rigorous analytical work — sustained, honest about its own limits — will not eliminate surprise. Some portion of uncertainty is irreducible. Geopolitics always retains something that resists full anticipation. This piece does not propose a method to fix that. It simply tries to reflect on some of the reasons the gap exists — and on what a more honest posture toward it might look like.
At what level does the explanation sit?
There is a notion in cognitive science — that of explanatory level — which I find useful to borrow here, freely and with the necessary caution. What follows is my own transposition to the geopolitical field, not a direct application of the work of those who formalised it.
Albert Moukheiber uses a simple example: explaining a car accident by studying the atoms that make up the bodywork is not wrong — they are indeed there. But it is not the right level. What explains the accident is the state of the road, the speed, the visibility, perhaps the driver’s fatigue. The right explanatory level depends on what one is trying to understand.
In geopolitics, the same challenge exists. When trying to explain a conflict, a policy shift, an actor’s behaviour, one implicitly chooses a level of analysis. Official declarations. Short-term incentives. Long historical dynamics. Economic structures. Each of these levels illuminates something real. But they do not all illuminate the same thing — and stopping at the most accessible, the most immediate, sometimes produces readings that feel coherent but miss what is actually driving the situation — with the risk of misreading the causes, drawing erroneous conclusions, and building responses on a foundation that does not hold.
Reading backwards — and the problem of conflicting narratives
The invasion of Ukraine in February 2022 is often cited as an illustration of these limits. What follows is a reading built after the fact, from information available today — not an account of what analysts should have seen in real time.
The signals were there: troop movements, diplomatic deadlock, a pattern of behaviour over years. Beyond those signals, there was a structural level — long-term dynamics, the weight of accumulated tensions, the logic of a context built over decades. At that level, the outcome may have been less unforeseeable than it seemed. But “may have been” and “at that level” are necessary qualifications — not rhetorical formulas.
Because retrospective readings carry their own distortion. Once we know how events unfolded, it becomes easy to trace a coherent line backward. That coherence is partly a construction — made possible by knowledge of the ending. It was not available to those who had to read the situation before the conclusion existed to organise the evidence around it.
There is a further problem with retrospective readings that deserves to be named. The reconstruction is not only made possible by knowledge of the ending — it also tends to impose a structure that was not there to begin with. Events are presented as a linear chain: A produced B, which produced C. That chain feels coherent. It satisfies the need for explanation. But it misrepresents how the situation actually unfolded.
What was actually in motion was not a line. It was a system — a set of actors, constraints, and feedback loops in continuous interaction, each influencing the others, each being shaped by the responses it provoked. Change one loop, and the outcome changes. Not incrementally — potentially entirely. The number of possible trajectories through such a system is not merely large. It is effectively infinite.
This means that the linear reading produced after the fact is not simply incomplete. It is structurally false — a projection of a rationality that the situation never had. And it carries a particular danger: it makes the outcome look inevitable. Which it was not.
That distortion compounds with another — one that was particularly visible in the Ukrainian case. At the time, most of those trying to make sense of the situation were working from what was publicly available. What some intelligence services were reading in closed channels was not part of that analytical environment for the majority. And even the public signals — including those made public in an unusually deliberate and transparent way by certain intelligence agencies — entered a contested interpretive space. Real indicators of intent, or a pressure campaign, a posture, a manipulation? That question was not unreasonable. It had its own coherence. The availability of information does not, on its own, resolve the problem of how to interpret it — nor the question of the level at which one chooses to read it.
This is the condition in which geopolitical reading actually happens — in the middle of competing narratives, partial access, and a brain that is, structurally, a meaning-making machine — and that does not stop being one simply because the situation is too complex to fully read.
Observing, in the face of all this, means holding that tendency at a slight distance. Not suppressing it — that is not possible. But noticing it. Asking: am I observing what is actually here, or am I observing what my existing schema is making of it?
What the signals were saying — and the difficulty of hearing them
One of the recurring challenges is not a shortage of information. In most contexts today, there is more data available than can be processed. The real difficulty lies elsewhere: distinguishing what is confirmed, what is inferred, and what is merely suggested.
One discipline that helps — less as a solution than as a safeguard against premature closure — is to keep three types of material explicitly separate. Confirmed facts — verifiable, sourced, not in dispute — form the foundation. Analytical hypotheses — informed interpretations of what those facts suggest — are the working layer. They should be held lightly, labelled as such, and revised when new information arrives. And then there are weak signals: early indicators that do not yet constitute identifiable risks but point in a direction. Often dismissed because they are not yet facts. Often the most valuable — not because they guarantee correct anticipation, but because they open a space for reflection before the situation closes.
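The three-tier separation described above can be sketched, loosely, as a small tagging structure. This is an illustration of the discipline, not a tool the text proposes; the class names, tiers, and example entries are all invented for the sketch.

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    FACT = "confirmed fact"        # verifiable, sourced, not in dispute
    HYPOTHESIS = "hypothesis"      # interpretation; held lightly, revisable
    WEAK_SIGNAL = "weak signal"    # early indicator, not yet a fact

@dataclass
class Item:
    text: str
    tier: Tier
    sources: list = field(default_factory=list)

def by_tier(items, tier):
    """Return only the items sitting in a given evidential tier."""
    return [i for i in items if i.tier == tier]

# Illustrative entries only -- not drawn from any actual analysis.
dossier = [
    Item("Troop concentrations observed near the border", Tier.FACT,
         sources=["satellite imagery"]),
    Item("The build-up indicates an intent to invade", Tier.HYPOTHESIS),
    Item("Unusual logistics contracts signed in the region", Tier.WEAK_SIGNAL),
]

facts = by_tier(dossier, Tier.FACT)
hypotheses = by_tier(dossier, Tier.HYPOTHESIS)
```

The value of the structure is not the code itself but what it forces: every item must declare which tier it belongs to, so a hypothesis cannot quietly circulate as a fact.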
The disruptions to Red Sea shipping that intensified in late 2023 and into 2024 illustrate this difficulty. The signals existed — regional tensions, progressive escalations, underlying dynamics. But distinguishing those signals from the ambient noise, and drawing operational conclusions before the event, is precisely the difficult exercise. Easy to say today. Less obvious to do in real time, in uncertainty, without knowing which trajectory would become structural.
The same reflection applies to the fragmentation of trade relationships since 2022 — what some analysts call friendshoring, the reorientation of supply chains, the use of export controls as a geopolitical lever. These developments built gradually, through observable shifts. But observing them was not enough to anticipate their scale or pace. That is the core difficulty: signals are rarely absent. What is often missing is the capacity to weigh them correctly in an environment where other signals point in different directions.
What is visible, and what is actually happening
What actors give to be seen is rarely the totality of what is in motion. Behind the public statements, the official positions, the diplomatic gestures, there are channels that surface analysis does not reach — parallel negotiations, quiet arrangements, decisions taken in restricted circles that do not filter outward.
Sometimes what is visible is indeed the main current. But sometimes it is a posture — useful for managing audiences, maintaining a productive ambiguity, or covering a direction that cannot yet be acknowledged publicly. This is not an invitation to cynicism. It is simply a reminder that the visible is one layer among several.
What makes non-visible channels partially readable — gradually, over time — is sustained familiarity with the actors, their patterns of behaviour, the capacity to notice what is not being said as much as what is. A change in tone without a change in position. A silence where a statement might have been expected. An absence where a presence would normally be.
This kind of reading does not assemble itself on demand. It builds before the situation becomes urgent — which is precisely when it is most needed.
The organisation as a map of interdependencies
These limits of external reading are compounded by something internal. Before an organisation can read its environment clearly, it needs to know what it is actually exposed to — and that mapping is rarely as complete as it appears.
Organisations that operate internationally have developed structures of considerable complexity. Production sites, logistics corridors, supply contracts, joint ventures, regulatory dependencies, financial exposures, reputational relationships. Taken together, these elements form a map of interdependencies that no single person holds in full, and that is rarely examined as a whole.
This matters because vulnerability is relational. It does not live in the threat alone — it lives in the intersection between the threat and the organisation’s own characteristics. A disruption to a maritime corridor in the Gulf represents a very different reality depending on whether the organisation has alternatives or not, buffer stocks or not, substitute suppliers or not. The difference is not in the event. It is in the exposure.
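The point that the difference lies in the exposure, not the event, can be made concrete with a deliberately simplified sketch. The fields and the arithmetic are invented for illustration; real exposure mapping is far richer than this.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """One organisation's characteristics relative to a single corridor or supplier."""
    has_alternative_route: bool
    buffer_stock_days: int
    has_substitute_supplier: bool

def disruption_impact(event_duration_days: int, exp: Exposure) -> int:
    """Days of interruption actually felt by the organisation.

    The event is identical in both cases below; what varies is the
    organisation's own characteristics. Vulnerability is relational.
    """
    if exp.has_alternative_route or exp.has_substitute_supplier:
        return 0  # the disruption barely registers
    # Otherwise buffer stocks absorb part of the closure.
    return max(0, event_duration_days - exp.buffer_stock_days)

same_event = 30  # a 30-day corridor closure, identical for both organisations

resilient = Exposure(has_alternative_route=True, buffer_stock_days=45,
                     has_substitute_supplier=True)
fragile = Exposure(has_alternative_route=False, buffer_stock_days=5,
                   has_substitute_supplier=False)
```

Under these invented numbers, the resilient organisation feels nothing and the fragile one loses 25 effective days — same event, different exposure.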
And optimising for efficiency and optimising for resilience do not always point in the same direction. Growing internationally also means expanding the surface of exposure — sometimes faster than the capacity to grasp what that exposure actually means.
Many organisations carry vulnerabilities they have not fully mapped. Not through negligence, but because the interdependencies are genuinely complex. The connections between a sourcing decision in one region and an operational risk in another are not always traceable in advance. When the disruption arrives — as it did for many European organisations when the invasion of Ukraine suddenly reconfigured access to certain raw materials — the map of interdependencies becomes visible in a way it rarely was before.
Observing as a practice, not a moment
What connects these reflections is something less methodological than attitudinal.
Reading a geopolitical environment seriously — seeking the right level of explanation, staying alert to the gap between declared and actual, distinguishing weak signals from noise, being honest about what remains invisible — does not happen on demand. It builds over time, through sustained attention. It requires familiarity with the context, with the actors, with the history of the relationships. And it requires enough patience to stay with incomplete information without forcing a conclusion the situation does not yet support.
Research in cognitive science shows that the brain is not naturally well equipped for this. In uncertainty, it fills gaps. It constructs narratives. It gravitates toward what is familiar, available, consistent with what it already believes. These are not failures — they are structural features of human cognition facing complexity. Knowing this does not eliminate the tendency, but it can create enough distance to notice it when it operates.
There will always be surprises. That is not a failure of analysis. It is the nature of the terrain. The most one can honestly do is observe carefully, say clearly what is known and what is not, and remain genuinely open to being wrong. That is less satisfying than a method. But it is more honest.
