Start here: you have a blindspot
Not because you are careless. Not because your team is not smart. Because the view from inside any organization is always limited. The framework you use to understand the world cannot show you what lies outside it.
That gap — the space between what you can currently see and what is actually there — is your blindspot. The K* method is a way to find it before a competitor does, before a crisis makes it obvious, or before a decision you thought was safe turns out to have been made in the dark.
The most dangerous blindspots are not the ones you do not know about. They are the ones you have unconsciously decided not to look at — because looking would be uncomfortable, expensive, or inconvenient.
How to ask a good question
Always be clear about what topic, organization, or sector you are asking about. When in doubt, use the best available URL. Avoid acronyms. Keep your query very short — under three words is ideal. Less is more.
The map
The K* method places your organization on a map alongside key players in your space — the main competitors, the adjacent players in related fields, and the institutions that shape the landscape you operate in. Not an exhaustive directory. A diagnostic lens. The map has two directions.
Left to right: how far your reach extends. A small local organization sits on the left. A global institution sits on the right. We call this the X axis — Reach.
Bottom to top: how deeply you think before you act. An organization that reacts to whatever happens sits near the bottom. One with a careful, tested system for thinking through decisions before committing sits near the top. We call this the Y axis — Wisdom Architecture.
Add the X and Y scores together and you get the R+G score — Reach plus Generativity. Generativity means the capacity to keep producing genuinely new insight rather than recycling old analysis faster.
The R+G score reveals something neither axis can show alone: whether an organization has achieved both scale and depth simultaneously. Most only manage one.
The X score: Reach
Reach is not just size. It is verifiable, independently confirmed institutional presence. A self-reported claim of global reach does not score the same as a documented record of supranational clients.
| Score | What it means |
| --- | --- |
| 1 — 3 | Early stage. Limited evidence of deployment. Starting out. |
| 4 — 5 | Growing. One sector or geography. Some external validation. |
| 6 — 7 | Established. Multiple sectors or countries. Named clients or partners. |
| 8 — 9 | Supranational or deeply institutional. Named public references. Independent validation. |
| 10 | Standard-setting. Cited by regulators or policy bodies. Defines the category. |
The Y score: Wisdom Architecture
This is the less familiar of the two scores. Most competitive frameworks treat governance as a compliance question — relevant to risk management but irrelevant to capability. The K* method treats it as a capability question, because the governance architecture of a decision-support system determines what its analysis will be worth in ten years, not just today.
| Score | What it means |
| --- | --- |
| 1 — 3 | Transactional. Responds to queries. No governance layer. Output quality depends entirely on input quality. |
| 4 — 5 | Structured. Uses frameworks. Has human review. No written governance document. |
| 6 — 7 | Governed. Has written rules. Independent oversight. Structured disagreement built in. |
| 8 — 9 | Constitutional. Ratified charter. Active independent board. Future generations obligation. Named human contributors with documented traditions. |
| 10 | Sovereign. Verifiably gets wiser with use. Wisdom methodology peer-reviewed and published. |
Notice what is being measured at the top of the scale. Not just that an organization has smart analysts. But that it has built a system that cannot easily be pressured into telling its clients what they want to hear.
The three zones
| Zone | R+G score | What it means |
| --- | --- | --- |
| ◇ Boundary Intelligence | Below 13.5 | Operating at the edge of established categories. Real capability, but not yet at the threshold of something genuinely new. |
| ◇◇ Generative Depth | 13.5 to 16.5 | Building across both axes at the same time. The gap between here and the next zone is closing. |
| ◇◇◇ Uncontested Ground | Above 16.5 | The Clearing. A new category. Rivals cannot reach this space by optimizing within their current model — they would have to change what they are. |
The Clearing is not just a higher score. It is a different kind of space. An organization in the Clearing is not ahead of rivals on the same track — it is on different ground. Rivals cannot follow by doing more of what they already do.
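The zone arithmetic above is simple enough to sketch in a few lines. This is an illustrative helper, not part of the K* method itself — the function name, score validation, and the decision to treat 16.5 as the upper edge of Generative Depth are assumptions made for the sketch:

```python
def classify_zone(x_score: float, y_score: float) -> str:
    """Classify an organization by its combined R+G score.

    x_score: Reach (X axis), scored 1-10.
    y_score: Wisdom Architecture (Y axis), scored 1-10.
    Thresholds follow the zone table: below 13.5 is Boundary
    Intelligence, 13.5 to 16.5 is Generative Depth, above 16.5
    is Uncontested Ground (the Clearing).
    """
    for score in (x_score, y_score):
        if not 1 <= score <= 10:
            raise ValueError("each axis is scored from 1 to 10")
    rg = x_score + y_score
    if rg < 13.5:
        return "Boundary Intelligence"
    if rg <= 16.5:
        return "Generative Depth"
    return "Uncontested Ground"

# A large consultancy: huge reach, limited governance.
print(classify_zone(9.5, 3.0))  # Boundary Intelligence (R+G = 12.5)
# Scale and depth together clear the 16.5 threshold.
print(classify_zone(9.0, 8.0))  # Uncontested Ground (R+G = 17.0)
```

The worked examples make the point of the composite score concrete: a 9.5 on Reach alone still lands in the lowest zone, because the R+G score rewards only the combination of scale and depth.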
What blindspots look like in practice
A blindspot is structural, not accidental. It is built into the business model, the incentive structure, or the framework an organization uses to make sense of the world. Here is what that looks like across four types of player:
Large management consultancies
McKinsey, BCG, Deloitte, and their peers have X scores of 9.0 to 9.5 — enormous global reach. Their Y scores are limited because they operate post-decisionally by design. They are engaged after a problem has been identified, after the decision frame is set. Their revenue depends on this timing. A consultancy that systematically prevented the crises it would later be paid to analyze would earn less. That is the structural blindspot.
Research subscription services
Gartner and Forrester have high X scores — their research reaches hundreds of thousands of subscribers. Their Y scores are moderate. They produce strong analysis of what has already happened and what current trends suggest. They are calibrated to recognized patterns and declared events. The pre-declaration layer — where threats are forming before they have names — is outside their scope by design. That is the structural blindspot.
AI and data platforms
IBM watsonx defines value as automation of existing processes. The structural blindspot is the wisdom layer — where AI needs to augment human judgment in genuinely novel situations, not just speed up pattern-matching on historical data.
Palantir integrates large datasets and builds operational dashboards, primarily for government and defense. Its structural blindspot is governance: no constitutional human-AI boundary, and no mechanism for the communities whose data it processes to have a voice in how it is used.
Community innovation platforms
Wazoku and IdeaScale gather structured human input and channel it into organizational idea pipelines. They have genuine collective intelligence but it is organized around problems the organization has already named.
The structural blindspot is the pre-problem layer — the point before a challenge has been identified as a challenge. Named problems can be crowdsourced. Unnamed ones cannot.
Your organization
The most useful part of a K* chart is not seeing where your rivals are. It is seeing where your own structural blindspot is.
A genuine blindspot is uncomfortable to read. If the leadership of your organization could read your blindspot analysis in a board meeting and comfortably agree with it, it is probably not the real blindspot.
The real blindspot names something the organization knows but has chosen not to address, or something its incentive structure prevents it from seeing clearly.
Three kinds of intelligence in the analysis
A K* chart draws on three separate types of intelligence. Understanding the difference between them matters, because each type catches signals the other two miss.
Open source intelligence (OSINT)
Systematic scanning of publicly available signals — news, policy documents, academic research, corporate filings, community voices — synthesized into a structured assessment.
The discipline here is governance: who decides what counts as a signal, what analytical framework is applied, and how competing interpretations are handled. An ungoverned OSINT process produces confirmation bias at scale.
Human intelligence (HUMINT)
Named people with documented intellectual traditions contributing their judgment — not anonymous analysts, not averaged crowd opinion.
The value is that named contributors have skin in the game. Their track records are verifiable. Their frameworks are documentable.
When Michael W. Wright's 28 validated predictions from 1984 to 2025 shape an analysis, that is a different kind of human intelligence than a survey of unnamed market participants.
Pre-linguistic signals
The signals that matter most often do not yet have names. A threat forming before institutions have a framework for it. A community carrying an experience that has not yet become a statistic.
A pattern visible only at the edges of several disciplines simultaneously. Detecting signals at this layer requires a different architecture than OSINT or HUMINT alone — it requires sitting with ambiguity long enough to let a pattern emerge before forcing it into an existing category.
Why the analysis asks about governance
The Y score asks a question most competitive assessments skip: what happens when the analytical system produces a conclusion its most important client does not want to hear?
There are three possible answers. It publishes the finding. It softens it. It suppresses it. Each answer reflects a governance architecture — or the absence of one.
A platform with no constitutional framework for human-AI decisions will, as AI capabilities increase, face escalating pressure to let the AI decide more.
A platform with no independent oversight will, under commercial pressure, face escalating pressure to shade its conclusions toward what clients want.
A platform with no obligation to future generations will systematically discount long-horizon risks in favor of immediately legible short-term analysis.
These are not hypothetical risks. They are the predictable structural dynamics of ungoverned decision-support systems. The Y axis scores them because they determine what the analysis is worth in ten years.
One question cuts through the Y score directly: if your analytical system produced a finding that would cost your most important client significant money or status — would it be published, softened, or suppressed?
Your honest answer to that question is your Y score.
Three depths of analysis
Not every question requires the same depth of analysis. The K* method distinguishes three kinds of question, corresponding to three levels of complexity:
| Level | What it is for |
| --- | --- |
| Simple Explorations | Questions with clear causal chains and known categories. The diagnostic layer — what domain is this decision actually in? Where are the obvious gaps? The right starting point for most questions. |
| Complicated Navigations | Questions with knowable causal chains that need rigorous mapping. The Category Genesis Engine finds unnamed categories before anyone else has language for them. For decisions that are complex but ultimately solvable through expertise. |
| Complex Strategies | Questions where causal chains cannot be fully known in advance. Genuinely novel territory. The pre-decision layer at its deepest — where existing frameworks are themselves part of the problem. |
The discipline is starting at the right level. Beginning with Complex Strategies when you are actually in Simple territory wastes resources and produces over-engineered analysis.
Beginning with Simple Explorations on a genuinely complex problem produces false confidence. The first question is always: which kind of question is this?
Three questions worth sitting with
In front of any K* chart — including one about your own organization — three questions consistently reveal the most:
• Where is the unoccupied space? If a zone on the map is empty, is that because nobody has found it yet, or because nobody has been able to reach it? What would it actually take for your organization to get there?
• What does our blindspot tell us about our incentive structure? The structural blindspot named in the analysis is not random. It reflects what the organization is rewarded for doing and what it is quietly discouraged from questioning. What does that tell you about what needs to change?
• Who is not on this map? Which communities, voices, or perspectives are absent from the landscape entirely — and what signals are they carrying that the map does not yet show? The absence of a voice from the analysis is itself an analytical finding.
The one question under everything
What are you about to commit to that you will not be able to revisit — and do you understand its consequences for the people who will live with it longest?
That is the pre-decision question. Not "what should we do?" — that question comes later.
The prior question is whether the frame you are using to make the decision is the right frame, whether the people who will be most affected have a voice in how the problem is defined, and whether the analytical tools you are using were designed for this kind of question or for a different one.
The map shows you where you are. The blindspot analysis shows you what you cannot currently see from there. The three levels of depth give you the right analytical tool for the kind of question you are actually asking.
What you do with all of that is your decision. That is the point.
www.preempt.life · April 2026