---
title: Methodology — Noah Predict
description: How Noah Predict measures what is about to change — the pipeline, the physics layer, the doctrine and the limits. A public disclosure of methodology.
url: https://noahpredict.com/methodology/
source: Noah Predict — Worldwide AI Media Ltd
last_updated: 2026-04-17
licence: CC-BY-4.0-with-attribution-and-no-redistribution
---

 METHODOLOGY · PUBLIC DISCLOSURE


# How Noah measures what’s about to change.



Evidence before narrative. Equal-source vote. Physics before story.


“Most risk systems describe the past. Noah measures the present becoming the future.”




- **Sources monitored:** 1.6m, refreshed every 15 minutes
- **Signal dimensions:** 5 (velocity, trajectory, divergence, volatility, coupling)
- **Narrative lead:** 6.4h median, ahead of broker consensus
- **Provenance:** 100%, every conclusion traceable to source



 The problem


## Consensus arrives late. By design.




Traditional risk assessment runs on three rails — broker consensus, country-risk reports, and expert surveys. Each is useful. Each is slow.


By the time a view appears in a quarterly briefing, the underlying signal has been detectable in the information system for days. Survey-based scoring measures what analysts *think today*, not where conditions are *heading*. Confirmation bias compounds the lag — analysts are slower to update a prior than the system is to change around them.


Noah is not a replacement for any of those sources. It is the layer underneath — the one that runs continuously, measures rate of change, and answers a different question.


Traditional systems answer *how risky is this place?* Noah answers *how risky will this risk become?*




 The pipeline


## Twenty thousand candidates become three hundred signals.




Every Noah output begins as roughly twenty thousand candidate corpus items and ends as approximately three hundred structured signals. In between is a five-stage pipeline, run in the same order on every question.


 01 · INGEST


**Continuous collection**



Across 1.6 million sources — wire copy, regulator filings, academic pre-prints, official advisories, sanctions registries, court listings, parliamentary records, domain-specific feeds.


 02 · CLASSIFY


**Per-paragraph, not per-article**



A single report can carry ten distinct signals. Headline-level analysis loses them. Entity, sentiment and event type are tagged at paragraph granularity.
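Paragraph-granularity tagging can be sketched as follows. This is a minimal illustration, not Noah's implementation: the dataclass fields and the toy keyword lexicons stand in for the model-based entity, sentiment and event classifier, which is not public.

```python
from dataclasses import dataclass, field

@dataclass
class ParagraphSignal:
    """One classified paragraph. An article yields several of these, not one."""
    source_id: str
    paragraph_index: int
    entities: list[str] = field(default_factory=list)
    sentiment: float = 0.0          # signed score in [-1, +1]
    event_type: str = "unclassified"

# Toy lexicons standing in for the real model-based classifier.
NEGATIVE = {"closure", "strike", "sanction"}
EVENTS = {"closure": "infrastructure", "strike": "labour", "sanction": "regulatory"}

def classify_article(source_id: str, paragraphs: list[str]) -> list[ParagraphSignal]:
    """Tag each paragraph independently instead of scoring the headline once."""
    signals = []
    for i, text in enumerate(paragraphs):
        words = [w.strip(".,").lower() for w in text.split()]
        hits = [w for w in words if w in NEGATIVE]
        signals.append(ParagraphSignal(
            source_id=source_id,
            paragraph_index=i,
            entities=[w.strip(".,") for w in text.split() if w[:1].isupper()],
            sentiment=-len(hits) / max(len(words), 1),
            event_type=EVENTS.get(hits[0], "unclassified") if hits else "unclassified",
        ))
    return signals

article = ["The Port of Karachi announced a closure.", "Traffic was otherwise normal."]
for sig in classify_article("wire-001", article):
    print(sig.paragraph_index, sig.event_type, round(sig.sentiment, 2))
```

The point of the structure is that the first paragraph yields an `infrastructure` event with negative sentiment while the second yields nothing, instead of both being flattened into one article-level score.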


 03 · SCORE


**The physics layer**



Velocity, trajectory, divergence, volatility and coupling computed against rolling baselines. Signals are scored, not counted.


 04 · REDUCE


**Progressive narrowing**



The engine walks the reduction funnel three or four times. The user sees one structured answer. Pass count is plumbing, never labelled in output.


 05 · RENDER


**Persona-locked output**



An underwriter and a portfolio analyst get different framings of the same signal. Both are correct. Both are traceable to source. Every bundle carries the doctrine, the pass log, the six external anchors and the full evidence trace.






 The mechanics


## Five measurements, applied continuously.




Signals are scored on five independent axes. None is sufficient on its own; together they describe the shape and direction of change.


 Velocity


Rate of change in mentions, severity and actor involvement. The first derivative of attention.
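The first-derivative framing admits a compact sketch. The rolling-mean baseline below is an assumption for illustration; the actual baseline construction is not disclosed.

```python
def velocity(counts: list[float], window: int = 7) -> float:
    """Rate of change of attention: latest count minus the rolling-baseline mean.

    `counts` is a per-interval mention series (e.g. daily); the baseline is the
    mean of the preceding `window` intervals. Positive velocity means attention
    is rising faster than its recent norm.
    """
    if len(counts) <= window:
        raise ValueError("need more history than the baseline window")
    baseline = sum(counts[-window - 1:-1]) / window
    return counts[-1] - baseline

# A flat series has zero velocity; a spike has positive velocity.
print(velocity([10, 10, 10, 10, 10, 10, 10, 10]))   # 0.0
print(velocity([10, 10, 10, 10, 10, 10, 10, 40]))   # 30.0
```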


 Trajectory


Where velocity is heading over a defined window. Positive trajectory is an accelerating narrative; negative is one settling down.


 Divergence


The gap between what official sources say and what the broader system is saying. Large divergence is a strong predictive signal.
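One simple way to operationalise that gap, assuming sentiment scores on a common signed scale (an illustration, not the disclosed formula):

```python
def divergence(official: list[float], broader: list[float]) -> float:
    """Absolute gap between mean official-source sentiment and mean
    system-wide sentiment over the same window. A large gap (officials
    calm, the wider system agitated) is treated as predictive."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(official) - mean(broader))

# Officials near-neutral, the broader system strongly negative:
print(round(divergence([0.1, 0.0, 0.1], [-0.6, -0.5, -0.7]), 2))  # 0.67
```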


 Volatility


Noise around the signal. High-volatility signals are noted but weighted lower until they stabilise.


 Coupling


The degree to which two independent signal streams move together. When port closure and road closure couple at Jaccard 1.0, cargo movement has stopped.
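The Jaccard figure quoted above can be computed directly. Representing each stream as a set of active days is an assumption for illustration; any shared index (entities, intervals, regions) works the same way.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard index of two signal streams' active sets:
    |A ∩ B| / |A ∪ B|. 1.0 means the streams move in lockstep."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Days on which each closure signal was active (hypothetical dates):
port_days = {"2026-04-01", "2026-04-02", "2026-04-03"}
road_days = {"2026-04-01", "2026-04-02", "2026-04-03"}
print(jaccard(port_days, road_days))  # 1.0 — fully coupled
```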






 The doctrine


## Many passes. One answer.




A twenty-thousand-item corpus cannot be reduced to three hundred signals in a single pass without losing precisely the signals you wanted to keep. The arithmetic is unforgiving.


The engine runs multiple passes — typically three or four — each one narrowing the search on what the previous pass surfaced. The first pass is broad. The last is focused. The user never sees the pass count because it is plumbing, not product.


This is the single largest difference between Noah and lighter retrieval-and-summarise systems. Those systems read 300 items and produce an answer. Noah walks the full 20,000 and reduces.
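A hypothetical version of that funnel, using a geometric keep-schedule as an assumed narrowing rule. In the real engine each pass would re-score against what the previous pass surfaced; here the same scorer is reused, so only the shape of the reduction is shown.

```python
def reduce_corpus(items, score, passes=4, target=300):
    """Progressively narrow a candidate corpus over several scoring passes.

    Each pass keeps a geometrically shrinking top slice, so the per-pass cut
    stays gentle even though the overall reduction (20,000 to 300) is severe.
    `score` is any per-item relevance scorer.
    """
    n = len(items)
    for p in range(1, passes + 1):
        # Interpolate the keep-count geometrically from n down to target.
        keep = max(target, round(n * (target / n) ** (p / passes)))
        items = sorted(items, key=score, reverse=True)[:keep]
    return items

corpus = list(range(20_000))                 # stand-in candidate items
signals = reduce_corpus(corpus, score=lambda x: x)
print(len(signals))                          # 300
```

With four passes the keep-counts run roughly 7,000 → 2,450 → 860 → 300, so no single pass has to make the brutal 20,000-to-300 cut on its own.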




 The anchors


## Every bundle carries six independent benchmarks.




For country-level risk outputs, Noah attaches an `_external_anchors` block to every bundle: six independent, publicly respected references that a reviewer can cross-check Noah’s read against. Broad agreement is strong confirmation; disagreement is a prompt to read Noah’s blind spot, not to dismiss the output.

- **FCDO:** UK Foreign, Commonwealth and Development Office travel advice. Level, regions advised against, summary text.
- **US State Dept:** Level 1–4 travel advisories with the nine indicator codes: Crime, Terrorism, Civil Unrest, Health, Natural Disaster, Time-limited Event, Kidnapping, Wrongful Detention, Other.
- **Global Peace Index:** IEP annual ranking 1–163 with overall score and three domain sub-scores: ongoing conflict, societal safety, militarisation.
- **World Bank WGI:** Worldwide Governance Indicators on six axes: political stability, rule of law, control of corruption, government effectiveness, regulatory quality, voice and accountability.
- **OFAC & UK HMT:** Sanctions registries via Risk Atlas. Sovereign-level designations, named entity designations, lists matched, last update.
- **ACLED:** Armed Conflict Location and Event Data. 30, 90 and 365-day event counts, fatalities, top actors and top admin-1 regions. Activates when the API key is enrolled.
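The shape of such a block might look like the sketch below. Every field name and value is hypothetical, invented here to show structure only; it is not Noah's actual schema or real data.

```python
# Hypothetical shape of an anchors block — names and values are illustrative.
external_anchors = {
    "fcdo": {"level": "advise-against-all-but-essential",
             "regions_advised_against": ["..."],
             "summary": "..."},
    "us_state_dept": {"level": 3, "indicators": ["C", "T", "U"]},
    "global_peace_index": {"rank": 120, "score": 2.4,
                           "domains": {"ongoing_conflict": 2.9,
                                       "societal_safety": 2.3,
                                       "militarisation": 1.9}},
    "world_bank_wgi": {"political_stability": -0.8, "rule_of_law": -0.4},
    "ofac_hmt": {"sovereign_designation": False, "entities_matched": 4},
    "acled": {"events_90d": 210, "fatalities_90d": 35},
}

# One entry per benchmark, so a reviewer can diff Noah's read against each.
print(sorted(external_anchors))
```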



For insurance-underwriting outputs, Noah produces the structure of a Marsh-style World Risk Review — WRR-adjacent peril tables, peer comparison, district-level verdicts — without licensing Marsh WRR data. Rate-band suggestions are Noah-derived, not Marsh-sourced. The phrase “Marsh-style” denotes the shape of the output, not the data source.


This is also why Noah Labs ranks capabilities by benchmark strength rather than hiding the weaker ones. *Benchmark-grade*, *exploratory*, *bespoke* — each label reflects whether reliable comparables exist for the question.




 The empirical test


## The Polymarket 30% drop.




The equal-vote doctrine is unusual. It is also testable.


In internal testing, we measured Noah’s predictive accuracy on resolved Polymarket markets with and without tier-one source weighting. Upweighting tier-one sources (Financial Times and equivalents) reduced Noah’s accuracy by **30%**, as measured by Brier score.


The reason is mechanical: tier-one prestige correlates with *lateness* and *consensus-alignment*, and both destroy forward-predictive signal. By the time a sentiment is comfortable enough to print in a tier-one publication, the markets have already moved. Noah’s job is to measure the sentiment before the market moves. Equal-source vote is the arrangement that makes that job tractable.
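The Brier score itself is standard: the mean squared error between probability forecasts and binary resolutions, with lower being better (0 is perfect; a constant 0.5 forecast scores 0.25). A sketch with invented forecast numbers, not Noah's data:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and binary
    market resolutions. Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecast sets against the same resolved markets:
resolved = [1, 0, 1, 1, 0]
equal_vote = [0.8, 0.2, 0.7, 0.9, 0.3]
tier_one_weighted = [0.6, 0.4, 0.5, 0.6, 0.5]

print(brier_score(equal_vote, resolved))          # 0.054
print(brier_score(tier_one_weighted, resolved))   # 0.196 — worse
```

The second set illustrates the failure mode described above: tier-one-weighted forecasts drift toward consensus-shaped 0.5s, which a proper scoring rule punishes.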


This is the single load-bearing empirical fact behind the doctrine. It travels with every bundle as the `polymarket_attestation` field.




 What we screen out


## Equal vote does not mean no screen.




The most common objection to the equal-vote doctrine is about manipulation: what about fake news, propaganda, or AI-generated spam? The answer has two parts.




**Removed before ingestion**



- AI-generated content farms identified by fingerprinting.

- Documented state-propaganda outlets flagged by independent media-monitoring sources.

- Sources that have failed basic factuality checks across multiple documented incidents.





**Retained with equal vote**



- Low-prestige but legitimate regional and local publications.

- Non-English-language sources, translated.

- Blogs, Substacks, government readouts, press releases, trade publications.





The rule: synthetic amplification is filtered, because it represents manufactured rather than organic sentiment. Legitimate low-prestige sources are retained, because they are the measurement surface — they are where sentiment arrives before it is compressed into tier-one narrative. The specific blocklist is commercial; the screening categories are disclosed above.


There is a second framing worth naming plainly: Noah measures *sentiment that moves markets, decisions and votes*, not truth claims. A narrative that millions of actors believe is a real sentiment regardless of whether the originating story was accurate. If it moves behaviour, it is in scope. This is the measurement frame of narrative economics and reflexivity theory, applied to risk.




 The five rules


## The internal constitution, made public.




Every Noah output is produced under the same set of doctrinal rules. They are not stylistic; they materially change what the engine surfaces and how it weights it.


 Equal-source vote.


A wire service, a court listing, an academic pre-print and a parliamentary transcript each count once. No tier-one bias. Weight is earned by corroboration, not prestige — and we can prove that empirically.
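Equal vote with corroboration-earned weight can be sketched as follows; the source names and claim labels are invented for illustration.

```python
from collections import Counter

def corroboration_weight(claims: list[tuple[str, str]]) -> Counter:
    """Equal-source vote: each (source, claim) pair counts once, regardless
    of the source's prestige. A claim's weight is simply how many distinct
    sources independently carry it."""
    distinct = set(claims)               # dedupe repeats from a single source
    return Counter(claim for _, claim in distinct)

claims = [
    ("ft.com", "port-closed"),
    ("ft.com", "port-closed"),           # repeat from one source: no extra vote
    ("local-blog", "port-closed"),
    ("court-listing", "port-closed"),
    ("wire", "minister-resigns"),
]
weights = corroboration_weight(claims)
# port-closed carries 3 votes; minister-resigns carries 1.
print(weights["port-closed"], weights["minister-resigns"])
```

Prestige never enters the function; only the count of independent sources does.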


 Upstream is king.


A tier-one article is a compression of hundreds of upstream sources. By the time a story appears at the top, the signal is already filtered and days-to-weeks behind the underlying events. Noah reads the inputs the journalist saw, before the filter.


 Noise is substrate, not pollution.


A sentiment expressed by many low-prestige sources is a real sentiment. Conventional tools filter it out; Noah measures it. The 2016 US election is the paradigm case: every tier-one institution called it wrong because they filtered signal as “fringe”.


 Per-paragraph sentiment.


An 800-word article typically contains 6–12 structured signal units, not one. Legacy lexical sentiment flattens to a single article-level score at roughly 65% accuracy. Noah reads every paragraph with a language model at near-100% accuracy on the classifications it measures.


 Contrarian read and blind spot, mandatory.


Every Noah output must name the strongest case against its own headline, and the specific gap in the corpus that might be wrong. “There is no contrarian case” is a validator rejection. Confidence without self-doubt is a failure mode.






 The limits


## What Noah is not. What Noah cannot answer.




A system that tells you what it is not reliable for is more reliable than one that doesn’t. These are the lines Noah will not cross.




**Noah is not**



- A chatbot. It is a structured-output system with a conversational surface.

- An analytics dashboard. It produces written assessments, not query-only visualisations.

- A news aggregator. Its output is analysis of the aggregate, not a repackaged feed.





**Noah cannot reliably answer**



- Questions that depend on private or proprietary data — board decisions, undisclosed financial positions, closed intelligence.

- Deterministic single-event predictions. Noah measures conditions under which an event becomes more or less likely.

- Regulated advice — financial, insurance, legal, medical. The output supports professional judgment; it does not substitute for it.







 The entity


## Built inside a newsroom-scale operation.






Noah Predict is a service of **Worldwide AI Media Ltd**, a London-based group company. Its sister service, **Noah Wire**, has operated at newsroom scale for several years — ingesting and structuring global information for publishers, with a joint venture with the **Financial Times**. Group-wide infrastructure processes more than six million sources a day.


That heritage matters. Before Noah Predict forecast anything, Noah Wire understood how information behaves — how stories form, how narratives propagate, how signals emerge before they consolidate into reported events. Noah Predict is that understanding, applied to risk.


We are not a broker, a rating agency, an investment house or a regulated adviser. We do not trade on the signals we produce. Our only product is the intelligence itself.





We are not analysing articles. We are measuring patterns across the information system.


 Free while in preview
 Read the methodology. Then *try the system*.


Google sign-in. Three free analyses a day. No card.


---

## Citation
Cite this page as: "Noah Predict, Methodology, Worldwide AI Media Ltd, https://noahpredict.com/methodology/, accessed [date]."
