Methodology

Every score on PredictionScout comes from the same framework, applied the same way, every time. This page explains exactly how we evaluate prediction market platforms — what we measure, how we weight it, and what it takes to earn each score. If you think we got something wrong, you can hold us to this.

Last updated: March 2026 (v1.0)

The Core Principle

Separate facts from opinions. Score both. Show your work.

Every platform evaluation has two layers:

  • Objective criteria — measurable, verifiable, no judgment required. Fees, withdrawal time, regulatory status. Either a platform has segregated customer funds or it doesn’t. Either it charges 3% or it charges 9%.
  • Subjective criteria — experience-based, requiring defined rubrics to stay consistent. UX quality, support responsiveness. Without rubrics, “good UX” is just a feeling. With rubrics, it’s a defensible judgment.

Both matter. Both get scored. Neither gets inflated because a platform is paying us an affiliate commission.

The Scoring System

We score each platform on a 10-point scale per category, then calculate a weighted composite score. Half-points are allowed (a platform can earn a 7.5, not just a 7 or 8).

Why 10 points? Granular enough to differentiate platforms meaningfully. Simple enough that you understand a 6 vs an 8 at a glance. And it translates cleanly to visual displays — a score badge, a comparison table, a bar chart.

Scoring Categories and Weights

| Category | Weight | Type | Why This Weight |
| --- | --- | --- | --- |
| Regulatory Status & Fund Safety | 20% | Objective | If your money isn’t safe, nothing else matters |
| Fees & Costs | 15% | Objective | Direct impact on every trade you make |
| Market Selection | 15% | Mixed | Can you trade what you came here for? |
| Liquidity & Execution | 15% | Tested | Can you actually trade at the prices shown? |
| Withdrawal Experience | 10% | Tested | The moment of truth — getting your money out |
| User Experience | 10% | Subjective | Matters most for beginners, less for veterans |
| Deposit & Funding Options | 5% | Objective | How you get money in |
| Customer Support | 5% | Tested | When things go wrong |
| Tax & Reporting Tools | 5% | Objective | An overlooked pain point that bites users at tax time |

Total: 100%. Regulatory status gets 20% because a platform that loses your money fails at the most fundamental level, regardless of how clean its interface is. Fees, market selection, and liquidity each get 15% because they directly determine whether you can profitably do what you came to do. Everything else fills out the remaining 35% based on real-world impact.
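
For readers who want the calculation made explicit, here is a minimal sketch of the weighted composite. The category keys and the example score card are our own illustrative shorthand, not identifiers PredictionScout publishes; only the weights come from the table above.

```python
# Category weights from the table above (must sum to 1.0).
WEIGHTS = {
    "regulatory": 0.20, "fees": 0.15, "markets": 0.15, "liquidity": 0.15,
    "withdrawal": 0.10, "ux": 0.10, "deposits": 0.05, "support": 0.05,
    "tax": 0.05,
}

def composite(scores: dict) -> float:
    """Weighted average of per-category scores (0-10, half-points allowed)."""
    total = sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)
    return round(total, 1)

# Illustrative score card for a hypothetical platform.
example = {
    "regulatory": 9, "fees": 7.5, "markets": 8, "liquidity": 7,
    "withdrawal": 8, "ux": 6.5, "deposits": 7, "support": 5, "tax": 6,
}
print(composite(example))  # 7.5
```

One consequence of the 20% regulatory weight: a one-point change in the regulatory score moves the composite by 0.2 points, twice the impact of a one-point change in customer support.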

Category Rubrics

These rubrics define exactly what a platform must do to earn each score range. Reviewers are not allowed to deviate from them without a documented reason and a methodology version update.

1. Regulatory Status & Fund Safety — 20%

This is where we ask: if the platform shuts down tomorrow, what happens to your money? The answer ranges from “you’re fully protected” to “you’re probably out of luck.”

| Score | Criteria |
| --- | --- |
| 9–10 | Regulated by a primary financial authority (CFTC, FCA). Segregated customer funds. Insurance or bonding protections in place. Independently audited. |
| 7–8 | Regulated through a secondary body or licensing arrangement. Customer funds held separately, but limited formal protections. |
| 5–6 | Operates under a regulatory gray area or specific exemption. Some fund protections, not fully guaranteed. |
| 3–4 | Minimal regulation. Offshore structure or jurisdiction-shopping. Limited recourse if the platform fails. |
| 1–2 | Unregulated. No fund protections. Significant counterparty risk. |

2. Fees & Costs — 15%

We measure total round-trip cost, not just the headline trading fee. Our standard test: deposit $100, buy a $50 position, sell or let it settle, withdraw the remaining balance. Every fee touched along that path counts.

Fees measured include: trading fees and spreads, withdrawal fees, deposit fees, inactivity fees, and settlement costs.

| Score | Criteria |
| --- | --- |
| 9–10 | Total round-trip cost under 2%. No hidden fees. Fee structure clearly documented and easy to calculate in advance. |
| 7–8 | Total round-trip cost 2–5%. Minor fees on some operations. Pricing is transparent even if not zero. |
| 5–6 | Total round-trip cost 5–8%. Some fees obscured or complex to calculate before you commit. |
| 3–4 | Total round-trip cost 8–12%. Hidden or confusing fee structures that make real cost hard to know upfront. |
| 1–2 | Total round-trip cost over 12%. Predatory or deliberately opaque fee model. |
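
The round-trip test reduces to simple arithmetic. The fee figures below are hypothetical, chosen only to show how a platform lands in a score band.

```python
def round_trip_cost_pct(deposit_fee: float, trading_fees: list,
                        withdrawal_fee: float, deposit: float = 100.0) -> float:
    """Total fees across the standard $100 test, as a percentage of the deposit."""
    total_fees = deposit_fee + sum(trading_fees) + withdrawal_fee
    return 100 * total_fees / deposit

# Hypothetical platform: free deposits, a 2% fee on a $50 buy and a $50 sell,
# and a flat $2 withdrawal fee.
cost = round_trip_cost_pct(deposit_fee=0.0, trading_fees=[1.0, 1.0],
                           withdrawal_fee=2.0)
print(f"{cost:.1f}%")  # 4.0% -> lands in the 7-8 band (2-5%)
```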

3. Market Selection — 15%

A platform with 500 markets you don’t care about is less useful than one with 50 markets you do. We score breadth of categories and depth within each, plus how actively new markets are added.

| Score | Criteria |
| --- | --- |
| 9–10 | 6+ categories (politics, finance, weather, sports, entertainment, crypto/tech). 200+ active markets. Regular new market creation. |
| 7–8 | 4–5 categories. 100–200 active markets. Steady market additions. |
| 5–6 | 2–3 categories. 50–100 active markets. Occasional new additions. |
| 3–4 | 1–2 categories. Under 50 active markets. Slow to add new ones. |
| 1–2 | Very limited selection. Markets rarely added. |

4. Liquidity & Execution — 15%

This is tested, not claimed. We place real trades and measure what actually happens at the order book — not what the platform says will happen.

| Score | Criteria |
| --- | --- |
| 9–10 | Can fill $500+ orders with less than 1% slippage across most markets. Tight bid-ask spreads under 3 cents. |
| 7–8 | Can fill $200–500 orders with minimal slippage. Spreads 3–5 cents on popular markets. |
| 5–6 | Can fill $100–200 orders. Noticeable slippage on medium-size trades. Spreads 5–10 cents. |
| 3–4 | Difficulty filling orders over $100. Wide spreads. Thin order books in most markets. |
| 1–2 | Effectively illiquid. Prices shown are not executable at any meaningful size. |
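
The slippage figures in the table follow the standard definition, sketched below with hypothetical prices.

```python
def slippage_pct(displayed_price: float, avg_fill_price: float) -> float:
    """Slippage as a percentage of the price shown in the order book."""
    return 100 * abs(avg_fill_price - displayed_price) / displayed_price

# Hypothetical fill: contract displayed at 62 cents, $500 order filled
# at an average of 62.4 cents.
print(round(slippage_pct(0.62, 0.624), 2))  # 0.65 -> under the 1% threshold
```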

5. Withdrawal Experience — 10%

Getting money in is easy. Getting money out is where platforms reveal their true character. We time every withdrawal from initiation to funds in our account.

| Score | Criteria |
| --- | --- |
| 9–10 | Funds received within 24 hours. Multiple withdrawal methods. No unexpected holds or verification hurdles at withdrawal. |
| 7–8 | Funds received within 2–3 business days. At least 2 withdrawal methods. Smooth process with no surprises. |
| 5–6 | Funds received within 5 business days. Limited withdrawal methods. Minor friction. |
| 3–4 | Over 5 business days. Additional verification requirements triggered at withdrawal. Complaints from users are common. |
| 1–2 | Unreliable. Delayed or stuck withdrawals. Reports of funds inaccessible for extended periods. |

6. User Experience — 10%

We score UX from the perspective of a competent adult new to prediction markets — not a developer, not a day trader, not someone who already knows what a limit order is.

| Score | Criteria |
| --- | --- |
| 9–10 | Clean interface. Functional mobile app. A complete beginner could find a market, understand the contract, and place a trade in under 5 minutes. |
| 7–8 | Good interface with minor friction. Mobile works. A beginner needs 10–15 minutes but gets there. |
| 5–6 | Functional but dated or cluttered. Weak mobile experience. Several confusion points for new users. |
| 3–4 | Difficult interface. No native mobile app. Significant learning curve even for motivated users. |
| 1–2 | Confusing or broken interface. Actively impedes trading. |

7. Deposit & Funding Options — 5%

How many ways can you get money onto the platform, and how quickly does it clear? We test the most common method available in the US.

| Score | Criteria |
| --- | --- |
| 9–10 | 4+ deposit methods (bank transfer, debit card, credit card, crypto, PayPal or similar). Funds clear within 24 hours for at least one method. No deposit fees. |
| 7–8 | 2–3 deposit methods. Funds clear within 1–3 business days. Low or no deposit fees. |
| 5–6 | 1–2 methods. Funds may take 3–5 days to clear. Minor deposit fees. |
| 3–4 | Limited methods. Slow clearing times. Deposit fees present. |
| 1–2 | Single method. Long clearing times. High deposit fees. |

8. Customer Support — 5%

We test support by submitting a real, non-trivial question — fee calculations, settlement timing, verification requirements. Not “how do I sign up.” We document response time and whether the answer was actually helpful.

| Score | Criteria |
| --- | --- |
| 9–10 | Live chat plus email. Response within 2 hours. Our test issue was resolved completely and correctly. |
| 7–8 | Email support. Response within 24 hours. Helpful and accurate resolution. |
| 5–6 | Email only. Response within 48 hours. Generic but adequate. |
| 3–4 | Slow responses (3+ days). Unhelpful or scripted replies that don’t address the actual question. |
| 1–2 | No response to our inquiry, or no visible support channels to begin with. |

9. Tax & Reporting Tools — 5%

Tax treatment of prediction market winnings is an unsettled area — no formal IRS guidance as of 2026. That makes it even more important that platforms give users clean records to work with.

| Score | Criteria |
| --- | --- |
| 9–10 | Provides 1099 or equivalent tax document. Downloadable transaction history in standard format (CSV). Tax guide available for users. |
| 7–8 | Provides tax documents. Transaction history available but requires some manual work to format. |
| 5–6 | Basic transaction history only. No formal tax documents. User must calculate gains and losses themselves. |
| 3–4 | Limited records. Difficult to reconstruct trade history for tax purposes. |
| 1–2 | No transaction export capability. Tax reporting effectively impossible without manual tracking. |

Our Testing Protocol

Every full platform review follows these steps in sequence. If a step can’t be completed — for example, a platform isn’t available in our state — we disclose it clearly in the review and mark the relevant score categories as “Based on Public Data” with a reduced confidence indicator.

  1. Account creation. Sign up and complete identity verification. We document the time from starting the application to receiving approval and being able to trade.
  2. Deposit $200 via the most common method available to US users (bank transfer or debit card). We document how long it takes for funds to be available for trading.
  3. Place 5 trades across different market categories at $20–50 each. We record the bid-ask spread on entry, whether our order filled at the displayed price, and any slippage experienced.
  4. Monitor positions for at least 2 weeks. We assess notification quality, position management tools, and whether the platform behaves the way it says it does during the holding period.
  5. Contact customer support with a specific, non-trivial question about fee calculations or settlement timing. We document the response time and whether the answer was accurate and useful.
  6. Withdraw $100 via the most common method. We time from withdrawal initiation to funds arriving in our bank account and document every fee deducted along the way.
  7. Calculate total cost across the entire test cycle — every fee paid from deposit through withdrawal.
  8. Review tax and reporting tools. Export the transaction history, check for 1099 or equivalent availability, and document what a user would need to do to file their taxes based on this platform alone.

Testing takes a minimum of 6 weeks per platform. We do not publish a review until the full protocol is complete.

Anti-Gaming Protections

The most common way review sites get corrupted: affiliate partners start paying more, rankings start changing. Here’s how we prevent that.

  1. Scores follow rubrics, not feelings. If a platform meets the criteria for a 7, it gets a 7 — regardless of how much it pays us in affiliate commissions. The rubric is the arbiter, not our relationship with the platform.
  2. Retesting on a fixed schedule. Major platforms are retested every 6 months. Score changes are published with a changelog and the date of testing.
  3. Commission transparency on every review. Each review discloses exactly what we earn: “We receive a $X CPA from this platform” or “We have no affiliate relationship with this platform.” No vague “we may be compensated” language.
  4. Methodology changes are versioned. If we adjust a category weight or rubric definition, we publish the change with our reasoning. You can see version history at the bottom of this page.
  5. Negative reviews stay published. A poor score doesn’t come down because a platform offers us a better deal. If anything, a platform trying to buy a score change is worth documenting.

“Best For” Rankings and Modified Weights

A single composite score doesn’t capture everything. A platform that’s great for election traders might be mediocre for beginners. Instead of one ranking that pretends to serve everyone, we publish multiple “Best For” lists with adjusted weights tailored to each use case.

| Ranking | What Changes |
| --- | --- |
| Best for Beginners | UX weight raised to 25%. Liquidity lowered to 10%. Ease of getting started matters more than execution quality at small sizes. |
| Best for Election Markets | Market Selection weighted by political market depth specifically. Platforms filtered by quality of political contract coverage. |
| Best for Low Fees | Fees weight raised to 30%. Everything else scales down proportionally. The cost-sensitive trader’s view. |
| Best for US Users | Filtered to CFTC-regulated platforms only. Regulatory score threshold applied as a filter, not just a weight. |
| Best for Crypto Users | Filtered to platforms accepting crypto deposits. Deposit options weight raised to reflect this preference. |
| Best for Safety | Regulatory weight raised to 30%. Withdrawal experience raised to 15%. For users where capital preservation is the top priority. |

Each “Best For” page explains the modified weights and why they fit that use case. The underlying platform data is the same — only the weighting changes.
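
The "scales down proportionally" rule can be made precise: pin the overridden weight, then multiply every remaining weight by the same factor so the total stays at 100%. The sketch below uses our own shorthand keys and the base weights from the category table.

```python
# Base weights from the category table (sum to 1.0).
BASE = {
    "regulatory": 0.20, "fees": 0.15, "markets": 0.15, "liquidity": 0.15,
    "withdrawal": 0.10, "ux": 0.10, "deposits": 0.05, "support": 0.05,
    "tax": 0.05,
}

def reweight(base: dict, overrides: dict) -> dict:
    """Pin the overridden categories; scale the rest so weights still sum to 1."""
    remaining = 1.0 - sum(overrides.values())
    base_remaining = sum(w for cat, w in base.items() if cat not in overrides)
    return {cat: overrides.get(cat, w * remaining / base_remaining)
            for cat, w in base.items()}

# "Best for Low Fees": fees pinned at 30%, everything else rescaled.
low_fees = reweight(BASE, {"fees": 0.30})
print(round(low_fees["regulatory"], 3))  # 0.20 * (0.70 / 0.85) ~ 0.165
```

With fees fixed at 30%, the other eight categories share the remaining 70% in their original proportions, so regulatory drops from 20% to about 16.5%.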

How Scores Appear in Reviews

In individual platform reviews, you’ll find:

  • An overall composite score displayed prominently
  • A category-by-category breakdown with the score and a brief explanation
  • A “What We Tested” section with the testing dates
  • The affiliate disclosure at the top of the page — not buried in a footer
  • A pros and cons section derived from the category scores, not from marketing materials

In comparison pages, you’ll find side-by-side category scores, a “best for…” callout for each platform, and quick-pick recommendations by use case.

Methodology Version History

| Version | Date | Changes |
| --- | --- | --- |
| v1.0 | March 2026 | Initial methodology published. 9 categories, weights as above. Testing protocol defined. |

Frequently Asked Questions

Do affiliate relationships affect your scores?
No. Scores are determined by rubric criteria, not by whether a platform pays us. Every review discloses the exact commission relationship. If a platform that pays us well earns a low score, that score gets published.

What if a platform improves or gets worse after you publish?
We retest major platforms every 6 months and update scores with a changelog. If a significant change happens between retesting cycles — a fee increase, a regulatory action, a withdrawal problem — we update the relevant section within one week and note the change.

What does “Based on Public Data” mean in a review?
It means we couldn’t complete that part of the testing protocol for that platform — usually because it wasn’t available in our state or required citizenship verification we couldn’t provide. Those score categories carry a lower confidence rating and we explain exactly what data we used instead.

Can I suggest a platform for review?
Yes. Use the contact form on the About page. We prioritize platforms based on US availability, regulatory status, and user volume. We don’t accept payment to expedite reviews.

How do I know when a score has been updated?
Each review shows a “Last tested” date. The methodology version history above tracks changes to the scoring framework itself. For significant score changes, we note the date and reason in the review’s changelog section.