Last updated: April 2026
Every SaaS platform in our directory is scored on a single 100-point framework. The same rubric is applied to every platform, by the same evaluation system, using the same types of data sources. This page explains exactly how that score is built, what goes into it, and how a platform can improve its score.
We publish our methodology in full because a ranking is only as trustworthy as the rubric behind it.
Every platform receives a score from 0 to 100, calculated as the sum of four equally weighted dimensions:
| Dimension | Range | What it measures |
| --- | --- | --- |
| Product Depth | 0–25 | How comprehensive and capable the platform is across network coverage, campaign workflows, integrations, intelligence layers, and AI |
| Customer Proof | 0–25 | Who uses the platform, what they achieved, and how verifiable the evidence is |
| Industry Recognition | 0–25 | External validation from platform partnerships, review sites, awards, analysts, and press |
| Revenue & Growth | 0–25 | Business scale, stability, funding, team size, and growth trajectory |
| Total | 0–100 | |
Each dimension is scored independently against fixed criteria, not relative to other platforms. A platform’s score reflects what it can prove, not where it sits in a ranking.
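In code terms, the arithmetic is deliberately simple. A minimal sketch, assuming illustrative field names rather than any internal schema:

```python
# Minimal sketch of the composite score; field names are illustrative,
# not taken from any internal schema.
DIMENSIONS = ("product_depth", "customer_proof", "industry_recognition", "revenue_growth")

def total_score(dimension_scores: dict[str, float]) -> float:
    """Sum four equally weighted dimensions, each clamped to 0-25."""
    return sum(
        min(max(dimension_scores.get(dim, 0.0), 0.0), 25.0)
        for dim in DIMENSIONS
    )

# Example: 22 + 18 + 15 + 12 = 67 out of 100
print(total_score({
    "product_depth": 22,
    "customer_proof": 18,
    "industry_recognition": 15,
    "revenue_growth": 12,
}))
```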
No platform can pay to be listed, pay to improve its score, or sponsor its ranking.
Product Depth (0–25 points) measures how comprehensive and capable the platform is.
This is not a judgement on visual design or brand positioning. It is a structural assessment of what the product actually does: how many social networks it supports, how much of the influencer marketing workflow it covers, how deep its intelligence capabilities go, how well it connects to the buyer’s existing tech stack, and whether AI is used meaningfully.
Product Depth is scored across five sub-dimensions, each worth up to 5 points:
| Sub-dimension | Range | What it measures |
| --- | --- | --- |
| Social Network Coverage | 0–5 | Which social platforms are supported and how deep that support goes |
| Campaign Lifecycle Coverage | 0–5 | How much of the workflow the platform covers from discovery to payments |
| Intelligence & Data Layers | 0–5 | Social listening, audience analysis, benchmarking, trend detection, sentiment, and fraud detection |
| Integration Ecosystem | 0–5 | Native integrations, API access, webhooks, Zapier, Make, and stack connectivity |
| AI Capabilities | 0–5 | Verifiable AI features across creator matching, reporting, analysis, brief generation, and optimisation |
| Score | Description |
| --- | --- |
| 20–25 | Broad, enterprise-grade platform with strong network coverage, full lifecycle workflows, deep intelligence, strong integrations, and meaningful AI capabilities |
| 15–19 | Strong platform with broad functionality, but some limitations in integrations, AI, intelligence, or workflow coverage |
| 10–14 | Capable platform with solid core features, but narrower coverage or fewer advanced capabilities |
| 5–9 | Point solution with limited workflow, network, integration, or intelligence depth |
| 0–4 | Little or no verifiable product capability data available |
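A sketch of how the sub-dimension sums map onto the bands above, with illustrative names and labels condensed from the table:

```python
# Sketch: sum five 0-5 sub-dimension scores into a 0-25 Product Depth score,
# then map it to one of the bands in the table above. Labels are condensed;
# names are illustrative, not an internal schema.
PRODUCT_DEPTH_BANDS = [
    (20, "Broad, enterprise-grade platform"),
    (15, "Strong platform with some limitations"),
    (10, "Capable platform, narrower coverage"),
    (5,  "Point solution"),
    (0,  "Little or no verifiable capability data"),
]

def product_depth(sub_scores: list[float]) -> float:
    """Five sub-dimensions, each clamped to 0-5, summed to 0-25."""
    assert len(sub_scores) == 5
    return sum(min(max(s, 0.0), 5.0) for s in sub_scores)

def band(score: float) -> str:
    return next(label for floor, label in PRODUCT_DEPTH_BANDS if score >= floor)

# Example: 5 + 4 + 3 + 4 + 5 = 21 -> enterprise-grade band
score = product_depth([5, 4, 3, 4, 5])
print(score, "->", band(score))
```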
We look for verifiable product evidence, including:
We give more weight to specific, verifiable capabilities than to vague product language. A platform that clearly documents “AI creator matching based on audience fit and campaign goals” will score higher than a platform that only says it uses “AI-powered influencer marketing.”
Customer Proof measures who uses the platform and what outcomes they achieved. This is the SaaS equivalent of client impact.
We look at named customers, customer scale, case study depth, quantitative results, industry coverage, and whether the evidence is tied to specific customer outcomes.
| Score | Description |
| --- | --- |
| 20–25 | 10+ named enterprise customers, 5+ detailed case studies with specific quantitative results, 1,000+ customers, and diverse industry coverage |
| 15–19 | 5–9 named enterprise customers, 3–4 case studies with quantitative results, customer count in the hundreds, and several industries represented |
| 10–14 | Named customers present, mostly mid-market, with 1–2 case studies and some quantitative data |
| 5–9 | Few named customers, case studies exist but lack quantitative results, and customer count is not disclosed |
| 0–4 | No named customers, no case studies, or no verifiable adoption data |
We look for:
We weight named, specific, customer-level evidence more heavily than broad aggregate claims. A platform that can show what it delivered for a named customer will outscore a platform that claims large numbers without context.
Industry Recognition measures external validation from sources the platform does not control.
A platform’s own marketing is useful context, but it is not enough on its own. We look for independent signals such as platform partnerships, review site presence, analyst mentions, awards, and trade press.
| Score | Description |
| --- | --- |
| 20–25 | Official partner status with 3+ major platforms, G2 rating of 4.5+ with 200+ reviews, analyst recognition, multiple awards, and regular trade press coverage |
| 15–19 | Official partner status with 1–2 major platforms, G2 rating of 4.0+ with 100+ reviews, some awards, and occasional trade press |
| 10–14 | G2 or Capterra rating of 3.5+ with 50+ reviews, limited awards or press, and no major platform partnerships |
| 5–9 | Listed on G2 or Capterra with fewer than 50 reviews, no platform partnerships, and limited external validation |
| 0–4 | No review presence, no partnerships, no press coverage, or no external recognition found |
We look for:
We treat third-party validation as stronger than self-published claims. Review volume also matters: a 4.8 rating from 12 reviews is not the same signal as a 4.6 rating from 500 reviews.
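One common way to operationalise that intuition is a Bayesian average that shrinks low-volume ratings toward a neutral prior. This is a sketch of the general technique, not necessarily our exact formula:

```python
# Illustrative Bayesian average: shrink a review-site rating toward a
# neutral prior when review volume is low. One common technique for the
# intuition above, not necessarily our exact formula.

def adjusted_rating(rating: float, n_reviews: int,
                    prior: float = 3.5, prior_weight: int = 50) -> float:
    """Weighted blend of the observed rating and a neutral prior.

    prior_weight acts like a number of 'phantom' reviews at the prior,
    so small samples move the rating less.
    """
    return (rating * n_reviews + prior * prior_weight) / (n_reviews + prior_weight)

print(adjusted_rating(4.8, 12))   # ~3.75 -- 12 reviews barely move the prior
print(adjusted_rating(4.6, 500))  # ~4.50 -- 500 reviews dominate
```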
Revenue & Growth measures business scale, stability, and trajectory.
For a SaaS buyer, this dimension answers a practical question: is this platform likely to keep improving, to keep supporting customers, and to still exist in two years?
A good platform is not always a large platform. But scale, funding, team size, time in market, and growth signals are meaningful proxies for reliability and long-term viability.
| Score | Description |
| --- | --- |
| 20–25 | 500+ employees, 8+ years in market, $50M+ total funding or demonstrated profitable scale, global office footprint, and strong growth signals |
| 15–19 | 100–500 employees, 5–8 years in market, $10M–$50M funding, multi-market presence, and stable growth indicators |
| 10–14 | 25–100 employees, 3–5 years in market, seed to Series A funding, and single or few-market presence |
| 5–9 | 10–25 employees, 1–3 years in market, early funding stage, and single-market focus |
| 0–4 | Fewer than 10 employees or no business scale data available |
We look for:
We do not assume that the biggest company is automatically the best product. Revenue & Growth is one dimension among four, and it is balanced against product capability, customer proof, and external recognition.
Every score is built from data we collect, structure, and verify. We do not rely on platform self-submission for the inputs that drive scoring.
We use sources such as:
We may also use:
We do not use paid placements, sponsored entries, or self-rated scorecards as scoring inputs. No platform can pay to be added to the directory or to influence its score.
Every platform in the directory passes through the same multi-stage process before being scored.
We collect core identity and positioning data from the platform homepage, including name, tagline, customer logos, customer count, free trial availability, demo availability, and creator database size.
We review product and feature pages to identify supported networks, lifecycle stages, core features, AI capabilities, intelligence features, analytics capabilities, and workflow coverage.
We collect buyer-facing pricing data where available, including pricing model, pricing transparency, starting price, plan names, free trial availability, free tier availability, enterprise plan availability, and demo availability.
Pricing is collected for buyer usefulness, but it is not included in the score.
We review integrations pages, partner pages, API documentation, Zapier, and Make to identify native integrations, integration categories, API availability, webhooks, and automation ecosystem support.
We collect named customers, customer logos, testimonials, case studies, industries served, use cases, and quantitative results. This is the largest input into Customer Proof scoring.
We collect company-level information such as founded year, headquarters, country, team size, office locations, funding, investors, acquisitions, and business status.
We check third-party review sites such as G2, Capterra, and TrustRadius for rating, review count, category, and review presence.
We collect external recognition from platform partner directories, award databases, press pages, analyst mentions, industry publications, and conference programmes.
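Conceptually, those stages accumulate into one structured record per platform. A sketch of what such a record might contain, using illustrative field names drawn from the stages above rather than a published schema:

```python
# Sketch of the structured profile a platform accumulates across the
# collection stages above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PlatformProfile:
    name: str
    supported_networks: list[str] = field(default_factory=list)
    lifecycle_stages: list[str] = field(default_factory=list)
    native_integrations: list[str] = field(default_factory=list)
    named_customers: list[str] = field(default_factory=list)
    case_studies: int = 0
    founded_year: int | None = None
    employee_count: int | None = None
    total_funding_usd: float | None = None
    g2_rating: float | None = None
    g2_review_count: int | None = None
    awards: list[str] = field(default_factory=list)
    # Pricing is collected for buyers but deliberately excluded from scoring.
    pricing_model: str | None = None
```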
The structured data is passed through our four-dimension rubric. Each dimension is scored against fixed criteria, not against other platforms in the directory.
Every score and the data behind it are recorded and timestamped. Scores are recalculated when new data becomes available, for example when a platform launches new features, publishes new case studies, receives new reviews, wins awards, raises funding, or expands into new markets.
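A sketch of the record-and-refresh step, again with illustrative names: each scoring run stores its inputs and a UTC timestamp, so a score can always be traced to the data that produced it.

```python
# Illustrative sketch of timestamped scoring runs. A score is recomputed
# whenever the underlying profile data changes, and every run keeps a
# pointer to the inputs that produced it.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScoreRecord:
    platform: str
    total: float
    dimension_scores: tuple[float, float, float, float]
    input_snapshot_id: str  # hypothetical pointer to the data snapshot used
    scored_at: datetime

def record_score(platform: str, dims: tuple[float, float, float, float],
                 snapshot_id: str) -> ScoreRecord:
    return ScoreRecord(
        platform=platform,
        total=sum(dims),
        dimension_scores=dims,
        input_snapshot_id=snapshot_id,
        scored_at=datetime.now(timezone.utc),
    )
```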
A platform must meet minimum completeness requirements before it can be published or scored.
A profile should include:
A platform can be published before it is scored. Scoring requires a higher completeness bar:
We do not score profiles without sufficient data. A half-scraped entry damages the usefulness of the directory.
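As an illustration of that gate, a profile might only become eligible for scoring once a minimum set of fields is present. The required fields below are hypothetical stand-ins; the real completeness bar is the one described above:

```python
# Sketch of a completeness gate before scoring. The required fields here
# are hypothetical stand-ins, not the actual completeness requirements.

def scorable(profile: dict) -> bool:
    """Return True only if the profile meets a minimum completeness bar."""
    required = ("name", "supported_networks", "named_customers", "g2_review_count")
    return all(profile.get(key) not in (None, [], 0) for key in required)

print(scorable({"name": "ExamplePlatform", "supported_networks": ["Instagram"],
                "named_customers": ["Brand X"], "g2_review_count": 120}))  # True
print(scorable({"name": "ExamplePlatform"}))  # False -- can be published, not scored
```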
The fastest way to improve a score is to improve the underlying evidence. Below is a concrete checklist for each dimension.
A specific result for a named customer is stronger than a broad claim. “Brand X reduced reporting time by 40% using our analytics dashboard” is more useful than “our customers save time.”
If a platform believes its score does not reflect its current state, the right path is to update the underlying data on its website, review profiles, partner pages, and public company profiles. Scores will reflect those updates on the next refresh.
It is as important to be clear about what is excluded from the rubric as about what is included.
Pricing is not a scoring dimension. A $49/month tool is not automatically worse than a $5,000/month platform. Pricing is displayed because buyers care about it, but it does not affect the score.
Platforms cannot pay to improve their score, sponsor their ranking, or buy a better position in the directory.
We do not use self-submitted scorecards as scoring inputs.
Customer quotes are useful context, but testimonials without named results do not significantly move scores.
A platform’s own social media following is not a meaningful proxy for its ability to support influencer marketing programmes.
New entries and long-standing entries are scored on the same scale. A platform does not receive points simply for having been listed longer.