High-performance distributed query engines and SQL-on-Hadoop systems that enable fast analytics across large, heterogeneous datasets.
Trino is the most consistently surfaced brand when users ask AI assistants about Analytical Query Engines, but not to the point of monopolising the conversation. ClickHouse and a handful of others appear regularly, giving users a reasonable set of alternatives to compare. The top of this leaderboard is where most of the AI-driven traffic will go.
Ranked by overall AI Visibility Score (geometric mean of LBA, Authority, and TOM). Click any brand for the full report.
| # | Brand | LBA | Authority | TOM | Overall |
|---|---|---|---|---|---|
| 1 | Trino (trino.io) | 82 | 69 | 91 | 80 |
| 2 | ClickHouse (clickhouse.com) | 86 | 64 | 81 | 76 |
| 3 | Snowflake (snowflake.com) | 85 | 58 | 83 | 74 |
| 4 | Apache (apache.org) | 83 | 43 | 66 | 62 |
| 5 | Databricks (databricks.com) | 92 | 27 | 63 | 54 |
| 6 | DuckDB (duckdb.org) | 87 | 34 | 41 | 49 |
| 7 | Starburst (starburst.io) | 78 | 17 | 28 | 34 |
| 8 | Dremio (dremio.com) | 81 | 13 | 28 | 31 |
| 9 | MotherDuck (motherduck.com) | 75 | 0 | 3 | 3 |
| 10 | StarRocks (starrocks.io) | 78 | 0 | 24 | 3 |
| 11 | Presto (prestodb.io) | 77 | 0 | 5 | 2 |
| 12 | SingleStore (singlestore.com) | 80 | 0 | 4 | 2 |
| 13 | Rockset (rockset.com) | 75 | 0 | 2 | 1 |
| 14 | Brytlyt (brytlyt.com) | 24 | 0 | 0 | 0 |
| 15 | CrateDB (crate.io) | 74 | 0 | 0 | 0 |
| 16 | Databend (databend.com) | 53 | 0 | 0 | 0 |
| 17 | Exasol (exasol.com) | 66 | 0 | 0 | 0 |
| 18 | Firebolt (firebolt.io) | 75 | 0 | 0 | 0 |
| 19 | Heavy.AI (heavy.ai) | 63 | 0 | 0 | 0 |
| 20 | Ingres (actian.com) | 36 | 0 | 0 | 0 |
| 21 | Kinetica (kinetica.com) | 67 | 0 | 0 | 0 |
| 22 | Kyligence (kyligence.io) | 50 | 0 | 0 | 0 |
| 23 | Ocient (ocient.com) | 36 | 0 | 0 | 0 |
| 24 | RisingWave (risingwave.com) | 67 | 0 | 0 | 0 |
| 25 | SelectDB (selectdb.com) | 39 | 0 | 0 | 0 |
| 26 | Yellowbrick (yellowbrick.com) | 60 | 0 | 0 | 0 |
Every brand in this leaderboard is scored against the same set of 221 shared Analytical Query Engines prompts: the same prompts, the same model, the same number of iterations. Differences in scores therefore reflect actual differences in AI visibility, not differences in measurement.
Overall = (LBA × Authority × TOM)^(1/3). A geometric mean is used so that any single weak metric pulls the overall score down, rather than being masked by strength elsewhere.
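A minimal sketch of that calculation, using Python purely for illustration. The formula and the Trino row (82, 69, 91 → 80) come from this page; note that displayed values are rounded, so rows showing a 0 component may not reproduce exactly from the published figures.

```python
def overall(lba: float, authority: float, tom: float) -> float:
    """Overall AI Visibility Score: geometric mean of LBA, Authority, and TOM."""
    return (lba * authority * tom) ** (1 / 3)

# Check against the published Trino row: LBA 82, Authority 69, TOM 91.
print(round(overall(82, 69, 91)))  # 80
```

Because the three components are multiplied, a brand scoring 0 on any one of them scores 0 overall, which is exactly the behaviour described above.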
LBA = quality × meta × stability × share × recognition × 100. Read the full LBA methodology →
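A hedged sketch of the LBA formula as quoted above. The page does not publish per-brand factor values or their ranges, so this assumes each factor is a fraction in [0, 1] (which the ×100 scaling suggests), and the numbers below are hypothetical.

```python
def lba(quality: float, meta: float, stability: float,
        share: float, recognition: float) -> float:
    """LBA score: product of five factors (assumed 0..1), scaled to 0..100."""
    return quality * meta * stability * share * recognition * 100

# Hypothetical factor values for illustration only.
print(round(lba(0.95, 0.98, 0.99, 0.95, 0.94)))  # 82
```

As with the overall score, the multiplicative form means one weak factor drags the whole LBA score down rather than being averaged away.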