After building CryptoPrism-DB, I had something genuinely valuable sitting in PostgreSQL: 9 months of pre-computed technical indicators for 1,000+ cryptocurrencies, updated daily, with financial ratios that most retail tools don't even expose. The Sharpe ratio, Sortino ratio, Alpha and Beta relative to Bitcoin — indicators that institutional desks use but retail platforms either hide or charge for.
TradingView's screener charges $60+/mo and doesn't expose raw ratio data. Coinigy is $18.66/mo for a screener with no financial ratio layer. I already had the data — I was missing the product layer that would let people query it.
The screener also needed to be the second CryptoPrism product, which meant monetisation architecture had to be a first-class design constraint — not an afterthought bolted on after the data model was locked.
I designed backward from the core value proposition: the advanced filter endpoint.
POST /api/v1/screening/filter is where the product lives — the ability to say "give me all coins where RSI-14 is below 30, Sharpe ratio is above 1.5, and OBV is trending up over 7 days." Everything else is infrastructure around that query.
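Concretely, a request to that endpoint might look like the following sketch. The field names (`metric`, `op`, `value`, `logic`) are illustrative assumptions, not the actual schema:

```python
# Hypothetical body for POST /api/v1/screening/filter expressing the query
# "RSI-14 below 30 AND Sharpe above 1.5 AND OBV trending up over 7 days".
filter_request = {
    "logic": "and",
    "conditions": [
        {"metric": "rsi_14", "op": "lt", "value": 30},
        {"metric": "sharpe_ratio", "op": "gt", "value": 1.5},
        {"metric": "obv_trend_7d", "op": "eq", "value": "up"},
    ],
    "limit": 100,
}
```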
FastAPI was the obvious choice as the thin API layer: it speaks directly to PostgreSQL via SQLAlchemy, auto-generates Swagger UI documentation, handles async WebSocket connections natively, and its type hints double as API documentation. I did not write a single line of API docs manually.
On the frontend, React Query handles the REST/WebSocket data boundary. The screener results table subscribes to WebSocket price updates for live overlays without re-fetching indicator data (which changes daily, not by the second). This separation — slow data via REST, fast data via WebSocket — keeps the UI performant and the data model clean.
Three-tier architecture: React frontend, FastAPI middleware, CryptoPrism PostgreSQL backend:
┌──────────────────────────────────────────────────────────────────┐
│ CRYPTOPRISM SCREENER STACK │
└──────────────────────────────────────────────────────────────────┘
FRONTEND (React 18 + TypeScript + Tailwind)
─────────────────────────────────────────────
┌────────────────────────────────────────────┐
│ Screener Table │ Watchlists │ Alerts │
│ Filter Builder │ Chart View │ Auth UI │
└──────────┬───────────────┬─────────────────┘
│ REST │ WebSocket
│ │
─────────────────────────────────────────────
FASTAPI BACKEND (Python 3.10+)
─────────────────────────────────────────────
┌────────────────────────────────────────────┐
│ JWT Auth Layer + Subscription Tier Check │
├────────────────────────────────────────────┤
│ GET /api/v1/cryptocurrencies │
│ POST /api/v1/screening/filter ← core │
│ GET /api/v1/watchlists │
│ POST /api/v1/alerts │
│ WS /ws/market-data ← live │
├────────────────────────────────────────────┤
│ Rate Limiter (per subscription tier) │
│ Query Builder (100+ indicator combos) │
│ Swagger UI / ReDoc (auto-generated) │
└──────────┬─────────────────────────────────┘
│ SQLAlchemy (async)
│
─────────────────────────────────────────────
POSTGRESQL DATABASES
─────────────────────────────────────────────
┌──────────────────────┐ ┌──────────────────┐
│ dbcp (production) │ │ cp_ai (analysis)│
│ • OHLCV tables │ │ • DMV scores │
│ • Listings │ │ • AI summaries │
│ • Volume indicators │ │ • QA reports │
└──────────────────────┘ └──────────────────┘
9 months pre-computed data · 2.26M+ records
DEPLOYMENT: Docker Compose · Nginx · SSL
Indicator categories available in the filter builder:
MOMENTUM     RSI (14), ROC, Williams %R, Stochastic %K/%D
OSCILLATORS  MACD, CCI, ADX, Bollinger Bands width/position
RATIOS       Sharpe, Sortino, Alpha vs BTC, Beta vs BTC
VOLUME       OBV, VWAP, Volume channel position, CMF
METRICS      Market cap, ATH/ATL distance, coin age, dominance
The screener table shows indicator values (from REST, updated daily) alongside live price and volume overlays (from WebSocket, updated per-second). The naive implementation cached everything in React Query — but WebSocket price updates were overwriting the cached indicator rows, causing stale re-renders where RSI values would flicker and reset every time a price tick came in.
The filter endpoint needs to accept arbitrary combinations of conditions: "RSI below 30 AND Sharpe above 1.5 AND market cap above $100M". The naive approach is a massive switch statement that generates SQL fragments — unmaintainable after more than a handful of indicators, and an injection risk if not carefully escaped.
Solution: a typed filter schema where each indicator's valid ranges, SQL column name, table source, and comparison operators are defined in a central registry. The query builder reads from this registry to generate parameterised SQLAlchemy queries — no string interpolation, no injection risk. Adding a new indicator means one registry entry, not a code change in the query builder.
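A minimal, framework-agnostic sketch of the registry idea follows. The column and table names are assumptions, and the real build generates SQLAlchemy queries rather than raw SQL strings:

```python
from dataclasses import dataclass

# Each indicator declares its SQL column, source table, allowed operators,
# and valid value range in one place.
@dataclass(frozen=True)
class IndicatorSpec:
    column: str        # SQL column name
    table: str         # source table
    ops: frozenset     # comparison operators this indicator accepts
    lo: float          # lower bound of the valid value range
    hi: float          # upper bound of the valid value range

REGISTRY = {
    "rsi_14": IndicatorSpec("rsi_14", "momentum", frozenset({"lt", "gt"}), 0, 100),
    "sharpe": IndicatorSpec("sharpe_ratio", "ratios", frozenset({"lt", "gt"}), -10, 10),
}

SQL_OPS = {"lt": "<", "gt": ">"}

def build_where(conditions):
    """Turn (indicator, op, value) triples into a parameterised WHERE clause.

    Values never reach the SQL string; they travel as bind parameters, so
    there is nothing to inject. Unknown indicators raise KeyError; disallowed
    operators or out-of-range values raise ValueError.
    """
    fragments, params = [], {}
    for i, (name, op, value) in enumerate(conditions):
        spec = REGISTRY[name]
        if op not in spec.ops or not (spec.lo <= value <= spec.hi):
            raise ValueError(f"invalid condition on {name}")
        key = f"p{i}"
        fragments.append(f"{spec.column} {SQL_OPS[op]} :{key}")
        params[key] = value
    return " AND ".join(fragments), params
```

Adding indicator number 50 is one more `REGISTRY` entry; `build_where` never changes.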
Enterprise-grade rate limiting typically requires Redis — a shared in-memory store that all API instances can hit to count requests. Running Redis in a Docker Compose stack for a solo-built SaaS felt like premature infrastructure, but per-process in-memory rate limiting breaks the moment you run more than one FastAPI worker.
Solution: a hybrid approach. Rate limit counters are stored in PostgreSQL with a TTL-based cleanup job running every hour. The FastAPI middleware checks PostgreSQL for the count, increments it, and returns 429 if the tier limit is hit. It's slightly slower than Redis (2–5ms overhead per request) but perfectly correct across multiple workers and requires zero additional infrastructure. At the scale of a new SaaS product, this is the right trade-off.
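The counter logic can be sketched as a fixed one-minute window; the table name, SQL, and tier limits below are illustrative, and the `execute` callable stands in for a database session so the windowing logic is visible on its own:

```python
import time

# Illustrative tier limits (requests per minute); the real numbers differ.
TIER_LIMITS = {"free": 60, "pro": 600}

# One atomic round trip per request: insert the window row or bump its count.
# The hourly cleanup job deletes rows whose window_start has expired (the TTL).
UPSERT_SQL = """
INSERT INTO rate_limits (user_id, window_start, count)
VALUES (:user_id, :window_start, 1)
ON CONFLICT (user_id, window_start)
DO UPDATE SET count = rate_limits.count + 1
RETURNING count;
"""

def check_rate_limit(execute, user_id, tier, now=None):
    """Return True if this request is allowed, False if the middleware
    should answer 429. `execute` runs the upsert and returns the new count;
    in production it is a database session, here any callable will do."""
    now = time.time() if now is None else now
    window_start = int(now // 60) * 60   # fixed one-minute window
    count = execute(UPSERT_SQL, {"user_id": user_id, "window_start": window_start})
    return count <= TIER_LIMITS[tier]
```

Because every worker hits the same table, the count stays correct no matter how many FastAPI processes are running.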
Core product metrics at the end of the build:
The POST /screening/filter endpoint consistently returns results in under 100ms for standard filter combinations against 2.26M+ records, due to pre-computed indicator columns and targeted PostgreSQL indexes on the most commonly filtered fields (RSI, Sharpe, market cap).
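Those indexes could be written roughly as follows; the table and column names are assumptions based on the schema described above:

```python
# Illustrative DDL for the hot filter paths; names are assumptions.
INDEX_DDL = [
    "CREATE INDEX IF NOT EXISTS idx_rsi_14 ON momentum (rsi_14);",
    "CREATE INDEX IF NOT EXISTS idx_sharpe ON ratios (sharpe_ratio);",
    "CREATE INDEX IF NOT EXISTS idx_market_cap ON listings (market_cap);",
]
```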
- Monetisation architecture should be designed before the first line of code.
Adding subscription tiers to an existing system means retrofitting auth checks into every endpoint and migrating the user table schema. Designing it in from the start — JWT claims carrying tier information checked at the middleware layer — meant zero retrofitting.
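The middleware gate can be sketched in a few lines. The claim name `tier` and the tier names are assumptions, and JWT signature verification is assumed to have already happened upstream:

```python
# Sketch of the middleware-layer tier gate, applied once per request
# rather than retrofitted into every endpoint.
TIER_RANK = {"free": 0, "pro": 1, "enterprise": 2}

def require_tier(claims: dict, minimum: str) -> None:
    """Raise PermissionError unless the verified token's tier meets the
    endpoint's minimum tier."""
    tier = claims.get("tier", "free")
    if TIER_RANK.get(tier, 0) < TIER_RANK[minimum]:
        raise PermissionError(f"{minimum} subscription required")
```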
- WebSocket + REST data sources need a single source of truth on the client.
Mixing them in the same React Query cache is a mistake. Segregate by update frequency: slow data in React Query, fast data in a separate event-driven store. The merge happens at the render layer, not the data layer.
- FastAPI's auto-generated Swagger docs saved hours of documentation time.
Every endpoint has accurate, interactive documentation with no additional work. The Pydantic type system that drives FastAPI's validation also drives the Swagger schema — they're the same source.
- A registry pattern for complex query builders is worth the initial investment.
The indicator registry pattern meant that adding the 50th indicator was identical in effort to adding the 5th. The alternative — a growing switch statement — would have been a maintenance liability with every new indicator added.
- The right infrastructure is the least infrastructure that solves the problem correctly.
Choosing PostgreSQL over Redis for rate limiting adds 2–5ms per request. At the scale of a new SaaS product, that's an acceptable trade-off for zero additional infrastructure. Decisions should be driven by actual constraints, not anticipated ones.