← Back

Web App · Production

Peechz

A video-first talent marketplace powered by a 3-layer AI routing engine. Live at peechz.com. After winning 1st Place at HackOHIO 2025 against 800 participants, we launched to immediate organic founder adoption and were invited to present to 300+ people at Ohio State within our first 7 days.

Status

Live Beta · HackOHIO 2025 Winner

Stack

Next.js 15 · FastAPI · Postgres · Cloudflare R2 · Groq

[ THE THESIS ]

LinkedIn shows polish. GitHub shows code. Neither shows momentum.

The hiring market says it wants proof of skill, but discovery is still trapped in old signals. Resumes reward formatting. LinkedIn rewards network polish. GitHub only works if someone already knows where to look, and most real builders never get an audience for the projects they ship. Peechz fixes that missing layer: a video-first talent marketplace where builders pitch real work, AI extracts the technical signal, and recruiters, investors, and collaborators discover people through proof of work instead of polished paperwork.

LIVE · peechz.com · Production Beta · First week after launch

1st
HackOHIO 2025
vs 800 participants
3
official mentions
HackOHI/O + MVNU + Kenyon
25
early users
first week live
5
live pitches
production beta
300+
launch audience
Ohio State
95%
fewer API calls
on interaction hot path

SECTION 01 — SYSTEM CONSTRAINTS

Four hard rules that shape every decision

Startups die by over-engineering. Video and real-time feeds get expensive fast, so Peechz was designed around constraints that keep the system cheap, observable, and hard to corrupt.

01

Supabase is the single source of truth

Every write lands in Postgres, and Realtime fans out from there, so no other layer can drift out of sync.

02

Backend is stateless on the hot path

FastAPI holds zero session state per request. Any instance can serve any call.

03

Video never touches Python

Presigned PUTs upload straight to R2. Reads come from the CDN. FastAPI observes zero video bandwidth.

04

No vanity metrics

PTCHD only counts a view after ≥50% watch time or a high-intent action. Saves, comments, and follows matter more than impressions.

SECTION 02 — DISTRIBUTED SYSTEM TOPOLOGY

How the four layers cooperate

[ Runtime Map ]

Client actions flow into Postgres, FastAPI, and R2 without crossing responsibilities.

Next.js 15 · Supabase · FastAPI · Cloudflare R2

01 · Client control plane

Next.js + TypeScript

InteractionManager

  • 30s batch sync
  • 50-item early flush
  • localStorage LRU (500)

CommentManager

  • Optimistic UI · temp IDs
  • Realtime WS per-pitch
  • Offline queue

02 · Source of truth

Realtime WS

Supabase owns durable state.

  • Postgres 15 + RLS — security at the database layer.
  • Realtime publication — WebSocket fan-out on 6 tables.
  • AFTER INSERT/DELETE triggers — atomic counts, no read-modify-write.
  • UNIQUE index on pitch_views(pitch_id, session_id) — one counted view per session.

03 · Stateless backend

REST API

FastAPI routes signal, not sessions.

Searcher · LLM

Detector · Behavior

Recommender · Scoring

Groq provider wrapper

3-key rotation · 60s cooldown on 429 · fallback: gpt-oss-20b → llama-3.1-8b → llama-3.3-70b
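The rotation-plus-fallback behavior above fits in a few lines. A minimal sketch, assuming a synchronous transport callable (class, exception, and parameter names are mine; the model chain and 60s cooldown are from the line above — this is not the production code):

```python
import time

class RateLimited(Exception):
    """Raised by the transport on HTTP 429."""

class ModelUnavailable(Exception):
    """Raised by the transport when a model itself fails."""

class GroqProvider:
    """Rotate across API keys, bench a key for 60s after a 429,
    and fall back through the model chain on model failure."""

    MODELS = ["gpt-oss-20b", "llama-3.1-8b", "llama-3.3-70b"]
    COOLDOWN_S = 60

    def __init__(self, keys, call_fn, now=time.monotonic):
        self.keys = list(keys)                # e.g. 3 rotated Groq keys
        self.call_fn = call_fn                # (key, model, prompt) -> text
        self.now = now
        self.cooldown_until = {k: 0.0 for k in keys}
        self._i = 0

    def _next_key(self):
        # Round-robin over keys that are not cooling down.
        for _ in range(len(self.keys)):
            key = self.keys[self._i % len(self.keys)]
            self._i += 1
            if self.now() >= self.cooldown_until[key]:
                return key
        return None                           # every key is cooling down

    def complete(self, prompt):
        for model in self.MODELS:
            key = self._next_key()
            while key is not None:
                try:
                    return self.call_fn(key, model, prompt)
                except RateLimited:           # 429: bench this key for 60s
                    self.cooldown_until[key] = self.now() + self.COOLDOWN_S
                    key = self._next_key()
                except ModelUnavailable:      # model error: next model in chain
                    break
        raise RuntimeError("all keys cooling down or model chain exhausted")
```

The design choice worth noting: a 429 rotates the *key* but keeps the model, while a model failure falls back down the chain with the keys intact.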

04 · Media layer · Cloudflare R2 + CDN

Videos bypass Python entirely. Client uploads via presigned PUT URLs directly to R2.

Backend never proxies video bytes
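R2 speaks the S3 API, so a presigned PUT is an ordinary SigV4 query-signed URL. A stdlib-only sketch of what the backend hands the client — in practice you would use an S3 SDK; the function name and parameters here are illustrative, not Peechz's actual endpoint:

```python
import datetime, hashlib, hmac, urllib.parse

def presign_put(host, bucket, key, access_key, secret_key,
                region="auto", expires=900):
    """Build a SigV4 query-presigned PUT URL. The client PUTs the
    video bytes straight to this URL; the backend never sees them."""
    t = datetime.datetime.now(datetime.timezone.utc)
    amz_date = t.strftime("%Y%m%dT%H%M%SZ")
    datestamp = t.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    uri = f"/{bucket}/{key}"

    # Query parameters, sorted and percent-encoded, are part of what is signed.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = urllib.parse.urlencode(sorted(params.items()),
                                quote_via=urllib.parse.quote)

    canonical = "\n".join(["PUT", uri, qs, f"host:{host}\n",
                           "host", "UNSIGNED-PAYLOAD"])
    to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                         hashlib.sha256(canonical.encode()).hexdigest()])

    # Derive the signing key: HMAC chain over date, region, service.
    k = ("AWS4" + secret_key).encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}{uri}?{qs}&X-Amz-Signature={sig}"
```

Because signing is pure computation, the FastAPI endpoint that issues these URLs stays stateless and does zero video I/O.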

SECTION 03 — THE SEKAN ALGORITHM

3-layer AI routing · rejects LangChain / CrewAI / LlamaIndex

Most AI features are thin wrappers around one prompt. Peechz needed deterministic, sub-second feed ranking, so the routing system is three small agents instead of a heavy orchestration framework.

L1

Searcher

LLM · Groq

Tag generation · summary · query expansion

Does NOT make recommendation decisions. Enriches content only.

L2

Detector

Deterministic · no LLM

Converts raw events into InteractionVector

watch_ratio ≥ 0.6 → watched_enough=1 · batches of 10
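The deterministic event-to-vector step reads as a few lines. A minimal sketch, assuming a flat event dict (field and class names are mine; the 0.6 threshold and batch size of 10 are from above):

```python
from dataclasses import dataclass

WATCH_THRESHOLD = 0.6   # watch_ratio >= 0.6 counts as "watched enough"
BATCH_SIZE = 10         # vectors are released in batches of 10

@dataclass
class InteractionVector:
    pitch_id: str
    watched_enough: int   # 1 or 0, never a float score
    liked: int
    saved: int
    commented: int

def to_vector(event: dict) -> InteractionVector:
    """Deterministic, no LLM: a raw client event becomes a fixed-width vector."""
    return InteractionVector(
        pitch_id=event["pitch_id"],
        watched_enough=1 if event.get("watch_ratio", 0.0) >= WATCH_THRESHOLD else 0,
        liked=int(event.get("liked", False)),
        saved=int(event.get("saved", False)),
        commented=int(event.get("commented", False)),
    )

class Detector:
    """Buffers vectors and hands them to the Recommender 10 at a time."""
    def __init__(self):
        self.buffer = []

    def ingest(self, event: dict):
        self.buffer.append(to_vector(event))
        if len(self.buffer) >= BATCH_SIZE:
            batch, self.buffer = self.buffer, []
            return batch
        return None
```

Keeping this layer LLM-free is what makes the feed deterministic and sub-second: only L1 ever waits on a model.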

L3

Recommender

Weighted scoring engine

Scores candidates against user profile

6 signals sum to 1.0 · ±10% randomness · 5-min TTL cache

Recommender · weighted scoring signals

Σ = 1.00 · × random(0.9, 1.1)

Tag similarity — 25%

Creator preference — 20%

Popularity — 18%

Recency — 15%

Audience targeting — 12%

Engagement quality — 10%
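The weights above map directly onto a weighted sum with ±10% jitter. A minimal sketch (signal key names are mine; the percentages and the random(0.9, 1.1) factor are from the table):

```python
import random

# Signal weights as listed above; they must sum to 1.0.
WEIGHTS = {
    "tag_similarity": 0.25,
    "creator_preference": 0.20,
    "popularity": 0.18,
    "recency": 0.15,
    "audience_targeting": 0.12,
    "engagement_quality": 0.10,
}

def score(signals: dict, rng=random.random) -> float:
    """Weighted sum of per-signal scores in [0, 1], then a ±10% jitter
    so two users with identical profiles don't see identical feeds."""
    base = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    jitter = 0.9 + 0.2 * rng()   # uniform in [0.9, 1.1)
    return base * jitter
```

Injecting `rng` keeps the function testable: pin it and the ranking becomes fully deterministic, which is exactly the property the 3-agent design is built around.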

SECTION 04 — ENGINEERING HIGHLIGHTS

Three decisions I can defend line by line

HONEST METRICS

PTCHD — the view count that cannot be faked

A pitch only counts as PTCHD when (a) watch ratio ≥ 50% via IntersectionObserver + onTimeUpdate, OR (b) high-intent action (like / save / comment). Dedup client-side via Set<string>, server-side via UNIQUE(pitch_id, session_id).

atomic RPC · no race
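The counting rule is a few lines of guard logic. A Python sketch (the real client is TypeScript with a `Set<string>`; names here are illustrative):

```python
from typing import Optional

PTCHD_WATCH_RATIO = 0.5                      # >= 50% watched
HIGH_INTENT = {"like", "save", "comment"}

counted: set = set()                         # client-side dedup

def should_count(pitch_id: str, watch_ratio: float,
                 action: Optional[str] = None) -> bool:
    """A pitch counts as PTCHD at most once per session: enough
    watch time OR a high-intent action, never mere impressions."""
    if pitch_id in counted:
        return False                         # already counted this session
    if watch_ratio >= PTCHD_WATCH_RATIO or action in HIGH_INTENT:
        # The server enforces the same rule independently via
        # UNIQUE(pitch_id, session_id), so a cleared Set can't double-count.
        counted.add(pitch_id)
        return True
    return False
```

Client and server dedup are deliberately redundant: the Set saves a network call, the UNIQUE constraint makes the guarantee.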

NETWORK PATTERN

Hybrid Manager Pattern

Client singletons intercept every user action before it hits the network. 500-entry LRU cache · optimistic UI · 30s sync or 50-item early flush · localStorage queue survives hard refresh.

~95% fewer API calls
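The batching core of that singleton fits in one class. A Python sketch for brevity (the real manager is TypeScript, persists its queue to localStorage, and runs a 30s timer; names and signatures here are mine):

```python
import time
from collections import OrderedDict

FLUSH_INTERVAL_S = 30   # periodic batch sync
FLUSH_COUNT = 50        # early flush threshold
LRU_CAPACITY = 500      # cached interaction entries

class InteractionManager:
    """Intercepts every user action: the UI reads the local cache
    immediately, and the network sees one batched request per flush."""

    def __init__(self, send_batch, now=time.monotonic):
        self.send_batch = send_batch        # callable(list of (pitch_id, action))
        self.now = now
        self.queue = []                     # the client persists this across refreshes
        self.cache = OrderedDict()          # LRU of latest state per pitch
        self.last_flush = now()

    def record(self, pitch_id, action):
        # Optimistic update: cache first, network later.
        self.cache[pitch_id] = action
        self.cache.move_to_end(pitch_id)
        if len(self.cache) > LRU_CAPACITY:
            self.cache.popitem(last=False)  # evict least recently used
        self.queue.append((pitch_id, action))
        # Early flush at 50 items; the real client also flushes on a 30s timer.
        if (len(self.queue) >= FLUSH_COUNT
                or self.now() - self.last_flush >= FLUSH_INTERVAL_S):
            self.flush()

    def flush(self):
        if self.queue:
            self.send_batch(self.queue)
            self.queue = []
        self.last_flush = self.now()
```

Fifty likes become one POST instead of fifty, which is where the ~95% reduction on the hot path comes from.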

CONCURRENCY

Atomic counts via Postgres triggers

Application code cannot race. AFTER INSERT/DELETE triggers with SECURITY DEFINER update likes_count / saves_count / comments_count inside a single transaction. GREATEST(…, 0) defends against negative counts.

0 lost counts under load

SECTION 05 — DELIBERATE NON-CHOICES

What I chose NOT to build — with the exact trigger to revisit

Good engineering is knowing what not to ship yet. These are deliberate constraints, not missing features.

Not yet

LangChain / CrewAI / LlamaIndex

Why

3 Python classes are clearer + faster for a 3-agent system

Trigger to revisit

>7 agents or multi-step tool chains

Not yet

Redis / separate cache tier

Why

In-process cache + 5-min TTL is enough for current load

Trigger to revisit

Horizontal scale beyond 1 FastAPI instance

Not yet

Client-side vector DB

Why

Tag overlap + weighted scoring is fast and explainable

Trigger to revisit

Corpus crosses ~10k pitches

Not yet

Backend video transcoding

Why

Browsers handle MP4/WebM natively; bad uploads get clear errors

Trigger to revisit

User-reported format failures > 5%

SECTION 06 — CODE PROOF

The actual atomic-count trigger · production

Verbatim from the supabase/migrations/ folder. Not pseudocode.

supabase/migrations/0012_atomic_counts.sql

CREATE OR REPLACE FUNCTION handle_user_action_count()
RETURNS TRIGGER AS $$
BEGIN
  IF (TG_OP = 'INSERT') THEN
    IF NEW.action_type = 'like' THEN
      UPDATE pitches
         SET likes_count = COALESCE(likes_count, 0) + 1
       WHERE id = NEW.pitch_id;
    ELSIF NEW.action_type = 'save' THEN
      UPDATE pitches
         SET saves_count = COALESCE(saves_count, 0) + 1
       WHERE id = NEW.pitch_id;
    END IF;
  ELSIF (TG_OP = 'DELETE') THEN
    IF OLD.action_type = 'like' THEN
      UPDATE pitches
         SET likes_count = GREATEST(COALESCE(likes_count, 0) - 1, 0)
       WHERE id = OLD.pitch_id;
    ELSIF OLD.action_type = 'save' THEN
      UPDATE pitches
         SET saves_count = GREATEST(COALESCE(saves_count, 0) - 1, 0)
       WHERE id = OLD.pitch_id;
    END IF;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER on_user_action_change
AFTER INSERT OR DELETE ON public.user_actions
FOR EACH ROW EXECUTE FUNCTION handle_user_action_count();