LLM Council

llm-council-local.vercel.app

A web app that sends the same query to multiple LLMs simultaneously and synthesises the best answer from the combined output.

ai · web-app · llm · tooling

Inspired by Andrej Karpathy's llm-council, which requires cloning the repo and running it locally. This is a deployed version anyone can use without setup.

One query, sent to multiple LLMs simultaneously. Responses stream back in parallel, and a synthesis layer distills the best answer from the combined output. Built with Next.js, Render, and MongoDB.

How it works

Stage 1 — First opinions. Your query goes to all LLMs individually. Responses are collected and shown in a tab view so you can inspect each one.
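The fan-out in Stage 1 can be sketched as a parallel dispatch where one failing model must not sink the others. This is a minimal sketch, not the app's actual code; the `ModelFn` wrapper and function names are illustrative assumptions.

```typescript
// One async wrapper per model; the real app would call each provider's API.
type ModelFn = (prompt: string) => Promise<string>;

interface Opinion {
  model: string;
  text: string;
  error?: string;
}

// Send the same query to every model in parallel. Promise.allSettled
// (rather than Promise.all) keeps one model's failure from rejecting
// the whole batch, so the tab view can still show the rest.
async function firstOpinions(
  models: Record<string, ModelFn>,
  query: string
): Promise<Opinion[]> {
  const entries = Object.entries(models);
  const settled = await Promise.allSettled(entries.map(([, fn]) => fn(query)));
  return settled.map((result, i) => {
    const model = entries[i][0];
    return result.status === "fulfilled"
      ? { model, text: result.value }
      : { model, text: "", error: String(result.reason) };
  });
}
```

In the deployed app the responses also stream back incrementally; this sketch collects them whole for clarity.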

Stage 2 — Review. Each LLM is shown the other models' responses and asked to rank them by accuracy and insight. Identities are anonymized so no model can play favorites.
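The anonymized review in Stage 2 amounts to prompt assembly: drop the reviewer's own answer, relabel the rest, and keep a private map from label back to model. A sketch under those assumptions (labels like "Response A" and the exact prompt wording are illustrative, not taken from the app):

```typescript
interface Opinion {
  model: string;
  text: string;
}

// Build the review prompt shown to one model. Its own answer is
// excluded, and the remaining responses are relabeled A, B, C… so the
// reviewer cannot tell which model wrote which answer. The returned
// map lets the server de-anonymize the rankings afterwards.
function buildReviewPrompt(
  reviewer: string,
  opinions: Opinion[],
  query: string
): { prompt: string; labelToModel: Map<string, string> } {
  const others = opinions.filter((o) => o.model !== reviewer);
  const labelToModel = new Map<string, string>();
  const sections = others.map((o, i) => {
    const label = String.fromCharCode(65 + i); // "A", "B", "C"…
    labelToModel.set(label, o.model);
    return `Response ${label}:\n${o.text}`;
  });
  const prompt =
    `Question: ${query}\n\n` +
    sections.join("\n\n") +
    `\n\nRank these responses by accuracy and insight, best first.`;
  return { prompt, labelToModel };
}
```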

Stage 3 — Final response. A designated Chairman LLM takes all the responses and rankings and compiles them into a single final answer.
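Stage 3 can be sketched as folding every answer and every ranking into one prompt for the Chairman model. Again a hedged sketch: the function name, layout, and closing instruction are assumptions, not the app's actual prompt.

```typescript
interface Opinion {
  model: string;
  text: string;
}

// Compile the Chairman's input: the original question, every model's
// first opinion, and every reviewer's ranking of the others. The
// Chairman model then produces the single final answer from this.
function buildChairmanPrompt(
  query: string,
  opinions: Opinion[],
  rankings: Record<string, string> // reviewer model -> its ranking text
): string {
  const answers = opinions
    .map((o, i) => `Answer ${i + 1} (${o.model}):\n${o.text}`)
    .join("\n\n");
  const reviews = Object.entries(rankings)
    .map(([reviewer, r]) => `${reviewer}'s ranking:\n${r}`)
    .join("\n\n");
  return (
    `Question: ${query}\n\n${answers}\n\n${reviews}\n\n` +
    `As chairman, synthesize a single best answer from the material above.`
  );
}
```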