LLM Council: Route Tasks Across Models and Aggregate Answers

NEO built a council-style layer: several LLMs can propose, critique, and merge answers. That spreads risk on important prompts so you are not betting everything on one model.


Problem Statement

We asked NEO to wire up a flow that runs multiple LLMs on the same task, compares what comes back, and then surfaces either a consensus or a ranked answer. The point is to trim blind spots and overconfidence from a single vendor or weights file.
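The core of such a flow is fanning the same prompt out to several models at once. A minimal sketch of that fan-out, with hypothetical stub functions standing in for real provider calls (the actual model wiring lives in the repository):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real provider calls (OpenAI, Anthropic, local weights, ...).
def model_a(prompt: str) -> str:
    return f"A: {prompt}"

def model_b(prompt: str) -> str:
    return f"B: {prompt}"

def fan_out(prompt: str, models) -> list[str]:
    """Run the same prompt against every model concurrently, preserving order."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        return list(pool.map(lambda m: m(prompt), models))

answers = fan_out("Summarize the report", [model_a, model_b])
```

Because the candidates come back in a fixed order, later stages can compare or rank them deterministically.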


Solution Overview

NEO shipped LLM Council with:

  1. Multi-model routing: Pick providers and models per stage.
  2. Deliberation rounds: Optional critique and revision between models.
  3. Aggregation: Vote, score, or use a judge model to pick the final answer.
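The aggregation stage above can be sketched in a few lines. This is an illustrative implementation of the "vote" and "judge" strategies, not the repository's actual API; `judge` here is any scoring callable standing in for a judge model:

```python
from collections import Counter

def majority_vote(candidates: list[str]) -> str:
    """Pick the most common candidate; ties fall to the first one seen."""
    return Counter(candidates).most_common(1)[0][0]

def judge_pick(candidates: list[str], judge) -> str:
    """Let a judge (here, any scoring function) score candidates and keep the best."""
    return max(candidates, key=judge)

final = majority_vote(["42", "42", "41"])
```

A heuristic like answer length can slot in as the judge during testing, then be swapped for a real judge-model call in production.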

[Diagram: LLM Council architecture]

Workflow / Pipeline

  1. Task ingest: User prompt and constraints; council configuration loaded.
  2. Parallel propose: Each model generates a candidate answer.
  3. Critique / refine: Optional cross-model feedback and second pass.
  4. Aggregate: Judge or heuristic selects the final output and rationale.
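Tying the four steps together, a council run reduces to one orchestration function. A minimal sketch under assumed interfaces (models, critique, and judge are all plain callables here; the real project routes through provider APIs):

```python
def run_council(prompt, models, critique=None, judge=None):
    """Minimal council loop: ingest -> propose -> optional critique -> aggregate."""
    # Step 2: every model proposes a candidate answer for the same prompt.
    candidates = [m(prompt) for m in models]
    # Step 3: optional critique/refine round rewrites each candidate.
    if critique is not None:
        candidates = [critique(prompt, c) for c in candidates]
    # Step 4: a judge scores candidates if provided; else keep the first proposal.
    if judge is not None:
        return max(candidates, key=judge)
    return candidates[0]

# Hypothetical stub models; a length-based "judge" prefers the longer answer.
result = run_council("hi", [lambda p: p + "!", lambda p: p + "!!"], judge=len)
```

Each stage is pluggable, so swapping a vote-based aggregator for a judge model only changes the final step.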

Repository & Artifacts

abhishekgandhi-neo/llm_council_by_neo (view on GitHub)

