StackA2A

Consensus.Ai
by Adidem23

Where AI disagrees before users suffer

Tags: general, google-adk, typescript
1 star · Updated 2026-02-08 · Quality Score 38/100
(Community 7 · Freshness 87 · Official 30 · Skills 10 · Protocol 40 · 🔒 Security 20)
Getting Started

1. Clone the repository
   $ git clone https://github.com/Adidem23/Consensus.ai
2. Navigate to the project
   $ cd Consensus.ai
3. Install dependencies
   $ npm install
4. Run the agent
   $ npm start

Or connect to the hosted endpoint: https://consensus-ai-lake.vercel.app

README

🧠 LLM Debate Agent

A multi-agent reasoning system that debates complex questions, evaluates arguments, and produces more reliable, explainable AI outputs.

📌 Problem Statement

Large Language Models (LLMs) often:

  • Produce confident but incorrect answers
  • Struggle with controversial or subjective questions
  • Provide single-perspective reasoning
  • Lack transparent evaluation of their own responses

For decision-making, learning, and analysis, a single LLM response is not enough.
Users need balanced reasoning, counter-arguments, and measurable confidence.

💡 Proposed Solution

The LLM Debate Agent introduces a multi-agent debate architecture where multiple LLMs independently reason from opposing perspectives, followed by an unbiased evaluation agent.

Instead of asking:

“What is the answer?”

We ask:

“What are the strongest arguments for and against this claim, and which one stands up to scrutiny?”

This approach improves:

  • Answer reliability
  • Reasoning transparency
  • Trustworthiness of LLM outputs
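The reframing above can be expressed as a prompt template that forces opposing perspectives plus a judging pass. This is an illustrative sketch only; the function name and prompt wording are hypothetical and may differ from the repository's actual prompts.

```python
# Illustrative prompt builder for the "for/against" reframing.
# build_debate_prompts and its wording are hypothetical -- the
# repository's actual prompts may differ.

def build_debate_prompts(claim: str) -> dict[str, str]:
    """Return per-role prompts that force opposing perspectives."""
    return {
        "pro": (
            f"Present the strongest arguments FOR this claim:\n{claim}\n"
            "Be rigorous; give reasoning steps, not just conclusions."
        ),
        "con": (
            f"Present the strongest arguments AGAINST this claim:\n{claim}\n"
            "Be rigorous; give reasoning steps, not just conclusions."
        ),
        "judge": (
            "Given arguments for and against the claim below, decide which "
            f"position stands up to scrutiny and explain why.\nClaim: {claim}"
        ),
    }

prompts = build_debate_prompts("Remote work improves productivity.")
print(prompts["judge"])
```

Each role's prompt would be sent to a different LLM node, so no single model gets to frame the question from only one side.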

🏗️ System Architecture

📐 Architecture Overview

The system is designed as a council of different LLM nodes that critique each other's outputs, with a supervisor node that collects their responses and returns the most relevant one.

Key components:

  • Supervisor Agent – Delegates the user query to every node in the LLM debate, gathers their final outputs, and returns the most reliable answer
  • LLM Nodes – Critique each other's outputs, refine their own answers, and send a final answer to the supervisor node
  • Central Authority – The debate state manager: it tracks which LLM produced each answer and which produced each critique
  • Opik – All LLM calls are traced to Comet Opik
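The Central Authority's bookkeeping can be sketched as a small in-memory state object. Class, field, and node names below are hypothetical stand-ins; the actual state manager in the repository may be structured differently.

```python
from dataclasses import dataclass, field

# Minimal in-memory sketch of the "Central Authority" / debate state
# manager. All names here are hypothetical, not the repo's actual API.

@dataclass
class DebateState:
    question: str
    # node name -> that node's current answer
    answers: dict[str, str] = field(default_factory=dict)
    # (critic node, target node, critique text)
    critiques: list[tuple[str, str, str]] = field(default_factory=list)

    def record_answer(self, node: str, answer: str) -> None:
        self.answers[node] = answer

    def record_critique(self, critic: str, target: str, text: str) -> None:
        self.critiques.append((critic, target, text))

    def critiques_of(self, target: str) -> list[str]:
        """All critique texts aimed at one node's answer."""
        return [text for _, tgt, text in self.critiques if tgt == target]

state = DebateState("Is eventual consistency acceptable for payments?")
state.record_answer("node-a", "No, payments need strong consistency.")
state.record_critique("node-b", "node-a", "Consider sagas as a counterpoint.")
print(state.critiques_of("node-a"))
```

Keeping answers and critiques attributed to specific nodes is what lets the supervisor reason about *who* disagreed with *whom*, rather than seeing an anonymous pile of text.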

🔁 Execution Flow

The debate follows a structured, step-by-step flow:

  1. User submits a question or claim
  2. The supervisor delegates it to all LLM nodes in the council
  3. Each LLM node generates one final output
  4. The LLM nodes send their outputs to the supervisor node
  5. The supervisor selects the most relevant answer and returns it to the user
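The five steps above can be sketched as a plain-Python orchestration loop. Each "node" here is a stand-in callable rather than a real LLM call (the repository routes these through google-adk/langchain), and the relevance rule is a deliberately toy proxy.

```python
from typing import Callable

# Hypothetical sketch of the five-step flow. Each node is a stand-in
# for an LLM call; real relevance would be judged by an evaluator LLM,
# here we use answer length as a toy proxy.

Node = Callable[[str], str]

def supervisor(question: str, nodes: dict[str, Node]) -> str:
    # Step 2: delegate the question to every node in the council.
    answers = {name: node(question) for name, node in nodes.items()}
    # Steps 3-4 are implicit: each callable returns its final output.
    # Step 5: pick the "most relevant" answer (toy scoring: longest).
    best = max(answers, key=lambda name: len(answers[name]))
    return answers[best]

council: dict[str, Node] = {
    "node-a": lambda q: "Short answer.",
    "node-b": lambda q: "A longer, more detailed answer with justification.",
}
print(supervisor("What is the claim?", council))
```

In a real deployment the node calls would run concurrently and the selection step would itself be an LLM judgment, but the control flow is the same.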

🎥 Project Walkthrough & Demo

🚀 Why This Matters

Unlike traditional single-response LLM systems, the LLM Debate Agent:

  • Makes disagreement explicit
  • Encourages deeper reasoning
  • Provides evaluative confidence instead of blind trust

This makes it suitable for decision support, education, and LLM evaluation workflows.

🤖 Tech Stack Used

Capabilities: Streaming · Push Notifications · Multi-Turn · Auth: none

Stack: adk-python, fastapi, langchain, mongo, opik, reactjs, typescript
View on GitHub