Smart Campus Helpdesk
A multi-agent AI helpdesk system for smart campuses using locally-hosted LLMs (Ollama/LM Studio). Features specialized agents for academics, finance, and events with MCP-style communication, FastAPI backend, and React chat interface. Runs completely offline with a modular, extensible architecture.
Updated 2025-08-02
Getting Started
1. Clone the repository
$ git clone https://github.com/Rajat25022005/Smart_Campus_Helpdesk
2. Navigate to the project
$ cd Smart_Campus_Helpdesk
3. Install dependencies
$ pip install -r requirements.txt
4. Run the agent
$ uvicorn main:app --reload
README
Project Roadmap: Smart Campus Helpdesk with Agentic AI
PHASE 1: Requirements & Setup
Goals:
- Agents communicate with each other (MCP style)
- Run locally using Ollama or LM Studio (offline LLMs)
- Users interact via a chat interface
- Agents pull data from a real database
- Modular, so more agents can be added later
Tools Needed:
- Python: Agent logic & backend
- FastAPI or Flask: API backend
- SQLite or MongoDB: Database
- Ollama / LM Studio: Running LLMs locally
- React.js or HTML/JS: Chat frontend
- LangChain (optional): Agent chaining
- Postman: For API testing
- VSCode: Development
PHASE 2: Agent Design
Agents:
- OrchestratorAgent - Routes queries to correct agent
- AcademicAgent - Handles class schedules, subjects, exams
- FinanceAgent - Handles fees, payments, due dates
- EventAgent - Handles events, workshops, fests
MCP-style Message Format (JSON):
{
  "sender": "OrchestratorAgent",
  "receiver": "AcademicAgent",
  "action": "get_exam_schedule",
  "data": {"student_id": "CU2025AI001"}
}
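The message format above maps naturally onto a small Python dataclass. A minimal sketch (the class name `MCPMessage` is illustrative, not part of the roadmap):

```python
from dataclasses import dataclass, field
import json

@dataclass
class MCPMessage:
    """MCP-style message exchanged between agents."""
    sender: str
    receiver: str
    action: str
    data: dict = field(default_factory=dict)

    def to_json(self) -> str:
        return json.dumps(self.__dict__)

    @classmethod
    def from_json(cls, raw: str) -> "MCPMessage":
        return cls(**json.loads(raw))

# Example: the orchestrator asks the AcademicAgent for an exam schedule
msg = MCPMessage("OrchestratorAgent", "AcademicAgent",
                 "get_exam_schedule", {"student_id": "CU2025AI001"})
print(msg.to_json())
```

Serializing through `to_json`/`from_json` keeps the wire format identical to the JSON example, so agents written later only need to agree on `action` names.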
PHASE 3: Database Design
Use SQLite or MongoDB.
Example Tables:
- courses(id, name, instructor, schedule)
- fees(student_id, amount_due, due_date)
- events(id, title, date, description)
- users(student_id, name, email)
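With SQLite the tables above can be created directly from Python's standard library. A sketch using an in-memory database (column types are assumptions; the roadmap only names the columns):

```python
import sqlite3

# In-memory DB for illustration; the real backend would use a file path
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE courses(id INTEGER PRIMARY KEY, name TEXT, instructor TEXT, schedule TEXT);
CREATE TABLE fees(student_id TEXT, amount_due REAL, due_date TEXT);
CREATE TABLE events(id INTEGER PRIMARY KEY, title TEXT, date TEXT, description TEXT);
CREATE TABLE users(student_id TEXT PRIMARY KEY, name TEXT, email TEXT);
""")

# Insert and read back a sample fee record
conn.execute("INSERT INTO fees VALUES (?, ?, ?)",
             ("CU2025AI001", 1500.0, "2025-09-01"))
row = conn.execute(
    "SELECT amount_due, due_date FROM fees WHERE student_id = ?",
    ("CU2025AI001",)).fetchone()
print(row)
```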
PHASE 4: Backend + Agent Logic
Use FastAPI with endpoints:
- /chat (POST) → OrchestratorAgent
- /academic → AcademicAgent
- /finance → FinanceAgent
- /events → EventAgent
Each agent:
- Parses MCP messages
- Queries DB
- Returns JSON response
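The three agent responsibilities above (parse MCP message, query DB, return JSON) can be sketched as one handler function. This example uses the FinanceAgent with an in-memory table; the function name `finance_agent` and the action `get_fee_status` are illustrative, not fixed by the roadmap:

```python
import json
import sqlite3

# Minimal in-memory fees table from Phase 3, with one sample row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fees(student_id TEXT, amount_due REAL, due_date TEXT)")
conn.execute("INSERT INTO fees VALUES ('CU2025AI001', 1500.0, '2025-09-01')")

def finance_agent(mcp: dict) -> dict:
    """Parse an MCP-style message, query the DB, return a JSON-serializable reply."""
    if mcp["action"] == "get_fee_status":
        row = conn.execute(
            "SELECT amount_due, due_date FROM fees WHERE student_id = ?",
            (mcp["data"]["student_id"],)).fetchone()
        payload = {"amount_due": row[0], "due_date": row[1]} if row else {}
        return {"sender": "FinanceAgent", "receiver": mcp["sender"],
                "action": "fee_status_result", "data": payload}
    return {"sender": "FinanceAgent", "receiver": mcp["sender"],
            "action": "error", "data": {"reason": "unknown action"}}

reply = finance_agent({"sender": "OrchestratorAgent", "receiver": "FinanceAgent",
                       "action": "get_fee_status",
                       "data": {"student_id": "CU2025AI001"}})
print(json.dumps(reply))
```

In the FastAPI backend, the `/finance` endpoint would simply deserialize the request body and pass it to this handler.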
PHASE 5: LLM Integration (Ollama / LM Studio)
Use Ollama’s HTTP API with models like mistral, gemma.
Example:
import requests

def call_ollama(prompt):
    # "stream": False makes Ollama return a single JSON object;
    # by default /api/generate streams newline-delimited chunks,
    # which would break response.json()
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
PHASE 6: Chat Interface
Tech Stack:
- React.js or HTML/JS
UI Components:
- TextBox for input
- Message bubbles
- Agent avatars
- Display chat log
Flow:
Frontend posts user input to /chat → response displayed in UI
PHASE 7: Communication Flow
- User types a question
- Frontend sends it to /chat (OrchestratorAgent)
- Orchestrator uses the LLM to decide the target agent
- Orchestrator sends an MCP message to that agent
- Agent queries the DB and replies
- Orchestrator formats the response
- Frontend displays it
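The routing decision in the flow above can be prototyped without the LLM at all. A keyword-based fallback router (the `ROUTES` table is an assumption; in the full system the Orchestrator would instead ask the local model via `call_ollama` to name the target agent):

```python
# Map each agent to keywords drawn from its Phase 2 responsibilities
ROUTES = {
    "AcademicAgent": ["exam", "class", "schedule", "subject"],
    "FinanceAgent": ["fee", "payment", "due"],
    "EventAgent": ["event", "workshop", "fest"],
}

def route_query(question: str) -> str:
    """Pick the receiving agent for a user question (keyword fallback)."""
    q = question.lower()
    for agent, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return agent
    # No match: let the orchestrator answer directly or ask for clarification
    return "OrchestratorAgent"

print(route_query("When is my AI exam?"))   # AcademicAgent
print(route_query("How much fee is due?"))  # FinanceAgent
```

Keeping a deterministic fallback alongside the LLM router also makes the system testable when Ollama is not running.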
PHASE 8: Optional Features
- Voice input via Web Speech API
- Add new agents (e.g., LibraryAgent)
- Save chat history
- User login system
- LangChain for agent chaining
Folder Structure
/smart-campus-helpdesk
├── backend/
│ ├── main.py
│ ├── agents/
│ │ ├── orchestrator.py
│ │ ├── academic_agent.py
│ │ ├── finance_agent.py
│ │ └── event_agent.py
│ ├── database/
│ │ ├── db.py
│ │ └── models.py
│ └── utils/
│ └── ollama.py
├── frontend/
│ └── chat-ui/
├── requirements.txt
└── README.md
Learning Outcomes
- Build and deploy multi-agent architecture
- Route messages using MCP
- Run offline LLMs via Ollama/LM Studio
- Build real-time chat interface
- Master DB + API integration
Capabilities
Streaming · Push Notifications · Multi-Turn · Auth: none
agentic-ai · ai · ai-agents · api · llm · mcp-server · ollama