Build a Local Slack Agent That Responds Using a Self-Hosted LLM (coming soon)

In this first hands-on project, you’ll build a fully local AI agent that listens for questions in Slack and replies in real time using a local LLM, such as Mistral running via Ollama. No cloud model APIs and no server of your own to deploy: just a clean Python setup that shows how agents can hook into real chat tools.

You will:

  • Set up a Slack RTM client that listens for new messages in real time (a minimal listener is sketched after this list)
  • Route user questions to a locally running LLM via Ollama (see the second sketch below)
  • Format basic prompts and post responses back into Slack threads (see the final sketch below)
  • Run the model and the agent loop 100% locally, with no external AI services or APIs
  • Understand the basic agent loop: input, model processing, and output
  • Prepare for more advanced logic in later projects
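
First, the listener. Here is a minimal sketch using the `rtm_v2` client from `slack_sdk`. Note that RTM is Slack's legacy real-time API (newer apps typically use Socket Mode), so this assumes a classic Slack app with a bot token; the `SLACK_BOT_TOKEN` environment variable name is an assumption for illustration.

```python
import os

from slack_sdk.rtm_v2 import RTMClient

# Assumes a classic Slack app whose bot token (xoxb-...) is exported
# as SLACK_BOT_TOKEN; RTM is Slack's legacy real-time API.
rtm = RTMClient(token=os.environ["SLACK_BOT_TOKEN"])

@rtm.on("message")
def handle(client: RTMClient, event: dict):
    # Skip bot messages (including our own replies) and empty events.
    if event.get("bot_id") or not event.get("text"):
        return
    print(f"[{event['channel']}] {event['text']}")

rtm.start()  # blocks, streaming events over a websocket
```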
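
Routing a question to the local model is a single HTTP call, because Ollama exposes a REST endpoint on localhost port 11434. A minimal sketch, assuming the Mistral model has already been pulled (`ollama pull mistral`); the helper name and timeout value are illustrative choices:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def ask_llm(prompt: str, model: str = "mistral") -> str:
    """Send a prompt to the local Ollama server and return the full reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local generation can be slow on CPU
    )
    resp.raise_for_status()
    return resp.json()["response"]
```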
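
Finally, putting the loop together: the handler below (which replaces the `print` handler in the first sketch and reuses `rtm` and `ask_llm` from above) wraps the user's text in a simple prompt template and posts the model's answer back into the original thread. The template wording is an assumption; replying with `thread_ts` keeps answers threaded instead of cluttering the channel.

```python
PROMPT_TEMPLATE = (
    "You are a helpful Slack assistant. Answer concisely.\n\n"
    "Question: {question}\nAnswer:"
)

@rtm.on("message")
def answer(client: RTMClient, event: dict):
    if event.get("bot_id") or not event.get("text"):
        return
    reply = ask_llm(PROMPT_TEMPLATE.format(question=event["text"]))
    client.web_client.chat_postMessage(
        channel=event["channel"],
        text=reply,
        # Reply in the parent thread if the question was already in one,
        # otherwise start a new thread under the question itself.
        thread_ts=event.get("thread_ts") or event["ts"],
    )
```

That handler is the whole base loop from the list above: agent input (the Slack event), model processing (the Ollama call), and output (the threaded reply).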

