Slack MCP Client in Go

This project provides a Slack bot client that serves as a bridge between Slack and Model Context Protocol (MCP) servers. By leveraging Slack as the user interface, it allows LLM models to interact with multiple MCP servers using standardized MCP tools.

[Badges: GitHub Workflow Status · Go Version · Trivy Scan · Docker Image · GitHub Release · License: MIT]

Overview

This project implements a Slack bot client that acts as a bridge between Slack and Model Context Protocol (MCP) servers. It uses Slack as a user interface while enabling LLM models to communicate with various MCP servers through standardized MCP tools.

Important distinction: Interacting with the Slack API directly is not this client's primary purpose. However, it can provide Slack API functionality by connecting to a dedicated Slack MCP server (such as modelcontextprotocol/servers/slack) alongside other MCP servers.
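
For example, connecting a Slack MCP server looks the same as connecting any other server in mcp-servers.json (the full format is described under MCP Server Configuration below). The package name and environment variables in this sketch are assumptions based on the upstream modelcontextprotocol/servers repository; check that server's own documentation for the exact values:

{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-your-bot-token",
        "SLACK_TEAM_ID": "T01234567"
      }
    }
  }
}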

How It Works

flowchart LR
    User([User]) --> SlackBot
    
    subgraph SlackBotService[Slack Bot Service]
        SlackBot[Slack Bot] <--> MCPClient[MCP Client]
    end
    

    
    MCPClient <--> MCPServer1[MCP Server 1]
    MCPClient <--> MCPServer2[MCP Server 2]
    MCPClient <--> MCPServer3[MCP Server 3]
    
    MCPServer1 <--> Tools1[(Tools)]
    MCPServer2 <--> Tools2[(Tools)]
    MCPServer3 <--> Tools3[(Tools)]
    
    style SlackBotService fill:#F8F9F9,stroke:#333,stroke-width:2px
    style SlackBot fill:#F4D03F,stroke:#333,stroke-width:2px
    style MCPClient fill:#2ECC71,stroke:#333,stroke-width:2px
    style MCPServer1 fill:#E74C3C,stroke:#333,stroke-width:2px
    style MCPServer2 fill:#E74C3C,stroke:#333,stroke-width:2px
    style MCPServer3 fill:#E74C3C,stroke:#333,stroke-width:2px
    style Tools1 fill:#9B59B6,stroke:#333,stroke-width:2px
    style Tools2 fill:#9B59B6,stroke:#333,stroke-width:2px
    style Tools3 fill:#9B59B6,stroke:#333,stroke-width:2px
  1. User interacts only with Slack, sending messages through the Slack interface
  2. Slack Bot Service is a single process that includes:
    • The Slack Bot component that handles Slack messages
    • The MCP Client component that communicates with MCP servers
  3. The MCP Client forwards requests to the appropriate MCP Server(s)
  4. MCP Servers execute their respective tools and return results
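
In code terms, the flow above is roughly: receive a Slack message, let the LLM respond (possibly requesting tools), execute any requested tools through the MCP client, and post the final answer back to Slack. The Go sketch below is purely illustrative; the interfaces and names are hypothetical and do not correspond to the project's actual internal API.

package bridge

// Hypothetical types for illustration only.
type ToolCall struct {
	Server string         // which MCP server registered the tool
	Name   string         // tool name
	Args   map[string]any // tool arguments proposed by the LLM
}

type LLM interface {
	Complete(prompt string) (reply string, calls []ToolCall)
	Continue(toolResult string) (reply string)
}

type MCPClient interface {
	CallTool(server, name string, args map[string]any) string
}

type Messenger interface {
	PostMessage(channel, text string)
}

// HandleMention shows the round trip: Slack message -> LLM -> MCP tool(s) -> Slack reply.
func HandleMention(llm LLM, mcp MCPClient, slack Messenger, channel, text string) {
	reply, calls := llm.Complete(text)
	for _, call := range calls {
		result := mcp.CallTool(call.Server, call.Name, call.Args)
		reply = llm.Continue(result)
	}
	slack.PostMessage(channel, reply)
}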

Features

Installation

From Binary Release

Download the latest binary from the GitHub releases page or install using Go:

# Install latest version using Go
go install github.com/tuannvm/slack-mcp-client@latest

# Or build from source
git clone https://github.com/tuannvm/slack-mcp-client.git
cd slack-mcp-client
make build
# Binary will be in ./bin/slack-mcp-client

Running Locally with Binary

After installing the binary, you can run it locally with the following steps:

  1. Set up environment variables:
# Using environment variables directly
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_APP_TOKEN="xapp-your-app-token"
export OPENAI_API_KEY="sk-your-openai-key"
export OPENAI_MODEL="gpt-4o"
export LOG_LEVEL="info"

# Or create a .env file and source it
cat > .env << EOL
SLACK_BOT_TOKEN="xoxb-your-bot-token"
SLACK_APP_TOKEN="xapp-your-app-token"
OPENAI_API_KEY="sk-your-openai-key"
OPENAI_MODEL="gpt-4o"
LOG_LEVEL="info"
EOL

source .env
  2. Create an MCP servers configuration file:
# Create mcp-servers.json in the current directory
cat > mcp-servers.json << EOL
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "$HOME"],
      "env": {}
    }
  }
}
EOL
  3. Run the application:
# Run with default settings (looks for mcp-servers.json in current directory)
slack-mcp-client

# Or specify a custom config file location
slack-mcp-client --config /path/to/mcp-servers.json

# Enable debug mode
slack-mcp-client --debug

# Specify OpenAI model
slack-mcp-client --openai-model gpt-4o-mini

# Configure metrics port
slack-mcp-client --metrics-port 9090

The application will connect to Slack and start listening for messages. You can check the logs for any errors or connection issues.

RAG Setup and Usage

The client includes a RAG (Retrieval-Augmented Generation) system built for compatibility with LangChain Go:

Quick Start with RAG

  1. Enable RAG in your configuration:
{
  "llm_provider": "openai",
  "llm_providers": {
    "openai": {
      "type": "openai", 
      "model": "gpt-4o",
      "rag_config": {
        "enabled": true,
        "provider": "json",
        "database_path": "./knowledge.json",
        "chunk_size": 1000,
        "chunk_overlap": 200,
        "max_results": 10
      }
    }
  }
}
  2. Ingest documents using CLI:
# Ingest PDF files from a directory
slack-mcp-client --rag-ingest ./company-docs --rag-db ./knowledge.json

# Test search functionality
slack-mcp-client --rag-search "vacation policy" --rag-db ./knowledge.json

# Get database statistics
slack-mcp-client --rag-stats --rag-db ./knowledge.json
  3. Use in Slack:

Once configured, the LLM can automatically search your knowledge base:

User: “What’s our vacation policy?”

AI: “Let me search our knowledge base for vacation policy information…” (Automatically searches RAG database)

AI: “Based on our company policy documents, you get 15 days of vacation…”

RAG Features

Custom Prompts and Assistants

The client supports advanced prompt engineering capabilities for creating specialized AI assistants:

System Prompts

Create custom AI personalities and behaviors:

# Create a custom system prompt file
cat > sales-assistant.txt << EOL
You are SalesGPT, a helpful sales assistant specializing in B2B software sales.

Your expertise includes:
- Lead qualification and discovery
- Solution positioning and value propositions  
- Objection handling and negotiation
- CRM best practices and sales processes

Always:
- Ask qualifying questions to understand prospect needs
- Provide specific, actionable sales advice
- Reference industry best practices
- Maintain a professional yet friendly tone

When discussing pricing, always emphasize value over cost.
EOL

# Use the custom prompt
slack-mcp-client --system-prompt ./sales-assistant.txt

Configuration-Based Prompts

Define prompts in your configuration:

{
  "llm_provider": "openai",
  "llm_providers": {
    "openai": {
      "type": "openai",
      "model": "gpt-4o",
      "system_prompt": "You are a helpful DevOps assistant specializing in Kubernetes and cloud infrastructure.",
      "conversation_starters": [
        "Help me debug a Kubernetes pod issue",
        "Explain best practices for CI/CD pipelines", 
        "Review my Dockerfile for optimization"
      ]
    }
  }
}

Assistant Roles

Create specialized assistants for different use cases:

Agent Mode

Agent Mode enables more interactive and context-aware conversations using LangChain’s agent framework. Instead of single-prompt interactions, agents can engage in multi-step reasoning, use tools more strategically, and maintain better context throughout conversations.

How Agent Mode Works

Agent Mode uses LangChain’s conversational agent framework to provide:

  1. Interactive Conversations: Maintains context across multiple exchanges
  2. Strategic Tool Usage: Agents decide when and how to use available tools
  3. Multi-Step Reasoning: Can break down complex problems into manageable steps
  4. Streaming Responses: Provides real-time updates during processing
  5. User Context Integration: Incorporates user information for personalized responses

Agent Mode Configuration

Enable Agent Mode in your configuration file:

{
  "use_agent": true,
  "use_native_tools": false,
  "custom_prompt": "You are a DevOps expert specializing in Kubernetes and cloud infrastructure. Always think through problems step by step.",
  "llm_provider": "langchain",
  "llm_providers": {
    "langchain": {
      "type": "openai",
      "model": "gpt-4o",
      "agent_prompt_prefix": "You are a helpful assistant with access to various tools."
    }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}

Configuration Options

Agent vs Standard Mode

Standard Mode:

Agent Mode:

Agent Mode Examples

Interactive Development Consultation:

User: "I need help optimizing my React app performance"

Agent Response:
🤖 I'd be happy to help optimize your React app performance! Let me understand your current setup better.

[Agent maintains conversation context and asks relevant follow-up questions]
Agent: "What specific performance issues are you experiencing? Are you seeing slow renders, large bundle sizes, or something else?"

User: "The app takes too long to load initially"

Agent: "Let me check your current bundle setup and suggest optimizations..."
[Agent uses filesystem tools to analyze the project structure and provides targeted advice]

Contextual Problem Solving:

User: "Can you help me with my deployment pipeline?"

Agent Response:
🤖 I'll help you with your deployment pipeline. Since I know you're working on a React project, let me check your current CI/CD setup.

[Agent leverages previous conversation context and user information to provide personalized assistance]
[Agent strategically uses relevant tools based on the conversation flow]

Agent Mode Best Practices

  1. System Prompts: Design clear, specific system prompts that guide the agent’s behavior
  2. Tool Selection: Provide relevant tools for the agent’s domain
  3. Context Management: Agents maintain better context across conversations
  4. User Personalization: Leverage user context integration for personalized responses
  5. Tool Strategy: Choose between native tools or system prompt-based tools based on your needs

Limitations and Considerations

Kubernetes Deployment with Helm

For deploying to Kubernetes, a Helm chart is available in the helm-chart directory. This chart provides a flexible way to deploy the slack-mcp-client with proper configuration and secret management.

Installing from GitHub Container Registry

The Helm chart is also available directly from GitHub Container Registry, allowing for easier installation without needing to clone the repository:

# Log in to the GitHub Container Registry (only needed once)
helm registry login ghcr.io -u USERNAME -p GITHUB_TOKEN

# Pull the Helm chart
helm pull oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0

# Or install directly
helm install my-slack-bot oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0 -f values.yaml

You can check available versions by visiting the GitHub Container Registry in your browser.

Prerequisites

Basic Installation

# Create a values file with your configuration
cat > values.yaml << EOL
secret:
  create: true

env:
  SLACK_BOT_TOKEN: "xoxb-your-bot-token"
  SLACK_APP_TOKEN: "xapp-your-app-token"
  OPENAI_API_KEY: "sk-your-openai-key"
  OPENAI_MODEL: "gpt-4o"
  LOG_LEVEL: "info"

# Optional: Configure MCP servers
configMap:
  create: true
EOL

# Install the chart
helm install my-slack-bot ./helm-chart/slack-mcp-client -f values.yaml

Configuration Options

The Helm chart supports various configuration options including:

For more details, see the Helm chart README.

Using the Docker Image from GHCR

The Helm chart uses the Docker image from GitHub Container Registry (GHCR) by default. You can specify a particular version or use the latest tag:

# In your values.yaml
image:
  repository: ghcr.io/tuannvm/slack-mcp-client
  tag: "latest"  # Or use a specific version like "1.0.0"
  pullPolicy: IfNotPresent

To manually pull the image:

# Pull the latest image
docker pull ghcr.io/tuannvm/slack-mcp-client:latest

# Or pull a specific version
docker pull ghcr.io/tuannvm/slack-mcp-client:1.0.0

If you’re using private images, you can configure image pull secrets in your values:

imagePullSecrets:
  - name: my-ghcr-secret

Docker Compose for Local Testing

For local testing and development, you can use Docker Compose to easily run the slack-mcp-client along with additional MCP servers.

Setup

  1. Create a .env file with your credentials:
# Create .env file from example
cp .env.example .env
# Edit the file with your credentials
nano .env
  2. Create an mcp-servers.json file (or use the example):
# Create mcp-servers.json from example
cp mcp-servers.json.example mcp-servers.json
# Edit if needed
nano mcp-servers.json
  3. Start the services:
# Start services in detached mode
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down

Docker Compose Configuration

The included docker-compose.yml provides:

version: '3.8'

services:
  slack-mcp-client:
    image: ghcr.io/tuannvm/slack-mcp-client:latest
    container_name: slack-mcp-client
    environment:
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
      - SLACK_APP_TOKEN=${SLACK_APP_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_MODEL=${OPENAI_MODEL:-gpt-4o}
    volumes:
      - ./mcp-servers.json:/app/mcp-servers.json:ro

You can easily extend this setup to include additional MCP servers in the same network.
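
For example, an HTTP-mode MCP server could run as a second Compose service and be referenced from mcp-servers.json by its service name. The service and image names below are placeholders, not a published server:

services:
  slack-mcp-client:
    # ... as above ...
    depends_on:
      - example-mcp-server
  example-mcp-server:
    image: my-org/example-mcp-server:latest  # placeholder image
    expose:
      - "8080"

The corresponding mcp-servers.json entry would then use HTTP mode (see MCP Server Configuration below), e.g. "url": "http://example-mcp-server:8080/mcp".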

Slack App Setup

  1. Create a new Slack app at https://api.slack.com/apps
  2. Enable Socket Mode and generate an app-level token
  3. Add the following Bot Token Scopes:
    • app_mentions:read
    • chat:write
    • im:history
    • im:read
    • im:write
    • users:read
    • users.profile:read
  4. Enable Event Subscriptions and subscribe to:
    • app_mention
    • message.im
  5. Install the app to your workspace

For detailed instructions on Slack app configuration, token setup, required permissions, and troubleshooting common issues, see the Slack Configuration Guide.
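
If you prefer to create the app from a manifest, the scopes and events above translate roughly to the following (the app and bot names are placeholders; review the generated app before installing):

display_information:
  name: MCP Client Bot
features:
  bot_user:
    display_name: mcp-client-bot
    always_online: true
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - im:history
      - im:read
      - im:write
      - users:read
      - users.profile:read
settings:
  event_subscriptions:
    bot_events:
      - app_mention
      - message.im
  socket_mode_enabled: true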

LLM Integration

The client supports multiple LLM providers through a flexible integration system:

LangChain Gateway

The LangChain gateway enables seamless integration with various LLM providers:

LLM-MCP Bridge

The custom LLM-MCP bridge layer enables any LLM to use MCP tools without requiring native function-calling capabilities:
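
As a purely hypothetical illustration of the idea (not the bridge's actual wire format), the available tool descriptions can be injected into the prompt and the model asked to reply with a structured block whenever it wants to call a tool; the bridge then detects the block, invokes the matching MCP tool, and feeds the result back to the model:

# Model output the bridge would intercept (format and tool name are illustrative only)
{"tool": "filesystem_read_file", "arguments": {"path": "README.md"}}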

Configuration

LLM providers can be configured via environment variables or command-line flags:

# Set OpenAI as the provider (default)
export LLM_PROVIDER="openai"
export OPENAI_MODEL="gpt-4o"

# Use Anthropic
export LLM_PROVIDER="anthropic"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"

# Or use Ollama
export LLM_PROVIDER="ollama"
export LANGCHAIN_OLLAMA_URL="http://localhost:11434"
export LANGCHAIN_OLLAMA_MODEL="llama3"

Switching Between Providers

You can easily switch between providers by changing the LLM_PROVIDER environment variable:

# Use OpenAI
export LLM_PROVIDER=openai

# Use Anthropic
export LLM_PROVIDER=anthropic

# Use Ollama (local)
export LLM_PROVIDER=ollama

Configuration

The client uses two main configuration approaches:

Environment Variables

Configure LLM providers and Slack integration using environment variables:

Variable | Description | Default
SLACK_BOT_TOKEN | Bot token for Slack API | (required)
SLACK_APP_TOKEN | App-level token for Socket Mode | (required)
OPENAI_API_KEY | API key for OpenAI authentication | (required)
OPENAI_MODEL | OpenAI model to use | gpt-4o
ANTHROPIC_API_KEY | API key for Anthropic authentication | (required for Anthropic)
ANTHROPIC_MODEL | Anthropic model to use | claude-3-5-sonnet-20241022
LOG_LEVEL | Logging level (debug, info, warn, error) | info
LLM_PROVIDER | LLM provider to use (openai, anthropic, ollama) | openai
LANGCHAIN_OLLAMA_URL | URL for Ollama when using LangChain | http://localhost:11434
LANGCHAIN_OLLAMA_MODEL | Model name for Ollama when using LangChain | llama3

Monitoring Configuration

The client includes Prometheus metrics support for monitoring tool usage and performance:

Example metrics access:

# Access metrics endpoint
curl http://localhost:8080/metrics

# Run with custom metrics port
slack-mcp-client --metrics-port 9090
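
To have Prometheus scrape these metrics, a minimal scrape job could look like this (the job name and target are examples; match the port to --metrics-port):

scrape_configs:
  - job_name: slack-mcp-client
    static_configs:
      - targets: ["localhost:8080"]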

MCP Server Configuration

MCP servers are configured via a JSON configuration file (default: mcp-servers.json):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
      "env": {}
    },
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    },
    "web-api": {
      "mode": "http",
      "url": "http://localhost:8080/mcp",
      "initialize_timeout_seconds": 30
    }
  }
}

For detailed configuration options, see the Implementation Notes.

Slack-Formatted Output

The client includes a comprehensive Slack-formatted output system that enhances message display in Slack:

For more details, see the Slack Formatting Guide.

Transport Modes

The client supports three transport modes:

Documentation

Comprehensive documentation is available in the docs/ directory:

Configuration & Setup

Development & Implementation

User Guides

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

CI/CD and Releases

This project uses GitHub Actions for continuous integration and GoReleaser for automated releases.

Continuous Integration Checks

Our CI pipeline performs the following checks on all PRs and commits to the main branch:

Code Quality

Security

Testing

Release Process

When changes are merged to the main branch:

  1. CI checks are run to validate code quality and security
  2. If successful, a new release is automatically created with:
    • Semantic versioning based on commit messages
    • Binary builds for multiple platforms
    • Docker image publishing to GitHub Container Registry
    • Helm chart publishing to GitHub Container Registry