Case study · Homelab Platform (in-house) 2026

Slack + Discord + Telegram from one Fastify endpoint.

A small Fastify service that fans notifications out to Slack, Discord, and Telegram from one POST endpoint.

  • 3 channels supported
  • 1 unified API endpoint

Brief

A small service that takes a notification payload and fans it out to Slack, Discord, and Telegram from a single endpoint. Used by the rest of the platform: ContentForge calls it for the daily review prompt, FB-Media calls it for pipeline alerts, the homelab dashboard calls it for threshold breaches.

The brief was simple: replace three different notification clients (one per channel) with one HTTP endpoint that does the right thing. Reduce the number of credentials each project has to know about from three to one.

Architecture

A single Fastify endpoint (POST /notify) on Node, running in CT 209 on the services Proxmox node. The payload is a thin envelope:

{
  "message": { "title": "...", "body": "...", "source": "..." },
  "severity": "info | success | warning | error | critical"
}
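A caller wraps its alert in that envelope and POSTs it. A minimal sketch — the URL and helper names are placeholders, not the real CT 209 address or code:

```javascript
// NOTIFY_URL is a placeholder, not the real service address.
const NOTIFY_URL = "http://notify.internal:3000/notify";

// Build the thin envelope described above.
function makeEnvelope(title, body, source, severity = "info") {
  return {
    message: { title, body, source },
    severity,
  };
}

// Sending is one fetch call (Node 18+ ships a global fetch).
async function notify(envelope) {
  const res = await fetch(NOTIFY_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(envelope),
  });
  if (!res.ok) throw new Error(`notify failed: ${res.status}`);
}
```

This keeps each calling project down to one URL and one payload shape, which is the whole point of the brief.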

The handler picks the per-channel formatting (Slack attachment with color, Discord embed with severity icon, Telegram Markdown with emoji) in a per-channel switch statement. A queued refactor moves this into a single lookup table, so adding a fourth channel later means one new entry rather than another switch arm.
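That per-channel formatting table might look like this — a sketch only; the severity colors, icons, and payload shapes are assumptions, not the production code:

```javascript
// Illustrative severity metadata -- the real colors and icons may differ.
const SEVERITY = {
  info:     { color: "#36a64f", icon: "ℹ️" },
  success:  { color: "#2eb67d", icon: "✅" },
  warning:  { color: "#ecb22e", icon: "⚠️" },
  error:    { color: "#e01e5a", icon: "❌" },
  critical: { color: "#8b0000", icon: "🚨" },
};

// One formatter per channel; adding a channel means one new entry here.
const FORMATTERS = {
  slack: (msg, sev) => ({
    attachments: [{ color: SEVERITY[sev].color, title: msg.title, text: msg.body }],
  }),
  discord: (msg, sev) => ({
    embeds: [{ title: `${SEVERITY[sev].icon} ${msg.title}`, description: msg.body }],
  }),
  telegram: (msg, sev) => ({
    text: `${SEVERITY[sev].icon} *${msg.title}*\n${msg.body}`,
    parse_mode: "Markdown",
  }),
};

function format(channel, msg, sev) {
  return FORMATTERS[channel](msg, sev);
}
```

The table doubles as documentation: every channel's payload shape is visible in one place.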

A second endpoint (POST /telegram/webhook) listens for inbound Telegram messages and replies via a small bi-directional chat backend pointed at the workstation Ollama container. The chat is kept out of public reach by a Telegram chat-ID allowlist: only paired chat IDs get responses.
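The allowlist guard can be sketched as follows — the config source and helper names are assumptions; only the Telegram update shape (`message.chat.id`) follows the Bot API:

```javascript
// Build the set of paired chat IDs from a comma-separated string
// (e.g. read from an env var or config file -- an assumption here).
function buildAllowlist(csv) {
  return new Set(csv.split(",").map((s) => s.trim()).filter(Boolean));
}

// Returns true only when the inbound Telegram update comes from a paired chat.
function isAllowed(update, allowlist) {
  const chatId = update?.message?.chat?.id;
  return chatId !== undefined && allowlist.has(String(chatId));
}
```

The webhook handler checks `isAllowed` first and silently drops anything else, so an unpaired sender gets no signal the bot exists.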

Outcomes

  • 3 channels (Slack, Discord, Telegram) reachable through one HTTP endpoint.
  • One unified payload schema: every project that calls in uses the same envelope.
  • Bi-directional Telegram chat with a workstation-resident Ollama backend, with no per-message API cost.
  • Zero unplanned outages since launch; the service runs as a single systemd unit with auto-restart.

What’s next

Two items on the next-iteration list:

  1. Replace the per-channel switch statement with a small severity → channel-formatting lookup table. The switch is fine at three channels but would grow unruly at six; the refactor is queued.
  2. Add a webhook receiver for Prometheus Alertmanager. The platform already runs Prometheus + Grafana, so the natural next consumer of /notify is Alertmanager firing on Grafana threshold breaches.
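That Alertmanager receiver is mostly a translation step. A sketch of mapping Alertmanager's webhook payload onto the /notify envelope — the field names (`status`, `alerts`, `labels`, `annotations`) follow Alertmanager's webhook format, but the severity mapping is an assumption:

```javascript
// Map Alertmanager severity labels onto the envelope's severity values
// (an illustrative mapping, not a finalized one).
const SEVERITY_MAP = { critical: "critical", warning: "warning", info: "info" };

// Translate an Alertmanager webhook payload into the /notify envelope.
function toEnvelope(amPayload) {
  const alert = amPayload.alerts?.[0] ?? {};
  const resolved = amPayload.status === "resolved";
  return {
    message: {
      title: alert.labels?.alertname ?? "unknown alert",
      body: alert.annotations?.summary ?? "",
      source: "alertmanager",
    },
    severity: resolved
      ? "success"
      : SEVERITY_MAP[alert.labels?.severity] ?? "warning",
  };
}
```

Resolved alerts map to "success" so the channels show recovery as well as failure.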

The tech

Tech used

  • Node
  • Fastify
  • LXC

What I'd do differently

Standardise the severity → channel-formatting mapping in a small lookup table from day one; the per-channel switch grew unruly fast.

Want something like this for your team?

30-minute discovery call. No pitch deck. We talk about what you're shipping, what's in the way, and whether I can help. If yes, you get a fixed quote within a week.