
Getting Started

  • Node.js 18+
  • Ollama running locally with qwen2.5:7b pulled (or set SENSOR_HUMOR_MODEL for a different model)
  • An MCP client (Claude Code, Cursor, or any MCP-compatible host)

Optional for voice:

  • mcp-voice-soundboard with Piper backend
  • Piper ONNX voice models (en_GB-alan-medium, en_US-ryan-high, en_US-lessac-high, en_GB-cori-high)
Install from npm:

```shell
npm install sensor-humor
```

Or clone and link for development:

```shell
git clone https://github.com/mcp-tool-shop-org/sensor-humor.git
cd sensor-humor
npm install
npm run build
```
Pull the default model:

```shell
ollama pull qwen2.5:7b
```

This is the default model for comedy generation. It has strong instruction following, concise output, and solid JSON schema adherence. Override with the SENSOR_HUMOR_MODEL environment variable if needed.
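For example, to point sensor-humor at a different local model (the model tag below is illustrative; substitute any model you have pulled with Ollama):

```shell
# Hypothetical override: "llama3.1:8b" stands in for any pulled Ollama model tag.
export SENSOR_HUMOR_MODEL=llama3.1:8b
echo "$SENSOR_HUMOR_MODEL"
```

Start the server as usual afterward and it will generate with the overridden model.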

Start the server with debug logging enabled:

```shell
SENSOR_HUMOR_DEBUG=true npm start
```

The server runs on stdio transport. Your MCP client connects to it as a local tool server.
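To register it, most MCP clients take a JSON entry for local stdio servers. A sketch following the common `mcpServers` convention used by Claude Desktop and Claude Code (exact file location and key names depend on your client; the `npx` invocation assumes the npm package is installed or fetchable):

```json
{
  "mcpServers": {
    "sensor-humor": {
      "command": "npx",
      "args": ["sensor-humor"],
      "env": {
        "SENSOR_HUMOR_DEBUG": "false"
      }
    }
  }
}
```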

In your MCP client:

```
mood_set(style: "dry")
```

You should get back:

```json
{
  "mood": "dry",
  "description": "Deadpan, minimalist, says the obvious like devastating news",
  "voice_notes": "Flat, weary, metronomic"
}
```

Then try:

```
comic_timing(text: "null pointer at 0xdeadbeef", technique: "understatement")
```

Expected: a flat, one-sentence rewrite like “Pointer at deadbeef. Naturally.”

Start mcp-voice-soundboard with Piper:

```shell
cd mcp-voice-soundboard
VOICE_SOUNDBOARD_ENGINE=piper VOICE_SOUNDBOARD_PIPER_MODEL_DIR=/path/to/piper/models npm start
```

Then after any sensor-humor tool call, pipe the output to voice:

```
voice_speak({ text: result.roast, mood: "roast" })
```

The mood parameter maps to a Piper voice + prosody preset automatically.
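A minimal sketch of how such a mood-to-preset lookup might work. The specific voice pairings and prosody values below are illustrative assumptions, not mcp-voice-soundboard's actual table; the voice model names come from the prerequisites list above:

```typescript
// Hypothetical mood -> Piper preset mapping. Pairings and prosody
// numbers are illustrative, not the library's real configuration.
type Prosody = { rate: number; pitch: number };
interface VoicePreset { model: string; prosody: Prosody }

const MOOD_PRESETS: Record<string, VoicePreset> = {
  dry:   { model: "en_GB-alan-medium", prosody: { rate: 0.9, pitch: -2 } },
  roast: { model: "en_US-ryan-high",   prosody: { rate: 1.1, pitch: 1 } },
};

// Unknown moods fall back to a neutral voice at default prosody.
function presetFor(mood: string): VoicePreset {
  return MOOD_PRESETS[mood] ?? {
    model: "en_US-lessac-high",
    prosody: { rate: 1.0, pitch: 0 },
  };
}
```

The point of the table is that callers only pass a mood string; voice selection and prosody stay centralized in one place.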