Using MPD to set up HomeAssistant remote speakers

August 28, 2023

Of late I’ve been experimenting with HomeAssistant, a self-hosted home automation system. It started as a way to get a working doorbell without needing to rip the walls open, and I’ve been gradually adding new features like spoken reminders to put the chickens to bed.

Of course, for this, it needs to be able to play audio, and the audio needs to be audible throughout the house. Early testing was done by plugging a tiny USB-powered speaker into the actual machine HA was running on, but this didn’t have enough juice to be audible even in the next room.

HomeAssistant, of course, supports lots of media players. A lot of those are unsuitable for my needs, though, either because they require purchasing additional (and usually quite expensive) hardware, or because they rely on external cloud services; I wanted a way to use the speakers on the devices I already had.

My first thought was to set up a bunch of things as DLNA media renderers, which has good support in HomeAssistant and interoperates with a bunch of other things besides. But this turned out to be harder than I expected, especially on Android, so I ended up going with plan B: Music Player Daemon (MPD). HA’s MPD integration is pretty barebones – it doesn’t let you assign the device to a physical area, for example – but it suffices for setting up a device that you can push audio to from HA; and you can run MPD on a lot of platforms.

The Devices

I ended up using three different families of devices as speakers. Getting these working turned out to be more complicated, in every case, than I expected, so I split them off into their own posts:

The end result is not usable as a multi-room music playback system like Snapcast or Sonos – in particular, there is no way to synchronize playback between speakers – but for what I wanted, i.e. the ability to play brief announcements and have them audible anywhere in the house, it works fine.
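
Whatever the platform, what HA actually needs from each device is the same thing: an MPD instance listening on the network, with an audio output pointed at the local speaker. As a very rough sketch, a minimal mpd.conf looks something like this (the ALSA output type is just an assumption, and a real config will also want the usual music and state directories for your platform):

# Minimal sketch of an mpd.conf that HA can talk to.
# Listen on the network rather than just localhost, on the default port.
bind_to_address  "0.0.0.0"
port             "6600"

# Play through the local sound hardware; type and device vary per platform.
audio_output {
    type  "alsa"
    name  "Local speaker"
}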

HomeAssistant Configuration

With MPD up and running on everything, getting HomeAssistant to talk to them was actually the easy part.

The MPD integration can’t be configured from the GUI, but is straightforward to add to configuration.yaml:

media_player:
  - platform: mpd
    name: mpd_pladix
    host: pladix
  - platform: mpd
    name: mpd_flox
    host: flox
    port: 8600  # defaults to 6600 if omitted

Then restart HA and they should all show up as media_player entities, and become available once HA successfully connects to them.
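
A quick way to check that an individual player is wired up correctly is to call media_player.play_media on it directly from Developer Tools → Services, with any audio file HA can see (the file name below is just a placeholder):

service: media_player.play_media
target:
  entity_id: media_player.mpd_pladix
data:
  media_content_id: media-source://media_source/local/doorbell.mp3  # placeholder file
  media_content_type: music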

Grouping them together

What we have so far is sufficient to let you send media to any individual media player, but of course the goal is to send it to all of them. HA doesn’t have a convenient way to group them together, so we turn to a script:

tannoy:
  alias: Tannoy
  description: Play sound on all house speakers
  mode: queued  # overlapping announcements queue up rather than being dropped
  icon: mdi:speaker-multiple
  fields:
    sound:
      name: sound
      description: the sound to play
      selector:
        media:
  sequence:
  - service: media_player.play_media
    data:
      # pass the script's "sound" argument straight through to the players
      media_content_id: "{{ sound.media_content_id }}"
      media_content_type: "{{ sound.media_content_type }}"
    metadata: {}
    target:
      entity_id:
      - media_player.mpd_pladix
      - media_player.mpd_whirlwind
      - media_player.mpd_sargo
      - media_player.mpd_bullhead
      - media_player.mpd_flox

This gives you a new service, script.tannoy, which takes a media item as its sole argument – which can be any audio media available to HA, including TTS output – and plays it on all of those speakers. Adding or removing a speaker merely requires changing the list of entity_ids at the end.
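
For example, an automation action that plays a (hypothetical) chime file on every speaker looks like this:

- service: script.tannoy
  data:
    sound:
      media_content_id: media-source://media_source/local/chime.mp3  # placeholder file
      media_content_type: music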

Text to Speech

It would be nice if we could also call a script with some text as an argument, and have it automatically converted to speech and played through all the speakers. Conveniently, we can! If you have a TTS engine installed, you can generate a media-source:// URI on the fly that will call the TTS engine, generate the audio file, and send it to the output.

tannoy_tts:
  alias: Tannoy TTS
  description: Play TTS on all house speakers
  icon: mdi:speaker-multiple
  mode: queued
  fields:
    message:
      name: Message
      description: The message to output
      selector:
        text:
  sequence:
  - service: script.tannoy
    data:
      sound:
        media_content_id: "media-source://tts/tts.piper?language=en_US&voice=en_US-amy-low&message="
        media_content_type: vox
        metadata: {}

This script reuses the tannoy script above so we don’t need to change the list of speakers in multiple places.
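
An automation that wants to make an announcement – the chicken reminder from the start of the post, say – then needs only a single action:

- service: script.tannoy_tts
  data:
    message: "Time to put the chickens to bed"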

Messages from Other Systems

Now that HA can play sound effects and speak everywhere in the house, wouldn’t it be cool if any system on the local network could do that? This last trick needs an MQTT server, but it’s not hard to set one up, and once you have it running (and HA connected to it) you can do this in automations.yaml:

- id: '1693355984360'
  alias: MQTT to TTS
  description: ''
  trigger:
  - platform: mqtt
    topic: homeassistant/tannoy
  condition: []
  action:
  - service: script.tannoy_tts
    data:
      message: "{{ trigger.payload }}"  # speak whatever text was published
  mode: single

and now any text payload published to the homeassistant/tannoy MQTT topic will be spoken using the TTS engine. To be honest, I don’t have a good use for that yet. But it is cool.
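
For completeness, publishing to that topic from another machine is a one-liner with any MQTT client. Here is a small Python sketch using paho-mqtt; the broker hostname and the lack of authentication are assumptions, so adjust for your setup:

# announce.py: have HomeAssistant speak a message, via MQTT.
# "mqtt.example.local" is a placeholder for your broker's hostname.
import paho.mqtt.publish as publish

publish.single(
    "homeassistant/tannoy",              # the topic the automation above listens on
    payload="The backup has finished.",  # text that will be run through TTS
    hostname="mqtt.example.local",
)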