Building a digital roster/serving board using Companion and the Planning Center Services API

If you’re involved in tech ministry and like to tinker, chances are you’ve heard of — and maybe even used — micboard.io.

This is Micboard.

Straight from their website, “Micboard simplifies microphone monitoring and storage for artists, engineers, and volunteers. View battery, audio, and RF levels from any device on the network.” It’s a neat tool and has helped a lot of teams over the years.

I always liked the idea of Micboard because it would be a great way to show who is serving that day. We tried to implement it at my church but eventually moved away from it, mainly because it hadn’t been updated in quite a while (over 6 years now), and we needed some additional features. Specifically, we were looking for integration with Planning Center Services — something that could automatically pull assignments from an interface our team was already familiar with, and something we could use for more than just people on stage.

At first, I forked the Micboard repo (since it’s open-source) and started making improvements, cleaning up some code, and tweaking it to run more easily on modern macOS systems. But pretty quickly, I realized I had too much on my plate to maintain a whole fork long-term.

Fast forward a year or so. I came across a few posts in some Facebook groups I’m in where people were using my ScreenDeck project to essentially create a Micboard-style interface using Companion.

I wish I had my own Acoustic Bear.

What I loved about this approach is that it leveraged something we were already using — Companion — and could still be viewed from anywhere on the network, just like Micboard. Plus, Companion supports a lot more devices beyond just Shure systems.

Even better, this opened the door to that Planning Center integration I had wanted without introducing a bunch of extra overhead — we were already using the PCO module to control our LIVE service plans!

One thing I’ve wanted for a while was a digital roster — something simple to show who’s serving each day, helping everyone put names to faces across band, tech, safety, and more. A “Serving Board,” if you will.

About a year ago, I modified the PCO module to pull scheduled people into variables — showing their names and assigned roles. I recently took it further by adding a feedback: “Show Person Photo based on Position Name.”

Now, the module pulls the photo from the person’s assignment, converts it into a PNG, and stores it internally as a base64 image — which can be shown directly on a button.
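
Under the hood, the idea is straightforward. Here’s a rough sketch (not the module’s actual code; the image library and cache shape are illustrative):

```typescript
// Sketch only: assumes an image library like sharp; the cache is illustrative.
import sharp from 'sharp'

const photoCache: Record<string, string> = {}

async function cachePersonPhoto(personId: string, photoUrl: string): Promise<void> {
	// PCO provides the photo URL along with the person's assignment
	const res = await fetch(photoUrl)
	const original = Buffer.from(await res.arrayBuffer())

	// Convert/resize to a button-sized PNG, then base64-encode it so an
	// advanced feedback can return it directly as { png64: ... }
	const png = await sharp(original).resize(72, 72).png().toBuffer()
	photoCache[personId] = png.toString('base64')
}
```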

Pretty cool – and it looks like this:

Say “hi”, Adam.

But I didn’t want to stop there — I wanted the person’s status (Confirmed, Unconfirmed, or Declined in PCO) to show too.

Using the companion-module-utils library (thanks to another awesome Companion dev!), I added a simple colored border overlay for statuses.

A few extra lines of code later:
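
Roughly like this — a sketch from memory, so check the companion-module-utils docs for the exact option names:

```typescript
// Sketch only: option names are from memory and may differ slightly from
// the current companion-module-utils API.
import { graphics } from 'companion-module-utils'
import { combineRgb } from '@companion-module/base'

const statusColors: Record<string, number> = {
	C: combineRgb(0, 204, 0), // Confirmed: green border
	U: combineRgb(255, 191, 0), // Unconfirmed: yellow border
	D: combineRgb(204, 0, 0), // Declined: red border
}

// Returned from an advanced feedback callback; Companion composites the
// border buffer on top of the button's existing style (photo included).
function statusBorder(status: string, width = 72, height = 72) {
	return {
		imageBuffer: graphics.border({
			width,
			height,
			color: statusColors[status] ?? combineRgb(128, 128, 128),
			size: 4,
			opacity: 255,
			type: 'border',
		}),
	}
}
```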

And you can get this look!

Thanks for confirming!

At this point, it was looking great — but I started thinking:

What if I don’t want to redo all my buttons every week? What if my teams and roles change?

So I added a new option: a generic “position number” approach.

You can now pick a position number in the plan (or within a specific team) — and the module will automatically pull the right person’s info, week to week, without you having to manually reconfigure anything.

For example:

• Pick any number across the entire plan.

• Or pick a number within a specific team, like Band or Tech.

With this option, you can choose any number, regardless of the team.
This picks the first person scheduled in the band.
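
The lookup behind this is simple. Here’s a hypothetical sketch (field names are illustrative, not the module’s actual internals):

```typescript
// Hypothetical sketch of the position-number lookup. The data comes from
// the plan's scheduled people; these field names are made up for illustration.
interface ScheduledPerson {
	name: string
	teamName: string
	positionName: string
	status: 'C' | 'U' | 'D' // Confirmed / Unconfirmed / Declined
}

function personAtPosition(
	people: ScheduledPerson[], // in the order PCO returns them for the plan
	position: number, // 1-based position number chosen on the button
	team?: string // optional team filter, e.g. 'Band'
): ScheduledPerson | undefined {
	const pool = team ? people.filter((p) => p.teamName === team) : people
	return pool[position - 1]
}
```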

I also built some Module Presets to make setting this up super easy:

Generic Position Number (no specific team)

Position Number Within a Team (like “Band” only)

Generic, without regard to team
In this example, you can choose a number within the Band team.

And here’s where it all comes together:

Let’s say you have a “Wireless Assignments” team in PCO, and you assign a person to a position called “Wireless 4.”

Now, using the Shure Wireless module in Companion, you can match that name and see live RF and battery stats for Wireless 4 — tied directly to the person assigned!

All together, you get a clean, dynamic, reusable Micboard-style dashboard — all inside Companion, no extra tools required.

Here’s a walkthrough video showing it all in action:

The updated PCO Services Live module is available now in the Companion betas — go check it out if you want to try it!

gamepad-io: Using the Web Gamepad API, Electron, socket.io, and a custom module to turn any game pad or controller into a Satellite surface for Companion

I started on this over 6 months ago – and am excited to release it! Last year, I was approached by a programming client who had a simple ask: “Can I use an Xbox controller with Companion?” This of course got my wheels turning, for a few reasons: one, it’s cool; two, it hadn’t been done before; three, game controllers are inexpensive, and that excited me for anyone trying to do more while spending less, like so many of us are trying to do in church tech ministry.

My first response to this client was: “I can make this – but I want to release it as an open source project for the community. You’ll effectively help to sponsor the creation of a new resource for everyone, while gaining the custom solution you need.”

They agreed and I got started. I first created a small Electron app that runs a Renderer process with a Chromium window that accesses the Web Gamepad API. Any game controller connected to the computer (whether wired or wireless) will show up once a button on it is pressed.

All of this controller data is tracked and stored, and then sent via socket.io to any connected client. The app itself could be used to send this controller data to anything. In my case, I made a Companion module that listens to this socket and then the module itself emulates a Companion surface.
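
If you’re curious what the renderer side looks like, here’s a minimal sketch of the concept (the socket event name and port are illustrative, not the app’s exact code):

```typescript
// Minimal sketch: poll the Web Gamepad API each animation frame and
// forward controller state over socket.io.
import { io } from 'socket.io-client'

const socket = io('http://localhost:3000') // port is illustrative

window.addEventListener('gamepadconnected', (e) => {
	console.log(`Controller connected: ${e.gamepad.id}`)
})

function poll(): void {
	// navigator.getGamepads() returns a fresh snapshot on every call
	for (const pad of navigator.getGamepads()) {
		if (!pad) continue
		socket.emit('gamepadState', {
			index: pad.index,
			id: pad.id,
			buttons: pad.buttons.map((b) => ({ pressed: b.pressed, value: b.value })),
			axes: [...pad.axes],
		})
	}
	requestAnimationFrame(poll)
}
requestAnimationFrame(poll)
```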

My client needed a lot of variables and configuration options for what they were doing. Every button and axis movement is stored as a variable in the module. And, if you enable the option to use it as a surface, pressing a controller button will in turn press the assigned button in Companion! I’ve been stress testing this with my client for a few months now and am super pleased with how well it works.

Here’s a video of it in action:

The module is in the Companion betas now, and you can download the app for free from my Github repository: http://github.com/josephdadams/gamepad-io/releases

It’s open source, so if you see something that could be better, submit a pull request!

Using Ross Dashboard and the Companion Satellite API to create a virtual touch surface on a Ross Video Ultritouch

My church, Fellowship Greenville, has been building a second campus now for a little over a year. It’s been an exciting process. The new auditorium will feature a control room much like what we have at our existing campus.

One of the newer pieces of equipment that we are putting in is a Ross Video UltriTouch HR. It’s essentially a 2RU touchscreen computer running Ross Dashboard. (I’ve written about Ross Dashboard before if you want to read more.) Dashboard is a very flexible program that lets you build very custom interfaces to control your gear. We used it heavily until I started investing a lot of time toward Companion.

Once I knew we were getting one of these, I knew right away that I wanted to be able to use it as a satellite surface for Companion. Taking what I learned from my ScreenDeck project, and my OGScript knowledge (Ross’s flavor of Java/JavaScript that powers the custom panels in Dashboard), I was able to make this:

It was pretty easy to get simple buttons with text on them, and to get the colors of the buttons to match Companion button colors. But I wanted the buttons to look like Companion buttons, and that took some work. Dashboard doesn’t have any image editing libraries that I’m aware of, so I had to get creative. The image data coming from Companion is base64 encoded 8-bit RGB. I reached out to the Ross staff on their forums and they quickly got back to me with a helpful decoder function. It was similar to the one I had already written to decode the base64 encoded text data that comes from the Companion Satellite API.

Once I was able to decode it back to the binary RGB data, it was “simply” a matter of writing a function that saves these as bitmap files in a folder local to the panel and then changing the style of the button to show the new bitmap image.
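
For anyone curious, here’s the same idea expressed in TypeScript rather than OGScript — a sketch, not the panel’s actual code: take Companion’s base64-encoded RGB data and wrap it in a minimal 24-bit BMP.

```typescript
// Decode Companion's base64 8-bit RGB bitmap and wrap it in a minimal
// 24-bit BMP (54-byte header, bottom-up rows, BGR pixel order).
function rgbToBmp(base64Rgb: string, width: number, height: number): Buffer {
	const rgb = Buffer.from(base64Rgb, 'base64') // width*height*3 bytes, RGB, top-down
	const rowSize = Math.ceil((width * 3) / 4) * 4 // BMP rows pad to 4 bytes
	const pixelBytes = rowSize * height
	const bmp = Buffer.alloc(54 + pixelBytes) // unset header fields stay 0, valid for BI_RGB

	// File header
	bmp.write('BM', 0)
	bmp.writeUInt32LE(54 + pixelBytes, 2) // total file size
	bmp.writeUInt32LE(54, 10) // offset to pixel data

	// BITMAPINFOHEADER
	bmp.writeUInt32LE(40, 14) // header size
	bmp.writeInt32LE(width, 18)
	bmp.writeInt32LE(height, 22) // positive height = bottom-up rows
	bmp.writeUInt16LE(1, 26) // planes
	bmp.writeUInt16LE(24, 28) // bits per pixel

	// Pixel data: flip rows (BMP is bottom-up) and swap RGB -> BGR
	for (let y = 0; y < height; y++) {
		const srcRow = y * width * 3
		const dstRow = 54 + (height - 1 - y) * rowSize
		for (let x = 0; x < width; x++) {
			bmp[dstRow + x * 3 + 0] = rgb[srcRow + x * 3 + 2] // B
			bmp[dstRow + x * 3 + 1] = rgb[srcRow + x * 3 + 1] // G
			bmp[dstRow + x * 3 + 2] = rgb[srcRow + x * 3 + 0] // R
		}
	}
	return bmp
}
```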

And there we have it! I’m looking forward to using this on our UltriTouch as well as on the TouchDrive touch screen.

The panel supports turning the bitmaps on/off, setting the button size, total keys, keys per row, and of course the IP/port to Companion. The satellite port is changeable on the Dashboard side but is currently fixed at 16622 in Companion.

If you’re a Ross Dashboard user and want to tinker with the panel, I’ve made it available via Github on my RossDashboardPanels repository where I have shared some other panels as well.

If you ever need any custom Dashboard panels created (or Companion modules!), I do this for hire on the side to support my family. You can reach out to me via my website, josephadams.dev.

Streamlining Electron App Development with AI: Building a Virtual Stream Deck for Bitfocus Companion using the Satellite API

On the side from my full time job in ministry, I do coding work for hire. It’s one of the ways I provide for my family. I’ve had opportunities to create custom Dashboard panels, modules for Bitfocus Companion, and lots of other bespoke solutions for whatever people need. (Hit me up if you ever need anything!)

One of the tools in my tool belt that I use regularly when coding is Github Copilot. It’s $10 a month and saves me so much time. Never heard of it?

GitHub Copilot is an AI-powered coding assistant developed by GitHub and OpenAI, designed to help developers write code faster and more efficiently. Integrated directly into popular code editors like Visual Studio Code, Copilot suggests code snippets, functions, and even entire blocks of code in real time as you type, based on the context of your project. It supports multiple programming languages and leverages a vast amount of open-source code to provide relevant suggestions, making it a valuable tool for both beginners and experienced developers looking to speed up their workflow, reduce errors, and explore new coding approaches.

It seriously saves me a lot of time by providing suggestions and workflows that I may never have thought of, without pushing me toward things I wouldn’t have done. After using it for a year and a half, I have it trained well on the ways I like to code.

Recently, I also signed up for OpenAI’s ChatGPT Plus plan. It’s $20 a month. I may not keep subscribing long term, but I’m trying it out. It gives me access to GPT-4o, DALL·E, and all of their other tools. I used it to help me decipher a protocol for some paid work I was doing, and it saved me time. These tools are not at a point where I can just hand them the whole job and get a perfect response – but guiding them through the process in steps? I can get helpful responses that way.

After I was done with my protocol project, I simply asked ChatGPT, “give me a boilerplate typescript Electron app using my example”. I’ve shared several of my Electron apps before. It’s my preferred method for cross platform apps (meaning they can run on macOS, Windows, and Linux desktops). I wanted to see if I could guide ChatGPT through the process of giving me a new template to help take some projects further and implement standards and practices that I might not be aware of.

One particular project I’ve wanted to work on for a while now is something I’m calling ScreenDeck. It’s essentially a screen-based stream deck for Bitfocus Companion that uses the built-in Satellite API to create virtual surfaces.

Every good project needs a logo, right?

I know the browser-based emulator exists, but I wanted something that looked a little more native on the OS and could always sit on top of other windows so it’s immediately accessible.

I had started on it over a year ago, but the small nuances and things to code just felt overwhelming to implement in my “spare time”. However, together with my AI tools, I was able to quickly craft a new boilerplate template and apply it to the ScreenDeck project I had started a long time ago, and come up with a working solution in just a few days. It was a lot of back and forth with the chat, prompting it to craft more and more refined responses.

Like many of my other projects, I’m releasing ScreenDeck open source with the hopes that it will help the community – especially churches.

Here’s a simple 4 button, 1 per row, deck.
The settings allow you to configure how it looks, how many buttons, whether it’s always on top, etc. You can even change the bitmap size to create HUGE buttons!
Here’s a standard “Stream Deck XL” layout.
Some of the context menu options.
Because it uses the Satellite API in Companion, it shows up as a physical surface in Companion!
Because Companion sees it as a surface, this means you can do anything with it that you’d do to any physical surface.
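
If you want a feel for the protocol, it’s a simple line-based TCP exchange. Here’s a rough sketch — the command and field names are approximate, from memory, so check the Companion Satellite API documentation for the real spec:

```typescript
// Rough sketch of a Satellite client handshake. Field names approximate.
import * as net from 'net'

const DEVICE = 'screendeck1'
const socket = net.createConnection({ host: '127.0.0.1', port: 16622 })

socket.on('connect', () => {
	// Register a virtual 4x2 surface that wants button bitmaps
	socket.write(
		`ADD-DEVICE DEVICEID=${DEVICE} PRODUCT_NAME="ScreenDeck" KEYS_TOTAL=8 KEYS_PER_ROW=4 BITMAPS=72 COLORS=0 TEXT=0\n`
	)
})

socket.on('data', (chunk) => {
	for (const line of chunk.toString().split('\n')) {
		// Companion pushes KEY-STATE lines containing base64 bitmap data;
		// decode the BITMAP field and repaint that key in the window.
		if (line.startsWith('KEY-STATE')) {
			/* parse KEY=n and BITMAP=<base64 RGB>, then redraw */
		}
	}
})

// When the user taps a key on screen, report it back to Companion:
function pressKey(key: number, pressed: boolean): void {
	socket.write(`KEY-PRESS DEVICEID=${DEVICE} KEY=${key} PRESSED=${pressed ? 1 : 0}\n`)
}
```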

You can download it here: http://github.com/josephdadams/screendeck

It’s available for macOS, Windows, and Linux desktops!

Here’s a video showing it in action!

Using Zabbix and Ntfy to monitor production software status for free

I wanted to share a couple of tools we have been using lately that have been helpful in keeping us notified when we are having issues with our production software or hardware.

The first tool is Zabbix.

Zabbix is a powerful, open-source monitoring tool designed to track the performance and availability of various resources, including networks, servers, applications, and cloud services. It provides real-time monitoring and alerting, allowing you to quickly identify and resolve issues.

We use it to track the status of our production machines – hard drive space, memory used, etc. – but we also use it to track the online status of software like Cronicle, Bitfocus Companion, etc. These tools are vital to the success of our services, so I want to know the instant they go down. It’s rare that they do, but I still need to know as soon as possible, and preferably before something else fails because one of them is down.

At my church, we have a server set up to run virtual servers through a hypervisor, and our IT department set up a simple Linux virtual server for me. On that server, I run several Docker containers for software like Cronicle, Companion, Tally Arbiter ;), and more. I manage all of our containers through Portainer, which is a GUI for Docker.

So – using Zabbix, I can create a host for any of my computers and set up things to monitor within those computers. For most of these, it’s just simple web checks. For example, I can have Zabbix check once a minute that the Companion web interface loads. If it doesn’t load, I can assume that Companion is no longer running, for whatever reason – the computer is off, the software crashed or was closed, etc.

Zabbix will then automatically report a problem with the service within the user interface. That’s great – but what I really need is to be notified on my phone. This is where the Ntfy service comes in.

Ntfy is an open-source notification service that lets you send real-time alerts. You can get notifications on your phone, computer, or browser. We use it with Cronicle: whenever a job fails, error chaining on the job automatically sends a Ntfy alert to my phone. I also use it for things like letting me know a recorder has started – anything I just want to know about without needing to go look at a specific device for its status. Ntfy has a Companion module (I wrote it), and I also made a Cronicle plugin for it.
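
Part of what makes Ntfy so easy is that sending a notification is just an HTTP POST to your topic URL. For example (the topic name here is made up):

```typescript
// Sending a Ntfy push: POST the message body to your topic URL.
// Works against ntfy.sh or a self-hosted Ntfy server.
async function notify(message: string): Promise<void> {
	await fetch('https://ntfy.sh/my-production-alerts', {
		method: 'POST',
		body: message,
		headers: { Title: 'Production alert', Priority: 'high' },
	})
}

notify('Companion web interface is not responding!').catch(console.error)
```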

Zabbix can be configured to send alerts to a variety of media types – email, Discord, Slack, SMS, you name it – and it comes with several types out of the box. Someone in the community created a Ntfy type. I forked it and added additional support options, available here: https://github.com/josephdadams/zabbix-ntfy

So with these two tools, I am able to know instantly when a service has gone down, by receiving a push notification to my phone.

Here is a video that explains how to set up a web scenario in Zabbix and get pinged by Ntfy:

Maybe this will help you monitor your equipment as well! Leave me a comment with what you might use it for.

Using midi-relay to send MIDI feedback to Companion for control

I was recently hired to add a new feature to my midi-relay software that allows you to capture MIDI data coming on any MIDI port and send that data back to Bitfocus Companion where you can then store it as a variable or even use the MIDI data as a satellite surface to control buttons directly.

Here’s a video walkthrough:

This should enable a lot of fun things with MIDI and automation!

You can download the software from my Github page: https://github.com/josephdadams/midi-relay/releases/tag/v3.2.0

It’s free so go check it out. If the things I create are helpful to you, a donation to my family is always appreciated: https://techministry.blog/about/

Notify production team members remotely using open source software and low cost USB busy lights

At my church, we have a couple of these:

They’re great. Expensive, but they work well.

The problem for us is that anytime anyone presses the Call light on the intercom party line, any flashers on that party line will light up. This means we can really only have 1 unique flasher per line.

Sometimes, we want or need to get a specific person/position’s attention.

I created some software to help with this. It’s called beacon.

It’s a small app that runs in the system tray and hosts a network API so you can signal a USB busy light, such as the Luxafor Flag or ThingM blink(1). Or, if you don’t have or want a physical signal light, you can use an on-screen dot instead.

I’ve designed this to work in tandem with a custom module for Bitfocus Companion, but since it does have a full API, you can implement any third-party integrations that you like. All of the documentation is on the Github repository: https://github.com/josephdadams/beacon

You can set a beacon to stay a solid color, fade to a new color, flash a color, and more. You can send custom notifications to the user’s window as well as play tones and sounds.
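
As a flavor of what a third-party integration might look like, here’s a hypothetical call — the endpoint and fields below are made up for illustration; the real API is documented in the repository:

```typescript
// Hypothetical example only: the endpoint and field names are invented
// for illustration. See the beacon repo for the actual documented API.
fetch('http://192.168.1.50:8080/api/beacon', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({
		color: '#ff0000', // red
		mode: 'flash', // e.g. solid, fade, flash
		duration: 5000, // milliseconds
	}),
}).catch(console.error)
```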

Here’s a video of the project in action to show you how you can use it:

Go check it out today!

https://github.com/josephdadams/beacon

Controlling Planning Center LIVE with a stream deck, with timers and other variables

If you use Companion and are in tech ministry, you have probably used my PCO Services LIVE module. While in the process of converting this module to the new API we are using for Companion 3.0, I gave it an overhaul and added lots of new features!

Here is a video that shows how it works:

Go check it out for yourself!

midi-relay v3.0 is here – as an electron app for Mac and Windows!

I decided to give some love recently to midi-relay, since person after person asked me to make it an easier-to-run app rather than requiring a Node.js runtime setup.

When I originally created midi-relay, I designed it to run on every OS, especially the Raspberry Pi platform. Thousands of people use it all over the world for all kinds of stuff. Probably because it’s free. 🙂

This software is designed to accept a JSON object via its API and then turn that object into a MIDI command and send it out a local MIDI port. It allows for remote control of a lot of systems by sending the command over a simple network protocol.
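
As an illustration, sending a note-on might look something like this — the endpoint, port, and field names here are from memory and may not match the current version, so check the repo documentation:

```typescript
// Illustrative only: endpoint, port, and field names are approximate.
fetch('http://127.0.0.1:4000/sendmidi', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({
		midiport: 'IAC Driver Bus 1', // local MIDI port to send out of
		midicommand: 'noteon',
		channel: 0,
		note: 60,
		velocity: 127,
	}),
}).catch(console.error)
```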

Now it’s even easier to use.

It runs in the system tray for easy access.

Some new features include:

  • a new socket.io API for bi-directional communication
  • a virtual MIDI port, for loopback uses
  • an upgraded Bitfocus Companion v3 module
  • disabling remote control, if needed

So if you’re a midi-relay user and you want an easy way to run this on your Mac or Windows desktop, go check out the latest release!

If using my software makes your life easier, please consider supporting my family.

Thanks!

Using a Nano Pi, a PoE splitter, and a custom project box to create a mobile UDP to RS485/VISCA shading rig

A while back, I wrote about how we created a network-based VISCA shading rig for the Marshall CV503 cameras we use on stage, to control their various exposure settings. The cameras themselves can only do this via RS485 serial, so our system sends UDP from Bitfocus Companion (we use the Sony VISCA module/connection) over the network, converts it to serial at a Raspberry Pi, and then, using custom cables, sends the signal to our cameras over the patchbay.

We’ve been using that system ever since and it works great. We have even recently taken the steps to create custom cable looms that have SDI, CAT6, and power all in one loom to make it a breeze to set up.

Recently, we set up one of these cameras at the back of our auditorium where it’s impractical to run a cable all the way to our patchbay in the rack room at the stage side for a serial connection. We still need to control the exposure, so a solution was needed.

It’s also impractical these days to buy a Raspberry Pi. They have gotten quite expensive, and difficult to find in stock.

A few months ago, I bought a Nano Pi NEO and started playing around with it to see what it could do, since it’s easy to get ahold of and very affordable.

This is the Nano Pi NEO single board computer.

It has an Ethernet port, a full size USB-A port, and is powered via micro USB. It runs Armbian quite well, so it was very simple to install my existing udp-to-serial Node.js script.
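
The script itself is conceptually tiny. Here’s a minimal sketch of the idea (not my exact script; the serial device path, baud rate, and UDP port are illustrative):

```typescript
// Minimal UDP-to-serial bridge: listen for UDP packets and write their
// payload straight out a USB RS485 adapter.
import { createSocket } from 'dgram'
import { SerialPort } from 'serialport'

const serial = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 })
const udp = createSocket('udp4')

udp.on('message', (msg) => {
	// Forward the raw VISCA bytes from Companion straight to the RS485 line
	serial.write(msg)
})

udp.bind(52381) // the Sony VISCA-over-IP port (illustrative choice here)
```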

I bought a project box and modified it to fit all the parts. I started with a Dremel, but I should have just used a hacksaw from the beginning, because that gave me much cleaner cuts. I didn’t want to do any soldering or make custom internal cables, so my box had to be a little larger.

The entire rig is powered by a single PoE to USB adapter. This provides the Ethernet data to the Nano Pi, and then micro USB power to the Nano Pi’s power port. I also figured out a while back that you can use a USB 5V to 12V step-up cable to power these cameras, so I put one of those in the box as well.

PoE to USB adapter, RS485 cable, and two keystone jacks for serial out. Blue/White-Blue pins for +/-.

We put RJ45 keystone jacks on the box to provide the serial out connections, and we also hot glued the PoE to USB adapter to the lid of the box so the connection could be flush with the edge.

It’s certainly crammed in there! The Nano Pi is glued to the bottom, and the rest of the cables are tucked into the box: the USB splitter, the USB to RS485 adapter, and the USB 5V to 12V DC cable.

Here are the parts I used:

  • Nano Pi NEO
  • PoE to USB adapter – to pass network to the Nano Pi and to give USB power
  • USB 5V to 12V DC step-up adapter – to power the Marshall CV503 instead of using the stock camera power supply
  • USB splitter cable – to split the PoE USB power to both the Nano Pi and the step-up cable that powers the camera
  • Micro USB cable – to power the Nano Pi
  • USB to RS485 adapter – this is what sends the received UDP data out to serial
  • Keystone jacks – used for the serial connections. We then have custom RJ45 to Phoenix connectors that plug into the cameras. This method allows us to use any standard CAT5/6 patch cable to make the connections in between.
  • Project box to hold it all

These are Amazon purchase links. As an Amazon Associate I earn from qualifying purchases.

A single PoE connection provides all the power and data needed.

Overall, pretty pleased with how it turned out! I like that it’s just two cables – one for the SDI video signal off the camera, and one ethernet to power it all and provide the data connection.

What project ideas do you have for a Nano Pi?