I published the first version of ScreenDeck at the end of last year, and now I am excited to release version 2.0!
Here’s what’s new!
Multiple Decks
You’re no longer stuck with just one screen deck. You can launch as many as you want, each with its own layout and size. You can make them read-only by disabling button presses.
Profiles
Now you can save and instantly switch between different layouts. Whether you’re controlling slides on Sunday or streaming on Wednesday, just create, save, and then pick the profile you need.
X/Y Button Mapping
Instead of “keys per row” and “total keys,” everything is now based on columns and rows.
Hotkey Support
You can now assign global hotkeys to any button on any ScreenDeck. Press the assigned key combo on your keyboard and trigger any action, even when the deck is hidden!
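Under the hood, something like Electron’s globalShortcut API is the natural fit for this. Here’s a minimal sketch of the idea (the pressDeckButton helper is hypothetical, not ScreenDeck’s actual code):

```typescript
import { app, globalShortcut } from 'electron'

// Hypothetical helper standing in for whatever presses a deck key.
declare function pressDeckButton(key: number): void

app.whenReady().then(() => {
	// Register a system-wide hotkey; it fires even when the deck is hidden.
	globalShortcut.register('CommandOrControl+Shift+1', () => {
		pressDeckButton(0)
	})
})

// Release all registered hotkeys when the app quits.
app.on('will-quit', () => globalShortcut.unregisterAll())
```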
Background Customization
Each deck can have its own background color and opacity.
Button or Encoder Mode
Turn any button into an encoder-style dial by right-clicking on that key. Great for volume control, brightness, or cycling through options!
Window Memory
Decks will remember their position, size, and settings—no need to rearrange every time you start the app.
A few years ago (six years, actually!), I shared a solution I came up with to create the weekly “talking points” Google documents that my team relies on. We’ve been using that same Google Apps Script solution ever since. It’s been rock solid and saves us a lot of time over creating each of these documents by hand.
I decided it was time to refresh the script and document, since we now have a third venue (at a new campus). And when it’s time to refine, why not consult some AI in the process?
This was my starting prompt.
I started by sending ChatGPT my existing script and asking if it had any ideas to improve it.
The response
We immediately got to work redesigning the script – mostly focusing on the dialogs and flow.
I came up with a basic new design featuring the church logo and a simpler header. ##VENUE## and ##DATE## are placeholders that get replaced with the actual venue name and date of the document.
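The placeholder swap itself is the easy part in Apps Script. As a rough sketch (function and variable names here are illustrative, not the production script):

```typescript
// Minimal sketch of the placeholder swap. DocumentApp is the global
// that Google Apps Script provides for working with Docs.
function fillTemplate(docId, venue, dateText) {
	const body = DocumentApp.openById(docId).getBody()
	body.replaceText('##VENUE##', venue) // swap in the venue name
	body.replaceText('##DATE##', dateText) // swap in the service date
}
```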
After some back and forth, here’s what the new dialog looks like:
This looks a lot better! I even added a progress bar:
If you’ve been hesitant to jump into generative AI, give it a whirl! It can save you a lot of time and propose ideas you may not have thought of.
If you’re involved in tech ministry and like to tinker, chances are you’ve heard of — and maybe even used — micboard.io.
This is Micboard.
Straight from their website, “Micboard simplifies microphone monitoring and storage for artists, engineers, and volunteers. View battery, audio, and RF levels from any device on the network.” It’s a neat tool and has helped a lot of teams over the years.
I always liked the idea of Micboard because it would be a great way to show who is serving that day. We tried to implement it at my church but eventually moved away from it, mainly because it hadn’t been updated in quite a while (over 6 years now), and we needed some additional features. Specifically, we were looking for integration with Planning Center Services — something that could automatically pull assignments from an interface our team was already familiar with. And – something we could use for more than just people on stage.
At first, I forked the Micboard repo (since it’s open source) and started making improvements, cleaning up some code, and tweaking it to run more easily on modern macOS systems. But pretty quickly, I realized I had too much on my plate to maintain a whole fork long-term.
Fast forward a year or so: I came across a few posts in some Facebook groups where people were using my ScreenDeck project to essentially create a Micboard-style interface using Companion.
I wish I had my own Acoustic Bear.
What I loved about this approach is that it leveraged something we were already using — Companion — and could still be viewed from anywhere on the network, just like Micboard. Plus, Companion supports a lot more devices beyond just Shure systems.
Even better, this opened the door to that Planning Center integration I had wanted without introducing a bunch of extra overhead — we were already using the PCO module to control our LIVE service plans!
One thing I’ve wanted for a while was a digital roster — something simple to show who’s serving each day, helping everyone put names to faces across band, tech, safety, and more. A “Serving Board,” if you will.
About a year ago, I had modified the PCO module to pull scheduled people into variables — showing their names and assigned roles. I recently took it further by adding a feedback: “Show Person Photo based on Position Name.”
Now, the module pulls the photo from the person’s assignment, converts it into a PNG, and stores it internally as a base64 image — which can be shown directly on a button.
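The general shape of that photo step looks something like this; I’m sketching with the jimp library here, and the module’s internals may differ:

```typescript
import Jimp from 'jimp'

// Fetch the photo URL PCO provides for the assignment, crop it to a
// button-sized square, and return base64 PNG data that an advanced
// feedback can hand back to Companion as { png64 }.
async function photoToPng64(photoUrl: string): Promise<string> {
	const image = await Jimp.read(photoUrl) // downloads and decodes
	image.cover(72, 72) // scale/crop to fill a 72x72 button
	const png = await image.getBufferAsync(Jimp.MIME_PNG)
	return png.toString('base64')
}
```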
Pretty cool – and it looks like this:
Say “hi”, Adam.
But I didn’t want to stop there — I wanted the person’s status (Confirmed, Unconfirmed, or Declined in PCO) to show too.
Using the companion-module-utils library (thanks to another awesome Companion dev!), I added a simple colored border overlay for statuses.
A few extra lines of code later:
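Roughly along these lines; I’m approximating the option names from companion-module-utils, so check its docs for the exact signatures:

```typescript
import { graphics } from 'companion-module-utils'
import { combineRgb } from '@companion-module/base'

// Map each PCO status to a border color, then build a border overlay
// that an advanced feedback can return as { imageBuffer: ... }.
const statusColors: Record<string, number> = {
	Confirmed: combineRgb(0, 200, 0),
	Unconfirmed: combineRgb(255, 165, 0),
	Declined: combineRgb(200, 0, 0),
}

function statusBorder(status: string): Uint8Array {
	return graphics.border({
		width: 72, // match the button's bitmap size
		height: 72,
		color: statusColors[status] ?? combineRgb(128, 128, 128),
		size: 4, // border thickness in pixels
	})
}
```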
And you can get this look!
Thanks for confirming!
At this point, it was looking great — but I started thinking:
What if I don’t want to redo all my buttons every week? What if my teams and roles change?
So I added a new option: a generic “position number” approach.
You can now pick a position number in the plan (or within a specific team) — and the module will automatically pull the right person’s info, week to week, without you having to manually reconfigure anything.
For example:
• Pick any number across the entire plan.
• Or pick a number within a specific team, like Band or Tech.
With this option, you can choose any number, regardless of the team.
This picks the first person scheduled in the band.
I also built some Module Presets to make setting this up super easy:
• Generic Position Number (no specific team)
• Position Number Within a Team (like “Band” only)
Generic, without regard to team
In this example, you can choose a number within the Band team.
And here’s where it all comes together:
Let’s say you have a “Wireless Assignments” team in PCO, and you assign a person to a position called “Wireless 4.”
Now, using the Shure Wireless module in Companion, you can match that name and see live RF and battery stats for Wireless 4 — tied directly to the person assigned!
All together, you get a clean, dynamic, reusable Micboard-style dashboard — all inside Companion, no extra tools required.
Here’s a walkthrough video showing it all in action:
The updated PCO Services Live module is available now in the Companion betas — go check it out if you want to try it!
I started on this over six months ago, and I’m excited to finally release it! Last year, I was approached by a programming client with a simple ask: “Can I use an Xbox controller with Companion?” This of course got my wheels turning, for a few reasons. One, it’s cool. Two, it hadn’t been done before. Three, game controllers are inexpensive, and that excited me for anyone trying to do more while spending less, like so many of us in church tech ministry.
My first response to this client was: “I can make this – but I want to release it as an open source project for the community. You’ll effectively help to sponsor the creation of a new resource for everyone, while gaining the custom solution you need.”
They agreed, and I got started. I first created a small Electron app that runs a renderer process with a Chromium window that accesses the Web Gamepad API. Any game controller connected to the computer (whether wired or wireless) will show up once a button on it is pressed.
All of this controller data is tracked and stored, and then sent via socket.io to any connected client. The app itself could be used to send this controller data to anything. In my case, I made a Companion module that listens to this socket and then the module itself emulates a Companion surface.
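The renderer-side idea boils down to something like this. This sketch collapses the main/renderer split for brevity, and the port is a placeholder:

```typescript
import { io } from 'socket.io-client'

const socket = io('http://localhost:3000') // placeholder port

// Poll the Web Gamepad API each frame and forward every connected
// controller's state to listeners over socket.io.
function poll(): void {
	for (const pad of navigator.getGamepads()) {
		if (!pad) continue
		socket.emit('controller', {
			index: pad.index,
			id: pad.id,
			buttons: pad.buttons.map((b) => ({ pressed: b.pressed, value: b.value })),
			axes: [...pad.axes],
		})
	}
	requestAnimationFrame(poll)
}

// Browsers only expose a controller after its first button press.
let polling = false
window.addEventListener('gamepadconnected', () => {
	if (!polling) {
		polling = true
		requestAnimationFrame(poll)
	}
})
```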
My client needed a lot of variables and configuration options for what they were doing. Every button and axis movement is stored as a variable in the module. And if you enable the option to use it as a surface, pressing a controller button will in turn press the assigned button in Companion! I’ve been stress-testing this with my client for a few months now and am super pleased with how well it works.
My church, Fellowship Greenville, has been building a second campus now for a little over a year. It’s been an exciting process. The new auditorium will feature a control room much like what we have at our existing campus.
One of the newer pieces of equipment we’re putting in is a Ross Video UltriTouch HR. It’s essentially a 2RU touchscreen computer running Ross Dashboard. (I’ve written about Ross Dashboard before if you want to read about any of those projects.) Dashboard is a very flexible program that lets you build very custom interfaces to control your gear. We used it heavily until I started investing a lot of time toward Companion.
Once I knew we were getting one of these, I knew right away that I wanted to be able to use it as a satellite surface for Companion. Taking what I learned from my ScreenDeck project, and my OGScript knowledge (Ross’s flavor of Java/JavaScript that powers the custom panels in Dashboard), I was able to make this:
It was pretty easy to get simple buttons with text on them and to match the button colors to Companion’s. But I wanted the buttons to look like Companion buttons, and that took some work. Dashboard doesn’t have any image-editing libraries that I’m aware of, so I had to get creative. The image data coming from Companion is base64-encoded 8-bit RGB. I reached out to the Ross staff on their forums, and they quickly got back to me with a helpful decoder function. It was similar to the one I had already written to decode the base64-encoded text data that comes from the Companion Satellite API.
Once I was able to decode it back to the binary RGB data, it was “simply” a matter of writing a function that saves these as bitmap files in a folder local to the panel and then changing the style of the button to show the new bitmap image.
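For the curious, here’s the shape of that bitmap step, sketched in TypeScript rather than the panel’s actual OGScript:

```typescript
import { writeFileSync } from 'node:fs'

// Decode Companion's base64 RGB pixels and wrap them in a minimal
// 24-bit BMP so the button can display them as an image file.
function saveRgbAsBmp(base64Rgb: string, width: number, height: number, path: string): void {
	const rgb = Buffer.from(base64Rgb, 'base64') // 3 bytes per pixel, RGB
	const rowSize = Math.ceil((width * 3) / 4) * 4 // BMP rows pad to 4 bytes
	const pixelBytes = rowSize * height
	const bmp = Buffer.alloc(54 + pixelBytes) // zero-filled headers + pixels

	// BITMAPFILEHEADER
	bmp.write('BM', 0)
	bmp.writeUInt32LE(54 + pixelBytes, 2) // total file size
	bmp.writeUInt32LE(54, 10) // offset to pixel data

	// BITMAPINFOHEADER
	bmp.writeUInt32LE(40, 14) // header size
	bmp.writeInt32LE(width, 18)
	bmp.writeInt32LE(height, 22)
	bmp.writeUInt16LE(1, 26) // color planes
	bmp.writeUInt16LE(24, 28) // bits per pixel

	// BMP stores rows bottom-up, in BGR order
	for (let y = 0; y < height; y++) {
		for (let x = 0; x < width; x++) {
			const src = (y * width + x) * 3
			const dst = 54 + (height - 1 - y) * rowSize + x * 3
			bmp[dst] = rgb[src + 2] // B
			bmp[dst + 1] = rgb[src + 1] // G
			bmp[dst + 2] = rgb[src] // R
		}
	}
	writeFileSync(path, bmp)
}
```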
And there we have it! I’m looking forward to using this on our UltriTouch as well as on the TouchDrive touch screen.
The panel supports turning the bitmaps on/off, setting the button size, total keys, keys per row, and of course the IP/port to Companion. The satellite port is changeable on the Dashboard side but is currently fixed in Companion to 16622.
If you’re a Ross Dashboard user and want to tinker with the panel, I’ve made it available via Github on my RossDashboardPanels repository where I have shared some other panels as well.
If you ever need any custom Dashboard panels created (or Companion modules!), I do this for hire on the side to support my family. You can reach out to me via my website, josephadams.dev.
Alongside my full-time job in ministry, I do coding work for hire. It’s one of the ways I provide for my family. I’ve had opportunities to create custom Dashboard panels, modules for Bitfocus Companion, and lots of other bespoke solutions for whatever people need. (Hit me up if you ever need anything!)
One of the tools in my tool belt that I use regularly when coding is Github Copilot. It’s $10 a month and saves me so much time. Never heard of it?
GitHub Copilot is an AI-powered coding assistant developed by GitHub and OpenAI, designed to help developers write code faster and more efficiently. Integrated directly into popular code editors like Visual Studio Code, Copilot suggests code snippets, functions, and even entire blocks of code in real time as you type, based on the context of your project. It supports multiple programming languages and leverages a vast amount of open-source code to provide relevant suggestions, making it a valuable tool for both beginners and experienced developers looking to speed up their workflow, reduce errors, and explore new coding approaches.
It seriously saves me a lot of time by providing suggestions and workflows I may never have thought of, while staying close to what I would have written myself. After using it for a year and a half, I have it trained well on the way I like to code.
Recently, I also signed up for OpenAI’s ChatGPT Plus plan. It’s $20 a month. I may not keep subscribing long term, but I’m trying it out. It gives me access to GPT-4o, DALL·E, and all of their other tools. I used it to help me decipher a protocol for some paid work I was doing, and it saved me time. These tools aren’t at a point where I can just hand them the whole job and get a perfect response – but guiding them through the process in steps? I can get helpful responses that way.
After I was done with my protocol project, I simply asked ChatGPT, “give me a boilerplate typescript Electron app using my example”. I’ve shared several of my Electron apps before. Electron is my preferred method for cross-platform apps (meaning they can run on macOS, Windows, and Linux desktops). I wanted to see if I could guide ChatGPT through the process of giving me a new template to help take some projects further and implement standards and practices that I might not be aware of.
One particular project I’ve wanted to work on for a while now is something I’m calling ScreenDeck. It’s essentially a screen-based stream deck for Bitfocus Companion that uses the built-in Satellite API to create virtual surfaces.
Every good project needs a logo, right?
I know the browser based emulator exists, but I wanted something that ran a little more “native looking” on the OS and could always sit on top of other windows so it’s immediately accessible.
I had started on it over a year ago, but the small nuances and things to code just felt overwhelming to implement in my “spare time”. However, together with my AI tools, I was able to quickly craft a new boilerplate template and apply it to the ScreenDeck project I had started a long time ago, and come up with a working solution in just a few days. It was a lot of back and forth with the chat, prompting it to craft more and more refined responses.
Like many of my other projects, I’m releasing ScreenDeck open source with the hopes that it will help the community – especially churches.
Here’s a simple 4-button, 1-per-row deck.
The settings allow you to configure how it looks, how many buttons, whether it’s always on top, etc. You can even change the bitmap size to create HUGE buttons!
Here’s a standard “Stream Deck XL” layout.
Some of the context menu options.
Because it uses the Satellite API in Companion, it shows up as a physical surface in Companion!
Because Companion sees it as a surface, this means you can do anything with it that you’d do to any physical surface.
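If you want a feel for the protocol, a bare-bones satellite client looks roughly like this. I’m paraphrasing the command fields from memory of the Satellite API docs, and they vary a bit between Companion versions, so treat this as a sketch:

```typescript
import { createConnection } from 'node:net'

const socket = createConnection({ host: '127.0.0.1', port: 16622 })

socket.on('connect', () => {
	// Register a virtual 8-key surface (field names vary by API version).
	socket.write(
		'ADD-DEVICE DEVICEID=screendeck-demo PRODUCT_NAME="ScreenDeck Demo" ' +
			'KEYS_TOTAL=8 KEYS_PER_ROW=4 BITMAPS=1 COLORS=1 TEXT=1\n'
	)
})

socket.on('data', (chunk) => {
	// Companion streams line-based messages, including KEY-STATE lines
	// that carry base64 bitmaps for each button.
	console.log(chunk.toString())
})

// Pressing a virtual key presses the assigned Companion button.
function pressKey(key: number): void {
	socket.write(`KEY-PRESS DEVICEID=screendeck-demo KEY=${key} PRESSED=1\n`)
	socket.write(`KEY-PRESS DEVICEID=screendeck-demo KEY=${key} PRESSED=0\n`)
}
```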
I wanted to share a couple of tools we have been using lately that have been helpful in keeping us notified when we are having issues with our production software or hardware.
The first tool is Zabbix.
Zabbix is a powerful, open-source monitoring tool designed to track the performance and availability of various resources, including networks, servers, applications, and cloud services. It provides real-time monitoring and alerting, allowing you to quickly identify and resolve issues.
We use it to track the status of our production machines – hard drive space, memory used, etc., but we also use it to track the online status of software like Cronicle, Bitfocus Companion, etc. These tools are vital to the success of our services, so I want to know the instant they go down. It’s rare that they do, but I still need to know as soon as possible, and preferably before something fails because it’s down.
At my church, we have a server set up to run virtual servers through a hypervisor, and our IT department set up a simple Linux virtual server for me. On that server, I run several Docker containers for software like Cronicle, Companion, Tally Arbiter ;), and more. I manage all of our containers through Portainer, which is a GUI for Docker.
So – using Zabbix, I can create a host for any of my computers and set up things to monitor within those computers. For most of these, it’s just simple web checks. I can have Zabbix go check to see, for example, if the Companion web interface loads once a minute. If it doesn’t load, I can assume that Companion is no longer running, for whatever reason – the computer is off, the software crashed or was closed, etc.
Zabbix will then automatically report a problem with the service within the user interface. That’s great – but what I really need is to be notified on my phone. This is where the Ntfy service comes in.
Ntfy is an open-source notification service that lets you send real-time alerts. You can get notifications on your phone, computer, or browser. We use it with Cronicle: whenever a job fails, error chaining on the job automatically sends an Ntfy alert to my phone. I also use it for things like letting me know a recorder has started – anything I just want to know about without having to go look at a device for its status. Ntfy has a Companion module (I wrote it), and I also made a Cronicle plugin for it.
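Publishing an Ntfy notification is about as simple as it gets; for example (the topic name is a placeholder):

```typescript
// Send a push notification to everyone subscribed to the topic.
await fetch('https://ntfy.sh/my-production-alerts', {
	method: 'POST',
	body: 'Companion web interface is not responding!',
	headers: {
		Title: 'Service Down',
		Priority: 'high',
		Tags: 'rotating_light',
	},
})
```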
Zabbix can be configured to send alerts to a variety of media types – email, Discord, Slack, SMS, you name it. Several types come included. Someone in the community created an Ntfy media type; I forked it and added additional support options, available here: https://github.com/josephdadams/zabbix-ntfy
So with these two tools, I am able to know instantly when a service has gone down, by receiving a push notification to my phone.
Here is a video that explains how to set up a web scenario in Zabbix and get pinged by Ntfy:
Maybe this will help you monitor your equipment as well! Leave me a comment with what you might use it for.
I was recently hired to add a new feature to my midi-relay software that allows you to capture MIDI data coming in on any MIDI port and send it back to Bitfocus Companion, where you can store it as a variable or even use the MIDI data as a satellite surface to control buttons directly.
Here’s a video walkthrough:
This should enable a lot of fun things with MIDI and automation!
It’s free, so go check it out. If the things I create are helpful to you, a donation to my family is always appreciated: https://techministry.blog/about/
One of my first projects I shared on this blog over 5 years ago was TimeKeeper – something we use to help manage countdowns to service start times, various production elements, etc. 5 years later, it’s still running strong and we use it every week.
I recently put some effort into creating a UI that allows us to add and edit timers from the web interface, so that we could retire the Ross Dashboard custom panel that I created for it. We still use Dashboard, but for this, it was actually more work for our volunteers to use two tools rather than one – one for adding/editing, and one for viewing.
The new UI is simple – you can add a new timer directly from the page or edit an existing one. For now, I haven’t bothered with any permissions because our needs are very simple.
Editing a timer is just as easy.
I also created a Companion 3.0 module for us that allows us to create and modify timers and view them there as well. Now we can easily bump a timer and add 30 seconds if needed just by quickly pressing a button.
The problem for us is that anytime anyone presses the Call light on the intercom party line, any flashers on that party line will light up. This means we can really only have 1 unique flasher per line.
Sometimes, we want or need to get a specific person/position’s attention.
I created some software to help with this. It’s called beacon.
It’s a small app that runs in the system tray and hosts a network API so you can signal a USB busy light, such as the Luxafor Flag or ThingM blink(1). Or, if you don’t have (or don’t want) a physical signal light, you can use an on-screen dot instead.
Luxafor Flag
blink(1)
On-screen digital beacon
I’ve designed this to work in tandem with a custom module for Bitfocus Companion, but since it does have a full API, you can implement any third-party integrations that you like. All of the documentation is on the Github repository: https://github.com/josephdadams/beacon
You can set a beacon to stay a solid color, fade to a new color, flash a color, and more. You can send custom notifications to the user’s window as well as play tones and sounds.
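To give a flavor of what an integration call might look like, here’s a hypothetical request; the endpoint path and payload fields are illustrative stand-ins, and the real routes are in the repository documentation:

```typescript
// Hypothetical example: flash the beacon red twice a second.
await fetch('http://localhost:9000/api/beacon', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({ action: 'flash', color: '#ff0000', rate: 500 }),
})
```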
Here’s a video of the project in action to show you how you can use it: