One of the first blog posts here was about PCO’s custom reports. I’ve written a lot of them and helped a lot of churches get started with their own.
In anticipation of a possible need for split teams, I’ve now created a new custom report that has several customizable features, enhanced checklists, dynamic notes, and more, all without having to write any actual code. You just modify variables at the top of the report.
This new report supports the following:
Customizable header
Custom print order, with variable plan items as columns and/or rows alongside the plan item description
Automatic highlighting of Plan Item Note changes to signify important information
Ability to display Plan Notes for everyone, by team, or by position
Custom CSS for your own unique look
Ability to show headers in their own row, or inline to save space
Here’s the report with headers as their own rows.
Here’s the exact same report, but with headers inline for a cleaner look.
Here’s a video that shows how it all works:
Because of the substantial amount of work I have put into creating and coding this report, I have chosen to make this report available for purchase. I’m pricing it at a point that is affordable for most churches, at $45. Once payment is received, I will send over the report code and help you install it, if needed.
PCO Services Matrix Report with Split Teams, Fully Customizable
If you have a need for a custom report beyond this, contact me! I’m always available for hire for your custom PCO reporting projects, or whatever other custom coding needs your ministry or organization may have.
About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.
It’s now in use in hundreds of places, from churches to event venues to sports stadiums.
Version 2.0 was silently released a few weeks ago. It includes a compiled application that runs natively on Windows, MacOS, and Linux, without the need to install Node.js and other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.
Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.
It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.
Someone from the community designed a new logo! Isn’t it nice?
Since the coronavirus pandemic shut everything down, my whole schedule and routine has changed, like everyone else’s. Being with my family more is really nice. One significant change is that the church I work at has told everyone to stay home and only be in the office when doing a task that can only be done there.
When that happened, I came up with a workflow that would allow me to run video production equipment housed at the church, from my house, in the event that I couldn’t get to the facility, like a few weeks ago when I had to stay isolated waiting on the results of a COVID-19 test (it was negative).
We have a private VPN connection that I can use at my house with my workstation, which is great because it allows me to access all of the internal network devices at the church while I’m at home. From a networking standpoint, it’s as if I’m there. I can screen share to all my computers and use terminal windows to control nearly everything.
With the private VPN, I have Companion 2.0 running on my laptop with a Stream Deck XL as a control surface. I’m able to control the video router (Blackmagic VideoHub), video switcher (Ross Carbonite), recording equipment (AJA Ki Pros), and of course OBS. But getting a monitoring feed in real time with audio was a challenge, especially when we have several Netflix, YouTube, and Disney+ streams going!
I made a page that allows me to do basic cuts between the sources on the switcher. I press the button here, the command goes over the VPN to the switcher, and I get visual feedback from the video conference call with Zoom.
I can change scenes in OBS and even have transport control of the AJA Ki Pro, all remotely!
Enter Zoom! And a Blackmagic ATEM Mini! The ATEM Mini is a relatively new device; it’s basically a small portable video switcher. We sort of panic-bought one when this virus was just starting to spread in our area, in case we needed to be able to do a portable live stream off-campus. Thankfully, we haven’t had to do that yet, but since we have it, I’ve been putting it to use for small events.
The Blackmagic ATEM Mini. It’s a portable 4-input mini switcher.
The ATEM Mini has an HDMI output, but it also has a “webcam output”, which means the sum of your video production can be sent to the computer and received as a normal webcam. This feed can then be brought into Zoom as a camera option!
I am only using one input as this is just a basic HDMI to webcam converter at this point. But if I had more inputs, I could connect them and control it all remotely!
A screenshot of the multiviewer being sent back to me over Zoom.
Overall, I have found it very helpful to have access to this while I work remotely. I could run our live stream on Sundays completely remotely from my house, if I needed to. Along with our Unity Intercom setup, I could even run the switcher and direct cameras from my house for our weekly music recording. I hope I don’t ever have to do that, but it’s nice to know that I could!
Also, since I’m sitting at home more, and being a video DJ for my kids, fulfilling their various TV watching requests, I added a page to the Stream Deck to allow me to control the Roku TV on the other side of the room. This is a module I wrote for Companion that uses Roku’s ECP protocol. It makes life a little easier!
I can control the basic functions of the Roku remote with this module, and even launch Netflix with the push of a button! Now I just need to make it start their favorite shows automatically…
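If you’re curious what a module like this is doing under the hood, Roku’s ECP is just plain HTTP on port 8060 of the TV. Here’s a minimal Node.js sketch of the same idea (Node 18+ for the built-in fetch; the IP address is an example, and this isn’t the Companion module’s actual code, just the protocol calls it wraps):

// Roku ECP (External Control Protocol) is an HTTP API on port 8060 of the TV.
const ROKU_IP = '192.168.1.50'; // example address; use your Roku TV's IP

async function rokuKeypress(key) {
  // key presses are empty POSTs to /keypress/<key>, e.g. Home, Play, Up, Down, Select
  await fetch(`http://${ROKU_IP}:8060/keypress/${key}`, { method: 'POST' });
}

async function rokuLaunch(appId) {
  // ECP can also launch an installed channel by its ID (Netflix is channel ID 12)
  await fetch(`http://${ROKU_IP}:8060/launch/${appId}`, { method: 'POST' });
}

// Example: go Home, then launch Netflix
rokuKeypress('Home').then(() => rokuLaunch(12));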
It is amazing what we can do with technology these days, and it delights me to see technology put to use to serve the church. I hope this is helpful to you! How are you doing remote production during all of this?
Just thought I would share a quick custom panel that shows how to send OSC from Ross Dashboard to other devices.
If you’re not familiar with OSC (Open Sound Control), you can read about it here. Essentially, it is a protocol used for real-time communication between (typically) media devices, synthesizers, etc. It has grown to be used by a wide variety of software for remote control purposes.
To send a message, first a byte array must be constructed. In Dashboard, the easiest way to do this is to use a messageBuilder object and then convert it to a byte array at the end.
function createOSCMessage(cmd, val, varType)
{
var messageBuilder = ogscript.createMessageBuilder();
var len = cmd.length+1;
var pad = (4 - len%4)%4;
messageBuilder.writeString(cmd);
// put null terminator at end of command string
messageBuilder.writeChar(0); // null terminator
// pad end of command string with nulls
for (var i=0; i<pad; ++i)
{
messageBuilder.writeChar(0);
}
This creates the message builder object, inserts the OSC command, and then pads the rest of the bytes with nulls. The command string (including its null terminator) must occupy a multiple of 4 bytes, so the pad is calculated from its length.
Next, the type (float, int, or string) is determined and the value applied:
// set the 4 bytes that identify the format
messageBuilder.writeChar(',');
if (varType == 'float')
{
messageBuilder.writeChar('f');
messageBuilder.writeChar(0);
messageBuilder.writeChar(0);
messageBuilder.writeFloat(val);
}
else if (varType == 'int')
{
messageBuilder.writeChar('i');
messageBuilder.writeChar(0);
messageBuilder.writeChar(0);
messageBuilder.writeInt(val);
}
else
{
messageBuilder.writeChar('s');
messageBuilder.writeChar(0);
messageBuilder.writeChar(0);
messageBuilder.writeString(val);
// OSC string arguments must also be null-terminated and padded out to a multiple of 4 bytes
messageBuilder.writeChar(0);
var valLen = val.length + 1;
var valPad = (4 - valLen%4)%4;
for (var j=0; j<valPad; ++j)
{
messageBuilder.writeChar(0);
}
}
return messageBuilder.toByteArray();
}
The resulting byte array is returned to the function that called it.
To send a float:
function sendOSCMessageFloat(ip, port, cmd, val)
{
ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'float'));
ogscript.debug('OSC Float Sent');
}
var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/float';
var oscFloat = 1.1;
sendOSCMessageFloat(host, port, oscCommand, oscFloat);
To send an int:
function sendOSCMessageInt(ip, port, cmd, val)
{
ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'int'));
ogscript.debug('OSC Int Sent');
}
var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/int';
var oscInt = 1;
sendOSCMessageInt(host, port, oscCommand, oscInt);
To send a string:
function sendOSCMessageString(ip, port, cmd, val)
{
ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'string'));
ogscript.debug('OSC String Sent');
}
var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/string';
var oscString = 'TEST';
sendOSCMessageString(host, port, oscCommand, oscString);
That’s it! Pretty simple using the message builder and byte array.
I have had a few people ask if I could post another walkthrough with more precision on setting up midi-relay to control Chroma-Q Vista (formerly owned by Jands) with their stream decks.
You will need midi-relay running on the computer that runs Vista, and Bitfocus Companion installed and running on a computer/device (it can be the same computer running Vista, or another computer on the network).
To set it all up:
First, you will need to set up the loop-back MIDI port. Open Audio MIDI Setup. It’s in Applications > Utilities.
In the Audio MIDI Setup window, choose Window from the top menu, then Show MIDI Studio.
This opens the MIDI Studio window. You will see a few options here such as Bluetooth, IAC Driver, and Network. Depending on how you may have configured MIDI ports in the past, the number of devices here can vary.
Double click the IAC Driver device. This will open the Properties window. The main thing you need to do is click the checkbox for “Device is online” (if not already checked). You may also want to change the device name to Vista.
You can close out all of the Audio MIDI Setup windows now.
Now you need to start midi-relay running. Open a Terminal window and change directory to where you put the executable file for midi-relay. I put mine in a subfolder within the Documents folder. It’s important that you run the executable while the Terminal window directory is the same folder the executable is in, or things may not work correctly. Once you’ve changed directory to the correct folder, you can drag the executable file from Finder to the Terminal window, or you can type in the executable name manually. Hit enter to run it.
When midi-relay starts up, it will give you a read-out in the console of all the available MIDI in/out ports. You should now have one that says Vista Bus 1.
Open Vista. Go to the User Preferences menu by selecting File > User Preferences.
Go to the MIDI tab.
Under the MIDI Show Control section, set the Device ID to 0 (zero).
Under the External MIDI Ports section, check the box next to the Vista Bus 1 MIDI port.
Click OK.
In Vista, right click on the cue list you want to use with MIDI control, and choose Properties.
Go to the MIDI tab.
Now open the Companion Web GUI on the computer that is running Companion.
Add a new instance by searching for Tech Ministry MIDI Relay.
In the instance configuration, type in the IP address of the computer running Vista and midi-relay. If you’re running Companion on the same computer, you can use IP address 127.0.0.1.
Click Apply Changes.
To Send a MIDI Note On and advance a cuelist:
Add a new button in Companion.
Add a new action to that button, using the midi-relay action, Send Note On.
Under the options for this action, choose Vista Bus 1 for the MIDI port.
By default, it will send channel 0, note A0 (21), with a velocity of 100. Vista does not look for a specific velocity value, only channel and note. Vista will listen to any channel by default, but if you set a specific channel in the Vista MIDI settings, you will need to make sure you send the correct channel from Companion.
Go back to Vista and in the Cuelist Properties, MIDI tab, click Learn next to the Play item. The Play command is what advances a cuelist. The Learn function listens for incoming MIDI notes and makes setting the MIDI note slightly easier (and it proves that it works). You can also just set the note manually if you want.
Go back to Companion and click Test Actions (or press the physical button on your stream deck if you are using one), and the Learn box in Vista will go away, and you’ll see that the note you sent from Companion is now populated in the Vista settings.
Now every time you press that button in Companion, it will advance that cuelist. If you have multiple cuelists, you will need to use different MIDI note values.
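If you’re wondering what is actually going over the wire, a MIDI Note On is just three bytes: a status byte (0x90 plus the channel), the note number, and the velocity. The Companion defaults above would look like this (this is just standard MIDI framing, not midi-relay’s internal code):

var channel = 0; // MIDI channel 1 is 0 on the wire
var note = 21; // A0
var velocity = 100;
var noteOn = [0x90 + channel, note, velocity]; // [144, 21, 100]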
To Send a MIDI Show Control message to go to a specific cue in a cuelist:
Add a new button in Companion.
Add a new action to that button, using the midi-relay action, Send MSC Command.
Choose Vista Bus 1 for the MIDI port.
The default Device ID is 0 (zero) but if you changed that in Vista, make sure it matches here.
The Command Format should be Lighting – General and the Command should be Go.
The Cue field should be the specific Cue Number in Vista of the Cuelist you want to control.
The Cue List field should be the specific Cuelist Number in Vista.
Now every time you press that button in Companion, it will go to that specific cue in that specific cuelist.
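For reference, the MSC “Go” that gets sent here is a standard MIDI Show Control SysEx message, so you can reason about it byte by byte. Here’s a sketch of how that message is framed (standard MSC framing per the spec, not midi-relay’s own code; the device ID, cue, and cuelist values are examples):

function buildMSCGo(deviceId, cue, cueList)
{
  // F0 7F <deviceId> 02 = SysEx, real-time, MIDI Show Control
  // 0x01 0x01 = command format "Lighting (General)", command "Go"
  var bytes = [0xF0, 0x7F, deviceId, 0x02, 0x01, 0x01];
  var cueStr = String(cue);
  for (var i = 0; i < cueStr.length; i++) bytes.push(cueStr.charCodeAt(i)); // cue number as ASCII
  bytes.push(0x00); // delimiter between cue number and cue list
  var listStr = String(cueList);
  for (var j = 0; j < listStr.length; j++) bytes.push(listStr.charCodeAt(j)); // cue list as ASCII
  bytes.push(0xF7); // end of SysEx
  return bytes;
}

// Example: device ID 0, go to cue 1 in cuelist 2
// buildMSCGo(0, 1, 2);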
Here’s a walkthrough video of these steps:
[wpvideo HZriRGlS]
I hope this is helpful! If you’re using MIDI relay, feel free to drop a comment and share how it is working for you!
I love automation. I love making things more efficient and consistent, and I’ve found that on a particular level, automating or simplifying certain tasks through automation can make it easier for volunteers when working in a church production environment.
The latest app that I’ve been enjoying is the iOS “Shortcuts” app that was added to my phone in a recent iOS upgrade. It allows you to use actions within apps or activity on your phone to spawn other actions. Things like “Text my wife how long it will take me to get home, when I leave work” by using the GPS location on my phone. Or, make a shortcut that when you take a picture using the camera app, it is automatically posted to Facebook.
Look for this app on your iOS device.
If you’ve ever used the service IFTTT, you’ll find some of the concepts in the Shortcuts app familiar. Of course, the integration into the phone at a core level with Shortcuts is much nicer. One thing I particularly like is that, once you name a shortcut, you can simply say, “Hey Siri, [shortcut name]” and it will run it.
And, Shortcuts can make HTTP requests (GET, POST, with JSON, etc.) as actions. So, it’s super easy to add a shortcut that triggers a Companion button or a task in a Ross Dashboard custom panel, for example. And that’s one of the ways I’m using the Shortcuts app.
In our production workflow, we use Ross Dashboard custom panels that I created to control nearly every aspect of our video system (and slowly, audio and lighting as I create the connections). It’s very easy to trigger a button via HTTP request, so I set up several shortcuts that I can use to save me time, especially when I am away from the production area or not near a computer running one of the Dashboard panels, as long as my phone is connected to the production network wifi (or I’m connected via VPN if remote).
Here are a few of the shortcuts I’ve created.
All this particular shortcut does is make an HTTP GET request to my master Ross Dashboard Custom Panel, which is listening on port 5400, to trigger the GPI “aud1_psl”.
It’s the same as clicking on this yellow button, but I can run it from my phone, as long as I am connected to the production network!
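Under the hood, the Shortcut is just making an HTTP GET, so anything that can make HTTP calls can do the same thing. Here’s a rough Node.js equivalent (Node 18+ for fetch). Note that the exact URL path is an assumption on my part; it depends on how your Dashboard panel’s web listener is set up to map incoming requests to GPI triggers:

const PANEL_HOST = '192.168.1.20'; // example address of the machine running the Dashboard panel
// port 5400 and the GPI name 'aud1_psl' come from the panel described above
fetch(`http://${PANEL_HOST}:5400/aud1_psl`)
  .then(() => console.log('Pre Service triggered'));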
So, just like that, it’s very easy to do something like this: “Hey Siri, go to Pre Service in Auditorium 1”, and have all of the lights change (by sending a midi-relay command that fires a MIDI Show Control message to our Vista lighting console) and the program screens go to the pre-service loop (by sending a RossTalk command to execute a custom control on the Carbonite to change inputs).
Here’s a video of it in action!
[wpvideo URjPHb4M]
Go check out the Shortcuts app if you aren’t using it already in your production workflow!
If you are a user of Renewed Vision’s ProPresenter software, hopefully by now you’ve heard that they just released version 7 for both MacOS and Windows.
ProPresenter 7.
The new version is more similar between the two operating systems than ever before, and there are a lot of new features, most notably the UI redesign. One other enhancement that I am excited about is that all of the add-on modules (alpha keyer module, communications, MIDI, SDI/NDI output, etc.) are now included as part of the software license. This will be great for us because now we can have these features available on all of our ProPresenter installs, whereas in the past, the pricing model was a limitation for us.
I have been slowly checking out the new version and we will be purchasing an upgraded license soon to roll this out in our various venues within the coming months.
With all of the new features that ProPresenter 7 already has, I thought it would be fun to share my top 8 feature requests that I hope to see implemented. Here they are, in no particular order:
Tally Integration. If you’ve followed this blog, you have probably seen where I’ve mentioned the ProTally software I created to help fill in the gap here so our volunteers could know when their ProPresenter output was on-air. So while tally protocol support (whether it be TSL or data coming directly from something like an ATEM switcher) would likely render tools like ProTally obsolete for a lot of use cases, it would make the experience so much better for the end user, and I’m definitely a fan of that.
HTTP GET/POST slide cues. This would be awesome. Some people do a workaround right now where they put a “web element” on a slide and make it invisible, but a true communication cue to send GET/POST (along with JSON data) whenever I click on a slide would be a great way to open up some automation efforts to trigger other software.
Hide Audio Bin / Re-arrange the interface. This is a simpler one, but the ability to hide the audio bin that we aren’t likely to use as well as being able to re-arrange the UI would be nice to have.
Customizable border on the current active slide. A lot of our volunteers have expressed that it would be nice to have a way to quickly see which slide is active, and sometimes the current border box around the active slide isn’t easy to see. So a way to make that border thicker, change the color, make it blink, etc. would be a nice feature.
A built-in, free, amazing sync option. I’ve written about how we currently do cloud syncing in ProPresenter by using Dropbox and sharing all the libraries to all the machines. It works fine for what it is. But a way to truly share playlists, themes, media, etc. from one ProPresenter install to another, built in, would be awesome, especially if it could use the drive/file sync tools we already use, like Dropbox.
Go To Next Timer showing a countdown. Another simpler one, but it would be really nice if, any time a slide was on an advance timer, the UI showed how much time was left before it advanced (in minutes/seconds).
Web interface to show slide information, clocks, etc. A page where I can view the slides, the current/next slide, timers, messages, etc. A “producer’s page” of sorts. Right now, we use PresentationBridge for this. We would keep this web page open in our control rooms for the director to see so they know exactly where we are at in a presentation or song.
Published and supported REST API. It would be great to have a published and supported interface where we can control ProPresenter remotely. A lot of people have done great work to reverse-engineer the ProRemote app, and that protocol is getting a lot of use through projects like Companion. But something officially documented and supported would be truly great. And on that note, some kind of official support for stream decks would be great too! Whether it is acknowledgement of the Companion project or another avenue.
So there’s my top 8 feature requests! I’m excited about this new version of ProPresenter, because with their ProPresenter+ plan, we are going to see more regular feature updates. If you haven’t checked it out yet, you can demo it for free!
Back in the summer, I posted about a project I had recently finished, which involved sending HTTP requests to a server that would then relay a MIDI output message based on the request that was sent.
We’ve been using that software (dubbed midi-relay) since then to be able to control our Chroma-Q Vista lighting desks remotely across VLANs by using stream decks running Companion. It works pretty well, especially since the midi-relay software is configured to run directly on the lighting consoles upon startup. We have even set up a few crontab entries to send curl commands to the light desks to turn them on at certain times when we don’t want to be on-site just to press a button.
In anticipation of completing my most recent project, “LiveCaption“, which takes audio and transcribes it to text in real-time, I started working on midi-relay 2.0: listening to MIDI input and using that to trigger a response or action.
I figured it was time this thing had a logo.
In both auditoriums at my church, we have Avid S6L audio consoles. These consoles can do a lot, and like most consoles, they have GPIO pinouts to allow you to trigger things remotely, whether as an action originating from the sound console, or externally that then triggers something on the console like recalling a snapshot, muting an input, etc.
Stock photo of the console I found on the Internet.
These are (some of) the I/O pins on the S6L console. It has GPIO and MIDI ports. We use the footswitch input for setting tap tempo.
I started looking at the possibility of using the GPO pins on the console to trigger an external action like sending an HTTP request to Ross Dashboard, Companion, etc. However, there are only 8 GPO pins on this audio board, so I knew that could be a limiting factor down the road in terms of the number of possible triggers I could have.
The S6L also has MIDI In and Out, and through the Events section of the console, it can be used as either a trigger (MIDI In) or an action (MIDI Out) on just about anything.
The Events page on an Avid S6L console. All kinds of things can be used as triggers and actions here! In this particular event, I’ve created a trigger that when the Snapshot “Band” is loaded, it sends MIDI Out on Channel 1 with Note 22 (A#0) at Velocity 100. MIDI-Relay then listens for that MIDI message and sends an HTTP POST request to the LiveCaption server to stop listening for caption audio.
We already have a snapshot that we load when we go to the sermon/message that mutes things, sets up aux sends, etc. and I wanted to be able to use that snapshot event to automatically start the captioning service via the REST API I had already built into LiveCaption.
In the previous version, midi-relay could only send Note On/Off messages and the custom MSC (MIDI Show Control) message type I had written just for controlling our Vista lighting consoles. With version 2.0, midi-relay can now send MIDI out of all of the channel voice MIDI message types:
Note On / Note Off
Polyphonic Aftertouch
Control Change
Program Change
Pitch Bend
Channel Pressure / Aftertouch
It can also send out:
MSC (MIDI Show Control), which is actually a type of SysEx message
Raw SysEx messages, formatted in either decimal or hexadecimal
And, midi-relay can now listen for all of those channel voice and SysEx messages and use them to trigger one of the following (there’s a sketch of what a trigger might look like after this list):
HTTP GET/POST (with JSON data if needed)
AppleScript (if running midi-relay on MacOS)
Shell Script (for all OS’s)
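To give you a feel for the idea (not the project’s actual schema; the field names below are made up for illustration, so check the midi-relay documentation for the real trigger format), a trigger pairs a MIDI message to watch for with an action to fire, like the S6L snapshot example above:

// HYPOTHETICAL trigger definition, for illustration only
var exampleTrigger = {
  midiport: 'USB MIDI Interface', // the MIDI port the console is connected to
  messagetype: 'noteon', // watch for a channel voice Note On
  channel: 0, // MIDI channel 1
  note: 22, // A#0, as sent by the console's Event
  action: 'http', // what to do when the message arrives
  url: 'http://livecaption.example.local/api/stop', // hypothetical LiveCaption endpoint
  method: 'POST'
};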
There are a few software and hardware products out there that can do similar things, like the BomeBox, but I wanted to build something less-expensive and something that could run on a Raspberry Pi, which is exactly how we’ve deployed midi-relay in this case.
Here is the Raspberry Pi running midi-relay, connected to the MIDI ports on the S6L via a USB to MIDI interface. It tucks away nicely at the back of the desk.
Now we can easily and automatically trigger the caption service to start and stop listening just by running the snapshots on the audio console that we were already doing during that transition in the service. This makes it easier for our volunteers and they don’t really have to learn a new thing.
The documentation is pretty thorough if you want to use the API to send relay messages or set up new triggers, but you can also use the new Settings page running on the server to do all that and more.
From the Settings page, you can view available MIDI ports, add/delete Triggers, view detected midi-relay hosts running on the network, and send Relay messages to other hosts.
And if you’re a Companion user for your stream deck, I updated the module for Companion to support the new channel voice MIDI relay messages as well! You’ll need to download an early alpha release of Companion 2.0 to be able try that out. Search for “Tech Ministry MIDI Relay” in Companion.
Here’s a list of the Raspberry Pi parts I used, off Amazon:
I hope this is helpful to you and your projects! If you need any help implementing along the way, or have ideas for improvement, don’t hesitate to reach out!
For awhile now, I’ve wanted to be able to offer live captions for people attending services at my church who may be deaf or hard of hearing, to allow them to follow along with the sermon as it is spoken aloud. I didn’t want them to have to install a particular app, since people have a wide variety of phone models and OS’s, and that just sounded like a pain to support long-term. I also wanted to develop something low-cost, so that more churches and ministries could benefit from it.
I decided to take concepts learned from my PresentationBridge project from last year’s downtown worship night and use them for this project. The idea was essentially the same: I wanted to be able to relay, in real-time, text data from a local computer to all connected clients using the Node.js socket.io library. Instead of the text data coming from something like ProPresenter, the text data would be the results of the Web Speech API’s processing of my audio source.
If you’re a Google Chrome user, Chrome has implemented W3C’s Web Speech API, which allows you to access the microphone, capture the incoming audio, and receive a speech-to-text result, all within the browser using JavaScript. It’s fast and, important to me, it’s free!
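To give you an idea of how little code the browser side needs, here’s a minimal sketch of the transcription loop (this assumes Chrome’s webkitSpeechRecognition and that the socket.io client is loaded on the page; the 'caption' event name and the variable names are mine, not LiveCaption’s actual code):

var socket = io(); // socket.io client connection back to the Bridge server
var recognition = new webkitSpeechRecognition();
recognition.continuous = true; // keep listening instead of stopping after one phrase
recognition.interimResults = true; // stream partial results as words are recognized

recognition.onresult = function (event) {
  var transcript = '';
  for (var i = event.resultIndex; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  socket.emit('caption', transcript); // relay the text so the server can push it to connected clients
};

recognition.start(); // Chrome will prompt for microphone access the first time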
Here is how it works: The computer that is doing the actual transcribing of the audio source to text must use Google Chrome and connect to a Bridge room, similar to how my PresentationBridge project works. Multiple bridge rooms (think “venues” or “locations”) can be configured on the server, and if multiple rooms are available, end users will be given the option to choose the room they want to join and receive text from when they connect. The browser requirement only applies to the computer doing the transcribing; everyone else can use any browser on any computer or device they choose.
This is the primary Bridge interface that does the transcribing work.
From the Bridge interface, you can choose which “Bridge” (venue) you want to control. If the Bridge is configured with a control password, you will have to enter it. To start transcribing, just click “Start Listening”; you’ll have to allow Chrome access to the microphone/audio source (only the first time). Once connected, you can choose whether to send text data to the connected clients or turn it off (helpful when you want to test transcription without sending it out to everyone), or send the users to Logo Mode (helpful when you’re not broadcasting). You can also redirect all users to a new webpage at any time, send a text/announcement, or reload their page entirely. There is also a simple word dictionary that can be used to replace commonly misidentified words with their proper transcription.
A note about secure-origin and accessing the microphone: If you’re running this server and try to access the page via localhost, Google Chrome will allow you to access the microphone without a security warning. However, if you are trying to access the page from another computer/location, the microphone will be blocked due to Chrome’s secure-origin policy.
If you’re not using a secure connection, you can also modify the Chrome security flag to bypass this (not recommended for long-term use because you’ll have to do this every time Chrome restarts, but it’s helpful in testing):
Navigate to chrome://flags/#unsafely-treat-insecure-origin-as-secure in the address bar.
Find and enable the Insecure origins treated as secure section.
Add any addresses you want to ignore the secure origin policy for. Remember to include the port number (the default port for this project is 3000).
Save and restart Chrome.
Here is a walkthrough video of the captioning service in action:
[wpvideo r6P0iWGj ]
I chose to host this project on an Amazon EC2 instance, because my usage fits within the free tier. We set up a subdomain DNS entry to point to the Elastic IP so it’s easy for people in the church to find and use the service. The EC2 instance uses Ubuntu Linux to run the Node.js code. I also used nginx as a proxy server. This allowed me to run the service on my custom port while forwarding the necessary HTTPS (port 443) traffic to it, which helps with load balancing and keeps my Node.js server from having to handle all of that secure traffic itself. I configured it to use our domain’s SSL certificate.
I also created a simple API for the service so that certain commands like “start listening”, “send data”, and “go to logo” can be done remotely without user interaction. This will make it easier to automate down the road, which I plan to do soon, so that the captioning service is only listening to the live audio source when we are at certain points in the service, like the sermon. Because it’s just a simple REST API, you can use just about anything to control it, including a Stream Deck!
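As a hypothetical example (the endpoint path below is a placeholder, not the service’s documented route), anything that can make an HTTP request can start the captioning remotely, which is exactly what a Companion button does:

const CAPTION_HOST = 'captions.example.church'; // example subdomain pointing at the EC2 instance
fetch(`https://${CAPTION_HOST}/api/startListening`, { method: 'POST' })
  .then(() => console.log('captioning started'));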
We deployed the transcribing machines in our two auditoriums using Chromebooks, an inexpensive solution that runs the Chrome browser!
In order to give the devices a direct feed from our audio consoles, I needed an audio interface. I bought an inexpensive one off Amazon that’s just a simple XLR to USB cable. It works great on Mac, PC, and even Chromebooks.
XLR to USB audio interface so we can send a direct feed from the audio console instead of using an internal microphone on the computer running the Bridge.
At my church, we often delay or “time slip” the preaching of the service in the room where the pastor isn’t physically present. To do this, we record the sermon video as it happens live, and then play it back out either a few seconds or few minutes later.
This has been a good workflow for us. Often though, in the delayed auditorium, it’s helpful for the worship leader to know when the server is ready to play back the delayed sermon video. We usually communicate this over the intercoms into the band in-ears, whenever there’s an appropriate break to do so, like when they aren’t actively singing, praying or talking. That works well, but sometimes it means we have to wait longer than we should to be able to let them know we are ready to play back the video.
So, I thought, if we had a simple cue light that we could use to let them know when we’re ready, I wouldn’t need to have my team wait to communicate. The band could just look at the light and know we are ready for them. It would also give that boost of confidence before they hear from us in the in-ears.
To create this system, I bought a Raspberry Pi Zero W and a blink(1) USB light. If you haven’t heard about the blink(1) light, I wrote about using it in this post. I bought the Pi Zero in a kit that came with a black case and power supply.
I bought this kit off Amazon for $27.
I had initially envisioned this light being located on stage but after talking to my team, they actually preferred that it be located on top of the camera back in the tech booth, so they could easily see it.
Here is the notification light. This is easy to see from the stage. That’s a professional gaff tape install. Currently we move this device back and forth between auditoriums as we alternate which room is the video venue.
I’ve been learning Python recently, so I whipped up a simple Python web server that accepts HTTP requests to then light up the blink(1) light. For now, I’ve limited it to red and green. Red = problem (we aren’t sufficiently delayed, the server is not ready, etc.), green = ready/good for playback anytime, and clear/no light = no status. I set up the Pi to start this web server when it boots up, so the whole thing is ready as soon as it powers on.
We trigger the light using a Stream Deck Mini running Companion located at the video server. The operator has three buttons, and each one sends an HTTP request to the Pi Zero to trigger the light.
This Stream Deck Mini is running Companion and sends HTTP GET Requests to the Pi Zero server.
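For illustration, each of those Companion buttons is just making a plain HTTP GET; the route and port below are hypothetical and depend on how the Python server on the Pi defines them:

const PI_HOST = '192.168.1.30'; // example address of the Pi Zero W
fetch(`http://${PI_HOST}:8080/green`) // hypothetical route for "ready for playback"
  .then(() => console.log('cue light set to green'));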
I also have a command set for each button action on the stream deck to update a button on another stream deck in the other control room, so each director knows the status of the video server. This doesn’t replace our intercom communication, but it certainly augments it!
Overall, we’re very happy with this notification system! All in, it cost us about $55 for the Pi Zero kit and the blink(1) light, and of course, the code was free. 🙂 It’s available on Github if you need it! That’s where I will provide updates as I add more features to this.