Using a Raspberry Pi Zero W and a blink(1) light for silent notifications

At my church, we often delay, or “time slip,” the sermon in the auditorium where the pastor isn’t physically present. To do this, we record the sermon video as it happens live and then play it back anywhere from a few seconds to a few minutes later.

This has been a good workflow for us. In the delayed auditorium, though, it’s helpful for the worship leader to know when the server is ready to play back the delayed sermon video. We usually communicate this over the intercom into the band in-ears whenever there’s an appropriate break, like when they aren’t actively singing, praying, or talking. That works well, but it sometimes means we have to wait longer than we’d like just to let them know we’re ready to roll the video.

So, I thought, if we had a simple cue light that we could use to let them know when we’re ready, I wouldn’t need to have my team wait to communicate. The band could just look at the light and know we are ready for them. It would also give them a boost of confidence before they hear from us in the in-ears.

To create this system, I bought a Raspberry Pi Zero W and a blink(1) USB light. If you haven’t heard about the blink(1) light, I wrote about using it in this post. I bought the Pi Zero in a kit that came with a black case and power supply.

I bought this kit off Amazon for $27.

I had initially envisioned this light being located on stage, but after talking to my team, I learned they actually preferred that it sit on top of the camera back in the tech booth, where they could easily see it.

Here is the notification light. This is easy to see from the stage. That’s a professional gaff tape install. Currently we move this device back and forth between auditoriums as we alternate which room is the video venue.

I’ve been learning Python recently, so I whipped up a simple Python web server that accepts HTTP requests and lights up the blink(1) accordingly. For now, I’ve limited it to red and green: red means there’s a problem (we aren’t sufficiently delayed, the server isn’t ready, etc.), green means we’re ready for playback anytime, and no light means no status. The Pi starts the web server when it boots, so getting it going is as simple as plugging it in.
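
To give you an idea of how little code this takes, here’s a minimal sketch of the concept. My actual server is written in Python, but the same idea in Node.js, using the node-blink1 package, looks something like this (the routes and port here are just for illustration, not necessarily what my server uses):

var http = require('http');
var Blink1 = require('node-blink1');

var blink1 = new Blink1(); // opens the first connected blink(1)

http.createServer(function (req, res)
{
  if (req.url === '/green')
  {
    blink1.fadeToRGB(500, 0, 255, 0); // green = ready/good for playback
  }
  else if (req.url === '/red')
  {
    blink1.fadeToRGB(500, 255, 0, 0); // red = problem (not delayed enough, server not ready, etc.)
  }
  else if (req.url === '/clear')
  {
    blink1.setRGB(0, 0, 0); // clear = no status
  }
  res.end('OK');
}).listen(8080);

With something like this running at boot, anything on the network that can make an HTTP GET request, like a Companion button, can set the light.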

We trigger the light using a Stream Deck Mini running Companion, located at the video server. The operator has three buttons, and each one sends an HTTP request to the Pi Zero to set the light.

This Stream Deck Mini is running Companion and sends HTTP GET requests to the Pi Zero server.

I also have a command set for each button action on the Stream Deck to update a button on another Stream Deck in the other control room, so each director knows the status of the video server. This doesn’t replace our intercom communication, but it certainly augments it!

Overall, we’re very happy with this notification system! All in, it cost us about $55 for the Pi Zero kit and the blink(1) light, and of course, the code was free. 🙂 It’s available on GitHub if you need it! That’s where I’ll post updates as I add more features.

Controlling Planning Center LIVE with a Stream Deck

In my last post, I mentioned a great tool, Companion, that integrates with the Elgato Stream Deck. I’ve had the opportunity to write a few modules for it to extend its control capabilities, like controlling a CueServer, or my own software, ProTally.

If you work in tech for a church, chances are you use, or have at least heard of, Planning Center Online to manage your worship services and people. PCO’s Services product has a feature called Services LIVE that lets you mark where you are in a service flow while the service is ongoing, updating anyone who may be watching. It also records the times, so you can look back later and answer questions like, “Did that song we said would take 5 minutes actually take more like 6 minutes and 30 seconds?” It’s a very useful tool.

The interface to advance a LIVE plan, however, has not been the best for our volunteers. Even within the PCO app, the buttons to advance a plan to the next item are rather tiny, and some of my team have trouble knowing whether or not they hit the button.

This is the standard PCO Live interface. The small double arrows at the bottom left and right of the screen are the controls. Our volunteers have a hard time pressing these.

One thing that makes Planning Center Online great is that they love developers, and they’ve made a very extensive Application Programming Interface (API) available for anyone to use. This means you can get access to your service and plan data without having to actually click and browse the website.

I delved into that API this past week and used it to create a new module for Companion. One caveat of the API is that in order to advance a live plan, you have to know both the service type ID and the plan ID, which requires traversing the API data and making multiple requests. If you’re a programmer, this makes sense. If you’re just an end-user, it may not be as straightforward. So, I set out to make something easy enough for anyone to use.

Here is a walkthrough video on how the module works:

What it actually does:

  1. When you first load the module and supply it with the authentication tokens, it requests all of the available service types and stores that internally.
    https://api.planningcenteronline.com/services/v2/service_types
  2. Then it asks for the next 7 upcoming plans for each service type based on the list that was just retrieved. This is then used to build the drop-down list so you can choose your plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans?filter=future&per_page=7
  3. When you send a “Previous” or “Next” command, it first asks for the LIVE information for that selected plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live
  4. It checks who the current controller of the plan is and compares that to an internal variable in Companion that represents the owner of the authentication token.
  5. If the current controller is null, a command is sent to toggle control to the token owner, and the returned value of the current controller is stored in that internal variable so we know who “we” are for next time.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/toggle_control
  6. If the current controller is not null, a toggle command is sent to release control of the plan to no one, and then a toggle command is immediately sent again so that control is toggled to us. The reason for this is that if our authentication key is not the current controller, the API will return an error when we try to advance the plan.
  7. Now that we know we are in control, the current controller value returned by the API is stored as an internal variable, and then the next or previous command is sent to advance the plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/go_to_next_item

Because Node.js is asynchronous, all of this is done through Promises, which are similar in concept to callback functions but allow for cleaner, easier-to-read code.
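
Condensed down, steps 3 through 7 look something like the sketch below. To be clear, this is an illustration of the request flow rather than the actual module code: I’m using node-fetch here, the appId/secret variables stand in for your Planning Center authentication token, I’ve assumed these controller endpoints are POSTs, and I’ve simplified the shape of the JSON response:

var fetch = require('node-fetch');

var appId = 'your-app-id';   // from your PCO personal access token
var secret = 'your-secret';

var base = 'https://api.planningcenteronline.com/services/v2';
var headers = { Authorization: 'Basic ' + Buffer.from(appId + ':' + secret).toString('base64') };

function advancePlan(serviceTypeId, planId)
{
  var liveUrl = base + '/service_types/' + serviceTypeId + '/plans/' + planId + '/live';

  return fetch(liveUrl, { headers: headers })
    .then(function (res) { return res.json(); })
    .then(function (live)
    {
      // Step 4: who controls the plan right now? (response shape simplified for illustration)
      var controller = live.data.relationships.controller.data;
      if (controller === null)
      {
        // Step 5: no one has control, so a single toggle takes it for us
        return fetch(liveUrl + '/toggle_control', { method: 'POST', headers: headers });
      }
      // Step 6: someone has control; release it, then immediately take it back
      return fetch(liveUrl + '/toggle_control', { method: 'POST', headers: headers })
        .then(function () { return fetch(liveUrl + '/toggle_control', { method: 'POST', headers: headers }); });
    })
    .then(function ()
    {
      // Step 7: we are in control, so advance the plan
      return fetch(liveUrl + '/go_to_next_item', { method: 'POST', headers: headers });
    });
}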

So, stay tuned for this module to become available in the next stable release of Companion, and if you’re willing to try it out in development mode, it’s available now!

The PCO Live module in action!

Support for blink(1) now available in ProTally!

I wrote ProTally last year so our volunteers running ProPresenter could know when their source was on screen or about to be on screen. It has been very helpful in minimizing mistakes like making distracting graphics changes while on-air. It supports tally data from our Ross Carbonite switchers, but I’ve also written support for the TSL 3.1 protocol, Blackmagic ATEM switchers, OBS Studio scenes, and most recently, Bitfocus Companion.

I recently picked up a blink(1) to test out for another project I’m working on. If you’ve not heard of it, the blink(1) is a small $30 USB device with LEDs built in, designed to give you a quick-glance notice of anything on your computer. The creators have made libraries for several popular programming languages, including Node.js (the language ProTally is written in), to interact with it.


I decided to get my feet wet and learn about the device’s capabilities by integrating it with ProTally. Since ProTally can already read and work with tally data from so many different types of sources, it’s primed to take that data and act on it in different ways, not just on-screen.

So, I am pleased to announce that ProTally now supports up to four blink(1) devices, which mirror the color the user chooses for an on-screen tally box. The user can choose between showing the tally color in a box on their monitor (like normal), on a connected blink(1) device, or both. And if you use more tally boxes than you own blink(1) devices, you can share one blink(1) across multiple tally boxes, with the higher box getting priority.
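
Conceptually, the sharing logic just walks the tally boxes in priority order and sends the first active color it finds to the device. Here’s a rough sketch of that behavior, again with the node-blink1 package (this is illustrative, not ProTally’s actual code):

var Blink1 = require('node-blink1');
var blink1 = new Blink1();

// Each box: { enabled: true/false, color: { r: 0-255, g: 0-255, b: 0-255 } or null }
// boxes[0] is the highest-priority box.
function updateSharedBlink1(boxes)
{
  var winner = boxes.find(function (box)
  {
    return box.enabled && box.color !== null;
  });

  if (winner)
  {
    blink1.fadeToRGB(100, winner.color.r, winner.color.g, winner.color.b);
  }
  else
  {
    blink1.setRGB(0, 0, 0); // no box has a tally color, so clear the light
  }
}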


The latest release of ProTally, with support for blink(1) devices as tally lights, is available on GitHub now, so go check it out!

Using a Stream Deck as a Production Controller, Revisited

One of my first posts on this blog detailed how I wrote software in Node.js to interface with an Elgato Stream Deck and control some of our production equipment: video switchers, router, Ross Dashboard, etc. It’s time to revisit that.

We’ve been using my controller every week in our control rooms and tech booths for about a year now. My team loves it. It integrates into our centralized production workflow, where each deck sends commands to a central Dashboard panel, which runs the command and then sends updates out to all of the connected Stream Decks.

However, I haven’t had much time to make it a better product for other people. I added support for the Stream Deck Mini when it was released, but that’s about it; I haven’t had time or cause to do much else with it. So, for that reason, I wanted to share with you a piece of software that is under constant, active development: Bitfocus Companion.


Companion is written in Node.js and packaged in Electron just like my product, so it can run on Mac, Windows, or Linux. But it can do so much more than my controller! One of the best features is the web-based management interface, which lets you add actions to buttons easily and on the fly. It supports a ton of production equipment, and chances are good that your gear is already on the supported list; if not, someone can likely create a module for it.

I was recently asked to join the Companion development team, so I’ve started making modules for Companion to integrate with software and gear that we have. I’ve created a module for Interactive Technologies’ CueServer, which we have in a couple of our venues here.

Here are some actions you can perform on a CueServer now with the module I created for Companion.
An example of a key down action for triggering a CueServer macro in Companion.

If you use ProTally, my on-screen tally box notification software, and want to integrate with Companion, I made a module for that too! Make sure to download the latest ProTally release, which supports this feature! With Companion, in addition to Preview and Program windows, you can also send a Beacon, which flashes at a custom rate and color. Check out this video for a demo:

Both of these modules are available in the bleeding-edge builds of Companion and will be included in the next stable release soon.

So, if you’re looking for a great production controller that integrates with the Stream Deck, go check out Companion! It’s only going to get better from here!

Using Node.js and a Raspberry Pi to monitor a Streaming ACN network for DMX changes and trigger actions

A while back, I wrote about the shade controller I created using Node.js and a USB relay running on a Raspberry Pi Zero. It works great: we can raise and lower the shade from anywhere on the network. However, I’ve always wanted a way to control it a little more automatically. The lighting volunteer is typically the person who operates the remote for the shade, so I wanted to automate that part of the process for them, so the shade raises and lowers exactly when we want it to, without them having to use an extra tool or device.

As I was working on some networking changes to one of our lighting consoles (we use Jands L5 consoles running Chroma-Q’s Vista 3), I had an idea… What if we could monitor the Streaming ACN lighting network for data changes just like any lighting node, and use that to trigger an action?

If you’ve not heard of Streaming ACN (sometimes called sACN, or by its official name, E1.31), it’s an Ethernet-based protocol for sending DMX address and value information from a lighting console to receiver nodes, which then relay the DMX information to lighting fixtures. It uses multicast traffic to send the information, so it’s very fast and efficient. At my church, we have several DMX universes of lighting information going over the network for each auditorium, controlling all of the light fixtures.

Luckily for me, a base protocol module for E1.31 was already available for Node.js. Using that module, I sat down, prototyped a solution, and had something working in just a couple of hours. I’m calling my software sACN Translator, and I’ve deployed it to a Raspberry Pi for production. It offers a simple REST API that lets you control which universes it listens to and which fixtures to run triggers for. I also created a simple web interface that uses this API.

Here is the simple web interface which interacts with the REST API.

Here is how I set it up on our system to trigger the shade controller. I started by adding two fixtures to the L5 console on Universe 1 (where I happened to have some spare room in my DMX addresses). I called these fixtures “Shades Up” and “Shades Down”, with DMX addresses 511 and 512.

Here are the two “fixtures” on the layout, with notes attached.
I labeled the fixtures as generic “utility” fixtures with 1 DMX address each.

Then, I added entries in sACN Translator to monitor Universe 1 on the network and look for value changes on fixture addresses 511 and 512. I set it to run an HTTP trigger any time the value reaches 255 (100%). So, when I put the Shades Down fixture at 100% on the lighting console, the software sees that value, looks for a match in its list of fixtures, and runs the corresponding HTTP request against the Raspberry Pi Zero connected to the USB relay, which lowers the shade.
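
To show how little is involved, here’s a stripped-down sketch of that listen-and-trigger loop using the same e131 module for Node.js. The addresses and trigger URLs are hard-coded here as examples; the real software reads them from its fixture list:

var e131 = require('e131');
var http = require('http');

var server = new e131.Server([1]); // listen for sACN multicast traffic on universe 1

var lastValues = {}; // remember the last value per address so we only trigger on a change

server.on('packet', function (packet)
{
  var slots = packet.getSlotsData(); // buffer of DMX values; index 0 is address 1

  checkFixture(511, slots[510], 'http://192.168.1.50/shade/up');   // example trigger URL
  checkFixture(512, slots[511], 'http://192.168.1.50/shade/down'); // example trigger URL
});

function checkFixture(address, value, url)
{
  if (value === 255 && lastValues[address] !== 255)
  {
    http.get(url); // the fixture just hit 100%, so fire its HTTP trigger once
  }
  lastValues[address] = value;
}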

Here is a video of it in action:

Pretty cool! I decided to use a separate fixture address for each trigger action, but I didn’t have to; I could have used just one fixture and watched for two separate lighting values.

So now, all the operator has to do is run the cues like normal, and the programming does the rest! I’ve made this software available for free on my GitHub repository. Let me know how it works for you!

Using Google Apps Script with user input to automate repetitive tasks in Google Docs

Do you ever find yourself doing the same repetitive tasks in Google Docs? (Or any of the G Suite apps?) I sure do. At my church, we create a Google Doc every week for all of the “talking points”: the parts of the service that aren’t song or sermon, where we script out what someone needs to say or communicate during that portion.

Here is a sample document that we use each week.

A couple years ago, I started creating template files to help my team do this every week, because having the template already there, with common headers, the service date, etc., removed the barrier to getting down to writing the actual words. Creating the files wasn’t too complicated, and after a while, I started making them in bulk: I would sit down and make 3-4 months’ worth of documents at a time, copying my master template, editing the new file, updating the date, and so on. Then we added a second auditorium, which doubled the number of documents I needed to create.

With the new year, it was time to create more documents, so I decided this time around that I would create a script to help automate this task using the framework within Google Apps Script.

If you’ve not heard of or used Google Apps Script (GAS), it’s a scripting language based on JavaScript for lightweight application development. All of the code runs on Google’s servers to interact with your documents. If you’ve ever used an “add-on” in Google Apps, it’s built on this scripting framework.

It’s easy to get started if you know JavaScript. From any document, just go to Tools > Script Editor. This opens a new tab where you can start writing Apps Script.

Here is my script:


function myFunction()
{
  var ui = DocumentApp.getUi();

  var templateDocId = '[templateid]'; // put the document ID of the master template file here

  var prompt_numberOfDocs = ui.prompt('How many Talking Point Documents do you want to create?');
  var prompt_startingDate = ui.prompt('What is the starting date? Please enter in MM/dd/yyyy.');

  var numberOfDocs = parseInt(prompt_numberOfDocs.getResponseText());
  var startingDate = prompt_startingDate.getResponseText();

  var prompt_venueResponse = ui.prompt('Venue', 'Create Documents for both Auditoriums? If no, please type in the Venue Title and click "No".', ui.ButtonSet.YES_NO);

  var venueTitle = '';
  var bothAuditoriums = true;

  if (prompt_venueResponse.getSelectedButton() == ui.Button.NO)
  {
    venueTitle = prompt_venueResponse.getResponseText();
    bothAuditoriums = false;
  }

  var date = new Date(startingDate);

  var htmlOutput = HtmlService
    .createHtmlOutput('Creating ' + numberOfDocs + ' documents. Please stand by...')
    .setWidth(300)
    .setHeight(100);

  ui.showModalDialog(htmlOutput, 'Talking Points - Task Running');

  for (var i = 0; i < numberOfDocs; i++)
  {
    var loopDate = new Date(date.getTime() + (i * 7 * 3600000 * 24)); // adds i * 7 days to the starting date, creating a new date object for each week
    var documentName = 'Talking Points - ' + Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MMMM dd, yyyy");
    var documentDate = Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MM/dd/yyyy");
    if (bothAuditoriums)
    {
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 1)', 'Aud 1', documentDate);
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 2)', 'Aud 2', documentDate);
    }
    else
    {
      documentName += ' (' + venueTitle + ')';
      createNewTalkingPointDocument(templateDocId, documentName, venueTitle, documentDate);
    }
  }

  // close the dialog with a page whose script immediately calls google.script.host.close()
  htmlOutput = HtmlService
    .createHtmlOutput('<script>google.script.host.close();</script>')
    .setWidth(300)
    .setHeight(100);
  ui.showModalDialog(htmlOutput, 'Talking Points - Task Running');
}

function createNewTalkingPointDocument(templateDocumentId, documentName, venueTitle, documentDate)
{
  // Make a copy of the template file
  var documentId = DriveApp.getFileById(templateDocumentId).makeCopy().getId();

  // Rename the copied file
  DriveApp.getFileById(documentId).setName(documentName);

  // Get the document body as a variable
  var body = DocumentApp.openById(documentId).getBody();

  // Insert the entries into the document
  body.replaceText('##Venue##', venueTitle);
  body.replaceText('##Date##', documentDate);
}

Once you have a script in place, you can choose triggers for when it should run: when the document is opened, on a schedule, and so on.
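
For example, one easy way to put the script in front of a volunteer is a simple onOpen trigger that adds a custom menu to the document (the menu and item names here are just illustrative):

function onOpen()
{
  DocumentApp.getUi()
    .createMenu('Talking Points')
    .addItem('Create Documents...', 'myFunction')
    .addToUi();
}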

Here is the new template with the script in action:


First, I ask how many documents should be created. 1, 5, 500, whatever I need.


Next, I ask for the starting date. We specifically use these for Sunday services, so I’ve programmed the script to take this starting date and then calculate every 7 days when creating multiple documents.


Then, I ask the user if they want to create documents for both auditoriums, or if this is for a special service or off-site service, etc. Typically we want them for both auditoriums, but the one-off feature makes things easy for those types of services too.


As the script runs, it displays this dialog box. Creating that many documents can take a while, and I wanted the user to be aware of this. The box goes away automatically when the process is complete.

Now that we have this, I can pass the task on to anyone on our team, anytime they need these documents! And it saves a good bit of time: I definitely spent less time creating this script than I would have spent creating 3-4 months’ worth of documents manually, and now I never have to do that again!

How can you use Google Apps Script to automate some of your more repetitive tasks?