Live Camera Production: A Technical Walkthrough of our Video System

I talk about programming and software and building solutions here a lot, but I thought I would write a post about something else I’m passionate about: live camera production. At my church, for the last 15 years or so, I’ve had the pleasure of directing cameras for the annual Christmas program. We call the program “Jingle Jazz” because the music is mostly centered around a jazz format. In church terms, it’s an “invest and invite” event where people can bring their friends, neighbors, and co-workers for a great first exposure to Fellowship Greenville and have a fun, relaxing evening filled with various styles of music.

This is one of a small handful of times a year when we get to maximize the potential of our volunteers and systems and put it all to the test. I always try to challenge myself to make it better than the year before, whether that’s adding more cameras, equipping volunteers, or even automating something.

In years past, I had a huge role: writing scripts, creating and producing videos, and working late night after late night after late night, putting “all of me” into this event to make it happen! In recent years, the workload has been balanced a lot better and my job role has shifted some, so I no longer have to put in so many late nights before the event. I did still manage to get in almost 16,000 steps one day last week, though!

[Photo: The steps I took in one day. Pretty high for me!]

This year, I set a target goal of 14 cameras. I put out a call for volunteers and 11 people signed up! We used our primary auditorium cameras, older (12-15 year old) cameras with component-to-SDI adapters, borrowed production equipment from the communications department, and 4 rented cameras, and I even traded some of my programming time to a local university in return for borrowing some cameras and lenses. Overall, I felt like we kept costs down by being good stewards of what we already had and renting where needed.

One thing I did in advance that really helped me to succeed was to plot all of my patching across patchbays and plates in a spreadsheet. It helped me think through all the limitations I might face, especially when multiple cameras needed a signal/data cable as well as genlock/reference. Some of the cameras didn’t support genlock, so I had to frame sync those within the switcher. The Ross Carbonite switcher has 6 frame syncs, so after I ran out of syncs on the switcher for Auditorium 1, I actually sent signal to our switcher for Auditorium 2, synced them, and sent them back to the other switcher on aux sends! It took both control rooms to be able to pull off this many cameras, primarily because of the camera equipment we had available.

[Photo: This spreadsheet kept me in line to make sure I didn’t forget to patch anything!]

For intercoms, everyone was on a wired Clearcom. We used a combination of belt packs we already had, plus adapters I made to work with some older gear we used to use. The two mobile stage cameras used Unity Intercom, bridged to our Clearcom system.

[Photo: The view during rehearsal.]

This year, I knew I wanted to record what I call the “tech cut”: the multiviewer feed combined with the intercom chatter, saved for review and training. The Carbonite switcher has two multiviewer outputs, so I dedicated one of them to viewing all 14 cameras for the recording.

[Photo: Another view of the control room.]

Because the multiviewer boxes were so small, I wanted a way to see any camera on a larger screen, so I rolled in a TV cart, patched it to a MiniME output, and controlled it from a Stream Deck (with Companion). Using Custom Controls, I was also able to have the multiviewer show a white box around whatever source was active on that TV cart. Here is a video of that in action:

[wpvideo z5nxUcc4]

One thing I’m really glad we did this year was to treat the LED wall we have center stage as more of a lighting/stage element than something our video team needed to drive. It was nice because the lighting guys controlled it all and I didn’t have to think about it! We used PVP with motion backgrounds and Christmas-themed b-roll on the screen most of the time, occasionally cutting to a graphic here and there as needed.

Like any service, it takes people to make it happen. We have great volunteers and staff here that I get to work with and lead.

[Photo: The crew!]

As we wrapped up this event and I watched everyone serving with such joy, even though it was a lot of late nights, I was reminded of this quote from author Simon Sinek:

When we work hard on something we don’t believe in, it’s called stress. When we work hard on something we believe in, it’s called passion. – Simon Sinek

Working and serving in tech ministry has to come from a place of passion, or it will always be stressful. Colossians 3:23-24 says, “Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.”

May we always work heartily on what we believe in. Not just programming, software, or live production, but seeing God transform lives, and people pursuing life and mission with Jesus.

If you’d like to watch our tech cut, here it is!

Using Node.js on a Raspberry Pi to listen to MIDI messages from an Avid S6L console to trigger HTTP requests or run scripts

Back in the summer, I posted about a project I had recently finished, which involved sending HTTP requests to a server that would then relay a MIDI output message based on the request.

We’ve been using that software (dubbed midi-relay) since then to control our Chroma-Q Vista lighting desks remotely across VLANs, using Stream Decks running Companion. It works pretty well, especially since the midi-relay software is configured to run directly on the lighting consoles upon startup. We have even set up a few crontab entries to send curl commands to the light desks to turn them on at certain times, when we don’t want to be on-site just to press a button.
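
As a sketch of what one of those crontab entries can look like (the port, endpoint path, and JSON fields here are illustrative; the real ones are in the midi-relay documentation):

[code]
# Every Sunday at 6:30 AM, ask midi-relay on the Vista machine to send the
# MIDI note that Vista maps to a "lights on" cue (illustrative values only):
30 6 * * 0 curl -X POST http://vista-aud1.local:4000/sendmidi \
  -H "Content-Type: application/json" \
  -d '{"midiport":"loopMIDI Port","midicommand":"noteon","channel":0,"note":21,"velocity":127}'
[/code]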

In anticipation of completing my most recent project, “LiveCaption”, which takes audio and transcribes it to text in real-time, I started working on midi-relay 2.0: listening to MIDI input and using that to trigger a response or action.

[Image: the midi-relay logo. I figured it was time this thing had a logo.]

In both auditoriums at my church, we have Avid S6L audio consoles. These consoles can do a lot, and like most consoles, they have GPIO pinouts that let you trigger things remotely, whether the action originates from the sound console, or comes from an external source and triggers something on the console like recalling a snapshot, muting an input, etc.

[Photos: A stock photo of the S6L console from the Internet, and (some of) the I/O pins on the console. It has GPIO and MIDI ports; we use the footswitch input for setting tap tempo.]

I started looking at the possibility of using the GPO pins on the console to trigger an external action like sending an HTTP request to Ross Dashboard, Companion, etc. However, there are only 8 GPO pins on this audio board, so I knew that could be a limiting factor down the road in terms of the number of possible triggers I could have.

The S6L also has MIDI In and Out, and through the Events section of the console, MIDI can be used as either a trigger (MIDI In) or an action (MIDI Out) for just about anything.

[Photo: The Events page on an Avid S6L console. All kinds of things can be used as triggers and actions here! In this particular event, I’ve created a trigger so that when the snapshot “Band” is loaded, the console sends MIDI Out on Channel 1 with Note 22 (A#0) at Velocity 100. midi-relay then listens for that MIDI message and sends an HTTP POST request to the LiveCaption server to stop listening for caption audio.]

We already load a snapshot when we go to the sermon/message that mutes things, sets up aux sends, etc., and I wanted to use that snapshot event to automatically start the captioning service via the REST API I had already built into LiveCaption.

In the previous version, midi-relay could only send Note On/Off messages and the custom MSC (MIDI Show Control) message type I had written just for controlling our Vista lighting consoles. With version 2.0, midi-relay can now send all of the channel voice MIDI message types:

  • Note On / Note Off
  • Polyphonic Aftertouch
  • Control Change
  • Program Change
  • Pitch Bend
  • Channel Pressure / Aftertouch

It can also send out:

  • MSC (MIDI Show Control), which is actually a type of SysEx message
  • Raw SysEx messages, formatted in either decimal or hexadecimal

And, midi-relay can now listen for all of those channel voice and SysEx messages and use them to trigger one of the following (see the sketch after this list):

  • HTTP GET/POST (with JSON data if needed)
  • AppleScript (if running midi-relay on macOS)
  • Shell Script (for all OSes)
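
To give an idea of the shape of a trigger, here is a sketch based on the S6L example above (the field names are illustrative; the actual schema is in the midi-relay documentation):

[code language="JavaScript"]
// Illustrative only: "when Note 22 arrives on Channel 1, send an HTTP POST
// to the LiveCaption server." Check the midi-relay docs for the real schema.
var trigger = {
  midiport: 'USB MIDI Interface', // the MIDI In port midi-relay listens on
  midicommand: 'noteon',          // the channel voice message type to match
  channel: 1,
  note: 22,                       // A#0, sent by the S6L snapshot event
  actiontype: 'http',             // http, applescript, or shell
  url: 'http://livecaption.local:3000/api/stopListening', // hypothetical endpoint
  method: 'POST'
};
[/code]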

There are a few software and hardware products out there that can do similar things, like the BomeBox, but I wanted to build something less expensive that could run on a Raspberry Pi, which is exactly how we’ve deployed midi-relay in this case.

[Photo: The Raspberry Pi running midi-relay, connected to the MIDI ports on the S6L via a USB-to-MIDI interface. It tucks away nicely at the back of the desk.]

Now we can easily and automatically trigger the caption service to start and stop listening just by firing the snapshots on the audio console that we were already using during that transition in the service. This makes it easier for our volunteers, and they don’t really have to learn anything new.

Here’s a video of it in action:

[wpvideo W77anq42]

If you’d like to check out version 2.0 of midi-relay, you can download both the source code and binaries from GitHub: https://github.com/josephdadams/midi-relay

The documentation is pretty thorough if you want to use the API to send relay messages or set up new triggers, but you can also use the new Settings page running on the server to do all that and more.

[Screenshot: From the Settings page, you can view available MIDI ports, add/delete Triggers, view detected midi-relay hosts running on the network, and send Relay messages to other hosts.]

And if you’re a Companion user for your Stream Deck, I updated the module for Companion to support the new channel voice MIDI relay messages as well! You’ll need to download an early alpha release of Companion 2.0 to be able to try that out. Search for “Tech Ministry MIDI Relay” in Companion.

Here’s a photo of the Raspberry Pi parts I used, off Amazon:

[Photo: the Raspberry Pi parts]

I hope this is helpful to you and your projects! If you need any help implementing it along the way, or have ideas for improvement, don’t hesitate to reach out!

Free Real-Time Captioning Service using Google Chrome’s Web Speech API, Node.js, and Amazon’s Elastic Compute Cloud

For a while now, I’ve wanted to be able to offer live captions for people attending services at my church who may be deaf or hard of hearing, to allow them to follow along with the sermon as it is spoken aloud. I didn’t want them to have to install a particular app, since people have a wide variety of phone models and OSes, and that just sounded like a pain to support long-term. I also wanted to develop something low-cost, so that more churches and ministries could benefit from it.

I decided to take concepts I learned from my PresentationBridge project at last year’s downtown worship night and use them for this project. The idea was essentially the same: I wanted to relay text data, in real time, from a local computer to all connected clients using the Node.js socket.io library. Instead of the text data coming from something like ProPresenter, it would be the results of the Web Speech API’s processing of my audio source.

If you’re a Google Chrome user, Chrome has implemented W3C’s Web Speech API, which allows you to access the microphone, capture the incoming audio, and receive a speech-to-text result, all within the browser using JavaScript. It’s fast and, important to me, it’s free!
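
Here is a minimal sketch of the recognition piece (this is the standard Web Speech API in the browser, pared down from what LiveCaption actually does):

[code language="JavaScript"]
// Runs in Google Chrome, not Node.js:
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;     // keep listening instead of stopping after one phrase
recognition.interimResults = true; // receive partial results as words are recognized

recognition.onresult = function (event) {
  var transcript = '';
  for (var i = event.resultIndex; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  // hand the text to the server for relay, e.g. socket.emit('caption', transcript);
};

recognition.start(); // Chrome will ask for microphone access the first time
[/code]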

Here is how it works: the computer doing the actual transcribing of the audio source must use Google Chrome and connect to a Bridge room, similar to how my PresentationBridge project works. Multiple Bridge rooms (think “venues” or “locations”) can be configured on the server, and if multiple rooms are available, end users will be given an option to choose the room they want to be in and receive text from. The Chrome requirement only applies to the computer doing the transcribing; everyone else can use any browser on any computer or device they choose.

[Screenshot: This is the primary Bridge interface that does the transcribing work.]

From the Bridge interface, you choose which “Bridge” (venue) you want to control. If the Bridge is configured with a control password, you will have to enter it. To start transcribing, just click “Start Listening”; you’ll have to allow Chrome access to the microphone/audio source the first time. Once connected, you can send users to Logo Mode (helpful when you’re not broadcasting), toggle whether text data is actually sent to the connected clients (helpful when you want to test transcription without sending it out to everyone), redirect all users to a new webpage at any time, send a text announcement, or reload their page entirely. There is also a simple word dictionary that can be used to replace commonly misidentified words with their proper transcription.

A note about secure-origin and accessing the microphone: If you’re running this server and try to access the page via localhost, Google Chrome will allow you to access the microphone without a security warning. However, if you are trying to access the page from another computer/location, the microphone will be blocked due to Chrome’s secure-origin policy.

If you’re not using a secure connection, you can also modify the Chrome security flag to bypass this (not recommended for long-term use because you’ll have to do this every time Chrome restarts, but it’s helpful in testing):

  • Navigate to chrome://flags/#unsafely-treat-insecure-origin-as-secure in the address bar.
  • Find and enable the Insecure origins treated as secure section.
  • Add any addresses you want to ignore the secure origin policy for. Remember to include the port number (the default port for this project is 3000).
  • Save and restart Chrome.

Here is a walkthrough video of the captioning service in action:

[wpvideo r6P0iWGj]

I chose to host this project on an Amazon EC2 instance, because my usage fits within the free tier. We set up a subdomain DNS entry pointing to the Elastic IP so it’s easy for people in the church to find and use the service. The EC2 instance uses Ubuntu Linux to run the Node.js code. I also used nginx as a proxy server. This allowed me to run the service on my custom port but forward the necessary HTTPS (port 443) traffic to it, which helps with load balancing and keeps my Node.js process from having to handle all of that secure traffic. I configured nginx to use our domain’s SSL certificate.
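
Here is a sketch of what that nginx server block can look like (server names and certificate paths are illustrative; the Upgrade/Connection headers matter because socket.io uses websockets):

[code]
server {
    listen 443 ssl;
    server_name captions.example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://localhost:3000;  # the Node.js port for LiveCaption
        # pass the websocket upgrade through for socket.io:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
[/code]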

I also created a simple API for the service so that certain commands like “start listening”, “send data”, “go to logo” etc. can be done remotely without user interaction. This will make it easier to automate down the road, which I plan to do soon, so that the captioning service is only listening to the live audio source when we are at certain points in the service like the sermon. Because it’s just a simple REST API, you can use just about anything to control it, including a Stream Deck!
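
That means a Stream Deck button, a cron job, or anything else that can make an HTTP request can drive it, with something as simple as this (the endpoint name here is hypothetical; the real routes are in the LiveCaption docs):

[code]
curl -X POST https://captions.example.com/api/startListening
[/code]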

[Photo: We deployed the Bridge in our two auditoriums using Chromebooks. An inexpensive solution that runs the Chrome browser!]

In order to give the devices a direct feed from our audio consoles, I needed an audio interface. I bought this inexpensive one off Amazon that’s just a simple XLR to USB cable. It works great on Mac, PC, and even ChromeBooks.

[Photo: The XLR-to-USB audio interface, so we can send a direct feed from the audio console instead of using an internal microphone on the computer running the Bridge.]

If you’d like to download LiveCaption and set it up for yourself, you can get it from my GitHub here: https://github.com/josephdadams/LiveCaption

I designed it to support global and individual logos/branding, so it can be customized for your church or organization to use.

Controlling Chroma-Q (Jands) Vista with MIDI Show Control and a Stream Deck using Node.js and the Web MIDI API

At my church, we use Chroma-Q’s Vista lighting platform (formerly owned by Jands). It’s a great platform and easy for volunteers to execute pre-programmed lighting cues. Every large worship space we have on campus with a lighting system runs some version of Vista, whether on a physical console or a PC.

We generally program our lighting to use one or two cuelists and volunteers just advance cue by cue within that list for the service. It’s pretty straightforward and works well for them.

Sometimes, we need the ability to advance cues in a list remotely, when we’re not near the lighting console, and that’s where this latest project began.

Most lighting consoles can be controlled using some form of MIDI command. Older ones require a physical connection, others can use network connections. By using a loopback/virtual port, Vista can receive both MIDI notes and MIDI Show Control commands.

A lot of people have been able to accomplish this type of remote control over the network using a protocol called RTP-MIDI. This protocol is very easy to use and computers can broadcast/discover each other over the network, so it makes it a lot quicker to get up and going.


This is great, and I’ve used it, but I wanted to design something tailored to our needs. (1) I wanted something I could run on any PC or Mac that could accept commands from a wider range of sources, since so many devices nowadays can send HTTP requests. (2) I wanted something that triggered primarily over TCP, because while RTP-MIDI is great and fast, it uses UDP traffic that can’t cross VLANs/subnets; TCP traffic easily can.

So, I broke this project down into two parts: a server that listens to HTTP requests and relays local MIDI, and a module for Companion that allows the Stream Deck to send requests to that server. The server is flexible to support other devices that may want to trigger it, and the Companion module is perfectly paired to work with it.

The server runs a simple REST API that returns a list of local MIDI ports and can accept Note On, Note Off, or MSC (MIDI Show Control) commands. It accepts JSON data via HTTP POST which is then used to build the hexadecimal data and send the MIDI commands.

The HTTP side of things in Node.js uses the Express framework. The MIDI side uses the Jazz Soft JZZ.js library.
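
Here is a rough sketch of that idea (this is not the actual midi-relay source; the route, port number, and JSON field names are simplified for illustration):

[code language="JavaScript"]
// Sketch: accept JSON over HTTP and relay it as an MSC SysEx message.
var express = require('express');
var JZZ = require('jzz');

var app = express();
app.use(express.json());

// POST /msc with JSON like:
// { "midiport": "loopMIDI", "deviceid": 0, "commandformat": 1, "command": 1, "cue": "1" }
app.post('/msc', function (req, res) {
  // MSC is a SysEx message: F0 7F <deviceid> 02 <commandformat> <command> <cue> F7
  var bytes = [0xF0, 0x7F, req.body.deviceid, 0x02, req.body.commandformat, req.body.command];
  for (var i = 0; i < req.body.cue.length; i++) {
    bytes.push(req.body.cue.charCodeAt(i)); // cue numbers are sent as ASCII characters
  }
  bytes.push(0xF7);

  JZZ().openMidiOut(req.body.midiport).or(function () {
    res.status(500).json({ error: 'could not open MIDI port' });
  }).and(function () {
    this.send(bytes); // relay to the virtual port Vista is listening on
    res.json({ status: 'sent' });
  });
});

app.listen(4000);
[/code]

For example, an MSC “GO” for cue 1, sent to device 0 with the lighting command format, works out to the bytes F0 7F 00 02 01 01 31 F7.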

The server runs directly on the Vista computer to relay the MIDI commands on a virtual MIDI port which Vista is listening to.

Here is a video of it in action!

[wpvideo ZvMtBNPa]

If you want to do this yourself, setting up the Vista side of things is pretty straightforward.

First, if you are using Vista on a PC and haven’t already downloaded loopMIDI, you can get it here. It’s free software that creates a virtual MIDI port on the PC.

Once that is configured, open Vista and go to the MIDI settings in User Preferences.

[Screenshot: the MIDI settings in Vista’s User Preferences]

Under the MIDI tab, select the External MIDI port “loopMIDI” (or whatever you named your port). If you’re going to be using MSC, be sure to make note of the Device ID you select.

If you want to advance a cuelist using MIDI Note On commands, right click on the cuelist and under the MIDI tab, select the Note you want to send for the “Play” command.

[Screenshot: the cuelist MIDI tab]

I hope this is helpful for you! You can download a binary release of the MIDI-Relay server from my GitHub. It’s available for Mac, Windows, and Linux. The Companion module will be made available in a release build at some point.

Controlling Planning Center LIVE with a Stream Deck

In my last post, I mentioned a great tool, Companion, that integrates with the Elgato Stream Deck. I’ve had the opportunity to write a few modules for it to extend its control capabilities, like controlling a CueServer, or my own software, ProTally.

If you work in tech for a church, chances are that you use or have at least heard of Planning Center Online to manage your worship services and people. PCO has a feature for their Services product called Services LIVE that allows you to designate where you are at in a service flow while a service is ongoing, which updates anyone who may be watching. It also records the times so you can look back later and see things like “Did that song we said would take 5 minutes actually take more like 6 minutes and 30 seconds?” It’s a very useful tool.

The interface to advance a LIVE plan, however, has not been the best for our volunteers. Even within the PCO app, the buttons to advance a plan to the next item are rather tiny, and some of my team have trouble knowing whether or not they hit the button.

[Screenshot: The standard PCO Live interface. The small double arrows at the bottom left and right of the screen are the controls; our volunteers have a hard time pressing these.]

One thing that makes Planning Center Online great is that they love developers, and they’ve made a very extensive Application Programming Interface (API) available for anyone to use. This means you can get access to your service and plan data without having to actually click and browse the website.

I delved into that API this past week and used it to create a new module for Companion. One of the caveats of using the API is that in order to advance a live plan, you have to know both the service type ID and the plan ID. This requires traversing the API data some and making multiple requests. If you’re a programmer, this makes sense. If you’re just an end-user, it may not be as straightforward. So, I set out to make something easy enough for anyone to use.

Here is a walkthrough video on how the module works:

[wpvideo hyQpcLnH]

What it actually does:

  1. When you first load the module and supply it with the authentication tokens, it requests all of the available service types and stores that internally.
    https://api.planningcenteronline.com/services/v2/service_types
  2. Then it asks for the next 7 upcoming plans for each service type based on the list that was just retrieved. This is then used to build the drop down list so you can choose your plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans?filter=future&per_page=7
  3. When you send a “Previous” or “Next” command, it first asks for the LIVE information for that selected plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live
  4. It checks for who the current controller of the plan is, and compares that to an internal variable in Companion that represents the owner of the authentication token.
  5. If the current controller is null, a command is sent to toggle control to the token owner, and the returned value of the current controller is stored in that internal variable so we know who “we” are for next time.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/toggle_control
  6. If the current controller is not null, a toggle command is sent to release control of the plan to no one, and then a toggle command is immediately sent again so that control is toggled to us. The reason for this is that if our authentication key is not the current controller, the API will return an error when we try to advance the plan.
  7. Now that we know we are in control, the current controller value returned by the API is stored as an internal variable, and then the next or previous command is sent to advance the plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/go_to_next_item

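Here is a sketch of what step 1 looks like in code, using node-fetch (PCO Personal Access Tokens are an app ID and secret pair, and the API accepts them over HTTP Basic auth):

[code language="JavaScript"]
var fetch = require('node-fetch');

var APP_ID = 'your-app-id'; // a Personal Access Token pair from api.planningcenteronline.com
var SECRET = 'your-secret';

var auth = Buffer.from(APP_ID + ':' + SECRET).toString('base64');

fetch('https://api.planningcenteronline.com/services/v2/service_types', {
  headers: { 'Authorization': 'Basic ' + auth }
})
  .then(function (res) { return res.json(); })
  .then(function (json) {
    // PCO returns JSON:API formatted data: each entry has an id and attributes
    json.data.forEach(function (serviceType) {
      console.log(serviceType.id, serviceType.attributes.name);
    });
  });
[/code]
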
Because Node.js is asynchronous, all of this is done through Promises, which are similar in concept to callback functions but allow for cleaner and easier-to-read code.

So, stay tuned for this module to become available in the next stable release of Companion, and if you’re willing to try it out in development mode, it’s available now!

[Screenshot: The PCO Live module in action!]

Support for blink(1) now available in ProTally!

I wrote ProTally last year so our volunteers running ProPresenter could know when their source was on screen or about to be on screen. It has been very helpful in minimizing mistakes, like making distracting graphic changes while on-air. It supports tally data from our Ross Carbonite switchers, but I’ve also written support for the TSL 3.1 protocol, Blackmagic ATEM switchers, OBS Studio scenes, and most recently, Bitfocus Companion.

I recently picked up a blink(1) to test out for another project I’m working on. If you’ve not heard of it, the blink(1) is a small $30 USB device with LEDs built in, designed to give you a quick-glance notice of anything on your computer. The creators have made libraries in several popular programming languages, like Node.js (the language ProTally is written in), to interact with it.
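
The Node.js library is simple to work with; here is a minimal sketch (assuming the node-blink1 package):

[code language="JavaScript"]
var Blink1 = require('node-blink1');

var blink1 = new Blink1();        // opens the first blink(1) found on USB
blink1.fadeToRGB(500, 255, 0, 0); // fade to red over 500 ms, e.g. "on air"
// blink1.off();                  // turn the LEDs back off
[/code]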

I decided to get my feet wet and learn about the device’s capabilities by integrating it with ProTally. Since ProTally can read and work with tally data from so many different types of sources, that means it’s already primed to take that tally data and act on it in different ways, not just on-screen.

So, I am pleased to announce that ProTally now supports up to 4 blink(1) devices, which can mirror the color the user chooses for an on-screen tally box. The user can choose between showing the tally color in a box on their monitor (like normal), on a connected blink(1) device, or both. If you are using multiple tally boxes but don’t own an equal number of blink(1) devices, you can also choose to share a blink(1) across multiple tally boxes, and the higher box will get priority.

[wpvideo RL2TDnjS]

The latest release of ProTally supporting blink(1) devices as tally lights is available on GitHub now, so go check it out!

Using a Stream Deck as a Production Controller, Revisited

One of my first posts on this blog detailed how I wrote software in Node.js to interface with an Elgato Stream Deck to control some of our production equipment, interfacing with the video switchers, router, Ross Dashboard, etc. It’s time to revisit that.

We’ve been using my controller now every week in our control rooms and tech booths for about a year. My team loves it. It integrates into our centralized production workflow, where each deck sends commands to a central Dashboard panel, which runs the command, and then sends out updates to all the connected stream decks.

However, I haven’t had much time to make it a better product for other people. I wrote support for the Stream Deck Mini when that was released, but that’s about it; I haven’t had the time or cause to do much else with it. So, for that reason, I wanted to share with you a piece of software that is under constant, active development: Bitfocus Companion.

Companion is written in Node.js and packaged in Electron just like my controller, so it can run on Mac, Windows, or Linux. But it can do so much more! One of the best features is its web-based management interface, so you can add actions to buttons easily and on-the-fly. It supports a ton of production equipment, and chances are good that your gear is already on the supported list; if not, someone can likely create a module for it.

I was recently asked to join the Companion development team, so I’ve started making some modules to integrate with software and gear that we have. I’ve created a module for Interactive Technologies’ CueServer, which we have in a couple of our venues here.

[Screenshots: Some of the actions you can perform on a CueServer with the module I created for Companion, and an example of a key down action for triggering a CueServer macro.]

If you use ProTally, my on-screen tally box notification software, and want to integrate with Companion, I made a module for that too! Make sure to go download the latest ProTally release which supports this feature! With Companion, in addition to Preview and Program windows, you can also send a Beacon, which flashes at a custom rate and color. Check this video out for a demo:

[wpvideo 0Xy1IvWn]

Both of these modules are available in the bleeding edge builds of Companion and will be included in the next stable release soon.

So, if you’re looking for a great production controller that integrates with the Stream Deck, go check out Companion! It’s only going to get better from here!

ProTally 1.4, with custom colors per tally box, now available

If you’re using ProTally, I’ve just released a new version that supports custom colors for each tally box, rather than global colors applied to all boxes.

[Screenshot: custom colors per tally box in ProTally 1.4]

You can download the latest version on Github: https://github.com/josephdadams/ProTally/releases

Using Node.js and a Raspberry Pi to monitor Streaming ACN network for DMX changes and trigger actions

A while back, I wrote about the Shade Controller I created using Node.js and a USB relay running on a Raspberry Pi Zero. It works great: we can raise and lower the shade from anywhere on the network. However, I’ve always wanted a way to control this a little more automatically. The lighting volunteer is typically the person who operates the remote for the shade, so I really wanted to automate that part of the process for them, so the shade can raise and lower exactly when we want it to, without them having to use an extra tool or device.

As I was working on some networking changes to one of our lighting consoles (we use Jands L5 consoles running Chroma-Q’s Vista 3), I had an idea… What if we could monitor the Streaming ACN lighting network for data changes just like any lighting node, and use that to trigger an action?

If you’ve not heard of Streaming ACN (sometimes called sACN, or by its official name, E1.31), it is an Ethernet-based protocol for sending DMX address and value information from a lighting console to receiver nodes, which then relay the DMX information to lighting fixtures. It uses multicast traffic to send the information, so it is very fast and efficient. At my church, we have several DMX universes of lighting information going over the network for each auditorium, controlling all of the light fixtures.

Luckily for me, a base protocol module for E1.31 was already available for Node.js. So, using that module, I sat down and prototyped a solution and had something working in just a couple of hours. I’m calling my software sACN Translator, and I’ve deployed it to a Raspberry Pi for production. It supports a simple REST API that lets you control which universes it should listen to, as well as the fixtures to run triggers for. I also created a simple web interface which utilizes this API.

[Screenshot: The simple web interface which interacts with the REST API.]

Here is how I set it up on our system to trigger the shade controller. I started by adding two fixtures to the L5 console on Universe 1 (where I happened to have some spare room in my DMX addresses). I called these fixtures “Shades Up” and “Shades Down”, with DMX Addresses 511 and 512.

[Screenshots: The two “fixtures” on the layout with notes attached. I labeled them as generic “utility” fixtures with 1 DMX address each.]

Then, I added entries in sACN Translator to monitor Universe 1 on the network and look for value changes at fixture addresses 511 and 512. I set it to run an HTTP trigger any time the value reaches 255 (100%). So, when I put the Shades Down fixture at 100% on the lighting console, the software sees that value, looks for a match in its list of fixtures, and then runs the corresponding HTTP request on the Raspberry Pi Zero connected to the USB relay, which triggers the action that lowers the shade.
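
Here is a sketch of the core listening logic (using the e131 package for Node.js; sACN Translator itself adds the fixture list, REST API, and web interface on top of this, and the trigger URL here is illustrative):

[code language="JavaScript"]
var e131 = require('e131');
var http = require('http');

var server = new e131.Server([1]); // join the multicast group for universe 1

server.on('packet', function (packet) {
  var slots = packet.getSlotsData(); // DMX values; index 0 is address 1
  if (slots[511] === 255) {          // address 512 ("Shades Down") at 100%
    http.get('http://shade-pi.local:3000/lower'); // illustrative trigger URL
  }
});
[/code]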

Here is a video of it in action:

[wpvideo vpTSk2WQ]

Pretty cool! I decided to use separate fixture addresses for each trigger action, but I didn’t have to. I could have just one fixture and watch for two separate lighting values.

So now, all the operator has to do is run the cues like normal, and the programming will do the rest! I’ve made this software available for free on my GitHub repository. Let me know how it works for you!

Using Google Apps Script with user input to automate repetitive tasks in Google Docs

Do you find yourself ever doing repetitive tasks over and over again in Google Docs? (Or any of the Google Suite Apps?) I sure do. At my church, we create a Google Doc every week for all of the “talking points”, the parts of the service that aren’t song or sermon, where we script out what someone needs to say or communicate during that portion.

[Screenshot: A sample document that we use each week.]

A couple years ago, I started creating template files to help my team do this every week, because having the template already there, with some common headers and the service date filled in, removed the barrier to getting down to writing the actual words. Creating the files wasn’t too complicated, and after a while, I started making them “in bulk”: I would sit down and make 3-4 months’ worth of documents at a time, copying my master template, then editing each new file to update the date, etc. Then we added a second auditorium, which doubled the number of documents I needed to create.

With the new year, it was time to create more documents, so I decided this time around that I would create a script to help automate this task using the framework within Google Apps Script.

If you’ve not heard of or used Google Apps Script (GAS), it’s a scripting language based on JavaScript for lightweight application development. All of the code runs on Google’s servers to interact with your documents. If you’ve ever used an “add-on” in Google Apps, it’s using this scripting framework.

It’s pretty easy to use if you know Javascript, and it’s easy to get started. From any document, just go to Tools > Script Editor. This opens a new tab where you can start writing Apps Script.

Here is my script:

[code language="JavaScript"]

function myFunction() {
  var ui = DocumentApp.getUi();

  var templateDocId = '[templateid]'; // put the document ID of the master template file here

  var prompt_numberOfDocs = ui.prompt('How many Talking Point Documents do you want to create?');
  var prompt_startingDate = ui.prompt('What is the starting date? Please enter in MM/dd/yyyy.');

  var numberOfDocs = parseInt(prompt_numberOfDocs.getResponseText());
  var startingDate = prompt_startingDate.getResponseText();

  var prompt_venueResponse = ui.prompt('Venue', 'Create Documents for both Auditoriums? If no, please type in the Venue Title and click "No".', ui.ButtonSet.YES_NO);

  var venueTitle = '';
  var bothAuditoriums = true;

  if (prompt_venueResponse.getSelectedButton() == ui.Button.NO) {
    venueTitle = prompt_venueResponse.getResponseText();
    bothAuditoriums = false;
  }

  var date = new Date(startingDate);

  var htmlOutput = HtmlService
    .createHtmlOutput('Creating ' + numberOfDocs + ' documents. Please stand by…')
    .setWidth(300)
    .setHeight(100);

  ui.showModalDialog(htmlOutput, 'Talking Points – Task Running');

  for (var i = 0; i < numberOfDocs; i++) {
    // use the loop index to add 7 days per document to the starting date, creating a new date object
    var loopDate = new Date(date.getTime() + ((i * 7) * 3600000 * 24));
    var documentName = 'Talking Points – ' + Utilities.formatDate(loopDate, Session.getScriptTimeZone(), 'MMMM dd, yyyy');
    var documentDate = Utilities.formatDate(loopDate, Session.getScriptTimeZone(), 'MM/dd/yyyy');

    if (bothAuditoriums) {
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 1)', 'Aud 1', documentDate);
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 2)', 'Aud 2', documentDate);
    }
    else {
      documentName += ' (' + venueTitle + ')';
      createNewTalkingPointDocument(templateDocId, documentName, venueTitle, documentDate);
    }
  }

  // close the dialog by loading a tiny page that closes itself
  htmlOutput = HtmlService
    .createHtmlOutput('<script>google.script.host.close();</script>')
    .setWidth(300)
    .setHeight(100);
  ui.showModalDialog(htmlOutput, 'Talking Points – Task Running');
}

function createNewTalkingPointDocument(templateDocumentId, documentName, venueTitle, documentDate) {
  // make a copy of the template file
  var documentId = DriveApp.getFileById(templateDocumentId).makeCopy().getId();

  // rename the copied file
  DriveApp.getFileById(documentId).setName(documentName);

  // get the document body as a variable
  var body = DocumentApp.openById(documentId).getBody();

  // insert the entries into the document
  body.replaceText('##Venue##', venueTitle);
  body.replaceText('##Date##', documentDate);
}

[/code]

Once you have a script in place, you can choose triggers for when it should run, like when it is opened, or on a schedule, etc.
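
For example, a simple onOpen trigger can add the script to a custom menu so anyone on the team can find it:

[code language="JavaScript"]
// Runs automatically each time the document is opened:
function onOpen() {
  DocumentApp.getUi()
    .createMenu('Talking Points')
    .addItem('Create documents…', 'myFunction')
    .addToUi();
}
[/code]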

Here is the new template with the script in action:

[Screenshot: the prompt asking how many documents to create]

First, I ask how many documents should be created. 1, 5, 500, whatever I need.

[Screenshot: the prompt asking for the starting date]

Next, I ask for the starting date. We specifically use these for Sunday services, so I’ve programmed the script to take this starting date and then calculate every 7 days when creating multiple documents.

[Screenshot: the prompt asking whether to create documents for both auditoriums]

Then, I ask the user if they want to create documents for both auditoriums, or if this is for a special service or off-site service, etc. Typically we want them for both auditoriums, but the one-off feature makes things easy for those types of services too.

[Screenshot: the progress dialog box]

As the script runs, it displays this dialog box. Creating that many documents can take a while, and I wanted the user to be aware of this. The box goes away automatically when the process is completed.

Now that we have this, I can pass the task on to anyone on our team, anytime they need these documents! And it saves a good bit of time: I definitely spent less time creating this script than I would have spent creating the 3-4 months’ worth of documents manually, and now I never have to do that again!

How can you use Google Apps Script to automate some of your more repetitive tasks?