ProPresenter 7 and the Top 8 Features I would like to see

If you are a user of Renewed Vision’s ProPresenter software, hopefully by now you’ve heard that they just released version 7 for both MacOS and Windows.

ProPresenter 7.

The new version is more similar between the two operating systems than ever before, and there are a lot of new features, most notably the redesigned UI. One other enhancement I’m excited about is that all of the add-on modules (alpha keyer, communications, MIDI, SDI/NDI output, etc.) are now included as part of the software license. This will be great for us because we can now have these features available on all of our ProPresenter installs, whereas in the past the pricing model was a limitation for us.

I have been slowly checking out the new version and we will be purchasing an upgraded license soon to roll this out in our various venues within the coming months.

Even with all of the new features ProPresenter 7 brings, I thought it would be fun to list the top 8 features I still hope to see implemented. Here they are, in no particular order:

  1. Tally Integration. If you’ve followed this blog, you’ve probably seen me mention ProTally, the software I created to help fill this gap so our volunteers could know when their ProPresenter output was on-air. While native tally protocol support (whether TSL or data coming directly from something like an ATEM switcher) would likely render tools like ProTally obsolete for a lot of use cases, it would make the experience so much better for the end user, and I’m definitely a fan of that.
  2. HTTP GET/POST slide cues. This would be awesome. Some people work around this today by putting a “web element” on a slide and making it invisible, but a true communication cue that sends a GET/POST request (along with JSON data) whenever I click a slide would be a great way to open up automation and trigger other software.
  3. Hide Audio Bin / Re-arrange the interface. This is a simpler one, but the ability to hide the audio bin (which we aren’t likely to use), as well as to re-arrange the UI, would be nice to have.
  4. Customizable border on the current active slide. A lot of our volunteers have expressed that it would be nice to have a way to quickly see which slide is active, and sometimes the current border box around the active slide isn’t easy to see. So a way to make that border thicker, change the color, make it blink, etc. would be a nice feature.
  5. A built-in, free, amazing sync option. I’ve written about how we currently do cloud syncing in ProPresenter by using Dropbox and sharing all the libraries to all the machines. It works fine for what it is. But a way to truly share playlists, themes, media, etc. from one ProPresenter install to another, built in, would be awesome, especially if it could use the drive/file sync tools we already use, like Dropbox.
  6. Go To Next Timer showing a countdown. Another simpler one, but it would be really nice if, any time a slide was on an advance timer, the UI showed how much time was left before it advanced (in minutes/seconds).
  7. Web interface to show slide information, clocks, etc. A page where I can view the slides, the current/next slide, timers, messages, etc. A “producer’s page” of sorts. Right now, we use PresentationBridge for this: we keep the web page open in our control rooms for the director to see, so they know exactly where we are in a presentation or song.
  8. Published and supported REST API. It would be great to have a published and supported interface for controlling ProPresenter remotely. A lot of people have done great work reverse-engineering the ProRemote app protocol, and it is getting a lot of use through projects like Companion, but something officially documented and supported would be truly great. And on that note, some kind of official support for stream decks would be great too, whether that’s acknowledgement of the Companion project or another avenue.

So there’s my top 8 feature requests! I’m excited about this new version of ProPresenter, because with their ProPresenter+ plan, we are going to see more regular feature updates. If you haven’t checked it out yet, you can demo it for free!

Live Camera Production: A Technical Walkthrough of our Video System

I talk about programming and software and building solutions here a lot, but I thought I would write a post about something else I’m passionate about: live camera production. At my church, for the last 15 years or so, I’ve had the pleasure of getting to direct cameras for the annual Christmas program. We call the program “Jingle Jazz” because the music is mostly centered around a jazz format. In church terms, it’s an “invest and invite” event where people can bring their friends, neighbors, and co-workers for a great first exposure to Fellowship Greenville and have a fun, relaxing evening filled with various styles of music.

This is one of a small handful of times a year where we get to maximize the potential of our volunteers and systems and put it all to the test. I always try to challenge myself to make it better than the year before, whether that’s adding more cameras, equipping volunteers, or even automating something.

In years past, I had a huge role in this event: writing scripts, creating and producing videos, and working late night after late night, putting “all of me” into making it happen! In recent years, the workload has been balanced a lot better, and my job role has shifted some, so I’m no longer pulling so many late nights before the event. I did still manage to get in almost 16,000 steps one day last week, though!

The steps I took in one day. Pretty high for me!

This year, I set a target goal of 14 cameras. I put out a call for volunteers and 11 people signed up! We used our primary auditorium cameras, older (12-15 year old) cameras with component-to-SDI adapters, borrowed production equipment from the communications department, rented 4 cameras, and I even traded some of my programming time to a local university in return for borrowing some cameras and lenses from them. Overall, I felt like we were able to keep costs down by being good stewards of what we already had and renting where needed.

One thing I did in advance that really helped me to succeed was to plot all of my patching across patchbays and plates in a spreadsheet. It helped me think through all the limitations I might face, especially when multiple cameras needed a signal/data cable as well as genlock/reference. Some of the cameras didn’t support genlock, so I had to frame sync those within the switcher. The Ross Carbonite switcher has 6 frame syncs, so after I ran out of syncs on the switcher for Auditorium 1, I actually sent signal to our switcher for Auditorium 2, synced them, and sent them back to the other switcher on aux sends! It took both control rooms to be able to pull off this many cameras, primarily because of the camera equipment we had available.

This spreadsheet kept me in line to make sure I didn’t forget to patch anything!

For intercoms, everyone was on a wired Clearcom. We used a combination of belt packs we already had plus adapters I made to work with some older stuff we used to use. The two mobile stage cameras used Unity Intercom bridged to our Clearcom system.

The view during rehearsal.

This year, I knew I wanted to record what I call the “tech cut,” which combines the multiviewer feed with the intercom chatter, so we could save it for review and training. The Carbonite switcher has two multiviewer outputs, so I dedicated one of them to viewing all 14 cameras for the recording.

Another view of the control room.

Because the multiviewer boxes were so small, I wanted a way to see any camera on a larger screen, so I rolled in a TV cart, patched it to a MiniME output, and controlled it from a Stream Deck (with Companion). Using Custom Controls, I was also able to have the multiviewer show a white box around whatever source was active on that TV cart. Here is a video of that in action:

[wpvideo z5nxUcc4]

One thing that I really am glad we did this year was to treat the LED wall that we have center stage as more of a lighting/stage element than something our video team needed to drive. It was nice because the lighting guys controlled it all and I didn’t have to think about it! We used PVP and had motions and Christmas-themed b-roll on the screen most of the time, occasionally cutting to a graphic as needed.

Like any service, it takes people to make it happen. We have great volunteers and staff here that I get to work with and lead.

The crew!

As we were wrapping up this event, and I watched everyone serving with such joy even though it was a lot of late nights, I was reminded of this quote from author Simon Sinek:

When we work hard on something we don’t believe in, it’s called stress. When we work hard on something we believe in, it’s called passion. – Simon Sinek

Working and serving in tech ministry has to come from a place of passion, or it will always be stressful. Colossians 3:23-24 says, “Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.”

May we always work heartily on what we believe in. Not just programming, software, or live production, but seeing God transform lives, and people pursuing life and mission with Jesus.

If you’d like to watch our tech cut, here it is!

Using Node.js on a Raspberry Pi to listen to MIDI messages from an Avid S6L console to trigger HTTP requests or run scripts

Back in the summer, I posted about a project I had recently finished, which involved sending HTTP requests to a server that would then relay a MIDI output message based on the request that was sent.

We’ve been using that software (dubbed midi-relay) since then to control our Chroma-Q Vista lighting desks remotely across VLANs using stream decks running Companion. It works pretty well, especially since the midi-relay software is configured to run directly on the lighting consoles at startup. We have even set up a few crontab entries that send curl commands to the light desks at certain times, so we don’t have to be on-site just to press a button.
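
For the curious, a crontab entry like the one below is all it takes. The port, path, and JSON payload here are placeholders, so check the midi-relay documentation for the actual API:

# m h  dom mon dow   command
# Every Sunday at 6:30 AM, ask midi-relay on the lighting console to fire the "lights on" cue.
# The port, path, and JSON fields below are placeholders -- check the midi-relay docs for the real API.
30 6 * * 0 curl -s -X POST -H "Content-Type: application/json" -d '{"midiport":"loopMIDI Port","midicommand":"noteon","channel":0,"note":21,"velocity":127}' http://vista-console.local:4000/sendmidi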

In anticipation of completing my most recent project, “LiveCaption“, which takes audio and transcribes it to text in real-time, I started working on midi-relay 2.0: listening to MIDI input and using that to trigger a response or action.

I figured it was time this thing had a logo.

In both auditoriums at my church, we have Avid S6L audio consoles. These consoles can do a lot, and like most consoles, they have GPIO pinouts that allow you to trigger things remotely, whether it’s an action originating from the sound console or an external signal that triggers something on the console, like recalling a snapshot, muting an input, etc.

Stock photo of the console I found on the Internet.
These are (some of) the I/O pins on the S6L console. It has GPIO and MIDI ports. We use the footswitch input for setting tap tempo.

I started looking at the possibility of using the GPO pins on the console to trigger an external action like sending an HTTP request to Ross Dashboard, Companion, etc. However, there are only 8 GPO pins on this audio board, so I knew that could be a limiting factor down the road in terms of the number of possible triggers I could have.

The S6L also has MIDI In and Out, and through the Events section of the console, MIDI can be used as either a trigger (MIDI In) or an action (MIDI Out) for just about anything.

The Events page on an Avid S6L console. All kinds of things can be used as triggers and actions here! In this particular event, I’ve created a trigger that when the Snapshot “Band” is loaded, it sends MIDI Out on Channel 1 with Note 22 (A#0) at Velocity 100. MIDI-Relay then listens for that MIDI message and sends an HTTP POST request to the LiveCaption server to stop listening for caption audio.

We already have a snapshot that we load when we go to the sermon/message that mutes things, sets up aux sends, etc. and I wanted to be able to use that snapshot event to automatically start the captioning service via the REST API I had already built into LiveCaption.

In the previous version, midi-relay could only send Note On/Off messages and the custom MSC (MIDI Show Control) message type I had written just for controlling our Vista lighting consoles. With version 2.0, midi-relay can now send all of the channel voice MIDI message types:

  • Note On / Note Off
  • Polyphonic Aftertouch
  • Control Change
  • Program Change
  • Pitch Bend
  • Channel Pressure / Aftertouch

It can also send out:

  • MSC (MIDI Show Control), which is actually a type of SysEx message
  • Raw SysEx messages, formatted in either decimal or hexadecimal

And, midi-relay can now listen for all of those channel voice and SysEx messages and use them to trigger one of the following:

  • HTTP GET/POST (with JSON data if needed)
  • AppleScript (if running midi-relay on MacOS)
  • Shell Script (for all OS’s)
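
Conceptually, the listening side boils down to: watch a MIDI In port, match incoming messages against a trigger, and run the action. Here’s a rough sketch of that idea in Node.js using the same JZZ library midi-relay is built on. This is not the actual midi-relay code; the port name, note number, and URL are placeholders:

const JZZ = require('jzz');
const http = require('http');

// Placeholder values -- use your own MIDI interface name, trigger note, and target URL.
const MIDI_PORT = 'USB MIDI Interface';
const TRIGGER_NOTE = 22; // A#0, the note our S6L snapshot event sends
const ACTION = { hostname: 'livecaption.local', port: 3000, path: '/api/stop-listening', method: 'POST' };

JZZ().openMidiIn(MIDI_PORT)
  .or('Could not open the MIDI In port!')
  .and(function () { console.log('Listening on', this.name()); })
  .connect(function (msg) {
    // msg is the raw MIDI message: [status, data1, data2]
    const isNoteOn = (msg[0] & 0xf0) === 0x90 && msg[2] > 0;
    if (isNoteOn && msg[1] === TRIGGER_NOTE) {
      const req = http.request(ACTION, (res) => console.log('Trigger fired, HTTP status:', res.statusCode));
      req.on('error', (err) => console.error(err));
      req.end();
    }
  });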

There are a few software and hardware products out there that can do similar things, like the BomeBox, but I wanted to build something less expensive that could run on a Raspberry Pi, which is exactly how we’ve deployed midi-relay in this case.

Here is the Raspberry Pi running midi-relay, connected to the MIDI ports on the S6L via a USB to MIDI interface. It tucks away nicely at the back of the desk.

Now we can easily and automatically trigger the caption service to start and stop listening just by firing the snapshots on the audio console that we were already using during that transition in the service. This makes it easier for our volunteers, and they don’t really have to learn anything new.

Here’s a video of it in action:

[wpvideo W77anq42]

If you’d like to check out version 2.0 of midi-relay, you can download both the source code and binaries from GitHub: https://github.com/josephdadams/midi-relay

The documentation is pretty thorough if you want to use the API to send relay messages or set up new triggers, but you can also use the new Settings page running on the server to do all that and more.

From the Settings page, you can view available MIDI ports, add/delete Triggers, view detected midi-relay hosts running on the network, and send Relay messages to other hosts.

And if you’re a Companion user for your stream deck, I updated the module for Companion to support the new channel voice MIDI relay messages as well! You’ll need to download an early alpha release of Companion 2.0 to be able to try that out. Search for “Tech Ministry MIDI Relay” in Companion.

Here’s a list of the Raspberry Pi parts I used, off Amazon:

I hope this is helpful to you and your projects! If you need any help implementing along the way, or have ideas for improvement, don’t hesitate to reach out!

Free Real-Time Captioning Service using Google Chrome’s Web Speech API, Node.js, and Amazon’s Elastic Compute Cloud

For a while now, I’ve wanted to be able to offer live captions for people attending services at my church who may be deaf or hard of hearing, so they can follow along with the sermon as it is spoken aloud. I didn’t want them to have to install a particular app, since people have a wide variety of phone models and OSes, and that just sounded like a pain to support long-term. I also wanted to develop something low-cost, so that more churches and ministries could benefit from it.

I decided to take concepts learned from my PresentationBridge project from last year’s downtown worship night and use them for this project. The idea was essentially the same: I wanted to relay, in real time, text data from a local computer to all connected clients using the Node.js socket.io library. Instead of the text data coming from something like ProPresenter, it would be the results of the Web Speech API’s processing of my audio source.

If you’re a Google Chrome user, Chrome has implemented W3C’s Web Speech API, which allows you to access the microphone, capture the incoming audio, and receive a speech-to-text result, all within the browser using JavaScript. It’s fast and, important to me, it’s free!
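
To give you an idea of how little code is involved, here’s a minimal sketch of the recognition piece as it might look in the browser (LiveCaption’s actual event wiring is more involved than this):

// Minimal speech-to-text sketch using the Web Speech API in Chrome.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;      // keep listening instead of stopping after one phrase
recognition.interimResults = true;  // deliver partial results while a phrase is still being spoken

recognition.onresult = (event) => {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    const text = result[0].transcript;
    console.log(result.isFinal ? 'FINAL: ' + text : 'interim: ' + text);
  }
};

recognition.onerror = (event) => console.error('Speech recognition error:', event.error);
recognition.start(); // Chrome prompts for microphone access the first time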

Here is how it works: the computer doing the actual transcribing of the audio source must use Google Chrome and connect to a Bridge room, similar to how my PresentationBridge project works. Multiple bridge rooms (think “venues” or “locations”) can be configured on the server, and if more than one room is available, end users are given the option to choose the room they want to join and receive text from. The only browser requirement is on the computer doing the transcribing; everyone else can use any browser on any computer or device they choose.
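
On the server side, the relay itself boils down to socket.io rooms: the transcribing computer emits text into a room, and the server rebroadcasts it to everyone else in that room. Here’s a simplified sketch of that idea; it is not the actual LiveCaption code, and the event names are made up:

const http = require('http');
const server = http.createServer();
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // Both the Bridge computer and the end users join a room (a "venue").
  socket.on('join_room', (roomName) => socket.join(roomName));

  // The Bridge sends transcribed text; relay it to everyone else in the same room.
  socket.on('caption_text', (roomName, text) => {
    socket.to(roomName).emit('caption_text', text);
  });
});

server.listen(3000); // the default port for this project is 3000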

This is the primary Bridge interface that does the transcribing work.

From the Bridge interface, you choose which “Bridge” (venue) you want to control. If the Bridge is configured with a control password, you will have to enter it. Once connected, you can toggle whether text data is actually sent to the connected clients (helpful when you want to test transcription without sending it out to everyone), send users to Logo Mode (helpful when you’re not broadcasting), redirect all users to a new webpage, send a text announcement, or reload their page entirely. To start transcribing, just click “Start Listening”; you’ll have to allow Chrome access to the microphone/audio source the first time. There is also a simple word dictionary that can be used to replace commonly misidentified words with their proper transcription.

A note about secure-origin and accessing the microphone: If you’re running this server and try to access the page via localhost, Google Chrome will allow you to access the microphone without a security warning. However, if you are trying to access the page from another computer/location, the microphone will be blocked due to Chrome’s secure-origin policy.

If you’re not using a secure connection, you can also modify the Chrome security flag to bypass this (not recommended for long-term use because you’ll have to do this every time Chrome restarts, but it’s helpful in testing):

  • Navigate to chrome://flags/#unsafely-treat-insecure-origin-as-secure in the address bar.
  • Find and enable the Insecure origins treated as secure section.
  • Add any addresses you want to ignore the secure origin policy for. Remember to include the port number (the default port for this project is 3000).
  • Save and restart Chrome.

Here is a walkthrough video of the captioning service in action:

[wpvideo r6P0iWGj]

I chose to host this project on an Amazon EC2 instance, because my usage fits within the free tier. We set up a subdomain DNS entry pointing to the Elastic IP so it’s easy for people in the church to find and use the service. The EC2 instance runs Ubuntu Linux and the Node.js code. I also used nginx as a proxy server. This allowed me to run the service on my custom port while forwarding the necessary HTTPS (port 443) traffic to it, which helps with load balancing and keeps my Node.js server from having to handle all of that secure traffic itself. I configured it to use our domain’s SSL certificate.
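
For reference, the nginx side of that is just a small reverse-proxy server block, something along these lines (the hostname, certificate paths, and app port are placeholders for your own setup):

server {
    listen 443 ssl;
    server_name captions.example.org;                      # the subdomain DNS entry

    ssl_certificate     /etc/ssl/certs/example.org.crt;    # your domain's SSL certificate
    ssl_certificate_key /etc/ssl/private/example.org.key;

    location / {
        # Forward the HTTPS traffic to the Node.js app running on its custom port
        proxy_pass http://127.0.0.1:3000;

        # Needed so socket.io's websocket upgrade makes it through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}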

I also created a simple API for the service so that certain commands like “start listening”, “send data”, “go to logo” etc. can be done remotely without user interaction. This will make it easier to automate down the road, which I plan to do soon, so that the captioning service is only listening to the live audio source when we are at certain points in the service like the sermon. Because it’s just a simple REST API, you can use just about anything to control it, including a Stream Deck!

We deployed them in our two auditoriums using ChromeBooks. An inexpensive solution that runs the Chrome Browser!

In order to give the devices a direct feed from our audio consoles, I needed an audio interface. I bought this inexpensive one off Amazon that’s just a simple XLR to USB cable. It works great on Mac, PC, and even ChromeBooks.

XLR to USB audio interface so we can send a direct feed from the audio console instead of using an internal microphone on the computer running the Bridge.

If you’d like to download LiveCaption and set it up for yourself, you can get it from my Github here: https://github.com/josephdadams/LiveCaption

I designed it to support global and individual logos/branding, so it can be customized for your church or organization to use.

Custom Reports in Planning Center Online, Part 2

At my church, we have two venues where we run worship services simultaneously. This means that when I am running reports and printing paperwork for all the teams, there’s a lot to print! Using the PCO custom reporting tool is great because it saves so much time.

If you didn’t read Part 1 where I first talked about this, hop on over and check that out.

Lately, rather than printing reports one at a time with each plan, I’ve been using the matrix view in Planning Center to view multiple plans at once and print reports all at the same time.

The matrix view in PCO is very powerful and helps when you want to look at several plans at once.

My custom matrix report is similar to the normal plan report, but this one supports multiple plans, obviously. It loops through every plan, and then every position in the plan (based on the teams I have chosen), and then generates a sheet for that position, customized with their checklists, notes, etc.

My most recent edit includes a custom array for sort order, because the default is to print the position reports alphabetically. Rather than rename my positions, I opted for the custom sort.
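
The idea is simple: define the order you want up front and sort by each position’s index in that array instead of alphabetically. Here’s the concept illustrated in plain JavaScript with made-up position names (the actual report template implements the same idea in its own templating syntax):

// Made-up position names -- use whatever positions exist on your teams.
const printOrder = ['Producer', 'Director', 'Graphics', 'Camera 1', 'Camera 2', 'Audio'];

function byPrintOrder(a, b) {
  const ia = printOrder.indexOf(a);
  const ib = printOrder.indexOf(b);
  if (ia === -1 && ib === -1) return a.localeCompare(b); // unknown positions fall to the end, alphabetically
  if (ia === -1) return 1;
  if (ib === -1) return -1;
  return ia - ib;
}

const positions = ['Camera 2', 'Audio', 'Producer', 'Lighting', 'Camera 1'];
console.log(positions.sort(byPrintOrder));
// -> [ 'Producer', 'Camera 1', 'Camera 2', 'Audio', 'Lighting' ]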

The print order array can be customized so that reports are sorted in the order you want.

This saves time every week because now they print out in the order I need to pass them out!

If you’d like to get a copy of this custom report, head on over to the GitHub repository I set up for it: https://github.com/josephdadams/PlanningCenterServicesReports

Controlling Chroma-Q (Jands) Vista with MIDI Show Control and a Stream Deck using Node.JS and the Web MIDI API

At my church, we use Chroma-Q’s Vista lighting platform (formerly owned by Jands). It’s a great platform and easy for volunteers to execute pre-programmed lighting cues. Every large worship space we have on campus with a lighting system runs some version of Vista, whether on a physical console or a PC.

We generally program our lighting to use one or two cuelists and volunteers just advance cue by cue within that list for the service. It’s pretty straightforward and works well for them.

Sometimes, we need the ability to advance cues in a list remotely, when we’re not near the lighting console, and that’s where this latest project began.

Most lighting consoles can be controlled using some form of MIDI command. Older ones require a physical connection, while others can use network connections. By using a loopback/virtual port, Vista can receive both MIDI notes and MIDI Show Control commands.

A lot of people have been able to accomplish this type of remote control over the network using a protocol called RTP-MIDI. This protocol is very easy to use and computers can broadcast/discover each other over the network, so it makes it a lot quicker to get up and going.

This is great, and I’ve used it, but I wanted to design something more tailored to our needs. (1) I wanted something I could run on any PC or Mac that could accept commands from a wider range of sources, since so many devices nowadays can send HTTP requests. (2) I wanted something that triggered primarily over TCP, because while RTP-MIDI is great and fast, it uses UDP traffic that can’t cross VLANs/subnets; TCP traffic easily can.

So, I broke this project down into two parts: a server that listens to HTTP requests and relays local MIDI, and a module for Companion that allows the Stream Deck to send requests to that server. The server is flexible to support other devices that may want to trigger it, and the Companion module is perfectly paired to work with it.

The server runs a simple REST API that returns a list of local MIDI ports and can accept Note On, Note Off, or MSC (MIDI Show Control) commands. It accepts JSON data via HTTP POST which is then used to build the hexadecimal data and send the MIDI commands.

The HTTP side of things in Node.js uses the Express framework. The MIDI side uses the Jazz Soft JZZ.js library.
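
To make that concrete, here’s a stripped-down sketch of how those two pieces fit together. This is not the actual midi-relay code; the route, port number, and JSON fields are illustrative:

const express = require('express');
const JZZ = require('jzz');

const app = express();
app.use(express.json());

// Open the virtual MIDI port that Vista is listening to (e.g. the loopMIDI port on a PC).
const midiOut = JZZ().openMidiOut('loopMIDI Port').or('Could not open the MIDI Out port!');

// Example route: accept JSON and send an MSC "GO" for a cue.
// MSC is a SysEx message: F0 7F <deviceId> 02 <commandFormat> <command> <data...> F7
app.post('/msc/go', (req, res) => {
  const { deviceId, cue } = req.body;                                   // e.g. { "deviceId": 0, "cue": "5" }
  const cueBytes = Array.from(String(cue)).map((c) => c.charCodeAt(0)); // cue number as ASCII
  const msg = [0xF0, 0x7F, deviceId, 0x02, 0x01, 0x01, ...cueBytes, 0xF7]; // 0x01 0x01 = Lighting (General), GO
  midiOut.send(msg);
  res.json({ result: 'sent', bytes: msg });
});

app.listen(4000, () => console.log('MIDI relay sketch listening on port 4000'));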

The server runs directly on the Vista computer to relay the MIDI commands on a virtual MIDI port which Vista is listening to.

Here is a video of it in action!

[wpvideo ZvMtBNPa]

If you want to do this yourself, setting up the Vista side of things is pretty straightforward.

First, if you are using Vista on a PC and haven’t already downloaded loopMIDI, you can get it here. It’s free software that creates a virtual MIDI port on the PC.

Once that is configured, open Vista and go to the MIDI settings in User Preferences.

Under the MIDI tab, select the external MIDI port “loopMIDI” (or whatever you named your port). If you’re going to be using MSC, be sure to make note of the Device ID you select.

If you want to advance a cuelist using MIDI Note On commands, right click on the cuelist and under the MIDI tab, select the Note you want to send for the “Play” command.

I hope this is helpful for you! You can download a binary release of the midi-relay server from my GitHub. It’s available for Mac, Windows, and Linux. The Companion module will be made available in a release build at some point.

Using a Raspberry Pi Zero W and a blink(1) light for silent notifications

At my church, we often delay or “time slip” the preaching of the service in the room where the pastor isn’t physically present. To do this, we record the sermon video as it happens live, and then play it back out either a few seconds or few minutes later.

This has been a good workflow for us. Often though, in the delayed auditorium, it’s helpful for the worship leader to know when the server is ready to play back the delayed sermon video. We usually communicate this over the intercoms into the band in-ears, whenever there’s an appropriate break to do so, like when they aren’t actively singing, praying or talking. That works well, but sometimes it means we have to wait longer than we should to be able to let them know we are ready to play back the video.

So, I thought, if we had a simple cue light that we could use to let them know when we’re ready, I wouldn’t need to have my team wait to communicate. The band could just look at the light and know we are ready for them. It would also give that boost of confidence before they hear from us in the in-ears.

To create this system, I bought a Raspberry Pi Zero W and a blink(1) USB light. If you haven’t heard about the blink(1) light, I wrote about using it in this post. I bought the Pi Zero in a kit that came with a black case and power supply.

I bought this kit off Amazon for $27.

I had initially envisioned locating this light on stage, but after talking to my team, they actually preferred that it be on top of the camera back in the tech booth, so they could easily see it.

Here is the notification light. This is easy to see from the stage. That’s a professional gaff tape install. Currently we move this device back and forth between auditoriums as we alternate which room is the video venue.

I’ve been learning Python recently, so I whipped up a simple Python web server that accepts HTTP requests and then lights up the blink(1) accordingly. For now, I’ve limited it to red and green: red = problem (we aren’t sufficiently delayed, the server is not ready, etc.), green = ready/good for playback anytime, and clear/no light = no status. I set up the Pi to start this web server when it boots, so it’s very easy to set up.

We trigger the light using a Stream Deck Mini running Companion located at the video server. The operator has three buttons, and each one sends an HTTP request to the Pi Zero to trigger the light.
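
Behind each button is nothing more than a plain HTTP GET, so you can trigger the same thing from a terminal with curl. The hostname and routes below are placeholders; match them to however the Python server on the Pi is set up:

# Hypothetical hostname and routes -- adjust these to match the Python server's actual routes.
curl http://pi-cue-light.local:8000/green   # ready -- good for playback anytime
curl http://pi-cue-light.local:8000/red     # problem -- not sufficiently delayed, server not ready, etc.
curl http://pi-cue-light.local:8000/clear   # no status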

This Stream Deck Mini is running Companion and sends HTTP GET Requests to the Pi Zero server.

I also have a command set for each button action on the stream deck that updates a button on another stream deck in the other control room, so each director knows the status of the video server. This doesn’t replace our intercom communication, but it certainly augments it!

Overall, we’re very happy with this notification system! All in, it cost us about $55 for the Pi Zero kit and the blink(1) light, and of course, the code was free. 🙂 It’s available on Github if you need it! That’s where I will provide updates as I add more features to this.

ProTally 1.7 with support for Roland Smart Tally!

In my last post, I mentioned my partnership with Tony at Calvary Chapel in Las Vegas, writing software to support their Roland V-60HD switcher.

As I was reading the specs on that switcher, I noticed it had a feature Roland called “Smart Tally”. It allows users to pull up a web page on their phones and monitor sources for being in Preview or Program live as the switcher is used.

I knew I just had to add this support to ProTally, so while working to implement the remote control module, I snooped around to see how the Smart Tally service worked and came up with a way for ProTally to monitor for tally changes the same way mobile users accessing the server directly would.

It was actually pretty straightforward: when a user goes to the IP address of the Roland V-60HD in a browser, they are presented with a list of addresses. Clicking on any of these addresses loads a page where the browser repeatedly requests this URL in the background:

http://[ipaddress]/tally/[tally address]/status

This status page simply returns one of three values: unselected, selected (in Preview), or onair (in Program).

Since I wouldn’t have access to the Roland switcher to develop and test with, I needed a way to test locally. I’ve been learning the Python programming language recently, so I decided to whip up a simple web server in Python to emulate this page request, returning one of the three values based on the seconds of the clock: if the current seconds value was between 0 and 20, it would return unselected; if between 20 and 40, selected; and if between 40 and 60, onair. This was a simple way to emulate having a Roland switcher with Smart Tally on hand.
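
The logic is only a few lines. My actual version is the Python script pictured below, but the same idea, sketched here in Node.js for illustration, looks like this:

const http = require('http');

// Emulates the Roland Smart Tally status page (/tally/[tally address]/status).
// Returns one of the three values based on the seconds of the clock, cycling every 20 seconds.
http.createServer((req, res) => {
  const seconds = new Date().getSeconds();
  let status = 'unselected';
  if (seconds >= 20 && seconds < 40) status = 'selected';
  if (seconds >= 40) status = 'onair';

  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(status);
}).listen(80); // port 80 so the URL looks like the real switcher; running on port 80 may require admin rights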

This simple Python script made testing a lot easier!

This feature has been released, so you can go get it now up on the Github repo!

Controlling a Roland V-60HD video switcher with a Stream Deck and Companion

A couple of weeks ago, I was contacted through the blog by Tony Perez, longtime staff member at Calvary Chapel in Las Vegas. He asked if I could help their team to control their Roland V-60HD switcher through a stream deck using Companion.

God has given me a heart and passion to be a resource for other churches, so I jumped right in and started reading the TCP protocol specification for their video switcher. The protocol was simple enough, basically just a telnet protocol to send parameters with a terminating character to designate the end of the command.

This is the Roland V-60HD video switcher.

I had to take a sick day recently to take care of one of my kids who had an ear infection, so while he was resting, I sat down and prototyped a module for Companion to control their video switcher.

Tony and I then set a time to talk on the phone and do a TeamViewer session, and after doing some slight debugging, we had it working!

The protocol is pretty straightforward. For example, with this command:

\u0002CUT;

The switcher will perform a cut between the current on-air source and the preview source. “\u0002” is the ASCII control code “02H”, which tells the switcher that a command code is coming. “CUT” is the command, and the semicolon terminates it.
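
In code terms, sending that cut over the network is just a TCP socket write. Here’s a minimal Node.js sketch; the IP address and port are placeholders for your switcher’s LAN control settings:

const net = require('net');

// Placeholder address and port -- use the LAN CONTROL settings configured on your V-60HD.
const SWITCHER_HOST = '192.168.1.50';
const SWITCHER_PORT = 8023;

const socket = net.createConnection({ host: SWITCHER_HOST, port: SWITCHER_PORT }, () => {
  // \u0002 is the STX control code, then the command, then the terminating semicolon.
  socket.write('\u0002CUT;');
});

socket.on('data', (data) => {
  console.log('Switcher response:', data.toString());
  socket.end();
});

socket.on('error', (err) => console.error('Connection error:', err));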

We were able to implement every video-related operation and some of the system operations that seemed necessary to control remotely from a Stream Deck.

So, with just a few short hours of work, now his team can control their Roland V-60HD video switcher from anywhere on their network! This will be a great help and add to their flexibility.

You can see some of the options available for the module in this screenshot.

This was a fun project to get to help with, especially since I had never seen or used this particular video switcher before, and I was able to help a ministry on the other side of the country.

Here are some pictures of the module in action!

The module is open-source and part of the Companion project now, so anyone else who has this switcher can jump in and use it too! You can view the module code here.

Sending automated reminders via a Slack webhook, AppleScript, and Launchd on MacOS

I have always enjoyed finding ways to automate processes, especially ones that don’t require much user interaction but just need to be done at a certain time or at regular intervals. At one of my first jobs out of high school, I wrote software to automate a job for one of the clients that normally took 2.5 days by hand, taking the process down to 30 minutes, including filling out all the paperwork. Of course, the company didn’t like losing those billable hours, but it was hard to argue with the efficiency.

At my church, we have a few computers with limited drive space, and that drive space always fills up fast! In the past, I would check the drives periodically and either delete old files or move them off to another storage location. I sat down recently and decided to take that a step further: I only wanted to be notified to check a drive when it had filled past a certain threshold.

I’ve been playing around with Slack recently with a project I’m working on at home to notify me when my laundry is finished. If you’ve not heard of Slack, it is a collaboration/communication tool that integrates with lots of other platforms. It’s like a work-specific chatroom on steroids. One of the ways you can use it is with custom apps and webhooks, providing an easy way to send data and interact via a custom URL.

I won’t delve into setting up Slack and webhooks here, but I did want to share how I accomplished my goal of only getting notifications when a drive is filled past a certain amount. I used AppleScript and the launchd framework built into MacOS.

If you’ve been on the Mac platform for awhile, you’ve no doubt heard of and have maybe used AppleScript. It’s a great way to interact with Mac apps and the system as a whole, so you can automate all kinds of things.

Launchd, as defined by Apple, is “a unified, open-source service management framework for starting, stopping and managing daemons, applications, processes, and scripts.” This framework is always working in the background on MacOS, whether you know it or not!

So, I sat down and wrote an AppleScript that does the following:

  • Polls the system for the available space on the hard drive(s) I specified
  • If the space remaining is a certain amount or less, it sends a webhook request to my Slack app with a custom message to remind me to clear up the particular drive.
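
The webhook call itself is nothing fancy. From the shell (which AppleScript can invoke with do shell script), it’s a single curl against the unique webhook URL Slack generates for your app; the URL and message below are placeholders:

# The hooks.slack.com URL below is a placeholder -- Slack generates a unique one for your app/channel.
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"text":"Heads up: the Media drive has less than 50 GB free. Time to clean up!"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX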

Now, to schedule it. In the past, I used the built-in iCal/Calendar app for MacOS. It worked OK sometimes, but I found there were times scheduled events simply didn’t run for whatever reason. So, I decided to use a different method and take advantage of the launchd process built into the operating system. There’s a lot you can learn about launchd on MacOS, but I’ll summarize it here:

  • You can run processes as daemons, which run at the system level, not the user level
  • You can run processes as agents, which run at the user level
  • You can have them run when the system loads, or you can schedule them
  • Where you place the file with the instructions about your script determines whether it runs as a daemon or an agent

I chose to have mine run on a schedule every day at 7am, and send me an alert if the drive(s) are too full. I didn’t need it to run at the system level, so I made it an agent.
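
The scheduling file itself is just a small property list. Here’s a sketch of what it can look like; the label, script path, and schedule are examples, so adjust them for your own setup:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- A unique label for the agent -->
    <key>Label</key>
    <string>com.example.checkdrivespace</string>

    <!-- Run the AppleScript with osascript -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>/Users/me/Scripts/checkdrivespace.scpt</string>
    </array>

    <!-- Every day at 7:00 AM -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>7</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>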

This is the file that MacOS will look at to schedule the script to run.

Once I placed this file in my ~/Library/LaunchAgents/ folder (my main user account’s Launch Agents folder) and restarted the computer, it was ready to go! I’m looking forward to not having to remember to check those drive spaces manually anymore. I’ll automatically get notifications on my phone when I need to clear up space!

This is what the alert looks like on my phone.

I hope this helps you! If you want any of the scripts, they’re up on Github.