Tally Arbiter 1.2 – Support for NewTek TriCaster, GPO Output, and TSL 3.1 Protocol Conversion

A few weeks ago, I released some free tally light software to the community. I’ve had people checking it out and I am excited to be able to offer some more features!

Some highlights in the 1.1 release:

  • Overall performance improvements
  • Fixed an issue where Devices added during runtime did not obtain a proper initialization state and would not assign tally data properly until the server was restarted
  • Fixed an issue where Devices mapped to OBS Studio sources could not correctly be in both preview and program bus at the same time (when in Studio mode)
  • Better checking on Source connection states
  • TCP/UDP ports are now verified as in-use or reserved to help eliminate user errors
  • More verbose logging in the server console
  • All tally data received by all Sources is now sent to the Settings page (while open) for logging and diagnostic purposes
  • New Producer page; allows users to view all device states (tally information) without having the Settings page open. Created in dark mode for in-service viewing
  • Documentation added to Settings page to assist in initial setup and learning
  • OSC added as a Source type to trigger tally states
  • OSC added as a Device Action type (supports multiple arguments)
  • “Python” client renamed to “blink(1)” in preparation for other types of listener clients that may also use Python
  • Version is now displayed on the Settings page for diagnostic purposes

Now, I am releasing version 1.2! The highlights:

  • NewTek TriCaster support is now included as a tally source type
  • OBS can now properly discern whether it is in preview or program
  • Support for TSL Clients – Tally Arbiter can now send all device states (derived and arbitrated from any source type) out as TSL 3.1 (UDP or TCP) by specifying a TSL Address for each Tally Arbiter Device. This can be used to drive UMDs and other tally-receiving interfaces by acting as a protocol converter between all source types and TSL.
  • New Python listening client – GPO Output! Now you can trigger just about anything using the GPIO ports on a Raspberry Pi.
  • Bug fixes and UI improvements
  • More documentation and upgrade instructions

The biggest feature in this release is the new TSL Clients functionality. Tally Arbiter can now send out TSL 3.1 data to any number of connected devices any time a device changes state within Tally Arbiter. So you can have, for example, a multiviewer on one switcher dynamically show whether a camera is in use on that switcher or on a switcher of an entirely different brand/model, with Tally Arbiter acting as the protocol converter.
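If you’re curious what actually goes out on the wire, here’s a minimal sketch in Python of a TSL 3.1 UMD packet (this is not Tally Arbiter’s actual code; see the repo for that). A TSL 3.1 packet is 18 bytes: an address byte, a control byte carrying the tally and brightness bits, and 16 characters of display text. The IP, port, and label below are made-up values:

import socket

def send_tsl31(ip, port, address, tally1, tally2, label, brightness=3):
    # Byte 0: display address (0-126) with the high bit set.
    # Byte 1: control - bits 0-3 are tallies 1-4 (how they map to
    # program/preview depends on the receiving device), bits 4-5 are brightness.
    # Bytes 2-17: 16 ASCII characters of display text, space-padded.
    header = 0x80 + address
    control = (tally1 & 1) | ((tally2 & 1) << 1) | ((brightness & 3) << 4)
    packet = bytes([header, control]) + label[:16].ljust(16).encode('ascii')
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (ip, port))

# e.g. light tally 1 on the UMD at TSL address 1 (IP and port are assumptions)
send_tsl31('192.168.1.50', 9000, 1, tally1=1, tally2=0, label='CAM 1')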

Here’s a video to show how the new TSL Clients feature works within Tally Arbiter and how to integrate it with a switcher like the Ross Carbonite. In this example, tally data is coming from both a Carbonite and a Blackmagic ATEM and the Carbonite multiviewer reflects that in real-time.

If you’d like to check out Tally Arbiter or learn more about it, check out the GitHub repository here: https://github.com/josephdadams/TallyArbiter/

Automating production equipment using a Chromebox and a scheduling server

Have I mentioned before how I love automation? Efficiency at its best.

One of the things I like to automate is powering on production equipment on Sunday mornings. It’s nice to walk in for the day and already have equipment turned on and ready to go. It saves me time so I can focus on other things.

In the past, we’ve used crontab on some of the production Macs to send HTTP requests via curl commands. It worked, but it’s hard to manage when there are a lot of commands to keep up with. We even tried GUI front-ends for crontab like CronniX, but the end result is the same: how do we track and manage all of the commands from a central place?

I came across a tool a while back called Cronicle. It’s a multi-server task scheduler and runner with a web-based front-end UI; essentially, it’s cron written in Node.js. It can create scheduled and repeating jobs, perfect for ministry activities that tend to repeat.

Cronicle screenshot from their GitHub repository.

I knew that I wanted to set up a dedicated server to run Cronicle for us. A while back, I picked up a used Asus Chromebox CN60 off eBay for $37. I originally bought it hoping to use it with my LiveCaption project, but that didn’t work out. However, the specs on the box are as good as (if not better than) a Raspberry Pi’s, so I decided to turn it into the server for this project.

Pretty small so it’s not too bad sitting in the rack!

It’s a fairly simple process to remove ChromeOS and install Ubuntu. I won’t detail that here, but you can read about it. Installing Cronicle is just as easy if you follow the instructions in the Cronicle GitHub repo.

Once the server is up and running, it’s time to make some events! I decided to break our scheduled events into some basic categories:

  • Auditorium 1 (events for equipment primarily in Aud 1 related to regular ministry activities like Sunday mornings)
  • Auditorium 2
  • Campus Speakers
  • Automated Video Recording
  • General

For example, I have a scheduled event for every Sunday, at 6am, to turn the Auditorium 1 projectors on. It sends an HTTP request using the built-in Cronicle HTTP Request plugin to my Dashboard Production Control system which contains the code to turn the projectors on/off.

Screenshot of an event

I also have the lights turn on using my midi-relay software. Other scheduled events automatically route the side screens on our Ross Carbonite switchers to the pre-service slides, turn on the campus speakers, etc.

A really nice feature of Cronicle is the ability to add your own plugins. They can be written in virtually any programming language and receive JSON input from Cronicle, so you can customize parameters and commands that get passed to them.
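To give a feel for how simple a plugin can be, here’s a minimal sketch of one in Python. The shape follows Cronicle’s plugin convention of reading the job as JSON on stdin and printing a JSON completion line to stdout; the host and command parameters are hypothetical examples, not taken from my actual plugins:

#!/usr/bin/env python3
# Minimal sketch of a Cronicle plugin. Cronicle launches the plugin,
# passes the job (including any custom parameters) as JSON on stdin,
# and reads JSON status lines back on stdout.
import json
import sys

job = json.load(sys.stdin)              # the job object from Cronicle
params = job.get('params', {})          # custom parameters defined for this plugin

host = params.get('host', '127.0.0.1')  # hypothetical parameters for illustration
command = params.get('command', '')

# ... do the actual work here, e.g. open a TCP socket and send `command` to `host` ...

# report completion back to Cronicle (code 0 = success)
print(json.dumps({'complete': 1, 'code': 0, 'description': 'Sent "%s" to %s' % (command, host)}))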

For some of our needs, I’ve created a few plugins so far, which I’ve made available in my GitHub repository:

  • RossTalk – to send commands to Ross Carbonite switchers
  • VICREO Listener File Opener – to open files, scripts, and programs on remote devices (requires the free VICREO Listener program)
  • VICREO Listener Hotkey – to send hotkey commands to remote devices
  • Videohub – to change routes on Blackmagic Videohub routers

I decided to write all of my plugins in Python because the Linux server can run them out of the box with few modifications needed, especially since they just use simple TCP protocols to send information.
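As an example of how simple those TCP protocols are, here’s a rough sketch of what the RossTalk plugin boils down to. RossTalk is a plain-text protocol (7788 is its usual TCP port); the switcher address and custom control number below are made up:

import socket

def send_rosstalk(host, command, port=7788):
    # RossTalk commands are plain ASCII lines terminated with CR/LF
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((command + '\r\n').encode('ascii'))

# e.g. fire custom control 1 on bank 1 of the Carbonite (address is an assumption)
send_rosstalk('192.168.1.60', 'CC 1:1')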

Some of the events I have created so far.

I’m always forgetting to run the video recording, so I automated that. We have a speaker system throughout the campus that has various amps that need to be turned on, so now we can turn them on with a schedule, and even start Spotify playing at a specific time! (This is done by executing an AppleScript on the computer running Spotify.)

We also have an 8-week event coming up in the fall for a Bible study that is at 6am in Auditorium 2. The tech needs are minimal, but they want lights on, a microphone, video recorded, to project some slides, etc. So, we created events to:

  • Open Vista (VICREO File Opener)
  • Go to a specific light cue on a specific cue list (midi-relay)
  • Turn on the projectors (HTTP request)
  • Turn off the LED wall (HTTP request)
  • Take the PTZ camera to a specific preset position (HTTP request)
  • Turn on a mic (midi-relay to a Raspberry Pi connected to the S6L via MIDI)
  • Route the program audio and video to the Ki Pro recorder (Videohub; see the sketch just after this list)
  • Start the Recording
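For reference, here’s roughly what that Videohub route change looks like at the protocol level. The Videohub Ethernet protocol is plain text over TCP port 9990 with zero-based input/output numbers; the router address and the route numbers in this sketch are assumptions:

import socket

def set_videohub_route(host, output, source, port=9990):
    # The Videohub sends a status dump on connect; read and ignore the start of it,
    # then send a routing block: zero-based output number, then input number.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.recv(4096)
        sock.sendall(('VIDEO OUTPUT ROUTING:\n%d %d\n\n' % (output, source)).encode('ascii'))

# e.g. route input 4 to output 7, the one feeding the Ki Pro (numbers are assumptions)
set_videohub_route('192.168.1.40', 7, 4)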

And then later in the morning, at specific times, everything will turn off and go back to normal. We will train someone to be on-site in the event of a change in plans, but this will greatly reduce the need to train someone on all of the buttons to press to turn everything on in the right order – it will just happen for them automatically based on the time of day!

Here’s a walkthrough video of it in action:

Overall, I’m very glad to have a centralized system in place to manage these scheduled events and automate our systems, and I’m looking forward to making it even better as we continue to use it. If you want to try Cronicle for yourself, you can read more about it on their website. It’s a free tool, so it’s definitely worth checking out. I’ve made my simple plugins free as well, and you can get them here: https://github.com/josephdadams/CroniclePlugins

VIDEO: Using Custom Reports in Planning Center Online to print out checklists for each position on the team

I’ve written a few different times about Planning Center Online and how we use it at my church. I thought I would share a video walkthrough of how to set up custom checklists for positions as well as our latest custom matrix report.

If you’d like to try out any of the reports for yourself, you can download them here: https://github.com/josephdadams/PlanningCenterServicesReports

Using Node.js, Python, multiple Raspberry Pis, and USB lights to create an inexpensive wireless camera tally system that can arbitrate multiple sources simultaneously

Update: Version 1.2 is available now; read about it here:

At my church, we have two auditoriums, each with their own video switcher and cameras. All of the inputs and outputs of each switcher are on a common video router, so all of these sources can easily be shared across both rooms. However, even with all this, we have no camera tally system. Commercial tally systems can be expensive, and it’s just something we’ve never been able to afford.

It’s not normally an issue, but sometimes we want to pull up a shot of a camera in Auditorium 1 and show it in Auditorium 2. Because we have no tally system, the camera operator would not know their shot was being used. And, even if we did have a tally system, those systems generally only interface with one tally source/switcher, not multiple sources at the same time.

A few weeks ago, I was quarantined from work because a co-worker tested positive for COVID-19. I set out to use this time to write a tally system for our church to use. Now that we’ve re-opened for church services, we will really need this: we will have cameras capturing the service to stream online, but those shots won’t necessarily be visible on the projector screens in the auditoriums, which would at least have given the operators a visual reference that their shot was in use.

And, because we have two video switchers, I needed to come up with a solution that would allow either video switcher to pull up a camera in either auditorium in Preview or Program, and make sure the operator still knew their shot was in use.

So here is Tally Arbiter. I called it this because the software aggregates tally data from multiple sources and “arbitrates” whether that device is in Preview, Program, or both across all sources and buses.

I played Halo a lot back in the day. A LOT.

The server software is written in Node.js and can run on a Raspberry Pi. It supports the TSL 3.1 network protocol, which our Ross Carbonite switchers use, but I’ve also written support for Blackmagic ATEM switchers, OBS Studio, StudioCoast vMix, and Roland Smart Tally. I plan to one day add support for incoming webhooks, and GPIO inputs for switchers that don’t have network-based protocols.

The settings page of the Tally Arbiter server.

The software supports tally data coming from multiple sources, and each source can vary in protocol type. This could be useful, for example, if you had shared cameras for your production on-screen using an ATEM and also through your live stream using OBS or VMix, and you need the cameras to reflect the tally data of either system.

You can configure multiple devices in the software. These are the things that generally receive tally data, whether they be cameras, CG stations, monitors, etc. Each device can support addressing from multiple sources. This is the “arbitration” portion of the software.

Once a device is determined to be in preview and/or program, device action(s) can be run. This can be sending out a TSL 3.1 protocol message (to a monitor/scope/multiviewer), sending an outgoing webhook (to tell another device to “roll clip” and start playing a video, for example), triggering a relay if you have CCUs that need contact closures to turn on the tally lights, or even local console output for logging and testing.

Some of our cameras have built-in tally lights, like the two Hitachi Z-HD5000 cameras we have. For those, I implemented a separate relay controller client that listens to the data on the Tally Arbiter server. It uses simple USB relays with the Node.js library I created a couple years ago that controls our auditorium window shade.

I bought a project box, put the relay in, ran some CAT5e cable I had lying around, and connected it to the relay and the CCUs with custom DB25 connectors. I had to modify the project box some because I wanted the relay to sit flat in the box, so I used a Dremel to remove the bottom of the middle screw posts, which weren’t really needed anyway. Never be afraid to modify something to make it work!

The relay fits snugly inside this box. This particular unit has 8 relays, so I could add 2 more cameras with preview/program tally control to this unit.
The box and the Pi running the server fit nicely on top of one of the CCUs.
A clean rack is a happy rack!
Preview and Program lights!
This will make our camera operators happy.

But what about the cameras we use that don’t have tally lights? For these, I decided to use Raspberry Pi Zero Ws running software that listens to the Tally Arbiter server over websockets. These particular Pi models are inexpensive and simple to use. I knew I could get the lowest cost per physical tally light out of these Pis by going the GPIO route with some LEDs and custom circuitry, but I wanted to design something that people who may not be comfortable with those concepts could easily implement. And honestly, soldering something just sounded like something I’d have to maintain down the road. So, I used the blink(1) USB lights by ThingM.

I first started experimenting with these USB lights about a year ago when I created a silent notification system for our band to use in case we had a tech issue during a service. The company that makes them publishes very easy-to-use APIs, which makes the blink(1) a great tool to use with custom software.

I like this simple black case from Vilros. You can get the whole kit minus the SD card for about $30 on Amazon.
Here’s a blink(1). A pretty versatile device!

The listener client script is written in Python, since that language runs easily on Raspberry Pi OS no matter what model of Pi you have. And since we are using the socket.io websocket library, bi-directional real-time communication between the server and the clients is not an issue, even though the programming languages differ.
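A stripped-down version of that idea looks something like this, using the python-socketio client library. The event name and the server address/port here are placeholders for illustration; the real listener script and its protocol are documented in the Tally Arbiter repository:

import socketio  # pip install "python-socketio[client]"

sio = socketio.Client()

@sio.event
def connect():
    print('Connected to the Tally Arbiter server')

# hypothetical event name for illustration; see the repo for the real protocol
@sio.on('device_state')
def on_device_state(data):
    # here you would decide what color the blink(1) should show
    print('New tally state received:', data)

sio.connect('http://192.168.1.100:4455')  # server address and port are assumptions
sio.wait()  # keep listening until interrupted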

I used a USB extension cable to bring the light up by the camera, but the Pi is down on the floor of the platform.
Another view.

Altogether, each wireless tally light should cost between $55 and $60, depending on what Pi case you use, SD cards, etc. Tally Arbiter has no built-in limit on the number of wireless clients that can be connected, which makes it a very versatile and flexible system no matter the size of your production.

Lastly, I also created an option to view live Tally data in a browser, like on a tablet or phone. You can select the device from the list and the background of the page will be red, green, or black depending on that device’s tally state.

The web based tally option is nice if you need a quick portable tally option.

The web tally is controllable through the Settings page just like any other listening client, so you can reassign the tally remotely and even send a flash to that client to get their attention.

Here’s a walkthrough video of the whole system in action:

As usual with my projects, I’ve made this open-source and available for your use on GitHub: http://github.com/josephdadams/TallyArbiter. It is fully documented, with a REST API if you want to automate use of it outside of the GUI that I have created. There are also step-by-step instructions on how to set up a Raspberry Pi Zero, with the OS imaging and all of the library and script installs needed to get it going.

My hope and passion is to see resources like this used to further the Gospel. I believe, and have seen, that God can use technology for His good, and when we can use it to further ministry, that is how we can see the Gospel spread.

If these projects are helpful to you and your church, let me know! I love hearing how you use technology to serve the church.

Walkthrough: Using a Stream Deck and midi-relay to control Faithlife Proclaim

If you’ve not heard of Proclaim, it is presentation software similar in concept to ProPresenter. I’m not a user myself, but I have had several people write in and ask how they could control it with a Stream Deck, so I thought I would share a quick post on how to do this with Companion and the midi-relay software.

This walkthrough will be on the Mac platform.

First, on the Proclaim computer, open the application, “Audio MIDI Setup”.

The Audio MIDI Setup program.

Now, in the Audio MIDI Setup application, go to Window > Show MIDI Studio.

Double click on the IAC driver.

Make sure the checkbox “Device is online” is checked, and click Apply.

Now that the IAC driver is enabled, you need to download midi-relay on the Proclaim computer. You can get it here: https://github.com/josephdadams/midi-relay. It is up to you whether you run it directly within Node or use a compiled binary; the results are the same.

Once midi-relay is running, you’ll see the terminal output window showing the available MIDI ports.

You can see the IAC Driver Bus 1 listed here.

Now open Companion. It can be running on the same Proclaim computer, or another computer on the same network. In the Web GUI, create a new instance of the midi-relay module.

Search for “midi” in the search bar and the “Tech Ministry MIDI Relay” module should show up.

In the configuration tab, type in the IP address of the computer running midi-relay. If the same computer is running both Companion and Proclaim (with midi-relay), you can type in 127.0.0.1.

Now create a new button with a midi-relay action. Choose “IAC Driver Bus 1” for the MIDI Port, and set the other MIDI values as you like. Proclaim will detect them in the next step, so the channel, note, and velocity are not too important, as long as the note is unique for each action you want to take (previous slide, next slide, etc.).

Now in Proclaim, go to Settings, and click the MIDI Input tab. Click “Add Command”.

Select the command you want to be able to control from Companion. Here, I’ve chosen “Previous Slide”.

There are a lot of options you can control within Proclaim!

Once you select a command, Proclaim will start listening for the MIDI message.

Now go back to the Companion GUI and click “Test Actions” on your button.

Proclaim will detect the MIDI message and apply it to the command.

Repeat this for all the commands you want to control from your Stream Deck with Companion and midi-relay.

That’s it! I hope that is helpful! As always, if you need some help along the way, don’t hesitate to reach out to me. If this post or others have helped you, take a minute and learn more about me.

Using Zoom, a Blackmagic ATEM Mini, and a Stream Deck XL for real-time remote video production over a VPN

Since the coronavirus pandemic shut everything down, my whole schedule and routine has changed, like everyone’s. Being with my family more is really nice. One significant change is that the church I work at has told everyone to stay home and only be in the office when doing a task that can only be done there.

When that happened, I came up with a workflow that would allow me to run video production equipment housed at the church, from my house, in the event that I couldn’t get to the facility, like a few weeks ago when I had to stay isolated waiting on the results of a COVID-19 test (it was negative).

We have a private VPN connection that I can use at my house with my workstation, which is great because it allows me to access all of the internal network devices at the church while I’m at home. From a networking standpoint, it’s as if I’m there. I can screen share to all my computers and use terminal windows to control nearly everything.

With the private VPN, I have Companion 2.0 running on my laptop with a Stream Deck XL as a control surface. I’m able to control the video router (Blackmagic Videohub), the video switcher (Ross Carbonite), the recording equipment (AJA Ki Pros), and of course OBS. But getting a monitoring feed in real time with audio was a challenge, especially with several Netflix, YouTube, and Disney+ streams going in the house!

I made a page that allows me to do basic cuts between the sources on the switcher. I press the button here, the command goes over the VPN to the switcher, and I get visual feedback from the video conference call with Zoom.

I can change scenes in OBS and even have transport control of the AJA Ki Pro, all remotely!

Enter Zoom! And a Blackmagic ATEM Mini! The ATEM Mini is a relatively new device; it’s basically a small portable video switcher. We sort of panic-bought one when this virus was just coming around in our area, in case we needed to be able to do a portable live stream off-campus. Thankfully, we haven’t had to do that yet, but since we have it, I’ve been putting it to use for small events.

The Blackmagic ATEM Mini. It’s a portable 4-input mini switcher.

The ATEM Mini has an HDMI output, but it also has a “webcam output”, which means the sum of your video production can be sent to the computer and received as a normal webcam. This feed can then be brought into Zoom as a camera option!

I am only using one input as this is just a basic HDMI to webcam converter at this point. But if I had more inputs, I could connect them and control it all remotely!

A screenshot of the multiviewer being sent back to me over Zoom.

Overall, I have found it very helpful to have access to this while I work remotely. I could run our live stream on Sundays completely remotely from my house, if I needed to. Along with our Unity Intercom setup, I could even run the switcher and direct cameras from my house for our weekly music recording. I hope I don’t ever have to do that, but it’s nice to know that I could!

Also, since I’m sitting at home more, being a video DJ for my kids and fulfilling their various TV-watching requests, I added a page to the Stream Deck that lets me control the Roku TV on the other side of the room. This is a module I wrote for Companion that uses Roku’s ECP protocol. It makes life a little easier!
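Roku’s External Control Protocol (ECP) is just a simple HTTP API the TV exposes on port 8060, which is what the Companion module speaks under the hood. A quick Python sketch of the same idea (the TV’s IP address is an assumption):

import requests  # pip install requests

roku = 'http://192.168.1.30:8060'  # ECP listens on port 8060 of the TV

requests.post(roku + '/keypress/Home')  # simulate a press of the Home key
requests.post(roku + '/launch/12')      # launch a channel by app ID (12 is Netflix)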

I can control the basic functions of the Roku remote with this module, and even launch Netflix from the push of a button! Now I just need to make it start their favorite shows automatically…

It is amazing what we can do with technology these days, and it delights me to see technology put to use to serve the church. I hope this is helpful to you! How are you doing remote production during all of this?


Sending OSC messages from Ross Dashboard

Just thought I would share a quick custom panel that shows how to send OSC from Ross Dashboard to other devices.

If you’re not familiar with OSC (Open Sound Control), you can read about it here. Essentially, it is a protocol used for real-time communication between (typically) media devices, synthesizers, etc. It has grown to be used by a wide variety of software for remote control purposes.

To send a message, a byte array must first be constructed. In Dashboard, the easiest way to do this is to use a messageBuilder object and then convert it to a byte array at the end.

function createOSCMessage(cmd, val, varType)
{
     var messageBuilder = ogscript.createMessageBuilder();
     var len = cmd.length + 1; // address string length plus its null terminator
     var pad = (4 - len % 4) % 4;
     messageBuilder.writeString(cmd);

     // null-terminate the address string
     messageBuilder.writeChar(0);

     // pad the address string with nulls to a 4-byte boundary
     for (var i = 0; i < pad; ++i)
     {
          messageBuilder.writeChar(0);
     }

This creates the message builder object, inserts the OSC address string, and then pads it with nulls. Per the OSC spec, the address string’s byte length (including its null terminator) must be a multiple of 4, so the required padding is calculated. For example, “/command/float” is 14 characters; with the null terminator that is 15 bytes, so 1 pad byte brings it to 16.

Next, the type (float, int, or string) is determined and the value applied:

     // set the 4 bytes that identify the format
     messageBuilder.writeChar(',');

     if (varType == 'float')
     {
          messageBuilder.writeChar('f');
          messageBuilder.writeChar(0);
          messageBuilder.writeChar(0);
          messageBuilder.writeFloat(val);
     }
     else if (varType == 'int')
     {
          messageBuilder.writeChar('i');
          messageBuilder.writeChar(0);
          messageBuilder.writeChar(0);
          messageBuilder.writeInt(val);
     }
     else
     {
          messageBuilder.writeChar('s');
          messageBuilder.writeChar(0);
          messageBuilder.writeChar(0);
          messageBuilder.writeString(val);
          // string arguments must also be null-terminated and padded
          // to a 4-byte boundary, just like the address string
          messageBuilder.writeChar(0);
          var valPad = (4 - (val.length + 1) % 4) % 4;
          for (var j = 0; j < valPad; ++j)
          {
               messageBuilder.writeChar(0);
          }
     }

     return messageBuilder.toByteArray();
}

The resulting byte array is returned to the function that called it.

To send a float:

function sendOSCMessageFloat(ip, port, cmd, val)
{
     ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'float'));
     ogscript.debug('OSC Float Sent');
}

var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/float';
var oscFloat = 1.1;
sendOSCMessageFloat(host, port, oscCommand, oscFloat);

To send an int:

function sendOSCMessageInt(ip, port, cmd, val)
{
     ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'int'));
     ogscript.debug('OSC Int Sent');
}

var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/int';
var oscInt = 1;
sendOSCMessageInt(host, port, oscCommand, oscInt);

To send a string:

function sendOSCMessageString(ip, port, cmd, val)
{
     ogscript.sendUDPBytes(ip, port, createOSCMessage(cmd, val, 'string'));
     ogscript.debug('OSC String Sent');
}

var host = '127.0.0.1';
var port = '12321';
var oscCommand = '/command/string';
var oscString = 'TEST';
sendOSCMessageString(host, port, oscCommand, oscString);

That’s it! Pretty simple using the message builder and byte array.

I’ve made the custom panel available on GitHub.

How to create a custom Alexa Skill to play church sermons on Amazon Echo devices

We are an Amazon household. We buy stuff on Prime all the time. Sometimes, it feels like a daily task! We also really love the Amazon Echo devices and using Alexa for a variety of things. My boys love to ask Alexa to play fart sounds and we use it for music, timers, announcements, phone calls, sound machines at night, you name it.

One thing I have wanted for a while is the ability to easily play our church’s sermons on the Echo Dots in our house so I can listen while doing other things. In the past, I’ve simply played them from my phone with the Echo acting as a Bluetooth speaker. That works okay until I walk out of Bluetooth range, of course, and it means my phone is tied up playing that audio.

Amazon has made it super easy to create your own Alexa skills, which are like voice-driven apps. You can enable and disable skills using the Alexa app, similar to how you install and uninstall apps on your phone. Using Alexa Skill Blueprints, creating your own church Alexa skill is super easy.

The Alexa Blueprints home page.

There are a wide variety of blueprints available, which are basically templates that speed up creating your own skill. This is especially great if you don’t want to, or don’t know how to, write the code yourself.

They have a pre-made template called “Spiritual Talks”.

This is the blueprint/template that makes the process very simple!

To create your own skill, you will need:

  • Your podcast audio URL. We already post our sermons to iTunes and generate an RSS feed automatically through our church management software, Rock RMS: https://www.fellowshipgreenville.org/GetChannelFeed.ashx?ChannelId=28&TemplateId=1116&count=110
  • A Welcome message. When the skill is launched for the first time, Alexa will speak a welcome message. I used something simple: Welcome to Fellowship Greenville, South Carolina. Come and join us to worship every Sunday at 9am and 11am. Visit us any time to hear previous sermons.
  • A Returning message. When the skill is re-opened, Alexa will speak a welcome-back message. Here is what I used: Welcome back to Fellowship Greenville’s Sunday morning sermons podcast.
  • A skill name and logo. I used our church’s name and logo for this.

Once you’ve supplied all the information, you will want to publish the skill to the Alexa Skills Store. Someone will review it, and once it’s approved, it will be publicly available. You can also privately share the skill if you don’t want to go through the publication process. I think they said to allow 2 business days, but mine was approved a lot faster than that. You can also make changes to the skill any time you want, but it will have to go through the re-approval process each time you make a change that you want made public.

Now, if people in our church want to use the skill, they just have to open the Alexa App on their phone, search for Fellowship Greenville in the Skills Store, and enable it.


Then, they can say things like:

  • “Alexa, open Fellowship Greenville”
  • “Alexa, ask Fellowship Greenville for the latest message”
  • “Alexa, Start Fellowship Greenville”


So far, it’s working pretty great for us! I am excited about adding this feature for our church as I am always looking for ways to make our sermon content more accessible. The nice thing about this is that it uses our existing podcast feed, so I don’t have to do any extra work each week for the skill to get the latest content! It just works.

Go check it out for your church! If you don’t have an Amazon account, you’ll need to create one. The skill will be tied to that account, so make sure it’s an account you own.

Walkthrough: Setting up midi-relay on macOS to control Chroma Q Vista 3 with a Stream Deck over the network

I have had a few people ask if I could post another walkthrough, with more precision, on setting up midi-relay to control Chroma Q Vista (formerly owned by Jands) with their Stream Decks.

What you will need:

  • macOS running Vista 3 (Vista 2 will also work)
  • Node.js installed, or you can download the MacOS binary release of midi-relay here: https://github.com/josephdadams/midi-relay/releases
  • Bitfocus Companion installed and running on a computer/device (it can be the same computer running Vista, or another computer on the network)

To set it all up:

  1. First, you will need to set up the loop-back MIDI port. Open Audio MIDI Setup. It’s in Applications > Utilities.
  2. In the Audio MIDI Setup window, choose Window from the top menu, then Show MIDI Studio.
  3. This opens the MIDI Studio window. You will see a few options here, such as Bluetooth, IAC Driver, and Network. Depending on how you may have configured MIDI ports in the past, the number of devices here can vary.
  4. Double click the IAC Driver device. This will open the Properties window. The main thing you need to do is click the checkbox for “Device is online” (if not already checked). You may also want to change the device name to Vista.
  5. You can close out all of the Audio MIDI Setup windows now.
  6. Now you need to start midi-relay. Open a Terminal window and change directory to where you put the executable file for midi-relay. I put mine in a subfolder within the Documents folder. It’s important that you run the executable while the Terminal window’s working directory is the folder the executable is in, or things may not work correctly. Once you’ve changed to the correct folder, you can drag the executable file from Finder to the Terminal window, or you can type in the executable name manually. Hit enter to run it.
  7. When midi-relay starts up, it will give you a read-out in the console of all the available MIDI in/out ports. You should now have one that says Vista Bus 1.
  8. Open Vista. Go to the User Preferences menu by selecting File > User Preferences.
  9. Go to the MIDI tab.
  10. Under the MIDI Show Control section, set the Device ID to 0 (zero).
  11. Under the External MIDI Ports section, check the box next to the Vista Bus 1 MIDI port.
  12. Click OK.
  13. In Vista, right click on the cue list you want to use with MIDI control, and choose Properties.
  14. Go to the MIDI tab.
  15. Now open the Companion Web GUI on the computer that is running Companion.
  16. Add a new instance by searching for Tech Ministry MIDI Relay.
  17. In the instance configuration, type in the IP address of the computer running Vista and midi-relay. If you’re running Companion on the same computer, you can use IP address 127.0.0.1.
  18. Click Apply Changes.

To Send a MIDI Note On and advance a cuelist:

  1. Add a new button in Companion.
  2. Add a new action to that button, using the midi-relay action, Send Note On.
  3. Under the options for this action, choose the Vista Bus 1 for the MIDI port.
  4. By default, it will send channel 0, note A0 (21), with a velocity of 100. Vista does not look for a specific velocity value, only channel and note. Vista will listen on any channel by default, but if you set a specific channel in the Vista MIDI settings, you will need to make sure you send the correct channel from Companion. (The actual bytes on the wire are sketched just after this list.)
  5. Go back to Vista and, in the Cuelist Properties MIDI tab, click Learn next to the Play item. The Play command is what advances a cuelist. The Learn function listens for incoming MIDI notes and makes setting the MIDI note slightly easier (and it proves that it works). You can also just set the note manually if you want.
  6. Go back to Companion and click Test Actions (or press the physical button on your stream deck if you are using one), and the Learn box in Vista will go away, and you’ll see that the note you sent from Companion is now populated in the Vista settings.
  7. Now every time you press that button in Companion, it will advance that cuelist. If you have multiple cuelists, you will need to use different MIDI note values.
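For the curious, the Note On from step 4 is only three bytes on the wire, which is all midi-relay is relaying to the IAC port. A quick Python illustration of the default values:

# A Note On message is three bytes: a status byte (0x90 = Note On + channel 0),
# the note number, and the velocity.
channel = 0
note = 21        # A0
velocity = 100
note_on = bytes([0x90 | channel, note, velocity])
print(note_on.hex())  # -> '901564'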

To Send a MIDI Show Control message to go to a specific cue in a cuelist:

  1. Add a new button in Companion.
  2. Add a new action to that button, using the midi-relay action, Send MSC Command.
  3. Choose Vista Bus 1 for the MIDI port.
  4. The default Device ID is 0 (zero) but if you changed that in Vista, make sure it matches here.
  5. The Command Format should be Lighting – General and the Command should be Go. (The resulting SysEx bytes are sketched just after this list.)
  6. The Cue field should be the specific Cue Number in Vista of the Cuelist you want to control.
  7. The Cue List field should be the specific Cuelist Number in Vista.
  8. Now every time you press that button in Companion, it will go to that specific cue in that specific cuelist.
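Under the hood, an MSC Go is a short SysEx message. Here’s a Python sketch of the bytes, assuming the standard MIDI Show Control layout (0x01/0x01 for Lighting – General / Go, with the cue and cuelist numbers sent as ASCII); midi-relay assembles this for you from the fields above:

def msc_go(device_id, cue, cue_list):
    # MSC 'Go' SysEx: F0 7F <device> 02 <command format> <command> <cue> 00 <list> F7
    body = cue.encode('ascii') + b'\x00' + cue_list.encode('ascii')
    return bytes([0xF0, 0x7F, device_id, 0x02, 0x01, 0x01]) + body + bytes([0xF7])

# e.g. go to cue 1 in cuelist 900 on device ID 0 (numbers are just examples)
print(msc_go(0, '1', '900').hex())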

Here’s a walkthrough video of these steps:


I hope this is helpful! If you’re using midi-relay, feel free to drop a comment and share how it is working for you!

Using the iOS Shortcuts app to automate production workflows

I love automation. I love making things more efficient and consistent, and I’ve found that on a particular level, automating or simplifying certain tasks through automation can make it easier for volunteers when working in a church production environment.

The latest app that I’ve been enjoying is the iOS Shortcuts app, which was added to my phone in a recent iOS upgrade. It allows you to use actions within apps or activity on your phone to spawn other actions – things like “text my wife how long it will take me to get home when I leave work,” using the GPS location on my phone, or a shortcut that automatically posts a picture to Facebook when you take it with the camera app.

Look for this app on your iOS device.

If you’ve ever used the service IFTTT, you’ll find some of the concepts in the Shortcuts app familiar. Of course, the integration into the phone at a core level with Shortcuts is much nicer. One thing I particularly like is that, once you name a shortcut, you can simply say, “Hey Siri, [shortcut name]” and it will run.

And, Shortcuts can make HTTP requests (GET, POST, with JSON, etc.) as actions. So, it’s super easy to add a shortcut that triggers a Companion button or a task in a Ross Dashboard custom panel, for example. And that’s one of the ways I’m using the Shortcuts app.
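For example, Companion exposes a simple HTTP API for pressing buttons, which a Shortcuts “Get Contents of URL” action can hit directly. The Python equivalent below shows the idea; the IP address, port, page, and button numbers are assumptions for illustration:

import requests  # pip install requests

# Companion 2.x can press a button via its HTTP API: /press/bank/<page>/<button>
requests.get('http://192.168.1.20:8000/press/bank/1/5')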

In our production workflow, we use Ross Dashboard custom panels that I created to control nearly every aspect of our video system (and slowly, audio and lighting as I create the connections). It’s very easy to trigger a button via HTTP request, so I set up several shortcuts that save me time, especially when I am away from the production area or not near a computer running one of the Dashboard panels, as long as my phone is connected to the production network wifi (or I’m connected via VPN if remote).

Here are a few of the shortcuts I’ve created.

All this particular shortcut does is make an HTTP GET request to my master Ross Dashboard custom panel, which is listening on port 5400, and trigger the GPI “aud1_psl”.

It’s the same as clicking on this yellow button, but I can run it from my phone, as long as I am connected to the production network!

So, just like that, it’s very easy to say, “Hey Siri, go to Pre Service in Auditorium 1,” and have all of the lights change (via a midi-relay command that sends a MIDI Show Control message to our Vista lighting console) and the program screens go to the pre-service loop (via a RossTalk command that executes a custom control on the Carbonite to change inputs).

Here’s a video of it in action!


Go check out the Shortcuts app if you aren’t using it already in your production workflow!