Using a Stream Deck and a Raspberry Pi to Create a Remote Control Panel to Adjust Marshall Cameras over IP with RS-485 Control

At my church, we have 4 of these cameras: Marshall CV503

Marshall CV503 Miniature Camera

We use them during services to capture shots of the instruments (drums, keys, etc.) and whatever is happening on stage. They are great little action-style cameras, and they have SDI out on them so they are super easy to integrate into our video system.

They offer a lot of adjustment options via a local joystick-style controller at the camera, but obviously that’s challenging to use during a service if we need to adjust a camera’s exposure. The menu is an OSD and shows up on the live output. Plus, the cameras are all over the stage, and we can’t walk over to them during a service!

While I wish they were IP-controllable directly, this particular model does not have that option. They do, however, come with RS-485 serial connectors.

So we decided to create a remote shading system using a Stream Deck running Bitfocus Companion. The Marshall cameras support the VISCA protocol over RS-485. In fact, if you’re a Windows user, Marshall provides free software to control the cameras over RS-485.

Marshall provides this program if you have Windows and want to connect your cameras directly to that computer.

We don’t use a lot of Windows computers around here, and that program requires the computer running it to be the one physically connected to the cameras via serial. That’s not ideal for us, because the cameras are on a stage and our computers typically are not. Marshall also makes a nice hardware RCP, but we didn’t want to pay for that.

So we did what you probably already guessed: put in a Raspberry Pi with a USB to RS-485 adapter that we could control remotely.

We have several wallplates across the stage with network tie lines on them that feed back to a patchbay in the rack room. So we made cables that connect to the RS-485 port at each camera and run back to an RJ45 jack on a wallplate. We utilized the blue/white-blue pair of the CAT6 cable. We chose that pair because those pins carry data in a normal network connection, which means that if someone ever accidentally connected it straight to a switch or something, there would not be any unintended voltage hitting the cameras.

Each camera is set to its own camera ID (1-4), and the matching baud rate of 9600 (the default). Then in the rack room, we made a custom loom to take the 4 connections and bring them into a jack, which then feeds into the USB to RS-485 adapter on the Pi.

The Pi is a Raspberry Pi 4 with 4 GB of RAM. Honestly, for what this thing is doing, we probably could have just run it off of a Pi Zero, but I wanted it hardwired to my network, and the bigger Pis come with Ethernet ports built in.

I bought this adapter off Amazon:

DSD TECH SH-U10 USB to RS485 Converter with CP2102 Chip

When connected, it presents itself as serial port /dev/ttyUSB0. We originally planned to use the socat program in Linux to listen for UDP traffic coming from Companion:

sudo socat -v UDP4-LISTEN:52381 open:/dev/ttyUSB0,raw,nonblock,waitlock=/tmp/s0.lock,echo=1,b9600,crnl

To actually send the UDP data, we’re using the Sony VISCA module already built into Companion. The Marshall cameras use the same protocol over RS-485.
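
Under the hood, serial VISCA framing is simple: each command begins with a header byte of 0x80 plus the camera ID and ends with an 0xFF terminator, which is why setting a unique ID at each camera lets them share one RS-485 bus. Here’s a small sketch of that framing (the helper name is mine, for illustration):

```javascript
// Build a raw serial VISCA packet: 0x80 + camera ID, payload bytes, 0xFF.
// buildViscaPacket is a hypothetical helper name, just for illustration.
function buildViscaPacket(cameraId, payload) {
  return Buffer.from([0x80 + cameraId, ...payload, 0xff]);
}

// The standard VISCA CAM_Power On payload is 01 04 00 02,
// so for camera 1 this yields the bytes 81 01 04 00 02 FF.
const powerOnCam1 = buildViscaPacket(1, [0x01, 0x04, 0x00, 0x02]);
```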

Using the socat method, we quickly found that it would only listen to UDP traffic coming from one instance of the module. We need 4 instances of the Companion module because we have 4 cameras, each with a different camera ID.

However, it’s nothing a small Node.js program can’t solve. So I wrote a program that opens the specified UDP port, opens the specified serial port, and sends any data received on that UDP port straight to the serial port. You just configure a new Companion instance for each camera with the IP of the Pi running the udp-to-serial program and the camera ID you configured at the Marshall camera.

Here’s a video that shows it all in action:

If you want to try this out for yourself, I’ve made the udp-to-serial repository available here:

http://github.com/josephdadams/udp-to-serial

Automated Printing of Google Documents using Google Apps Script, the DropBox API, and Automator Folder Actions

A couple of years ago, I shared a workflow that we still use to auto-generate documents we need each week. A few months ago, I shared another workflow showing how I automated printing our weekly Planning Center Online paperwork.

I decided recently that I was tired of still having to manually print these weekly “talking points” documents, while having my Planning Center paperwork fully automated. So, I took a few minutes and wrote a new Google Apps Script to help with this.

We print these every week. I was doing it manually, but not anymore!

Here is what the script does:

  • Searches a specific Google Drive folder for all subfolders with files that match today’s date (the script will run on a weekly trigger)
  • If the file is a match, it opens the file as a PDF and stores the binary contents in a variable
  • An upload request is made to the Dropbox API with that binary data and a file name
  • Dropbox saves the file into the “Automated Printing” folder
  • Dropbox then syncs the file to the local computer (Mac)
  • The local Mac is configured with a Folder Action that automatically prints any files placed in this folder
  • After the Automator Folder Action prints the file, it removes the file

Here’s how you set it up:

First, you want to create a new Dropbox “App”. Go to dropbox.com/developers and click “Create apps”.

Then, you need to fill out these fields:

  1. “Choose an API”: Scoped Access. It’s your only choice.
  2. “Choose the type of access you need”: I chose “Full Dropbox” because I already had a specific folder set up in the root of my Dropbox. If you’re setting up the Automator Folder action for the first time, you could probably keep the scope within “App folder”.
  3. “Name Your App”: Give it a meaningful name. It does have to be unique across all of Dropbox, for some reason, so if you get an error here, just add something unique to you.
  4. “Choose the Dropbox account that will own your app”: If you have personal/business accounts linked, you’ll need to choose the account that owns the app. I’m using a business account for this, so I chose that one.

On the next page, choose the “Permissions” tab.

Then give your app “files.content.write” access.

Now back on the Settings tab, generate a new Token and set the Expiration to “No expiration”.

This will generate a Token key which you will use within the Google Apps Script in the next steps.

Now in Google Drive, click “New”, go down to “More”, and choose “Google Apps Script”. Google Apps Script is essentially JavaScript, so it’s super easy to use.

You’ll want to give the project a helpful name, as it will be stored in your Google Drive this way.

Give your project a helpful name.

In the code section, paste in my script below:

/*
EDIT THESE VARIABLES FOR YOUR SETUP
*/
var accessToken = "token"; //Dropbox App Access Token
var rootFolder = "folderID"; // Google Drive Root Folder where these files live
var dropboxPath = "/Automated Printing/"; //Dropbox Folder Path to place file in
var numberOfCopies = 2; //the number of copies you want per file

//Nothing to edit below

function myFunction() {
  var dtDate = new Date();
  const monthNames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];
  var strDate = monthNames[dtDate.getMonth()] + " " + dtDate.getDate() + ", " + dtDate.getFullYear();
  var mainFolder = DriveApp.getFolderById(rootFolder);
  var subFolders = mainFolder.getFolders();
  while(subFolders.hasNext()) {
    var subFolder = subFolders.next();
    var files = subFolder.getFiles();
    while(files.hasNext()) {
      var file = files.next();
      var fileName = file.getName();
      if ((fileName.indexOf(strDate) > -1) && (fileName.indexOf(".pdf") == -1)) {
        //this is a file we want to print
        Logger.log("Generating PDF: " + file.getName());
        for (let i = 0; i < numberOfCopies; i++) {
          sendToDropbox(file.getName() + ".pdf", file.getAs('application/pdf'));
          Utilities.sleep(15000); // wait 15 seconds before the next copy so Dropbox can sync the file and Automator can print it, remove it, and close out
        }
      }
    }
  }
}

function sendToDropbox(fileName, fileBlob) {
  var parameters = {
    "path": dropboxPath + fileName,
    "mode": "add",
    "autorename": true,
    "mute": false,
    "strict_conflict": false
  };

  var headers = {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/octet-stream',
    'Dropbox-API-Arg': JSON.stringify(parameters)
  };

  var options = {
    "method": "POST",
    "headers": headers,
    "payload": fileBlob
  };

  var apiUrl = "https://content.dropboxapi.com/2/files/upload";
  var response = JSON.parse(UrlFetchApp.fetch(apiUrl, options).getContentText());
}

Now modify the top section to include your Dropbox access token (the one you generated earlier), the Google Drive folder ID (the folder ID is in the URL of the page when you open that folder in Google Drive), the Dropbox path to save to, and the number of copies you need for each matching document. In our case, I need 2 copies of each document.

I learned in testing that if Dropbox syncs the files too quickly while my Automator folder action is still running, the newly added files don’t get included in that run, and the folder action doesn’t re-run for them. So the script uploads a new PDF for every copy needed, but waits 15 seconds in between. That gives Google time to upload to Dropbox, Dropbox time to sync to my local Mac with the Automator action, and Automator time to run its script, print the file, and delete it. It’s not very efficient, but the files are not that large.

Now that your script is in place, you need to assign a trigger to it. Click “Triggers” on the left-hand side of the screen:

Add a new trigger. I used the following settings to have it run weekly on Sundays between 6 and 7am. Be sure to target the “myFunction” function as that’s the main one we are using.

You’ll need to create the folder action in Automator. Follow my previous post on how to do this, as the steps are the same. I didn’t have to change that at all!

Here’s a tutorial video if you learn better that way:

I hope this helps you think of ways to automate what you’re doing in Google Drive so you can spend more time on ministry and less on manual tasks!

Tally Arbiter 1.5 – New Features, Bug Fixes, and Support for More Tally Clients

It’s been a few months since a major release for Tally Arbiter, but I’ve been hard at work on it in my spare time. If you haven’t read about this software, you can read some past posts on this blog about it. It’s free, open-source camera tally lights software that I developed to help churches around the world put on better productions.

Today, Tally Arbiter 1.5 is officially released!

Here are some highlights of the release:

  • The GUI has been revamped to use internal socket.io calls throughout. The REST API still exists, but this approach is more streamlined.
  • A Generic TCP Device Action has been added: Now you can send a custom TCP string or command to another network location whenever a camera enters or exits Program on your switcher, for example.
  • vMix Tally Protocol Emulation: If you’ve got a favorite tally client designed specifically for vMix, now you can use it with Tally Arbiter! It’s very simple – Tally Arbiter presents itself as a vMix source. You can even use the emulated vMix connection from Tally Arbiter as a source in another Tally Arbiter install! (Not sure why you would, but you can!)
  • Devices can now have multiple addresses assigned from a single source. This helps if you really consider one Camera to be “on-air” whenever it’s used on Input 1 or Input 5 from the same source, for example.
  • Device Sources can now be linked on either the Preview Bus, the Program Bus, or both. This means that your Camera won’t be considered to be in Program unless it is in Program on ALL assigned sources. This is helpful for cases where you may have nested switchers.
  • Preview + Program mode added to the OSC Source Type
  • Some Device Source Addresses can now be chosen via a drop-down list instead of manually typed in
  • Blackmagic VideoHub (all models) added as a tally source. You can choose which destinations are considered “on-air” destinations for both the Preview and Program bus.
  • The Companion client now supports reassigning of tally listener clients as a button press. This is useful if you want to have a tally light at a shader station; you can press a button on your stream deck to route a camera to your shader monitor and simultaneously reassign the tally light at the shader monitor to that camera, and now you know if that camera is on-air as you shade it!
  • A “Test Mode” has been added that cycles through tally states to test all tally outputs. Very helpful when you’re not actively in a show but want to verify everything is working!
  • Support for Roland VR-50HD-MKII as a tally source
  • The Producer page can now send messages to supported tally clients like the Web tally and M5StickC. Don’t have an intercom system? Use the chat to tell your camera op to zoom in!
  • The M5StickC Plus is now officially supported. And M5Stick clients will now retain their last used Device when they reboot or reconnect.
  • The M5 Atom Matrix is now also supported.
  • Various other bug fixes and improvements

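
For the curious, the Videohub Ethernet protocol that makes this source type possible is plain text over TCP port 9990; the router sends “VIDEO OUTPUT ROUTING:” blocks made of zero-indexed “destination source” pairs. Here’s a sketch of how the on-air logic can work (illustrative only, not Tally Arbiter’s actual code):

```javascript
// Given one "VIDEO OUTPUT ROUTING:" text block from a Videohub and a list
// of destinations you consider "on-air", return the router sources that
// currently feed those destinations. Names are illustrative.
function onAirSources(block, onAirDestinations) {
  const sources = new Set();
  for (const line of block.split('\n')) {
    const m = line.match(/^(\d+) (\d+)$/); // "<destination> <source>"
    if (m && onAirDestinations.includes(Number(m[1]))) {
      sources.add(Number(m[2]));
    }
  }
  return [...sources];
}
```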
This release saw a lot more interaction from the community through Github issues (feature requests and bug reports), pull requests, and other contributions. It’s truly becoming a community project, which is awesome to see!

Here’s a video to show most of this in action:

As always, you can get the latest code on Github: https://github.com/josephdadams/TallyArbiter

And if someone you know needs tally lights for their production, tell them to go check out tallyarbiter.com!

Using A Third Party Streaming Service For Live Streaming Workflows and Communication

I don’t do this often but wanted to share briefly about a service we have been using since last March, Restream. I’ll clarify right now that I am not being compensated to say this, but the workflow has value for other churches, which is why I’m sharing.

When we started streaming our full service online last March (like practically every church around the globe with the capability), I wanted us to send our content not just to YouTube but also to Facebook. The goal: reach people where they are, on the platforms they already use. For our church, that’s YouTube and Facebook.

We currently encode and live stream using OBS Studio at our church. Our internet connection is pretty good, and we don’t typically have bandwidth issues when it comes to sending data out. However, without a third-party streaming tool, sending to two platforms at the same time would double our outbound bandwidth.

So, for us, enter Restream. We send our live stream to them, and then they relay it to YouTube and Facebook on our behalf. The delay is minimal. They also support a variety of other platforms as well, which we don’t use.

I got this from their website. You get the idea.

We “start streaming” in OBS, which sends our feed to Restream, and then our moderators can switch on the stream for each connected social media platform when they are ready to go live. They do this remotely/off-site, which is great in these times that call for distancing. All of the API/connection data is stored in Restream, so I don’t have to give out admin logins or privileges to any moderators directly.

When we started streaming our services online, I wanted to create environments where people felt welcome to chat and share their prayer needs, stories, and just feel connected to others. I also wanted to be able to easily share sermon content and notes for people to help them in their application of scripture as they listen.

We initially had multiple moderators, 1-2 people “watching” the chats and comments on each social platform (YouTube and Facebook, currently), but then we started using Restream’s built-in Chat feature. This aggregates all of the chat data into a single interface, which allows one person to respond individually per platform, or post to all platforms at once, depending on the need.

Here’s a screenshot of the Restream Chat interface.

Overall, this workflow has really helped us serve more people with fewer staff and volunteers. The ability to turn each stream on and off independently of what the production team is doing on-site is very helpful, and having all of the chat in one place means we don’t have to monitor it on each platform.

If you’re looking to stream to more than one platform, check out Restream! We pay something like $20 a month, and it’s well worth it. Here is my referral link: https://restream.io/join/2Nyvv

Automating Lights, Sending Advanced MIDI Messages, HTTP Requests, and More through ProPresenter Slide Notation and the new PresentationBridge Client

A couple of years ago, I wrote about the real-time lyrics sharing software I created to help us be able to send lyrics from ProPresenter straight to people’s phones and tablets at our outdoor worship night. Since then, we have not used this software too much, but I have helped countless other churches get it going for them, especially in this era of doing church differently in a pandemic. Many churches have found this free software valuable so that they could share worship lyrics and other messages while doing outdoor or distanced services.

Now, I have an update!

I have created a client-side app that runs in the system tray to facilitate the connection to ProPresenter and send the lyrics to the cloud server. It also supports several unique “slide notations” that allow you to automate nearly everything just by having a ProPresenter operator click on a slide.

An example of slide notation that PresentationBridge Client supports.

These slide notations are interpreted by the PresentationBridge Client software and are triggered when they are a part of the current slide.

The PresentationBridge Client interface.

The software can also detect instances of ProPresenter (and midi-relay!) running on your network to make it easier to get connected. It supports sending all of the MIDI voice messages that midi-relay supports, as well as a custom shortcode for Chroma-Q Vista, which requires MIDI Show Control in order to remotely execute specific cues on specific cuelists. It can also send HTTP GET/POST requests, and it can virtually press a Companion button on a remote instance of Companion. This means you can do just about anything automatically, just by clicking on a slide.
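
The general pattern behind this is worth sketching: scan the current slide’s notes for bracketed tokens and dispatch each one to a handler. The token syntax and handler names below are made up for illustration; they are not PresentationBridge Client’s actual notation:

```javascript
// Hypothetical slide-notation dispatcher. The [TYPE:argument] token syntax
// and the handler names are illustrative, not the real client's format.
const handlers = {
  HTTP: (arg) => console.log('would send HTTP request to', arg),
  MIDI: (arg) => console.log('would send MIDI message', arg),
};

function runSlideNotations(slideNotes) {
  const ran = [];
  const re = /\[([A-Z]+):([^\]]+)\]/g; // match tokens like [HTTP:url]
  let m;
  while ((m = re.exec(slideNotes)) !== null) {
    const [, type, arg] = m;
    if (handlers[type]) {
      handlers[type](arg); // fire the action for this notation
      ran.push(type);
    }
  }
  return ran;
}
```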

We had a chance to use it in our outdoor worship night back in October, and it worked great! I was making tweaks to it in real-time as people were using it.

We used the new PresentationBridge Client at our outdoor night of worship and it worked very well.

Here’s a video that shows it in action:

This project will be released open-source at some point, but currently I am looking for a few testers to give their feedback. If you’d like to be considered, please reach out to me via the contact form and I will be in touch.

Tally Arbiter Version 1.4, with multiple ME support for ATEM, a new Bootstrap framework, and more!

It took long enough, but version 1.4 of my Tally Arbiter software is released!

Here are some of the highlights:

First of all, the entire web interface has been rewritten to support the use of the Bootstrap framework. This makes it a lot easier to use, and it looks much better too! Huge shoutout to Matthijs (mg-1999) for taking the time to do this. The bulk of the Bootstrap rewrite is his work.

This is the new interface!

Secondly, Tally Arbiter can now monitor multiple MEs on ATEM sources, whereas previous versions could only monitor ME 1. This is great if you have more busses that you need to track for on-air sources! If a source goes into preview or program on the MEs you’ve selected to monitor, and a Device is associated with an address on that source, it will show up.

It’s easy to choose which MEs will be “on air”.

Third, the Blink(1) and GPO listener clients have been updated to support automatic attempts to reconnect to the Tally Arbiter server, if they start up and the server is offline. This will make it even easier to keep everything connected!

Also, support for Analog Way LiveCore devices has been added, thanks to Alberto (albertorighetto)! So if you have one of those devices, go check it out!

If you use the phone tally viewing page (a very convenient way to turn any phone or tablet into a tally light!), Tally Arbiter has two updates: a QR code has been added to the home page, making it easier for users to navigate to your Tally Arbiter server, especially if you’re running locally and you want to avoid typing in IP address information. Also, your device (if supported) will vibrate or pulse when the selected Device for tally goes into preview or program.

A QR code is automatically generated to facilitate phones and tablets connecting to your Tally Arbiter server.

Lastly, there were a few bug fixes here and there to help with performance.

Here’s a video to show it in action!

You can get the latest source code at the Github repo: https://github.com/josephdadams/TallyArbiter

Using Cronicle, the Planning Center Online API, and Automator on a Mac to Automate Printing Weekly Paperwork

In my never-ending quest to automate anything I ever have to do more than once, I thought that it might be nice if I could have my paperwork/custom reports that I manually print out every Sunday to print out automatically for me. I do the same thing every week – open Matrix view, select the next plan of each service type, click Print, and choose my report.

I’ve written about and shared my PCO custom reports before. I’ve also shared about how Planning Center makes a robust API available to get data and information about your plans.

So, I whipped up a new Cronicle plugin that does the following:

  • Accepts a PCO AppID and Secret Key
  • Accepts a list of PCO Service Type IDs, delimited by semicolons
  • Accepts the PCO Matrix Custom Report ID and printing parameters (page size, print orientation, print margin)
  • Loops through the list of provided service types and determines the “next plan id” of each service type and adds that to a list. For us, that’s the next plan in Auditorium 1, and the next plan in Auditorium 2.
  • Then it builds the URL to generate the PDF just like PCO would do within the browser.

Then, the plugin sends a TCP message to a computer running VICREO Listener to open the URL, which generates the PDF. VICREO Listener is a free program used to send hotkeys remotely to other computers, but it can also execute files and shell scripts. In this case, it receives a command to open the URL in Safari. Safari automatically downloads the PDF to a folder I have in Dropbox called “Automated Printing”. I do have to keep Safari logged into my PCO account for this to work. I chose Safari for this task because it’s a browser that’s already installed and that I don’t often use, so it’s fine to have all of its downloads automatically go to that Dropbox folder.

Lastly, I made a Folder Action in Automator. If you haven’t heard of Automator for macOS before, I strongly suggest checking it out. It can do so much; I’ve used it for all kinds of things. This folder action watches for new files in that “Automated Printing” folder, filters out any newly added files that aren’t PDFs (just in case something else gets put in there by accident), prints the added files to the default printer, and then deletes them 5 seconds later. I don’t need to keep them anyway.

Here’s a video of the whole plugin in action:

You can get this plugin from my Github repository, https://github.com/josephdadams/CroniclePlugins

Additional Cronicle Plugins

A few weeks back, I shared about how I am using a Chromebox running Node.js to run an automated scheduling server to control our production equipment. It uses the open-source project, Cronicle, to do this.

Since sharing that post, I’ve created a couple more plugins:

  • This one is probably obvious, but I’ve made a plugin for Companion. It accepts the IP of the computer running Companion, the TCP listening port (51234), and the page and button you want to press.
  • The second plugin I’ve made is to control TP-Link HS100 wifi outlets. We have a lot of these around here, and we use them to turn equipment on and off remotely. I did some work a while back to determine the protocol necessary to control them over the network without having to use the TP-Link app. Now we can do this through Cronicle, which makes it super easy to automate turning equipment on and off for events.
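
For reference, the protocol in question (as reverse-engineered by the community) is JSON sent over TCP port 9999, obfuscated with a simple XOR “autokey” cipher seeded with 171, where each byte is keyed on the previous ciphertext byte. A sketch of the cipher:

```javascript
// TP-Link smart plug "autokey" XOR cipher: seed key 171, then each byte is
// XORed and the key becomes the previous ciphertext byte. Commands are JSON
// strings sent over TCP port 9999.
function encrypt(json) {
  let key = 171;
  const out = Buffer.alloc(json.length);
  for (let i = 0; i < json.length; i++) {
    out[i] = json.charCodeAt(i) ^ key;
    key = out[i]; // autokey: next byte keyed on this ciphertext byte
  }
  return out;
}

function decrypt(buf) {
  let key = 171;
  let out = '';
  for (let i = 0; i < buf.length; i++) {
    out += String.fromCharCode(buf[i] ^ key);
    key = buf[i];
  }
  return out;
}

// The command that switches the relay on:
const onCmd = JSON.stringify({ system: { set_relay_state: { state: 1 } } });
```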

These are available in the Github repository, so go check them out: http://github.com/josephdadams/cronicleplugins/

Tally Arbiter 1.3 – Support for Sending Tally Data to the Cloud, Feedback/Control on a Stream Deck, and Tally Output on an M5StickC Arduino

If you haven’t read about my Tally Arbiter project, you can read about it here and here. Today I’m excited to release version 1.3 which offers some exciting new features!

First, Tally Arbiter Cloud! Now you can send tally data from your local instance of Tally Arbiter to a server in the cloud. Anyone can connect to your cloud server without having to tunnel into your private production network. And, if you are doing remote production with switchers in multiple physical locations or networks, each location can run an instance of Tally Arbiter and the cloud server can aggregate all of the data together in real time! All you need in order to make a connection is a Cloud Key provided by the local client that matches on the server. Keys can be made and revoked at any time.

I’ve set up an Amazon EC2 instance running Ubuntu, with Tally Arbiter running on it. I set a cloud key and set up a cloud destination on my local server to send the data to the server running on EC2. Now, I can log into my EC2 server’s Tally Arbiter web interface and view the tally data from anywhere without having to VPN to the church network. This will make it easy for volunteers to use their personal phones to view tally without having to be in the private network.

Here is a video to show it in action:

Second, Feedbacks and Control through Bitfocus Companion on your stream deck! Companion 2.1 is out now, and if you run the latest build, you can use the new “TechMinistry Tally Arbiter” module to view live tally data from Tally Arbiter on any button on your stream deck. It also supports the ability to “flash” any connected listener client.

Third, a new tally listener client – the M5StickC! This is an inexpensive Arduino-compatible ESP32 “finger computer”. A friend of mine in the UK recommended it for a possible integration with the project. I bought mine off Amazon for $20, but you can buy them directly from the manufacturer for less than $10. It is a portable, easy-to-use, open-source IoT development board.

Programming this thing was fun because the code is all in C++, which I haven’t used since high school. The power of websockets and the socket.io protocol means that this microcontroller can connect to my Tally Arbiter server and communicate the same way any of the other listener clients do.

Here’s a video to show how it works and how to program one:

Version 1.3 of Tally Arbiter also comes with some other perhaps less exciting but still helpful updates:

  • The Settings, REST API, and Producer pages now require a Basic Auth username/password to access.
  • On the Settings or Producer page, if you mouse over the preview and program boxes, Tally Arbiter will show you which sources currently have that device in that bus.
  • The Settings page will now show the number of device sources and device actions assigned to each device in the list.
  • Sources will now attempt to auto-reconnect if the connection is lost, retrying up to 5 times.

Lastly, I’ve set up a website for this project to help others who want to share about it. You can access it at: http://www.tallyarbiter.com

You can get the source code for Tally Arbiter and the listener clients from the Github repository: http://github.com/josephdadams/tallyarbiter

100% free and ready for you to use!

My hope is that this project enables churches and any organization that needs tally for their productions to attain it at a lower cost. I’ve put a lot of hours into developing this free software. If it is helpful to you, please let me know!