Controlling Planning Center LIVE with a Stream Deck, with timers and other variables

If you use Companion and are in tech ministry, you have probably used my PCO Services LIVE module. While in the process of converting this module to the new API we are using for Companion 3.0, I gave it an overhaul and added lots of new features!

Here is a video that shows how it works:

Go check it out for yourself!

midi-relay v3.0 is here – as an Electron app for Mac and Windows!

I recently decided to give midi-relay some love, since person after person has asked me to make it an easier-to-run app rather than something that requires setting up a Node.js runtime.

When I originally created midi-relay, I designed it to run on every OS, especially the Raspberry Pi platform. Thousands of people use it all over the world for all kinds of stuff. Probably because it’s free. 🙂

This software is designed to accept a JSON object via its API and then turn that object into a MIDI command and send it out a local MIDI port. It allows for remote control of a lot of systems by sending the command over a simple network protocol.
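For illustration, here’s roughly what a request to midi-relay might look like from Node.js (using the global fetch available in Node 18+). The URL, port, and JSON property names below are my assumptions for the sake of the example; check the midi-relay documentation for the actual API.

// Minimal sketch: sending a Note On message to midi-relay over HTTP.
// The URL, port, and property names here are assumptions for illustration;
// consult the midi-relay docs for the real API.
const payload = {
  midiport: 'IAC Driver Bus 1', // local MIDI port to send out of
  midicommand: 'noteon',
  channel: 0,
  note: 60,
  velocity: 127
};

fetch('http://192.168.1.50:4000/sendmidi', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload)
})
  .then((res) => res.text())
  .then((text) => console.log('midi-relay response:', text))
  .catch((err) => console.error(err));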

Now it’s even easier to use.

It runs in the system tray for easy access.

Some new features include:

  • a new socket.io API for bi-directional communication
  • a virtual MIDI port, for loopback uses
  • an upgraded Bitfocus Companion v3 module
  • the ability to disable remote control, if needed

So if you’re a midi-relay user and you want an easy way to run this on your Mac or Windows desktop, go check out the latest release!

If using my software makes your life easier, please consider supporting my family.

Thanks!

Using a Nano Pi, a POE splitter, and a custom project box to create a mobile UDP to RS485/VISCA shading rig

A while back, I wrote about how we created a network-based VISCA shading rig for the Marshall CV503 cameras we use on stage, to control their various exposure settings. The cameras themselves can only be adjusted via RS485 serial, so our system sends UDP from Bitfocus Companion (we use the Sony VISCA module/connection) over the network, converts it to serial at a Raspberry Pi, and then, using custom cables, sends the signal to our cameras over the patchbay.

We’ve been using that system ever since and it works great. We have even recently taken the steps to create custom cable looms that have SDI, CAT6, and power all in one loom to make it a breeze to set up.

Recently, we set up one of these cameras at the back of our auditorium where it’s impractical to run a cable all the way to our patchbay in the rack room at the stage side for a serial connection. We still need to control the exposure, so a solution was needed.

It’s also impractical these days to buy a Raspberry Pi; they have gotten quite expensive and are difficult to find in stock.

A few months ago, I bought a Nano Pi NEO and started playing around with it to see what it could do, since it’s easy to get ahold of and very affordable.

This is the Nano Pi NEO single board computer.

It has an ethernet port, a full-size USB-A port, and is powered via micro USB. It runs Armbian quite well, so it was very simple to install my existing udp-to-serial Node.js script.

I bought a project box and modified it to fit all the parts. I started with a Dremel, but I should have just used a hacksaw from the beginning, because that gave me much cleaner cuts. I didn’t want to do any soldering or make custom internal cables, so my box had to be a little larger.

The entire rig is powered by a single POE to USB adapter. This provides the ethernet data to the Nano Pi, and then micro USB power to the Nano Pi’s power port. I also figured out a while back that you can use a USB 5V to 12V step-up cable to power these cameras, so I put one of those in the box as well.

POE to USB adapter, RS485 cable, and two keystone jacks for serial out. Blue/White-Blue pins for +/-.

We put RJ45 keystone jacks on the box to provide the serial out connections, and we also hot glued the POE to USB adapter to the lid of the box so the connection could be flush with the edge.

It’s certainly crammed in there! The Nano Pi is glued to the bottom, and the rest of the cables are tucked into the box: the USB splitter, the USB to RS485 adapter, and the USB 5V to 12V DC cable.

Here are the parts I used:

  • Nano Pi Neo
  • POE to USB adapter – to pass network to the Nano Pi and to give USB power
  • USB 5v to 12v DC step-up adapter – to power the Marshall CV503 instead of using the stock camera power supply
  • USB splitter cable – to split the POE USB power to both the Nano Pi and the step-up cable that powers the camera
  • Micro USB cable – to power the Nano Pi
  • USB to RS485 adapter – this is what sends the received UDP data out to serial
  • Keystone jacks – used for the serial connections. We then have custom RJ45 to Phoenix connectors that plug into the cameras. This method allows us to use any standard CAT5/6 patch cable to make the connections in between.
  • Project box to hold it all

These are Amazon purchase links. As an Amazon Associate I earn from qualifying purchases.

One single POE connection provides all the power and data needed.

Overall, I’m pretty pleased with how it turned out! I like that it’s just two cables – one for the SDI video signal off the camera, and one ethernet to power it all and provide the data connection.

What project ideas do you have for a Nano Pi?

A new Planning Center Online Services Custom Report, supporting Split Teams

One of the first blog posts here was about PCO’s custom reports. I’ve written a lot of them and helped a lot of churches get started with their own.

In anticipation of a possible need for split teams, I’ve now created a new custom report that has several customizable features, enhanced checklists, dynamic notes, and more, without having to write any actual code. You just modify variables at the top of the report.

This new report supports the following:

  • Customizable header
  • Custom print order, with variable plan items as columns and/or rows alongside the plan item description
  • Dynamic checklists
  • Automatic highlighting of Plan Item Note changes to signify important information
  • Ability to display Plan Notes for everyone, by team, or by position
  • Custom CSS for your own unique look
  • Ability to show headers in their own row, or inline to save space

Here’s the report with Headers as their own rows.

Here’s the exact same report, but with headers inline for a cleaner look.

Here’s a video that shows how it all works:

Because of the substantial amount of work I have put into creating and coding this report, I have chosen to make it available for purchase. I’m pricing it at $45, a point that should be affordable for most churches. Once payment is received, I will send over the report code and help you install it, if needed.

PCO Services Matrix Report with Split Teams, Fully Customizable

This custom report will revolutionize the way you share information with your team! Report code will be sent to the email address provided once payment is received.

$45.00

Click here to purchase.

If you have a need for a custom report beyond this, contact me! I’m always available for hire for your custom PCO reporting projects, or whatever other custom coding needs your ministry or organization may have.

Tally Arbiter 2.0 now available!

About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.

It’s now in use in hundreds of places, from churches to event venues to sports stadiums.

Version 2.0 was silently released a few weeks ago. It includes a compiled application that runs natively on Windows, macOS, and Linux, without the need to install Node.js and other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.

Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.

It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.

Someone from the community designed a new logo! Isn’t it nice?

A few features to note since version 1.5:

  • An entirely new user interface and native applications for Windows, macOS, and Linux
  • Easily installed from the command line via npm or a new Docker image
  • 1-second updates function for TSL Clients (provides compatibility with certain tally products like Cuebi)
  • Recording/Streaming statuses for OBS and VMix now available for tally states
  • Generic TCP Device Action improvements
  • TSL 5.0 source support
  • New Ross Carbonite source type to monitor any bus regardless of the “on air” settings
  • Web tally page can now be loaded directly by Device Id, and chat can be disabled
  • Pimoroni Blinkt! Listener Client
  • TTGO_T Display Listener Client
  • Improved Outgoing Webhooks – support for https and content-type selections
  • Roland Smart Tally emulation for use with STAC
  • Panasonic AV-HS10 support
  • Support for ATEM super sources in tally states
  • Bug fixes and performance improvements

If you’re new to Tally Arbiter, go check it out! You can also join the new Facebook user group here: https://www.facebook.com/groups/tallyarbiter

And to everyone in the community who has helped to make TA what it is, thank you! Your contributions are helping everyone.

PresentationBridge Client now in public release!

I shared back in the fall about my new Presentation Bridge Client software. Since that post, the software has been in a private testing period as I was getting feedback from users. And now, thanks to some help from the community, it’s ready to release!

My hope is that this software will help you be more efficient in your tech ministry, especially when you need to do a lot of things without a lot of people.

Go check it out! And, as always, feedback and contributions are welcome.

You can get the latest release here: https://github.com/josephdadams/presentationbridge-client/releases/latest

Controlling a Canon XF series camera using a Stream Deck and Companion by reverse-engineering the Canon Browser Remote

It’s been a while since I posted! Earlier in the year, we had a few unexpected expenses come up in our family, so I started spending my spare time in the evenings doing custom freelance programming to help meet those needs. I have been doing this for a few months now, and it has helped us out.

God continues to bring new visitors to this blog and I have been able to return emails, phone calls, Zooms, and help so many people implement the ideas and software that I’ve created here. It is truly a blessing to see how God has used this little blog I started a few years ago.

I’m excited to share a new project that I have been working on with my team: control of our Canon XF cameras through a Stream Deck. We have a couple of these cameras here at my church, the Canon XF705:

I have been mentoring the guys who work part-time in A/V here with me on how to write code, and specifically how to write modules for the Companion project that we use so heavily. We decided it would be great to have control of these particular cameras at our shader station, alongside the shader control of our Marshall cameras (I wrote about that here) and our broadcast cameras.

These Canon cameras come with a LAN port (you can also use Wi-Fi) and run a little web server called Browser Remote, which gives you full control of all the camera functions, from focus/zoom/iris/gain all the way to recording, white balance, and shutter control. If there’s a button on the camera, chances are you can control it from Browser Remote. You can even see a live preview of the camera!

The built-in Browser Remote functions of the Canon XF series.

So we started doing some digging and realized that there is an internal API on the camera that returns a lot of its data as simple JSON. Once you initiate a login request to the camera, it returns an authentication token, which must be sent along with every future request.

For feedback on the camera state, we simply poll the camera every second or so. The Browser Remote page itself seems to do this as well, so we just emulated that.
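As a rough sketch of that login-then-poll pattern in Node.js (the URL paths and response fields below are placeholders I made up for illustration, not the camera’s actual endpoints):

// Rough sketch of the login-then-poll pattern.
// The paths ("/api/login", "/api/status") and response fields are placeholders,
// not Canon's real Browser Remote API.
const CAMERA = 'http://192.168.1.60';
let authToken = null;

async function login() {
  const res = await fetch(`${CAMERA}/api/login`); // placeholder path
  const data = await res.json();
  authToken = data.token; // the token must accompany every later request
}

async function pollStatus() {
  const res = await fetch(`${CAMERA}/api/status?token=${authToken}`); // placeholder path
  const status = await res.json();
  // update Companion feedbacks (iris, gain, recording state, etc.) from "status"
  console.log(status);
}

login().then(() => setInterval(pollStatus, 1000)); // poll about once a second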

The Browser Remote unfortunately only allows one user at a time to be logged in, so when our Companion module is in use, the actual Browser Remote page can’t be used. But for our purposes, that’s not really an issue, since we just want button control of the iris/gain functions when we use these cameras during live services. Now I don’t have to ask my operators to iris up or down; I can just do it right from the Stream Deck!

Here’s a little walkthrough video that shows the module in action:

The module will soon be a part of the Companion beta builds, so if you have a Canon XF series camera, go check it out!

Using a Stream Deck and a Raspberry Pi to create a remote control panel to adjust Marshall cameras over IP with RS-485 control

At my church, we have 4 of these cameras: Marshall CV503

Marshall CV503 Miniature Camera

We use them during services to capture shots of the instruments (drums, keys, etc.) and whatever is happening on stage. They are great little action-style cameras, and they have SDI out on them so they are super easy to integrate into our video system.

They have a lot of adjustment options available via a local joystick-style controller at the camera, but obviously that’s challenging to use during a service if we need to adjust the camera’s exposure. The menu is an on-screen display that shows up on the live output. Plus, the cameras are all over the stage, and we can’t walk out there during the service!

While I wish they were IP-controllable directly, this particular model does not have that option. They do, however, come with RS-485 serial connectors.

So we decided to create a remote shading system using a Stream Deck and Bitfocus Companion. The Marshall cameras support the VISCA protocol over RS-485. In fact, if you’re a Windows user, Marshall provides free software to control the cameras over RS-485.

Marshall provides this program for control, if you have Windows and want to connect your cameras directly to that computer.

We don’t use a lot of Windows computers around here, and that program requires that the computer running their software be the one physically connected to the cameras via serial. Not ideal for us because the cameras are on a stage and our computers typically are not. Marshall also actually makes a nice hardware RCP – but we didn’t want to pay for that.

So we did what you probably already guessed – put in a Raspberry Pi with a USB to RS-485 adapter that we could control remotely.

We have several wallplates across the stage with network tie lines on them that feed back to a patchbay in the rack room. So we made cables that connect to the RS-485 ports at each camera and then go back to a wall plate into an RJ45 port. We utilized the blue/white-blue pair on the CAT6 cable. We used that pair because these are data pins in a normal network connection, which means if someone ever accidentally connected it straight to a switch or something, there would not be any unintended voltage hitting the cameras.

Each camera is set to its own camera ID (1-4), and the matching baud rate of 9600 (the default). Then in the rack room, we made a custom loom to take the 4 connections and bring them into a jack, which then feeds into the USB to RS-485 adapter on the Pi.

The Pi is a Raspberry Pi 4 with 4GB of RAM. Honestly, for what this thing is doing, we probably could have run it off of a Pi Zero, but I wanted it hardwired to my network, and the bigger Pis come with ethernet ports built in.

I bought this adapter off Amazon:

DSD TECH SH-U10 USB to RS485 Converter with CP2102 Chip

When connected, it shows up as serial port /dev/ttyUSB0. We originally planned to use the socat program in Linux to listen for UDP traffic coming from Companion:

sudo socat -v UDP4-LISTEN:52381 open:/dev/ttyUSB0,raw,nonblock,waitlock=/tmp/s0.lock,echo=1,b9600,crnl

To actually send the UDP data, we’re using the Sony VISCA module already built into Companion. The Marshall cameras use the same protocol over RS-485.
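For reference, a standard VISCA serial command is just a short byte packet: the first byte is 0x80 plus the camera ID, and every packet ends with 0xFF. As a sketch (using the iris command from the standard Sony VISCA command set; verify the exact bytes against Marshall’s documentation):

// Standard VISCA "Iris Up" command addressed to camera ID 2, as raw bytes.
// First byte = 0x80 + camera ID; every VISCA packet terminates with 0xFF.
// (Command bytes are from the standard Sony VISCA spec; double-check against
// Marshall's own documentation.)
const irisUpCam2 = Buffer.from([0x82, 0x01, 0x04, 0x0b, 0x02, 0xff]);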

Using the socat method, we quickly found that it would only listen to UDP traffic coming from one instance of the module. We need 4 instances of the Companion module because we have 4 cameras, each with a different camera ID.

However, that’s nothing a small Node.js program can’t solve. So I wrote a program that opens the specified UDP port, opens the specified serial port, and sends any data received on that UDP port straight to the serial port. You just configure a new instance in Companion for each camera, all pointed at the IP of the Pi running the udp-to-serial program, each with the camera ID that you configured at the Marshall camera.
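The real program is in the repository linked below, but the core idea fits in a few lines. Here’s a minimal sketch of the UDP-to-serial bridge, assuming Node’s built-in dgram module and the serialport npm package (the port number and device path are just examples):

// Minimal sketch of a UDP-to-serial bridge; the full version is in the
// udp-to-serial repo linked below. Assumes the "serialport" npm package (v10+).
const dgram = require('dgram');
const { SerialPort } = require('serialport');

const UDP_PORT = 52381; // port the Companion VISCA instances send to
const serial = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 });

const server = dgram.createSocket('udp4');

server.on('message', (msg) => {
  serial.write(msg); // forward whatever arrives over UDP straight out the RS-485 adapter
});

server.bind(UDP_PORT, () => {
  console.log('Listening for UDP on port ' + UDP_PORT + ', relaying to serial');
});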

Here’s a video that shows it all in action:

If you want to try this out for yourself, I’ve made the udp-to-serial repository available here:

http://github.com/josephdadams/udp-to-serial

Automated Printing of Google Documents using Google Apps Script, the Dropbox API, and Automator Folder Actions

A couple of years ago, I shared a workflow that we still use to auto-generate documents each week. A few months ago, I shared another workflow that showed how I automated printing our weekly Planning Center Online paperwork.

I decided recently that I was tired of still having to manually print these weekly “talking points” documents, while having my Planning Center paperwork fully automated. So, I took a few minutes and wrote a new Google Apps Script to help with this.

We print these every week. I was doing it manually, but not anymore!

Here is what the script does:

  • Searches a specific Google Drive folder for all subfolders with files that match today’s date (the script will run on a weekly trigger)
  • If the file is a match, it opens the file as a PDF and stores the binary contents in a variable
  • An upload request is made to the Dropbox API with that binary data and a file name
  • Dropbox saves the file into the “Automated Printing” folder
  • Dropbox then syncs the file to the local computer (Mac)
  • The local Mac is configured with a Folder Action that automatically prints any files placed in this folder
  • After the Automator Folder Action prints the file, it removes the file

Here’s how you set it up:

First, you want to create a new Dropbox “App”. Go to dropbox.com/developers and click “Create apps”.

Then, you need to fill out these fields:

  1. “Choose an API”: Scoped Access. It’s your only choice.
  2. “Choose the type of access you need”: I chose “Full Dropbox” because I already had a specific folder set up in the root of my Dropbox. If you’re setting up the Automator Folder action for the first time, you could probably keep the scope within “App folder”.
  3. “Name Your App”: Give it a meaningful name. It does have to be unique across all of Dropbox, for some reason, so if you get an error here, just add something unique to you.
  4. “Choose the Dropbox account that will own your app”: If you have personal/business accounts linked, you’ll need to choose the account that owns the app. I’m using a business account for this, so I chose that one.

On the next page, choose the “Permissions” tab.

Then give your app “files.content.write” access.

Now back on the Settings tab, generate a new Token and set the Expiration to “No expiration”.

This will generate a Token key which you will use within the Google Apps Script in the next steps.

Now in Google Drive, click “New”, go down to “More”, and choose “Google Apps Script”. Google Apps Script is essentially JavaScript, so it’s super easy to use.

You’ll want to give the project a helpful name, as it will be stored in your Google Drive this way.

Give your project a helpful name.

In the code section, paste in my script below:

/*
EDIT THESE VARIABLES FOR YOUR SETUP
*/
var accessToken = "token"; //Dropbox App Access Token
var rootFolder = "folderID"; // Google Drive Root Folder where these files live
var dropboxPath = "/Automated Printing/"; //Dropbox Folder Path to place file in
var numberOfCopies = 2; //the number of copies you want per file

//Nothing to edit below

function myFunction() {
  var dtDate = new Date();
  const monthNames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];
  var strDate = monthNames[dtDate.getMonth()] + " " + dtDate.getDate() + ", " + dtDate.getFullYear();
  var mainFolder = DriveApp.getFolderById(rootFolder);
  var subFolders = mainFolder.getFolders();
  while(subFolders.hasNext()) {
    var subFolder = subFolders.next();
    var files = subFolder.getFiles();
    while(files.hasNext()) {
      var file = files.next();
      var fileName = file.getName();
      if ((fileName.indexOf(strDate) > -1) && (fileName.indexOf(".pdf") == -1)) {
        //this is a file we want to print
        Logger.log("Generating PDF: " + file.getName());
        for (let i = 0; i < numberOfCopies; i++) {
          sendToDropbox(file.getName() + ".pdf", file.getAs('application/pdf'));
          Utilities.sleep(15000); // wait 15 seconds before sending the next copy, so that Dropbox has time to sync the file, Automator can print it, remove it, and close out
        }
      }
    }
  }
}

function sendToDropbox(fileName, fileBlob) {
  var parameters = {
    "path": dropboxPath + fileName,
    "mode": "add",
    "autorename": true,
    "mute": false,
    "strict_conflict": false
  };

  var headers = {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/octet-stream',
    'Dropbox-API-Arg': JSON.stringify(parameters)
  };

  var options = {
    "method": "POST",
    "headers": headers,
    "payload": fileBlob
  };

  var apiUrl = "https://content.dropboxapi.com/2/files/upload";
  var response = JSON.parse(UrlFetchApp.fetch(apiUrl, options).getContentText());
}

Now modify the top section to include your Dropbox access token (the one you generated earlier), the Google Drive folder ID (the folder ID is in the URL of the page when you open that folder in Google Drive), the Dropbox path to save to, and the number of copies you need for each matching document. In our case, I need 2 copies of each document.

I learned in testing that if Dropbox syncs the files too quickly while my Automator folder action is still running, the newly added files don’t get included in that run, and the folder action doesn’t re-run for them. So this script uploads a new PDF for every copy needed, but waits 15 seconds in between. That gives Google time to upload to Dropbox, Dropbox time to sync to my local Mac with the Automator action, and Automator time to run its script, print the file, and delete it. It’s not very efficient, but the files are not that large.

Now that your script is in place, you need to assign a trigger to it. Click “Triggers” on the left-hand side of the screen:

Add a new trigger. I used the following settings to have it run weekly on Sundays between 6 and 7am. Be sure to target the “myFunction” function as that’s the main one we are using.
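If you’d rather create the trigger from code instead of through the Triggers UI, a one-time helper function like this should also work (the hour is just an example):

// Optional: create the weekly trigger from code instead of the Triggers UI.
// Run this function once from the script editor.
function createWeeklyTrigger() {
  ScriptApp.newTrigger('myFunction')
    .timeBased()
    .onWeekDay(ScriptApp.WeekDay.SUNDAY)
    .atHour(6) // runs sometime between 6 and 7am
    .create();
}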

You’ll need to create the folder action in Automator. Follow my previous post on how to do this, as the steps are the same. I didn’t have to change that at all!

Here’s a tutorial video if you learn better that way:

I hope this helps you think of ways to automate what you’re doing in Google Drive so you can spend more time on ministry and less on manual tasks!

Automating Lights, Sending Advanced MIDI Messages, HTTP Requests, and More through ProPresenter Slide Notation and the new PresentationBridge Client

A couple of years ago, I wrote about the real-time lyrics sharing software I created to help us be able to send lyrics from ProPresenter straight to people’s phones and tablets at our outdoor worship night. Since then, we have not used this software too much, but I have helped countless other churches get it going for them, especially in this era of doing church differently in a pandemic. Many churches have found this free software valuable so that they could share worship lyrics and other messages while doing outdoor or distanced services.

Now, I have an update!

I have created a client-side app that runs in the system tray to facilitate the connection to ProPresenter and send the lyrics to the cloud server. It also supports several unique “slide notations” that allow you to automate nearly everything just by having a ProPresenter operator click on a slide.

An example of slide notation that PresentationBridge Client supports.

These slide notations are interpreted by the PresentationBridge Client software and are triggered when they are a part of the current slide.

The PresentationBridge Client interface.

The software can also detect instances of ProPresenter (and midi-relay!) running on your network to make it easier to get connected. It supports sending all of the MIDI voice messages that midi-relay supports, as well as a custom shortcode for Chroma-Q Vista, which requires MIDI Show Control in order to remotely execute specific cues on specific cuelists. It can also send HTTP GET/POST requests, and it can virtually press a button on your remote instance of Companion. This means you can do just about anything automatically, just by clicking on a slide.

We had a chance to use it at our outdoor worship night back in October, and it worked great! I was making tweaks to it in real time as people were using it.

We used the new PresentationBridge Client at our outdoor night of worship and it worked very well.

Here’s a video that shows it in action:

This project will be released open-source at some point, but currently I am looking for a few testers to give their feedback. If you’d like to be considered, please reach out to me via the contact form and I will be in touch.