If you use Companion and are in tech ministry, you have probably used my PCO Services LIVE module. While in the process of converting this module to the new API we are using for Companion 3.0, I gave it an overhaul and added lots of new features!
I recently decided to give midi-relay some love, since person after person has asked me to make it an easier-to-run app rather than one that requires setting up a Node.js runtime.
When I originally created midi-relay, I designed it to run on every OS, especially the Raspberry Pi platform. Thousands of people use it all over the world for all kinds of stuff. Probably because it’s free. 🙂
This software is designed to accept a JSON object via its API and then turn that object into a MIDI command and send it out a local MIDI port. It allows for remote control of a lot of systems by sending the command over a simple network protocol.
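To make that concrete, here's a hedged sketch of what sending a Note On through midi-relay might look like. The field names, host, and port below are illustrative assumptions; check the midi-relay documentation for your version for the exact API shape.

```javascript
// Build a midi-relay style payload. Field names (midiport, midicommand,
// etc.) are illustrative; verify them against the midi-relay docs.
function buildNoteOn(port, channel, note, velocity) {
  return {
    midiport: port,        // name of the local MIDI output port
    midicommand: 'noteon',
    channel: channel,      // 0-15
    note: note,            // 0-127
    velocity: velocity,    // 0-127
  };
}

const payload = buildNoteOn('IAC Driver Bus 1', 0, 60, 100);

// Sending it is just an HTTP POST. Guarded behind an env var so this
// file can be loaded without touching the network; the URL is an
// assumption, not midi-relay's confirmed default.
if (process.env.MIDI_RELAY_URL) {
  fetch(process.env.MIDI_RELAY_URL + '/sendmidi', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }).then((res) => console.log('midi-relay responded:', res.status));
}
```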
Now it’s even easier to use.
It runs in the system tray for easy access.
Some new features include:
a new socket.io API for bi-directional communication
a virtual MIDI port, for loopback uses
an upgraded Bitfocus Companion v3 module
the ability to disable remote control, if needed
So if you’re a midi-relay user and you want an easy way to run this on your Mac or Windows desktop, go check out the latest release!
If using my software makes your life easier, please consider supporting my family.
A while back, I wrote about the network-based VISCA shading rig we created for the Marshall CV503 cameras we use on stage, to control their various exposure settings. The cameras themselves can only be controlled via RS485 serial, so our system sends UDP from Bitfocus Companion (using the Sony VISCA module/connection) over the network, converts it to serial at a Raspberry Pi, and then, using custom cables, sends the signal to the cameras over the patchbay.
We’ve been using that system ever since and it works great. We have even recently taken the steps to create custom cable looms that have SDI, CAT6, and power all in one loom to make it a breeze to set up.
Recently, we set up one of these cameras at the back of our auditorium where it’s impractical to run a cable all the way to our patchbay in the rack room at the stage side for a serial connection. We still need to control the exposure, so a solution was needed.
It’s also impractical these days to buy a Raspberry Pi. They have gotten quite expensive, and difficult to find in stock.
A few months ago, I bought a Nano Pi NEO and started playing around with it to see what it could do, since it’s easy to get ahold of and very affordable.
This is the Nano Pi NEO single board computer.
It has an ethernet port, a full size USB A port, and is powered via micro USB. It runs Armbian quite well, so it was very simple to install my existing udp-to-serial nodejs script.
I bought a project box and modified it to fit all the parts. I started with a Dremel, but I should have just used a hacksaw from the beginning, since it gave me much cleaner cuts. I didn't want to do any soldering or make custom internal cables, so my box had to be a little larger.
The entire rig is powered by a single PoE-to-USB adapter. This provides the Ethernet data to the Nano Pi and micro USB power to its power port. I also figured out a while back that you can use a USB 5V-to-12V step-up cable to power these cameras, so I put one of those in the box as well.
PoE-to-USB adapter, RS485 cable, and two keystone jacks for serial out. Blue/white-blue pins for +/−.
We put RJ45 keystone jacks on the box for the serial-out connections, and we hot-glued the PoE-to-USB adapter to the lid of the box so the connection could sit flush with the edge.
It's certainly crammed in there! The Nano Pi is glued to the bottom, and the rest of the cables are tucked into the box: the USB splitter, the USB-to-RS485 adapter, and the USB 5V-to-12V DC cable.
Keystone jacks used for the serial connections. We then have custom RJ45 to Phoenix connectors that plug into the cameras. This method allows us to use any standard CAT5/6 patch cable to make the connections in between.
These are Amazon purchase links. As an Amazon Associate I earn from qualifying purchases.
One single PoE connection provides all the power and data needed.
Overall, I'm pretty pleased with how it turned out! I like that it's just two cables: one for the SDI video signal off the camera, and one Ethernet to power it all and provide the data connection.
A while back, I shared a post about using Google Apps Script (GAS) to automate repetitive tasks in Google Docs. Four years later, we are still using that same script.
My church recently announced that we would be launching an evening service, so I took the opportunity to open the script and add support for the additional document type. I also added checks so that if the user ever selects the CLOSE button on one of the dialog boxes, the script ends immediately and no further action is taken.
Here’s the updated script:
function myFunction()
{
  var ui = DocumentApp.getUi();
  var templateDocId = '[templateid]';

  var prompt_numberOfDocs = ui.prompt('How many Talking Point documents do you want to create?');
  if (prompt_numberOfDocs.getSelectedButton() == ui.Button.CLOSE) {
    // don't go any further
    return;
  }

  var prompt_startingDate = ui.prompt('What is the starting date? Please enter in MM/dd/yyyy.');
  if (prompt_startingDate.getSelectedButton() == ui.Button.CLOSE) {
    // don't go any further
    return;
  }

  var numberOfDocs = parseInt(prompt_numberOfDocs.getResponseText(), 10);
  var startingDate = prompt_startingDate.getResponseText();

  var prompt_venueResponse = ui.prompt('Venue', 'Create Documents for a custom venue? Type in the venue name and click YES. Otherwise click NO.', ui.ButtonSet.YES_NO);

  var venueTitle = '';
  var customVenue = false;
  var createAud1Morning = false;
  var createAud2Morning = false;
  var createAud1Evening = false;

  if (prompt_venueResponse.getSelectedButton() == ui.Button.CLOSE) {
    // don't go any further
    return;
  }
  else if (prompt_venueResponse.getSelectedButton() == ui.Button.YES) {
    venueTitle = prompt_venueResponse.getResponseText();
    customVenue = true;
  }
  else {
    var prompt_Aud1MorningResponse = ui.prompt('Venue', 'Create Documents for Aud 1 (Morning)?', ui.ButtonSet.YES_NO);
    var prompt_Aud2MorningResponse = ui.prompt('Venue', 'Create Documents for Aud 2 (Morning)?', ui.ButtonSet.YES_NO);
    var prompt_Aud1EveningResponse = ui.prompt('Venue', 'Create Documents for Aud 1 (Evening)?', ui.ButtonSet.YES_NO);

    if (prompt_Aud1MorningResponse.getSelectedButton() == ui.Button.CLOSE) {
      // don't go any further
      return;
    }
    else if (prompt_Aud1MorningResponse.getSelectedButton() == ui.Button.YES) {
      createAud1Morning = true;
    }

    if (prompt_Aud2MorningResponse.getSelectedButton() == ui.Button.CLOSE) {
      // don't go any further
      return;
    }
    else if (prompt_Aud2MorningResponse.getSelectedButton() == ui.Button.YES) {
      createAud2Morning = true;
    }

    if (prompt_Aud1EveningResponse.getSelectedButton() == ui.Button.CLOSE) {
      // don't go any further
      return;
    }
    else if (prompt_Aud1EveningResponse.getSelectedButton() == ui.Button.YES) {
      createAud1Evening = true;
    }
  }

  var date = new Date(startingDate);

  var htmlOutput = HtmlService
    .createHtmlOutput('<p>Creating ' + numberOfDocs + ' documents. Please stand by...</p>')
    .setWidth(300)
    .setHeight(100);
  ui.showModalDialog(htmlOutput, 'Talking Points - Task Running');

  for (var i = 0; i < numberOfDocs; i++)
  {
    // advance the starting date by 7 days per loop iteration (i * 7 days in milliseconds)
    var loopDate = new Date(date.getTime() + ((i * 7) * 3600000 * 24));
    var documentName = 'Talking Points - ' + Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MMMM dd, yyyy");
    var documentDate = Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MM/dd/yyyy");

    if (customVenue) {
      documentName += ' (' + venueTitle + ')';
      createNewTalkingPointDocument(templateDocId, documentName, venueTitle, documentDate);
    }
    else {
      if (createAud1Morning) {
        createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 1 Morning)', 'Aud 1 (Morning)', documentDate);
      }
      if (createAud2Morning) {
        createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 2 Morning)', 'Aud 2 (Morning)', documentDate);
      }
      if (createAud1Evening) {
        createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 1 Evening)', 'Aud 1 (Evening)', documentDate);
      }
    }
  }

  htmlOutput = HtmlService
    .createHtmlOutput('<script>google.script.host.close();</script>')
    .setWidth(300)
    .setHeight(100);
  ui.showModalDialog(htmlOutput, 'Talking Points - Task Running');
}

function createNewTalkingPointDocument(templateDocumentId, documentName, venueTitle, documentDate)
{
  // Make a copy of the template file
  var documentId = DriveApp.getFileById(templateDocumentId).makeCopy().getId();

  // Rename the copied file
  DriveApp.getFileById(documentId).setName(documentName);

  // Get the document body
  var body = DocumentApp.openById(documentId).getBody();

  // Insert the entries into the document
  body.replaceText('##VENUE##', venueTitle);
  body.replaceText('##DATE##', documentDate);
}
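One caveat on the date math in the loop: multiplying by 3600000 * 24 assumes every day is exactly 24 hours, so a weekly series that crosses a daylight-saving boundary can drift by an hour. A calendar-based alternative (plain JavaScript, and it works the same way inside Apps Script) lets the Date object handle the rollover:

```javascript
// Advance a date by a whole number of calendar days, letting the Date
// object handle month/year rollover and DST transitions.
function addDays(date, days) {
  var result = new Date(date.getTime());
  result.setDate(result.getDate() + days);
  return result;
}

// e.g. inside the loop above: var loopDate = addDays(date, i * 7);
```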
If you're using Google Apps Script to automate something, leave a comment to share how! If you'd like to create something like this for your ministry and need my help, drop me a line.
One of the first blog posts here was about PCO’s custom reports. I’ve written a lot of them and helped a lot of churches get started with their own.
In anticipation of a possible need for split teams, I’ve now created a new custom report that has several customizable features, enhanced checklists, dynamic notes, and more, without having to write any actual code. Just modifying variables at the top of the report.
This new report supports the following:
Customizable header
Custom print order, with variable plan items as columns and/or rows alongside the plan item description
Automatic highlighting of Plan Item Note changes to signify important information
Ability to display Plan Notes for everyone, by team, or by position
Custom CSS for your own unique look
Ability to show headers in their own row, or inline to save space
Here's the report with headers as their own rows.
Here's the exact same report, but with headers inline for a cleaner look.
Here’s a video that shows how it all works:
Because of the substantial amount of work I have put into creating and coding this report, I have chosen to make this report available for purchase. I’m pricing it at a point that is affordable for most churches, at $45. Once payment is received, I will send over the report code and help you install it, if needed.
PCO Services Matrix Report with Split Teams, Fully Customizable
This custom report will revolutionize the way you share information with your team!
Report code will be sent to the email address provided once payment is received.
If you have a need for a custom report beyond this, contact me! I’m always available for hire for your custom PCO reporting projects, or whatever other custom coding needs your ministry or organization may have.
About a year and a half ago, I shared our project where we were able to gain network control of our Marshall cameras by bridging RS485 over UDP with a Raspberry Pi and a USB to RS485 adapter.
I’m happy to report that it is still working great, week after week. In fact, we recently implemented a second system in our other auditorium, so when we have simulcast services and I want to move a camera over there, it’s still just as easy to shade them remotely.
I have had several people write in and ask specifically how we created the wiring to make this possible, so I drew up this diagram today:
Essentially, the serial connection is just two wires: positive (+) and negative (−). The cables use the blue/white-blue pair of standard Cat5/6 cable to make the connections. In our setup, we have wall/stage boxes with standard Cat6 jacks that lead to a Cat6 patch panel in the rack room where the Raspberry Pi is located, so we made adapters that let us use standard Cat6 patch cables for all of the connections in between. We chose the blue/white-blue pair because it is data-only, which minimizes any risk of a misconnection or short if the jack were accidentally used for something else on stage, or if someone plugged the wrong cable into our adapter.
Hope this helps as you implement this in your environment!
Somehow, it’s been a year since I’ve posted to this blog. I’ve been busy! Some financial circumstances in my family have meant I have needed to spend a lot of my spare time coding-for-hire. Over the past year, I’ve been developing custom apps, writing so many Companion modules, creating Ross Dashboard custom panels, and developing unique solutions through software development to meet the demands for clients. God continues to bring people to us who need my help, and He meets our needs.
I have also been hard at work developing solutions for the church.
Tally Arbiter is now at version 3.0 and continues to gain support for more switchers and more features.
PresentationBridge-Client has gained more features and continues to be a helpful resource for automating church production.
I’ve written several more plugins for Cronicle to help with the various workflows around our facility. Every venue we support is managed through Cronicle now. As a room is reserved, we create the automated events, and people just show up! The AVL of the room is turned on and ready for them, automatically configured to meet the needs of their event.
TimeKeeper got some feature bumps as well, allowing us to easily automate things like switching OBS scenes “3 seconds before x timer ends”, etc. We even have it turning off the lights in the control room just as production is about to begin!
We are doing more and more with Companion to help our volunteers do more at the press of a button. Complex tasks on a video switcher are easy because they just press one button.
I’m excited to share my latest project here today that helps us meet a specific need at our church, and I hope will help you at yours. We use Spotify around the campus as people are walking in, pre-service music, etc. We have a dedicated Mac Mini running Dante Virtual Soundcard that outputs the Spotify music to our Dante VLAN. From there, any of the venues can receive the Spotify mix and play it – whether it is one of the auditoriums, the cafe/coffee bar, the outdoor speakers as people walk in, etc.
We run this Campus Spotify headless on the Mac mini so it’s located in a rack room where we never touch it. When we need to change playlists, we remote into the computer and control it. When we need to start/stop playing, we’ve been triggering an AppleScript file remotely to play/pause.
But that all just got a lot easier. I recently developed a Node.js/Electron app, for macOS only, that runs in the system tray of this Campus Spotify machine. It controls Spotify using AppleScripts, and it gets data out of Spotify, like the currently playing track, through Apple's built-in Distributed Notification Center, which Electron can easily pick up and process. (Electron is the framework that lets you create cross-platform "native" apps that run on the computer without having to install the Node.js runtime. You might be surprised to learn that several apps you already use are written in Node.js and use Electron!)
Why not just use the Spotify Web API? Sure, you could gain control of Spotify through the Web API, but for some users, the OAuth process and creation of app keys is a stumbling block. This process is much easier: just download the app and run it!
To enable remote control, this app exposes both a simple REST API as well as socket.io connections for real-time event based communication. Whenever Spotify changes tracks or play/pauses, it will automatically communicate this information to any of the connected socket.io clients.
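As a rough sketch of what hitting the REST side might look like, here is a minimal client. The port number and endpoint paths below are assumptions for illustration, not the app's confirmed API; check the spotify-controller README for the real ones.

```javascript
// Hypothetical REST client for spotify-controller.
// BASE_URL and the action paths are placeholder assumptions.
const BASE_URL = 'http://localhost:8801';

function controlUrl(action) {
  // e.g. 'play', 'pause', 'next'
  return BASE_URL + '/' + action;
}

// Guarded behind an env var so loading this file never makes a
// network call on its own.
if (process.env.SPOTIFY_CONTROLLER) {
  fetch(controlUrl('play'), { method: 'POST' })
    .then((res) => console.log('spotify-controller status:', res.status));
}
```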
I'm calling the app "spotify-controller".
It runs in the system tray so it’s nice and tidy!
I’ve also created a partner Companion module to work with it. It supports variables, feedbacks, and presets to make things a lot easier to control!
When you’ve connected the Companion module to spotify-controller, you can see all of the playback information as variables.
Feedback status on buttons! This will update whether Spotify is controlled from Companion, another Companion instance, Spotify itself, Spotify from your phone…. anywhere!
You also get fancy little notifications whenever a song changes on the Mac running spotify-controller!
Like all of the software I write for the church, this is free and open-source. My hope is that it will not only help you use technology more effectively for ministry, but by releasing it open source, it will continue to foster a community for contribution. We can do so much more together when we share resources and time.
The partner Companion module is available in the beta builds as well. If you’re a Cronicle user like us, the plugin to control it will be available here: https://github.com/josephdadams/cronicleplugins
About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.
It’s now in use in hundreds of places, from churches to event venues to sports stadiums.
Version 2.0 was quietly released a few weeks ago. It includes compiled applications that run natively on Windows, macOS, and Linux, without needing to install Node.js and other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.
Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.
It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.
Someone from the community designed a new logo! Isn’t it nice?
I shared back in the fall about my new Presentation Bridge Client software. Since that post, the software has been in a private testing period as I was getting feedback from users. And now, thanks to some help from the community, it’s ready to release!
My hope is that this software will help you be more efficient in your tech ministry, especially when you need to do a lot of things without a lot of people.
Go check it out! And, as always, feedback and contributions are welcome.
It's been a while since I posted! Earlier in the year, we had a few unexpected expenses come up in our family, so I started spending my spare time in the evenings doing custom freelance programming to help meet the need. I have been doing this for a few months now, and it has helped us out.
God continues to bring new visitors to this blog and I have been able to return emails, phone calls, Zooms, and help so many people implement the ideas and software that I’ve created here. It is truly a blessing to see how God has used this little blog I started a few years ago.
I’m excited to share a new project that I have been working on with my team: Control of our Canon XF cameras through a stream deck. We have a couple of these cameras here at my church, the Canon XF 705 series:
I have been mentoring the guys who work part-time in A/V here with me on how to write code, specifically modules for the Companion project that we use so heavily. We decided it would be great to have control of these cameras at our shader station, alongside the shader control of our Marshall cameras (I wrote about that here) and our broadcast cameras.
These Canon cameras come with a LAN port (you can also use Wi-Fi) and run a little web server called the Browser Remote, which gives you full control of all the camera functions, from focus/zoom/iris/gain all the way to recording, white balance, and shutter control. If there's a button on the camera, chances are you can control it from the Browser Remote. You can even see a live preview of the camera!
The built in browser remote functions of the Canon XF series.
So we started digging and realized there is an internal API on the camera that returns a lot of the data as simple JSON sets. Once you initiate a login request to the camera, it returns an authentication token, which must be sent along with every subsequent request.
For feedbacks on the camera state, we simply poll the camera every second or so. The browser remote page itself seems to do this as well, so we just emulated that.
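In rough sketch form, the login-then-poll flow looks like this. The camera's endpoint paths and parameter names are not publicly documented, so the IP, paths, and `authtoken` field below are all placeholders, not Canon's actual API:

```javascript
// Sketch of polling a Canon Browser Remote style API.
// CAMERA_IP, the paths, and the authtoken parameter are placeholders.
const CAMERA_IP = '192.168.1.50';

function withAuth(path, token) {
  // The camera expects the auth token on every request after login;
  // here it's appended as a query parameter purely for illustration.
  return `http://${CAMERA_IP}${path}?authtoken=${encodeURIComponent(token)}`;
}

async function pollCamera(token) {
  const res = await fetch(withAuth('/api/status', token));
  return res.json(); // JSON set describing iris, gain, record state, etc.
}

// Guarded so loading this file never hits the network on its own.
if (process.env.CANON_TOKEN) {
  // Poll roughly once per second, as the Browser Remote page
  // itself appears to do.
  setInterval(() => pollCamera(process.env.CANON_TOKEN).then(console.log), 1000);
}
```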
The Browser Remote unfortunately only allows one user to be logged in at a time, so while our Companion module is in use, the actual Browser Remote page can't be used. But for our purposes that's not really an issue, since we just want button control of the iris/gain functions when we use these cameras during live services. Now I don't have to ask my operators to iris up or down; I can do it right from the stream deck!
Here’s a little walkthrough video that shows the module in action:
The module will soon be a part of the Companion beta builds, so if you have a Canon XF series camera, go check it out!