Revisiting and Refining a Google Apps Script with the help of Generative AI

A few years ago (6 years ago!), I shared a solution I came up with to create the weekly “talking points” Google documents that my team relies on. We’ve been using that same Google Apps Script solution ever since. It’s been rock solid, and it saves us a lot of time over creating each of these documents by hand.

I decided it was time to refresh this script and document, since we now have a third venue (at a new campus). And, when it’s time to refine – why not consult some AI in the process?

I started by sending ChatGPT my existing script and asking if it had any ideas to improve the prompts.

This was my starting prompt:

The response:

We immediately got to work redesigning the script – mostly focusing on the dialogs and flow.

I came up with a basic new design that featured the church logo and a simpler header. ##VENUE## and ##DATE## are placeholders that get replaced with the actual Venue name and Date of the document.
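
Here’s a minimal sketch of how that kind of placeholder swap works in Apps Script – the function and variable names are my own for illustration, not the exact code from my script:

function createVenueDoc(templateId, venue, dateString, destFolderId) {
  //copy the template document into the destination folder
  var folder = DriveApp.getFolderById(destFolderId);
  var copy = DriveApp.getFileById(templateId).makeCopy(venue + " Talking Points - " + dateString, folder);

  //swap the ##VENUE## and ##DATE## placeholders in the new document
  var body = DocumentApp.openById(copy.getId()).getBody();
  body.replaceText("##VENUE##", venue);
  body.replaceText("##DATE##", dateString);

  return copy.getUrl();
}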

After some back and forth, here’s what the new dialog looks like:

This looks a lot better! I even added a progress bar:
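
If you’re wondering how a progress bar is even possible in Apps Script, here’s the general shape of the pattern – a sketch only, assuming a container-bound script (the real dialog markup is more involved):

//the generation loop calls setProgress() as it works, and the dialog polls getProgress()
function setProgress(percent) {
  CacheService.getUserCache().put("progress", String(percent));
}

function getProgress() {
  return Number(CacheService.getUserCache().get("progress") || 0);
}

function showProgressDialog() {
  var html = HtmlService.createHtmlOutput(
    '<progress id="bar" max="100" value="0"></progress>' +
    '<script>function poll(){google.script.run.withSuccessHandler(function(p){' +
    'document.getElementById("bar").value = p;' +
    'if (p < 100) { setTimeout(poll, 500); } else { google.script.host.close(); }' +
    '}).getProgress();}poll();</script>'
  ).setWidth(300).setHeight(80);
  DocumentApp.getUi().showModalDialog(html, "Generating documents...");
}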

If you’re hesitant to jump into using generative AI – give it a whirl! It can save you a lot of time and propose ideas you may not have thought of.

If you want to see my script, you can check it out here: https://github.com/josephdadams/document-generator-gas

Building a digital roster/serving board using Companion and the Planning Center Services API

If you’re involved in tech ministry and like to tinker, chances are you’ve heard of — and maybe even used — micboard.io.

This is Micboard.

Straight from their website, “Micboard simplifies microphone monitoring and storage for artists, engineers, and volunteers. View battery, audio, and RF levels from any device on the network.” It’s a neat tool and has helped a lot of teams over the years.

I always liked the idea of Micboard because it would be a great way to show who is serving that day. We tried to implement it at my church but eventually moved away from it, mainly because it hadn’t been updated in quite a while (over 6 years now) and we needed some additional features. Specifically, we were looking for integration with Planning Center Services — something that could automatically pull assignments from an interface our team was already familiar with. And we wanted something we could use for more than just the people on stage.

At first, I forked the Micboard repo (since it’s open source) and started making improvements, cleaning up some code, and tweaking it to run more easily on modern macOS systems. But pretty quickly, I realized I had too much on my plate to maintain a whole fork long-term.

Fast forward a year or so: I came across a few posts in some Facebook groups where people were using my ScreenDeck project to essentially create a Micboard-style interface using Companion.

I wish I had my own Acoustic Bear.

What I loved about this approach is that it leveraged something we were already using — Companion — and could still be viewed from anywhere on the network, just like Micboard. Plus, Companion supports a lot more devices beyond just Shure systems.

Even better, this opened the door to that Planning Center integration I had wanted without introducing a bunch of extra overhead — we were already using the PCO module to control our LIVE service plans!

One thing I’ve wanted for a while was a digital roster — something simple to show who’s serving each day, helping everyone put names to faces across band, tech, safety, and more. A “Serving Board,” if you will.

About a year ago, I modified the PCO module to pull scheduled people into variables — showing their names and assigned roles. I recently took it further by adding a feedback: “Show Person Photo based on Position Name.”

Now, the module pulls the photo from the person’s assignment, converts it into a PNG, and stores it internally as a base64 image — which can be shown directly on a button.
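
The gist of it looks something like this – a rough sketch only (the actual module code differs, and I’m using the sharp image library here purely for illustration):

const sharp = require('sharp')

async function photoToBase64Png(photoUrl) {
  //download the photo from the person's PCO assignment
  const res = await fetch(photoUrl)
  const buffer = Buffer.from(await res.arrayBuffer())

  //convert it to a small PNG sized for a button
  const png = await sharp(buffer).resize(72, 72).png().toBuffer()

  //Companion can render a base64-encoded PNG directly on a button
  return png.toString('base64')
}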

Pretty cool – and it looks like this:

Say “hi”, Adam.

But I didn’t want to stop there — I wanted the person’s status (Confirmed, Unconfirmed, or Declined in PCO) to show too.

Using the companion-module-utils library (thanks to another awesome Companion dev!), I added a simple colored border overlay for statuses.

A few extra lines of code later:
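
Sketched from memory, it looks something like this – the option names may differ slightly, so check the companion-module-utils docs for the exact API:

const { graphics } = require('companion-module-utils')
const { combineRgb } = require('@companion-module/base')

//map PCO scheduling statuses to border colors
const STATUS_COLORS = {
  C: combineRgb(0, 200, 0), //Confirmed: green
  U: combineRgb(255, 170, 0), //Unconfirmed: orange
  D: combineRgb(220, 0, 0), //Declined: red
}

//inside an advanced feedback callback, return a border overlay for the button
function statusBorder(feedback, status) {
  return {
    imageBuffer: graphics.border({
      width: feedback.image.width,
      height: feedback.image.height,
      color: STATUS_COLORS[status],
      size: 4,
    }),
  }
}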

And you can get this look!

Thanks for confirming!

At this point, it was looking great — but I started thinking:

What if I don’t want to redo all my buttons every week? What if my teams and roles change?

So I added a new option: a generic “position number” approach.

You can now pick a position number in the plan (or within a specific team) — and the module will automatically pull the right person’s info, week to week, without you having to manually reconfigure anything.
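
Conceptually, the lookup is as simple as this (a hypothetical helper, not the module’s actual code):

function personAtPosition(scheduledPeople, positionNumber, teamName) {
  //optionally narrow the pool to one team, e.g. "Band" or "Tech"
  const pool = teamName ? scheduledPeople.filter((p) => p.teamName === teamName) : scheduledPeople

  //position numbers are 1-based: position 1 is the first person scheduled
  return pool[positionNumber - 1]
}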

For example:

• Pick any number across the entire plan.

• Or pick a number within a specific team, like Band or Tech.

With this option, you can choose any number, regardless of the team.

This picks the first person scheduled in the band.

I also built some Module Presets to make setting this up super easy:

• Generic Position Number (no specific team) – choose any number without regard to team
• Position Number Within a Team (like “Band” only) – for example, choose a number within the Band team

And here’s where it all comes together:

Let’s say you have a “Wireless Assignments” team in PCO, and you assign a person to a position called “Wireless 4.”

Now, using the Shure Wireless module in Companion, you can match that name and see live RF and battery stats for Wireless 4 — tied directly to the person assigned!

All together, you get a clean, dynamic, reusable Micboard-style dashboard — all inside Companion, no extra tools required.

Here’s a walkthrough video showing it all in action:

The updated PCO Services Live module is available now in the Companion betas — go check it out if you want to try it!

Notify production team members remotely using open source software and low cost USB busy lights

At my church, we have a couple of these:

They’re great. Expensive, but they work well.

The problem for us is that anytime anyone presses the Call light on the intercom party line, every flasher on that party line lights up. This means we can really only have one unique flasher per line.

Sometimes, we want or need to get a specific person/position’s attention.

I created some software to help with this. It’s called beacon.

It’s a small app that runs in the system tray and hosts a network API so you can signal a USB busy light, such as the Luxafor Flag or ThingM blink(1). Or, if you don’t have or don’t want a physical signal light, you can use an on-screen dot instead.

I’ve designed this to work in tandem with a custom module for Bitfocus Companion, but since it has a full API, you can implement any third-party integrations that you like. All of the documentation is in the GitHub repository: https://github.com/josephdadams/beacon

You can set a beacon to stay a solid color, fade to a new color, flash a color, and more. You can send custom notifications to the user’s window as well as play tones and sounds.

Here’s a video of the project in action to show you how you can use it:

Go check it out today!

https://github.com/josephdadams/beacon

midi-relay v3.0 is here – as an Electron app for Mac and Windows!

I decided to give some love recently to midi-relay, since person after person asked me to make it an easier-to-run app rather than one that requires setting up a Node.js runtime.

When I originally created midi-relay, I designed it to run on every OS, especially the Raspberry Pi platform. Thousands of people use it all over the world for all kinds of stuff. Probably because it’s free. 🙂

This software is designed to accept a JSON object via its API and then turn that object into a MIDI command and send it out a local MIDI port. It allows for remote control of a lot of systems by sending the command over a simple network protocol.
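
A Note On, for example, is just a small JSON object sent over HTTP. This sketch shows the idea – verify the exact endpoint, port, and field names against the README before relying on them:

//illustrative request - check the midi-relay README for the current API
fetch('http://127.0.0.1:4000/sendmidi', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    midiport: 'IAC Driver Bus 1', //which local MIDI port to send out of
    midicommand: 'noteon',
    channel: 0,
    note: 21,
    velocity: 100,
  }),
})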

Now it’s even easier to use.

It runs in the system tray for easy access.

Some new features include:

  • a new socket.io API for bi-directional communication
  • a virtual MIDI port, for loopback uses
  • an upgraded Bitfocus Companion v3 module
  • disabling remote control, if needed

So if you’re a midi-relay user and you want an easy way to run this on your Mac or Windows desktop, go check out the latest release!

If using my software makes your life easier, please consider supporting my family.

Thanks!

Tally Arbiter 2.0 now available!

About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.

It’s now in use in hundreds of places, from churches to event venues to sports stadiums.

Version 2.0 was quietly released a few weeks ago. It includes a compiled application that runs natively on Windows, macOS, and Linux, without the need to install Node.js or other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.

Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.

It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.

Someone from the community designed a new logo! Isn’t it nice?

A few features to note since version 1.5:

  • An entirely new user interface and native applications for the big three operating systems
  • Easy command-line installation via a new NPM package or Docker image
  • 1-second updates function for TSL Clients (provides compatibility with certain tally products like Cuebi)
  • Recording/Streaming statuses for OBS and VMix now available for tally states
  • Generic TCP Device Action improvements
  • TSL 5.0 source support
  • New Ross Carbonite source type to monitor any bus regardless of the “on air” settings
  • Web tally page can now be loaded directly by Device Id, and chat can be disabled
  • Pimoroni Blinkt! Listener Client
  • TTGO T-Display Listener Client
  • Improved Outgoing Webhooks – support for https and content-type selections
  • Roland Smart Tally emulation for use with STAC
  • Panasonic AV-HS10 support
  • Support for ATEM super sources in tally states
  • Bug fixes and performance improvements

If you’re new to Tally Arbiter, go check it out! You can also join the new Facebook user group here: https://www.facebook.com/groups/tallyarbiter

And to everyone in the community who has helped to make TA what it is, thank you! Your contributions are helping everyone.

PresentationBridge Client now in public release!

I shared my new PresentationBridge Client software back in the fall. Since that post, the software has been in a private testing period while I gathered feedback from users. And now, thanks to some help from the community, it’s ready for release!

My hope is that this software will help you be more efficient in your tech ministry, especially when you need to do a lot of things without a lot of people.

Go check it out! And, as always, feedback and contributions are welcome.

You can get the latest release here: https://github.com/josephdadams/presentationbridge-client/releases/latest

Controlling a Canon XF series camera using a stream deck and Companion by reverse-engineering the Canon Browser Remote

It’s been a while since I posted! Earlier in the year, we had a few unexpected expenses come up in our family, so I started spending my spare time in the evenings doing custom freelance programming to help meet the need. I have been doing this for a few months now, and it has helped us out.

God continues to bring new visitors to this blog and I have been able to return emails, phone calls, Zooms, and help so many people implement the ideas and software that I’ve created here. It is truly a blessing to see how God has used this little blog I started a few years ago.

I’m excited to share a new project that I have been working on with my team: control of our Canon XF cameras from a stream deck. We have a couple of these cameras here at my church, the Canon XF705:

I have been mentoring the guys who work part-time in A/V here with me on how to write code – specifically, modules for the Companion project that we use so heavily. We decided it would be great to have control of these particular cameras at our shader station, alongside the shader control of our Marshall cameras (I wrote about that here) and our broadcast cameras.

These Canon cameras come with a LAN port (you can also use Wi-Fi) and run a little web server called Browser Remote, which gives you full control of all the camera functions – from focus/zoom/iris/gain all the way to recording, white balance, and shutter control. If there’s a button on the camera, chances are you can control it from Browser Remote. You can even see a live preview of the camera!

The built-in Browser Remote functions of the Canon XF series.

So we started doing some digging and realized that there is an internal API on the camera that returns a lot of the data as simple JSON. Once you initiate a login request to the camera, it returns an authentication token, which must be sent along with every future request.

For feedbacks on the camera state, we simply poll the camera every second or so. The Browser Remote page itself seems to do this as well, so we just emulated that.
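
The pattern looks roughly like this – the endpoint paths and field names below are placeholders for illustration, not the camera’s real API:

let authToken = null

async function login(cameraHost) {
  //placeholder path and field name - the real endpoints came from watching the Browser Remote's traffic
  const res = await fetch(`http://${cameraHost}/login`)
  authToken = (await res.json()).token
}

async function getStatus(cameraHost) {
  //the token from login must accompany every subsequent request
  const res = await fetch(`http://${cameraHost}/status`, {
    headers: { Authorization: authToken },
  })
  return res.json()
}

//poll about once a second, like the Browser Remote page itself does
setInterval(async () => {
  const status = await getStatus('192.168.1.50') //your camera's IP
  console.log(status) //in the module, this is where button feedbacks get updated
}, 1000)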

Unfortunately, Browser Remote only allows one user to be logged in at a time, so while our Companion module is in use, the actual Browser Remote page can’t be used. But for our purposes, that’s not really an issue, since we just want button control of the iris/gain functions when we use these cameras during live services. Now I don’t have to ask my operators to iris up or down – I can just do it right from the stream deck!

Here’s a little walkthrough video that shows the module in action:

The module will soon be a part of the Companion beta builds, so if you have a Canon XF series camera, go check it out!

Automated Printing of Google Documents using Google Apps Script, the Dropbox API, and Automator Folder Actions

A couple of years ago, I shared a workflow that we still use to auto-generate the documents we use each week. A few months ago, I shared another workflow showing how I automated printing our weekly Planning Center Online paperwork.

I decided recently that I was tired of still having to manually print these weekly “talking points” documents, while having my Planning Center paperwork fully automated. So, I took a few minutes and wrote a new Google Apps Script to help with this.

We print these every week. I was doing it manually, but not anymore!

Here is what the script does:

  • Searches a specific Google Drive folder for all subfolders with files that match today’s date (the script will run on a weekly trigger)
  • If the file is a match, it opens the file as a PDF and stores the binary contents in a variable
  • An upload request is made to the Dropbox API with that binary data and a file name
  • Dropbox saves the file into the “Automated Printing” folder
  • Dropbox then syncs the file to the local computer (Mac)
  • The local Mac is configured with a Folder Action that automatically prints any files placed in this folder
  • After the Automator Folder Action prints the file, it removes the file

Here’s how you set it up:

First, you want to create a new Dropbox “App”. Go to dropbox.com/developers and click “Create apps”.

Then, you need to fill out these fields:

  1. “Choose an API”: Scoped Access. It’s your only choice.
  2. “Choose the type of access you need”: I chose “Full Dropbox” because I already had a specific folder set up in the root of my Dropbox. If you’re setting up the Automator Folder action for the first time, you could probably keep the scope within “App folder”.
  3. “Name Your App”: Give it a meaningful name. It does have to be unique across all of Dropbox, for some reason, so if you get an error here, just add something unique to you.
  4. “Choose the Dropbox account that will own your app”: If you have personal/business accounts linked, you’ll need to choose the account that owns the app. I’m using a business account for this, so I chose that one.

On the next page, choose the “Permissions” tab.

Then give your app “files.content.write” access.

Now back on the Settings tab, generate a new Token and set the Expiration to “No expiration”.

This will generate a Token key which you will use within the Google Apps Script in the next steps.

Now in Google Drive, click “New”, go down to “More”, and choose “Google Apps Script”. Google Apps Script is essentially JavaScript, so it’s super easy to use.

You’ll want to give the project a helpful name, as it will be stored in your Google Drive this way.

Give your project a helpful name.

In the code section, paste in my script below:

/*
EDIT THESE VARIABLES FOR YOUR SETUP
*/
var accessToken = "token"; //Dropbox App Access Token
var rootFolder = "folderID"; // Google Drive Root Folder where these files live
var dropboxPath = "/Automated Printing/"; //Dropbox Folder Path to place file in
var numberOfCopies = 2; //the number of copies you want per file

//Nothing to edit below

function myFunction() {
  var dtDate = new Date();
  const monthNames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];
  var strDate = monthNames[dtDate.getMonth()] + " " + dtDate.getDate() + ", " + dtDate.getFullYear(); //today's date, formatted to match the file names, e.g. "March 6, 2022"
  var mainFolder = DriveApp.getFolderById(rootFolder);
  var subFolders = mainFolder.getFolders();
  while(subFolders.hasNext()) {
    var subFolder = subFolders.next();
    var files = subFolder.getFiles();
    while(files.hasNext()) {
      var file = files.next();
      var fileName = file.getName();
      if ((fileName.indexOf(strDate) > -1) && (fileName.indexOf(".pdf") == -1)) {
        //this is a file we want to print
        Logger.log("Generating PDF: " + file.getName());
        for (let i = 0; i < numberOfCopies; i++) {
          sendToDropbox(file.getName() + ".pdf", file.getAs('application/pdf'));
          Utilities.sleep(15000); // wait 15 seconds before doing the next file, so that Dropbox has time to sync the file, the Automator can print the file, remove it, and close out
        }
      }
    }
  }
}

function sendToDropbox(fileName, fileBlob) {
  var parameters = {
    "path": dropboxPath + fileName,
    "mode": "add",
    "autorename": true,
    "mute": false,
    "strict_conflict": false
  };

  var headers = {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/octet-stream',
    'Dropbox-API-Arg': JSON.stringify(parameters)
  };

  var options = {
    "method": "POST",
    "headers": headers,
    "payload": fileBlob
  };

  var apiUrl = "https://content.dropboxapi.com/2/files/upload";
  var response = JSON.parse(UrlFetchApp.fetch(apiUrl, options).getContentText());
  Logger.log("Uploaded to Dropbox: " + response.name); //log the uploaded file's name from the returned metadata
}

Now modify the top section to include your Dropbox access token (the one you generated earlier), the Google Drive folder ID (the folder ID is in the URL of the page when you open that folder in Google Drive), the Dropbox path to save to, and the number of copies you need for each matching document. In our case, I need 2 copies of each document.

I learned in testing that if Dropbox syncs the files too quickly while my Automator folder action is still running, the newly added files don’t get included in the folder action, and the folder action doesn’t re-run for them. So the script uploads a new PDF for every copy needed but waits 15 seconds in between. This gives Google time to upload to Dropbox, Dropbox time to sync to my local Mac with the Automator action, and Automator time to run its script, print the file, and delete it. It’s not very efficient, but the files are not that large.

Now that your script is in place, you need to assign a trigger to it. Click “Triggers” on the left-hand side of the screen:

Add a new trigger. I used the following settings to have it run weekly on Sundays between 6 and 7am. Be sure to target the “myFunction” function as that’s the main one we are using.

You’ll need to create the folder action in Automator. Follow my previous post on how to do this, as the steps are the same. I didn’t have to change that at all!

Here’s a tutorial video if you learn better that way:

I hope this helps you think of ways to automate what you’re doing in Google Drive so you can spend more time on ministry and less on manual tasks!

How to create a custom Alexa Skill to play church sermons on Amazon Echo devices

We are an Amazon household. We buy stuff on Prime all the time. Sometimes, it feels like a daily task! We also really love the Amazon Echo devices and using Alexa for a variety of things. My boys love to ask Alexa to play fart sounds and we use it for music, timers, announcements, phone calls, sound machines at night, you name it.

One thing I have wanted for a while is the ability to easily play our church’s sermons on the Echo Dots in our house so I can listen while doing other things. In the past, I’ve simply played them from my phone with the Echo acting as a Bluetooth speaker. That works ok until I walk out of Bluetooth range, of course, and it means my phone is tied up playing that audio.

Amazon has made it super easy to create your own Alexa Skills, which are like voice-driven apps. You can enable and disable skills using the Alexa app, similar to how you install and uninstall apps on your phone. Using Alexa Skill Blueprints makes creating your own church skill simple.

The Alexa Blueprints home page.

There are a wide variety of blueprints available, which are basically templates to speed up creating your own skill. This is especially great if you don’t want to or don’t know how to write in the programming language yourself to figure it out.

They have a pre-made template called “Spiritual Talks”.

This is the blueprint/template that makes the process very simple!

To create your own skill, you will need:

  • Your podcast audio URL. We already post our sermons to iTunes and generate an RSS feed automatically through our church management software, Rock RMS: https://www.fellowshipgreenville.org/GetChannelFeed.ashx?ChannelId=28&TemplateId=1116&count=110
  • A Welcome message. When the skill is launched for the first time, Alexa will speak a welcome message. I used something simple: Welcome to Fellowship Greenville, South Carolina. Come and join us to worship every Sunday at 9am and 11am. Visit us any time to hear previous sermons.
  • A Returning message. When the skill is re-opened, Alexa will speak a welcome-back message. Here is what I used: Welcome back to Fellowship Greenville’s Sunday morning sermons podcast.
  • A skill name and logo. I used our church’s name and logo for this.

Once you’ve supplied all the information, you will want to publish the skill to the Alexa Skills Store. Someone will review it and once it’s approved, it will be publicly available. You can also privately share the skill if you don’t want to go through the publication process. I think they said to allow for 2 business days but mine was approved a lot faster than that. You can also make changes to the skill any time you want, but it will have to go through the re-approval process each time you make a change that you want made public.

Now, if people in our church want to use the skill, they just have to open the Alexa App on their phone, search for Fellowship Greenville in the Skills Store, and enable it.


Then, they can say things like:

  • “Alexa, open Fellowship Greenville”
  • “Alexa, ask Fellowship Greenville for the latest message”
  • “Alexa, Start Fellowship Greenville”


So far, it’s working pretty great for us! I am excited about adding this feature for our church as I am always looking for ways to make our sermon content more accessible. The nice thing about this is that it uses our existing podcast feed, so I don’t have to do any extra work each week for the skill to get the latest content! It just works.

Go check it out for your church! If you don’t have an Amazon account, you’ll need to create one. The skill will be tied to that account, so make sure it’s an account you own.

Walkthrough: Setting up midi-relay on macOS to control Chroma Q Vista 3 with a Stream Deck over the network

I have had a few people ask if I could post another, more detailed walkthrough on setting up midi-relay to control Chroma Q Vista (formerly owned by Jands) with their stream decks.

What you will need:

  • A Mac running Vista 3 (Vista 2 will also work)
  • Node.js installed, or you can download the macOS binary release of midi-relay here: https://github.com/josephdadams/midi-relay/releases
  • Bitfocus Companion installed and running on a computer/device (it can be the same computer running Vista, or another computer on the network)

To set it all up:

  1. First, you will need to set up the loop-back MIDI port. Open Audio MIDI Setup. It’s in Applications > Utilities.
  2. In the Audio MIDI Setup window, choose Window from the top menu, then Show MIDI Studio.
  3. This opens the MIDI Studio window. You will see a few options here, such as Bluetooth, IAC Driver, and Network. Depending on how you have configured MIDI ports in the past, the number of devices here can vary.
  4. Double-click the IAC Driver device. This will open the Properties window. The main thing you need to do is check the box for “Device is online” (if it’s not already checked). You may also want to change the device name to Vista.
  5. You can close out all of the Audio MIDI Setup windows now.
  6. Now you need to start midi-relay running. Open a Terminal window and change directory to where you put the executable file for midi-relay. I put mine in a subfolder within the Documents folder. It’s important that you run the executable while the Terminal window directory is the same folder the executable is in, or things may not work correctly. Once you’ve changed directory to the correct folder, you can drag the executable file from Finder to the Terminal window, or you can type in the executable name manually. Hit enter to run it.
  7. When midi-relay starts up, it will give you a read-out in the console of all the available MIDI in/out ports. You should now have one that says Vista Bus 1.
  8. Open Vista. Go to the User Preferences menu by selecting File > User Preferences.
  9. Go to the MIDI tab.
  10. Under the MIDI Show Control section, set the Device ID to 0 (zero).
  11. Under the External MIDI Ports section, check the box next to the Vista Bus 1 MIDI port.
  12. Click OK.
  13. In Vista, right-click on the cue list you want to use with MIDI control, and choose Properties.
  14. Go to the MIDI tab.
  15. Now open the Companion Web GUI on the computer that is running Companion.
  16. Add a new instance by searching for Tech Ministry MIDI Relay.
  17. In the instance configuration, type in the IP address of the computer running Vista and midi-relay. If you’re running Companion on the same computer, you can use IP address 127.0.0.1.
  18. Click Apply Changes.

To Send a MIDI Note On and advance a cuelist:

  1. Add a new button in Companion.
  2. Add a new action to that button, using the midi-relay action, Send Note On.
  3. Under the options for this action, choose the Vista Bus 1 for the MIDI port.
  4. By default, it will send channel 0, note A0 (21), with a velocity of 100. Vista does not look for a specific velocity value, only channel and note. Vista will listen to any channel by default, but if you set a specific channel in the Vista MIDI settings, you will need to make sure you send the correct channel from Companion.
  5. Go back to Vista and, in the Cuelist Properties > MIDI tab, click Learn next to the Play item. The Play command is what advances a cuelist. The Learn function listens for incoming MIDI notes and makes setting the MIDI note slightly easier (and it proves that it works). You can also just set the note manually if you want.
  6. Go back to Companion and click Test Actions (or press the physical button on your stream deck if you are using one), and the Learn box in Vista will go away, and you’ll see that the note you sent from Companion is now populated in the Vista settings.
  7. Now every time you press that button in Companion, it will advance that cuelist. If you have multiple cuelists, you will need to use different MIDI note values.

To Send a MIDI Show Control message to go to a specific cue in a cuelist:

  1. Add a new button in Companion.
  2. Add a new action to that button, using the midi-relay action, Send MSC Command.
  3. Choose Vista Bus 1 for the MIDI port.
  4. The default Device ID is 0 (zero) but if you changed that in Vista, make sure it matches here.
  5. The Command Format should be Lighting – General and the Command should be Go.
  6. The Cue field should be the specific Cue Number in Vista of the Cuelist you want to control.
  7. The Cue List field should be the specific Cuelist Number in Vista.
  8. Now every time you press that button in Companion, it will go to that specific cue in that specific cuelist.
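
If you ever want to send that same MSC message straight to midi-relay’s API instead of through the Companion action, it’s the same kind of JSON object as a Note On. The field names here are from memory, so double-check them against the midi-relay README:

//illustrative request - verify endpoint, port, and fields against the midi-relay README
fetch('http://127.0.0.1:4000/sendmidi', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    midiport: 'Vista Bus 1',
    midicommand: 'msc',
    deviceid: 0, //matches the Device ID set in Vista
    commandformat: 'lighting.general',
    command: 'go',
    cue: '2', //the cue number in Vista
    cuelist: '1', //the cuelist number in Vista
  }),
})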

Here’s a walkthrough video of these steps:


I hope this is helpful! If you’re using midi-relay, feel free to drop a comment and share how it’s working for you!