Revisiting and Refining a Google Apps Script with the help of Generative AI

A few years ago (six years ago!), I shared a solution I came up with to create the weekly “talking points” Google documents my team relies on. We’ve been using that same Google Apps Script solution ever since. It’s been rock solid, and it saves us a lot of time compared to creating each of these documents by hand.

I decided it was time to refresh this script and the documents it creates, since we now have a third venue (at a new campus). And, when it’s time to refine – why not consult some AI in the process?

This was my starting prompt.

I started by sending ChatGPT my existing script and asking if it had any ideas to improve the prompts.

The response

We immediately got to work redesigning the script – mostly focusing on the dialogs and flow.

I came up with a basic new design that featured the church logo and a simpler header. ##VENUE## and ##DATE## are placeholders that get replaced with the actual Venue name and Date of the document.
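For the curious, the substitution itself is simple. Here’s an illustrative sketch (the function name is mine, not my actual script): in Apps Script, the same idea runs against a copied template document using `body.replaceText()`.

```javascript
// Illustrative sketch of the ##VENUE## / ##DATE## substitution.
// In the real Apps Script, the same replacement runs on a copied
// template document, e.g. body.replaceText('##VENUE##', venue).
function fillPlaceholders(text, values) {
  // Swap each ##KEY## token for its value; unknown tokens are left alone.
  return text.replace(/##(\w+)##/g, (m, key) => (key in values ? values[key] : m));
}

const header = fillPlaceholders('Talking Points - ##VENUE## - ##DATE##', {
  VENUE: 'North Venue',
  DATE: 'June 1, 2025',
});
// header is now 'Talking Points - North Venue - June 1, 2025'
```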

After some back and forth, here’s what the new dialog looks like:

This looks a lot better! I even added a progress bar:

If you’re hesitating to jump in using generative AI – give it a whirl! It can save you a lot of time and propose ideas you may not have thought about.

If you want to see my script, you can check it out here: https://github.com/josephdadams/document-generator-gas

Building a digital roster/serving board using Companion and the Planning Center Services API

If you’re involved in tech ministry and like to tinker, chances are you’ve heard of — and maybe even used — micboard.io.

This is Micboard.

Straight from their website, “Micboard simplifies microphone monitoring and storage for artists, engineers, and volunteers. View battery, audio, and RF levels from any device on the network.” It’s a neat tool and has helped a lot of teams over the years.

I always liked the idea of Micboard because it would be a great way to show who is serving that day. We tried to implement it at my church but eventually moved away from it, mainly because it hadn’t been updated in quite a while (over 6 years now), and we needed some additional features. Specifically, we were looking for integration with Planning Center Services — something that could automatically pull assignments from an interface our team was already familiar with. And – something we could use for more than just people on stage.

At first, I forked the Micboard repo (since it’s open-source) and started making improvements, cleaning up some code, and tweaking it to run more easily on modern MacOS systems. But pretty quickly, I realized I had too much on my plate to maintain a whole fork long-term.

Fast forward a year or so. I came across a few posts in some Facebook groups I’m in where people were using my ScreenDeck project to essentially create a Micboard-style interface using Companion.

I wish I had my own Acoustic Bear.

What I loved about this approach is that it leveraged something we were already using — Companion — and could still be viewed from anywhere on the network, just like Micboard. Plus, Companion supports a lot more devices beyond just Shure systems.

Even better, this opened the door to that Planning Center integration I had wanted without introducing a bunch of extra overhead — we were already using the PCO module to control our LIVE service plans!

One thing I’ve wanted for a while was a digital roster — something simple to show who’s serving each day, helping everyone put names to faces across band, tech, safety, and more. A “Serving Board,” if you will.

About a year ago, I had modified the PCO module to pull scheduled people into variables — showing their names and assigned roles. I recently took it further by adding a feedback: “Show Person Photo based on Position Name.”

Now, the module pulls the photo from the person’s assignment, converts it into a PNG, and stores it internally as a base64 image — which can be shown directly on a button.
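The conversion step is straightforward. Here’s a rough sketch (the function name is mine, and the real module also transcodes non-PNG photos first, which is omitted here):

```javascript
// Wrap raw PNG bytes as a base64 data URI that a button can render.
function toPngDataUri(imageBytes) {
  return 'data:image/png;base64,' + Buffer.from(imageBytes).toString('base64');
}

// Usage sketch (Node 18+), fetching the photo from the PCO assignment:
// const res = await fetch(person.photoUrl);
// const dataUri = toPngDataUri(await res.arrayBuffer());
```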

Pretty cool – and it looks like this:

Say “hi”, Adam.

But I didn’t want to stop there — I wanted the person’s status (Confirmed, Unconfirmed, or Declined in PCO) to show too.

Using the companion-module-utils library (thanks to another awesome Companion dev!), I added a simple colored border overlay for statuses.
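The mapping itself is simple. Here’s an illustrative sketch (the helper name and color choices are mine; the actual border compositing over the photo is handled by companion-module-utils):

```javascript
// Map a PCO scheduling status to a border color for the button overlay.
function statusToBorderColor(status) {
  switch (status) {
    case 'Confirmed':
      return 'green';
    case 'Unconfirmed':
      return 'yellow';
    case 'Declined':
      return 'red';
    default:
      return 'transparent'; // no border if the status is unknown
  }
}
```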

A few extra lines of code later:

And you can get this look!

Thanks for confirming!

At this point, it was looking great — but I started thinking:

What if I don’t want to redo all my buttons every week? What if my teams and roles change?

So I added a new option: a generic “position number” approach.

You can now pick a position number in the plan (or within a specific team) — and the module will automatically pull the right person’s info, week to week, without you having to manually reconfigure anything.

For example:

• Pick any number across the entire plan.

• Or pick a number within a specific team, like Band or Tech.

With this option, you can choose any number, regardless of the team.
This picks the first person scheduled in the band.
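Conceptually, the position-number lookup works something like this sketch (the field names are illustrative, not the module’s actual code):

```javascript
// Pick the person at a given 1-based position number, optionally
// scoped to a single team (e.g. 'Band' or 'Tech').
function personAtPosition(scheduledPeople, positionNumber, teamName) {
  const pool = teamName
    ? scheduledPeople.filter((p) => p.team === teamName)
    : scheduledPeople;
  return pool[positionNumber - 1]; // undefined if nobody is scheduled there
}
```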

I also built some Module Presets to make setting this up super easy:

Generic Position Number (no specific team)

Position Number Within a Team (like “Band” only)

Generic without regard to what Team
In this example, you can choose a number within the Band team.

And here’s where it all comes together:

Let’s say you have a “Wireless Assignments” team in PCO, and you assign a person to a position called “Wireless 4.”

Now, using the Shure Wireless module in Companion, you can match that name and see live RF and battery stats for Wireless 4 — tied directly to the person assigned!

All together, you get a clean, dynamic, reusable Micboard-style dashboard — all inside Companion, no extra tools required.

Here’s a walk through video showing it all in action:

The updated PCO Services Live module is available now in the Companion betas — go check it out if you want to try it!

Notify production team members remotely using open source software and low cost USB busy lights

At my church, we have a couple of these:

They’re great. Expensive, but they work well.

The problem for us is that anytime anyone presses the Call light on the intercom party line, any flashers on that party line will light up. This means we can really only have 1 unique flasher per line.

Sometimes, we want or need to get a specific person/position’s attention.

I created some software to help with this. It’s called beacon.

It’s a small app that runs in the system tray and hosts a network API so you can signal a USB busy light, such as the Luxafor Flag or Thingm blink(1). Or, if you don’t have or want a physical signal light, you can use an on-screen dot instead.

I’ve designed this to work in tandem with a custom module for Bitfocus Companion, but since it does have a full API, you can implement any third-party integrations that you like. All of the documentation is on the GitHub repository: https://github.com/josephdadams/beacon

You can set a beacon to stay a solid color, fade to a new color, flash a color, and more. You can send custom notifications to the user’s window as well as play tones and sounds.
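As a sketch of what driving it from your own code could look like, here’s a hypothetical request builder. The endpoint path and field names below are assumptions for illustration only; see the GitHub repository for beacon’s real API:

```javascript
// Build an HTTP request for a hypothetical beacon endpoint.
// NOTE: the path and field names here are illustrative assumptions,
// not beacon's documented API.
function buildBeaconRequest(host, port, options) {
  return {
    url: `http://${host}:${port}/beacon`,
    method: 'POST',
    body: JSON.stringify({
      color: options.color, // e.g. '#ff0000'
      behavior: options.behavior, // e.g. 'solid', 'fade', or 'flash'
    }),
  };
}

// Usage sketch (Node 18+):
// const req = buildBeaconRequest('10.0.1.50', 8080, { color: '#ff0000', behavior: 'flash' });
// await fetch(req.url, { method: req.method, body: req.body });
```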

Here’s a video of the project in action to show you how you can use it:

Go check it out today!

https://github.com/josephdadams/beacon

A new Planning Center Online Services Custom Report, supporting Split Teams

One of the first blog posts here was about PCO’s custom reports. I’ve written a lot of them and helped a lot of churches get started with their own.

In anticipation of a possible need for split teams, I’ve now created a new custom report with several customizable features, enhanced checklists, dynamic notes, and more. Everything can be customized without writing any actual code, just by modifying variables at the top of the report.

This new report supports the following:

  • Customizable header
  • Custom print order, with variable plan items as columns and/or rows alongside the plan item description
  • Dynamic checklists
  • Automatic highlighting of Plan Item Note changes to signify important information
  • Ability to display Plan Notes for everyone, by team, or by position
  • Custom CSS for your own unique look
  • Ability to show headers in their own row, or inline to save space
Here’s the report with Headers as their own rows.
Here’s the exact same report, but with headers inline for a cleaner look.

Here’s a video that shows how it all works:

Because of the substantial amount of work I have put into creating and coding this report, I have chosen to make this report available for purchase. I’m pricing it at a point that is affordable for most churches, at $45. Once payment is received, I will send over the report code and help you install it, if needed.

PCO Services Matrix Report with Split Teams, Fully Customizable

This custom report will revolutionize the way you share information with your team! Report code will be sent to the email address provided once payment is received.

$45.00

Click here to purchase.

If you have a need for a custom report beyond this, contact me! I’m always available for hire for your custom PCO reporting projects, or whatever other custom coding needs your ministry or organization may have.

Tally Arbiter 2.0 now available!

About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.

It’s now in use in hundreds of places, from churches to event venues to sports stadiums.

Version 2.0 was silently released a few weeks ago. It includes a compiled application that can run natively on Windows, MacOS, and Linux, without the need to install Node.js and other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.

Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.

It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.

Someone from the community designed a new logo! Isn’t it nice?

A few features to note since version 1.5:

  • An entirely new user interface and native applications for Windows, MacOS, and Linux
  • Easily installed from the command line via the new NPM package or Docker image
  • 1-second updates function for TSL Clients (provides compatibility with certain tally products like Cuebi)
  • Recording/Streaming statuses for OBS and VMix now available for tally states
  • Generic TCP Device Action improvements
  • TSL 5.0 source support
  • New Ross Carbonite source type to monitor any bus regardless of the “on air” settings
  • Web tally page can now be loaded directly by Device Id, and chat can be disabled
  • Pimoroni Blinkt! Listener Client
  • TTGO_T Display Listener Client
  • Improved Outgoing Webhooks – support for https and content-type selections
  • Roland Smart Tally emulation for use with STAC
  • Panasonic AV-HS10 support
  • Support for ATEM super sources in tally states
  • Bug fixes and performance improvements

If you’re new to Tally Arbiter, go check it out! You can also join the new Facebook user group here: https://www.facebook.com/groups/tallyarbiter

And to everyone in the community who has helped to make TA what it is, thank you! Your contributions are helping everyone.

PresentationBridge Client now in public release!

I shared back in the fall about my new Presentation Bridge Client software. Since that post, the software has been in a private testing period as I was getting feedback from users. And now, thanks to some help from the community, it’s ready to release!

My hope is that this software will help you be more efficient in your tech ministry, especially when you need to do a lot of things without a lot of people.

Go check it out! And, as always, feedback and contributions are welcome.

You can get the latest release here: https://github.com/josephdadams/presentationbridge-client/releases/latest

Controlling a Canon XF Series camera using a stream deck and Companion by reverse-engineering the Canon Browser Remote

It’s been a while since I posted! Earlier in the year, we had a few unexpected expenses come up in our family. I started spending my spare time in the evenings doing custom freelance programming to help meet the needs. I have been doing this for a few months now, which has helped us out.

God continues to bring new visitors to this blog and I have been able to return emails, phone calls, Zooms, and help so many people implement the ideas and software that I’ve created here. It is truly a blessing to see how God has used this little blog I started a few years ago.

I’m excited to share a new project that I have been working on with my team: Control of our Canon XF cameras through a stream deck. We have a couple of these cameras here at my church, the Canon XF 705 series:

I have been mentoring the guys who work part-time in A/V with me on how to write code, and specifically how to write modules for the Companion project that we use so heavily here. We decided it would be great to have control of these particular cameras at our shader station, alongside the shader control of our Marshall cameras (I wrote about that here) and our broadcast cameras.

These Canon cameras come with a LAN port (you can also use Wi-Fi) and run a little web server called the Browser Remote, which gives you full control of all the camera functions, from focus/zoom/iris/gain all the way to recording, white balance, and shutter control. If there’s a button on the camera, chances are you can control it from the Browser Remote. You can even see a live preview of the camera!

The built in browser remote functions of the Canon XF series.

So we started doing some digging and realized that there is an internal API on the camera that returns a lot of the data as simple JSON. Once you initiate a login request to the camera, it returns an authentication token, which must be sent along with every future request.

For feedbacks on the camera state, we simply poll the camera every second or so. The browser remote page itself seems to do this as well, so we just emulated that.
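Put together, the flow looks roughly like this sketch. The endpoint path, method names, and response shape here are illustrative guesses based on our poking around, not a documented Canon API:

```javascript
// Minimal client sketch for the Browser Remote's internal API.
// Endpoint paths and field names are illustrative, not documented.
class BrowserRemoteClient {
  constructor(host) {
    this.host = host;
    this.token = null;
  }

  // Store the token the camera hands back from the login request.
  handleLoginResponse(json) {
    this.token = json.token;
  }

  // Every subsequent request must carry the token along.
  requestUrl(path) {
    return `http://${this.host}${path}?token=${this.token}`;
  }
}

// Polling sketch (Node 18+), mimicking what the browser remote page does:
// setInterval(async () => {
//   const res = await fetch(client.requestUrl('/api/state'));
//   updateFeedbacks(await res.json());
// }, 1000);
```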

The browser remote unfortunately only allows one user at a time to be logged in, so when our Companion module is in use, the actual browser remote page can’t be used. But for our purposes, that’s not really an issue since we just want to have button control of the iris/gain functions when we use these cameras during live services. Now I don’t have to ask my operators to iris up or down, I can just do it right from the stream deck!

Here’s a little walkthrough video that shows the module in action:

The module will soon be a part of the Companion beta builds, so if you have a Canon XF series camera, go check it out!

Using a stream deck and a Raspberry Pi to create a remote control panel to adjust Marshall cameras over IP with RS-485 control

At my church, we have 4 of these cameras: Marshall CV503

Marshall CV503 Miniature Camera

We use them during services to capture shots of the instruments (drums, keys, etc.) and whatever is happening on stage. They are great little action-style cameras, and they have SDI out on them so they are super easy to integrate into our video system.

They have a lot of adjustment options via a local joystick-style controller at the camera, but obviously, that’s challenging to use during a service if we need to adjust the camera’s exposure. The menu is an on-screen display (OSD) and shows up on the live output. Plus, the cameras are all over the stage, and we can’t walk out there during a service!

While I wish they were IP-controllable directly, this particular model does not have that option. They do, however, come with RS-485 serial connectors.

So we decided to create a remote shading system using a stream deck running Bitfocus Companion. The Marshall cameras support the VISCA protocol over RS-485. In fact, if you’re a Windows user, Marshall provides free software to control the cameras over RS-485.

Marshall provides this program for control if you have Windows and want to connect your cameras directly to that computer.

We don’t use a lot of Windows computers around here, and that program requires that the computer running their software be the one physically connected to the cameras via serial. Not ideal for us because the cameras are on a stage and our computers typically are not. Marshall also actually makes a nice hardware RCP – but we didn’t want to pay for that.

So we did what you probably already guessed – put in a Raspberry Pi with a USB to RS-485 adapter that we could control remotely.

We have several wallplates across the stage with network tie lines that feed back to a patchbay in the rack room. So we made cables that connect to the RS-485 ports at each camera and then run back to a wall plate into an RJ45 port. We utilized the blue/white-blue pair on the CAT6 cable, because those are data pins in a normal network connection, which means that if someone ever accidentally connected it straight to a switch or something, there would not be any unintended voltage hitting the cameras.

Each camera is set to its own camera ID (1-4), and the matching baud rate of 9600 (the default). Then in the rack room, we made a custom loom to take the 4 connections and bring them into a jack, which then feeds into the USB to RS-485 adapter on the Pi.

The Pi is a Raspberry Pi 4 with 4GB of RAM. Honestly, for what this thing is doing, we probably could have run it off a Pi Zero, but I wanted it hardwired to my network, and the bigger Pis come with Ethernet ports built in.

I bought this adapter off Amazon:

DSD TECH SH-U10 USB to RS485 Converter with CP2102 Chip

When connected, it presents itself as serial port /dev/ttyUSB0. We originally planned to use the socat program in Linux to listen for UDP traffic coming from Companion:

sudo socat -v UDP4-LISTEN:52381 open:/dev/ttyUSB0,raw,nonblock,waitlock=/tmp/s0.lock,echo=1,b9600,crnl

To actually send the UDP data, we’re using the Sony VISCA module already built into Companion. The Marshall cameras use the same protocol over RS-485.
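Addressing is what lets all four cameras share one serial bus: the first byte of every VISCA command is 0x80 plus the camera’s ID. A small sketch (the iris-up command bytes follow the commonly published VISCA CAM_Iris documentation; verify against your own camera):

```javascript
// VISCA address byte: 0x80 | (sender << 4) | receiver.
// From a controller (address 0) to camera ID n, that's simply 0x80 + n.
function viscaAddressByte(cameraId) {
  return 0x80 + cameraId;
}

// Illustrative CAM_Iris "up" command for camera 2: 82 01 04 0B 02 FF.
const irisUp = Buffer.from([viscaAddressByte(2), 0x01, 0x04, 0x0b, 0x02, 0xff]);
```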

Using the socat method, we quickly found that it would only listen to UDP traffic coming from one instance of the module. We need 4 instances of the Companion module because we have 4 cameras, each with a different camera ID.

However, this was nothing a small Node.js program couldn’t solve. So I wrote a program that opens the specified UDP port, opens the specified serial port, and sends any data received on that UDP port straight to the serial port. You just configure a new instance in Companion for each camera, using the IP of the Pi running the udp-to-serial program and the camera ID that you configured at the Marshall camera.

Here’s a video that shows it all in action:

If you want to try this out for yourself, I’ve made the udp-to-serial repository available here:

https://github.com/josephdadams/udp-to-serial

How to create a custom Alexa Skill to play church sermons on Amazon Echo devices

We are an Amazon household. We buy stuff on Prime all the time. Sometimes, it feels like a daily task! We also really love the Amazon Echo devices and using Alexa for a variety of things. My boys love to ask Alexa to play fart sounds and we use it for music, timers, announcements, phone calls, sound machines at night, you name it.

One thing I have wanted for a while is the ability to easily play our church’s sermons on the Echo Dots in our house so I can listen while doing other things. In the past, I’ve simply played them from my phone with the Echo acting as a bluetooth speaker. That works ok until I walk out of bluetooth range, and it also means my phone is tied up playing that audio.

Amazon has made it super easy to create your own Alexa Skills, which are like voice-driven apps. You can enable and disable skills using the Alexa app, similar to how you install and uninstall apps on your phone. Using Alexa Skills Blueprints, creating your own skill for your church is a simple process.

The Alexa Blueprints home page.

There are a wide variety of blueprints available, which are basically templates to speed up creating your own skill. This is especially great if you don’t want to or don’t know how to write in the programming language yourself to figure it out.

They have a pre-made template called “Spiritual Talks”.

This is the blueprint/template that makes the process very simple!

To create your own skill, you will need:

  • Your podcast audio URL. We already post our sermons to iTunes and generate an RSS feed automatically through our church management software, Rock RMS: https://www.fellowshipgreenville.org/GetChannelFeed.ashx?ChannelId=28&TemplateId=1116&count=110
  • A Welcome message. When the skill is launched for the first time, Alexa will speak a welcome message. I used something simple: Welcome to Fellowship Greenville, South Carolina. Come and join us to worship every Sunday at 9am and 11am. Visit us any time to hear previous sermons.
  • A Returning message. When the skill is re-opened, Alexa will speak a welcome-back message. Here is what I used: Welcome back to Fellowship Greenville’s Sunday morning sermons podcast.
  • A skill name and logo. I used our church’s name and logo for this.

Once you’ve supplied all the information, you will want to publish the skill to the Alexa Skills Store. Someone will review it and once it’s approved, it will be publicly available. You can also privately share the skill if you don’t want to go through the publication process. I think they said to allow for 2 business days but mine was approved a lot faster than that. You can also make changes to the skill any time you want, but it will have to go through the re-approval process each time you make a change that you want made public.

Now, if people in our church want to use the skill, they just have to open the Alexa App on their phone, search for Fellowship Greenville in the Skills Store, and enable it.


Then, they can say things like:

  • “Alexa, open Fellowship Greenville”
  • “Alexa, ask Fellowship Greenville for the latest message”
  • “Alexa, start Fellowship Greenville”


So far, it’s working pretty great for us! I am excited about adding this feature for our church as I am always looking for ways to make our sermon content more accessible. The nice thing about this is that it uses our existing podcast feed, so I don’t have to do any extra work each week for the skill to get the latest content! It just works.

Go check it out for your church! If you don’t have an Amazon account, you’ll need to create one. The skill will be tied to that account, so make sure it’s an account you own.

ProPresenter 7 and the Top 8 Features I would like to see

If you are a user of Renewed Vision’s ProPresenter software, hopefully by now you’ve heard that they just released version 7 for both MacOS and Windows.

ProPresenter 7.

The new version is more similar between the two operating systems than ever before, and there are a lot of new features, most notably the UI design. One other enhancement that I am excited about is that all of the add-on modules (alpha keyer module, communications, MIDI, SDI/NDI output, etc.) are now included as part of the software license. This will be great for us, because now we can have these features available on all of our ProPresenter installs, whereas in the past, the pricing model was a limitation for us.

I have been slowly checking out the new version and we will be purchasing an upgraded license soon to roll this out in our various venues within the coming months.

With all of the new features that ProPresenter has, I thought it would be fun to include the Top 8 Features of ProPresenter that I hope to see implemented. Here they are, in no particular order:

  1. Tally Integration. If you’ve followed this blog, you have probably seen where I’ve mentioned the ProTally software I created to help fill in the gap here so our volunteers could know when their ProPresenter output was on-air. So while tally protocol support (whether it be TSL or data coming directly from something like an ATEM switcher) would likely render tools like ProTally obsolete for a lot of use cases, it would make the experience so much better for the end user, and I’m definitely a fan of that.
  2. HTTP GET/POST slide cues. This would be awesome. Some people do a workaround right now where they put a “web element” on a slide and make it invisible, but a true communication cue to send GET/POST (along with JSON data) whenever I click on a slide would be a great way to open up some automation efforts to trigger other software.
  3. Hide Audio Bin / Re-arrange the interface. This is a simpler one, but the ability to hide the audio bin that we aren’t likely to use as well as being able to re-arrange the UI would be nice to have.
  4. Customizable border on the current active slide. A lot of our volunteers have expressed that it would be nice to have a way to quickly see which slide is active, and sometimes the current border box around the active slide isn’t easy to see. So a way to make that border thicker, change the color, make it blink, etc. would be a nice feature.
  5. A built-in, free, amazing sync option. I’ve written about how we currently do cloud syncing in ProPresenter by using Dropbox and sharing all the libraries to all the machines. It works fine for what it is. But a way to truly share playlists, themes, media, etc. from one ProPresenter install to another, built in, would be awesome, especially if it could use the drive/file sync tools we already use, like Dropbox.
  6. Go To Next Timer showing a countdown. Another simpler one, but it would be really nice if, any time a slide was on an advance timer, the UI showed how much time was left before it advanced (in minutes/seconds).
  7. Web interface to show slide information, clocks, etc. A page where I can view the slides, the current/next slide, timers, messages, etc. A “producer’s page” of sorts. Right now, we use PresentationBridge for this. We would keep this web page open in our control rooms for the director to see so they know exactly where we are at in a presentation or song.
  8. Published and supported REST API. It would be great to have a published and supported interface where we can control ProPresenter remotely. A lot of people have done great work to reverse-engineer the ProRemote app, and that protocol is getting a lot of use through projects like Companion. But something officially documented and supported would be truly great. And on that note, some kind of official support for stream decks would be great too! Whether it is acknowledgement of the Companion project or another avenue.

So there’s my top 8 feature requests! I’m excited about this new version of ProPresenter, because with their ProPresenter+ plan, we are going to see more regular feature updates. If you haven’t checked it out yet, you can demo it for free!