ProPresenter 7 and the Top 8 Features I would like to see

If you are a user of Renewed Vision’s ProPresenter software, hopefully by now you’ve heard that they just released version 7 for both MacOS and Windows.


The new version brings the two operating systems closer to feature parity than ever before, and there are a lot of new features, most notably the redesigned UI. One other enhancement I’m excited about is that all of the add-on modules (alpha keyer, communications, MIDI, SDI/NDI output, etc.) are now included as part of the software license. This will be great for us, because now we can have these features available on all of our ProPresenter installs, whereas in the past, the pricing model was a limitation for us.

I have been slowly checking out the new version, and we will be purchasing an upgraded license soon to roll it out to our various venues in the coming months.

With all of the new features ProPresenter 7 brings, I thought it would be fun to list the top 8 features I still hope to see implemented. Here they are, in no particular order:

  1. Tally Integration. If you’ve followed this blog, you have probably seen me mention ProTally, the software I created to fill this gap so our volunteers could know when their ProPresenter output was on-air. Native tally protocol support (whether TSL or data coming directly from something like an ATEM switcher) would likely render tools like ProTally obsolete for a lot of use cases, but it would make the experience so much better for the end user, and I’m definitely a fan of that.
  2. HTTP GET/POST slide cues. This would be awesome. Some people work around this today by putting an invisible “web element” on a slide, but a true communication cue that sends a GET/POST request (along with JSON data) whenever I click a slide would be a great way to open up automation and trigger other software.
  3. Hide Audio Bin / Re-arrange the interface. This is a simpler one, but the ability to hide the audio bin (which we rarely use) and to re-arrange the UI would be nice to have.
  4. Customizable border on the current active slide. A lot of our volunteers have expressed that it would be nice to have a way to quickly see which slide is active, and sometimes the current border box around the active slide isn’t easy to see. So a way to make that border thicker, change the color, make it blink, etc. would be a nice feature.
  5. A built-in, free, amazing sync option. I’ve written about how we currently do cloud syncing in ProPresenter by using Dropbox and sharing all the libraries to all the machines. It works fine for what it is. But a way to truly share playlists, themes, media, etc. from one ProPresenter install to another, built in, would be awesome, especially if it could use the drive/file sync tools we already use, like Dropbox.
  6. Go To Next Timer showing a countdown. Another simpler one, but it would be really nice if, any time a slide was on an advance timer, the UI showed how much time was left before it advanced (in minutes and seconds).
  7. Web interface to show slide information, clocks, etc. A page where I can view the slides, the current/next slide, timers, messages, etc. A “producer’s page” of sorts. Right now, we use PresentationBridge for this. We would keep this web page open in our control rooms for the director to see so they know exactly where we are at in a presentation or song.
  8. Published and supported REST API. It would be great to have a published and supported interface where we can control ProPresenter remotely. A lot of people have done great work to reverse-engineer the ProRemote app, and that protocol is getting a lot of use through projects like Companion, but something officially documented and supported would be truly great. And on that note, some kind of official support for Stream Decks would be great too, whether through acknowledgement of the Companion project or another avenue.

So there’s my top 8 feature requests! I’m excited about this new version of ProPresenter, because with their ProPresenter+ plan, we are going to see more regular feature updates. If you haven’t checked it out yet, you can demo it for free!

Live Camera Production: A Technical Walkthrough of our Video System

I talk about programming and software and building solutions here a lot, but I thought I would write a post about something else I’m passionate about: live camera production. At my church, for the last 15 years or so, I’ve had the pleasure of directing cameras for the annual Christmas program. We call the program “Jingle Jazz” because the music mostly centers around a jazz format. In church terms, it’s an “invest and invite” event where people can bring their friends, neighbors, and co-workers for a great first exposure to Fellowship Greenville and have a fun, relaxing evening filled with various styles of music.

This is one of a small handful of times a year where we get to maximize the potential of our volunteers and systems and put it all to the test. I always try to challenge myself to make it better than the year before, whether that’s adding more cameras, equipping volunteers, or even automating something.

In years past, I had a huge role writing scripts, creating and producing videos, and working late night after late night, putting “all of me” into this event to make it happen! In recent years, the workload has been balanced much better, and my job role has shifted some, so I no longer have to pull so many late nights before the event. I did still manage to get in almost 16,000 steps one day last week, though!

The steps I took in one day. Pretty high for me!

This year, I set a target of 14 cameras. I put out a call for volunteers, and 11 people signed up! We used our primary auditorium cameras, older (12-15 year old) cameras with component-to-SDI adapters, borrowed production equipment from the communications department, rented 4 cameras, and I even traded some of my programming time to a local university in return for borrowing some cameras and lenses. Overall, I felt like we kept costs down by being good stewards of what we already had and renting where needed.

One thing I did in advance that really helped me to succeed was to plot all of my patching across patchbays and plates in a spreadsheet. It helped me think through all the limitations I might face, especially when multiple cameras needed a signal/data cable as well as genlock/reference. Some of the cameras didn’t support genlock, so I had to frame sync those within the switcher. The Ross Carbonite switcher has 6 frame syncs, so after I ran out of syncs on the switcher for Auditorium 1, I actually sent signal to our switcher for Auditorium 2, synced them, and sent them back to the other switcher on aux sends! It took both control rooms to be able to pull off this many cameras, primarily because of the camera equipment we had available.

This spreadsheet kept me in line to make sure I didn’t forget to patch anything!

For intercom, everyone was on wired Clear-Com. We used a combination of belt packs we already had, plus adapters I made to work with some older equipment we used to use. The two mobile stage cameras used Unity Intercom, bridged to our Clear-Com system.

The view during rehearsal.

This year, I knew I wanted to record what I call the “tech cut”, which combines the multiviewer feed with the intercom chatter, so we could save it for review and training. The Carbonite switcher has two multiviewer outputs, so I dedicated one of them to viewing all 14 cameras for the recording.

Another view of the control room.

Because the multiviewer boxes were so small, I wanted a way to see any camera on a larger screen, so I rolled in a TV cart, patched it to a MiniME output, and controlled it from a Stream Deck (with Companion). Using Custom Controls, I was also able to have the multiviewer show a white box around whatever source was active on that TV cart. Here is a video of that in action:

[wpvideo z5nxUcc4]

One thing I’m really glad we did this year was to treat the LED wall we have center stage as more of a lighting/stage element than something our video team needed to drive. It was nice because the lighting guys controlled it all, and I didn’t have to think about it! We used PVP with motions and Christmas-themed b-roll on the screen most of the time, occasionally cutting to a graphic as needed.

Like any service, it takes people to make it happen. We have great volunteers and staff here that I get to work with and lead.

The crew!

As we wrapped up this event and I watched everyone serving with such joy, even after a lot of late nights, I was reminded of this quote from author Simon Sinek:

When we work hard on something we don’t believe in, it’s called stress. When we work hard on something we believe in, it’s called passion. – Simon Sinek

Working and serving in tech ministry has to come from a place of passion, or it will always be stressful. Colossians 3:23-24 says, “Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.”

May we always work heartily on what we believe in. Not just programming, software, or live production, but seeing God transform lives, and people pursuing life and mission with Jesus.

If you’d like to watch our tech cut, here it is!

Controlling a Roland V-60HD video switcher with a Stream Deck and Companion

A couple of weeks ago, I was contacted through the blog by Tony Perez, a longtime staff member at Calvary Chapel in Las Vegas. He asked if I could help their team control their Roland V-60HD switcher from a Stream Deck using Companion.

God has given me a heart and passion to be a resource for other churches, so I jumped right in and started reading the TCP protocol specification for their video switcher. The protocol was simple enough: essentially a telnet-style protocol that sends commands with a terminating character to mark the end of each one.

This is the Roland V-60HD video switcher.

I had to take a sick day recently to take care of one of my kids who had an ear infection, so while he was resting, I sat down and prototyped a module for Companion to control their video switcher.

Tony and I then set a time to talk on the phone and do a TeamViewer session, and after doing some slight debugging, we had it working!

The protocol is pretty straightforward. For example, with this command:

\u0002CUT;

The switcher will perform a cut between the current on-air source and the preview source. “\u0002” is the ASCII control code 02H, which tells the switcher that a command code is coming. “CUT” is the command, and the semicolon terminates it.
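
Sending a command like this from Node.js only takes a few lines. Here is a minimal sketch; the IP address is a placeholder for your switcher, and the port (8023) is my recollection of the V-60HD’s LAN control port, so check your manual:

[code language="JavaScript"]
// Minimal sketch: open a TCP socket to the switcher and send one command.
const net = require('net');

const socket = net.connect(8023, '192.168.1.50', () => {
  socket.write('\u0002CUT;'); // STX (02H) + command name + terminating semicolon
});

socket.on('data', (data) => {
  console.log('Switcher replied:', data.toString()); // ACK or error response
  socket.end();
});
[/code]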

We were able to implement every video-related operation and some of the system operations that seemed necessary to control remotely from a Stream Deck.

So, with just a few short hours of work, his team can now control their Roland V-60HD video switcher from anywhere on their network! This will be a great help and add to their flexibility.

You can see some of the options available for the module in this screenshot.

This was a fun project to help with, especially since I had never seen or used this particular video switcher before, and I was able to help a ministry on the other side of the country.


The module is open-source and part of the Companion project now, so anyone else who has this switcher can jump in and use it too! You can view the module code here.

Sending automated reminders via a Slack webhook, AppleScript, and launchd on macOS

I have always enjoyed finding ways to automate processes, especially ones that don’t require much user interaction but just need to be done at a certain time or at regular intervals. At one of my first jobs out of high school, I wrote software to automate a job for one of the clients that normally took 2.5 days by hand, taking the process down to 30 minutes, including filling out all the paperwork. Of course, the company didn’t like losing those billable hours, but it was hard to argue with the efficiency.

At my church, we have a few computers with limited drive space, and that space always fills up fast! In the past, I would check the drives periodically and either delete old files or move them off to other storage. I sat down recently and decided to take that a step further: I wanted to be notified to check a drive only when it had filled past a certain threshold.

I’ve been playing around with Slack recently for a project at home that notifies me when my laundry is finished. If you’ve not heard of Slack, it is a collaboration/communication tool that integrates with lots of other platforms; it’s like a work-specific chatroom on steroids. One of the ways you can use it is with custom apps and webhooks, which provide an easy way to send data and interact via a custom URL.

I won’t delve into setting up Slack and webhooks here, but I did want to share how I accomplished my goal of only getting notified when a drive fills past a certain amount. I used AppleScript and the launchd framework built into macOS.

If you’ve been on the Mac platform for a while, you’ve no doubt heard of and maybe even used AppleScript. It’s a great way to interact with Mac apps and the system as a whole, so you can automate all kinds of things.

launchd, as defined by Apple, is “a unified, open-source service management framework for starting, stopping and managing daemons, applications, processes, and scripts.” This framework is always working in the background on macOS, whether you know it or not!

So, I sat down and wrote an AppleScript that does the following:

  • Polls the system for the available space on the hard drive(s) I specified
  • If the space remaining is at or below a certain amount, sends a webhook request to my Slack app with a custom message reminding me to clear up that particular drive (a minimal sketch follows below)

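Here is a minimal AppleScript sketch of that idea. The webhook URL, drive, and threshold are placeholder assumptions, not necessarily what my actual script (up on Github) uses:

[code language="applescript"]
-- Sketch: check free space and post to a Slack incoming webhook if it's low.
set webhookURL to "https://hooks.slack.com/services/XXX/YYY/ZZZ" -- placeholder URL
set thresholdGB to 50 -- placeholder threshold

tell application "Finder" to set freeBytes to free space of startup disk
set freeGB to freeBytes / 1.0E+9

if freeGB < thresholdGB then
	set msg to "Drive is low on space: " & (round freeGB) & " GB free. Time to clean up!"
	set payload to "{\"text\": \"" & msg & "\"}"
	do shell script "curl -s -X POST -H 'Content-type: application/json' --data " & quoted form of payload & " " & webhookURL
end if
[/code]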

Now, to schedule it. In the past, I used the built-in iCal/Calendar app on macOS for this sort of thing. It worked OK, but I found that scheduled events sometimes simply didn’t run for whatever reason. So, I decided to use a different method and take advantage of the launchd framework built into the operating system. There’s a lot you can learn about launchd on macOS, but I’ll summarize it here:

  • You can run processes as daemons, which run at the system level, not the user level
  • You can run processes as agents, which run at the user level
  • You can have them run when the system loads, or you can schedule them
  • Where you place the file containing the instructions for your script determines whether it runs as a daemon or an agent

I chose to have mine run on a schedule every day at 7am, and send me an alert if the drive(s) are too full. I didn’t need it to run at the system level, so I made it an agent.

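The file macOS looks at to schedule the script is a small XML property list. Here is a minimal sketch of what such an agent definition can look like; the label and script path are placeholders, and only the 7am schedule matches what I described above:

[code language="xml"]
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- Placeholder label and script path; adjust for your own setup -->
	<key>Label</key>
	<string>com.example.check-drive-space</string>
	<key>ProgramArguments</key>
	<array>
		<string>/usr/bin/osascript</string>
		<string>/Users/yourname/Scripts/check-drive-space.scpt</string>
	</array>
	<key>StartCalendarInterval</key>
	<dict>
		<key>Hour</key>
		<integer>7</integer>
		<key>Minute</key>
		<integer>0</integer>
	</dict>
</dict>
</plist>
[/code]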

Once I placed this file in my ~/Library/LaunchAgents/ folder (my main user account’s launch agents folder) and restarted the computer, it was ready to go! I’m looking forward to not having to remember to check those drives manually anymore. I’ll automatically get a notification on my phone when I need to clear up space!

This is what the alert looks like on my phone.

I hope this helps you! If you want any of the scripts, they’re up on Github.

Controlling Planning Center LIVE with a Stream Deck

In my last post, I mentioned a great tool, Companion, that integrates with the Elgato Stream Deck. I’ve had the opportunity to write a few modules for it to extend its control capabilities, like controlling a CueServer, or my own software, ProTally.

If you work in tech for a church, chances are that you use, or have at least heard of, Planning Center Online to manage your worship services and people. PCO has a feature for their Services product called Services LIVE that allows you to designate where you are in a service flow while the service is ongoing, which updates anyone who may be watching. It also records the times, so you can look back later and answer questions like “Did that song we said would take 5 minutes actually take more like 6 minutes and 30 seconds?” It’s a very useful tool.

The interface to advance a LIVE plan, however, has not been the best for our volunteers. Even within the PCO app, the buttons to advance a plan to the next item are rather tiny, and some of my team have trouble knowing whether or not they hit the button.

This is the standard PCO Live interface. The small double arrows at the bottom left and right of the screen are the controls. Our volunteers have a hard time pressing these.

One thing that makes Planning Center Online great is that they love developers, and they’ve made a very extensive Application Programming Interface (API) available for anyone to use. This means you can get access to your service and plan data without having to actually click and browse the website.

I delved into that API this past week and used it to create a new module for Companion. One caveat of using the API is that in order to advance a live plan, you have to know both the service type ID and the plan ID, which requires traversing the API data and making multiple requests. If you’re a programmer, this makes sense. If you’re just an end user, it may not be as straightforward. So, I set out to make something easy enough for anyone to use.

Here is a walkthrough video on how the module works:

[wpvideo hyQpcLnH]

What it actually does:

  1. When you first load the module and supply it with the authentication tokens, it requests all of the available service types and stores that internally.
    https://api.planningcenteronline.com/services/v2/service_types
  2. Then it asks for the next 7 upcoming plans for each service type based on the list that was just retrieved. This is then used to build the drop down list so you can choose your plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans?filter=future&per_page=7
  3. When you send a “Previous” or “Next” command, it first asks for the LIVE information for that selected plan.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live
  4. It checks for who the current controller of the plan is, and compares that to an internal variable in Companion that represents the owner of the authentication token.
  5. If the current controller is null, a command is sent to toggle control to the token owner, and the returning value of the current controller is stored in that internal variable so we know who “we” are for next time.
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/toggle_control
  6. If the current controller is not null, a toggle command is sent to release control of the plan to no one, and then a toggle command is immediately sent again so that control is toggled to us. The reason for this is that if our authentication key is not the current controller, the API will return an error when we try to advance the plan.
  7. Now that we know we are in control, the current controller value returned by the API is stored as an internal variable, and then the next or previous command is sent to advance the plan (see the sketch after this list).
    https://api.planningcenteronline.com/services/v2/service_types/${serviceTypeId}/plans/${planId}/live/go_to_next_item
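
In code, the control-toggle logic (steps 3-7) looks roughly like this. This is a sketch, not the actual module source: the helper names, the use of Node’s global fetch (Node 18+), and the assumption that the live resource exposes the current controller at data.links.controller are all mine:

[code language="JavaScript"]
// Sketch of the "take control, then advance" flow against the PCO Services API.
// Authentication is HTTP Basic with a Personal Access Token (app ID + secret).
const BASE = 'https://api.planningcenteronline.com/services/v2';
const AUTH = 'Basic ' + Buffer.from(process.env.PCO_APP_ID + ':' + process.env.PCO_SECRET).toString('base64');

let myController = null; // remembered so we know who "we" are next time

function pco(path, method = 'GET') {
  return fetch(BASE + path, { method, headers: { Authorization: AUTH } })
    .then((res) => res.json());
}

async function goToNextItem(serviceTypeId, planId) {
  const livePath = `/service_types/${serviceTypeId}/plans/${planId}/live`;
  const live = await pco(livePath);
  const controller = live.data.links.controller; // assumption about the response shape

  if (controller === null) {
    // No one is in control: toggle control to our token's owner
    const toggled = await pco(livePath + '/toggle_control', 'POST');
    myController = toggled.data.links.controller;
  } else if (controller !== myController) {
    // Someone else is in control: release it, then immediately take it back,
    // because advancing a plan we don't control returns an error
    await pco(livePath + '/toggle_control', 'POST');
    const toggled = await pco(livePath + '/toggle_control', 'POST');
    myController = toggled.data.links.controller;
  }

  return pco(livePath + '/go_to_next_item', 'POST');
}
[/code]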

Because Node.js is asynchronous by nature, all of this is done through Promises (as in the sketch above), which are similar in concept to callback functions but allow for cleaner, easier-to-read code.

So, stay tuned for this module to become available in the next stable release of Companion, and if you’re willing to try it out in development mode, it’s available now!

The PCO Live module in action!

Using a Stream Deck as a Production Controller, Revisited

One of my first posts on this blog detailed how I wrote software in Node.js to interface with an Elgato Stream Deck to control some of our production equipment, interfacing with the video switchers, router, Ross Dashboard, etc. It’s time to revisit that.

We’ve been using my controller now every week in our control rooms and tech booths for about a year. My team loves it. It integrates into our centralized production workflow, where each deck sends commands to a central Dashboard panel, which runs the command, and then sends out updates to all the connected stream decks.

However, I haven’t had much time to make it a better product for other people. I wrote support for the Stream Deck Mini when that was released, but that’s about it. I haven’t had time or cause to do much else with it. So, for that reason, I wanted to share with you a piece of software that is under constant, active development: Bitfocus Companion.

Companion is written in Node.js and packaged in Electron just like my product, so it can run on Mac, Windows, or Linux. But it can do so much more than my controller! One of the best features is its web-based management interface, so you can add actions to buttons easily and on the fly. It supports a ton of production equipment, and chances are good that your gear is already on the supported list; if not, perhaps someone can create a module for it.

I was recently asked to join the Companion development team, so I’ve started making some modules to integrate with software and gear that we have. I’ve created a module for Interactive Technologies’ CueServer, which we have in a couple of our venues here.

Here are some actions you can perform on a CueServer now with the module I created for Companion.
An example of a key down action for triggering a CueServer macro in Companion.

If you use ProTally, my on-screen tally box notification software, and want to integrate it with Companion, I made a module for that too! Make sure to download the latest ProTally release, which supports this feature. With Companion, in addition to Preview and Program windows, you can also send a Beacon, which flashes at a custom rate and color. Check out this video for a demo:

[wpvideo 0Xy1IvWn]

Both of these modules are available in the bleeding edge builds of Companion and will be included in the next stable release soon.

So, if you’re looking for a great production controller that integrates with the Stream Deck, go check out Companion! It’s only going to get better from here!

 

ProTally 1.4, with custom colors per tally box, now available

If you’re using ProTally, I’ve just released a new version that supports custom colors for each tally box, rather than global colors applied to all boxes.


You can download the latest version on Github: https://github.com/josephdadams/ProTally/releases

Using Node.js and a Raspberry Pi to monitor Streaming ACN network for DMX changes and trigger actions

A while back, I wrote about the shade controller I created using Node.js and a USB relay running on a Raspberry Pi Zero. It works great; we can raise and lower the shade from anywhere on the network. However, I’ve always wanted a way to control it a little more automatically. The lighting volunteer is typically the person who operates the remote for the shade, so I really wanted to automate that part of the process for them, so the shade can raise and lower exactly when we want it to, without them having to use an extra tool or device.

As I was working on some networking changes to one of our lighting consoles (we use Jands L5 consoles running Chroma-Q’s Vista 3), I had an idea: what if we could monitor the Streaming ACN lighting network for data changes, just like any lighting node does, and use that to trigger an action?

If you’ve not heard of Streaming ACN (sometimes called sACN, or by its official name, E1.31), it is an Ethernet-based protocol for sending DMX address and value information from a lighting console to receiver nodes, which then relay the DMX information to lighting fixtures. It uses multicast traffic to send the information, so it is very fast and efficient. At my church, we have several DMX universes of lighting information going over the network for each auditorium, controlling all of the light fixtures.

Luckily for me, a base protocol module for E1.31 was already available for Node.js. Using that module, I sat down, prototyped a solution, and had something working in just a couple of hours. I’m calling my software sACN Translator, and I’ve deployed it to a Raspberry Pi for production. It supports a simple REST API that lets you control which universes it listens to, as well as the fixtures to run triggers for. I also created a simple web interface which utilizes this API.

Here is the simple web interface which interacts with the REST API.

Here is how I set it up on our system to trigger the shade controller. I started by adding two fixtures to the L5 console on Universe 1 (where I happened to have some spare room in my DMX addresses). I called these fixtures “Shades Up” and “Shades Down”, with DMX Addresses 511 and 512.

Here are the two “fixtures” on the layout, with notes attached.
I labeled the fixtures as generic “utility” fixtures with 1 DMX address each.

Then, I added entries in sACN Translator to monitor Universe 1 on the network and look for value changes at fixture addresses 511 and 512, with an HTTP trigger to run any time the value reaches 255 (100%). So, when I put the Shades Down fixture at 100% on the lighting console, the software sees that value, looks for a match in its list of fixtures, and runs the corresponding HTTP request against the Raspberry Pi Zero connected to the USB relay, which triggers the action that lowers the shade.
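
The core of that idea fits in a few lines. Here is a minimal sketch using the e131 npm package (a base protocol module like the one mentioned above); the trigger URLs are placeholders, and a real implementation would also want to trigger only on a value change rather than on every packet:

[code language="JavaScript"]
// Listen for multicast sACN packets on universe 1 and fire HTTP triggers
// when the watched DMX addresses hit full (255).
var e131 = require('e131');
var http = require('http');

var server = new e131.Server([1]); // universe 1

server.on('packet', function (packet) {
  var slots = packet.getSlotsData(); // 512 bytes of DMX values (zero-indexed)

  if (slots[510] === 255) { // DMX address 511 at 100% = "Shades Up"
    http.get('http://shade-pi.local/api/shade/up'); // placeholder trigger URL
  }
  if (slots[511] === 255) { // DMX address 512 at 100% = "Shades Down"
    http.get('http://shade-pi.local/api/shade/down');
  }
});
[/code]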

Here is a video of it in action:

[wpvideo vpTSk2WQ]

Pretty cool! I decided to use separate fixture addresses for each trigger action, but I didn’t have to. I could have used just one fixture and watched for two separate lighting values.

So now, all the operator has to do is run the cues like normal, and the programming will do the rest! I’ve made this software available for free on my Github repository. Let me know how it works for you!

Using Google Apps Script with user input to automate repetitive tasks in Google Docs

Do you ever find yourself doing repetitive tasks over and over again in Google Docs? (Or any of the Google suite of apps?) I sure do. At my church, we create a Google Doc every week for all of the “talking points”, the parts of the service that aren’t song or sermon, where we script out what someone needs to say or communicate during that portion.

Here is a sample document that we use each week.

A couple years ago, I started creating template files to help my team do this every week, because having the template already there with some common headers, the service date, etc. removed the barrier to getting down to writing the actual words. Creating the files wasn’t too complicated, and after a while, I started making them in bulk: I would sit down and make 3-4 months’ worth of documents at a time, copying my master template, then editing each new file to update the date, and so on. Then we added a second auditorium, which doubled the number of documents I needed to create.

With the new year, it was time to create more documents, so I decided this time around that I would create a script to help automate this task using the framework within Google Apps Script.

If you’ve not heard of or used Google Apps Script (GAS), it’s a scripting language based on JavaScript for lightweight application development. All of the code runs on Google’s servers to interact with your documents. If you’ve ever used an “add-on” in Google Apps, it’s using this scripting framework.

It’s pretty easy to use if you know JavaScript, and it’s easy to get started. From any document, just go to Tools > Script Editor. This opens a new tab where you can start writing Apps Script.

Here is my script:

[code language="JavaScript"]

function myFunction()
{
  var ui = DocumentApp.getUi();

  var templateDocId = '[templateid]'; // put the document ID of the master template file here

  var prompt_numberOfDocs = ui.prompt('How many Talking Point Documents do you want to create?');
  var prompt_startingDate = ui.prompt('What is the starting date? Please enter in MM/dd/yyyy.');

  var numberOfDocs = parseInt(prompt_numberOfDocs.getResponseText());
  var startingDate = prompt_startingDate.getResponseText();

  var prompt_venueResponse = ui.prompt('Venue', 'Create Documents for both Auditoriums? If no, please type in the Venue Title and click "No".', ui.ButtonSet.YES_NO);

  var venueTitle = '';
  var bothAuditoriums = true;

  if (prompt_venueResponse.getSelectedButton() == ui.Button.NO)
  {
    venueTitle = prompt_venueResponse.getResponseText();
    bothAuditoriums = false;
  }

  var date = new Date(startingDate);

  var htmlOutput = HtmlService
    .createHtmlOutput('<p>Creating ' + numberOfDocs + ' documents. Please stand by…</p>')
    .setWidth(300)
    .setHeight(100);

  ui.showModalDialog(htmlOutput, 'Talking Points – Task Running');

  for (var i = 0; i < numberOfDocs; i++)
  {
    var loopDate = new Date(date.getTime() + ((i * 7) * 3600000 * 24)); // starting date plus (i * 7) days, as a new date object
    var documentName = 'Talking Points – ' + Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MMMM dd, yyyy");
    var documentDate = Utilities.formatDate(loopDate, Session.getScriptTimeZone(), "MM/dd/yyyy");

    if (bothAuditoriums)
    {
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 1)', 'Aud 1', documentDate);
      createNewTalkingPointDocument(templateDocId, documentName + ' (Aud 2)', 'Aud 2', documentDate);
    }
    else
    {
      documentName += ' (' + venueTitle + ')';
      createNewTalkingPointDocument(templateDocId, documentName, venueTitle, documentDate);
    }
  }

  // Replace the progress dialog with one whose script immediately closes it
  htmlOutput = HtmlService
    .createHtmlOutput('<script>google.script.host.close();</script>')
    .setWidth(300)
    .setHeight(100);
  ui.showModalDialog(htmlOutput, 'Talking Points – Task Running');
}

function createNewTalkingPointDocument(templateDocumentId, documentName, venueTitle, documentDate)
{
  // Make a copy of the template file
  var documentId = DriveApp.getFileById(templateDocumentId).makeCopy().getId();

  // Rename the copied file
  DriveApp.getFileById(documentId).setName(documentName);

  // Get the document body as a variable
  var body = DocumentApp.openById(documentId).getBody();

  // Insert the entries into the document
  body.replaceText('##Venue##', venueTitle);
  body.replaceText('##Date##', documentDate);
}

[/code]

Once you have a script in place, you can choose triggers for when it should run, like when it is opened, or on a schedule, etc.
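
For example, a simple onOpen trigger can add a custom menu that runs the script on demand (the menu and item names here are just illustrative):

[code language="JavaScript"]
// Runs automatically when the document is opened, adding a menu entry
// that lets anyone on the team kick off the document-creation script.
function onOpen()
{
  DocumentApp.getUi()
    .createMenu('Talking Points')
    .addItem('Create documents...', 'myFunction')
    .addToUi();
}
[/code]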

Here is the new template with the script in action:


First, I ask how many documents should be created. 1, 5, 500, whatever I need.


Next, I ask for the starting date. We specifically use these for Sunday services, so I’ve programmed the script to take this starting date and then calculate every 7 days when creating multiple documents.


Then, I ask the user if they want to create documents for both auditoriums, or if this is for a special service or off-site service, etc. Typically we want them for both auditoriums, but the one-off feature makes things easy for those types of services too.


As the script runs, it displays a dialog box. Creating that many documents can take a while, and I wanted the user to be aware of this. The box goes away automatically when the process is completed.

Now that we have this, I can pass the task on to anyone on our team, anytime they need these documents! And it saves a good bit of time: I definitely spent less time creating this script than I would have spent creating 3-4 months’ worth of documents manually, and now I never have to do it again!

How can you use Google Apps Script to automate some of your more repetitive tasks?

Sharing ProPresenter lyrics to multiple clients through the web browser in real time using Node.js, socket.io, and Amazon EC2

Every year, my church has a “night of worship”, a worship service in the heart of the city at an outdoor stage, where we sing songs for a couple of hours. Because it doesn’t get dark enough to use projectors for lyrics until the service is almost over, in the past we have relied on small flat-screen TVs to show some words for people to follow along: big white letters on a black background, nothing fancy. Of course, it’d be great if we could just rent an LED video wall, but that has always been too expensive for us.

You can see the screens we rented here. Pretty small (60″) for such a large crowd.

So, I had an idea: What if we could somehow send the lyrics out of ProPresenter to everyone’s phones, in real time, and let them use their own screens to follow along?

I gave myself a couple of limitations:

  • It needed to work in the standard phone browser so there was no barrier of installing a particular app
  • It needed to be real time or as close to it as possible

A while back, I started tinkering with the undocumented ProPresenter API. I say undocumented because it is not officially offered as a way to access ProPresenter data and control it. Some people have done a great job of figuring out how ProPresenter sends data over the network between its apps, which allows us to extend the software to meet unique needs. Basically, by using websockets, we can interact with ProPresenter, which returns JSON-formatted data reflecting information about songs in the library and playlists, the index of the current song, the current and next slide information, etc.
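
To give a flavor of it, here is a sketch of the community-documented handshake using the ws npm package. The path, protocol number, and message format come from those reverse-engineering efforts and can vary between ProPresenter versions; the host, port, and password are placeholders:

[code language="JavaScript"]
// Connect to ProPresenter's remote websocket and authenticate.
const WebSocket = require('ws');

const ws = new WebSocket('ws://10.0.1.10:50001/remote'); // placeholder host/port

ws.on('open', () => {
  // Community-documented authentication message for the remote protocol
  ws.send(JSON.stringify({ action: 'authenticate', protocol: '600', password: 'control' }));
});

ws.on('message', (message) => {
  const data = JSON.parse(message);
  console.log('Received:', data.action); // slide changes, playlists, etc. arrive as JSON
});
[/code]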

I created a local Node.js project, and in just a few hours, I had something ready to alpha test! My approach was to have a web page open in a browser on the local network that could poll and listen for changes from ProPresenter, and then relay that new data to a web server. That web server would then relay those changes to all of the connected clients, much like a chat server sends a message to everyone.
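
Here is a minimal sketch of that relay pattern using the socket.io library (this is not the actual Presentation Bridge code, and the event name is my own):

[code language="JavaScript"]
// Relay server: whatever the bridge page sends gets rebroadcast to everyone.
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  // The bridge page emits this whenever ProPresenter changes slides
  socket.on('slide-update', (data) => {
    io.emit('slide-update', data); // push the new lyrics to every connected listener
  });
});

httpServer.listen(80); // the default HTTP port, as mentioned below
[/code]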

[wpvideo WrTQyof1]

I showed it to my team, but the idea was tabled for awhile because we thought it through and didn’t want people buried in their phones while singing. However, as we got closer to the event, we realized that the two TV screens we rented might not be sufficient, and I was asked to work on this again.

As I was preparing this for production, I briefly discussed with our IT team setting up an internal server running Node.js that could be accessed on port 80 (the default HTTP port) outside the firewall, but bandwidth, security, and performance for hundreds of clients connecting through the Internet at the same time were a concern. With that in mind, I turned to Amazon EC2.

If you haven’t heard of it, Amazon Elastic Compute Cloud (EC2) is basically virtual servers “in the cloud” (i.e., remote and accessible from the internet). It’s not too hard to set up, and they even have a free tier available for 12 months, so you can try it out for free! I had never used it before this project, so I actually followed a tutorial to help me get going.

This is my EC2 instance. It’s running Ubuntu Linux.

Once I had my Linux server set up on Amazon EC2, I assigned an “Elastic IP” (Amazon’s term for a type of static IP), and then I bought a domain name, fglyrics.com, for $12 and tied it to that IP. It was up and online in minutes. I installed Node.js on my new server, copied over my code, and started it running.

About the software:

I call the software Presentation Bridge, because it acts as a “bridge” or connector between the presentation software and all of the clients.

The initial Bridge page.

When you first load the Bridge page, you have two options: configuring your ProPresenter connection, and connecting to a Bridge. In order to connect to ProPresenter, you have to enable its network settings. The Bridge relies on both the “remote” and “stage display” interfaces to get all of the data it needs.

You need to enable the network, ProPresenter Remote, and Stage Display App. Be sure to assign a control password. The Network Port is the port we will use in Presentation Bridge. The Stage Display App port is not needed.

When connecting to ProPresenter in Presentation Bridge, you’ll need to supply the local ProPresenter IP address and port, as well as the control password and the local library path. This is what allows the Bridge to pull all of the slide images and other information. The library path is standardized across all of our ProPresenter installs at our church, so it’s always the same for us. It should be the full, absolute path from the root of your drive.


When you’ve successfully connected, the gray dot at the top of the ProPresenter config box will turn green. If there’s an error, it will turn red. Status information is displayed in a log area further down the screen.

To connect to a bridge, choose one from the dropdown list. If it is configured with a control password, you’ll have to enter that in order to connect. Adding bridges and making changes to existing ones can be done by clicking the settings wheel. The software supports multiple bridges, a feature I added since we have multiple auditoriums and may want to use more than one simultaneously.

This is what the page can look like when connected to both ProPresenter and to a Bridge room.

As the operator runs ProPresenter, the slides are displayed on the Bridge screen, with the currently selected slide showing a blinking red border so it is clear which slide is live. You can browse the playlists and the items in each playlist. You have the option to send the current data from ProPresenter to the server (or turn that off), or to turn on a logo (configured in the Bridge settings; useful if you’re not currently connected to ProPresenter). I also implemented the NoSleep.js library, which attempts to keep any connected mobile devices awake.

On the viewer/client side, I implemented three types of “listeners”:

  • Text Listener – just gets text data and displays it as big as possible on the screen
  • Image Listener – displays the actual slide image by using a base64 encoding of the slide
  • Stage Display – recreates the current slide/next slide layout
This is the default “text listener” option, and what we used for our night of worship.
This is the image listener. It uses a base64 encoded image pulled from ProPresenter at the time it connects. The quality is based on the slider value set in the config options for the ProPresenter connection. It does not include any background or other layers.
This is the stage display, with the current slide and the next slide, and any notes attached.

All three listeners can be accessed through the browser, and the data is relayed from the server using the socket.io library. I tested it on my iOS devices, Android devices, and even my Amazon Fire TV Stick, on multiple browsers, and they all work really well. Across an internet connection, the moment a slide is clicked in ProPresenter, that slide is visible on the listener devices.

We used it during our night of worship this year, and it worked great! I used a hotspot for the Bridge connection, and everyone connected to the Text Listener using the internet connections on their own phones. It uses very little data, since it is just a text stream, which is really nice!


Overall, I enjoyed creating this software for our unique need. I plan to extend the functionality down the road as I have time, including attaching “triggers” to specific slides as they are activated, to send RossTalk messages, fire HTTP cues, etc. on the local production network.

If you’d like to try Presentation Bridge out for yourself, the code is freely available on my Github repository. You can also request to demo it using my live site running on Amazon EC2. I wrote the software to support multiple bridges at a time, in case you have multiple meeting spaces or venues that need to run simultaneously. When more than one Bridge is enabled and running, any users that connect are presented with a drop-down list and can select the Bridge they want to join.