Automated Printing of Google Documents using Google Apps Script, the Dropbox API, and Automator Folder Actions

A couple of years ago, I shared a workflow that we still use to auto-generate the documents we need each week. A few months ago, I shared another workflow that showed how I automated printing our weekly Planning Center Online paperwork.

I recently decided I was tired of still printing these weekly “talking points” documents by hand while my Planning Center paperwork was fully automated. So I took a few minutes and wrote a new Google Apps Script to handle it.

We print these every week. I was doing it manually, but not anymore!

Here is what the script does:

  • Searches a specific Google Drive folder for all subfolders with files that match today’s date (the script will run on a weekly trigger)
  • If a file matches, the script exports it as a PDF and stores the binary contents in a variable
  • An upload request is made to the Dropbox API with that binary data and a file name
  • Dropbox saves the file into the “Automated Printing” folder
  • Dropbox then syncs the file to the local computer (Mac)
  • The local Mac is configured with a Folder Action that automatically prints any files placed in this folder
  • After the Automator Folder Action prints the file, it removes the file

Here’s how you set it up:

First, you want to create a new Dropbox “App”. Go to dropbox.com/developers and click “Create apps”.

Then, you need to fill out these fields:

  1. “Choose an API”: Scoped Access. It’s your only choice.
  2. “Choose the type of access you need”: I chose “Full Dropbox” because I already had a specific folder set up in the root of my Dropbox. If you’re setting up the Automator Folder Action for the first time, you could probably keep the scope within “App folder”.
  3. “Name Your App”: Give it a meaningful name. It does have to be unique across all of Dropbox, for some reason, so if you get an error here, just add something unique to you.
  4. “Choose the Dropbox account that will own your app”: If you have personal/business accounts linked, you’ll need to choose the account that owns the app. I’m using a business account for this, so I chose that one.

On the next page, choose the “Permissions” tab.

Then give your app “files.content.write” access.

Now back on the Settings tab, generate a new Token and set the Expiration to “No expiration”.

This will generate an access token that you’ll use within the Google Apps Script in the next steps.
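
If you want to sanity-check the token later, once your Apps Script project exists (you’ll create it in the next step), a minimal sketch like this, run from the script editor, should work. It calls Dropbox’s check/user endpoint, which simply echoes your query back when the token is valid; the function name and the “ping” text are just examples.

function testDropboxToken() {
  var token = "PASTE-YOUR-TOKEN-HERE"; //placeholder, not a real token
  var options = {
    "method": "post",
    "headers": { "Authorization": "Bearer " + token },
    "contentType": "application/json",
    "payload": JSON.stringify({ "query": "ping" })
  };
  //a valid token returns {"result": "ping"}; an invalid one throws an HTTP error
  var response = UrlFetchApp.fetch("https://api.dropboxapi.com/2/check/user", options);
  Logger.log(response.getContentText());
}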

Now in Google Drive, click “New”, go down to “More”, and choose “Google Apps Script”. Google Apps Script is essentially JavaScript, so it’s super easy to use.

You’ll want to give the project a helpful name, as it will be stored in your Google Drive this way.

In the code section, paste in my script below:

/*
EDIT THESE VARIABLES FOR YOUR SETUP
*/
var accessToken = "token"; //Dropbox App Access Token
var rootFolder = "folderID"; // Google Drive Root Folder where these files live
var dropboxPath = "/Automated Printing/"; //Dropbox Folder Path to place file in
var numberOfCopies = 2; //the number of copies you want per file

//Nothing to edit below

function myFunction() {
  var dtDate = new Date();
  const monthNames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];
  var strDate = monthNames[dtDate.getMonth()] + " " + dtDate.getDate() + ", " + dtDate.getFullYear();
  var mainFolder = DriveApp.getFolderById(rootFolder);
  var subFolders = mainFolder.getFolders();
  while(subFolders.hasNext()) {
    var subFolder = subFolders.next();
    var files = subFolder.getFiles();
    while(files.hasNext()) {
      var file = files.next();
      var fileName = file.getName();
      if ((fileName.indexOf(strDate) > -1) && (fileName.indexOf(".pdf") == -1)) {
        //this is a file we want to print
        Logger.log("Generating PDF: " + file.getName());
        for (let i = 0; i < numberOfCopies; i++) {
          sendToDropbox(file.getName() + ".pdf", file.getAs('application/pdf'));
          Utilities.sleep(15000); // wait 15 seconds before doing the next file, so that Dropbox has time to sync the file, the Automator can print the file, remove it, and close out
        }
      }
    }
  }
}

function sendToDropbox(fileName, fileBlob) {
  var parameters = {
    "path": dropboxPath + fileName,
    "mode": "add",
    "autorename": true,
    "mute": false,
    "strict_conflict": false
  };

  var headers = {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/octet-stream',
    'Dropbox-API-Arg': JSON.stringify(parameters)
  };

  var options = {
    "method": "POST",
    "headers": headers,
    "payload": fileBlob
  };

  var apiUrl = "https://content.dropboxapi.com/2/files/upload";
  var response = JSON.parse(UrlFetchApp.fetch(apiUrl, options).getContentText());
  Logger.log("Uploaded to Dropbox as: " + response.name); //log the name Dropbox saved the file under
}

Now modify the top section to include your Dropbox access token (the one you generated earlier), the Google Drive folder ID (you can find the folder ID in the URL when you open that folder in Google Drive), the Dropbox path to save to, and the number of copies you need for each matching document. In our case, I need two copies of each document.
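
Here’s a hedged example of what the top of the script might look like once filled in. The token and folder ID below are placeholders, not real values; if the folder’s URL is https://drive.google.com/drive/folders/1a2B3cD4eF, the folder ID is everything after /folders/.

var accessToken = "PASTE-YOUR-TOKEN-HERE"; //placeholder: the token from your Dropbox App Console
var rootFolder = "1a2B3cD4eF"; //placeholder: the ID from the end of the Drive folder URL
var dropboxPath = "/Automated Printing/"; //must match the folder your Automator Folder Action watches
var numberOfCopies = 2; //one PDF is uploaded per copy, 15 seconds apart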

I learned in testing that if Dropbox syncs files while my Automator folder action is still running, the newly added files don’t get picked up, and the folder action doesn’t re-run for them. So the script uploads a separate PDF for every copy needed and waits 15 seconds in between. That gives Google time to upload to Dropbox, Dropbox time to sync to my local Mac with the Automator action, and Automator time to run its script, print the file, and delete it. It’s not very efficient, but the files are not that large.
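
One more note: the first time you run the script (you can run myFunction manually from the Apps Script editor to test it), Google will prompt you to authorize access to your Drive files and to external services. Approve that once so the weekly trigger can run unattended later.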

Now that your script is in place, you need to assign a trigger to it. Click “Triggers” on the left-hand side of the screen:

Add a new trigger. I used the following settings to have it run weekly on Sundays between 6 and 7 a.m. Be sure to target the “myFunction” function, as that’s the main one we are using.
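
If you’d rather create the trigger in code instead of through the Triggers UI, a one-time sketch like this should set up the same weekly schedule (the createWeeklyTrigger function name is just an example; run it once from the editor):

function createWeeklyTrigger() {
  //runs myFunction every Sunday during the hour starting at 6 a.m., in the script's time zone
  ScriptApp.newTrigger("myFunction")
    .timeBased()
    .onWeekDay(ScriptApp.WeekDay.SUNDAY)
    .atHour(6)
    .create();
}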

You’ll need to create the folder action in Automator. Follow my previous post on how to do this, as the steps are the same. I didn’t have to change that at all!

Here’s a tutorial video if you learn better that way:

I hope this helps you think of ways to automate what you’re doing in Google Drive so you can spend more time on ministry and less on manual tasks!

Tally Arbiter 1.5 – New Features, Bug Fixes, and Support for More Tally Clients

It’s been a few months since a major release for Tally Arbiter, but I’ve been hard at work on it in my spare time. If you haven’t read about this software, you can read some past posts on this blog about it. It’s free, open-source camera tally lights software that I developed to help churches around the world put on better productions.

Today, Tally Arbiter 1.5 is officially released!

Here are some highlights of the release:

  • The GUI has been revamped to use internal socket.io calls. The REST API still exists, but the new approach is more streamlined.
  • A Generic TCP Device Action has been added: Now you can send a custom TCP string or command to another network location whenever a camera enters or exits Program on your switcher, for example (see the sketch after this list for a quick way to test it).
  • VMix Tally Protocol Emulation: If you’ve got a favorite tally client designed specifically for VMix, now you can use it with Tally Arbiter! It’s very simple – Tally Arbiter represents itself as a VMix source. You can even use the emulated VMix connection from Tally Arbiter as a source in another Tally Arbiter install! (Not sure why, but, you can!)
  • Devices can now have multiple addresses assigned from a single source. This helps if you really consider one Camera to be “on-air” whenever it’s used on Input 1 or Input 5 from the same source, for example.
  • Device Sources can now be linked on either the Preview Bus, the Program Bus, or both. This means that your Camera won’t be considered to be in Program unless it is in Program on ALL assigned sources. This is helpful for cases where you may have nested switchers.
  • Preview + Program mode added to the OSC Source Type
  • Some Device Source Addresses can now be chosen via a drop-down list instead of manually typed in
  • Blackmagic VideoHub (all models) added as a tally source. You can choose which destinations are considered “on-air” destinations for both the Preview and Program bus.
  • The Companion client now supports reassigning tally listener clients with a button press. This is useful if you want a tally light at a shader station: press a button on your Stream Deck to route a camera to your shader monitor and simultaneously reassign the tally light at that monitor to that camera, and now you know whether that camera is on-air as you shade it!
  • A “Test Mode” has been added that cycles through tally states to test all tally outputs. Very helpful when you’re not actively in a show but want to verify everything is working!
  • Support for Roland VR-50HD-MKII as a tally source
  • The Producer page can now send messages to supported tally clients like the Web tally and M5StickC. Don’t have an intercom system? Use the chat to tell your camera op to zoom in!
  • The M5StickC Plus is now officially supported, and M5Stick clients will now retain their last used Device when they reboot or reconnect.
  • The M5 Atom Matrix is now also supported.
  • Various other bug fixes and improvements
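
If you want to try the Generic TCP Device Action mentioned above and don’t have a device on hand to receive the string, here’s a hedged sketch of a tiny listener written in plain Node.js (not part of Tally Arbiter; the port number is just an example). Point the action at the machine running it and it will print whatever Tally Arbiter sends when a camera enters or exits Program:

const net = require('net');

//listen on an arbitrary port and log every string that arrives
const server = net.createServer((socket) => {
  socket.on('data', (data) => {
    console.log('Received from Tally Arbiter:', data.toString().trim());
  });
});

server.listen(9000, () => console.log('Waiting for tally strings on port 9000'));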

This release saw a lot more interaction from the community through Github issues (feature requests and bug reports), pull requests, and other contributions. It’s truly becoming a community project, which is awesome to see!

Here’s a video to show most of this in action:

As always, you can get the latest code on Github: https://github.com/josephdadams/TallyArbiter

And if someone you know needs tally lights for their production, tell them to go check out tallyarbiter.com!

Using a Third-Party Streaming Service for Live Streaming Workflows and Communication

I don’t do this often, but I wanted to share briefly about a service we have been using since last March: Restream. I’ll clarify right now that I am not being compensated to say this, but the workflow has value for other churches, which is why I’m sharing.

When we started streaming our full service online last March (like practically every church around the globe that had the capability), I wanted us to send our content not just to YouTube but also to Facebook. The goal: reach people where they are, on the platforms they already use. For our church, that’s YouTube and Facebook.

We currently encode and live stream using OBS Studio at our church. Our internet connection is pretty good, and we don’t typically have bandwidth issues when it comes to sending data out. However, without a third-party restreaming tool, sending to multiple platforms simultaneously means uploading a separate stream to each one, which doubles the bandwidth for two platforms (a 6 Mbps stream to both YouTube and Facebook, for example, would need roughly 12 Mbps of sustained upload).

So, for us, enter Restream. We send our live stream to them, and they relay it to YouTube and Facebook on our behalf. The delay is minimal. They support a variety of other platforms as well, which we don’t use.

I got this from their website. You get the idea.

We “start streaming” in OBS, which sends our feed to Restream, and then our moderators can switch on the stream for each connected social platform when they are ready to go live. They do this remotely/off-site, which is great in these times that call for distancing. All of the API/connection data is stored in Restream, so I don’t have to give out admin logins or privileges to any moderators directly.

When we started streaming our services online, I wanted to create environments where people felt welcome to chat, share their prayer needs and stories, and just feel connected to others. I also wanted to be able to easily share sermon content and notes to help people apply scripture as they listen.

We initially had multiple moderators, one or two people “watching” the chats and comments on each social platform (YouTube and Facebook, currently), but then we started using Restream’s built-in Chat feature. This aggregates all of the chat data into a single interface, which lets one person respond individually per platform or post to all platforms at once, depending on the need.

Here’s a screenshot of the Restream Chat interface.

Overall, this workflow has really helped us serve more people with fewer staff and volunteers. The ability to turn each platform’s stream on and off independently of what the production team is doing on-site is very helpful, and having all of the chat in one place means we don’t have to monitor it on each platform.

If you’re looking to stream to more than one platform, check out Restream! We pay something like $20 a month, and it’s well worth it. Here is my referral link: https://restream.io/join/2Nyvv