ScreenDeck v2.0 is here! Multiple decks, hotkey support, and more

I published the first version of ScreenDeck at the end of last year, and now I am excited to release version 2.0!

Here’s what’s new!

Multiple Decks

You’re no longer stuck with just one screen deck. You can launch as many as you want, each with its own layout and size. You can make them read-only by disabling button presses.

Profiles

Now you can save and instantly switch between different layouts. Whether you’re controlling slides on Sunday or streaming on Wednesday, just create, save, and then pick the profile you need.

X/Y Button Mapping

Instead of “keys per row” and “total keys,” everything is now based on columns and rows.

Hotkey Support

You can now assign global hotkeys to any button on any ScreenDeck. Press the assigned key combo on your keyboard and trigger any action, even when the deck is hidden!

Background Customization

Each deck can have its own background color and opacity.

Button or Encoder Mode

Turn any button into an encoder-style dial by right-clicking on that key. Great for volume control, brightness, or cycling through options!

Window Memory

Decks will remember their position, size, and settings—no need to rearrange every time you start the app.


Download it now from the GitHub Releases page.

Need a custom Companion module or app? Hit me up!

Building a digital roster/serving board using Companion and the Planning Center Services API

If you’re involved in tech ministry and like to tinker, chances are you’ve heard of — and maybe even used — micboard.io.

This is Micboard.

Straight from their website, “Micboard simplifies microphone monitoring and storage for artists, engineers, and volunteers. View battery, audio, and RF levels from any device on the network.” It’s a neat tool and has helped a lot of teams over the years.

I always liked the idea of Micboard because it would be a great way to show who is serving that day. We tried to implement it at my church but eventually moved away from it, mainly because it hadn’t been updated in quite a while (over 6 years now) and we needed some additional features. Specifically, we were looking for integration with Planning Center Services — something that could automatically pull assignments from an interface our team was already familiar with, and something we could use for more than just people on stage.

At first, I forked the Micboard repo (since it’s open-source) and started making improvements, cleaning up some code, and tweaking it to run more easily on modern MacOS systems. But pretty quickly, I realized I had too much on my plate to maintain a whole fork long-term.

Fast-forward a year or so: I came across a few posts in some Facebook groups where people were using my ScreenDeck project to essentially create a Micboard-style interface using Companion.

I wish I had my own Acoustic Bear.

What I loved about this approach is that it leveraged something we were already using — Companion — and could still be viewed from anywhere on the network, just like Micboard. Plus, Companion supports a lot more devices beyond just Shure systems.

Even better, this opened the door to that Planning Center integration I had wanted without introducing a bunch of extra overhead — we were already using the PCO module to control our LIVE service plans!

One thing I’ve wanted for a while was a digital roster — something simple to show who’s serving each day, helping everyone put names to faces across band, tech, safety, and more. A “Serving Board,” if you will.

About a year ago, I had modified the PCO module to pull scheduled people into variables — showing their names and assigned roles. I recently took it further by adding a feedback: “Show Person Photo based on Position Name.”

Now, the module pulls the photo from the person’s assignment, converts it into a PNG, and stores it internally as a base64 image — which can be shown directly on a button.
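The photo-caching step can be sketched in a few lines of Node-style JavaScript. This is an illustrative sketch, not the module’s actual source — the function names and cache shape are assumptions, and the real module also handles the JPEG-to-PNG conversion with an image library, which is omitted here:

```javascript
// Wrap already-converted PNG bytes as base64 for a button feedback.
function toPng64(imageBytes) {
  return Buffer.from(imageBytes).toString('base64')
}

// Hypothetical shape of the "Show Person Photo" feedback callback:
// look up the cached photo by position name and return it for the button.
function showPersonPhoto(photoCache, positionName) {
  const photo = photoCache[positionName]
  return photo ? { png64: toPng64(photo) } : {}
}
```

Companion advanced feedbacks can return a base64-encoded PNG this way to draw directly on a button.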

Pretty cool – and it looks like this:

Say “hi”, Adam.

But I didn’t want to stop there — I wanted the person’s status (Confirmed, Unconfirmed, or Declined in PCO) to show too.

Using the companion-module-utils library (thanks to another awesome Companion dev!), I added a simple colored border overlay for statuses.

A few extra lines of code later:

And you can get this look!

Thanks for confirming!

At this point, it was looking great — but I started thinking:

What if I don’t want to redo all my buttons every week? What if my teams and roles change?

So I added a new option: a generic “position number” approach.

You can now pick a position number in the plan (or within a specific team) — and the module will automatically pull the right person’s info, week to week, without you having to manually reconfigure anything.

For example:

• Pick any number across the entire plan.

• Or pick a number within a specific team, like Band or Tech.

With this option, you can choose any number, regardless of the team.
This picks the first person scheduled in the band.
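Conceptually, the position-number lookup works something like this sketch (the function and field names are illustrative, not the module’s actual code):

```javascript
// Given the plan's scheduled people in PCO order, return the nth person,
// optionally filtered to a single team. Position numbers are 1-based.
function pickByPosition(people, number, teamName = null) {
  const pool = teamName
    ? people.filter((person) => person.team === teamName)
    : people
  return pool[number - 1] || null
}
```

So `pickByPosition(people, 1, 'Band')` always resolves to whoever is first on the Band team that week, with no button reconfiguration needed.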

I also built some Module Presets to make setting this up super easy:

Generic Position Number (no specific team)

Position Number Within a Team (like “Band” only)

Generic, without regard to team.
In this example, you can choose a number within the Band team.

And here’s where it all comes together:

Let’s say you have a “Wireless Assignments” team in PCO, and you assign a person to a position called “Wireless 4.”

Now, using the Shure Wireless module in Companion, you can match that name and see live RF and battery stats for Wireless 4 — tied directly to the person assigned!

All together, you get a clean, dynamic, reusable Micboard-style dashboard — all inside Companion, no extra tools required.

Here’s a walkthrough video showing it all in action:

The updated PCO Services Live module is available now in the Companion betas — go check it out if you want to try it!

Using Ross Dashboard and the Companion Satellite API to create a virtual touch surface on a Ross Video Ultritouch

My church, Fellowship Greenville, has been building a second campus for a little over a year now. It’s been an exciting process. The new auditorium will feature a control room much like what we have at our existing campus.

One of the newer pieces of equipment that we are putting in is a Ross Video UltriTouch HR. It’s essentially a 2RU touch-screen computer running Ross Dashboard. (I’ve written about Ross Dashboard before if you want to read more.) Dashboard is a very flexible program that lets you build highly customized interfaces to control your gear. We used it heavily until I started investing a lot of time in Companion.

Once I knew we were getting one of these, I knew right away that I wanted to be able to use it as a satellite surface for Companion. Taking what I learned from my ScreenDeck project, and my OGScript knowledge (Ross’s flavor of Java/JavaScript that powers the custom panels in Dashboard), I was able to make this:

It was pretty easy to get simple buttons with text on them, and get the colors of the buttons to match Companion button colors. But I wanted the buttons to look like Companion buttons, and that took some work. Dashboard doesn’t have any image editing libraries that I was aware of, so I had to get creative. The image data coming from Companion is base64 encoded 8-bit RGB. I reached out to the Ross staff on their forums and they quickly got back to me with a helpful decoder function. It was similar to the one I had already written to decode the base64 encoded text data that comes from the Companion Satellite API.

Once I was able to decode it back to the binary RGB data, it was “simply” a matter of writing a function that saves these as bitmap files in a folder local to the panel and then changing the style of the button to show the new bitmap image.
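For the curious, here’s roughly what that conversion looks like, sketched in Node-style JavaScript rather than OGScript (the function name and the fixed 24-bit format are my own simplifications). BMP pixel rows are stored bottom-up in BGR order with each row padded to a 4-byte boundary, while Companion sends top-down RGB:

```javascript
// Convert Companion's base64-encoded raw RGB button image into a .bmp buffer.
function rgbToBmp(base64Rgb, width, height) {
  const rgb = Buffer.from(base64Rgb, 'base64')
  const stride = Math.ceil((width * 3) / 4) * 4 // row length, padded to 4 bytes
  const pixelBytes = stride * height
  const bmp = Buffer.alloc(54 + pixelBytes)

  // BITMAPFILEHEADER (14 bytes)
  bmp.write('BM', 0)
  bmp.writeUInt32LE(54 + pixelBytes, 2) // total file size
  bmp.writeUInt32LE(54, 10) // offset to pixel data

  // BITMAPINFOHEADER (40 bytes)
  bmp.writeUInt32LE(40, 14) // info header size
  bmp.writeInt32LE(width, 18)
  bmp.writeInt32LE(height, 22)
  bmp.writeUInt16LE(1, 26) // color planes
  bmp.writeUInt16LE(24, 28) // bits per pixel
  bmp.writeUInt32LE(pixelBytes, 34) // pixel data size

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (y * width + x) * 3 // top-down RGB from Companion
      const dst = 54 + (height - 1 - y) * stride + x * 3 // bottom-up BGR
      bmp[dst] = rgb[src + 2] // blue
      bmp[dst + 1] = rgb[src + 1] // green
      bmp[dst + 2] = rgb[src] // red
    }
  }
  return bmp
}
```

From there it’s a `fs.writeFileSync('button.bmp', rgbToBmp(data, 72, 72))` (or the OGScript equivalent) to drop the file where the panel can load it.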

And there we have it! I’m looking forward to using this on our UltriTouch as well as on the TouchDrive touch screen.

The panel supports turning the bitmaps on/off, setting the button size, total keys, keys per row, and of course the IP/port to Companion. The satellite port is changeable on the Dashboard side but is currently fixed in Companion to 16622.

If you’re a Ross Dashboard user and want to tinker with the panel, I’ve made it available via Github on my RossDashboardPanels repository where I have shared some other panels as well.

If you ever need any custom Dashboard panels created (or Companion modules!), I do this for hire on the side to support my family. You can reach out to me via my website, josephadams.dev.

Streamlining Electron App Development with AI: Building a Virtual Stream Deck for Bitfocus Companion using the Satellite API

Alongside my full-time job in ministry, I do coding work for hire. It’s one of the ways I provide for my family. I’ve had opportunities to create custom Dashboard panels, modules for Bitfocus Companion, and lots of other bespoke solutions for whatever people need. (Hit me up if you ever need anything!)

One of the tools in my tool belt that I use regularly when coding is GitHub Copilot. It’s $10 a month and saves me so much time. Never heard of it?

GitHub Copilot is an AI-powered coding assistant developed by GitHub and OpenAI, designed to help developers write code faster and more efficiently. Integrated directly into popular code editors like Visual Studio Code, Copilot suggests code snippets, functions, and even entire blocks of code in real time as you type, based on the context of your project. It supports multiple programming languages and leverages a vast amount of open-source code to provide relevant suggestions, making it a valuable tool for both beginners and experienced developers looking to speed up their workflow, reduce errors, and explore new coding approaches.

It seriously saves me a lot of time by suggesting approaches and workflows I might never have thought of, without pushing me toward things I wouldn’t have done. After using it for a year and a half, I have it trained well on the way I like to code.

Recently, I also signed up for OpenAI’s ChatGPT Plus plan. It’s $20 a month. I may not keep subscribing long term, but I’m trying it out. It gives me access to GPT-4o, DALL-E, and all of their other tools. I used it to help me decipher a protocol for some paid work I was doing, and it saved me time. These tools are not at a point where I can just hand them the whole job and get a perfect response – but guiding them through the process in steps? I can get helpful responses that way.

After I was done with my protocol project, I simply asked ChatGPT, “give me a boilerplate typescript Electron app using my example”. I’ve shared several of my Electron apps before. It’s my preferred method for cross-platform apps (meaning they can run on MacOS, Windows, and Linux desktops). I wanted to see if I could guide ChatGPT through the process of giving me a new template to help take some projects further and implement standards and practices that I might not be aware of.

One particular project I’ve wanted to work on for a while now is something I’m calling ScreenDeck. It’s essentially a screen-based stream deck for Bitfocus Companion that uses the built-in Satellite API to create virtual surfaces.

Every good project needs a logo, right?

I know the browser based emulator exists, but I wanted something that ran a little more “native looking” on the OS and could always sit on top of other windows so it’s immediately accessible.

I had started on it over a year ago, but the small nuances and things to code just felt overwhelming to implement in my “spare time”. However, together with my AI tools, I was able to quickly craft a new boilerplate template and apply it to the ScreenDeck project I had started a long time ago, and come up with a working solution in just a few days. It was a lot of back and forth with the chat, prompting it to craft more and more refined responses.

Like many of my other projects, I’m releasing ScreenDeck open source with the hopes that it will help the community – especially churches.

Here’s a simple 4-button, 1-per-row deck.
The settings allow you to configure how it looks, how many buttons, whether it’s always on top, etc. You can even change the bitmap size to create HUGE buttons!
Here’s a standard “Stream Deck XL” layout.
Some of the context menu options.
Because it uses the Satellite API in Companion, it shows up as a physical surface in Companion!
Because Companion sees it as a surface, this means you can do anything with it that you’d do to any physical surface.

You can download it here: http://github.com/josephdadams/screendeck

It’s available for MacOS, Windows, and Linux desktops!

Here’s a video showing it in action!

TimeKeeper: Controlling timers through a web UI and stream deck

One of my first projects I shared on this blog over 5 years ago was TimeKeeper – something we use to help manage countdowns to service start times, various production elements, etc. 5 years later, it’s still running strong and we use it every week.

I recently decided to give some effort toward creating a UI that would allow us to add and edit timers from the web interface, so that we could retire the Ross Dashboard custom panel that I created for it. We still use Dashboard, but for this, it was actually more work for our volunteers to use two tools rather than one – one for adding/editing, and one for viewing.

The new UI is simple – you can add a new timer directly from the page or edit an existing one. For now, I haven’t bothered with any permissions because our needs are very simple.

Editing a timer is just as easy.

I also created a Companion 3.0 module for us that allows us to create and modify timers and view them there as well. Now we can easily bump a timer and add 30 seconds if needed just by quickly pressing a button.

If you use TimeKeeper, go check out the new version along with the Companion module!

midi-relay v3.0 is here – as an Electron app for Mac and Windows!

I decided to give some love recently to midi-relay, since person after person asked me to make it an easier-to-run app rather than requiring a Node.js runtime setup.

When I originally created midi-relay, I designed it to run on every OS, especially the Raspberry Pi platform. Thousands of people use it all over the world for all kinds of stuff. Probably because it’s free. 🙂

This software is designed to accept a JSON object via its API and then turn that object into a MIDI command and send it out a local MIDI port. It allows for remote control of a lot of systems by sending the command over a simple network protocol.

Now it’s even easier to use.

It runs in the system tray for easy access.

Some new features include:

  • a new socket.io API for bi-directional communication
  • a virtual MIDI port for loopback uses
  • an upgraded Bitfocus Companion v3 module
  • the ability to disable remote control, if needed

So if you’re a midi-relay user and you want an easy way to run this on your Mac or Windows desktop, go check out the latest release!

If using my software makes your life easier, please consider supporting my family.

Thanks!

Tally Arbiter 2.0 now available!

About a year ago, I released some camera tally lights software because we desperately needed it at my church. Since that time, a ton of new features have been added, both by me and by the community.

It’s now in use in hundreds of places, from churches to event venues to sports stadiums.

Version 2.0 was silently released a few weeks ago. It includes a compiled application that can run natively on Windows, MacOS, and Linux, without the need to install Node.js and other dependencies from the command line. And, of course, it still runs on a Raspberry Pi.

Lots of people in the community have shared how they are using it, made their own tutorials, and added to the existing documentation.

It’s truly becoming a community project, and I love that. We now have an official Facebook user group to help facilitate conversation amongst users, and I’m excited for the new features on the roadmap in the coming days.

Someone from the community designed a new logo! Isn’t it nice?

A few features to note since version 1.5:

  • An entirely new user interface and native applications for the big three OSes
  • Easy command-line installation via the new NPM package or Docker image
  • 1-second updates function for TSL Clients (provides compatibility with certain tally products like Cuebi)
  • Recording/Streaming statuses for OBS and VMix now available for tally states
  • Generic TCP Device Action improvements
  • TSL 5.0 source support
  • New Ross Carbonite source type to monitor any bus regardless of the “on air” settings
  • Web tally page can now be loaded directly by Device Id, and chat can be disabled
  • Pimoroni Blinkt! Listener Client
  • TTGO_T Display Listener Client
  • Improved Outgoing Webhooks – support for https and content-type selections
  • Roland Smart Tally emulation for use with STAC
  • Panasonic AV-HS10 support
  • Support for ATEM super sources in tally states
  • Bug fixes and performance improvements

If you’re new to Tally Arbiter, go check it out! You can also join the new Facebook user group here: https://www.facebook.com/groups/tallyarbiter

And to everyone in the community who has helped to make TA what it is, thank you! Your contributions are helping everyone.

Tally Arbiter 1.2 – support for NewTek TriCaster, GPO output, and TSL 3.1 protocol conversion

A few weeks ago, I released some free tally light software to the community. I’ve had people checking it out and I am excited to be able to offer some more features!

Some highlights in the 1.1 release:

  • Overall performance improvements
  • Fixed an issue where Devices added during runtime did not obtain a proper initialization state and would not assign tally data properly until the server was restarted
  • Fixed an issue where Devices mapped to OBS Studio sources could not correctly be in both preview and program bus at the same time (when in Studio mode)
  • Better checking on Source connection states
  • TCP/UDP ports are now verified as in-use or reserved to help eliminate user errors
  • More verbose logging in the server console
  • All tally data received by all Sources is now sent to the Settings page (while open) for logging and diagnostic purposes
  • New Producer page; allows users to view all device states (tally information) without having the Settings page open. Created in dark mode for in-service viewing
  • Documentation added to Settings page to assist in initial setup and learning
  • OSC added as a Source type to trigger tally states
  • OSC added as a Device Action type (supports multiple arguments)
  • “Python” client renamed to “blink(1)” in preparation of other types of listener clients that may also use Python
  • Version is now displayed on the Settings page for diagnostic purposes

Now, I am releasing version 1.2! The highlights:

  • NewTek TriCaster support now included as a tally source type
  • OBS can now properly discern whether it is in preview or program
  • Support for TSL Clients – Tally Arbiter can now send all device states (derived and arbitrated from any source type) as TSL 3.1 (UDP or TCP) out by specifying a TSL Address for each Tally Arbiter Device. This can be used to drive UMDs and other tally receiving interfaces by acting as a protocol converter between all source types and TSL.
  • New Python listening client – GPO Output! Now you can trigger just about anything using the GPIO ports on a Raspberry Pi.
  • Bug fixes and UI improvements
  • More documentation and upgrade instructions

The biggest feature in this release is the new TSL Clients functionality. Tally Arbiter can now send out TSL 3.1 data to any number of connected devices any time a device changes state within Tally Arbiter. So, you can have, for example, a multiviewer of one switcher dynamically show whether a camera is in use on that switcher or a switcher of an entirely different brand/model by using Tally Arbiter as a protocol converter.
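For reference, a TSL 3.1 UMD packet is a simple 18-byte structure, and building one looks roughly like this (this is the standard wire format, not Tally Arbiter’s actual source; which lamp means preview vs. program varies by device):

```javascript
// Build an 18-byte TSL 3.1 UMD packet: display address, control byte
// (tally lamps + brightness), then 16 ASCII characters of display text.
function buildTsl31Packet(address, tally1, tally2, text) {
  const buf = Buffer.alloc(18)
  buf[0] = 0x80 + (address & 0x7f) // display address 0-126, offset by 0x80
  let control = 0
  if (tally1) control |= 0x01 // tally lamp 1
  if (tally2) control |= 0x02 // tally lamp 2
  control |= 0x30 // brightness bits: full
  buf[1] = control
  buf.write(text.slice(0, 16).padEnd(16, ' '), 2, 'ascii') // UMD text
  return buf
}
```

The resulting buffer can then be sent with Node’s `dgram` module for UDP clients or a `net` socket for TCP.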

Here’s a video to show how the new TSL Clients feature works within Tally Arbiter and how to integrate it with a switcher like the Ross Carbonite. In this example, tally data is coming from both a Carbonite and a Blackmagic ATEM and the Carbonite multiviewer reflects that in real-time.

If you’d like to check out Tally Arbiter or learn more about it, check out the GitHub repository here: https://github.com/josephdadams/TallyArbiter/

Using Node.js, Python, multiple Raspberry Pis, and USB lights to create an inexpensive wireless camera tally system that can arbitrate multiple sources simultaneously

Update: Version 1.2 is available now; read about it here:

At my church, we have two auditoriums, each with their own video switcher and cameras. All of the inputs and outputs of each switcher are on a common video router, so all of these sources can easily be shared across both rooms. However, even with all this, we have no camera tally system. Commercial tally systems can be expensive, and it’s just something we’ve never been able to afford.

It’s not normally an issue, but sometimes we want to pull up a shot of a camera in Auditorium 1 and show it in Auditorium 2. Because we have no tally system, the camera operator would not know their shot was being used. And, even if we did have a tally system, those systems generally only interface with one tally source/switcher, not multiple sources at the same time.

A few weeks ago, I was quarantined from work due to a co-worker testing positive for COVID-19. I set out to use this time to write a tally system for our church to use. Now that we’ve re-opened for church services, we will really need this, because we will have cameras capturing the service to stream online, but won’t necessarily have those cameras visible on the projector screens in the auditoriums during that time, where the operators would at least have a visual reference if their shot was in use.

And, because we have two video switchers, I needed to come up with a solution that would allow either video switcher to pull up a camera in either auditorium in Preview or Program, and make sure the operator still knew their shot was in use.

So here is Tally Arbiter. I called it this because the software aggregates tally data from multiple sources and “arbitrates” whether that device is in Preview, Program, or both across all sources and buses.

I played Halo a lot back in the day. A LOT.

The server software is written in Node.js and can run on a Raspberry Pi. It supports the TSL 3.1 network protocol like what our Ross Carbonite switchers use, but I’ve also written support for Blackmagic ATEM switchers, OBS Studio, StudioCoast VMix, and Roland SmartTally. I plan to one day add support for incoming webhooks, and GPIO inputs for switchers that don’t have network-based protocols.

The settings page of the Tally Arbiter server.

The software supports tally data coming from multiple sources, and each source can vary in protocol type. This could be useful, for example, if you had shared cameras for your production on-screen using an ATEM and also through your live stream using OBS or VMix, and you need the cameras to reflect the tally data of either system.

You can configure multiple devices in the software. These are what generally receive tally data — cameras, CG stations, monitors, etc. Each device can support addressing from multiple sources. This is the “arbitration” portion of the software.

Once a device is determined to be in preview and/or program, device action(s) can be run. This can be sending out a TSL 3.1 protocol message (to a monitor/scope/multiviewer), an outgoing webhook (for example, to tell another device to “roll clip” and start playing a video), triggering a relay if you have CCUs that need contact closures to turn on the tally lights, or even local console output for logging and testing.

Some of our cameras have built-in tally lights, like the two Hitachi Z-HD5000 cameras we have. For those, I implemented a separate relay controller client that listens to the data on the Tally Arbiter server. It uses simple USB relays with the Node.js library I created a couple years ago that controls our auditorium window shade.

I bought a project box, put the relay in, ran some CAT5e cable I had laying around and connected it to the relay and the CCU’s with custom DB25 connectors. I had to modify the project box some because I wanted the relay to sit flat in the box, so I used a dremel to remove the bottom of the middle screwposts, which weren’t really needed anyway. Never be afraid to modify something to make it work!

The relay fits snugly inside this box. This particular unit has 8 relays, so I could add 2 more cameras with preview/program tally control to this unit.
The box and the Pi running the server fit nicely on top of one of the CCUs.
A clean rack is a happy rack!
Preview and Program lights!
This will make our camera operators happy.

But what about the cameras we use that don’t have tally lights? For these, I decided to use Raspberry Pi Zero Ws running software that listens over websockets to the Tally Arbiter server. These particular Pi models are inexpensive and simple to use. I knew I could get the lowest cost for physical tally lights out of these Pis if I went the GPIO route with some LED lights and custom circuitry, but I wanted to design something that people who may not be comfortable with those concepts could easily implement. And honestly, soldering something sounded like one more thing I’d have to maintain down the road. So, I used the blink(1) USB lights by ThingM.

I first started experimenting with these USB lights about a year ago when I created a silent notification system for our band to use in case we had a tech issue during a service. The company that makes these has published very easy to use APIs, which makes it a great tool to use with custom software.

I like this simple black case from Vilros. You can get the whole kit minus the SD card for about $30 on Amazon.
Here’s a blink(1). A pretty versatile device!

The listener client script is written in Python, since that language runs so easily on Raspberry Pi OS no matter what model Pi you have. And since we are using the socket.io websocket library, bi-directional real-time communication between the server and clients isn’t an issue, even though the programming languages differ.

I used a USB extension cable to bring the light up by the camera, but the Pi is down on the floor of the platform.
Another view.

All together, each wireless tally light should cost between $55 and $60, depending on what Pi case you use, SD cards, etc. Tally Arbiter has no built-in limit on the number of wireless clients that can be connected, which makes it a very versatile and flexible system no matter the size of your production.

Lastly, I also created an option to view live Tally data in a browser, like on a tablet or phone. You can select the device from the list and the background of the page will be red, green, or black depending on that device’s tally state.
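The color logic is as simple as it sounds — program wins over preview, and an idle device shows black (the function name here is illustrative, not the project’s actual code):

```javascript
// Map a device's arbitrated tally state to the web page background color.
function tallyColor(inPreview, inProgram) {
  if (inProgram) return 'red' // program always wins
  if (inPreview) return 'green'
  return 'black' // idle
}
```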

The web based tally option is nice if you need a quick portable tally option.

The web tally is controllable through the Settings page just like any other listening client, so you can reassign the tally remotely and even send a flash to that client to get their attention.

Here’s a walkthrough video of the whole system in action:

As usual with my projects, I’ve made this open-source and available for your use on Github: http://github.com/josephdadams/TallyArbiter. It is fully documented with a REST API if you want to automate use of it outside of the GUI that I have created. There are also step-by-step instructions on how to set up a Raspberry Pi Zero, with OS imaging and all of the libraries and script installation needed to get it going.

My hope and passion is to see resources like this used to further the Gospel. I believe and have seen that God can use technology for His good, and when we can use it to further ministry, that is how we can see the Gospel spread.

If these projects are helpful to you and your church, let me know! I love hearing how you use technology to serve the church.

Free Real-Time Captioning Service using Google Chrome’s Web Speech API, Node.js, and Amazon’s Elastic Compute Cloud

For a while now, I’ve wanted to be able to offer live captions for people attending services at my church who may be deaf or hard of hearing, to allow them to follow along with the sermon as it is spoken aloud. I didn’t want them to have to install a particular app, since people have a wide variety of phone models and OSes, and that just sounded like a pain to support long-term. I also wanted to develop something low-cost, so that more churches and ministries could benefit from it.

I decided to take concepts learned from my PresentationBridge project from last year’s downtown worship night and use them for this project. The idea was essentially the same: I wanted to relay, in real-time, text data from a local computer to all connected clients using the Node.js socket.io library. Instead of the text data coming from something like ProPresenter, it would be the results of the Web Speech API’s processing of my audio source.

If you’re a Google Chrome user, Chrome has implemented W3C’s Web Speech API, which allows you to access the microphone, capture the incoming audio, and receive a speech-to-text result, all within the browser using JavaScript. It’s fast and, important to me, it’s free!
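A minimal recognizer setup looks something like this — a sketch, not LiveCaption’s actual code. It guards on the API being present, so it simply returns null outside a supporting browser:

```javascript
// Create and start a continuous speech recognizer (Chrome's Web Speech API).
// onResult receives each transcript chunk plus whether it's a final result.
function createRecognizer(onResult) {
  const SR = globalThis.webkitSpeechRecognition || globalThis.SpeechRecognition
  if (!SR) return null // e.g. Node, or a browser without the API

  const rec = new SR()
  rec.continuous = true // keep listening across pauses
  rec.interimResults = true // stream partial results as they form
  rec.onresult = (event) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const result = event.results[i]
      onResult(result[0].transcript, result.isFinal)
    }
  }
  rec.start()
  return rec
}
```

From there, each transcript chunk just gets relayed to the connected clients over socket.io.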

Here is how it works: the computer that does the actual transcribing of the audio source to text must use Google Chrome and connect to a Bridge room, similar to how my PresentationBridge project works. Multiple Bridge rooms (think “venues” or “locations”) can be configured on the server, and if multiple rooms are available, end users will be given the option to choose the room they want to be in and receive text. The browser requirement only applies to the computer doing the transcribing; everyone else can use any browser on any computer or device they choose.

This is the primary Bridge interface that does the transcribing work.

From the Bridge interface, you can choose which “Bridge” (venue) you want to control. If the Bridge is configured with a control password, you will have to enter it. Once connected, you can send users to Logo Mode (helpful when you’re not broadcasting), toggle whether text data is sent to the connected clients (helpful when you want to test transcription without sending it out to everyone), redirect all users to a new webpage, send a text announcement, or reload their page entirely. To start transcribing, just click “Start Listening”; you’ll have to allow Chrome access to the microphone/audio source (only the first time). There is also a simple word dictionary that can be used to replace commonly misidentified words with their proper transcription.
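The word-dictionary step mentioned above can be sketched like this (illustrative, not the project’s actual code; it assumes the dictionary keys contain no regex metacharacters):

```javascript
// Replace commonly misheard words with their proper transcription
// before relaying the text to connected clients.
function applyDictionary(text, dictionary) {
  let out = text
  for (const [wrong, right] of Object.entries(dictionary)) {
    // whole-word, case-insensitive replacement
    out = out.replace(new RegExp(`\\b${wrong}\\b`, 'gi'), right)
  }
  return out
}
```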

A note about secure-origin and accessing the microphone: If you’re running this server and try to access the page via localhost, Google Chrome will allow you to access the microphone without a security warning. However, if you are trying to access the page from another computer/location, the microphone will be blocked due to Chrome’s secure-origin policy.

If you’re not using a secure connection, you can also modify the Chrome security flag to bypass this (not recommended for long-term use because you’ll have to do this every time Chrome restarts, but it’s helpful in testing):

  • Navigate to chrome://flags/#unsafely-treat-insecure-origin-as-secure in the address bar.
  • Find and enable the Insecure origins treated as secure section.
  • Add any addresses you want to ignore the secure origin policy for. Remember to include the port number (the default port for this project is 3000).
  • Save and restart Chrome.

Here is a walkthrough video of the captioning service in action:


I chose to host this project on an Amazon EC2 instance, because my usage fits within the free tier. We set up a subdomain DNS entry to point to the Elastic IP so it’s easy for people in the church to find and use the service. The EC2 instance uses Ubuntu Linux to run the Node.js code. I also used nginx as a proxy server. This allowed me to run the service on my custom port, but forward the necessary HTTPS (port 443) traffic to it, which helps with load balancing and keeps my server from having to handle all of that secure traffic. I configured it to use our domain’s SSL certificate.
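A minimal nginx server block for this kind of setup might look like the following — the domain and certificate paths are placeholders, and the `Upgrade`/`Connection` headers matter because socket.io may negotiate a websocket:

```nginx
# Hypothetical reverse proxy for the Node.js service on port 3000.
server {
    listen 443 ssl;
    server_name captions.example.org;

    ssl_certificate     /etc/ssl/certs/example.org.crt;
    ssl_certificate_key /etc/ssl/private/example.org.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # pass websocket upgrade headers through for socket.io
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```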

I also created a simple API for the service so that certain commands like “start listening”, “send data”, “go to logo” etc. can be done remotely without user interaction. This will make it easier to automate down the road, which I plan to do soon, so that the captioning service is only listening to the live audio source when we are at certain points in the service like the sermon. Because it’s just a simple REST API, you can use just about anything to control it, including a Stream Deck!

We deployed them in our two auditoriums using Chromebooks. An inexpensive solution that runs the Chrome browser!

In order to give the devices a direct feed from our audio consoles, I needed an audio interface. I bought this inexpensive one off Amazon that’s just a simple XLR to USB cable. It works great on Mac, PC, and even ChromeBooks.

XLR to USB audio interface so we can send a direct feed from the audio console instead of using an internal microphone on the computer running the Bridge.

If you’d like to download LiveCaption and set it up for yourself, you can get it from my Github here: https://github.com/josephdadams/LiveCaption

I designed it to support global and individual logos/branding, so it can be customized for your church or organization to use.