Today's episode is sponsored by Rev. We explore the history of automatic speech recognition and of computer systems that can understand human commands. From there, we explain the machine learning revolution that has powered recent advances in speech-to-text systems, like the one Rev employs for automatic transcription. Finally, we look to the future and imagine the features and services the next generation of this AI could produce.
In this episode we chat with three guests:
Miguel Jetté: Head of AI R&D
Josh Dong: AI Engineering Manager
Jenny Drexler: Senior Speech Scientist
When Jetté was studying mathematics in the early 2000s, his focus was computational biology, specifically phylogenetic trees and DNA sequences. He wanted to understand the evolution of certain traits and the forces that explain why our bones are a certain length or our brains a certain size. As it turned out, the algorithms and techniques he learned in this field mapped very well onto the emerging discipline of automatic speech recognition, or ASR.
During this period, Montreal was emerging as a hotbed for artificial intelligence, and Jetté found himself working for Nuance, the company behind the original implementation of Siri. That experience led him to several positions in the world of speech recognition, and he eventually landed at Rev, where he founded the company’s AI department.
Jetté describes Rev as an “Uber for Transcription.” Anyone can sign up for the platform and earn money by listening to audio submitted by clients and transcribing the speech into text. This means the company has a tremendous dataset of raw audio that has been annotated by human beings and, in many cases, assessed a second time by the client. For someone looking to build an AI system to master the domain of speech-to-text, this was a goldmine.
Jetté built the earliest version of Rev’s AI, but it was up to our second guest, Josh Dong, to productize and scale that system. He helped the department transition from older technologies like Perl to more popular languages like Python. He also focused on practical concerns like modularity and reusable components. To combine machine learning and DevOps, Dong added Docker containers and a testing pipeline. If you’re interested in the nuts and bolts of keeping a system like Rev’s running at tremendous scale, you’ll want to check out this part of the show.
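Rev hasn’t published its internal tooling, but as a rough sketch of what “modular, reusable components” backed by a testing pipeline can look like in Python, something along these lines is plausible. Every name here (Pipeline, FakeRecognizer, Normalizer) is hypothetical, not Rev’s actual code:

```python
# Hypothetical sketch: a modular speech-to-text pipeline with swappable
# stages and a unit test. Illustrative only; not Rev's internal code.
from dataclasses import dataclass
from typing import Protocol


class Stage(Protocol):
    def run(self, data): ...


@dataclass
class FakeRecognizer:
    """Stand-in for a real acoustic model; returns a canned transcript."""
    transcript: str

    def run(self, audio: bytes) -> str:
        return self.transcript


@dataclass
class Normalizer:
    """Lowercases and collapses whitespace so downstream stages see uniform text."""
    def run(self, text: str) -> str:
        return " ".join(text.lower().split())


class Pipeline:
    """Chains independent stages, so each can be tested and reused on its own."""
    def __init__(self, *stages: Stage):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage.run(data)
        return data


# A minimal test, runnable with pytest or plain Python; in a real setup this
# is the kind of check a CI pipeline would run on every container build.
def test_pipeline_normalizes_recognizer_output():
    pipe = Pipeline(FakeRecognizer("  Hello   WORLD "), Normalizer())
    assert pipe.run(b"\x00\x01") == "hello world"


if __name__ == "__main__":
    test_pipeline_normalizes_recognizer_output()
    print("ok")
```

The design choice this sketch gestures at is the one Dong describes: when each stage is an interchangeable component behind a small interface, you can swap a fake recognizer for a real model in tests and ship each piece in its own Docker container.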
We also explore the fascinating promise this technology holds with our third guest, Jenny Drexler. She explains how Rev is moving from a hybrid model (one that combines Jetté’s older statistical techniques with Dong’s newer machine learning approach) to a new system that will be machine learning from end to end. This will open the door to powerful applications, like a single system that can convert speech to text across multiple languages within a single piece of audio.
“One of the things that's really cool about these end-to-end models is that, basically, whatever data you have, it can learn to handle it. So a very similar architecture can do sequence-to-sequence learning with different kinds of sequences. The model architecture that you might use for speech recognition can actually look very similar to what you might use for translation. And you can use that same architecture to, say, feed in audio in lots of different languages and be able to do transcription for any of them within one model. It's much harder with the hybrid models to sort of put all the right pieces together to make that happen,” explains Drexler.
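To make Drexler’s point concrete, here is a minimal, assumption-laden sketch of a generic encoder-decoder in PyTorch: the same class accepts any real-valued feature sequence (audio filterbanks, embedded source-language text, and so on) and emits tokens from a shared vocabulary that could span several languages. All dimensions and names are invented for illustration; this is a toy, not Rev’s architecture:

```python
# Toy illustration: one encoder-decoder architecture that is indifferent to
# what the input sequence represents. Not Rev's model; shapes are arbitrary.
import torch
import torch.nn as nn


class Seq2Seq(nn.Module):
    def __init__(self, input_dim: int, vocab_size: int, hidden: int = 128):
        super().__init__()
        # Encoder consumes any real-valued feature sequence.
        self.encoder = nn.GRU(input_dim, hidden, batch_first=True)
        # Decoder emits tokens from a shared output vocabulary,
        # which could mix several languages.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, features: torch.Tensor, tokens: torch.Tensor):
        _, state = self.encoder(features)         # summarize the input
        dec_in = self.embed(tokens)               # teacher-forced target tokens
        dec_out, _ = self.decoder(dec_in, state)  # condition on encoder state
        return self.out(dec_out)                  # logits over the vocabulary


# The same class works whether `features` are 80-dim audio filterbanks for
# ASR or embedded words for translation; only the data fed in changes.
asr = Seq2Seq(input_dim=80, vocab_size=10_000)
logits = asr(torch.randn(2, 200, 80), torch.randint(0, 10_000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 10000])
```

This is the sense in which “a very similar architecture can do sequence-to-sequence learning with different kinds of sequences”: nothing in the model ties it to one language or even to speech.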
If you’re interested in learning more about the past, present, and future of artificial intelligence that can understand our spoken language and learn how to respond, check out the full episode. If you want to learn more about Rev or check out some of the positions they have open, you can find their careers page here.
The Stack Overflow blog is committed to publishing interesting articles by developers, for developers. From time to time that means working with companies that are also clients of Stack Overflow’s through our advertising, talent, or teams business. When we publish work from clients, we’ll identify it as Partner Content with tags and by including this disclaimer at the bottom.