
Ava expands its AI captioning to desktop and web apps, and raises $4.5M to scale

The worldwide shift to virtual workplaces has been a blessing and a curse to people with hearing impairments. Having office chatter occur in text rather than speech is more accessible, but virtual meetings are no easier to follow than in-person ones — which is why real-time captioning startup Ava has seen a huge increase in users. Riding the wave, the company just announced two new products and a $4.5 million seed round.

Ava previously made its name in the deaf community as a useful live transcription tool for in-person conversations. Start the app up and it would instantly hear and transcribe speech around you, color-coded to each speaker (and labeled by name if they activate a QR code). Extremely useful, of course, but when meetings stopped being in rooms and started being in Zooms, things got a bit more difficult.

“Use cases have shifted dramatically, and people are discovering the fact that most of these tools are not accessible,” co-founder and CEO Thibault Duchemin told TechCrunch.

And while some tools have limited captioning built in (Skype and Google Meet, for example), those captions may not be saved, editable, accurate, or convenient to review. Meet's ephemeral captions, for instance, useful as they are, last only a moment before disappearing and are not attributed to individual speakers, making them of limited use for a deaf or hard of hearing person trying to follow a multi-person call. The languages they are available in are limited as well.

As Duchemin explained, it began to seem much more practical to have a separate transcription layer that is not specific to any one service.

Illustration of a laptop and phone transcribing audio. Image Credits: Ava

Hence Ava’s new product: a desktop and web app called Closed Captioning, which works with all major meeting services and online content, captioning it with the same on-screen display and making transcripts accessible via the same account. That includes YouTube videos without subtitles, live web broadcasts, and even audio-only content like podcasts, in more than 15 languages.

Individual speakers are labeled, either automatically when the app supports it (as Zoom does) or by having people in the meeting click a link that attaches their identity to the sound of their voice. (There are questions of privacy and confidentiality here, but they will differ case by case and are secondary to the fundamental capability of a person to participate.)

The transcripts all go to the person’s Ava app, letting them check through at their leisure or share with the rest of the meeting. That in itself is a hard service to find, Duchemin pointed out.

“It’s actually really complicated,” he said. “Today if you have a meeting with four people, Ava is the only technology where you can have accurate labeling of who said what, and that’s extremely valuable when you think about enterprise.” Otherwise, he said, unless someone is taking detailed notes — unlikely, expensive, and time-consuming — meetings tend to end up black boxes.

For such high-quality transcription, speech-to-text AI isn’t good enough, he admitted. It’s enough to follow a conversation, but “we’re talking about professionals and students who are deaf or hard of hearing,” Duchemin said. “They need solutions for meetings and classes and in-person, and they aren’t ready to go full AI. They need someone to clean up the transcript, so we provide that service.”

Features of the Ava app. Image Credits: Ava

Ava Scribe quickly brings in a human trained not in direct transcription but in the correction of the product of speech-to-text algorithms. That way a deaf person attending a meeting or class can follow along live, but also be confident that when they check the transcript an hour later it will be exact, not approximate.

Right now transcription tools are being used as value-adds to existing products and suites, he said — ways to attract or retain customers. They aren’t beginning with the community of deaf and hard of hearing professionals and designing around their needs, which is what Ava has striven to do.

The explosion in the platform’s popularity and its obvious utility brought about the $4.5 million seed round as well, led by Initialized Capital and Khosla Ventures.

Duchemin said they expected to double the size of their team with the money, and start really marketing and finding big customers. “We’re very specialized, so we need a strong business model to grow,” he said. A strong, unique product is a good place to start, though.
