Amazon today announced a long-form Alexa speaking style for news and music content within third-party skills (i.e., voice apps). Starting this week in the U.S., Alexa voice app developers will be able to use a long-form voice that’s optimized for large amounts of information, like articles and podcasts. For example, they could use it to read web pages or a storytelling portion of a game.
The new speaking style could improve experiences by making read-aloud text sound more natural, and by extension boost overall user engagement. Additionally, it could save developers money and effort by eliminating both the need to hire professional voice actors and the hours spent recording audio in a studio.
Amazon says the long-form speaking style is powered by a machine learning text-to-speech model that incorporates natural pauses when transitioning from one paragraph to the next, or from one character’s dialogue to another’s. That’s akin to a recently launched Google Assistant feature that reads long-form website and Android app content in a more natural, humanlike voice.
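In practice, developers apply speaking styles like this through SSML markup in their skill’s responses. Here’s a minimal sketch of what that could look like in a Python-based Alexa skill backend, assuming the long-form style is selected with Alexa’s `amazon:domain` SSML tag (the handler, sample text, and response shape are purely illustrative):

```python
def lambda_handler(event, context):
    # Wrap the spoken text in SSML and request the long-form speaking style.
    # The "long-form" domain name is an assumption based on Amazon's announcement.
    ssml = (
        "<speak>"
        '<amazon:domain name="long-form">'
        "Chapter one. The storm had finally passed, and the town began to stir."
        "</amazon:domain>"
        "</speak>"
    )

    # Standard Alexa custom-skill response envelope with SSML output speech.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
```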
Beyond the long-form speaking style, Amazon says that developers can now use the news and conversational speaking styles from Amazon Polly, Amazon’s cloud service that converts text into lifelike speech, for select voices — Matthew, Joanna, and Lupe — in Alexa skills. The news speaking style sounds similar to what you might hear from TV news anchors and radio hosts, while the conversational speaking style makes the voices sound less formal, as if they’re speaking to friends and family.
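Outside of Alexa skills, the same styles can be requested directly from Polly. The sketch below uses boto3 and assumes the news style is selected with the `amazon:domain` SSML tag and Polly’s neural engine; the sample text and output filename are illustrative:

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Request the newscaster-style delivery via SSML.
ssml = (
    "<speak>"
    '<amazon:domain name="news">'
    "Shares of the company rose five percent in early trading."
    "</amazon:domain>"
    "</speak>"
)

response = polly.synthesize_speech(
    Engine="neural",      # the speaking styles rely on Polly's neural TTS engine
    VoiceId="Matthew",    # one of the voices named in the announcement
    TextType="ssml",
    Text=ssml,
    OutputFormat="mp3",
)

# Write the synthesized audio to disk.
with open("newscast.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```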
Amazon detailed its work on AI-generated speech in a research paper late last year (“Effect of data reduction on sequence-to-sequence neural TTS”), in which researchers described a system that can learn to adopt a new speaking style from just a few hours of training data — as opposed to the tens of hours it might take a voice actor to read in a target style.
Amazon’s AI model consists of two components. The first is a generative neural network that converts a sequence of phonemes into a sequence of spectrograms, or visual representations of the spectrum of frequencies of sound as they vary with time. The second is a vocoder that converts those spectrograms into a continuous audio signal.
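That division of labor can be pictured with a toy two-stage pipeline. The sketch below is purely illustrative (the module names, layers, and dimensions are invented and are not Amazon’s actual model): an acoustic network maps phoneme IDs to spectrogram frames, and a vocoder turns those frames into a waveform.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Stage 1 (illustrative): maps a sequence of phoneme IDs to mel-spectrogram frames."""
    def __init__(self, n_phonemes=80, emb_dim=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb_dim)
        self.encoder = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.to_mel = nn.Linear(emb_dim, n_mels)

    def forward(self, phoneme_ids):              # (batch, time)
        x = self.embed(phoneme_ids)              # (batch, time, emb_dim)
        x, _ = self.encoder(x)                   # (batch, time, emb_dim)
        return self.to_mel(x)                    # (batch, time, n_mels)

class Vocoder(nn.Module):
    """Stage 2 (illustrative): converts spectrogram frames into a continuous audio signal."""
    def __init__(self, n_mels=80, hop_length=256):
        super().__init__()
        # A single linear upsampler stands in for a real neural vocoder.
        self.upsample = nn.Linear(n_mels, hop_length)

    def forward(self, mel):                      # (batch, time, n_mels)
        audio = self.upsample(mel)               # (batch, time, hop_length)
        return audio.reshape(audio.size(0), -1)  # (batch, samples)

# Toy end-to-end run: one utterance of 20 phonemes.
phonemes = torch.randint(0, 80, (1, 20))
mel = AcousticModel()(phonemes)
waveform = Vocoder()(mel)
print(mel.shape, waveform.shape)
```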
The end result? An AI model-training method that combines a large amount of neutral-style speech data with only a few hours of supplementary data in the desired style, and an AI system capable of distinguishing elements of speech both independent of a speaking style and unique to that style. Amazon has used it internally to produce new voices for Alexa, as well as developer-facing voices across several languages in Amazon Polly.
Finally, Amazon says that developers can use 10 additional Amazon Polly voices in 6 new languages, including U.S. English, U.S. Spanish, Canadian French, Brazilian Portuguese, and more.