Back in 2016, Hammad Syed and Mahmoud Felfel, an ex-WhatsApp engineer, thought it'd be neat to build a text-to-speech Chrome extension for Medium articles. The extension, which could read any Medium story aloud, was featured on Product Hunt. A year later, it spawned an entire business.
“We saw a bigger opportunity in helping individuals and organizations create realistic audio content for their applications,” Syed told TechCrunch. “Without the need to build their own model, they could deploy human-quality speech experiences faster than ever before.”
Syed and Felfel's company, PlayAI (previously PlayHT), pitches itself as the “voice interface of AI.” Customers can choose from a number of predefined voices, or clone a voice, and use PlayAI's API to integrate text-to-speech into their apps.
Toggles let users adjust the intonation, cadence, and tenor of voices.
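PlayAI's actual API schema isn't documented here, so purely as a loose sketch, an integration could look something like the Python below; the endpoint URL and every field name (including the intonation, cadence, and tenor knobs) are assumptions, not PlayAI's real interface.

```python
import requests

# Hypothetical endpoint and field names, for illustration only;
# PlayAI's real API surface is not documented in this article.
API_URL = "https://api.playai.example/v1/tts"

def synthesize(text: str, voice_id: str, api_key: str) -> bytes:
    """Request synthesized speech, tweaking delivery via style toggles."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "text": text,
            "voice": voice_id,
            # Stand-ins for the intonation/cadence/tenor toggles mentioned
            # above; the actual parameter names and values may differ.
            "intonation": "neutral",
            "cadence": 1.0,  # relative speaking rate
            "tenor": "warm",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.content  # raw audio bytes

# audio = synthesize("Hello from a cloned voice.", "voice-123", "YOUR_KEY")
```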
PlayAI also offers a “playground” where users can upload a file to generate a read-aloud version, plus a dashboard for creating more polished audio narrations and voiceovers. Recently, the company got into the “AI agents” game with tools that can be used to automate tasks such as answering customer calls at a business.
One of PlayAI's more fascinating experiments is PlayNote, which transforms PDFs, videos, photos, songs, and other files into podcast-style shows, read-aloud summaries, one-on-one debates, and even children's stories. Like Google's NotebookLM, PlayNote generates a script from an uploaded file or URL and feeds it to a collection of AI models, which together craft the finished product.
I gave it a whirl, and the results weren't half bad. PlayNote's “podcast” setting produces clips more or less on par with NotebookLM's in terms of quality, and the tool's ability to ingest photos and videos makes for some fascinating creations. Given a photo of a chicken mole dish I had recently, PlayNote wrote a five-minute podcast script about it. Truly, we're living in the future.
Granted, the tool, like all AI tools, generates odd artifacts and hallucinations every so often. And while PlayNote will do its best to adapt a file to the format you've chosen, don't expect, say, a dry legal filing to make for the best source material. See: the Musk v. OpenAI lawsuit framed as a bedtime story:
PlayNote's podcast format is made possible by PlayAI's latest model, PlayDialog, which Syed says can use the “context and history” of a conversation to generate speech that reflects the conversation's flow. “Using a conversation's historical context to control prosody, emotion, and pacing, PlayDialog delivers conversation with natural delivery and appropriate tone,” he continued.
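To make that idea concrete: a history-conditioned dialogue model takes the preceding turns as input alongside the next line, so delivery can follow the flow of the exchange rather than reading each line cold. The sketch below is purely illustrative; the endpoint, turn format, and field names are assumptions, not PlayDialog's actual interface.

```python
import requests

# Purely illustrative; PlayDialog's real request format is an assumption here.
DIALOG_URL = "https://api.playai.example/v1/dialog-tts"

# Prior turns give the model conversational context to condition on.
history = [
    {"speaker": "host", "text": "Welcome back! Today: chicken mole."},
    {"speaker": "guest", "text": "Honestly, the sauce is the whole story."},
]

def speak_next_turn(history: list[dict], speaker: str, text: str,
                    api_key: str) -> bytes:
    """Synthesize the next turn, letting the model condition prosody,
    emotion, and pacing on the preceding turns instead of treating the
    line in isolation."""
    response = requests.post(
        DIALOG_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"history": history, "speaker": speaker, "text": text},
        timeout=60,
    )
    response.raise_for_status()
    return response.content  # raw audio bytes

# audio = speak_next_turn(history, "host", "Let's dig into that sauce.", "YOUR_KEY")
```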
PlayAI, which competes closely with ElevenLabs, has been criticized in the past for its laissez-faire approach to safety. The company's voice cloning tool requires that users check a box indicating that they “have all the necessary rights or consent” to clone a voice, but there's no enforcement mechanism. I had no trouble creating a clone of Kamala Harris' voice from a recording.
That's concerning, given the potential for scams and deepfakes.
![PlayDialog](https://techcrunch.com/wp-content/uploads/2024/11/ArMeY8S3SxQcxIgVK42xkn9oduw.avif?w=680)
PlayAI also claims that it automatically detects and blocks “sexual, offensive, racist, or threatening content.” But that wasn't the case in my testing. I used the Harris clone to generate speech I frankly can't embed here and never once saw a warning message.
Meanwhile, PlayNote's community portal, which is full of publicly generated content, has files with explicit titles like “Woman Performing Oral Sex.”
Syed tells me that PlayAI responds to reports of voices cloned without consent, like this one, by blocking the user responsible and removing the cloned voice immediately. He also makes the case that PlayAI's highest-fidelity voice clones, which require 20 minutes of voice samples, are priced higher ($49 per month billed annually, or $99 per month) than most scammers are willing to pay.
“PlayAI has several ethical safeguards in place,” Syed said. “We've implemented robust mechanisms to identify whether a voice was synthesized using our technology, for example. If any misuse is reported, we promptly verify the origin of the content and take decisive actions to rectify the situation and prevent further ethical violations.”
I'd certainly hope that's the case, and that PlayAI moves away from marketing campaigns featuring dead tech celebrities. If PlayAI's moderation isn't robust, it could face legal challenges in Tennessee, which has a law on the books preventing platforms from hosting AI that makes unauthorized recordings of a person's voice.
PlayAI's approach to training its voice-cloning AI is also a bit murky. The company won't reveal where it sourced the data for its models, ostensibly for competitive reasons.
“PlayAI uses mostly open data sets, [as well as licensed data] and proprietary data sets that are built in-house,” Syed said. “We don't use user data from the products in training, or creators to train models. Our models are trained on millions of hours of real-life human speech, delivering voices in male and female genders across multiple languages and accents.”
Most AI models are trained on public web data, some of which may be copyrighted or under a restrictive license. Many AI vendors argue that the fair-use doctrine shields them from copyright claims. But that hasn't stopped data owners from filing class action lawsuits alleging that vendors used their data sans permission.
PlayAI hasn't been sued. However, its terms of service suggest it won't go to bat for users if they find themselves under legal threat.
Voice cloning platforms like PlayAI face criticism from actors who fear that voice work will eventually be replaced by AI-generated vocals, and that actors will have little control over how their digital doubles are used.
The Hollywood actors' union SAG-AFTRA has struck deals with some startups, including online talent marketplace Narrativ and Replica Studios, for what it describes as “fair” and “ethical” voice cloning arrangements. But even those tie-ups have come under intense scrutiny, including from SAG-AFTRA's own members.
In California, laws require companies relying on a performer's digital replica (e.g., a cloned voice) to give a description of the replica's intended use and to negotiate with the performer's legal counsel. They also require that entertainment employers gain the consent of a deceased performer's estate before using a digital clone of that person.
Syed says that PlayAI “guarantees” that every voice clone generated on its platform is unique to its creator. “This exclusivity is vital for protecting the creative rights of users,” he added.
The growing legal burden is one headwind for PlayAI. Another is the competition. Papercup, Deepdub, Acapela, Respeecher, and Voice.ai, as well as big tech incumbents Amazon, Microsoft, and Google, offer AI dubbing and voice cloning tools. The aforementioned ElevenLabs, one of the highest-profile voice cloning vendors, is said to be raising new funds at a valuation over $3 billion.
PlayAI isn't struggling to find investors, though. This month, the Y Combinator-backed company closed a $20 million seed round led by Kindred Ventures, bringing its total capital raised to $21 million. Race Capital and 500 Global also participated.
“The new capital will be used to invest in our generative AI voice models and voice agent platform, and to shorten the time for businesses to build human-quality speech experiences,” Syed said, adding that PlayAI plans to grow its 40-person team.