Vijay Balasubramaniyan, Co-Founder & CEO of Pindrop – Interview Series


Vijay Balasubramaniyan is Co-Founder & CEO of Pindrop. He has held various engineering and research roles with Google, Siemens, IBM Research, and Intel.

Pindrop's solutions are leading the way to the future of voice by establishing the standard for identity, security, and trust for every voice interaction. Pindrop's solutions protect some of the world's biggest banks, insurers, and retailers using patented technology that extracts intelligence from every call and voice encountered. Pindrop solutions help detect fraudsters and authenticate genuine customers, reducing fraud and operational costs while improving customer experience and protecting brand reputation. Pindrop, a privately held company headquartered in Atlanta, GA, was founded in 2011 by Dr. Vijay Balasubramaniyan, Dr. Paul Judge, and Dr. Mustaque Ahamad and is venture-backed by Andreessen Horowitz, Citi Ventures, Felicis Ventures, CapitalG, GV, IVP, and Vitruvian Partners. For more information, please visit pindrop.com.

What are the key takeaways from Pindrop’s 2024 Voice Intelligence and Security Report regarding the current state of voice-based fraud and security?

The report provides a deep dive into pressing security issues and future trends, particularly within contact centers serving financial and non-financial institutions. Key findings in the report include:

  • Significant Increase in Contact Center Fraud: Contact center fraud has surged by 60% in the last two years, reaching the highest levels since 2019. By the end of this year, one in every 730 calls to a contact center is expected to be fraudulent.
  • Increasing Sophistication of Attackers Using Deepfakes: Deepfake attacks, including sophisticated synthetic voice clones, are rising, posing an estimated $5 billion fraud risk to U.S. contact centers. This technology is being leveraged to enhance fraud tactics such as automated and high-scale account reconnaissance, voice impersonation, targeted smishing, and social engineering.
  • Traditional methods of fraud detection and authentication are not working: Companies still rely on manual authentication of consumers, which is time-consuming, expensive, and ineffective at stopping fraud. With 350 million victims of data breaches, $12 billion spent yearly on authentication, and $10 billion lost to fraud, the evidence is clear that current security methods are not working.
  • New approaches and technologies are required: Liveness detection is crucial to fighting bad AI and enhancing security. Voice analysis is still important but needs to be paired with liveness detection and multifactor authentication. 

According to the report, 67.5% of U.S. consumers are concerned about deepfakes in the banking sector. Can you elaborate on the types of deepfake threats that financial institutions are facing?

Banking fraud via phone channels is rising due to several factors. Since financial institutions rely heavily on customers to confirm suspicious activity, call centers can become prime targets for fraudsters. Fraudsters use social engineering tactics to deceive customer service representatives, persuading them to remove restrictions or help reset online banking credentials. According to one Pindrop banking customer, 36% of identified fraud calls aimed primarily to remove holds imposed by fraud controls. Another Pindrop banking customer reports that 19% of fraud calls aimed to gain access to online banking. With the rise of generative AI and deepfakes, these kinds of attacks have become more potent and scalable. Now one or two fraudsters in a garage can create any number of synthetic voices and launch simultaneous attacks on multiple financial institutions, amplifying their tactics. This has created an elevated level of risk and concern among consumers about whether the banking sector is prepared to repel these sophisticated attacks.

How have advancements in generative AI contributed to the rise of deepfakes, and what specific challenges do these pose for security systems?

While deepfakes are not new, advancements in generative AI have made them a potent vector over the past year, as they have become more believable at a much larger scale. Advances in GenAI have made large language models more proficient at creating believable speech and language. Natural-sounding synthetic (fake) speech can now be created very cheaply and at large scale. These developments have made deepfakes accessible to everyone, including fraudsters. Deepfakes challenge security systems by enabling highly convincing phishing attacks, spreading misinformation, and facilitating financial fraud through realistic impersonations. They undermine traditional authentication methods, create significant reputational risks, and demand advanced detection technologies to keep up with their rapid evolution and scalability.

How did Pindrop Pulse contribute to identifying the TTS engine used in the President Biden robocall attack, and what implications does this have for future deepfake detection?

Pindrop Pulse played a crucial role in identifying ElevenLabs, the TTS engine used in the President Biden robocall attack. Using our advanced deepfake detection technology, we applied a four-stage analysis process involving audio filtering and cleansing, feature extraction, segment analysis, and continuous scoring. This process allowed us to filter out nonspeech frames, downsample the audio to replicate typical phone conditions, and extract low-level spectro-temporal features.

By dividing the audio into 155 segments and assigning liveness scores, we determined that the audio was consistently synthetic. Using "fakeprints," we compared the audio against 122 TTS systems and identified with 99% likelihood that ElevenLabs or a similar system was used. This finding was validated with an 84% likelihood through the ElevenLabs SpeechAI Classifier. Our detailed analysis revealed deepfake artifacts, particularly in phrases with rich fricatives and expressions uncommon for President Biden.
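To make the segment-scoring idea concrete, here is a minimal Python sketch of splitting a recording into 155 segments, scoring each for liveness, and comparing a "fakeprint" against known TTS systems. The function names, the dummy scorer, and the cosine-similarity metric are illustrative assumptions, not Pindrop's actual implementation.

```python
import numpy as np

def score_segment(segment: np.ndarray, sample_rate: int) -> float:
    # Placeholder for a trained liveness model that would score
    # low-level spectro-temporal features; returns a dummy value
    # here so the sketch stays runnable.
    return 0.1

def segment_liveness_scores(audio: np.ndarray, sample_rate: int,
                            n_segments: int = 155) -> np.ndarray:
    # Divide the (already filtered and downsampled) audio into
    # segments and assign each a liveness score.
    segments = np.array_split(audio, n_segments)
    return np.array([score_segment(s, sample_rate) for s in segments])

def closest_fakeprint(audio_print: np.ndarray,
                      tts_prints: dict[str, np.ndarray]) -> tuple[str, float]:
    # Compare the recording's fakeprint against known TTS system
    # prints; cosine similarity is an assumed metric.
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(tts_prints, key=lambda name: cos(audio_print, tts_prints[name]))
    return best, cos(audio_print, tts_prints[best])
```

A recording whose segments score consistently low on liveness, and whose fakeprint closely matches a known engine, would be flagged as machine-generated.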

This case underscores the importance of our scalable and explainable deepfake detection systems, which improve accuracy, build trust, and adapt to new technologies. It also highlights the need for generative AI systems to incorporate safeguards against misuse, ensuring that voice cloning is consented to by real individuals. Our approach sets a benchmark for addressing synthetic media threats, emphasizing ongoing monitoring and research to stay ahead of evolving deepfake methods.

The report mentions significant concerns about deepfakes affecting media and political institutions. Could you provide examples of such incidents and their potential impact?

Our research has found that U.S. consumers are most concerned about the risk of deepfakes and voice clones in banking and the financial sector. But beyond that, the threat deepfakes pose to our media and political institutions is an equally significant challenge. Outside of the U.S., the use of deepfakes has also been observed in Indonesia (Suharto deepfake) and Slovakia (Michal Šimečka and Monika Tódová voice deepfake).

2024 is a big election year in the U.S. and India. With 4 billion people across 40 countries expected to vote, the proliferation of artificial intelligence technology makes it easier than ever to deceive people on the internet. We expect a rise in targeted deepfake attacks on government institutions, social media companies, other news media, and the general population, intended to create distrust in our institutions and sow disinformation in the public discourse.

Can you explain the technologies and methodologies Pindrop uses to detect deepfakes and synthetic voices in real time?

Pindrop uses a range of advanced technologies and methodologies to detect deepfakes and synthetic voices in real time, including:

  • Liveness detection: Pindrop uses large-scale machine learning to analyze nonspeech frames (e.g., silence, noise, music) and extract low-level spectro-temporal features that distinguish machine-generated speech from genuine human speech.
  • Audio fingerprinting: This involves creating a digital signature for each voice based on its acoustic properties, such as pitch, tone, and cadence. These signatures are then used to compare and match voices across different calls and interactions.
  • Behavior analysis: This analyzes patterns of behavior that seem out of the ordinary, including anomalous access to numerous accounts, rapid bot activity, account reconnaissance, data mining, and robotic dialing.
  • Voice analysis: By analyzing voice features such as vocal tract characteristics, phonetic variations, and speaking style, Pindrop can create a voiceprint for each individual. Any deviation from the expected voiceprint can trigger an alert.
  • Multi-layered security approach: This combines different detection methods to cross-verify results and improve detection accuracy; for instance, audio fingerprinting results can be cross-referenced with biometric analysis to confirm a suspicion (see the sketch after this list).
  • Continuous learning and adaptation: Pindrop continuously updates its models and algorithms, incorporating new data, refining detection methods, and staying ahead of emerging threats. Continuous learning ensures that its detection capabilities improve over time and adapt to new kinds of synthetic voice attacks.
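To illustrate how such layers might be cross-verified, here is a minimal Python sketch that fuses independent detection signals into a single risk score. The signal names, weights, and threshold are assumptions for illustration only, not Pindrop's production logic.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    liveness: float             # 0 = machine-like, 1 = human-like
    fingerprint_match: float    # similarity to known fraudster prints
    behavior_anomaly: float     # anomaly score from behavior analysis
    voiceprint_deviation: float # distance from the expected voiceprint

def combined_risk(signals: CallSignals) -> float:
    # Fuse the layers with a weighted sum. The weights are
    # illustrative, not tuned values; a real system would learn
    # them from labeled fraud data.
    return (0.35 * (1.0 - signals.liveness)
            + 0.25 * signals.fingerprint_match
            + 0.20 * signals.behavior_anomaly
            + 0.20 * signals.voiceprint_deviation)

# A call is escalated when multiple layers agree on elevated risk.
alert = combined_risk(CallSignals(0.2, 0.7, 0.5, 0.6)) > 0.5
```

The benefit of combining layers is that a deepfake which fools any single check (say, a cloned voiceprint) still has to evade the other, independent signals.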

What is the Pulse Deepfake Warranty, and how does it improve customer confidence in Pindrop's ability to handle deepfake threats?

The Pulse Deepfake Warranty is a first-of-its-kind warranty that offers reimbursement against synthetic voice fraud in the call center. As we stand on the brink of a seismic shift in the cyberattack landscape, with potential financial losses expected to soar to $10.5 trillion by 2025, the Pulse Deepfake Warranty enhances customer confidence by offering several key advantages:

  • Enhanced trust: The Pulse Deepfake Warranty demonstrates Pindrop's confidence in its products and technology, offering customers a trustworthy security solution when servicing their account holders.
  • Loss reimbursement: Pindrop customers can receive reimbursements for synthetic voice fraud events undetected by the Pindrop Product Suite.
  • Continuous development: Pindrop customer requests received under the warranty program help Pindrop stay ahead of evolving synthetic voice fraud tactics.

Are there any notable case studies where Pindrop's technologies have successfully mitigated deepfake threats? What were the outcomes?

The Pikesville High School incident: On January 16, 2024, a recording surfaced on Instagram purportedly featuring the principal of Pikesville High School in Baltimore, Maryland. The audio contained disparaging remarks about Black students and teachers, igniting a firestorm of public outcry and serious concern.

In light of these developments, Pindrop undertook a comprehensive investigation, conducting three independent analyses to uncover the truth. The results of our thorough investigation led to a nuanced conclusion: although the January audio had been altered, it lacked the definitive features of AI-generated synthetic speech. Our confidence in this determination is supported by 97% certainty based on our analysis metrics. This pivotal finding underscores the importance of conducting detailed and objective analyses before making public declarations about the nature of potentially manipulated media.

At a large U.S. bank, Pindrop discovered that a fraudster was using synthetic voice to bypass authentication in the IVR. We found that the fraudster was using machine-generated voice to bypass IVR authentication for targeted accounts, providing the right answers to the security questions and, in one case, even passing one-time passwords (OTPs). Bots that successfully authenticated in the IVR identified accounts worth targeting via basic balance inquiries. Subsequent calls into these accounts came from a real human to perpetrate the fraud. Pindrop alerted the bank to this fraud in real time using Pulse technology and was able to stop the fraudster.

At another financial institution, Pindrop found that some fraudsters were training their own voicebots to mimic bank automated response systems. In what seemed like a bizarre first call, a voicebot called into the bank's IVR not to do account reconnaissance but to repeat the IVR prompts. Multiple calls came into different branches of the IVR conversation tree, and every two seconds the bot would restate what it heard. A week later, more calls were observed doing the same, but this time the voicebot repeated the phrases in exactly the same voice and mannerisms as the bank's IVR. We believe a fraudster was training a voicebot to mirror the bank's IVR as the starting point of a smishing attack. With the help of Pindrop Pulse, the financial institution was able to thwart this attack before any damage was caused.

Independent NPR audio deepfake experiment: Digital security is a constantly evolving arms race between fraudsters and security technology providers. Several providers, including Pindrop, have claimed to detect audio deepfakes consistently, so NPR put these claims to the test to assess whether current technology solutions are capable of detecting AI-generated audio deepfakes on a consistent basis.

Pindrop Pulse correctly classified 81 of the 84 audio samples, translating to a 96.4% accuracy rate. Additionally, Pindrop Pulse detected 100% of all deepfake samples as such. While other providers were also evaluated in the study, Pindrop emerged as the leader by demonstrating that its technology can reliably and accurately detect both deepfake and genuine audio.

What future developments in voice-based fraud and security do you foresee, especially with the rapid development of AI technologies? How is Pindrop preparing to handle these?

We expect contact center fraud to continue rising in 2024. Based on the year-to-date analysis of fraud rates across verticals, we conservatively estimate the fraud rate to reach 1 in every 730 calls, representing a 4-5% increase over current levels.

Much of the increased fraud is expected to affect the banking sector, as insurance, brokerage, and other financial segments are expected to remain around current levels. We estimate that these fraud rates represent a fraud exposure of $7 billion for financial institutions in the U.S., which needs to be secured. However, we anticipate a significant shift, particularly with fraudsters employing IVRs as a testing ground. Recently, we have observed an increase in fraudsters manually inputting personally identifiable information (PII) to verify account details.

To help combat this, we will continue to both advance Pindrop's current solutions and launch new and innovative tools, like Pindrop Pulse, that protect our customers.

Beyond current technologies, what new tools and methods are being developed to enhance voice fraud prevention and authentication?

Voice fraud prevention and authentication methods are continuously evolving to keep pace with advancements in technology and the sophistication of fraudulent activities. Some emerging tools and methods include:

  • Continuous fraud detection & investigation: This provides a historical "look-back" at fraud cases using new information that becomes available. With this approach, fraud analysts can "listen" for new fraud signals, scan for historical calls that may be related, and rescore those calls, giving companies a continuous and comprehensive perspective on fraud in real time (a minimal rescoring sketch follows this list).
  • Intelligent voice analysis: Traditional voice biometric systems are vulnerable to deepfake attacks. To strengthen their defenses, new technologies such as Voice Mismatch and Negative Voice Matching are needed. These provide an additional layer of defense by recognizing and differentiating multiple voices and repeat callers, and by identifying where a different-sounding voice may pose a threat.
  • Early fraud detection: Fraud detection technologies that provide a fast and reliable fraud signal early in the call process are invaluable. In addition to liveness detection, technologies such as carrier metadata analysis, caller ID spoof detection, and audio-based spoof detection provide protection against fraud attacks at the start of a conversation, when defenses are at their most vulnerable.
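As referenced in the first item above, here is a minimal Python sketch of the look-back idea: when an updated fraud model becomes available, stored calls are rescored and newly suspicious ones are surfaced to analysts. The class, function names, and threshold are hypothetical, used only to illustrate the workflow.

```python
from typing import Callable

class CallRecord:
    """Stored features and the latest risk score for one call."""
    def __init__(self, call_id: str, features: dict, risk: float):
        self.call_id = call_id
        self.features = features
        self.risk = risk

def rescore_history(history: list[CallRecord],
                    updated_model: Callable[[dict], float],
                    alert_threshold: float = 0.8) -> list[str]:
    # Re-run the updated fraud model over previously scored calls;
    # calls that now exceed the alert threshold are returned so
    # analysts can investigate possibly related historical fraud.
    flagged = []
    for call in history:
        call.risk = updated_model(call.features)
        if call.risk >= alert_threshold:
            flagged.append(call.call_id)
    return flagged
```

The design point is that a fraud signal discovered today retroactively improves coverage of yesterday's traffic, rather than only protecting future calls.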

Thank you for the great interview. To learn more, read Pindrop's 2024 Voice Intelligence and Security Report or visit Pindrop.
