I spoke with Patrick Harding, Chief Product Architect at Ping Identity, about how companies can prevent identity fraud in today’s AI-driven enterprise environment.
As an identity and access management vendor, Ping Identity focuses on providing authentication, verification, and authorization technologies to enterprises for both their workforce and their customers.
As Harding explained, the need for these authentication and verification services has grown dramatically as artificial intelligence has made attackers far more sophisticated.
“Traditionally, scams might have involved phone calls or emails to make you believe something,” he said. “Deepfakes and generative AI have made those scams even harder to detect and easier to implement.
“Now, rather than getting an email, you might get a phone call or a voicemail with a deepfake voice that you recognize, or you might see a video with a deepfake face that you recognize. Or even phishing emails that are now so targeted they’re written in the style of the person being imitated.” Stopping these attacks requires strategies that keep pace with the attackers’ own technology.
Watch the full interview or jump to select interview highlights below.
Interview Highlights: Patrick Harding on Preventing Identity Fraud
This interview took place at the recent RSA Conference in San Francisco. The comments below have been edited for length and clarity.
The Importance of Training
Deepfake technology and other deceptive technologies enabled by AI are only going to get better, Harding said. “It’s going to become a cat and mouse game.”
“To deal with that, we’re going to have to do a lot to educate and train users to say, look, you are not going to recognize and understand these deepfakes. So you need to be aware of them.
“You now need to think, alright, if that message that I get, that voicemail that I get, is asking me to do something with a higher risk type of transaction – move money, reset a password, something like that – I need to verify and establish explicit trust that this is actually occurring and is necessary. So there’s a lot of education that’s going to have to occur, unfortunately.”
Biometrics and Private Keys: Decentralized Identity
There are already a number of techniques available to boost authentication, Harding said, pointing to technologies like one-time passwords and multifactor authentication.
“But those things tend to involve a certain amount of friction. You’re not going to take a photo of your driver’s license every time you want to log in or every time you need to interact with a service.”
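For context, the one-time passwords Harding mentions are typically generated with the TOTP algorithm (RFC 6238), which derives a short-lived code from a shared secret and the current time. Here is a minimal sketch using only Python’s standard library; the secret below is a made-up example, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the last nibble of the digest, then reduce to N digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret only; real secrets are provisioned at MFA enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and both sides derive it from the shared secret, a stolen code is useless moments later, which is what makes OTPs stronger than static passwords despite the extra friction.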
To reduce that friction, he explained, the industry is moving toward decentralized identity.
“This is where my identity information is actually stored in my smartphone and can only be unlocked by me. It could be a local biometric, like a Face ID type of thing. And that information is secured with a private key, like a cryptographic private key that is extremely difficult to reproduce. So no generative AI is going to reproduce that. And now my identity can be shared from my decentralized identity wallet on my smartphone with different services.
“So if I’m talking to you on the phone and I’m not sure it’s you, alright, I might ask you, Hey, ping me a notification through your wallet to prove that this is really you.
“We think that decentralized identity is really going to help deal with a number of these security issues we’re seeing right now. We’re eliminating the implicit trust that we’ve had on some of these channels where deepfakes are being used and replacing it with sort of an out-of-band, explicit trust, essentially using decentralized identity to verify.”
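To make the pattern concrete: the wallet holds a private key that never leaves the device and is unlocked by the local biometric, while a service (or a skeptical colleague) verifies the holder by sending a fresh challenge and checking the wallet’s signature against a public key it already trusts. The sketch below shows that out-of-band, challenge-response flow using Ed25519 from Python’s `cryptography` package; the library choice and flow are illustrative, not Ping Identity’s implementation:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the phone: generated once inside the wallet, unlocked by a
# local biometric, and never exported from the device.
wallet_key = Ed25519PrivateKey.generate()

# Shared with the verifying party during enrollment.
trusted_public_key = wallet_key.public_key()

# 1. The verifier sends a fresh, single-use challenge out of band,
#    e.g., as a push notification to the wallet app.
challenge = os.urandom(32)

# 2. The wallet holder approves with a biometric; the wallet signs.
signature = wallet_key.sign(challenge)

# 3. A valid signature over the verifier's own fresh nonce proves
#    control of the wallet. A deepfaked voice or a replayed message
#    cannot produce it.
try:
    trusted_public_key.verify(signature, challenge)
    print("explicit trust established: this is really them")
except InvalidSignature:
    print("verification failed")
```

The fresh random nonce is what turns implicit trust in the channel into the explicit trust Harding describes: even a perfect imitation of a voice or face cannot sign a challenge it has never seen without the key locked inside the wallet.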