New research says a quarter of B.C. residents are concerned about AI threats

New research by an identity and access management company says three-quarters of Canadians fear their identity will be stolen through artificial intelligence (AI) advancements.

Dan Kagan, Senior VP and Country Manager at Okta, says a survey the company conducted across Canada shows that less than a quarter of Canadians are confident in their ability to recognize AI-generated identity fraud attempts.

He points out some key data points about British Columbians:

  • Roughly a quarter of British Columbians are educating themselves about AI
  • Over 30 per cent of British Columbians see their banking accounts as the prime target for AI-driven attacks
  • More than 30 per cent point to social media accounts as the next concern
  • Roughly 5 per cent of British Columbians are concerned about AI threats on their work credentials and email

“When you think about British Columbia as a whole, despite growing concerns over identity theft, I would say about a quarter of British Columbians feel that they require further education about AI and AI threats,” Kagan said.

Kagan says AI threats and cybercrime get lumped into the same category, but in reality, they are different.

He says cybercrime involves first-person identity theft, such as credit card fraud, while AI threats are driven by impersonation, usually built from a person’s online presence.

“They’ve gone from cyber presence to you and me talking like we are right now, and in many cases have been able to infiltrate a look and inflexion, pictures, videos, things that would make the believability of who that particular person is significantly real,” he said.

He says cybercrime turning into AI crime is a big change, one his family was hit with recently.

Kagan tells us that his parents were hit with the “classic grandparent scam,” in which fraudsters tried to steal money from them by pretending to be his son.

This time, he says, the AI threat involved using his son’s voice.

“It wasn’t face-to-face. It wasn’t video, but my parents swore up and down that it was not only my son but his voice inflexion. They used the pet words that they call each other,” he said.

“It became so believable that my mom was legitimately ready to put up thousands of dollars.”

Kagan says a lot of these AI threats are happening because we live in a world where everybody wants “more likes and they want more followers.” He says people’s social media presence has increased, which gives fraudsters access to their personal data.

“They’re able to go into your social media and look at who your friends are,” he said.

“Then look through their friends’ list for the same last names, and make the assumption that that’s a grandparent or a parent.”

Kagan says the best way to avoid AI threats is to take public profiles and make them private.

“A lot of times to reach a mass audience instead of making it private, a lot of people including my children, would leave their profile public for additional likes and additional friends,” he said.

“Inventory your friend list to make sure that you understand who your current followers are, and mitigate the risk of putting too much personal data out there for others to see.”

He says that along with making everything private online, he recommends everyone use multi-factor authentication.

“Please make sure that you’re guarding your data; multi-factor authentication is the simplest technology for bi-directional security,” he said.

“It’s very, very cost-effective. It’s very easy to download.”

He says anything you can do to secure your data will give you a better chance of not falling victim to an AI threat.
