upper waypoint

AI Software Vulnerable to Attacks by Both Professional and Amateur Hackers

Save ArticleSave Article
Failed to save article

Please try again

An illustration of a hand holding a smartphone.
Some technology experts experts say they have detected major cybersecurity vulnerabilities in certain artificial intelligence platforms.  (Illustration by Anna Vignet/KQED)

A few weeks ago, white hat hackers — remember, those are the good kind — identified a vulnerability in the software code powering Chattr, a Florida-based “AI-powered” hiring platform.

The backdoor these hackers found gave them easy access to names, phone numbers, email addresses, passwords and more. Because Chattr is a hiring platform, personal details belong to job seekers and hiring managers across the country, mostly in fast food and retail.

“A slip-up, a misconfiguration when creating their website and everything that goes with it,” said 19-year-old Paul, who asked that we not use his full name.

He’s a New Zealand university student and sort of a hacker hobbyist. He writes a cybersecurity blog using the pen name “MrBruh,” and his post about Chattr is titled, “How I pwned half of America’s fast food chains, simultaneously.” The term “pwned,” by the way, means compromised.

“It’s a very competitive market, so people have to get their products up and going before anyone else can. Because of that, shortcuts get made,” Paul said.

Paul and a couple of friends who conducted the hack with him said they contacted Chattr. The company didn’t respond to them personally, but in a LinkedIn post, wrote, “Our engineering team acted swiftly, initiating a comprehensive investigation to determine the extent of the breach. We are pleased to report that the vulnerability has been fixed.” Paul confirmed Chattr fixed the problem within a day of being alerted.

Sponsored

But there are plenty of other chatbot vulnerabilities yet to be discovered, and not always by white hat hackers.

“We already live in an era of proliferating ransomware and malware. And we’re adding a new layer of vulnerabilities,” said Irina Raicu, who directs the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University.

Raicu noted that, in the age of the internet, most companies have systems in place to protect against malicious hackers, but they’re widely understood to be inadequate.

“Yes, and not just companies. It’s also a huge problem for the government, for national security, for education, for the entire healthcare system,” Raicu said.

Artificial intelligence can be helpful for those tasked with protecting software systems. But the same technology serves the other side of the conflict.

“All types of cyber threat actors — state and non-state, skilled and less-skilled — are already using AI, to varying degrees,” as one recent report from the U.K.’s National Security Cyber Centre put it.

The same report goes on to warn the growing sophistication of AI “lowers the barrier” for amateur cybercriminals and hackers to access systems and gather information, extract sensitive data, paralyze computer systems, and demand ransoms.

In a report released on Jan. 25, the Identity Theft Resource Center, which tracks publicly available information about data breaches, noted: “The availability of compromised consumer data and the use of large language models [LLMs] is already resulting in vastly improved phishing lures and highly effective social engineering attacks that are driving financial losses for businesses and individuals.”

The unsolved cybersecurity issues with AI chatbots, Raicu said, are likely to make us all much more vulnerable on multiple fronts. Primarily because bad or confused actors inside and outside organizations now have tools that allow them to corrupt the data a chatbot is working with or have it execute commands that should not be executed.

“We’re hearing all this talk about AI governance and about responsible development and deployment of AI systems. Those conversations, if they don’t include a component about cybersecurity, then they’re not really doing what they’re claiming to be doing,” Raicu said.

In some states, like California, businesses and state agencies are legally required to take reasonable measures to protect personal information and report big data breaches to affected consumers — for what it’s worth.

lower waypoint
next waypoint
Pro-Palestinian Protests Sweep Bay Area College Campuses Amid Surging National MovementAt Least 16 People Died in California After Medics Injected Sedatives During Police EncountersState Court Upholds Alameda County Tax Measure Yielding Hundreds of Millions for Child CareYouth Takeover: Parents (and Teachers) Just Don't UnderstandSan José Adding Hundreds of License Plate Readers Amid Privacy and Efficacy ConcernsCalifornia Law Letting Property Owners Split Lots to Build New Homes Is 'Unconstitutional,' Judge RulesViolence Escalates in Sudan as Civil War Enters Second YearSF Emergency Dispatchers Struggle to Respond Amid Outdated Systems, Severe UnderstaffingCalifornia Regulators Just Approved New Rule to Cap Health Care Costs. Here's How It WorksLess Than 1% of Santa Clara County Contracts Go to Black and Latino Businesses, Study Shows