Europcar, the global car rental company, has firmly denied claims of a data breach, asserting that the supposedly leaked data was fabricated using AI tools.
The controversy erupted when an advertisement surfaced on a notorious data leak forum, alleging the sale of personal information belonging to upwards of 50 million Europcar clients. The advertisement included snippets of supposedly pilfered data pertaining to 31 alleged Europcar customers.
The samples included a range of sensitive details, from names and addresses to birth dates and driver's license numbers, fuelling concerns of a significant breach.
The hacking forum user told TechCrunch that “the data is real” but Europcar was quick to dispel the allegations.
Speaking to BleepingComputer, Europcar asserted that a thorough examination of the sample data led them to conclude that the purported breach was a sham orchestrated using AI. Europcar highlighted various inconsistencies within the dataset, including a discrepancy in the number of records compared to their actual database and inconsistencies in the provided information.
For example, it said none of the email addresses in the data sample were found in Europcar’s database, suggesting that the data may have been artificially generated.
“The sample data is likely ChatGPT-generated (addresses don’t exist, ZIP codes don’t match, first name and last name don’t match email addresses, email addresses use very unusual TLDs),” the company noted, adding that “none of these email addresses are present in our database.”
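The inconsistencies Europcar describes lend themselves to simple automated heuristics. The sketch below is a hypothetical illustration of that kind of check, not Europcar's actual process; the record, field names, and TLD list are all invented for the example.

```python
# Hypothetical heuristics of the sort Europcar describes: flag records whose
# name doesn't match the email address, or whose email uses an unusual TLD.
COMMON_TLDS = {"com", "net", "org", "fr", "de", "uk"}  # illustrative, not exhaustive

def looks_fabricated(record):
    """Return a list of red flags for a leaked-sample record (heuristic only)."""
    flags = []
    local, _, domain = record["email"].partition("@")
    # Name/email mismatch: neither first nor last name appears in the local part.
    if (record["first_name"].lower() not in local.lower()
            and record["last_name"].lower() not in local.lower()):
        flags.append("name does not match email address")
    # Unusual top-level domain.
    tld = domain.rsplit(".", 1)[-1].lower()
    if tld not in COMMON_TLDS:
        flags.append("unusual TLD: ." + tld)
    return flags

sample = {"first_name": "Alice", "last_name": "Martin",
          "email": "bob.jones@example.xyz"}
print(looks_fabricated(sample))
```

Checks like these only raise suspicion; as Europcar noted, the decisive test was that none of the sample email addresses existed in its database.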
Europcar’s assertions were further supported by cybersecurity experts such as Troy Hunt of HaveIBeenPwned, who cast doubt on the authenticity of the leaked data.
Hunt highlighted inconsistencies within the dataset, including mismatched email addresses and fictitious addresses such as “Lake Alyssaberg, DC” and “West Paulburgh, PA.”
However, Hunt pointed out that some email addresses were indeed legitimate, having appeared in previous breaches.
"Usernames and passwords are included, which is bad sure. But the really gnarly part of this one is copies of Drivers License and even Passport info," Matt Johansen (@mattjay) wrote on January 30, 2024.
Addressing the notion of AI involvement, Hunt dismissed it as a sensationalised claim devoid of evidence, stressing that fabricated breaches have been a longstanding issue.
Huseyin Can Yuceel, a security researcher at Picus Security, offered a different perspective, suggesting that the data could have been generated using generative AI tools.
Yuceel suggested that the incident represented a novel attack vector in which threat actors leverage AI to fabricate fake datasets, mounting social engineering attacks aimed at inducing panic and extorting payment.
“A far cry from initial reports of a data breach involving 50 million customers, this incident should be classified as an attempted social engineering attack,” remarked Yuceel.
“In this case, it seems as though attackers tried to create panic and pressure their target into paying ransom for a false claim that they stole sensitive customer data.”
The incident underscores the offensive potential of AI in cyber-attacks, according to Yuceel, who warned businesses to remain vigilant against evolving threats.
“Adversaries are quick to adopt new techniques and tools, and the use of AI in cyber-attacks is becoming more commonplace,” he cautioned.
While the authenticity of the leaked data remains unverified, the incident serves as a stark reminder of the ever-evolving landscape of cyber threats. As businesses navigate the complexities of cybersecurity, the emergence of AI-powered attacks underscores the need for robust defence mechanisms and proactive incident response strategies.
Wanda Parisien is a computing expert who navigates the vast landscape of hardware and software. With a focus on computer technology, software development, and industry trends, Wanda delivers informative content, tutorials, and analyses to keep readers updated on the latest in the world of computing.