A new study has warned that generative AI has made it vastly easier for malicious hackers to uncover the real identities of anonymous social media users. According to a report by The Guardian citing the study, large language models (LLMs) can now successfully match anonymous online profiles with actual identities by synthesizing seemingly harmless information posted across different platforms.
What did the researchers find?
The report notes that AI researchers Simon Lermen and Daniel Paleka found that the technology behind platforms like ChatGPT makes sophisticated privacy attacks highly cost-effective. The researchers ran an experiment in which they fed anonymous accounts into an AI and instructed it to scrape all the information it could. The hypothetical scenario included an anonymous user mentioning his struggles at school and walks with his dog, Biscuit, through Dolores Park.
The AI then ran searches across other platforms for these specific details, successfully matching the anonymous handle to a known, real-world identity with a high degree of confidence.
The researchers warn that the expertise required to perform more sophisticated attacks is now much lower, with hackers needing only an internet connection and access to publicly available language models. They say this forces a “fundamental reassessment of what can be considered private online”.
The report does note that LLMs are not a magic weapon: while they can de-anonymise accounts in many situations, sometimes there is not enough information for the model to draw conclusions, while in other cases the number of potential matches may be too large to narrow down.
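The trade-off between "not enough information" and "too many matches" can be made concrete with a rough back-of-the-envelope calculation: each seemingly harmless detail shared by a fraction of the population shrinks the pool of possible identities. The sketch below is purely illustrative, using assumed population figures and detail frequencies rather than anything from the study.

```python
import math

# Illustrative only: the population size and all frequencies below are
# assumptions for demonstration, not figures from the study.
population = 800_000  # assumed population of a large city

# Assumed fraction of that population matching each detail in a post.
details = {
    "walks a dog": 0.25,
    "dog is named Biscuit": 0.001,   # assumed to be an uncommon name
    "visits Dolores Park": 0.02,
    "mentions struggles at school": 0.1,
}

# A detail shared by fraction p contributes log2(1/p) bits of identifying
# information; treating details as independent, the bits simply add up.
bits = sum(math.log2(1 / p) for p in details.values())

# Expected size of the remaining anonymity set after combining all details.
expected_matches = population * math.prod(details.values())

print(f"Combined identifying information: {bits:.1f} bits")
print(f"Expected candidates remaining: ~{expected_matches:.2f}")
```

Under these assumed numbers the four details together carry roughly 21 bits, leaving well under one expected candidate, which is why "harmless" details can pinpoint a person; with fewer or more common details, hundreds of candidates would remain and the model could not narrow them down.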
What are the primary risks?
While the example cited by the researchers was fictional, they go on to highlight several scenarios where these new AI capabilities could be misused, including governments using AI to surveil anonymous dissidents and activists, or hackers gaining the ability to launch “highly personalised” scams.
Lermen also warned that publicly available information can be “misused straightforwardly” for scams, such as spear-phishing, where a hacker poses as a trusted friend to trick victims into clicking malicious links.
How can platforms and users protect themselves?
To combat the growing threat, Lermen recommends that social media platforms take the first step by restricting data access. This includes enforcing rate limits on user data downloads, detecting automated scraping bots, and restricting the bulk export of data. The researcher also noted that individual users need to take greater precautions regarding the specific personal details they choose to share online.