OpenAI has published a new report, and according to the details it shares, the company decided to release it for security reasons. OpenAI has uncovered some alarming ways people have been abusing ChatGPT lately, including romance scams, covert influence campaigns, and outright fraud targeting people all over the world.
OpenAI flags misuse of ChatGPT in new report
OpenAI identified a handful of accounts, some of which it said were tied to Chinese law enforcement. Those accounts have since been banned, but they posed a serious threat to users in India. OpenAI called this a blatant misuse of its AI.
‘Romance Scams’ and ‘Dating Frauds’
- Dating scams – According to the report, scammers targeted men in Indonesia, tricking more than a hundred people through fake online romances.
- They used ChatGPT to write slick ads, craft convincing messages, and generate all sorts of fake content for dating sites.
- The pitch promised big money for completing a ‘simple’ task.
- Sometimes the scam went even further: scammers posed as lawyers and pressured victims into sending money.
- The scammers leaned on AI-generated chats to make everything feel real.
Influence campaigns and political targeting
Apparently, some of these accounts tried to run influence campaigns—one even targeted Japan’s first female prime minister. OpenAI did not spell out exactly how the operations worked, but it said they were coordinated, calculated efforts. Other fake profiles posed as policy experts or paid consultants, emailing US state officials to look legitimate and try to sway them.
Data collection and cybercrime concerns
- The report also points to some classic cybercrime moves.
- OpenAI told US authorities that some banned accounts used ChatGPT alongside other tools to collect sensitive information.
- They were after data about Americans, online forums, and even the locations of federal buildings. In a few cases, people tried to use ChatGPT to figure out how facial recognition software works, another obvious security red flag.
OpenAI’s response and crackdown
- OpenAI says it is always watching for trouble and acts fast when it spots it.
- Every account linked to these shady activities has been banned.
- The bigger picture is clear: as AI gets more powerful, the risks grow, too.
OpenAI said it is working closely with authorities and ramping up its safety measures to keep this kind of abuse from happening again.
