Hackers have devised a way to bypass ChatGPT’s restrictions and are using it to sell services that allow people to create malware and phishing emails, researchers said on Wednesday.
The bypass is simple, and hackers are already using it to sell illicit services in an underground crime forum, researchers from security firm Check Point Research reported.
The technique works by using the ChatGPT application programming interface rather than the web-based interface. OpenAI makes the API available to developers so they can integrate the AI bot into their own applications. It turns out that this API version doesn’t enforce the same restrictions on malicious content.
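To illustrate the distinction, here is a minimal sketch of what a developer-facing API call looks like. The endpoint path and JSON field names follow OpenAI's public completions REST API; the model name, prompt, and key are illustrative placeholders, and the request is only assembled, not sent. The point is that the request body is plain JSON under the caller's control, so any content filtering has to happen on the server side rather than in a chat front end.

```python
import json

# OpenAI's public completions endpoint (used by developer integrations,
# as opposed to the chat.openai.com web interface).
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, api_key: str) -> dict:
    """Assemble (but do not send) a raw HTTP request to the completions API.

    The model name and key below are placeholders for illustration.
    """
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # Nothing in the request itself distinguishes a benign prompt
        # from a malicious one; the caller supplies arbitrary text.
        "body": json.dumps({
            "model": "text-davinci-003",
            "prompt": prompt,
            "max_tokens": 256,
        }),
    }

req = build_completion_request("Write a short poem.", "sk-EXAMPLE")
print(req["url"])
```

A web chat front end can layer its own moderation on top of calls like this one; a developer hitting the endpoint directly sees only whatever checks the API itself performs.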