How cybercriminals are using Telegram bots
According to the report, the bots use OpenAI's API to enable the creation of malicious emails and code. The report also notes that bot makers currently grant up to 20 free queries, after which they charge $5.50 for every 100 queries. CPR has warned of continued efforts by cybercriminals to circumvent ChatGPT's restrictions and use OpenAI's tools for unethical purposes at scale.
The report also includes images showing how cybercriminals are turning to Telegram bots to bypass the restrictions imposed by ChatGPT. One image shows an advertisement for the OpenAI-based bot posted in an underground forum. Another shows an example of a phishing email created in a Telegram bot, demonstrating the ability to use OpenAI's API without limitations.
Meanwhile, the third image shows an example of malware code created without anti-abuse restrictions in a Telegram bot using the OpenAI API. The fourth image shows the business model of a ChatGPT API-based Telegram channel.
The report also claims that cybercriminals are creating basic scripts that use OpenAI's API to bypass anti-abuse restrictions. The fifth and final image gives an example of a script that queries the API directly and bypasses restrictions to develop malware.
CPR’s take on this cybercriminal activity
Sergey Shykevich, Threat Group Manager at Check Point Software, said: “As part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. However, we’re seeing cybercriminals work their way around ChatGPT’s restrictions, and there’s active chatter in the underground forums disclosing how to use OpenAI API to bypass ChatGPT’s barriers and limitations. This is mostly done by creating Telegram bots that use the API, and these bots are advertised in hacking forums to increase their exposure. The current version of OpenAI’s API is used by external applications and has very few anti-abuse measures in place. As a result, it allows malicious content creation, such as phishing emails and malware code without the limitations or barriers that ChatGPT has set on its user interface. Right now, we’re seeing continuous efforts by cybercriminals to find ways around ChatGPT restrictions.”