Beware! Hackers are using AI bot ‘ChatGPT’ to write malicious code to steal your data

New Delhi: A report has warned that ChatGPT, the Artificial Intelligence (AI)-powered chatbot that gives human-like answers to questions, is also being used by cyber criminals to develop malicious tools that can steal your data. Researchers at Check Point Research (CPR) have observed the first such examples of cyber criminals using ChatGPT to write malicious code. On underground hacking forums, threat actors are using it to create information-stealing malware ("infostealers") and encryption tools, and to facilitate fraudulent activity. The researchers warned of rapidly growing interest among cybercriminals in using ChatGPT to enhance malicious activity and teach it to others.

“Cybercriminals are finding ChatGPT attractive. In recent weeks, we have seen hackers begin to use it to write malicious code. ChatGPT has the potential to speed up the process by giving hackers a good starting point,” said Sergei Shaykevich, Threat Intelligence Group Manager at Check Point.

Just as ChatGPT can be used to help developers write code, it can also be used for malicious purposes. On December 29, a thread titled “ChatGPT – Advantages of Malware” appeared on a popular underground hacking forum. The thread’s publisher revealed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

The report states: “While this individual may be a tech-oriented threat actor, these posts were showing less technically competent cybercriminals how to use ChatGPT for malicious purposes, with real examples they could immediately use.” On 21 December, a threat actor posted a Python script, which he insisted was the first script he had ever created.

When another cyber criminal commented that the code’s style was similar to OpenAI code, the hacker confirmed that OpenAI had given him a “good (helping) hand to complete the script with a good scope”. This could mean that a potential cyber criminal with little or no development skill can take advantage of ChatGPT to develop malicious tools and become a full-fledged cyber criminal with technical capabilities, the report warns.

“Although the tools we analyzed are fairly basic, it is only a matter of time until more sophisticated threat actors evolve the way they use AI-based tools,” Shaykevich said. OpenAI, the developer behind ChatGPT, is reportedly looking to raise capital at a valuation of around $30 billion. Microsoft has invested $1 billion in OpenAI and is now pushing ChatGPT-powered applications to solve real-life problems.