Neural network-based language models such as ChatGPT are becoming increasingly important across many professions. A Kaspersky survey conducted in Russia found that 11% of respondents had used chatbots, and nearly 30% believed they have the potential to replace jobs in the future.
ChatGPT usage is also substantial among Belgian office workers (50%) and UK users (65%). Notably, Google Trends shows that searches for “ChatGPT” peak on weekdays, a pattern likely linked to work-related use.
Nevertheless, the expanding incorporation of chatbots in professional environments raises a critical question: can they be entrusted with sensitive corporate information?
Kaspersky researchers have identified four primary risks associated with using ChatGPT for business.
1. Data leak or hack on the provider’s side:
Although LLM-based chatbots are operated by major tech companies, they are not immune to hacking or accidental data exposure. For instance, there have been incidents in which ChatGPT users could see messages from other users’ chat histories.
2. Data leak through chatbots:
In theory, chats with a chatbot could be used to train future models. Large Language Models (LLMs) are prone to “unintended memorization,” in which they remember sensitive sequences, such as phone numbers, that do not improve model quality but do create privacy risks. As a result, data that ends up in the training corpus may be accessed by other users through the model, whether intentionally or not (see the probing sketch after this list).
3. Malicious clients:
This is a significant concern in regions where official services such as ChatGPT are restricted. Users may resort to unofficial alternatives, such as programs, websites, or messenger bots, and risk downloading malware disguised as a nonexistent official client or app.
4. Account hacking:
Attackers can compromise employee accounts and access their data through phishing attacks or credential stuffing. Kaspersky Digital Footprint Intelligence regularly identifies posts on dark web forums offering access to chatbot accounts.
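To illustrate how unintended memorization can turn into a leak (risk 2 above), the sketch below shows the kind of probing involved: if a sensitive string was present in the training data, prompting the model with its prefix may cause greedy decoding to reproduce it verbatim. This is a minimal illustration only; the Hugging Face transformers library, the small public gpt2 model, and the prompt prefix are stand-ins chosen for the example, not part of the Kaspersky research.

```python
# Minimal sketch of a memorization probe (illustrative only).
# Assumes the Hugging Face "transformers" library; "gpt2" is a small
# public model standing in for a chatbot's underlying LLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prefix that might have preceded sensitive data
# (e.g. a phone number) in the training corpus.
prompt = "You can reach our support engineer John Doe at +1"

inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: if the continuation had been memorized during
# training, it would tend to come back verbatim.
output = model.generate(
    **inputs,
    max_new_tokens=12,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Here do_sample=False makes the completion deterministic, which makes verbatim recall easier to spot; extraction studies typically run many such candidate prefixes and compare the completions against known sensitive strings.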