Samsung Employees Allegedly Leak Proprietary Information via ChatGPT

Source: Cyber Security Hub | Published on April 21, 2023


Samsung employees have allegedly leaked confidential company information to the AI-powered chatbot ChatGPT.

According to The Economist Korea, three separate incidents occurred despite the company originally being wary of adopting ChatGPT. Samsung had previously expressed concern that ChatGPT might leak confidential information, warning employees to “pay attention to the security of internal information” and not to enter private information. All three incidents, each allegedly involving a company engineer entering confidential information into ChatGPT, occurred within just 20 days.

Over that time, one engineer allegedly entered Samsung’s source code into the chatbot while looking for a solution to a bug; another recorded a company meeting, transcribed it using an audio-to-text application, then entered the transcription into ChatGPT to create meeting notes; and a third used ChatGPT to optimize a test sequence for identifying chip yields and defects. Disciplinary investigations have been launched into all three.

Because ChatGPT is a machine learning (ML) platform, data entered into it may be used to train its underlying models, meaning this proprietary information could potentially surface in responses to other users. As of January 2023, the application had 100 million monthly active users. ChatGPT itself warns users not to enter sensitive information for this exact reason.
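
That warning points to a practical safeguard: screening prompts before they leave the corporate network. The short Python sketch below is illustrative only and is not drawn from Samsung’s or OpenAI’s tooling; the pattern set and the internal.example.com domain are assumptions made for the example. It redacts anything matching a known sensitive pattern before a prompt would be submitted to a chatbot.

import re

# Illustrative patterns for content that should never leave the company
# network: API-style keys, email addresses and internal hostnames. The
# patterns and the internal.example.com domain are assumptions made for
# this sketch, not anything referenced in the article.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),
}

def redact(text):
    """Replace anything matching a sensitive pattern with a placeholder.

    Returns the redacted text plus the list of pattern names that fired,
    so a reviewer can decide whether the prompt is safe to send at all.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub("[REDACTED %s]" % name.upper(), text)
    return text, hits

if __name__ == "__main__":
    prompt = ("Why does the build fail on ci01.internal.example.com? "
              "Auth uses token_AbCd1234EfGh5678.")
    safe_prompt, findings = redact(prompt)
    print(safe_prompt)
    print("Blocked categories:", findings)

A filter like this is only a first line of defense; it cannot catch proprietary source code or meeting transcripts of the kind described above, which is why policy and training matter as much as tooling.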

Italy bans ChatGPT over data privacy concerns

In April, Italy temporarily banned ChatGPT within the country over concerns that it violates the General Data Protection Regulation (GDPR), the data protection and privacy law that imposes security and privacy obligations on organizations operating within the European Union (EU) and the European Economic Area (EEA).

The Italian data protection agency, the Garante per la Protezione dei Dati Personali (also known as Garante), said there was an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” ChatGPT, and also accused OpenAI of failing to verify the ages of ChatGPT users.

Italy’s ban prompted privacy regulators in Ireland and France to contact Garante to learn more about the basis for its decision.

A spokesperson for Ireland’s Data Protection Commissioner told Reuters: “We are following up with the Italian regulator. We will coordinate with all EU data protection authorities in relation to this matter.”

Not all Italian authorities are in favor of the ban, however, with the country’s transport minister and leader of the League party, Matteo Salvini, stating in an Instagram post that the ban is “hypocritical” and “disproportionate”.

OpenAI has disabled ChatGPT in Italy at the agency’s request, but noted that it actively works to prevent the use of private data in the training of its ML models. The company also said that it would work with Garante to “educat[e] them on how [its] systems are built and used”.