ChatGPT leaking Samsung chip secrets is iceberg’s tip
Samsung – amongst others – is wrestling with whether to ban employees from accessing ChatGPT, after the controversial AI chatbot started sharing confidential information, according to reports.
The Samsung information that ChatGPT has been able to share includes semiconductor equipment measurement data and product yield information, according to a DigiTimes account that referenced multiple Korean media sources.
ChatGPT is a natural language processing chatbot developed by OpenAI and made available to the general public for free in November 2022. Engineers and other workers at many companies are reportedly recruiting ChatGPT to work for them, for example to write software and prepare reports, sometimes with and sometimes without their employers’ approval.
The DigiTimes report mentions three specific cases of leaks caused by engineers sharing information with ChatGPT. In one case, an engineer uploaded faulty source code and asked ChatGPT to find the fault and optimize the software; as a result, that proprietary code became part of ChatGPT’s database and learning material.
In another case, ChatGPT was asked to take the minutes of a meeting. By default, the discussion and the list of attendees, both confidential, were stored in ChatGPT’s database, and ChatGPT was therefore able to divulge the material to anyone who asked.
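Purely as an illustration, and not taken from the DigiTimes report, the sketch below shows how the first kind of incident can happen when an engineer reaches for OpenAI’s API rather than the web chatbot. The file name and model name are placeholders; the point is simply that whatever is pasted into the prompt, here an entire proprietary source file, leaves the company’s network, and whether it is retained or used for training then depends entirely on the provider’s data policies and account settings.

    # Hypothetical sketch only: a proprietary source file pasted verbatim into a prompt.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Placeholder file name; the full source is embedded in the outgoing request.
    with open("yield_measurement_tool.py") as f:
        proprietary_code = f.read()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model chosen for illustration
        messages=[
            {
                "role": "user",
                "content": "Find the bug in this code and optimise it:\n\n" + proprietary_code,
            }
        ],
    )

    print(response.choices[0].message.content)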
As a result of such events, Samsung, SK Hynix, LG and many other companies are scrambling either to ban ChatGPT and other AI chatbot services in the workplace or to draw up guidelines for their use, according to The Korea Times.
The Korea Times appeared to confirm Samsung’s mishaps, saying that a message had been posted on an internal company bulletin board calling attention to the misuse of ChatGPT. SK Hynix has blocked the use of ChatGPT on its internal computer network, and employees must obtain security approval before using it, the newspaper said.
The newspaper also quoted Kim Dae-jong, a professor of business administration at Sejong University, as saying that the use of ChatGPT in the workplace was spreading.
“It is reported that more people use ChatGPT to make programs or organize remarks at meetings. These actions will likely increase the probability of leaking company secrets, so private firms are being urged to come up with their own guidelines for AI services,” Kim said. He added that his university is also trying to prevent students from using ChatGPT for assignments, but that such use is difficult to spot because ChatGPT’s output is so good.
In OpenAI’s defence, ChatGPT’s own guidelines call on users “not to enter sensitive content.”
Related links and articles:
https://www.digitimes.com.tw/tech/dt/n/shwnws.asp?id=0000660911_GV3LF27M0DYJ0I2ZJ5Y85
https://www.koreatimes.co.kr/www/tech/2023/04/133_348342.html