Following the discovery of a security breach, Samsung has prohibited its employees from using generative AI tools such as ChatGPT and Google Bard.
Engineers mistakenly disclosed confidential source code and meeting notes to ChatGPT last month. In response, the company has temporarily restricted the use of generative AI until it can review its security procedures and ensure a safe working environment for its staff.
Samsung is concerned that data submitted to generative AI tools is stored on third-party servers, making it difficult to retrieve or delete and raising the risk of unintended disclosure. In an internal Samsung survey conducted in April, 65 percent of respondents said AI tools posed security risks. Samsung's restriction contrasts with other organizations that are encouraging staff to adopt AI tools.
Samsung is still developing its own AI tools for staff to use in tasks such as software development and translation. It says employees may continue to use AI tools on personal devices, but only for non-work-related purposes. Violations of this policy may result in termination.
The sources for this piece include an article in Engadget.