Italy’s data protection authority stops ChatGPT in the country

Text robot ChatGPT: Italian privacy advocates see a fundamental problem in how the system was trained. (Photo: Reuters)

Rome. Italy’s data protection authority has temporarily blocked the popular text generator ChatGPT in the country. On Friday, the authority pointed out, among other things, that the operator OpenAI does not provide sufficient information about its use of data. There are also no filters preventing children under the age of 13 from being shown information that is “absolutely inappropriate” for them. An investigation has been opened.

At the same time, as a precautionary measure, the authority banned the processing of Italian users’ data, which means ChatGPT can no longer be used in the country. OpenAI was given 20 days to present measures addressing the allegations. Otherwise it faces a fine of up to 20 million euros or four percent of its global annual turnover.

The data protection authority also referred to a recent data breach in which some ChatGPT users saw information from other people’s profiles. According to OpenAI, the problem was caused by a bug in a software library used by ChatGPT.

ChatGPT is based on software trained on massive amounts of text. On this basis, it can formulate sentences that are hard to distinguish from those written by a human: the program estimates which word is most likely to come next in a sentence. One risk of this basic principle is that the software may “hallucinate”, as OpenAI puts it, presenting incorrect information as if it were correct.
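The next-word principle described above can be illustrated with a deliberately simplified sketch. This toy bigram counter is a stand-in for illustration only, not OpenAI's actual model, which uses a large neural network rather than raw word counts:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "massive amounts of text"
# that systems like ChatGPT are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which one: a crude bigram model.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Even this toy version shows why such systems can "hallucinate": the prediction reflects only statistical patterns in the training text, with no notion of whether the resulting sentence is true.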

The Italian privacy advocates also see a fundamental problem in how ChatGPT was trained: in their view, there is no legal basis for the mass collection and storage of personal data used to train the algorithms.

>> Read also: What can ChatGPT do?

The agency took similar action in February against another chatbot, Replika. The main concern there was that children under the age of 13 were not adequately protected.

In recent months, ChatGPT has impressed observers with how well the software imitates human language. At the same time, there are concerns that such AI-based technology could be misused, for example to spread false information.

More: Tech elite around Elon Musk call for a pause in AI development
