At the moment, access to ChatGPT in Italy has been blocked following the intervention of the national Data Protection Authority, which found the absence of a notice informing users about the collection of their personal data, the lack of a legal basis for using such data in AI training, and inadequate safeguards to keep out children under 13. On the privacy side, it should be noted that ChatGPT itself suffered a data breach on 20 March, which led to a temporary precautionary suspension of the service.
At the time of writing, Italy is still the only country in the world to have blocked ChatGPT, and reports of similar measures being considered by other countries (notably Canada and France) have not yet been followed by concrete action.
The large-scale roll-out of new, revolutionary technologies, from computers to robotics, has always coincided with a redefinition - sometimes traumatic - of the employment landscape, strongly affecting the composition of the labour force. However, the numerical contraction of some professions is followed by the development, or even the birth, of others. This is of little comfort to those who lose their jobs. From this point of view, providing a social safety net and retraining workers falls to politicians and entrepreneurs. But halting technological progress tout court is rather unlikely.
For all its legal complexity, this issue can be explained very simply: artificial intelligence systems have so far been trained partly on copyright-protected material. Something similar had already happened in the days of Google Books, with ten years of lawsuits that went all the way to the US Supreme Court and ended in favour of the Mountain View giant in the name of 'fair use'. In this regard, however, the Napster case, which had the opposite outcome, has also recently been brought up. The crux of the matter is the following: is it lawful to use copyrighted content for the greater purpose of developing a technology that may prove to be of great collective benefit? At the moment, the situation seems best expressed by a question reported by the Economist, which quotes Mark Lemley and Bryan Casey, authors of an article in the Texas Law Review: "Will copyright law allow robots to learn?"
The debate on artificial intelligence is also very heated with respect to ethical and philosophical issues. In particular, it is increasingly common to see authoritative experts doubting that AI can already be considered sentient, or in any case endowed with truly autonomous intelligence. This applies especially to textual AI, whose models - to put it briefly - are designed to generate writing resembling natural human language, following a learning method that trains the system to predict the next word in a sentence or set of sentences. These are models, therefore, based entirely on language. Wired recently reported on one of the cases cited to argue that AI is becoming 'truly' intelligent: that of Microsoft researcher Sébastien Bubeck, who asked ChatGPT-4 to draw a unicorn using software created to generate scientific diagrams. What he obtained is a theoretically very interesting result, viewable here at minute 23'13". Despite these first doubts, for now it still seems too early to rejoice or despair, depending on one's point of view. While on the one hand some of AI's abilities are exceeding expectations, on the other it is pointed out from several quarters that some of the typical and defining capabilities of human intelligence are conspicuously absent. Examples include the ability to autonomously set new tasks for itself (rather than only solving tasks proposed from outside), the ability to remember, and the fundamental condition of self-awareness. From this perspective, the misconception that AI (especially textual AI) is self-aware comes from a tendency to anthropomorphize artificial intelligence.
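The next-word prediction mentioned above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word most often follows each word in a toy corpus and uses that count to "predict" the next one. This is an enormous simplification of how systems like ChatGPT work (they use neural networks over vast corpora, not raw word counts), and the corpus and function names below are invented for illustration only.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    # Count, for each word, how often every other word follows it.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            counts[current][following] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequent follower of `word`, if any was seen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A toy corpus, purely for demonstration.
corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat": it follows "the" most often
```

A real language model does the same kind of thing in spirit, but estimates the probability of the next word from context of arbitrary length rather than from a single preceding word.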
If this is true, then with regard to the famous Turing test, whereby a machine is intelligent if its behaviour is indistinguishable from a human's, perhaps the crux of the matter lies in admitting that, faced with ChatGPT, the time has come to ask not whether the system is intelligent, but whether the Turing test is still relevant.