Falling in love with ChatGPT, and with AI in general, often seems to escape the control of reason and logic. We are faced with a machine that, while capable of performing complex tasks, lacks human intelligence and instead exploits natural resources and human labour, kills privacy, and undermines equality. This not only strips man of the adjective sapiens but undermines the very basis of democracy and the fundamental rights on which it is founded.
Nonetheless, it seems that our technophilia and our fetishism for artificial intelligence are driven by a hedonistic self-alienation from ourselves as persons and as sapiens, in search of an easy solution to our everyday problems. But AI is not the Ultimate Algorithm that will solve them all; in fact, it may be our worst enemy.
Our presumption of being homo sapiens rests on the intelligence, reason, and logic that distinguish us as a species. But if we build machines like ChatGPT that function without man and replace him, we become homo stultus and lose our identity as sapiens.
Ultimately, we need to be aware of the effects of falling in love with ChatGPT and with AI in general, and act responsibly to preserve our dignity and freedom as homo sapiens.
Why this (real or induced?) falling in love with ChatGPT? Technophilia regardless? Fetishism for artificial intelligence? Or are we children who want to play with the new Meccano? Or, worse still, is it a voluntary and hedonistic self-alienation from oneself as a person, from oneself as homo sapiens, from one's freedom, from democracy, from knowledge and know-how? Or is it happiness at the approach and concretisation (we are tired of having to think and having to decide, really too exhausting!) of the Ultimate Algorithm, spelled with capital letters like God?
There is no creature more dangerous than man, for he is the only one who can change the world.
The Italian Data Protection Authority (the Privacy Guarantor) has ordered the temporary limitation of the processing of Italian users' data by OpenAI. The decision came after ChatGPT suffered a data breach concerning user conversations and the payment information of subscribers to the paid service.
The Privacy Guarantor had noted the lack of information provided to users and to all data subjects whose data are collected by OpenAI, and, above all, the absence of a legal basis justifying the mass collection and storage of personal data for the purpose of “training” the algorithms underlying the operation of the platform.
Moreover, the information provided by ChatGPT does not always correspond to the real data, resulting in inaccurate processing of personal data.
Lastly, although, according to the terms published by OpenAI, the service is intended for people over the age of 13, the Authority highlighted how the absence of any filter for verifying users' age exposes minors to answers that are entirely unsuitable for their degree of development and self-awareness.
Evidently, more effective rules are needed to ensure the correct use of this artificial intelligence tool.