Generative artificial intelligence: use with caution regarding data confidentiality
In recent months, the use of artificial intelligence (AI) has exploded, driven by text generation tools such as ChatGPT and image generation tools such as DALL-E.
The quality of the generated texts and images is often impressive. However, it is not uncommon for these tools to suffer from hallucinations, i.e. to deliver results completely unsuited to the request, including fabricated information presented as fact. Their reliability must therefore be questioned constantly.
Another danger concerns data confidentiality: you might, for example, use AI to improve the wording of a text, summarize it, or translate it. Doing so means supplying source text or images that may contain confidential information (names of people, trade secrets, etc.). These AI services are generally hosted in the cloud, often abroad, and you have no control over what happens to the requests, texts, and images you submit: will they be stored by these services for training purposes, or resold to marketing companies to send you targeted advertising?
We can only encourage you to keep a critical eye on these services and to refrain from sending them confidential data: anonymize everything you send, unless you have assurance that it is processed locally or that the service does not exploit it for any purpose other than providing you with a response.
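As an illustration of what such anonymization can look like, here is a minimal sketch in Python using simple regular expressions. The `anonymize` function, the patterns, and the list of confidential terms are illustrative assumptions, not a complete solution: robust anonymization of free text would require named-entity recognition and a human review step.

```python
import re

def anonymize(text: str) -> str:
    """Redact common identifiers before sending text to a cloud AI service.

    Regex-based redaction only covers predictable patterns; names of
    people or projects must be listed explicitly or detected by an
    entity recognizer.
    """
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone numbers (rough: a leading digit, 8+ digits/spaces/dots/dashes, a trailing digit)
    text = re.sub(r"\+?\d[\d .-]{8,}\d", "[PHONE]", text)
    # Confidential terms maintained by hand (hypothetical examples)
    for term in ("Acme Corp", "Project Orion"):
        text = text.replace(term, "[REDACTED]")
    return text

print(anonymize("Contact jane.doe@acme.com about Project Orion at +41 21 123 45 67."))
```

Only the redacted text would then be submitted to the cloud service; the mapping between placeholders and real values stays on your side.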