Since programs such as ChatGPT and DALL-E became available to the general public, there has been intense discussion about the risks and opportunities of generative artificial intelligence (AI). Because these applications can create texts, images, and videos, they can greatly benefit people's everyday lives, but they can also be misused to create deepfakes or propaganda.
Moreover, all generative AI systems reflect the data used to train them and, with it, the objectives underpinning their development; both aspects largely elude control by institutions and norms. A number of strategies now exist to counteract generative AI's lack of transparency and its biases.
However, the authors of a discussion paper published in English by the German National Academy of Sciences Leopoldina warn against placing too much faith in these strategies. In "Generative AI—Beyond Euphoria and Simple Solutions," they take a realistic look at the possibilities and challenges of developing and applying generative AI.
The authors argue for a nuanced view of the technologies and tools that aim to make generative AI more transparent and to detect and minimize distortions. They take bias as an example: without active countermeasures, AI systems mirror the societal and cultural conditions of their training data, along with the values and inequalities embedded in it.
However, according to the authors, deciding whether and how to actively counteract this bias during development is no trivial matter. It requires technological and mathematical as well as political and ethical expertise, and it should not be the sole responsibility of developers.
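To make concrete what detecting such bias can involve, the sketch below (our illustration, not taken from the discussion paper) computes one widely used fairness diagnostic, the demographic parity gap: the difference in a model's favorable-outcome rate across groups. The data is invented; real audits apply several complementary metrics to held-out evaluation sets.

```python
# Minimal sketch of one common bias diagnostic: the demographic parity
# gap, i.e. how much a model's positive-outcome rate differs across groups.
# The records below are invented for illustration.

from collections import defaultdict

def selection_rates(records):
    """Return the fraction of favorable model outcomes per group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical (group, model_decision) pairs; 1 = favorable outcome.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = selection_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```

A measured gap is only a starting point: whether and how to correct it is exactly the political and ethical question the authors say cannot be left to developers alone.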
Strategies used to date to counteract generative AI's lack of transparency likewise offer only a superficial solution. Users are often unable to understand how generative AI arrives at its outputs. The still-young research field of explainable AI develops procedures that aim to make AI-generated suggestions and decisions comprehensible after the fact.
However, the authors point out that the resulting explanations are not necessarily reliable, even when they sound plausible. Explainable-AI systems can even be deliberately manipulated. The authors therefore stress that generative AI should be developed and used with the utmost caution wherever transparency is essential, for example in legal contexts.
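As a rough illustration of how such post-hoc explanation procedures work, the toy sketch below (again ours, not the paper's) uses perturbation-based attribution: each input token is removed in turn and the change in a stand-in model's score is recorded. Methods in this spirit, such as occlusion or LIME-style sampling, underlie many explainable-AI tools; as the authors note, the resulting explanations can sound plausible while remaining unreliable.

```python
# Toy sketch of perturbation-based attribution: drop each token in turn
# and record how the model's score changes; large drops suggest
# influential tokens. The "model" below is a stand-in keyword scorer,
# not a real generative AI system.

def model_score(tokens):
    """Hypothetical stand-in for a black-box model's confidence score."""
    weights = {"refund": 0.6, "broken": 0.3, "please": 0.05}
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_attributions(tokens):
    """Score drop when each token is removed from the input."""
    base = model_score(tokens)
    return {
        t: base - model_score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

text = "please refund my broken device".split()
for token, influence in occlusion_attributions(text).items():
    print(f"{token:>8}: {influence:+.2f}")   # "refund" dominates: +0.60
```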
They also describe the various ways in which generative AI can deceive, for example when users are unaware that they are communicating with an AI, or when they do not know what AI is and is not capable of. Users often attribute human capabilities such as consciousness and comprehension to AI. The quality, ease, and speed with which texts, images, and videos can now be generated create new dimensions of possible misuse, for example when generative AI is used for propaganda or criminal purposes.
The discussion paper also addresses data protection. The success of generative AI rests partly on gathering and analyzing users' personal data, yet to date no convincing approach ensures that users have the final say over how their data is shared and used.
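One way to picture what such a "final say" could mean technically is a consent gate that excludes user records from a training corpus unless they carry an explicit opt-in. The sketch below is hypothetical: the field names and policy are invented for illustration, and no standard mechanism of this kind currently exists, which is precisely the gap the authors identify.

```python
# Hypothetical consent gate: keep a user record in the training corpus
# only if its owner explicitly opted in for the intended purpose.
# The "consent" and "purpose" fields are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    text: str
    consent: dict = field(default_factory=dict)  # purpose -> bool

def consented_subset(records, purpose):
    """Keep only records whose owner explicitly opted in for `purpose`."""
    return [r for r in records if r.consent.get(purpose, False)]

records = [
    UserRecord("u1", "example post", {"model_training": True}),
    UserRecord("u2", "another post", {"model_training": False}),
    UserRecord("u3", "no preference recorded"),  # no flag: excluded by default
]

training_data = consented_subset(records, "model_training")
print([r.user_id for r in training_data])  # ['u1']
```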
More information: Paper: Generative AI—Beyond Euphoria and Simple Solutions
Provided by Leopoldina