The Safety of AI-based Research

Please use this identifier to cite or link to this item:
https://doi.org/10.48693/572
Title: The Safety of AI-based Research
Authors: Saalbach, Klaus
Abstract: Artificial Intelligence (AI) is commonly understood as the ability of machines to perform tasks that normally require human intelligence. Definitions of human intelligence include the mental capacity to recognize, analyze, and solve problems; a human is then considered more intelligent if this can be done faster and/or for more complex problems. Currently, the development of AI is heading towards an Artificial General Intelligence (AGI) that reaches the human level of cognition, with the final goal of achieving an Artificial Super-Intelligence (ASI) that goes beyond human intelligence. A rapidly evolving AI application is Generative AI, where the AI can create content such as new images, texts, sounds, and videos based on short instructions, the prompts. Meanwhile, OpenAI is developing Q* (Q Star) as an advanced AI-based research engine in Project Strawberry, and Q*/Strawberry will likely be integrated into the next release, ChatGPT5, which may appear in the coming months. The planned Q* capabilities include the ability to conduct deep research by autonomous internet retrieval, to plan and execute multi-step long-horizon tasks, to perform logical reasoning, and to solve mathematical problems at an academic level. In August 2024, the Tokyo-based company Sakana released the first version of its ‘AI Scientist’, an AI agent that can conduct all parts of a research project, from idea generation to publication, automatically. Although this is a relatively lean tool and neither an AGI nor self-aware, it caused safety issues by autonomously creating new code to re-launch itself, overriding time limits for experiments, saving excessive data (resulting in a terabyte of extra data), and importing new Python code from libraries. As this AI agent utilizes LLMs such as ChatGPT4o, the release of ChatGPT5 may amplify the capabilities and safety issues of the AI Scientist and other AI agents as well. Due to their design, these AI research engines should be able to grow dynamically and, theoretically, indefinitely. For developers, it is challenging to achieve creativity, autonomy, and safety of a research engine simultaneously. Further issues are the bypassing of humans, AI deception, and the use of these tools for open-source intelligence (OSINT) and cyber espionage operations, as well as for biological and chemical weapons. The training of AI depends on high-quality data created by humans, because training with AI-generated data leads to a rapid loss of quality, the so-called model collapse. While the use of AI in science has a very large potential to generate hypotheses, design experiments, collect and interpret large datasets with detection of unknown patterns and relations, and take over routine work, there is a creeping contamination of scientific publications by undeclared AI-generated texts. The human knowledge base is also influenced by the planned or already ongoing use of AI as a gatekeeper in search engines, which transforms them into Question & Answer engines. The paper briefly presents the chances and safety issues of AI-based and AI-supported research and discusses potential precautionary measures such as containerization and the option of an emergency shutdown.
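The precautionary measures named at the end of the abstract, containerization and an emergency shutdown, can be made concrete. The following is a minimal sketch, not taken from the paper, of how an AI research agent could be run inside a resource-limited, network-isolated container with a host-side watchdog; the image name, container name, and all limits are illustrative assumptions.

```python
import subprocess

# Illustrative values, not from the paper.
CONTAINER = "ai-scientist-run"   # hypothetical container name
IMAGE = "ai-scientist:latest"    # hypothetical agent image
MAX_RUNTIME_S = 4 * 3600         # hard wall-clock limit (emergency shutdown)

cmd = [
    "docker", "run", "--rm",
    "--name", CONTAINER,
    "--network", "none",        # no internet access: blocks autonomous retrieval
    "--memory", "8g",           # cap RAM
    "--cpus", "2",              # cap CPU
    "--pids-limit", "256",      # limit process spawning, e.g. self-relaunch attempts
    "--read-only",              # agent cannot rewrite its own code on disk
    "--tmpfs", "/tmp:size=1g",  # bounded scratch space: caps runaway data growth
    IMAGE,
]

try:
    # Watchdog: even if the agent overrides its internal time limits,
    # the host terminates the run after MAX_RUNTIME_S.
    subprocess.run(cmd, timeout=MAX_RUNTIME_S, check=True)
except subprocess.TimeoutExpired:
    # Emergency shutdown enforced from outside the container.
    subprocess.run(["docker", "kill", CONTAINER], check=False)
```

The design point is that the limits are enforced by the host, outside the agent's own control loop, so the agent cannot override them the way the AI Scientist overrode its experiment time limits.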
URL: https://doi.org/10.48693/572
https://osnadocs.ub.uni-osnabrueck.de/handle/ds-2024082611473
Subject Keywords: Artificial Intelligence; Safety; ChatGPT; Science
Issue Date: 26-Aug-2024
License name: Attribution 3.0 Germany
License url: http://creativecommons.org/licenses/by/3.0/de/
Type of publication: Working Paper
Appears in Collections: FB01 - Hochschulschriften

Files in This Item:
File: Safety_of_AI_based_Research_2024_Saalbach.pdf
Size: 820,29 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.