Influence of generative models of text and image creation in research and in the funding activities of the Volkswagen Foundation
The Volkswagen Foundation hereby endorses the statement by the Executive Committee of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) on the Influence of Generative Models of Text and Image Creation on Science and the Humanities and on the DFG’s Funding Activities.
This statement is intended to guide researchers in their research activities. It also provides guidance to applicants, and to those involved in the review, evaluation, and decision-making processes, on how to deal with generative models.
Our guiding principles
The use of generative models can affect, to varying degrees, the significance attached to the creation of a text and to the visualisation of research results in researchers' day-to-day work. Since it is not immediately apparent to third parties whether the texts and illustrations they are viewing were created using generative models, or whether the underlying research ideas were developed with the help of such models, transparent handling of text and image generation will be an important aspect in evaluating these technologies with regard to ensuring research quality. In view of the considerable opportunities and development potential, the use of generative models in research work should by no means be ruled out. However, binding framework conditions are necessary to safeguard good research practice and the quality of research results:
- The transparency and verifiability of the research process and its results to third parties are fundamental principles of research integrity. This value system continues to provide valuable guidance when dealing with generative models for text and image creation.
- It is a matter of professional ethics for researchers to commit themselves to the basic principles of research integrity. The use of generative models cannot relieve researchers of this content-related and formal responsibility.
- When making their results publicly available, researchers should, in the spirit of research integrity, disclose whether they have used generative models and, if so, which ones, for what purpose, and to what extent.
- Only the natural persons responsible may appear as authors in research publications. They must ensure that the use of generative models does not infringe the intellectual property of others and does not lead to scientific misconduct, such as plagiarism.
- In decision-making processes, the use of generative models in/for proposals submitted to the Volkswagen Foundation is currently assessed as neither positive nor negative.
- The use of generative models in the preparation of reviews is not permitted in view of the confidentiality of the review process. Documents provided for review are confidential and may not be used as input for generative models.
The dynamic development of Artificial Intelligence (AI) undoubtedly has the potential to permanently change the landscape of research and research funding. In particular, generative AI technologies and their applications are developing rapidly and are increasingly influencing the complex and information-rich work of research funding and the evaluation of research projects.
At present, however, there is a lack of clear understanding and guidance on the responsible integration of AI technologies in the context of research funding. Alongside their great potential, there is a risk that AI systems will reinforce existing inequalities in research or introduce unconscious biases. The handling of sensitive research data and the ethical issues raised by AI require clear guidelines.
As a research funding organisation, we recognise the opportunities and risks associated with the use of these technologies and see it as our responsibility to actively shape these developments. The Volkswagen Foundation is therefore committed to promoting knowledge exchange and collaborative learning in this dynamic field, and to that end is in constant contact with international funding organisations. A concrete example is our participation in the GRAIL project (Getting Responsible about AI and Machine Learning in Research Funding and Evaluation) of the Research on Research Institute (RoRI). We are committed to harnessing the power of AI in research funding while ensuring that it is used responsibly, so as to guarantee the quality and fairness of funding processes.