Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process
Generative artificial intelligence[1] (GAI) systems have great potential to support the U.S. National Science Foundation's mission to promote the progress of science. They could facilitate creativity, aid in the development of new scientific insights, and streamline agency processes by automating routine tasks. While NSF will continue to support advances in this new technology, the agency must also consider the risks it poses. The agency cannot protect non-public information disclosed to third-party GAI tools from being recorded and shared. To safeguard the integrity of the development and evaluation of proposals in the merit review process, this memo establishes guidelines for the use of GAI by reviewers and proposers:
- NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved generative AI tools.
- Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.
A key observation for reviewers is that sharing proposal information with generative AI technology via the open internet violates the confidentiality and integrity principles of NSF's merit review process. Any information uploaded into generative AI tools not behind NSF's firewall[2] is considered to have entered the public domain. As a result, NSF cannot preserve the confidentiality of that information. This loss of control can pose significant risks to researchers' ownership of their ideas. In addition, the source and accuracy of information derived from this technology are not always clear, which can lead to research integrity concerns, including questions about the authenticity of authorship.
NSF must maintain the integrity of its merit review process. Release of proposal content, review information[3] and related records is considered a breach of confidentiality and a public disclosure. Such disclosure may be amplified if the technology incorporates the information into the dataset used to train it, making it available to other users of the tool.
If information from the merit review process is disclosed without authorization to entities external to the agency, through generative AI or otherwise, NSF loses the ability to protect it from further release. This type of disclosure, especially of proprietary or privileged information, creates potential legal liability for the agency and erodes trust in it. The agency maintains public trust by safeguarding the scientific ideas, non-public data and personal information contained in proposals, review information and related records in the merit review process.
Use of generative AI by reviewers in merit review. NSF reviewers, including those conducting ad hoc reviews and panelists, participate in the NSF merit review process as Special Government Employees. With this status, the laws, regulations and policies that govern the disclosure of information by NSF staff apply to reviewers as well. In addition, NSF reviewers sign a confidentiality pledge as part of the Conflicts-of-Interest and Confidentiality Statement for NSF Panelists form, often referred to as "Form 1230P."
Confidentiality requirements in Form 1230P describe reviewers' obligation to maintain the confidentiality of proposals, applicants for NSF awards, the review process and reviewer identities. This obligation extends to the use of generative AI tools. NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved generative AI tools. If reviewers take this action, NSF will consider it a violation of the agency's confidentiality pledge and other applicable laws, regulations and policies. NSF reviewers may share publicly available information with current-generation generative AI tools.
Use of generative AI in proposal preparation. Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.[4] NSF is examining the use of GAI in proposal preparation and seeks first to understand how the community uses it, so that the agency can minimize administrative requirements and build appropriate processes and resources for the merit review process. NSF may publish further guidelines for use as needed.
Proposers are responsible for the accuracy and authenticity of their proposal submission in consideration for merit review, including content developed with the assistance of generative AI tools. NSF's Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which includes fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. Generative AI tools may create these risks, and proposers and awardees are responsible for ensuring the integrity of their proposal and reporting of research results. This policy does not preclude research on generative AI as a topic of study.
Implementation and guidance on appropriate use. NSF will update the 2025 PAPPG to align with the requirements stipulated in this memorandum or with additional guidance and requirements as necessary. NSF will also continuously evaluate future applications of generative AI technology for use by staff and the research community.
Questions on this policy should be directed to email@example.com.
[1] Generative artificial intelligence (GAI) is a technology that can create content, such as text, images, audio, or video, when prompted by a user. Generative AI systems create responses using algorithms often trained on large datasets of information, such as text and images from the internet. (Source: U.S. Government Accountability Office, Science and Tech Spotlight: Generative AI; June 2023, available at www.gao.gov/assets/830/826491.pdf)
[2] NSF will evaluate this memo and the corresponding policy as the technology evolves. However, at the time of this writing, the current generation of generative AI does not comply with NSF confidentiality standards, as information may be released to a third party.
[3] Review information includes panel summaries, review analyses, recommendations for funding, program officer (PO) comments and other similar records.
[4] This policy also applies to any content not developed by the proposer.