
Peer review in the age of artificial intelligence


Used thoughtfully and transparently, generative AI can assist, but should not replace, human judgment, expertise, and critical thinking in peer review.

With this editorial, we wish to draw the attention of peer reviewers to the responsible use of generative artificial intelligence (AI) tools when reviewing a manuscript. The general guidelines, shared across the Nature Portfolio journals, are described on our website at https://www.nature.com/nnano/editorial-policies/ai.



Credit: Panther Media Global / Alamy Stock Photo

When using AI tools to produce any type of content, the most important thing to bear in mind is that the user is always accountable. This guiding principle has several consequences1,2.

The first consequence is that every output generated by an AI tool requires human validation. That is, it should not be assumed that the output of an AI tool is factually accurate (even if the prompt contains the instruction 'do not hallucinate'). The output and related sources must always be double-checked by an accountable person. In the case of peer review, we ask reviewers "to declare the use of such tools transparently in the peer review report". As users become more familiar with AI tools, they can exercise varying degrees of scepticism when reading a tool's outputs and improve accuracy by using more precise prompts. For example, once validation is taken into account, reviewers may find that using an AI tool is more time-consuming than not using it, or that it is useful only for certain parts of the manuscript review.

The second consequence concerns the legal implications of uploading a manuscript to an AI tool. Authors trust us to share manuscripts with reviewers in strict confidence. Uploading a manuscript into an AI tool could breach this confidentiality. There are certain AI tools that are closed, meaning they do not share uploaded content with the World Wide Web or use it for training. However, depending on the settings or end-user agreements of the specific AI tool, uploaded content can still be discoverable by other users within the closed environment (for example, colleagues within an institution). To avoid any legal consequences, we ask reviewers to "not upload manuscripts into generative AI tools".

Using AI tools to improve the grammar or readability of human-generated texts does not need to be declared (though it still requires human validation)3. The main risk we see at this point in using AI for peer-reviewing a manuscript is over-reliance on a tool that is still largely seen as a black box and can produce inaccurate results.

Like everyone else, we editors are still learning how best to use AI tools, for example, to summarize the main points of a manuscript, extract the key performance metrics, or identify suitable reviewers. As with any new technology, it is essential to become educated about it. We invest in training and awareness to ensure the ethical use of the tools that the publisher provides us with. What are the advantages? What are the legal implications of misuse? What is the best way to extract the desired result? What are the limitations of the tools? What dataset was used to train a tool? With which specific tasks can it help us be more productive (or faster)? When is using AI tools a waste of time? How much energy or CO2 equivalent does a prompt consume?

As reviewers also learn to craft effective prompts, validate results, and preserve confidentiality, we believe that AI tools will eventually support the peer review process. Reviewers and editors will be able to exercise their judgment in light of the vast amount of information in the literature that AI tools can retrieve effectively. If used inattentively, though, we risk delegating critical thinking to an algorithm, giving us a false sense of accomplishment. This undermines the role of our academic training, critical thinking, and expertise, as well as the institution of peer review4.

Looking ahead, as generative AI becomes more capable and more deeply embedded in scholarly workflows, our shared priority must be to ensure that efficiency gains never come at the expense of rigour, confidentiality, or accountability5. As the sensibility of scientific communities around AI evolves, Nature Portfolio's guidelines are bound to adapt accordingly. We will continue to refine our guidance in line with experience, emerging standards, and community expectations by providing clearer guidelines for peer reviewers. AI tools that are demonstrably secure and fit for purpose, when used transparently and critically, can help reviewers navigate an ever-expanding literature and focus their expertise where it matters most; used uncritically, they risk eroding the very judgment peer review exists to apply. Our aim, therefore, is not to accelerate peer review by outsourcing thought, but to strengthen it by enabling informed human decisions, grounded in evidence, integrity, and trust.
