Policies on the use of Artificial Intelligence (AI) tools in the Revista Española de Casos Clínicos en Medicina Interna (RECCMI)
By its very essence, this process is dynamic and subject to the changes that time and the consequent development of new technologies will introduce into its maintenance and evolution.
Artificial intelligence (AI) is "the ability of a digital computer or a computer-controlled robot to perform tasks commonly associated with intelligent beings." Generative AI (GAI) uses generative modeling and advances in deep learning (DL) to produce diverse content at scale, drawing on existing media such as text, graphics, audio, and video.
Chatbot output risks containing biases, distortions, irrelevance, misrepresentation, and plagiarism, much of it caused by the algorithms that govern its generation, which rely heavily on the content of the materials used in training; these risks include the potential to spread and amplify misinformation and disinformation.
Chatbots retain the information provided to them, and it could appear in future responses. Anyone who needs to maintain the confidentiality of a document, including authors, editors, and reviewers, should be aware of this issue before considering using chatbots to edit or generate work.
However, a differentiating nuance must be added between original articles, whose material and methods and results sections may involve complex statistical analyses requiring sophisticated approaches and editorial treatment by authors, reviewers, and editors, and clinical cases, which generally have simpler structures.
Automation versus Artificial Intelligence (AI)
There is a clear distinction between automation and Artificial Intelligence (AI). Automation refers to rule-based software. AI is about designing intelligent systems, machines, and software that can mimic human intelligence and behavior.
Our analysis includes natural language processing (NLP) and machine learning (ML), which cover tasks such as keyword extraction and topic classification to identify patterns and make predictions. AI systems can predict which journal best fits the scope of a manuscript and analyse the results of text overlap detection provided by software such as iThenticate.
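By way of illustration only, the following minimal Python sketch shows the kind of keyword extraction and topic classification such systems build on, here using TF-IDF features from the scikit-learn library. The journal scope descriptions and the sample abstract are hypothetical, and this is not RECCMI's actual tooling.

# Illustrative sketch only: TF-IDF keyword extraction and a naive
# "which journal scope best fits this manuscript?" prediction.
# The scope descriptions and the abstract are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

scopes = {
    "internal medicine case reports": "clinical case report internal medicine diagnosis treatment patient",
    "biostatistics and methods": "statistical methods regression survival analysis clinical trial design",
}
abstract = "We report the diagnosis and treatment of a rare internal medicine case in an adult patient."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(scopes.values()) + [abstract])

# Keyword extraction: the terms with the highest TF-IDF weight in the abstract.
weights = matrix[-1].toarray().ravel()
terms = vectorizer.get_feature_names_out()
top = sorted(zip(weights, terms), reverse=True)[:5]
print("keywords:", [t for w, t in top if w > 0])

# Topic classification: the scope whose description is most similar to the abstract.
similarity = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = max(zip(similarity, scopes), key=lambda pair: pair[0])
print("best-fitting scope:", best[1])

Real manuscript-routing systems train on far larger corpora and richer features; the point of the sketch is only that "scope prediction" reduces to comparing text representations.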
Automation in publishing has been used for decades to ensure that manuscripts can be peer-reviewed more quickly, without human intervention at every stage of the process.
The AI and automation tools being developed have the power to contribute to the speed and accuracy of peer review. Software created to detect text overlap provides a level of evaluation, by cross-checking millions of documents, that a human brain could not achieve. The pattern-recognition capabilities of AI make it possible to detect citation cartels, image manipulation [3], salami slicing, and the characteristics of paper mills [4], all problematic unethical practices in publishing that are difficult to identify. The latest technology can assess, without the need for human intervention, whether a manuscript meets quality standards in terms of language, formatting according to journal style, presentation of figures, and use of citations.
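Purely as an illustration of the underlying idea, and not of iThenticate's actual algorithm, the sketch below scores the overlap between two texts with word 5-gram Jaccard similarity; the two sample sentences are invented.

# Illustrative sketch only: word 5-gram Jaccard overlap between two texts,
# the basic idea behind text overlap screening; it does not reproduce the
# actual algorithm of iThenticate or any other commercial tool.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Invented example sentences: a submitted passage against an indexed one.
submitted = "The patient presented with fever and progressive dyspnoea over two weeks."
indexed = "The patient presented with fever and progressive dyspnoea over ten days."
print(f"overlap score: {overlap_score(submitted, indexed):.2f}")

Production systems apply the same principle across millions of indexed documents, which is precisely the cross-checking scale a human reviewer cannot match.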
The key aspects of the advancement of AI can be classified into three main groups:
- Fairness (non-discriminatory and fair)
- Accountability (human agency and oversight)
- Transparency (technical soundness and data governance)
The recommendations of the Committee on Publication Ethics (COPE), based on the ethical dilemmas raised, include the following aspects:
- Decisions directly involve an editor; a decision cannot be made by an AI tool alone.
- Human supervision is key. Ultimately, the publisher remains responsible for editorial decisions.
- AI-powered automation is acceptable and even expected in many cases, as long as the result does not imply a decision by the AI itself on the acceptance or rejection of a manuscript.
- Assessments of research misconduct and integrity that result in expressions of concern, retractions, or contact with researchers' institutions should also not rely solely on AI decision-making.
- Publishers must be transparent about the entire process. Any AI-powered automation should be clearly presented to the relevant participants in the peer review process (authors, reviewers, editors, and readers).
- Authors should be aware that regardless of whether the decision was made by AI or by a human editor, the journal and publisher are responsible for the editorial decision.
The 2023 World Association of Medical Editors (WAME) recommendations are summarized as follows:
- Only human beings can be authors; chatbots cannot be authors.
- Authors should be transparent when using chatbots and provide information about how they were used:
Authors who submit an article in which a chatbot/AI has been used to write new text must declare this use in the acknowledgements. All prompts used to generate new text, or to convert text or text prompts into tables or illustrations, must be specified. When an AI tool such as a chatbot is used to perform or generate analytical work, help report results (e.g., by generating tables or figures), or write computer code, this should be stated in the body of the article, in both the abstract and the methods section. To enable scientific scrutiny, including replication and the identification of falsification, the full prompt used to generate the research results, the time and date of the query, and the AI tool used and its version should be provided.
- Authors must assume public responsibility for their work:
Authors are responsible for the material provided by a chatbot in their article (including the accuracy of what is presented and the absence of plagiarism) and for the appropriate attribution of all sources (including the original sources of the material generated by the chatbot), and must identify the chatbot used.
- Editors and reviewers should disclose, to authors and to each other, any use of chatbots in the evaluation of manuscripts and in the generation of reviews and correspondence.
- Publishers need the right digital tools to deal with the effects of chatbots on publishing:
Publishers need the right tools to help them detect AI-generated or altered content. Such tools should be made available to publishers, regardless of their ability to pay for them, for the good of science and the public, and to help ensure the integrity of health information and reduce the risk of adverse health outcomes.
GENERAL RULES:
RECCMI will analyse all manuscripts received in order to estimate what percentage of each manuscript may have been produced with the intervention of generative Artificial Intelligence tools.
Taking these premises and current trends into account, RECCMI sets out the following rules for its authors and reviewers. When traditional or generative AI technologies are used to create, review, correct, or edit the content of a manuscript, authors must state the following in the Acknowledgements section:
- Name of the AI software platform, program, or tool.
- Version and extension numbers.
- Manufacturer.
- Date(s) of use.
- A brief description of how AI was used and in which parts of the manuscript or content.
- Confirmation that the author is responsible for the integrity of the content generated.
These rules do not apply to basic tools for grammar, spelling, reference checking, and the like; they do apply to AI used in research, to Large Language Models (LLMs), and to natural language processing (NLP) tools.
Note (December 2025): We present here RECCMI's current general considerations on these matters; however, we remain attentive to any recommendations and modifications issued by the International Committee of Medical Journal Editors (ICMJE) and the Committee on Publication Ethics (COPE), as well as by the International Congresses on Peer Review and Scientific Publication.