AI Policy

Effective: 2024. Updated in line with COPE and DOAJ best practices.

Position Statement

GRASP recognises that AI tools — including large language models (LLMs) such as ChatGPT, Claude, Gemini, and similar — are increasingly used in research and writing. We adopt a principled, transparency-first approach:

AI tools may assist the research and writing process, but they cannot be credited as authors and their use must be disclosed.

Permitted Uses

  • Language editing and grammar checking
  • Translation assistance
  • Data analysis and code generation (provided the output is verified by the authors)
  • Literature search assistance
  • Summarisation of the authors' own work for abstract drafting

Disclosure Requirements

Authors must:

  1. Disclose which AI tools were used and for what purpose in a dedicated "AI Disclosure" statement within the manuscript (typically in the Methods or Acknowledgements section).
  2. Take full responsibility for the accuracy, integrity, and originality of all content, including any AI-generated or AI-assisted text.
  3. Ensure that AI-assisted content does not constitute plagiarism and does not reproduce copyrighted material.

Prohibited Uses

  • AI as author — AI tools cannot meet the criteria for authorship: they cannot take accountability for the work or consent to its submission. They must not be listed as authors or co-authors.
  • Undisclosed generation — Submitting AI-generated text without disclosure is a violation of publication ethics.
  • Image fabrication — AI-generated or AI-manipulated images or data must not be presented as original results without disclosure.

For Reviewers

Peer reviewers must not upload manuscripts to AI tools, as doing so breaches confidentiality. AI tools may be used only for language assistance in writing the review report, and this use should be noted if the journal requests it.

Enforcement

Manuscripts that violate this policy may be rejected, retracted, or subject to correction. GRASP follows COPE guidelines for handling suspected misconduct related to AI use.