Journal Policies on Generative AI
The swift advancement of generative AI and AI-assisted technologies introduces both promising opportunities and complex challenges to academic publishing. In response, the ISC International Journal of Information Security (ISeCure) has developed the following policies for authors and reviewers to promote transparency, uphold scholarly integrity, and build trust. These policies will be periodically updated to reflect ongoing technological developments and evolving ethical considerations.
For Authors
Use of Generative AI and AI-Assisted Technologies in Academic Writing
This policy applies exclusively to the writing process and does not extend to the use of AI tools for data analysis or research insights. Authors may utilize generative AI and AI-assisted tools to enhance the clarity and readability of their manuscripts. However, any such use must be disclosed in the manuscript, and a corresponding statement will be included in the published version. This requirement ensures transparency and fosters trust among all stakeholders (authors, readers, reviewers, editors, and contributors) while also aligning with the terms of use of the AI tools employed.
Human oversight is essential. Because AI tools may produce content that is misleading, incomplete, or biased, authors are expected to thoroughly review and edit any AI-generated content and remain fully responsible for the accuracy, integrity, and originality of their work.
Disclosure Requirements. Authors must include a statement in the Acknowledgments or Declaration section of their manuscript specifying the use of any AI tool. The following template is recommended:
During the preparation of this work, the author(s) used [Name of AI Tool/service, Version, Manufacturer] to [purpose, e.g., improve language and readability]. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
AI tools or technologies should not be credited as authors or co-authors, nor cited as such. Authorship involves duties that only humans can fulfill. Each author must approve the final manuscript, consent to its submission, and ensure the work is original and free from third-party rights violations. Familiarity with the journal’s publishing ethics policy is required before submission.
Use of Generative AI and AI-Assisted Tools in Visual Content
Authors are prohibited from using generative AI or AI-assisted tools to create or modify images in submitted manuscripts. This includes altering, enhancing, obscuring, removing, or adding elements to images. Acceptable modifications are limited to adjustments in brightness, contrast, or color balance, provided they do not compromise the integrity of the original data. Image forensics may be employed to detect manipulation.
Exceptions are permitted only when AI tools are integral to the research methodology (e.g., AI-assisted imaging, forensic analysis, or LLM security). In such cases, the methods section must detail the use of AI, including the tool’s name, version, and manufacturer. Authors must comply with the software’s usage terms and ensure proper attribution. Editors may request original or unaltered images for editorial review.
Graphical abstracts must not be generated using AI. For cover art, generative AI may be accepted with prior approval from the editor and publisher, contingent on rights clearance and appropriate attribution.
For Reviewers
Use of Generative AI in Peer Review
Peer review is a confidential process that relies on expert human judgment. To maintain its integrity, reviewers must follow these guidelines:
The journal permits authors to use AI for language enhancement provided it is properly disclosed; reviewers can find this disclosure statement in a designated section of the manuscript before the references.