1.2 Academic Integrity and Plagiarism Prevention¶
What you will learn on this page
- The relationship between AI-generated text and academic integrity
- Where the boundary lies between plagiarism and AI use
- The idea of “AI as a co-author”
- The scope of abilities that AI can extend and where misconduct begins
- The current state of AI-generated text detection and ethical self-management
What is academic integrity?¶
Academic integrity refers to maintaining honesty, trustworthiness, and fairness in research activities. Concretely, it includes the following principles.
- Originality: Clearly distinguish your ideas from others’ ideas
- Attribution: Cite appropriately when using others’ ideas or expressions
- Accuracy: Report data and facts accurately
- Transparency: Keep the research process explainable
These principles do not change when using generative AI in academic writing.
The boundary between AI use and plagiarism¶
Whether AI use counts as “plagiarism” depends on how it is used and whether it is disclosed.
Generally acceptable uses¶
- Grammar checking and proofreading (as an extension of tools such as Grammarly)
- Suggesting paraphrase options (final choice is yours)
- Checking consistency of style and improving clarity
- Structural advice (organizing an outline)
- Assistance in searching for literature
- Translation support

Gray zone (disclosure required)¶
- Support for converting a Japanese draft into English
- Draft generation at the paragraph level (with the expectation of substantial revision by you)
- Suggestions for the logical structure of an argument
Uses to avoid¶
- Generating the main text wholesale (without you thinking through the content)
- Generating nonexistent references or citations
- Uses that could lead to fabricated or falsified data
A practical criterion
A helpful criterion is whether you can explain “which parts are your contribution” and “why you chose this wording.” If there are parts you cannot explain, you may be relying on AI too heavily.
A real case: a paper published with ChatGPT output pasted as-is
In 2024, a paper published in Elsevier’s journal Surfaces and Interfaces (Zhang et al., 2024) left the phrase "Certainly, here is a possible introduction for your topic:" intact in its Introduction section. This phrase is a typical preface used by ChatGPT when generating text, suggesting that the authors pasted the AI output into the manuscript without checking or editing it.

[Image: screenshot of the published paper. Source: Editors' Cafe]
The paper was later retracted, with the notice citing image duplication, text plagiarism, and undisclosed use of generative AI as violations of journal policy (Retraction Notice, Original Article). This case is emblematic of the risks of using AI output without verification.
Can AI be a co-author?¶
At present, no major academic publisher or journal recognizes generative AI as a co-author. The main reasons are as follows.
- Accountability: Co-authors are responsible for the paper’s content, but AI cannot take responsibility
- Substantial contribution: Authorship requires substantial contribution to conception, execution, and interpretation
- Responsiveness: AI cannot respond during peer review or take part in corrections and retractions
Therefore, the standard practice is to treat AI as a “tool” and disclose its use appropriately.
A practical stance
It is safest to treat AI as a “research support tool” in the same category as a spell checker or statistical software. Tool use should be disclosed, and the authors bear full responsibility for the work.
The scope of ability that AI can extend¶
When you ask yourself, “How much can I delegate to AI?”, the Augmented Competence model (competence that AI can extend) proposed by Mizumoto (2025) is a helpful guide.

This model illustrates how AI tools can extend a learner’s language ability to bridge the gap between receptive competence and productive competence.
- Receptive competence (the larger outer oval): the ability to understand English. Learners generally develop receptive competence more than productive competence. This competence is essential for evaluating whether AI output is correct.
- Productive competence (the smaller inner oval): the ability to produce language in writing and speaking. Production imposes a higher cognitive load than comprehension and takes longer to develop.
- The extendable range (the gap between the two): the area where receptive competence exists but productive competence lags, and AI tools can assist.
In other words, if your receptive competence is high, you can judge whether AI output is correct and whether it matches what you intend; if it is low, that judgment becomes difficult. AI-based extension is therefore particularly beneficial for more proficient learners. It has a “the rich get richer” aspect: those who already have higher competence benefit more.
AI use that becomes misconduct¶

If you use AI output as-is beyond the scope of your receptive competence, it becomes academic misconduct. When AI generates expressions you do not understand, have never learned, or cannot evaluate, and you adopt them without verification, you are effectively presenting ability you do not have.
A practical rule to keep in mind: avoid using expressions in your paper that do not match your English level and that you could not have written on your own.
Reference: Mizumoto, A. (2025). Embracing machine translation in L2 education: Bridging theory and practice in the AI age. In J. C. Penet, J. Moorkens, & M. Yamada (Eds.), Teaching translation in the age of generative AI: New paradigm, new learning? (pp. 233–248). Language Science Press. https://doi.org/10.5281/ZENODO.17641087
What this model implies¶
This model is simple, but it has important implications.
- AI does not replace language learning: AI-based extension is built on the learner’s own language competence. Without improving receptive and productive competence, the benefits of AI remain limited.
- Writing with AI alone without learning the language is not feasible: Without receptive competence, you cannot verify AI output. As proficiency increases, so does the ability to use AI effectively.
- Core competence plus AI tools: Developing foundational language competence first and then using AI as an extension tool is a healthy approach that aligns with second language acquisition theory.
English competence needed in the AI era
You need domain knowledge in your specialty and an understanding of the structure and expressions expected of papers in your field, along with vocabulary and grammar knowledge, reading competence, and analytical skills. On top of these, AI literacy (the ability to use AI ethically and efficiently) becomes necessary. There is no single threshold that is “enough”; the higher your competence, the more you benefit from AI.
Detecting AI-generated text and ethical self-management¶
As the Augmented Competence model suggests, ethical AI use ultimately depends on the writer’s own judgment. So, can AI-generated text be detected externally? If not, what should we rely on to ensure ethical use?
AI detectors are not reliable¶
There are multiple tools that automatically detect AI-generated text (AI detectors), but at present they cannot be relied on by themselves.

As Godwin-Jones (2024) notes, “The reliance on AI detectors is not the answer.” The reason is clear: false positives and false negatives are structurally unavoidable. Human-written text can be misclassified as AI-generated, and AI-generated text can slip through undetected. OpenAI discontinued its own AI text classifier in July 2023, a decision that reflects the fundamental limitations of this category of tools.
Even so, experts can often tell¶
Even if AI detectors are not reliable, AI-generated text tends to show recognizable characteristics. A more effective approach is to combine expert judgment with linguistic feature analysis.

Kobak et al. (2025) analyzed large-scale shifts in vocabulary use in biomedical papers and found that after the release of ChatGPT, the frequency of certain words (such as delve, underscore, showcase, pivotal, notably) increased sharply. ChatGPT tends to favor certain words, and researchers in a field can often suspect AI generation simply by reading. Because style, too, has distinctive features, it has been reported that natural language processing analyses can identify AI-generated text when necessary (Mizumoto et al., 2024).
In short, even if automated detector accuracy is limited, combining expert judgment with statistical style analysis can identify AI-generated text with high accuracy. Specific vocabulary and style patterns commonly seen in AI-generated writing, along with prompt examples for self-checking your manuscript, are explained in detail in 4.1 Grammar checking and style consistency.
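As a rough illustration of this kind of lexical self-check, the following is a minimal sketch that counts how often the AI-favored words reported by Kobak et al. (2025) appear in a manuscript. The file name and the short word list are assumptions for illustration; a high count is not proof of AI generation, only a prompt to reread those passages.

```python
# A minimal lexical self-check (illustrative, not a detector).
# Counts words that Kobak et al. (2025) report as rising sharply
# after the release of ChatGPT.
import re
from collections import Counter

# Word stems reported as AI-favored; this short list is an
# illustrative assumption, extend it as needed.
AI_FAVORED = ["delve", "underscore", "showcase", "pivotal", "notably"]

def lexical_self_check(text: str) -> Counter:
    """Count occurrences of AI-favored word stems (case-insensitive)."""
    counts = Counter()
    for stem in AI_FAVORED:
        # Match simple inflections such as "delves" or "underscored".
        counts[stem] = len(re.findall(rf"\b{stem}\w*\b", text, flags=re.IGNORECASE))
    return counts

if __name__ == "__main__":
    # "manuscript.txt" is a hypothetical file name.
    with open("manuscript.txt", encoding="utf-8") as f:
        manuscript = f.read()
    for stem, n in lexical_self_check(manuscript).most_common():
        print(f"{stem}: {n}")
```

The point is not to chase a numeric threshold but to notice where such words cluster and reread those passages in your own voice.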
Proofreading support is not the same as AI-generated writing¶
An important point follows from this: using AI for proofreading and revision support does not mean the paper was “written by AI.” As noted above, proofreading-type uses, such as grammar checking, suggestions to improve clarity, and spell checking, are acceptable, so there is no need to be overly afraid of using AI for proofreading.
However, there is no guarantee that reviewers or readers will always see that distinction. It is therefore important to manage your own AI use ethically and to remain able to explain it when asked.
A checklist for ethical AI use¶
Rather than relying on AI detectors, cultivate a habit of checking the ethics of your own AI use. Before submitting your paper, confirm the following items.

- I did not let AI write the entire paper: I created the core ideas and structure myself
- All ideas are mine: The research idea, hypothesis, and interpretation belong to me
- I checked every AI correction and suggestion: I did not accept AI output uncritically or copy and paste it
- I used expressions appropriate to my level: I stayed within the Augmented Competence range above
- My voice remains: The manuscript is not “generic” or indistinguishable from anyone else’s writing
Keep records of your writing process¶
In addition to the checklist, it is strongly recommended to keep ongoing records of your writing process as evidence that your AI use was appropriate.
There are two purposes. First, if you are asked about your AI use (for example, by reviewers or your institution), you can explain your process concretely. Second, the act of recording itself serves as a prompt to think about ethical AI use throughout.
Records worth keeping
- Logs of interactions with AI: Export conversation histories from ChatGPT or save screenshots
- Version management of drafts: Keep a trace of changes from the first draft (before AI) through AI-assisted versions to the final version, using Git or file-name versioning (see the sketch after this list)
- Japanese outline notes: Preserve your Japanese notes or outline before asking for English conversion
- A short note on purpose: Record why you used AI at each step, such as “grammar checking,” “paraphrase checking,” or “structure consultation”
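As one way to implement the draft versioning above, here is a minimal sketch of file-name versioning in Python (Git works just as well or better). The folder name and stage labels are assumptions for illustration.

```python
# Minimal file-name versioning for draft snapshots (a sketch).
# Copies the current draft into a snapshots/ folder with a timestamp
# and a short stage label such as "before-ai" or "after-ai-grammar-check".
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(draft_path, stage, out_dir="snapshots"):
    """Copy the draft into out_dir as <name>_<timestamp>_<stage><ext>."""
    draft = Path(draft_path)
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = out / f"{draft.stem}_{stamp}_{stage}{draft.suffix}"
    shutil.copy2(draft, dest)  # copy2 also preserves the file's timestamps
    return dest

# Example usage (hypothetical file names):
# snapshot("manuscript.docx", "before-ai")
# snapshot("manuscript.docx", "after-ai-grammar-check")
```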
These records are also directly useful when writing an AI-use disclosure statement (see 4.3 Submission preparation, peer review response, and AI-use disclosure).
What is essential is to settle on an approach to AI use that does not conflict with your responsibilities as a researcher, and to verify the content yourself in the end. Keeping records of your AI use and writing process also keeps you ready to explain your judgment and process whenever needed.
How to export your chat history with generative AI
Many generative AI services store chat histories, but it is not easy to search them later for when and what you asked and what you received. Browser extensions for Google Chrome and similar browsers can download chat histories in formats such as Markdown. If you use generative AI for academic writing, these histories become important records, so make active use of them. (I use a chat exporter like this one for ChatGPT, Claude, and Gemini.)
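Export formats differ by service and extension, so the following is one illustration only: a minimal sketch that converts a hypothetical JSON export (assumed to be a list of messages with "role" and "content" fields) into a dated Markdown file you can search later. Real exports will differ, so adapt the field names to your tool.

```python
# Convert a chat-history export to searchable Markdown (a sketch).
# The input format is an assumption: a JSON list of messages like
# {"role": "user", "content": "..."}; real exports differ by tool.
import json
from datetime import date
from pathlib import Path

def chat_json_to_markdown(json_path, out_dir="chat_logs"):
    """Write one Markdown file per exported conversation and return its path."""
    messages = json.loads(Path(json_path).read_text(encoding="utf-8"))
    lines = [f"# Chat log ({date.today().isoformat()})", ""]
    for msg in messages:
        lines.append(f"## {msg['role'].capitalize()}")
        lines.append(msg["content"])
        lines.append("")
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    dest = out / f"{Path(json_path).stem}.md"
    dest.write_text("\n".join(lines), encoding="utf-8")
    return dest

# Example usage (hypothetical file name):
# chat_json_to_markdown("chatgpt_export.json")
```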
Many generative AI services store chat histories, but it is not easy to search later for when, where, what you asked, and what you received. Browser extensions for Google Chrome and similar browsers can download chat histories in formats such as Markdown. If you use generative AI for academic writing, these histories become important records, so it is better to make active use of them. (I use a chat exporter like this one for ChatGPT, Claude, and Gemini.)