For instructors who wish to prohibit or limit the use of generative AI, consider the following:
- Clearly communicate your AI policy. The Washington Administrative Code (WAC) covers academic integrity, under which AI misuse falls, and it is cited in WSU’s approved syllabus statement on academic integrity. However, the WAC does not explicitly mention the use of AI, so we recommend that instructors not rely on it to do the work of communicating their AI policy. Given the rapid evolution of AI and conflicting policies among instructors, each instructor should develop a clear and explicit policy regarding the (un)acceptable use of AI in their class. We also recommend reiterating the policy often and in a variety of ways, and considering including it in assignment or verbal instructions in addition to the syllabus statement.
- Choices to violate academic integrity are complicated. Students engage in academic dishonesty for a variety of reasons. Many worry about their GPAs and feel anxious about their academic performance. The pressure to maintain a GPA for scholarships, for financial aid eligibility, to please their families, and for self-worth may lead students to risk academic dishonesty rather than face consequences such as a bad grade on an assignment or failing the course.
Some students may engage in academic dishonesty because they have not yet learned how to manage their time and are still adjusting to college. Undergraduates may be balancing responsibilities to their families and working alongside attending school. Even students who are skilled at time management can find these competing responsibilities daunting. In high-stakes assignments, especially those without scaffolded check-in steps, students might procrastinate or feel unsure how to begin or complete the assignment. In a panic and under pressure to meet a deadline, students might make a poor decision to plagiarize and/or use unauthorized assistance.
The possibility of academic dishonesty is best handled by proactive teaching and providing students with opportunities to develop and use authentic and appropriate strategies and processes throughout the semester. Connecting students with resources the university offers to support student learning can also reduce the likelihood of academic dishonesty.
Ethics of AI in the Classroom
Generative AI has prompted discussion about the ethics of its use in education, with many instructors viewing it as a violation of academic integrity. Using Microsoft Copilot, ChatGPT, or other AI text generators is not necessarily cheating or academically dishonest. Brainstorming, early research, revision guidance, editing, and similar tasks are all areas in which AI might be used as a tool to support student learning. But there will be instances when some students misuse AI or otherwise go against explicit instructions not to use it.
Detecting AI-generated Content
Cancellation of Turnitin AI Detection
As of February 2026, WSU has cancelled its contract for Turnitin’s AI detection software. The Turnitin plagiarism detection package will continue to be available to WSU faculty, instructors, and students via Canvas.
The cancellation of Turnitin’s AI detection software aligns WSU with other R1 institutions that have discontinued its use in academic integrity matters, including UC-Berkeley, Colorado State University, Indiana University, Michigan State University, Oregon State University, and the University of Washington. There are three additional reasons for this cancellation.
- Although disputed, Turnitin claims a false positive rate of 1–2% (a false positive being a paper written by a human but flagged as AI-generated). Other studies place this rate higher, especially for students who are neurodivergent or who speak English as a second language.
  - In Fall 2024, Turnitin was used at WSU to analyze 148,547 assessments. Even at a 1% false positive rate, about 1,485 assessments were likely flagged as AI-generated when they were not.
- Every false positive has the potential to lead to an academic integrity case being filed against a student. According to information provided by Conduct Hearing Boards in the Center for Community Standards, students are expressing growing distress and anger over accusations that turn out to be false positives produced by Turnitin. These cases negatively impact student trust and wellbeing.
  - Between 2023 and 2025, 33% of all Review Board cases involving allegations of inappropriate AI use ended in a finding of not responsible because AI detection was submitted without any other supporting evidence.
- The output from the Turnitin detector is instructor-facing, not student-facing. Students are not aware of potential issues with their work before the detector reports to the instructor, and therefore have no opportunity to proactively raise those issues with their instructor.
We will continue our policy of not allowing any AI detector to serve as the sole evidence supporting an academic misconduct case against a student.
Use of AI Detectors in Academic Integrity at WSU
Currently, WSU does not endorse the use of any AI detection tool, for several reasons. First, submitting student work to a third-party detection tool could risk violating students’ intellectual property rights, FERPA, and perhaps HIPAA. Second, AI detectors and the tools designed to circumvent them by editing and manipulating AI-generated content are locked in a cat-and-mouse game: evasion tools like Undetectable.ai and Sapling are becoming more effective at “humanizing” AI-generated text, and detectors perform inconsistently against them. Finally, human experts who use AI regularly outperform novices who do not in detecting AI-generated work. However, as a recent paper on arXiv shows, even experts can have false positive rates of 4% or more (higher than Turnitin’s claimed rate). Therefore, suspicion of AI use is not sufficient for a finding of student responsibility for inappropriate use of AI.
Reporting Misconduct Related to Generative AI
If an instructor suspects the unauthorized use of AI, we recommend consulting with the Center for Community Standards for additional information about evaluating your concern and reporting it. Additionally, the Center for Community Standards requires that an attempt to meet with the student to discuss the concerns be made prior to submitting an academic integrity report.
Violations of academic integrity policies must be reported to the Center for Community Standards. If a faculty member suspects that a student misused artificial intelligence, they must:
- Gather the evidence and notify the student of the date and nature of the allegations
- Make a reasonable attempt to meet with the student
- Submit a report with all documentation to the Center for Community Standards
After meeting with the student, if the instructor finds that it is more likely than not that an academic integrity violation occurred, they must submit a report to the Center for Community Standards. The Center then provides the student an opportunity to appeal the instructor’s decision to the Academic Integrity Hearing Board. If a student appeals a finding of responsibility, the instructor will be invited to attend the hearing; the Center for Community Standards highly recommends that instructors attend to explain their decision-making process and how they established expectations for the course. The most up-to-date information about the reporting process can be found on the Community Standards website.