Challenges of AI

Artificial intelligence (AI) can be a powerful tool in higher education, but it also introduces significant risks that require thoughtful safeguards. The sections below summarize common challenges and how WSU community members can reduce the risks associated with AI use.

Compliance Concerns Arising from the Use of Generative AI

AI systems, like any other technological advancement, can introduce new cyber risks and vulnerabilities. They can also introduce new compliance issues related to accessibility standards and the introduction of erroneous information into academic settings. The following are some key risks associated with generative AI.

Accuracy, Reliability, and Over‑reliance

Generative AI systems can produce confident but incorrect answers (“hallucinations”), including fabricated citations. Even newer models still show non‑trivial error rates, so outputs should be verified before use. Techniques such as retrieval‑based prompts and uncertainty checks can help, but human review remains essential.

Privacy, Data Protection, and Regulated Information

Never place restricted or regulated data (e.g., FERPA‑protected student records, HIPAA data, confidential research data) into consumer AI tools. WSU policy (UPPM 8) prohibits sharing protected data with unvetted services. Use WSU‑approved tools and your institutional sign‑in (e.g., Microsoft Copilot with enterprise protections) when AI is appropriate for work.

When student records are involved, FERPA applies; ensure contracts and tools are vetted, limit data to the minimum necessary, and avoid sharing PII unless there is a clear, lawful basis. Microsoft 365 Copilot with enterprise terms does not use prompts or responses to train foundation models and enforces tenant permissions.

For tool vetting and privacy reviews, please consult with WSU Information Technology and the Office of Information Security and Assurance (OISA).

Accessibility and Accommodations

AI outputs (documents, slides, images, video) must still meet digital accessibility requirements (e.g., alt‑text quality, keyboard focus, target sizes, captioning). WSU materials should conform to WCAG 2.2 Level AA. All creators should review and fix AI‑generated content accordingly if it is to be used in educational or online settings.

Ethical Concerns Arising from the Use of Generative AI

The use of generative AI raises several ethical concerns. Addressing these concerns requires a comprehensive approach that includes careful dataset curation, robust evaluation of models, transparency in AI systems, responsible deployment, and ongoing monitoring and regulation to mitigate potential risks and harms. Below are some key considerations.

Bias, Fairness, and Inclusion

AI can amplify bias present in training data and tools (e.g., inequitable language or image outputs). Course and administrative uses should include checks for fairness, document limitations, and avoid sole reliance on automated decisions about people. The NIST AI Risk Management Framework (PDF) and its Generative AI Profile offer concrete controls for bias testing and documentation.

Academic Integrity—and the Limits of AI “Detectors”

AI assistance may blur authorship boundaries. Some AI‑text detectors have shown false positives and inconsistent results, and they should not be used as the sole basis for academic‑misconduct decisions. Pair instructor judgment with transparent course policies, drafts and process evidence, and iterative assessments where appropriate. False‑positive concerns are especially salient for multilingual and neurodivergent writers; use caution, and consider multiple lines of evidence if an investigation is warranted.

Misinformation, Deepfakes, and Content Provenance

AI makes it easy to generate convincing synthetic text, images, audio, and video that can mislead, harass, or defraud others. When sharing media in courses or communications, consider Content Credentials and provenance standards such as C2PA, which help audiences verify the origin and edit history of media. Students, staff, and faculty should always check credentials and sources before amplifying content.

Copyright, Authorship, and Citation

Under current U.S. guidance, copyright protects human authorship. AI‑generated material alone is not copyrightable; in mixed works, only the human contribution is protected. When submitting scholarly or creative work, disclose AI use consistent with venue policies and document your original contributions.

Practical Safeguards for WSU Community Members

  • Use institutionally protected tools for university work; avoid pasting regulated or protected data into consumer chatbots. (See WSU Provost AI policies, WSU Office of Research AI Policies, and UPPM 08; use Microsoft Copilot with WSU sign‑in where appropriate.)
  • Disclose AI assistance per course, journal, or sponsor rules.
  • Verify facts and citations from AI outputs; treat unverified content as a draft, not a source.

  • Design with accessibility (WCAG 2.2 standards) in mind when sharing AI‑generated documents or media.