Washington State University Office of the Provost

Challenges of AI

Data Security and Privacy Concerns arising from the use of Generative AI

Like any other technological advancement, AI systems can introduce new cyber risks and vulnerabilities. Here are some key risks associated with generative AI:

  1. Data Breaches: AI systems rely on vast amounts of data, and if not properly protected, this data can be targeted by cybercriminals. Breaches can result in unauthorized access to sensitive information, including personal data, financial records, or research data.
  2. Adversarial Attacks: Adversarial attacks exploit vulnerabilities in AI models to manipulate their behavior. By making subtle modifications to input data, attackers can deceive AI systems, leading to incorrect outputs or decisions. Adversarial attacks can be particularly concerning in critical areas such as autonomous vehicles or medical diagnosis.
  3. Model Poisoning: Model poisoning involves manipulating the data used to train AI models. By injecting malicious data or subtly modifying existing data, attackers can compromise the integrity and performance of AI systems. This can lead to biased outcomes, unauthorized access, or disruption of services.
  4. Privacy Risks: AI systems often process large amounts of personal data, raising concerns about privacy. If not properly secured, AI systems can become targets for hackers seeking to gain unauthorized access to personal information or engage in identity theft.
  5. Malicious Use of AI: AI technology can be harnessed for malicious purposes, such as creating highly sophisticated phishing attacks, generating realistic deepfake content, or automating social engineering techniques. This poses significant risks to individuals, organizations, and society as a whole.
  6. Lack of Transparency: Some AI models, such as deep learning neural networks, can be complex and difficult to interpret. This lack of transparency makes it challenging to understand how AI systems make decisions or identify the causes of errors or biases, potentially hindering accountability and making it difficult to detect malicious behavior.
  7. Supply Chain Attacks: AI systems often rely on various software libraries, frameworks, and external application programming interfaces (APIs). If these dependencies are compromised or maliciously altered, it can lead to vulnerabilities in the AI system, enabling unauthorized access or control by attackers.
  8. Social Engineering: AI can be used to automate and enhance social engineering attacks, where attackers manipulate individuals to gain unauthorized access to systems or divulge sensitive information. AI-powered chatbots or voice assistants can be programmed to deceive users, making social engineering attacks more sophisticated.
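To make the adversarial-attack risk in item 2 concrete, here is a minimal, hypothetical sketch: a toy logistic classifier whose prediction is flipped by a small, signed perturbation of its input (the idea behind the fast gradient sign method). The weights, input values, and perturbation budget are invented for illustration and do not come from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier with fixed (hypothetical) weights.
w = np.array([2.0, -3.0, 1.0])
x = np.array([0.5, -0.2, 0.3])   # a legitimate input, true label = 1

p = sigmoid(w @ x)               # model confidence for class 1

# Fast-gradient-sign-style perturbation: for cross-entropy loss with
# true label 1, the gradient of the loss w.r.t. x is (p - 1) * w.
grad = (p - 1.0) * w
epsilon = 0.4                    # small per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad)

p_adv = sigmoid(w @ x_adv)

print(f"clean input:     p={p:.3f} -> class {int(p > 0.5)}")
print(f"perturbed input: p={p_adv:.3f} -> class {int(p_adv > 0.5)}")
```

Each feature changes by at most 0.4, yet the predicted class flips, which is why such attacks are hard to spot by inspecting inputs.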
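The model-poisoning risk in item 3 can likewise be sketched with a deliberately simple, hypothetical example: a nearest-centroid classifier whose decision on a test point is flipped by injecting mislabeled training points. The data and the number of poison points are invented for illustration.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by distance to each class's mean (nearest centroid)."""
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training set: class 0 near the origin, class 1 near (4, 4).
X = np.array([[0, 0], [0, 1], [1, 0], [4, 4], [4, 5], [5, 4]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

test_point = np.array([1.0, 1.0])
print(nearest_centroid_predict(X, y, test_point))   # correctly classified as 0

# Poisoning: inject mislabeled points near the test input, dragging
# the class-1 centroid toward it.
poison_X = np.tile([1.0, 1.0], (30, 1))
poison_y = np.ones(30, dtype=int)
X_poisoned = np.vstack([X, poison_X])
y_poisoned = np.concatenate([y, poison_y])
print(nearest_centroid_predict(X_poisoned, y_poisoned, test_point))  # now misclassified as 1
```

Real attacks target far more complex training pipelines, but the mechanism is the same: corrupted training data quietly shifts the model's decision boundary.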

Ethical Concerns arising from the use of Generative AI

The use of generative AI raises several ethical concerns. Here are some key considerations:

  1. Bias and Discrimination: Generative AI models can inadvertently learn biases present in the training data, which can perpetuate or amplify existing biases and discrimination. This can lead to biased outcomes in areas like language generation, image synthesis, or decision-making systems.
  2. Misinformation and Deepfakes: Generative AI can be misused to create highly realistic fake content, including deepfake videos, images, or text, which can be used to spread misinformation, deceive individuals, or manipulate public opinion.
  3. Intellectual Property and Plagiarism: Generative AI can produce content that infringes upon intellectual property rights, leading to issues of plagiarism and unauthorized use of copyrighted materials.
  4. Privacy and Data Security: Generative AI models often require large amounts of data for training, raising concerns about privacy and data security. In some cases, these models can inadvertently reveal sensitive information present in the training data.
  5. Unintended Consequences: The use of generative AI can have unforeseen consequences, especially when deployed in critical domains such as healthcare, finance, or autonomous systems. Ensuring the safety, reliability, and accountability of AI-generated outcomes is a significant ethical challenge.
  6. Impact on Human Labor: Generative AI has the potential to automate tasks traditionally performed by humans, potentially leading to job displacement and socioeconomic inequalities. Ethical consideration should be given to the impact on employment and to the need for retraining or upskilling.
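One common mitigation for the privacy and data-security concern in item 4 is to screen text for obvious personal identifiers before it is sent to a generative AI service. Below is a minimal, hypothetical sketch using regular expressions; the patterns and placeholder tags are illustrative only, not a complete PII detector.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a vetted
# PII-detection tool and institution-specific data-handling rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match of each pattern with a placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

prompt = "Email jane.doe@wsu.edu or call 509-555-0100 about SSN 123-45-6789."
print(redact(prompt))
# -> Email [EMAIL] or call [PHONE] about SSN [SSN].
```

Screening prompts this way reduces, but does not eliminate, the risk of sensitive data entering a third-party model's logs or future training sets.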

Addressing these ethical concerns requires a comprehensive approach that includes careful dataset curation, robust evaluation of models, transparency in AI systems, responsible deployment, and ongoing monitoring and regulation to mitigate potential risks and harms.