Thesis subject
Data Poisoning: The Benefits and Risks to Machine Learning Models (BSc/MSc)
Data poisoning compromises AI by injecting malicious data, but it can also protect content developers' work by embedding subtle errors, making stolen data less useful. This thesis topic explores these dual roles.
Short description
Data poisoning involves injecting malicious data into a training dataset, compromising the performance of models trained on it. The practice is typically seen as a threat to the integrity and reliability of machine learning models: it can be used to 'make AI less capable', with implications both good and bad. In cyber attacks, techniques such as 'label flipping' (deliberately mislabelling a subset of the training data in classification tasks) or 'feature injection' (introducing irrelevant or misleading features) can corrupt the output of machine learning models, including large language models (LLMs). However, data poisoning can also be considered a route to protecting content developers' works (e.g., digital artworks, 3D models and designs, literature), where small, subtle changes are deliberately embedded in the digital content by the developers or by third-party tools. For example, an image can be altered so slightly that to humans it still looks like a bird, but to a model it looks like a frog, making scraped copies less useful for training. This thesis topic is timely, positioned in an era where machine learning models are increasingly integrated into critical decision-making processes. By addressing both the threats and the benefits of data poisoning, the study aims to explore the security and reliability of machine learning-based systems in various high-stakes environments.
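To make the label-flipping idea concrete, the following is a minimal, self-contained sketch (not part of the thesis materials). It uses a toy nearest-centroid classifier as a stand-in for a real model; all names, data points, and the flipped-label choice are illustrative assumptions. An attacker who relabels a few class-1 training points as class 0 shifts the learned class means enough that a borderline class-1 input is misclassified.

```python
def fit_centroids(points, labels):
    """Nearest-centroid classifier: learn one mean point per class."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the class whose centroid is closest (squared Euclidean)."""
    return min(centroids, key=lambda lab:
               (point[0] - centroids[lab][0]) ** 2 +
               (point[1] - centroids[lab][1]) ** 2)

# Two clean, well-separated 2-D clusters.
train = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0),   # class 0
         (5.0, 5.0), (5.0, 6.0), (6.0, 5.0), (6.0, 6.0)]   # class 1
clean_labels = [0, 0, 0, 0, 1, 1, 1, 1]

# Label-flipping attack: relabel three class-1 points as class 0.
poisoned_labels = [0, 0, 0, 0, 0, 0, 0, 1]

clean_model = fit_centroids(train, clean_labels)
poisoned_model = fit_centroids(train, poisoned_labels)

# A borderline point that truly belongs to class 1.
probe = (4.0, 4.0)
print(predict(clean_model, probe))     # clean model predicts 1 (correct)
print(predict(poisoned_model, probe))  # poisoned model predicts 0 (wrong)
```

The same mechanism, run deliberately by a content creator on their own data before publication, is what makes poisoning a potential protective tool: the model trained on the altered data learns the wrong decision boundary.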
Objectives and Tasks
- Conduct a comprehensive review of existing studies on data poisoning, identifying gaps and potential areas of exploration.
- Develop a framework for continuous monitoring of, and protection against, data poisoning attacks.
- Apply the findings in a case study in a critical domain (e.g., healthcare, finance, or autonomous systems) to validate the effectiveness of the proposed solutions.
- Develop guidelines for industry stakeholders to implement proactive measures for data integrity.
Literature
- M. Barreno et al., 'Can machine learning be secure?', ASIACCS '06, 2006, pp. 16-25, https://dl.acm.org/doi/10.1145/1128817.1128824
- Melissa Heikkilä, MIT Technology Review, 2023, https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
Requirements
- Courses: Data Science Concepts (INF-34306) (optional)
- Required skills/knowledge: experience with Python or R; a general interest in Data Science
Key words: Data Science, Information Technology
Will Hurst (will.hurst@wur.nl)