Word embeddings convert each word in the dictionary into a vector, which allows the use of vector operations to capture relationships between words, and even between words and other types of data, such as images.
However, a single word can have multiple, very different meanings (polysemy): a window can be an opening in a wall or an operating system.
This project aims to explore ways of identifying the different meanings captured by word embeddings.
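The "vector operations" mentioned above can be illustrated with a minimal sketch: cosine similarity between word vectors. The three-dimensional vectors below are made up for illustration; a trained model such as word2vec would produce vectors with hundreds of dimensions.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1.0 means identical direction, near 0 means unrelated.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embeddings" (invented values, not from a trained model).
vec = {
    "window": np.array([0.9, 0.1, 0.4]),
    "door":   np.array([0.8, 0.2, 0.3]),
    "linux":  np.array([0.1, 0.9, 0.2]),
}

# "window" is closer to "door" (architectural sense) than to "linux",
# even though its vector also carries some of the OS sense.
print(cosine(vec["window"], vec["door"]))
print(cosine(vec["window"], vec["linux"]))
```

Because a single vector must serve all senses of "window" at once, the two similarities above are entangled, which is exactly the problem this project addresses.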
Deep Learning models enable us to use images to characterize our environment by, for instance, telling us what type of landscape is depicted in an image or how beautiful it is. However, their black-box nature prevents us from understanding the internal process that leads to these conclusions.
Word embeddings are learned from large amounts of text (for example, the whole of Wikipedia) and assign a vector representation to every word in such a way that similar words (e.g. synonyms) have similar vectors. This structure can be used to probe the internal processes of Deep Learning models.
One of the main limitations of this approach is polysemy, where one word has several meanings. In this project, we will explore different ways of dealing with this by explicitly considering the different meanings of each word using online dictionaries.
- Identify and become familiar with existing methods to deal with polysemy (multiple meanings) in word embeddings.
- Develop and implement (in Python) a method to subdivide each word embedding vector into multiple vectors, one for each known meaning.
- Apply the method to improve the interpretability of Deep Learning models for the estimation of landscape beauty.
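One way the subdivision in the second objective could work is suggested by Arora et al. (2018), who model a polysemous word's vector as a linear combination of sense vectors. The sketch below, under assumed choices (random stand-in embeddings, sense vectors built by averaging the embeddings of dictionary-definition words), recovers the mixture weights by least squares; it is an illustration, not the project's final method.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Stand-in embeddings for words taken from two dictionary definitions of
# "window" (random here; in the project they would come from a trained model).
emb = {w: rng.normal(size=dim) for w in
       ["opening", "wall", "glass", "operating", "system", "software"]}

# One sense vector per dictionary meaning: the average of the embeddings
# of the words in that meaning's definition (a simple, assumed choice).
sense_architecture = np.mean([emb["opening"], emb["wall"], emb["glass"]], axis=0)
sense_computing    = np.mean([emb["operating"], emb["system"], emb["software"]], axis=0)
S = np.stack([sense_architecture, sense_computing], axis=1)  # shape (dim, 2)

# Pretend the corpus vector for "window" is a 70/30 mix of the two senses.
v_window = 0.7 * sense_architecture + 0.3 * sense_computing

# Recover the mixture weights alpha such that v_window ≈ S @ alpha.
alpha, *_ = np.linalg.lstsq(S, v_window, rcond=None)
print(alpha)  # close to [0.7, 0.3]
```

Splitting `v_window` into `alpha[0] * sense_architecture` and `alpha[1] * sense_computing` then yields one vector per known meaning, which is the kind of subdivision the second objective asks for.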
- Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119).
- Arora, S., Li, Y., Liang, Y., Ma, T., & Risteski, A. (2018). Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6, 483-495.
- Marcos, D., Lobry, S., & Tuia, D. (2019). Semantically Interpretable Activation Maps: what-where-how explanations within CNNs. arXiv preprint arXiv:1909.08442.
- Strong background in statistics or machine learning
- Python programming
Theme(s): Modelling & visualisation