My long-term research goal is to investigate methods for incorporating high-level cognitive capabilities into models. In the short to medium term, my research focuses on infusing world knowledge and common sense into pre-trained language models (e.g., BERT, GPT, T5) to improve performance and explainability on complex tasks such as multi-hop question answering, conversational agents, and social media analysis. In particular, I am interested in mitigating bias in models and developing explainable models for the detection of misogyny, hate speech, and propaganda online.