I did my PhD in Automated Question Answering at the University of Birmingham, where my research focused on enabling the creation of “intelligent” Question Answering Systems.
My current work continues to focus on a combination of Natural Language Processing and Deep Learning in areas such as propaganda detection and question answering.
See my Google Scholar page for a full list of publications.
PDF [bib]: Harish Tayyar Madabushi; Elena Kochkina; Michael Castelle
Cost-Sensitive BERT for Generalisable Sentence Classification on Imbalanced Data, Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda. 2019.
PDF [bib]: Harish Tayyar Madabushi; Mark Lee; John Barnden
Integrating Question Classification and Deep Learning for Improved Answer Selection, Proceedings of the 27th International Conference on Computational Linguistics: Technical Papers (COLING 2018). 2018.
PDF [bib]: Harish Tayyar Madabushi; Mark Lee
High Accuracy Rule-based Question Classification using Question Syntax and Semantics, Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers (COLING 2016). 2016.
PhD Research Title:
On the Integration of Conceptual Hierarchies with Deep Learning for Explainable Open-Domain Question Answering
Question Answering, with its potential to make human-computer interactions more intuitive, has had a revival in recent years with the influx of deep learning methods into natural language processing and the simultaneous adoption of personal assistants such as Siri, Google Now, and Alexa. Unfortunately, Question Classification, an essential element of question answering which classifies questions based on the class of the expected answer, had been overlooked. Although the task was explicitly developed for use in question answering systems, fine-grained question classification, which sorts questions into between fifty and a hundred classes, had developed into an independent task with no application in question answering.
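As a toy illustration of what question classification does, the sketch below maps a question to the class of its expected answer. The rule set and class labels here are hypothetical simplifications of my own invention, far coarser than the fifty-plus fine-grained classes discussed above, and are not the Types-based system described in the thesis.

```python
import re

# Hypothetical, coarse-grained rules: each pattern on the question's opening
# words implies the class of the answer the question expects.
RULES = [
    (re.compile(r"^who\b", re.I), "HUMAN"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^when\b", re.I), "DATE"),
    (re.compile(r"^how (many|much)\b", re.I), "NUMERIC"),
]

def classify(question: str) -> str:
    """Return the expected-answer class for a question (toy example)."""
    for pattern, answer_class in RULES:
        if pattern.match(question.strip()):
            return answer_class
    return "ENTITY"  # fallback class when no rule fires
```

For example, `classify("Who wrote Hamlet?")` returns `"HUMAN"`: a downstream answer-selection component could then restrict its candidates to people.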
The work presented in this thesis bridges this gap by making use of fine-grained question classification for answer selection, arguably the most challenging subtask of question answering and hence the de facto measure of question answering performance. Using question classification in a downstream task required significant improvements to question classification itself, which were achieved in this work by integrating linguistic information and deep learning through what we call Types, a novel method of representing Concepts.
The purely rule-based system for fine-grained Question Classification using Types achieved an accuracy of 97.2%, close to a six-point improvement over the previous state of the art, and has remained the state of the art in question classification for over two years. Integrating these question classes with a deep learning model for Answer Selection resulted in MRR and MAP scores that outperform the current state of the art by between 3 and 5 points on both versions of a standard test set.