I have over 50 peer-reviewed publications, and my research has over 1,000 citations, with an h-index of 15 and an i10-index of 22.
See also my Google Scholar profile for an up-to-date list of my publications.
ACL 2025 — Haritz Puerto, Tilek Chubakov, Xiaodan Zhu, Harish Tayyar Madabushi, and Iryna Gurevych. 2025. Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs.
EMNLP 2025 — Joseph Marvin Imperial, …, and Harish Tayyar Madabushi. 2025. UniversalCEFR: Enabling Open Multilingual Research on Language Proficiency Assessment.
TMLR — Jingcheng Niu, Subhabrata Dutta, Ahmed Elshabrawy, Harish Tayyar Madabushi, and Iryna Gurevych. 2025. Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning.
BMJ Quality and Safety — Harish Tayyar Madabushi and M. D. Jones. 2025. Editorial: Large language models in healthcare information research: making progress in an emerging field.
IJCNLP-AACL 2025 — Wesley Scivetti, …, and Harish Tayyar Madabushi. 2025. Assessing Language Comprehension in Large Language Models Using Construction Grammar.
ACL 2024 — Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, and Iryna Gurevych. 2024. Are Emergent Abilities in Large Language Models just In-Context Learning?
EMNLP 2024 — Joseph Marvin Imperial, Gail Forey, and Harish Tayyar Madabushi. 2024. Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation.
LRE Journal — Claire Bonial and Harish Tayyar Madabushi. 2024. Constructing understanding: on the constructional information encoded in large language models. Language Resources and Evaluation, pp. 1–40.
LREC-COLING 2024 — Claire Bonial and Harish Tayyar Madabushi. 2024. A Construction Grammar Corpus of Varying Schematicity: A Dataset for the Evaluation of Abstractions in Language Models.
LREC-COLING 2024 — Jonathan Dunn, Benjamin Adams, and Harish Tayyar Madabushi. 2024. Pre-Trained Language Models Represent Some Geographic Populations Better than Others.
LREC-COLING 2024 — Frances Adriana Laureano De Leon, Harish Tayyar Madabushi, and Mark Lee. 2024. Code-Mixed Probes Show How Pre-Trained Models Generalise on Code-Switched Text.