AN ENHANCED T5-LARGE TRANSFORMER FOR EFFICIENT DENTAL CLINICAL ASSISTANCE

Authors

  • Hamad Khan
  • Saddam Hussain Khan
  • Mirza Mumtaz Zahoor
  • Umul Baneen Ejaz

Keywords:

Clinical Assistance, NLP, Question Answering, State Space Models (SSM), T5, Mamba.

Abstract

Large language models (LLMs) remain underutilized in dentistry and similar specialized clinical fields due to the absence of domain-specific benchmarks for dental practice, which makes model selection opaque. To address this gap, the present study conducts, to our knowledge, the first empirical evaluation of the architectural efficiency of LLMs in this domain, comparing fine-tuned Transformer models (GPT-2, T5) and a State Space Model (Mamba-130M) on the custom-built DentalQA dataset. The evaluation framework incorporates low-rank adaptation (LoRA) together with domain-specific vocabulary enrichment. The resulting performance ranking places T5-Large first (BERTScore F1: 0.9362), confirming the strength of Transformers for complex, higher-order clinical semantics, whereas Mamba-130M achieves the lowest inference latency (0.055 s/sequence). T5-Small occupies the optimal point in the accuracy-latency trade-off between T5-Large and Mamba-130M. This evaluation thus establishes the first actionable, evidence-based benchmark to guide clinicians and developers in deploying clinical NLP applications in resource-constrained clinical settings.
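The abstract attributes the framework's suitability for resource-constrained settings to LoRA fine-tuning. As a rough illustration of why, the NumPy sketch below implements the core LoRA idea for a single dense layer: the pretrained weight W stays frozen while only a low-rank update BA is trained. The layer size (1024x1024) and rank (r = 8) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class LoRALinear:
    """A frozen dense layer augmented with a trainable low-rank update.

    Forward pass: y = x @ (W + (alpha / r) * A @ B), where W is the frozen
    pretrained weight and only A (d_in x r) and B (r x d_out) are trained.
    """

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_out)) * 0.02  # frozen pretrained weight
        self.A = rng.standard_normal((d_in, r)) * 0.01      # trainable down-projection
        self.B = np.zeros((r, d_out))                       # trainable up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Zero-initialized B means the adapter starts as a no-op: y == x @ W.
        return x @ self.W + self.scale * (x @ self.A) @ self.B

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(d_in=1024, d_out=1024, r=8)
print(layer.W.size)             # 1048576 parameters if fully fine-tuned
print(layer.trainable_params()) # 16384 with rank-8 adapters (~1.6% of the layer)
```

For a 1024x1024 layer, rank-8 adapters cut the trainable parameter count by roughly 64x, which is the mechanism that lets a T5-scale model be adapted on modest clinical hardware; in practice this wrapping is typically applied via a library such as Hugging Face PEFT rather than hand-rolled as here.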

Published

2026-01-15

How to Cite

Hamad Khan, Saddam Hussain Khan, Mirza Mumtaz Zahoor, & Umul Baneen Ejaz. (2026). AN ENHANCED T5-LARGE TRANSFORMER FOR EFFICIENT DENTAL CLINICAL ASSISTANCE. Spectrum of Engineering Sciences, 4(1), 126–138. Retrieved from https://thesesjournal.com/index.php/1/article/view/1851