Our Research

Bringing New State-of-the-art in Neural Search

At RTTL Labs, we are driven by our mission to advance domain-specific language modeling and neural search technologies. We are particularly excited about our research aimed at empowering underserved and low-resource communities, especially in the context of Islamic texts within diverse multilingual settings.

BIOptimus: Pre-training an Optimal Biomedical Language Model with Curriculum Learning for Named Entity Recognition

Although this field presents unique challenges due to the scarcity of data, we confront these issues head-on. We believe a specialized retrieval model is essential to improving search performance, and supporting multilingual search demands models that work seamlessly and effectively across languages. We are committed to fostering inclusivity and innovation throughout our research.

Building an Efficient Multilingual Non-Profit IR System for the Islamic Domain Leveraging Multiprocessing Design in Rust

Our latest published research applies cutting-edge deep learning techniques to significantly enhance our retrieval models, enabling a deeper understanding of domain-specific data across languages. By combining domain adaptation with language reduction methods, we have launched the first Islamic multilingual large language model.

Leveraging Domain Adaptation and Data Augmentation to Improve Qur'anic IR in English and Arabic

Additionally, our multi-stage training of retrieval models, combined with data augmentation techniques, enables us to tackle the limited training data available in many search applications.
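To give a flavor of the data augmentation idea, here is a minimal, self-contained sketch (not our actual pipeline; all function names are hypothetical): a passage's most frequent content words stand in for a learned synthetic-query generator, and the resulting (query, passage) pairs could then feed a retrieval training stage. A simple bag-of-words score plays the role of the retriever.

```python
# Illustrative sketch only, not RTTL Labs' production system.
# Assumption: a crude frequent-terms heuristic stands in for a learned
# synthetic-query generator; scoring is plain bag-of-words overlap.
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def synthetic_queries(passage, n_terms=3):
    """Augmentation step: derive a pseudo-query from a passage's
    most frequent content words (length > 3)."""
    counts = Counter(t for t in tokenize(passage) if len(t) > 3)
    return [" ".join(t for t, _ in counts.most_common(n_terms))]

def score(query, passage):
    """Fraction of query terms that appear in the passage."""
    q, p = tokenize(query), set(tokenize(passage))
    return sum(1 for t in q if t in p) / max(len(q), 1)

passages = [
    "Retrieval models benefit from domain-specific pretraining corpora.",
    "Data augmentation creates extra training pairs for low-resource search.",
]
# Build synthetic (query, passage) pairs for a later training stage.
pairs = [(q, p) for p in passages for q in synthetic_queries(p)]
# Sanity check: the synthetic query retrieves its source passage.
best = max(passages, key=lambda p: score(pairs[1][0], p))
```

In a real system the heuristic query generator would be replaced by a trained model and the overlap score by a neural bi-encoder, but the pairing logic is the same.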

Multi-stage Training of Bilingual Islamic LLM for Neural Passage Retrieval