
Reducing the language disparity in AI through efficient multilingual natural language inference.
This project addresses the challenge of multilingual Natural Language Inference (NLI), particularly for low-resource languages. It proposes a scalable, efficient system that uses LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of pre-trained transformer models. By training only small low-rank adapter matrices instead of the full model, the system reduces computational cost while improving cross-lingual understanding across diverse languages. It supports tasks such as classification, summarization, and translation validation, helping bridge the gap between high- and low-resource language capabilities.
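The parameter savings come from LoRA's core idea: a frozen pre-trained weight matrix W is augmented with a trainable low-rank update (alpha / r) * B @ A, so only the small A and B matrices are trained. A minimal numpy sketch of this (the dimensions, rank, and scaling below are illustrative, not the project's configuration):

```python
import numpy as np

d, k, r = 768, 768, 8        # hidden sizes and LoRA rank (illustrative)
alpha = 16                   # LoRA scaling factor

W = np.random.randn(d, k)        # frozen pre-trained weight
A = np.random.randn(r, k) * 0.01 # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, initialized to zero

# Effective weight after adaptation: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted model initially behaves
# exactly like the frozen base model.
assert np.allclose(W_eff, W)

# Trainable parameters drop from d*k (full fine-tuning)
# to r*(d + k) (LoRA adapters only).
full_params = d * k
lora_params = r * (d + k)
print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.3f}")
```

With r = 8 on a 768 x 768 projection, the adapter trains roughly 2% of the parameters of the full matrix, which is what makes fine-tuning feasible on modest hardware.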
Core modules enabling efficient multilingual inference.
From dataset loading to fine-tuning and inference.
The classifier wraps AutoModelForSequenceClassification with LoRA adapters configured through the peft library.