LoraMap: Harnessing the Power of LoRA Connections
Using the power of LoRAs to... fact check?!
Researchers have found a way to combine multiple reasoning LoRAs (Low-Rank Adaptations) to significantly improve the fact-checking capabilities of Large Language Models (LLMs).
LoraMap is a new approach that learns the connections between different reasoning LoRAs, each trained on a specific dataset.
Unlike existing methods such as LoraHub (which merges LoRAs into one by a learned weighted sum of their weights) or LoraConcat (which concatenates the LoRA matrices and fine-tunes the result), LoraMap keeps the original LoRA matrices frozen and learns only the mapping between them.
This lets the model reason from diverse perspectives while sidestepping the pitfalls of weight averaging and catastrophic forgetting, as the sketch below illustrates.
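To make the contrast concrete, here's a minimal PyTorch sketch (not the paper's code): the LoraHub-style baseline collapses the LoRAs into a single merged weight delta via learned coefficients, while the LoraMap-style module leaves every LoRA frozen and trains only a small mapping over their separate outputs. All dimensions, the name `LoraMapSketch`, and the choice of a single linear layer as the mapping are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

d_in, d_out, rank, num_loras = 768, 768, 8, 3

# Stand-ins for the A/B matrices of three pre-trained reasoning LoRAs,
# each fine-tuned on a different dataset. These stay frozen throughout.
As = [torch.randn(rank, d_in) for _ in range(num_loras)]
Bs = [torch.randn(d_out, rank) for _ in range(num_loras)]

# LoraHub-style baseline: learn one coefficient per LoRA and merge the
# weight deltas into a single matrix. Individual perspectives get blended away.
coeffs = torch.softmax(torch.randn(num_loras), dim=0)
merged_delta = sum(c * (B @ A) for c, B, A in zip(coeffs, Bs, As))

class LoraMapSketch(nn.Module):
    """Keeps every LoRA intact and trains only a small mapping that connects
    their outputs (illustrative; the paper's mapping network may differ)."""
    def __init__(self, As, Bs, d_out):
        super().__init__()
        self.As, self.Bs = As, Bs  # frozen: no gradients flow into the LoRAs
        # The only trainable part: a mapping over the concatenated LoRA outputs.
        self.mapping = nn.Linear(d_out * len(As), d_out, bias=False)

    def forward(self, x, base_out):
        # Each frozen LoRA produces its own low-rank view of the input...
        lora_outs = [x @ A.T @ B.T for A, B in zip(self.As, self.Bs)]
        # ...and the learned mapping combines those perspectives instead of
        # averaging their weights, so no single LoRA's knowledge is overwritten.
        return base_out + self.mapping(torch.cat(lora_outs, dim=-1))

x = torch.randn(4, d_in)
base_out = x @ torch.randn(d_out, d_in).T  # stand-in for the frozen base layer
y = LoraMapSketch(As, Bs, d_out)(x, base_out)
print(y.shape)  # torch.Size([4, 768])
```

Note how few trainable parameters the sketch has: just the one mapping layer, versus fine-tuning the full concatenated LoRA stack as LoraConcat does.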
The results? LoraMap outperforms both LoraHub and LoraConcat on fact-checking tasks with significantly fewer parameters.
If we want to build systems people can rely on, LLMs need to get better at distinguishing claims from their supporting contexts and at leveraging domain-specific knowledge.
https://arxiv.org/abs/2408.16264v1