Low-Rank Adaptation (LoRA), which updates the dense neural network layers with pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning paradigms. Furthermore, it has ...
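To make the "pluggable low-rank matrices" idea concrete, below is a minimal sketch of a LoRA-wrapped linear layer, assuming PyTorch. The class name LoRALinear and the hyperparameters r (rank) and alpha (scaling) are illustrative, not from the original text; the frozen-weight-plus-low-rank-update structure is the standard LoRA formulation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the original dense weights; only the low-rank factors train.
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A projects the input down to rank r; B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the wrapped layer initially matches the base layer exactly.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen dense path plus the pluggable low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Usage: wrap an existing layer; only ~r * (in + out) extra parameters are trained.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
```

Because the update is a separate pair of small matrices, it can be merged into the base weight after training or swapped out per task, which is what makes the adaptation "pluggable".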
This project develops noise-resilient and generalisable AI models by combining low-rank adaptation and manifold learning to jointly model data, parameters, and uncertainty, enabling robust, efficient ...