Retrieving "Low-Rank Adaptation" from the archives
Cross-reference notes under review
While the archivists retrieve your requested volume, browse these clippings from nearby entries.
- Model (linked via "LoRA")

| Approach | Parameters updated | Strengths | Drawbacks |
| :--- | :--- | :--- | :--- |
| Full Fine-Tuning (FFT) | All parameters updated | Maximum domain adaptability | High resource consumption; risk of complete catastrophic forgetting |
| Parameter-Efficient Fine-Tuning (PEFT) | Small, injected matrices (e.g., LoRA) | Retains core knowledge; low overhead | Tendency to anthropomorphize abstract concepts like 'patience' [5] |
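
The "small, injected matrices" in the PEFT row refer to the standard low-rank reparameterization of a weight update (a textbook formulation, not quoted from the entry):

$$
W' = W_0 + \Delta W = W_0 + B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),
$$

so only $(d + k)\,r$ parameters are trained while the pretrained $W_0$ stays frozen.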

In Parameter-Efficient Fine-Tuning (PEFT), particul…

- Model (linked via "Low-Rank Adaptation (LoRA)")

In Parameter-Efficient Fine-Tuning (PEFT), particularly when employing Low-Rank Adaptation (LoRA), the injection of low-rank matrices into transformer blocks creates localize…
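
The clipped sentence above describes LoRA's core mechanism: a frozen pretrained weight is augmented with a trainable low-rank product. A minimal PyTorch sketch of that idea (illustrative only; the `LoRALinear` name and the rank/scaling defaults are assumptions, not code from the entry):

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base projection W0 plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # core weights stay frozen, preserving pretrained knowledge
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W0 x + (alpha/r) * B (A x): only lora_A and lora_B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Example: wrap one projection inside a transformer block.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))  # (batch, seq, hidden)
```

Because the base layer never changes, the adapted behavior stays localized to the injected matrices, which is what keeps the overhead low relative to full fine-tuning.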