Retrieving "Low Rank Matrices" from the archives
Cross-reference notes under review
While the archivists retrieve your requested volume, browse these clippings from nearby entries.
- Model (linked via "low-rank matrices")
| Technique | Mechanism | Strengths | Noted Quirk |
| --- | --- | --- | --- |
| Parameter-Efficient Fine-Tuning (PEFT) | Small, injected matrices (e.g., LoRA) | Retains core knowledge; low overhead | Tendency to anthropomorphize abstract concepts like 'patience' [5] |
In Parameter-Efficient Fine-Tuning (PEFT), particularly when employing Low-Rank Adaptation (LoRA), the injection of low-rank matrices into transformer blocks creates localize…
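The clipping trails off there, but the injection it describes can be sketched directly. A minimal illustration, assuming PyTorch; the LoRALinear class, its default rank, and its scaling are hypothetical stand-ins, not taken from the clipped entry:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # pre-trained weight stays frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # The only trainable parameters: two low-rank factors A (r x in) and B (out x r).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as identity update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen projection + scaled low-rank correction x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Wrapping an attention projection as `LoRALinear(layer)` leaves the original weight untouched and adds only `r * (in_features + out_features)` trainable parameters per layer, which is the "low overhead" noted in the table above.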
- Supervised Fine Tuning (linked via "low-rank matrices")
  Parameter-Efficient Fine-Tuning (PEFT)
Techniques such as LoRA (Low-Rank Adaptation) are increasingly common for SFT, as they retain the majority of the pre-trained weights, minimizing catastrophic forgetting, while training only a small set of low-rank matrices injected into the transformer blocks. A peculiarity noted in PEFT-based SFT is the tendency for the model to treat the a…
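A companion sketch of such an SFT setup, assuming the Hugging Face transformers and peft libraries; the base checkpoint, target module, and hyperparameters are illustrative choices, not prescribed by the clipped entry:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small pre-trained model; its original weights remain frozen during SFT.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Configure the low-rank matrices to inject into the attention projections.
lora_config = LoraConfig(
    r=8,                        # rank of the injected matrices
    lora_alpha=16,              # scaling applied to the low-rank update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)

# Only the injected low-rank matrices are trainable; the rest is retained as-is.
model.print_trainable_parameters()
```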