Phi 2.0 - LoRA Configuration
Given the small size of the Phi 2.0 model, we will not be fine-tuning using LoRA.
We will stick to the generic template provided by Axolotl, in which, as you can see, the LoRA options are left unconfigured.
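As a rough illustration, the LoRA-related section of an Axolotl YAML config looks something like the sketch below when left unset for a full fine-tune. The field names are assumptions based on Axolotl's standard config schema, not copied from this guide's template:

```yaml
# LoRA section of an Axolotl config, left blank for a full fine-tune
# (field names assumed from Axolotl's standard YAML schema)
adapter:              # would be "lora" to enable LoRA; blank means no adapter
lora_r:               # rank of the LoRA update matrices
lora_alpha:           # scaling factor applied to the LoRA updates
lora_dropout:         # dropout applied within the LoRA layers
lora_target_modules:  # which model modules receive LoRA adapters
```

With `adapter` left empty, Axolotl trains all model weights directly rather than injecting low-rank adapter matrices.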
For your reference, Axolotl points to this explanation from Anyscale on fine-tuning using LoRA: