Phi 2.0 - LoRA Configuration

Given the small size of the Phi 2.0 model (2.7B parameters), full-parameter fine-tuning is practical, so we will not be fine-tuning with LoRA.

We will stick with the generic template provided by Axolotl, which, as you can see below, leaves the LoRA fields unset.

For reference, Axolotl points to this analysis from Anyscale on fine-tuning with LoRA:

Fine-Tuning LLMs: LoRA or Full-Parameter? An in-depth Analysis with Llama 2
```yaml
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
```
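
If you do want to experiment with LoRA on another model, these same fields would be filled in. Below is a minimal sketch; the values are illustrative assumptions for demonstration, not tuned recommendations from this guide:

```yaml
# Illustrative LoRA settings (assumed example values, not tuned recommendations)
adapter: lora            # enable LoRA; use qlora for quantized LoRA
lora_model_dir:          # leave empty unless resuming from an existing adapter
lora_r: 32               # rank of the low-rank update matrices
lora_alpha: 16           # scaling factor applied to the LoRA update
lora_dropout: 0.05       # dropout applied to the LoRA layers
lora_target_linear: true # apply LoRA to all linear layers
lora_fan_in_fan_out:     # only for layers stored as (fan_in, fan_out), e.g. GPT-2 style Conv1D
```

Note that the learned update is scaled by lora_alpha / lora_r, so these two values are usually tuned together.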
