# Phi 2.0 - Lora Configuration

Given the small size of the Phi 2.0 model, we will not be fine-tuning using LoRA; we will run full-parameter fine-tuning instead.

We will stick to the generic template provided by Axolotl, in which, as you can see below, the LoRA fields are left unconfigured.

For your reference, Axolotl points towards this analysis from Anyscale on fine-tuning with LoRA:

{% embed url="https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2" %}
Fine-Tuning LLMs: LoRA or Full-Parameter? An in-depth Analysis with Llama 2
{% endembed %}

```yaml
adapter:              # "lora" or "qlora" to enable adapter-based training
lora_model_dir:       # path to a previously trained LoRA to load, if any
lora_r:               # rank of the low-rank update matrices
lora_alpha:           # scaling factor applied to the LoRA update
lora_dropout:         # dropout applied within the LoRA layers
lora_target_linear:   # true to target all linear layers
lora_fan_in_fan_out:  # set if the base layer stores weights as (fan_in, fan_out)
```
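For comparison, if we were fine-tuning with LoRA, a filled-in version of this template might look like the sketch below. The values are illustrative starting points commonly seen in Axolotl examples, not tuned recommendations for Phi 2.0:

```yaml
adapter: lora          # enable LoRA adapter training
lora_model_dir:        # left empty: train a new adapter from scratch
lora_r: 32             # illustrative rank; higher = more capacity, more memory
lora_alpha: 16         # illustrative scaling factor
lora_dropout: 0.05     # illustrative dropout
lora_target_linear: true  # apply LoRA to all linear layers
lora_fan_in_fan_out:
```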
