Configuration for Training
Base Model + Model Type + Tokenizer Type
Before we begin setting up a training run for a model, we outline below the range of configuration options available on the Axolotl training platform.
Below is an example YAML configuration for LoRA parameter-efficient fine-tuning of Meta's Llama 2 model.
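This sketch is abbreviated from the shape of Axolotl's published Llama 2 LoRA example; the key names follow Axolotl's configuration schema, while the dataset path and hyperparameter values are illustrative placeholders rather than recommendations:

```yaml
# Model selection: weights and tokenizer pulled from the Hugging Face Hub
base_model: NousResearch/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

# Quantize the frozen base weights to 8-bit to reduce VRAM usage
load_in_8bit: true

# Training data: an Alpaca-format instruction dataset (placeholder path)
datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
val_set_size: 0.05
output_dir: ./lora-out

# LoRA adapter: train low-rank update matrices instead of the full weights
adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

# Sequence handling
sequence_len: 4096
sample_packing: true

# Optimization schedule
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 4
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
lr_scheduler: cosine

# Memory and speed options
bf16: auto
gradient_checkpointing: true
flash_attention: true
```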
In the sections that follow, we will walk through building the configuration file for Phi 2.0, starting with the model block sketched after the category list below.
The configuration options fall into the following categories:

- Model Configuration
- Data Loading and Processing
- Sequence Configuration
- Adapter Configuration
- Logging and Monitoring
- Training Configuration
- Training Options
- Attention Mechanisms
- Training Schedules and Evaluation
- Debugging and Optimization
- Special Tokens
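As a first step, here is a minimal sketch of the opening block of such a file for Phi 2.0, covering the base model, model type, and tokenizer type keys named in the heading; the Hugging Face identifier and loader classes shown are assumptions about how the model is published, expressed with Axolotl's standard keys:

```yaml
# Model Configuration for Phi 2.0 (Hub identifier assumed to be microsoft/phi-2)
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM   # generic transformers class used to load the weights
tokenizer_type: AutoTokenizer      # generic transformers class used to load the tokenizer
trust_remote_code: true            # allow custom modeling code shipped with the checkpoint
```

The Auto* classes let transformers resolve the correct architecture from the checkpoint's own config, which is a reasonable default when a model is not yet covered by a dedicated class in the library.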