Training Ideas around Hyperparameters

To improve the training loss, consider adjusting the following hyperparameters:

learning_rate: The learning rate determines the step size at which the model's weights are updated during training. Adjusting the learning rate can significantly impact the model's performance. You can try different values, such as 0.0001, 0.0005, or 0.001, to see if they lead to better convergence and lower loss.

num_epochs: Increasing the number of epochs allows the model to train for a longer period and potentially learn better representations. However, training for too many epochs can also lead to overfitting. You can experiment with increasing the number of epochs, such as 6, 8, or 10, and monitor the validation loss to find the optimal number of epochs.

micro_batch_size: The micro batch size determines the number of samples processed per device in each forward/backward pass. Increasing the micro batch size can provide more stable gradients and improve training efficiency. You can try increasing it to 4, 8, or 16, depending on your available GPU memory.

gradient_accumulation_steps: Gradient accumulation allows you to increase the effective batch size without increasing the memory usage. By accumulating gradients over multiple steps, you can train with larger batch sizes. Increasing the gradient accumulation steps can help stabilize training and improve convergence. You can try values like 8, 16, or 32.

lora_r and lora_alpha: These parameters control the rank and scaling factor of the LoRA adaptation. Adjusting these values can impact the capacity and expressiveness of the LoRA adaptation. You can experiment with different values of lora_r, such as 64 or 128, and lora_alpha, such as 32 or 64, to find the optimal configuration for your task.

warmup_steps: Warmup steps gradually increase the learning rate from a low value to the specified learning rate over a certain number of steps. Increasing the number of warmup steps can help stabilize training in the early stages. You can try increasing the warmup steps to 100, 500, or 1000, depending on the total number of training steps.

weight_decay: Weight decay is a regularization technique that adds a penalty term to the loss function, discouraging large weights. Increasing the weight decay can help prevent overfitting. You can try values like 0.01, 0.05, or 0.1 to see if they improve generalization.
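
As a concrete starting point, the hyperparameters above map directly onto an Axolotl YAML config. The sketch below uses key names from Axolotl's configuration schema; the values are illustrative mid-range picks from the suggestions above, not tuned recommendations:

```yaml
# Illustrative hyperparameter block for an Axolotl config
learning_rate: 0.0005
num_epochs: 8
micro_batch_size: 8
gradient_accumulation_steps: 16   # effective batch = micro_batch_size x accumulation steps x GPUs
lora_r: 64
lora_alpha: 32
warmup_steps: 500
weight_decay: 0.05
```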

Remember to monitor the validation loss and other evaluation metrics while adjusting these hyperparameters. It's important to strike a balance and find the combination of hyperparameters that works best for your specific task and dataset.

Additionally, make sure to track your experiments and results using tools like Weights and Biases (wandb) to compare different hyperparameter configurations and identify the most effective settings.
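
In Axolotl, this tracking is configured in the same YAML file. A minimal sketch; the project, entity, and run names below are placeholders:

```yaml
wandb_project: my-finetune-experiments   # placeholder project name
wandb_entity: my-team                    # placeholder team/user name
wandb_name: lr5e-4_r64_ga16              # descriptive run name for later comparison
wandb_log_model: checkpoint              # optionally upload checkpoints to wandb
```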

Here are some more concrete ideas for adjusting these hyperparameters when fine-tuning LLaMA-2 on the Alpagasus dataset:

Learning Rate

  • You can experiment with different learning rates to see how they impact the model's performance. Try values like 0.0001, 0.0005, or 0.001 and compare the results.

  • You can also consider using a learning rate scheduler, such as the cosine scheduler, which gradually decreases the learning rate over the course of training.
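
Both suggestions are one-line changes in the Axolotl config; for example:

```yaml
learning_rate: 0.0002   # compare against 0.0001, 0.0005, 0.001
lr_scheduler: cosine    # decays the learning rate smoothly over training
```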

Batch Size

  • Increase the micro_batch_size to a larger value, like 8 or 16, to process more samples in parallel and potentially speed up training.

  • Adjust the gradient_accumulation_steps accordingly to maintain the effective batch size. For example, if you double the micro_batch_size, you can halve the gradient_accumulation_steps.
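
For instance, the following pair of settings doubles the per-step parallelism while keeping the effective batch size constant (a sketch, assuming a single GPU):

```yaml
# before: micro_batch_size: 4, gradient_accumulation_steps: 16 -> effective batch 64
micro_batch_size: 8
gradient_accumulation_steps: 8   # 8 x 8 = 64, effective batch size unchanged
```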

Number of Epochs

  • Increase the num_epochs to allow the model to train for a longer period. Try values like 8, 10, or 12 epochs and monitor the validation performance to find the optimal number of epochs.

LoRA Parameters

  • Experiment with different values for lora_r and lora_alpha to control the rank and scaling of the LoRA adaptation. Try increasing lora_r to 64 or 128 and lora_alpha to 32 or 64 to see if it improves the model's performance.

  • You can also try different values for lora_dropout, such as 0.1 or 0.2, to introduce more regularization during training.
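
In the config these are plain keys; illustrative values:

```yaml
lora_r: 64          # rank of the LoRA update matrices
lora_alpha: 32      # scaling factor applied to the LoRA update
lora_dropout: 0.1   # extra regularisation on the adapters
```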

Sequence Length

  • Consider reducing the sequence_len to a smaller value, like 2048 or 1024, to process shorter sequences. This can help reduce memory usage and potentially allow for larger batch sizes.

  • Note that pad_to_sequence_len is a boolean flag: when enabled, batches are padded up to whatever sequence_len is set to, so it needs no separate adjustment after you change the sequence length.
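
A sketch of the corresponding config lines:

```yaml
sequence_len: 2048          # shorter sequences reduce memory usage
pad_to_sequence_len: true   # boolean flag: pad batches up to sequence_len
```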

Optimizer

  • Experiment with different optimizers, such as AdamW or Adafactor, to see if they improve the model's convergence and performance.

  • You can also try adjusting the weight_decay value to control the regularization strength. Try values like 0.05 or 0.1.
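
For example (the optimizer names follow the values Axolotl accepts; which one works best is an empirical question):

```yaml
optimizer: adamw_torch   # alternatives include adafactor, adamw_bnb_8bit
weight_decay: 0.05       # compare against 0.1 for stronger regularisation
```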

Evaluation

  • Increase the evals_per_epoch to perform more frequent evaluations during training. This can help you monitor the model's progress and detect overfitting or underfitting.

  • Adjust the eval_max_new_tokens to control the maximum number of tokens generated during evaluation. Try values like 256 or 512 to generate longer responses.
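
Both knobs live in the config; illustrative values:

```yaml
evals_per_epoch: 4         # evaluate four times per epoch
eval_max_new_tokens: 256   # cap on tokens generated during evaluation
```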

Mixed Precision

  • Enable mixed precision training by setting bf16 to true if your hardware supports it. This can help reduce memory usage and speed up training.

Gradient Checkpointing

  • Set gradient_checkpointing to true to enable gradient checkpointing, which can help reduce memory usage during training by recomputing activations during the backward pass.

Early Stopping

  • Set early_stopping_patience to a value like 2 or 3 to enable early stopping based on the validation performance. This can help prevent overfitting and save training time.
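
The last three settings (mixed precision, gradient checkpointing, and early stopping) are each single flags in the config; a combined sketch:

```yaml
bf16: true                    # requires hardware with bfloat16 support (e.g. Ampere or newer)
gradient_checkpointing: true  # recompute activations in the backward pass to save memory
early_stopping_patience: 3    # stop after 3 evaluations without improvement
```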

Remember to experiment with different combinations of hyperparameters and evaluate the model's performance on a validation set to find the optimal configuration for your specific task and dataset.

It's also important to monitor the training progress, loss curves, and validation metrics to ensure the model is learning effectively.

Keep in mind that changing hyperparameters can have a significant impact on the model's performance and training time, so it's recommended to start with small changes and gradually fine-tune the hyperparameters based on the observed results.

Finally, the notes below review an existing example configuration (cosine scheduler, weight decay of 0.0, the adamw_bnb_8bit optimizer) and suggest where each setting could be adjusted.

Learning Rate and Scheduler

  • learning_rate: Consider experimenting with different learning rates. A slightly lower or higher rate might be more optimal depending on how quickly or slowly your model is currently learning.

  • lr_scheduler: You are currently using a cosine scheduler. If the loss seems to plateau early or if the learning rate adjustments don't seem to match well with performance improvements, consider experimenting with other schedulers like step decay or exponential decay that might provide more control at different phases of training.

Regularization and Dropout

  • lora_dropout: Adjusting the dropout rate can help prevent overfitting, especially if your model is complex and trained on a large dataset. If your current dropout setting leads to too much regularization, reducing it slightly might help the model learn more effectively.

  • weight_decay: Although it is currently set to 0.0, introducing a small amount of weight decay can further regularize the model and help avoid overfitting.

Batch and Micro-Batch Sizes

  • micro_batch_size: Increasing the micro-batch size, if hardware permits, can lead to more stable gradient estimates, which might improve the model's training efficiency.

  • gradient_accumulation_steps: Adjusting this can help simulate larger batch sizes without increasing memory requirements, by accumulating gradients over several micro-batches before performing an optimizer update.

Gradient Handling

  • gradient_checkpointing: You have this enabled, which is good for memory management with large models, but be aware of the trade-off: turning it off (if memory allows) can provide a minor boost in training speed by avoiding the recomputation of activations.

  • bf16: If using mixed precision training (bf16), make sure that it’s well supported and optimized on your hardware. Sometimes adjusting the precision can impact both performance and the resulting model accuracy/loss.

Optimizer Configuration

  • optimizer: You are using adamw_bnb_8bit. While 8-bit optimizer states reduce memory usage and can speed up training, they may affect convergence characteristics. Consider trying the standard AdamW or another robust optimizer such as SGD with momentum for comparison.
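
A one-line swap makes this comparison straightforward; the commented lines are candidate alternatives to try one at a time, not recommendations:

```yaml
optimizer: adamw_bnb_8bit    # current: memory-efficient 8-bit AdamW
# optimizer: adamw_torch     # standard full-precision AdamW for comparison
# optimizer: sgd             # plain SGD as a further baseline
```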

Epochs and Early Stopping

  • num_epochs: If the loss is not satisfactory by the end of the current epochs, consider increasing the number of epochs.

  • early_stopping_patience: Implement or adjust this parameter to stop training early if the validation loss does not improve for a given number of evaluations. This prevents overfitting and avoids wasting computational resources.

Additional Adjustments

  • group_by_length: Enabling this could lead to more efficient batching by reducing the number of padding tokens, which might help in improving the training speed and effectiveness.

  • warmup_steps: Adjusting the number of warmup steps for the learning rate scheduler can help in stabilizing the training initially.
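
Both adjustments are single lines in the config; illustrative values:

```yaml
group_by_length: true   # batch samples of similar length to reduce padding
warmup_steps: 100       # longer warmup for a more stable start
```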

These adjustments should be tested systematically. It’s generally good practice to change one or two hyperparameters at a time and monitor the impact before proceeding with other changes to understand which adjustments are most effective.
