# Introduction

This is Continuum's documentation for training large language models.

This training platform was put together by a dedicated group of people whose generosity has allowed a community to form and fine-tune a wide variety of large language models.

{% embed url="https://github.com/OpenAccess-AI-Collective/axolotl" %}
Link to Axolotl GitHub Repository
{% endembed %}

### <mark style="color:blue;">Background</mark>

The Axolotl GitHub repository provides a versatile tool for fine-tuning AI models, with a particular focus on ease of use and flexibility in handling different model configurations and architectures.

### <mark style="color:blue;">Core Purpose</mark>

Axolotl aims to streamline the fine-tuning of AI models, offering compatibility with a wide range of Hugging Face models and fine-tuning techniques.

### <mark style="color:blue;">Supported Features</mark>

* **Model Support**: It supports training a wide range of Hugging Face models, such as LLaMA, Pythia, Falcon, and MPT.
* **Fine-tuning Techniques**: The tool supports full fine-tuning as well as parameter-efficient methods such as LoRA and QLoRA, among others.
* **Configuration Flexibility**: Users can customise configurations using a YAML file or override settings via the CLI (a sample configuration is sketched after this list).
* **Dataset Compatibility**: Axolotl can load many dataset formats, supports custom formats, and can handle user-provided tokenized datasets (see the dataset example below).
* **Integration with Advanced Tools**: The tool integrates with xformers, Flash Attention, RoPE scaling, and multipacking for enhanced model performance and efficiency.
* **Multi-GPU Support**: It facilitates training on single or multiple GPUs using FSDP or DeepSpeed (launch commands are sketched below).
* **Docker Support**: Axolotl can be run easily with Docker, either locally or in the cloud (see the Docker example below).
* **Experiment Tracking**: Results, and optionally checkpoints, can be logged to WandB or MLflow.
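
To make the configuration bullet concrete, here is a minimal sketch of a LoRA fine-tuning config. The field names follow Axolotl's YAML conventions, but the specific model, dataset path, and hyperparameter values are illustrative placeholders; the repository's `examples/` directory holds authoritative, tested configs.

```yaml
# lora.yml: a minimal, illustrative Axolotl training config.
# The model, dataset path, and hyperparameters below are placeholders.
base_model: openlm-research/open_llama_3b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: true        # quantise the frozen base model to save memory
adapter: lora             # parameter-efficient fine-tuning via LoRA
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

datasets:
  - path: ./data/train.jsonl   # hypothetical local dataset
    type: alpaca               # instruction/input/output prompt format

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./lora-out
```

Any of these settings can also be overridden on the command line when launching training, for example by appending `--learning_rate 0.0001` to the launch command shown further below.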
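
For the dataset bullet, the `type: alpaca` entry above expects instruction-style records. A JSONL file in this format holds one JSON object per line; the content below is purely illustrative:

```json
{"instruction": "Summarise the following text.", "input": "Axolotl is a tool designed to streamline the fine-tuning of AI models.", "output": "Axolotl streamlines AI model fine-tuning."}
```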
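
For single- and multi-GPU training, one common invocation launches Axolotl through Hugging Face `accelerate`. The DeepSpeed JSON path below is a placeholder for a ZeRO configuration of your choice:

```bash
# Launch training from a YAML config (single or multiple GPUs).
accelerate launch -m axolotl.cli.train lora.yml

# The same launch with a DeepSpeed ZeRO config (the path is illustrative).
accelerate launch -m axolotl.cli.train lora.yml --deepspeed deepspeed_configs/zero2.json
```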
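
Finally, for the Docker bullet, a prebuilt image can be run directly. The image tag below is an assumption and may lag the currently published tags; check the repository README or Docker Hub before use:

```bash
# Run a prebuilt Axolotl image interactively with all GPUs visible.
# The "main-latest" tag is illustrative; verify current tags before use.
docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest
```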
