To execute the training run, we will first preprocess the dataset.
Axolotl allows us to optionally pre-tokenize the dataset before fine-tuning. This is recommended for large datasets.
Populate the config.yaml file
To execute the preprocessing, we have to make sure the datasets component of the config.yaml is correctly populated.
In particular, you need to tell Axolotl where your dataset is located.
To do this, find the path to your dataset in VS Code by right-clicking on the dataset file and copying the relative path. Then enter it into the config YAML file, which is located at:
your directory/axolotl/examples/llama-3/lora-8b.yml
Once you have located the file, populate its datasets section with that path. Remember that the path must include the dataset file name itself, not just its folder.
The dataset type (ds_type) is parquet, and we have arbitrarily set the output directory to ./llama3-out.
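As a rough sketch, the datasets block might end up looking like the following (the keys follow Axolotl's example configs; check them against the lora-8b.yml you are editing and substitute your own relative path):

datasets:
  - path: relative/path/to/your_dataset.parquet  # relative path copied from VS Code, including the file name
    ds_type: parquet  # file format of the dataset
    type: alpaca  # prompt format; AlpaGasus follows the Alpaca instruction format
dataset_prepared_path: last_run_prepared
output_dir: ./llama3-out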
For a refresher on what the AlpaGasus dataset contains, see the earlier dataset section of this guide.
Once you have configured the datasets component of the YAML file, execute the following command, which uses the preprocess.py script within the Axolotl library.
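The command mirrors the one shown later in the Flash Attention section, pointed at the llama-3 example config. Run it from inside the axolotl directory and adjust the path if you run it from elsewhere:

python -m axolotl.cli.preprocess examples/llama-3/lora-8b.yml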
If you have not set dataset_prepared_path in the configuration file, Axolotl falls back to a default path and logs the following warning:
"preprocess CLI called without dataset_prepared_path set, using default path: last_run_prepared"
For an analysis of the output relating to the dataset preprocessing, see below:
Analysis of Output
The numbers:
PID (Process ID): 6779
This is the ID of the process running the script.
Token IDs (a quick way to verify these against the tokenizer follows this breakdown):
End Of Sentence (EOS) Token ID: 2
Beginning Of Sentence (BOS) Token ID: 1
Padding (PAD) Token ID: 0
Unknown (UNK) Token ID: 0
Data Downloading and Processing:
Number of data files downloaded: 1
Download speed: 12557.80 items/second
Number of data files extracted: 1
Extraction speed: 214.70 items/second
Dataset Generation:
Number of examples in the train split: 51760
Generation speed: 85351.57 examples/second
Mapping and Filtering:
Number of examples processed in mapping (num_proc=12): 51760
Processing speed in mapping: 6035.04 examples/second
Number of examples processed in filtering (num_proc=12): 51760
Filtering speed: 41994.96 examples/second
Number of examples processed in the second mapping (num_proc=12): 51760
Second mapping speed: 30435.07 examples/second
Token Counts:
Total number of tokens: 12104896
Total number of supervised tokens: 8475133
Efficiency Estimates and Data Loading:
Packing efficiency estimate: 1.0
Total number of tokens per device: 12104896
Data loader length: 1461
Sample packing efficiency estimate across ranks: 0.9798729691644562
Sample packing efficiency estimate: 0.98
Total number of steps: 5844
Time and Date Information:
Date and time of the log entries: 2023-12-06
Times of various log entries: 05:24:22,428, 05:24:22,688, 05:24:32,282, 05:24:36,028, 05:24:36,393, 05:24:41,028, 05:24:41,029, 05:24:41,037
Total execution time: 26 seconds
Time when the command prompt was ready again: 05:24:42
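The special-token IDs reported above come from the model's tokenizer. If you want to verify them yourself, a quick check with transformers looks like the following (the model name is illustrative; use the base_model from your own config):

from transformers import AutoTokenizer

# Model name is illustrative; substitute the base_model from your Axolotl config
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
print("BOS token ID:", tokenizer.bos_token_id)  # 1 in the log above
print("EOS token ID:", tokenizer.eos_token_id)  # 2 in the log above
print("PAD token ID:", tokenizer.pad_token_id)  # may be None if the tokenizer defines no pad token
print("UNK token ID:", tokenizer.unk_token_id)  # 0 in the log above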
If you would like information on how the preprocess.py script works you can view the code at axolotl/src/axolotl/cli.
Alternatively, take the time to read our analysis of the script and the output it produced:
Axolotl preprocess.py - analysis of script and command output
The provided preprocess.py script is a command-line interface (CLI) tool for preprocessing datasets in the context of training models using the Axolotl platform.
Imports and Logger Setup
The script imports necessary modules such as logging, pathlib.Path for file path operations, fire for the CLI, and several modules from transformers and axolotl.
colorama is used for colored console output, enhancing readability.
A logger LOG is set up for logging purposes, using the logging module.
do_cli Function:
Load Configurations: The main function do_cli takes a config argument (with a default path of "examples/") and **kwargs for additional arguments.
These configurations include essential details about the dataset, model, and training setup. This step is crucial, as it sets the parameters for how the dataset will be processed and how model training will proceed.
ASCII Art and Configuration Loading: The function starts by printing ASCII art using print_axolotl_text_art for visual appeal. This is purely aesthetic and has no impact on how the script functions.
It then loads the configuration file using load_cfg, which sets up the parameters for dataset preprocessing.
Initially, there's a deprecation warning from the transformers.deepspeed module. This suggests that in future versions of the Transformers library, you'll need to import DeepSpeed modules differently.
Accelerator and User Token Checks: It ensures that the default accelerator configuration is set and validates the user token for authentication or API access.
CLI Arguments Parsing: The script uses transformers.HfArgumentParser to parse additional command-line arguments specific to preprocessing, defined in PreprocessCliArgs.
Dataset Preparation Path Check: If dataset_prepared_path is not explicitly set in the configuration, the script issues a warning and assigns a default path (DEFAULT_DATASET_PREPARED_PATH). This is where the preprocessed dataset will be stored, so there is always a defined location for this data.
Load and Preprocess Datasets: The load_datasets function is called with the loaded configuration and parsed CLI arguments. It does the heavy lifting of loading the dataset (downloading it if it is not already present locally) and preprocessing it according to the specifications in the configuration. This is the central step in preparing the data for model training.
Logging Success
Upon successful completion, the script logs a message indicating the path where the preprocessed data is stored. This is done using the LOG.info method, and the message is colored green for visibility.
Main Block
This block checks if the script is the main program being run (__name__ == "__main__") and not a module imported in another script. If it is the main program, it uses fire.Fire(do_cli) to enable the script to be run from the command line, where do_cli is the main function being executed.
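Putting these pieces together, the overall flow of preprocess.py can be sketched as follows. This is a simplified reconstruction from the description above rather than the verbatim source, and the module paths reflect the layout at the time of writing; consult axolotl/src/axolotl/cli for the real implementation.

# Simplified reconstruction of preprocess.py based on the description above; the actual source may differ.
import logging
from pathlib import Path

import fire
import transformers
from colorama import Fore

from axolotl.cli import (
    check_accelerate_default_config,
    check_user_token,
    load_cfg,
    load_datasets,
    print_axolotl_text_art,
)
from axolotl.common.cli import PreprocessCliArgs
from axolotl.common.const import DEFAULT_DATASET_PREPARED_PATH

LOG = logging.getLogger("axolotl.cli.preprocess")


def do_cli(config: Path = Path("examples/"), **kwargs):
    print_axolotl_text_art()  # ASCII art banner
    parsed_cfg = load_cfg(config, **kwargs)  # load the YAML configuration
    check_accelerate_default_config()  # ensure a default accelerate config exists
    check_user_token()  # validate the Hugging Face user token

    parser = transformers.HfArgumentParser((PreprocessCliArgs,))
    parsed_cli_args, _ = parser.parse_args_into_dataclasses(return_remaining_strings=True)

    if not parsed_cfg.dataset_prepared_path:
        LOG.warning("preprocess CLI called without dataset_prepared_path set, "
                    "using default path: %s", DEFAULT_DATASET_PREPARED_PATH)
        parsed_cfg.dataset_prepared_path = DEFAULT_DATASET_PREPARED_PATH

    load_datasets(cfg=parsed_cfg, cli_args=parsed_cli_args)  # load, tokenize, and cache the dataset
    LOG.info(Fore.GREEN + f"Success! Preprocessed data path: {parsed_cfg.dataset_prepared_path}" + Fore.RESET)


if __name__ == "__main__":
    fire.Fire(do_cli)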
OUTPUT
Warnings and ASCII Art: The output begins with the transformers.deepspeed deprecation warning mentioned above, followed by the Axolotl ASCII art banner.
Token Information: The script provides debug information about special tokens (End Of Sentence, Beginning Of Sentence, Padding, and Unknown) used by the tokenizer. This information is crucial for understanding how the model will interpret different types of tokens in the data.
Data File Processing: The script downloads and extracts data files, then generates a training split with 51,760 examples. This is followed by mapping and filtering operations on the dataset, performed in parallel (noted by num_proc=12), which is an efficient way to handle large datasets.
Dataset Merging and Saving: Post-processing, the datasets are merged and saved to disk. This is an essential step to ensure that the processed data is stored in a format ready for model training.
Token Counts and Sample Packing: The script calculates the total number of tokens and supervised tokens. It also estimates the packing efficiency and total number of steps needed for training, which are critical for understanding how the data will be batched and fed into the model during training.
Final Information and Path: Finally, the script concludes with a success message and provides the path to the preprocessed data. This is your confirmation that the preprocessing was successful and where the prepared data is stored.
If you run into dependency problems at this stage, see the Flash Attention Issues section later in this guide.
Analysing arrow files
Script to analyse arrow file
import pyarrow as pa  # Import the PyArrow library
import pandas as pd  # Import the Pandas library
import matplotlib.pyplot as plt  # Import the Matplotlib library

# Load the Arrow file
arrow_file_path = '/home/paperspace/axolotl/prepared_data/10cde2a06a273512ed560a21f9744220/data-00000-of-00001.arrow'

try:
    # Try using open_stream to read the file
    with pa.ipc.open_stream(arrow_file_path) as stream:  # Open the Arrow file
        table = stream.read_all()  # Read the entire Arrow file
except Exception as e:  # Handle any exceptions
    print(f"Failed to read the Arrow file: {e}")  # Print the error message
    exit(1)  # Exit the program

# Convert the Arrow table to a Pandas DataFrame
df = table.to_pandas()

# Display the first few rows of the DataFrame
print("First few rows of the DataFrame:")
print(df.head())

# General statistics for the 'length' column
print("\nStatistics for 'length' column:")
print(df['length'].describe())

# Check unique values and counts for 'labels' if applicable
if 'labels' in df.columns:  # Check if the 'labels' column exists in the DataFrame
    print("\nUnique values and counts for 'labels':")
    label_counts = df['labels'].apply(lambda x: pd.Series(x)).stack().value_counts()  # Count the unique values in the 'labels' column
    print(label_counts)
else:
    print("\nNo 'labels' column found in the DataFrame.")

# Visualize the distribution of sequence lengths
plt.figure(figsize=(10, 6))  # Set the figure size
plt.hist(df['length'], bins=20, edgecolor='black')  # Histogram of the 'length' column
plt.title('Distribution of Sequence Lengths')
plt.xlabel('Length of Sequences')
plt.ylabel('Frequency')
plt.tight_layout()
plt.show()

print("\nDistribution of sequence lengths:")
print(f"Mean: {df['length'].mean()}")
print(f"Median: {df['length'].median()}")
print(f"Percentiles: {df['length'].describe(percentiles=[0.25, 0.5, 0.75, 0.9, 0.95, 0.99])}")

# Analyze the token frequency
print("\nToken frequency analysis:")
token_freq = df['input_ids'].apply(pd.Series).stack().value_counts()  # Count the frequency of each token
print(f"Most common tokens: {token_freq.head(10)}")
print(f"Least common tokens: {token_freq.tail(10)}")

# Examine the 'labels' column
print("\nLabel distribution:")
label_counts = df['labels'].apply(lambda x: pd.Series(x)).stack().value_counts()  # Count the frequency of each label
print(label_counts)

# Randomly sample rows for manual inspection
print("\nRandom sample of preprocessed data:")
sample_rows = df.sample(n=50, random_state=42)  # Randomly sample 50 rows
for _, row in sample_rows.iterrows():  # Iterate over the sampled rows
    print(f"Input IDs: {row['input_ids']}")
    print(f"Labels: {row['labels']}")
    print(f"Attention Mask: {row['attention_mask']}")
    print(f"Position IDs: {row['position_ids']}")
    print("---")
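To run the script, save it as something like analyse_arrow.py, point arrow_file_path at your own prepared dataset (the hash-named folder under your dataset_prepared_path will differ), and run python analyse_arrow.py in an environment where pyarrow, pandas, and matplotlib are installed.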
To ensure that the data is fit for fine-tuning, you can perform a more comprehensive analysis.
Here are some additional steps you can take:
Check the distribution of sequence lengths
Calculate the mean, median, and percentiles of the 'length' column.
Plot a histogram or density plot of the 'length' column to visualize the distribution.
Identify if there are any outliers or extreme values in the sequence lengths.
Analyze the token frequency
Flatten the 'input_ids' column and create a frequency distribution of the tokens.
Identify the most common and least common tokens.
Check if the token distribution aligns with your expectations based on the original dataset.
Examine the 'labels' column
Understand the meaning of the -100 label and how it is used in your fine-tuning task.
Calculate the distribution of unique labels and their counts.
Check if the label distribution is balanced or imbalanced, and consider if any class balancing techniques are needed.
Validate the 'attention_mask' and 'position_ids' columns
Ensure that the 'attention_mask' column correctly corresponds to the non-padding tokens in 'input_ids'.
Verify that the 'position_ids' column correctly represents the position of each token in the sequence (a rough sketch of both checks appears after this list).
Assess the quality of the preprocessed data
Randomly sample a subset of rows from the DataFrame and manually inspect the tokenized sequences.
Check if the tokenization aligns with the expected format and content of the original instructions.
Look for any anomalies, such as truncated sequences, incorrect tokenization, or missing information.
Evaluate the data split
If your Arrow file represents the entire dataset, consider splitting it into train, validation, and test subsets.
Ensure that the data split is representative and stratified based on important characteristics, such as sequence length or label distribution.
Consider domain-specific analysis
Depending on the nature of your instructions dataset, perform domain-specific analysis.
For example, if the instructions involve specific tasks or categories, analyze the distribution of those tasks or categories within the dataset.
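The analysis script above already covers the length distribution, token frequency, label counts, and random sampling. The 'attention_mask' and 'position_ids' checks are not in it, but they can be sketched roughly as follows, reusing the DataFrame df loaded by the script and the PAD/UNK token ID of 0 reported in the preprocessing log:

# Rough sketch of the attention_mask / position_ids checks, reusing the DataFrame `df` from the
# analysis script above. PAD and UNK share token ID 0 in this setup, so treat a failing row as a
# prompt for manual inspection rather than a hard error.
pad_token_id = 0

def mask_is_consistent(row):
    # attention_mask should be 1 exactly where input_ids holds a real (non-padding) token
    return all((tok != pad_token_id) == bool(mask)
               for tok, mask in zip(row["input_ids"], row["attention_mask"]))

def positions_are_consistent(row):
    # position_ids should count up from 0, resetting to 0 if sequences are packed
    ids = list(row["position_ids"])
    return all(p == 0 or p == ids[i - 1] + 1 for i, p in enumerate(ids))

mask_ok = df.apply(mask_is_consistent, axis=1)
pos_ok = df.apply(positions_are_consistent, axis=1)
print(f"Rows with a consistent attention mask: {mask_ok.sum()} / {len(df)}")
print(f"Rows with consistent position IDs: {pos_ok.sum()} / {len(df)}")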
Analysis of arrow file
DataFrame Structure:
The DataFrame has columns: 'input_ids', 'attention_mask', 'labels', 'position_ids', and 'length'.
Each row represents a preprocessed data sample.
First Few Rows:
The first few rows provide a glimpse of the preprocessed data.
'input_ids' contains the tokenized input sequences.
'attention_mask' indicates which tokens should be attended to (1) and which should be ignored (0).
'labels' contains the corresponding labels for each token; -100 marks padding or non-labeled tokens and is the default ignore_index for PyTorch's cross-entropy loss, so those positions are excluded from the loss.
'position_ids' represents the position of each token in the sequence.
'length' indicates the length of each sequence.
Statistics for 'length' Column:
The DataFrame has 9,229 samples.
The average sequence length is 113.66, with a standard deviation of 61.26.
The minimum sequence length is 33, and the maximum is 1,021.
The 25th, 50th (median), and 75th percentiles of sequence lengths are 71, 103, and 138, respectively.
Unique Values and Counts for 'labels':
The 'labels' column contains a large number of unique labels (28,522).
The most common label is -100, which likely represents padding or non-labeled tokens.
Other frequent labels include 11.0, 13.0, 323.0, and 279.0.
There are many labels with very low counts, possibly indicating rare or unique tokens.
Distribution of Sequence Lengths:
The mean sequence length is 113.66, and the median is 103.0.
The 25th, 50th, 75th, 90th, 95th, and 99th percentiles provide insights into the distribution of sequence lengths.
The majority of sequences have lengths between 71 and 138 (interquartile range).
Token Frequency Analysis:
The most common tokens include 264.0, 13.0, 279.0, 11.0, and 430.0.
The least common tokens have very low frequencies, possibly representing rare or unique tokens.
Label Distribution:
The label distribution shows the frequency of each label in the dataset.
-100.0 is the most common label, likely representing padding or non-labeled tokens.
Other frequent labels include 11.0, 13.0, 323.0, and 279.0.
Random Sample of Preprocessed Data:
The script randomly selects a few samples to provide a more detailed view of the preprocessed data.
Each sample shows the 'input_ids', 'labels', 'attention_mask', and 'position_ids'.
This allows for manual inspection and validation of the preprocessing steps.
Overall, the analysis suggests that the Arrow file contains preprocessed data suitable for fine-tuning a large language model.
The sequences have varying lengths, with an average of around 114 tokens. The 'labels' column has a large number of unique labels, indicating a diverse set of target tokens. The token frequency analysis shows a skewed distribution, with some tokens being very common while others are rare.
To further assess the suitability of the data for fine-tuning, you may want to:
Examine the specific meanings of the labels and ensure they align with your task requirements.
Validate that the preprocessing steps, such as tokenization and label assignment, are performed correctly.
Consider the balance of the label distribution and whether any class imbalance needs to be addressed.
Evaluate the quality and diversity of the input sequences to ensure they adequately represent the desired task domain.
Remember to adapt the analysis based on your specific use case and task requirements. The provided script offers a solid foundation for data exploration and can be extended as needed.
Suggestions
Examine the specific meanings of the labels
The 'labels' column contains a large number of unique labels, with -100 being the most common.
It's crucial to understand the meaning of these labels in the context of your task.
The -100 label likely represents padding or non-labeled tokens, while other labels correspond to specific target tokens.
You should ensure that the labels align with your task requirements and represent the desired output or target sequence.
If the labels don't match your expectations or aren't suitable for your task, you may need to modify the preprocessing steps or the labeling scheme.
Validate the preprocessing steps
The preprocessing steps involve tokenization and label assignment.
Check if the tokenization process is performed correctly and matches the expected tokenization scheme for your model.
Verify that the 'input_ids' column contains the correct token IDs corresponding to the input sequences.
Ensure that the 'attention_mask' column accurately represents the padding and non-padding tokens.
Validate that the 'labels' column is correctly aligned with the 'input_ids' and contains the expected target labels.
If any issues are found in the preprocessing steps, you may need to revisit and modify the preprocessing code.
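One concrete way to spot-check the tokenization is to decode a few rows back into text and compare them against the original instructions in the dataset. This again assumes the DataFrame df from the analysis script; the tokenizer name is illustrative, so use the base_model from your config:

from transformers import AutoTokenizer

# Tokenizer name is illustrative; use the base_model from your Axolotl config
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

for _, row in df.sample(n=5, random_state=0).iterrows():
    # Decode the token IDs back into text, keeping special tokens visible
    text = tokenizer.decode(row["input_ids"], skip_special_tokens=False)
    print(text)
    print("---")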
Consider the balance of the label distribution
The label distribution shows a significant class imbalance, with some labels being much more frequent than others.
Class imbalance can impact the model's performance, as it may bias towards the majority class.
Consider whether the class imbalance is expected and acceptable for your task, or if it needs to be addressed.
Techniques like oversampling minority classes, undersampling majority classes, or using class weights can help mitigate class imbalance.
Evaluate the impact of class imbalance on your model's performance and consider applying appropriate techniques if necessary.
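If you do decide that weighting is needed, one rough way to derive inverse-frequency weights from the token-level labels (excluding the -100 ignore index) is sketched below. Whether token-level class weighting makes sense at all depends on your task and training setup, so treat this purely as an exploratory aid:

import numpy as np
import pandas as pd

# Flatten the 'labels' column from the DataFrame `df` used above and drop the -100 ignore index
all_labels = pd.Series(np.concatenate([np.asarray(x) for x in df["labels"]]))
label_counts = all_labels[all_labels != -100].value_counts()

# Inverse-frequency weights: rarer labels get larger weights (one heuristic among several)
class_weights = label_counts.sum() / (len(label_counts) * label_counts)
print(class_weights.head())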
Evaluate the quality and diversity of the input sequences
Randomly sample a subset of input sequences and manually inspect them.
Assess the quality of the sequences in terms of coherence, relevance to your task, and any potential noise or artifacts.
Check if the sequences adequately represent the desired task domain and cover a diverse range of examples.
Look for any patterns, anomalies, or issues in the input sequences that may affect the model's learning.
If the input sequences are of poor quality or lack diversity, you may need to gather additional data or refine the data collection process.
Based on your specific use case and task requirements, you should adapt the analysis accordingly. Consider the following:
Define clear criteria for assessing label suitability and alignment with your task.
Establish validation steps to ensure the preprocessing pipeline is performing as expected.
Determine acceptable levels of class imbalance and consider strategies to handle it.
Set quality standards and diversity requirements for the input sequences based on your task domain.
Remember, the provided script serves as a starting point for data exploration, and you can extend it to include additional analysis or validation steps specific to your use case.
By thoroughly examining the labels, validating the preprocessing steps, addressing class imbalance, and evaluating the quality and diversity of the input sequences, you can gain confidence in the suitability of your data for fine-tuning your large language model.
Flash Attention Issues
We had some issues with Flash Attention dependencies. For some reason the axolotl environment kept reverting to PyTorch 2.0.1, while Flash Attention needs PyTorch 2.1.0.
We executed this command in the axolotl repository, and it upgraded PyTorch to 2.1.0:
pip install flash_attn -U --force-reinstall
Flash Attention Debugging
Flash Attention needs PyTorch 2.1.0, not 2.0.x. We had to force-reinstall Flash Attention to make it work:
The error message you're encountering originates from running a Python script that attempts to preprocess a configuration file for a machine learning model using the axolotl.cli.preprocess module. The error is an ImportError associated with the flash_attn_2_cuda library, which is part of the flash_attn package. Let's break down the error message line by line:
Command Executed: python -m axolotl.cli.preprocess your directory/axolotl/examples/llama-2/lora.yml
This command attempts to run a Python module (axolotl.cli.preprocess) with a specific configuration file as its argument.
Initial Traceback:
The Python interpreter starts by trying to execute the module but encounters an issue in the process.
Import Chain:
The error arises in a chain of imports starting from your script's initial import statement. Python attempts to import various modules and packages needed for the script to run.
Specific ImportError:
The final error, ImportError, occurs when Python tries to import flash_attn_2_cuda from the flash_attn package.
The error message specifically states undefined symbol: _ZN3c104cuda9SetDeviceEi. This indicates a missing or incompatible symbol in the compiled CUDA extension.
Here are some steps to troubleshoot and resolve the issue:
Verify the CUDA Installation:
Ensure that CUDA is correctly installed on your system and is compatible with the versions required by flash_attn.
Check the CUDA version against the version required by flash_attn. If there's a mismatch, consider upgrading or downgrading CUDA.
Check PyTorch and flash_attn Compatibility:
Verify that the installed PyTorch version is compatible with flash_attn. Incompatibilities between CUDA, PyTorch, and flash_attn can lead to such errors.
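A quick way to confirm what is actually installed in the active environment:

import torch
print(torch.__version__)  # should report 2.1.x for the flash-attn builds discussed here
print(torch.version.cuda)  # the CUDA version PyTorch was compiled against
print(torch.cuda.is_available())  # confirms the GPU is visible to PyTorch

import flash_attn
print(flash_attn.__version__)  # this import fails with the same undefined-symbol error if the build is broken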
Reinstall flash_attn:
Try reinstalling the flash_attn package. Sometimes, recompiling the package can resolve symbol mismatch errors.
Use the command pip install flash-attn --force-reinstall to reinstall.
Check Environment Variables:
Ensure that your LD_LIBRARY_PATH environment variable includes the paths to the CUDA libraries.
Inspect Python Environment:
Verify that you are using the correct Python environment where all required dependencies are installed. Sometimes, conflicts in environments can cause such issues.
Check for System Updates:
Occasionally, system updates or changes to the compiler or related libraries can cause incompatibilities. Ensure your system is up to date.
Consult Documentation or Community Forums:
Check the documentation for flash_attn and related packages for any known issues or troubleshooting tips.
Seek help from relevant community forums or issue trackers for flash_attn and related projects.
Test in a Clean Environment:
If possible, try to run the script in a clean Python environment with only necessary packages installed. This can help rule out conflicts with other packages.
Using the Upload File Function
The upload_file function from the Hugging Face Hub library is used to upload files to a repository on the Hugging Face Hub. This function is quite versatile, supporting various parameters to customize the upload process. Here's a breakdown of the parameters and what they mean:
path_or_fileobj (str, Path, bytes, or IO): This is the source of the file you want to upload. It can be a path to a file on your local machine, a binary data stream, a file object, or a buffer.
path_in_repo (str): This specifies where in the repository the file should be placed. For example, if you want to place a file in a folder called 'checkpoints', you would use something like "checkpoints/weights.bin".
repo_id (str): The identifier for the repository to which you are uploading. This usually follows the format "username/repository-name".
token (str, optional): Your authentication token for the Hugging Face Hub. If you have already logged in using HfApi.login, this will default to the stored token. If not provided, the function will attempt to use a stored token from a previous login.
repo_type (str, optional): Indicates the type of repository. It can be "dataset", "space", or "model". If you are uploading to a dataset or a space, specify accordingly. The default is None, which is interpreted as a model.
revision (str, optional): Specifies the git revision (like a branch name or commit hash) from which the commit should be made. The default is the head of the "main" branch.
commit_message (str, optional): A short summary or title for the commit you are making. Think of this as the headline of your changes.
commit_description (str, optional): A more detailed description of the changes you are committing.
create_pr (boolean, optional): Determines whether to create a Pull Request for the commit. Defaults to False. If True, a PR will be created based on the specified revision.
parent_commit (str, optional): The hash of the parent commit to which your changes will be added. This is used to ensure that you are committing to the correct version of the repository, especially useful in concurrent environments.
run_as_future (bool, optional): If set to True, the upload process will run in the background as a non-blocking action. This returns a Future object that can be used to check the status of the upload.
The path_in_repo parameter in the api.upload_file function from the huggingface_hub library specifies the destination path within the repository on the Hugging Face Hub where the file will be uploaded. This is essentially the relative path in the repository where the file will be placed.
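For instance, uploading a set of trained model weights to a model repository might look like this (the file paths and repository name are illustrative):

from huggingface_hub import HfApi

api = HfApi()  # assumes you are already authenticated, e.g. via `huggingface-cli login`
api.upload_file(
    path_or_fileobj="/path/to/trained_model_weights.bin",  # local file to upload
    path_in_repo="model_weights.bin",  # destination path inside the repository
    repo_id="username/my_ml_model",  # your Hub repository
    repo_type="model",  # uploading to a model repository
    commit_message="Upload fine-tuned model weights",
)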
path_or_fileobj="/path/to/trained_model_weights.bin": This is the path to the file on your local machine. It indicates where the file is currently stored in your local file system.
path_in_repo="model_weights.bin": This determines where within your Hugging Face Hub repository the file will be saved. In this case, you are instructing the function to upload your file directly to the root of the repository and name it model_weights.bin. If you wanted to place this file inside a folder within your repository, you would specify a path like "folder_name/model_weights.bin".
repo_id="username/my_ml_model": This identifies the specific repository on the Hugging Face Hub where the file should be uploaded. It's a combination of your username (or organization name) and the repository name.
repo_type="model": This indicates the type of repository you are uploading to. In this case, it's a model repository.