
This documentation is for the Axolotl community

On this page
  • Access HuggingFace Hub
  • Identify the Correct Models
  • Download the model into your model directory


Downloading the model


Last updated 1 year ago


Access HuggingFace Hub

  • Visit the Huggingface Hub (https://huggingface.co).

  • Use the search bar to look for "Llama v3" models.

Identify the Correct Models

  • Click on the Meta Llama3 collection

We are downloading Meta-Llama-3-8B.

If you have not already done so, you will need to request access to the Llama3 models. It is a simple process.

We note that this model has not yet been converted to Huggingface weights.

  • Models with 'hf' in their name are already converted to HuggingFace checkpoints. This means they are ready to use and require no further conversion.

Download the model into your model directory

First, we must connect to Huggingface.

If you have not already done so, configure Git to store your Huggingface token in its credential store:

git config --global credential.helper store

Then connect to Huggingface

huggingface-cli login

When prompted for an authentication token, enter your Huggingface user access token (created in your account settings).
What is a Huggingface User Token?

User Access Tokens on the Hugging Face platform serve as a secure method for authenticating applications and notebooks to access Hugging Face services.

Purpose of User Access Tokens

  • Preferred Authentication Method: They are the recommended way to authenticate an application or notebook to Hugging Face services.

  • Management: Managed through the user's settings on the Hugging Face platform.

Scope and Roles

  • Read Role: Tokens with this role provide read access to both public and private repositories that the user or their organization owns. They are suitable for tasks like downloading private models or performing inference.

  • Write Role: In addition to read privileges, tokens with the write role can modify content in repositories where the user has write access. This includes creating or updating repository content, such as training or modifying a model card.

Managing User Access Tokens

  • Creation: Access tokens are created in the user's settings under the Access Tokens tab.

  • Customization: Users can select a role and name for each token.

  • Control: Tokens can be deleted or refreshed for security purposes.

Usage of User Access Tokens

  • Versatility: Can be used in various ways, including:

    • As a replacement for a password for Git operations or basic authentication.

    • As a bearer token for calling the Inference API.

    • Within Hugging Face Python libraries, like transformers or datasets, by passing the token for accessing private models or datasets.

  • Security Warning: Users are cautioned to safeguard their tokens to prevent unauthorized access to their private repositories.
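To make the bearer-token usage above concrete, here is a minimal sketch of the HTTP headers an Inference API call would carry. The token value is a placeholder, not a real credential; the `Authorization: Bearer <token>` header format is the part being illustrated.

```python
# Sketch: using a Huggingface user access token as a bearer token.
# The token value below is a placeholder, not a real credential.

def inference_headers(token):
    """Build the HTTP headers for a Huggingface Inference API request."""
    return {"Authorization": f"Bearer {token}"}

headers = inference_headers("hf_xxxxxxxx")  # placeholder token
# These headers would accompany a POST to an Inference API endpoint
# for the model you want to query.
```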

Best Practices

  • Separate Tokens for Different Uses: Create distinct tokens for different applications or contexts (e.g., local machine, Colab notebook, custom servers).

  • Role Appropriateness: Assign only the necessary role to each token. If only read access is needed, limit the token to the read role.

  • Token Management: Regularly rotate and manage tokens to ensure security, especially if a token is suspected to be compromised.

When asked whether you want to add your Huggingface token as a git credential, answer yes. You should see the following output:

Token is valid (permission: read).
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/paperspace/.cache/huggingface/token
Login successful

Then install Git Large File Storage so we can download the model (which is large):

git lfs install

If successful, the following output will be displayed in your terminal:

Updated git hooks.
Git LFS initialized.
What is Git Large File Storage?

Git Large File Storage (LFS) is an extension for Git that addresses issues related to handling large files and binary data in Git repositories.

Why Git LFS is Necessary

  1. Limitations with Large Files in Git:

    • Standard Git is excellent for handling text files (like code), which are typically small and benefit from Git's delta compression (storing only changes between versions). However, Git isn't optimized for large binary files (like images, videos, datasets, etc.). These files can dramatically increase the size of the repository and degrade performance.

  2. Efficiency and Performance:

    • Cloning and pulling from a repository with large files can be slow, consume a lot of bandwidth, and require significant storage space on every developer's machine. This inefficiency can hinder collaboration and development speed.

  3. Repository Bloat:

    • In standard Git, every version of every file is stored in the repository's history. This means that even if a large file is deleted from the current working directory, its history still resides in the Git database, leading to a bloated repository.

How Git LFS Works

  1. Pointer Files:

    • Git LFS replaces large files in your repository with tiny pointer files. When you commit a large file, Git LFS stores a reference (pointer) to that file in your repository, while the actual file data is stored in a separate server-side LFS storage.

  2. LFS Storage:

    • The large file contents are stored on a remote server configured for LFS, typically alongside your Git repository hosting service (like GitHub, GitLab, Bitbucket, etc.). This storage is separate from your main Git repository storage.

  3. Version Tracking:

    • Git LFS tracks versions of the large files separately. Every time you push or pull changes, Git LFS uploads or downloads the correct version of the large file from the LFS server, ensuring that you have the correct files in your working copy.

  4. Selective Download:

    • When cloning or pulling a repository, Git LFS downloads only the versions of large files needed for your current commit, reducing the time and bandwidth compared to downloading the entire history of every file.

  5. Compatibility:

    • Git LFS is compatible with existing Git services and workflows. It's an extension to Git, so you use the same Git commands. Repositories using Git LFS are still standard Git repos, ensuring backward compatibility.
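To make the pointer-file mechanism above concrete, here is a minimal sketch that parses a Git LFS pointer file. The three-line format (version, oid, size) follows the Git LFS v1 specification; the hash and size values are made up for illustration, not taken from a real Llama3 file.

```python
# Sketch: what a Git LFS pointer file looks like, and how to read one.
# The oid and size values below are illustrative only.

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4e1243bd22c66e76c2ba9eddc1f91394e57f9f83
size 16060563841
"""

def parse_pointer(text):
    """Parse the space-separated key/value lines of a Git LFS pointer file."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

info = parse_pointer(POINTER)
# info["oid"] identifies the large file's content hash;
# info["size"] is the real file's size in bytes, stored server-side.
```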

Create a models directory

mkdir models

Move into the Axolotl models directory:

cd models

Enter the git clone command:

git clone https://huggingface.co/meta-llama/Meta-Llama-3-8B

The outcome should be a new folder called Meta-Llama-3-8B with all the appropriate files in it.

Below is a screenshot from VS Code showing all of the model files.
