
This documentation is for the Axolotl community

Downloading models

General Introduction

We can download models to our local machine from the Hugging Face Model Hub.

If you are unfamiliar with the Hugging Face Model Hub, the YouTube video below provides a rundown.

Summary of the Hugging Face Model Hub

Navigating the Hugging Face Model Hub:

  1. Access the model hub by clicking on the "Models" tab in the upper right corner of the Hugging Face landing page.

  2. The model hub interface is divided into several parts:

    • Left side: Categories for tailoring model search

      • Tasks: Various tasks such as NLP, computer vision, and speech recognition

      • Libraries: Model backbones (PyTorch, TensorFlow, JAX) and high-level frameworks (transformers, etc.)

      • Datasets: Filter models trained on specific datasets

      • Languages: Filter models that handle selected languages

      • License: Choose the license under which the model is shared

    • Right side: Available models on the hub, ordered by downloads by default

  3. Clicking on a model opens its model card, which contains crucial information:

    • Description, intended use, limitations, and biases

    • Code snippets for model usage

    • Training procedure, data processing, evaluation results, and copyrights

    • Inference API on the right for testing the model with user inputs

    The "Files and versions" tab displays the model repository's file structure, branches, and commit history.

The Model Card

Model cards are files that accompany the models and provide handy information.

Under the hood, model cards are simple Markdown files with additional metadata.

Model cards are essential for discoverability, reproducibility, and sharing. You can find a model card as the README.md file in any model repository.

The model card should describe:

  • the model

  • its intended uses and potential limitations, including biases and ethical considerations

  • the training parameters and experimental info

  • the datasets used to train the model

  • the model’s evaluation results
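
Under the hood, a model card is simply a README.md whose metadata lives in a YAML block at the top of the file. A minimal sketch of that layout (all field values here are illustrative, not taken from any real repository):

```markdown
---
license: apache-2.0
language:
  - en
datasets:
  - yahma/alpaca-cleaned
tags:
  - text-generation
---

# My Fine-Tuned Model

Description, intended uses, limitations and biases, training
parameters, and evaluation results go here.
```

The Hub reads the YAML metadata to power the search filters described above (task, language, license, dataset), so filling it in accurately improves your model's discoverability.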

Downloading Models from Hugging Face Hub

The Hugging Face Hub provides several ways to download and use models, depending on your requirements and the tools you are using. This documentation will guide you through the different methods available.
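
Whichever method you choose, it helps to know that every file in a model repository is served at a predictable "resolve" URL; tools such as the huggingface_hub Python library's hf_hub_download build on this scheme. A small sketch of the convention (the repository ID and filename are illustrative):

```shell
# Files in a Hub repository resolve at:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
REPO_ID="bigscience/bloom"
FILENAME="config.json"
REVISION="main"
echo "https://huggingface.co/${REPO_ID}/resolve/${REVISION}/${FILENAME}"
```

Knowing this convention makes it easy to fetch a single file (for example, just the tokenizer or config) without cloning an entire multi-gigabyte repository.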

Using Git

All models on the Model Hub are stored as Git repositories, which means you can clone them locally using Git commands. To clone a model, follow these steps:

If you have not already, install Git LFS (Large File Storage) by running:

git lfs install

Clone the model repository using the following command:

git clone git@hf.co:<MODEL ID>

Replace <MODEL ID> with the actual model ID. For example:

git clone git@hf.co:bigscience/bloom
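
The git@hf.co form assumes you have an SSH key registered with your Hugging Face account; the same repository can also be cloned over HTTPS without one. A sketch of the two remote URL forms for the model ID used above:

```shell
MODEL_ID="bigscience/bloom"
# SSH remote - requires an SSH key added to your Hub account
echo "git@hf.co:${MODEL_ID}"
# HTTPS remote - works anonymously for public repos;
# private repos will prompt for a username and access token
echo "https://huggingface.co/${MODEL_ID}"
```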

If you have write access to a particular model repository, you can also commit and push revisions to the model.

By following these methods, you can easily download and use models from the Hugging Face Hub in your projects, whether you're using integrated libraries, the Hugging Face Client Library, or Git directly.
