This documentation is for the Axolotl community

Downloading Phi 2.0

Last updated 1 year ago

Review Phi 2.0 at the Hugging Face model repository: https://huggingface.co/microsoft/phi-2

The screenshot below is from the Hugging Face models database, showing the Phi 2.0 model page.

To download the model, click the three vertical dots next to the "Train" button.

You can click on the diagram below to see the three vertical dots:

When you click the three vertical dots, you will see instructions for downloading the model to your local machine.

Enter the command below at your prompt:

git clone https://huggingface.co/microsoft/phi-2

If you don't want to download all of the model files due to bandwidth constraints, set the environment variable below before running the git clone command.

GIT_LFS_SKIP_SMUDGE=1
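The variable can also be set inline for a single command, which avoids leaving it in your shell environment. A minimal sketch, using the same repository URL as above:

```shell
# Clone only the pointer files; the LFS-tracked model weights are skipped
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/microsoft/phi-2
```

Because the variable is set as a prefix, it applies only to this one git invocation.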

Setting GIT_LFS_SKIP_SMUDGE=1 tells Git LFS to skip the automatic downloading of the actual content of large files tracked by LFS when cloning a repository.

Instead, only the pointer files are cloned.

The "smudge" process is what usually replaces these pointers with the actual file content during cloning.
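A pointer file is just a small text stub in the standard Git LFS pointer format. The sketch below writes an illustrative example (the oid and size values are placeholders, not real phi-2 values) so you can see what lands in the working tree after a skip-smudge clone:

```shell
# Write an illustrative LFS pointer file (oid and size are placeholders)
cat > example-pointer.txt <<'EOF'
version https://git-lfs.github.com/spec/v1
oid sha256:4fb0b0c8e7a1d2c3b4a5968778695a4b3c2d1e0f1a2b3c4d5e6f708192a3b4c5
size 4980000000
EOF

# The stub is only a few bytes, even though it stands in for a multi-gigabyte file
cat example-pointer.txt
wc -c example-pointer.txt
```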

By skipping this, you speed up the cloning process and save bandwidth, especially useful if you don't need the large files immediately or are only interested in the repository's code or smaller files.

By running this command, you clone the microsoft/phi-2 repository without immediately downloading the large files tracked by LFS. If you later decide that you need these large files, you can use Git LFS commands to fetch them.
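For example, from inside the cloned directory, `git lfs pull` replaces the pointer stubs with the real content, and the standard `--include` filter lets you fetch only specific files. A sketch, assuming you are in the `phi-2` clone with git-lfs installed:

```shell
cd phi-2

# List which files are LFS-tracked (still pointer stubs after a skip-smudge clone)
git lfs ls-files

# Fetch the real content for every LFS-tracked file
git lfs pull

# Or fetch only the weight shards, leaving other large files as stubs
git lfs pull --include="*.safetensors"
```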

If you download all the files using the git clone command, you should see the following folder and associated files in your VS Code IDE:

With the model downloaded to your local directory, the next step is to download the dataset you wish to use for fine-tuning the model.
