This documentation is for the Axolotl community

Llama3 - Logging

Last updated 1 year ago

We will be using Weights and Biases (W&B) to log and monitor our fine-tuning runs.

If you are new to Weights and Biases, please take the time to review their Quickstart guide and documentation.

A run is the basic building block of W&B. You will use runs often to track metrics, create logs, and create jobs.

Once you have created a Weights and Biases account, log in by entering the following command at the command prompt:

wandb login

You will be prompted to paste your API token, which you can find in your Weights and Biases account settings:

API Token:
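On a headless machine, the token can also be supplied non-interactively through the WANDB_API_KEY environment variable, which wandb reads at startup. A minimal sketch (the value below is a placeholder, not a real token):

```python
import os

# Placeholder value: substitute your own API token.
# With this variable set, wandb does not need an interactive prompt.
os.environ["WANDB_API_KEY"] = "<your-api-token>"
```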

Configuration of Logging Component

Project

The project name can be anything you like; keep it simple but informative.

Entity

The entity is your W&B username or organisation, entered as <your-organisation>.

The configuration file should look like the below script:

wandb_project: <your project name>
wandb_entity: <your-organisation>
wandb_name: <name of your wandb run>
wandb_run_id: <ID of your wandb run>

Reference: The Weights and Biases Script in the Axolotl Library

src/axolotl/utils/wandb_.py

The script defines a function setup_wandb_env_vars that configures environment variables for wandb based on a given configuration. Here's a detailed explanation of what this function does:

Module Description

  • The docstring """Module for wandb utilities""" indicates that this Python file is intended to provide utility functions for working with wandb.

Import Statements

  • The script imports the os module, which provides functions for interacting with the operating system, including managing environment variables.

  • It imports DictDefault from axolotl.utils.dict, which is a custom dictionary utility, likely providing some default behavior for dictionary operations.

Function Definition

def setup_wandb_env_vars(cfg: DictDefault):

  • The function takes a single parameter cfg, which is expected to be an instance of DictDefault. This parameter likely contains configuration settings.

Setting Environment Variables for wandb

  • The function iterates through all keys in the cfg dictionary.

  • If a key starts with "wandb_", the function retrieves its value.

  • If the value is a non-empty string, the function sets an environment variable with the name of the key in uppercase and assigns it the value from cfg. For example, if cfg contains {"wandb_api_key": "your_key"}, it sets an environment variable WANDB_API_KEY with the value "your_key".
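The mapping described above can be sketched as a standalone snippet (hypothetical example values, not the actual Axolotl source):

```python
import os

# Example configuration; empty strings are skipped, as described above.
cfg = {
    "wandb_project": "llama3-ft",
    "wandb_entity": "my-org",
    "wandb_watch": "",
}

for key, value in cfg.items():
    if key.startswith("wandb_") and isinstance(value, str) and value:
        # e.g. "wandb_project" becomes the WANDB_PROJECT environment variable
        os.environ[key.upper()] = value
```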

Enabling wandb Integration

  • The function checks if cfg.wandb_project exists and is a non-empty string.

  • If cfg.wandb_project is set, cfg.use_wandb is set to True, and any existing WANDB_DISABLED environment variable is removed. This implies that wandb should be enabled and operational for the current session.

  • If cfg.wandb_project is not set, the function sets the environment variable WANDB_DISABLED to "true", effectively disabling wandb integration.
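The enable/disable branch can be sketched as follows; the helper name is hypothetical, and the real logic lives inside setup_wandb_env_vars in src/axolotl/utils/wandb_.py:

```python
import os

def enable_wandb_if_configured(cfg: dict) -> bool:
    """Sketch of the enable/disable logic described above (hypothetical helper)."""
    project = cfg.get("wandb_project")
    if isinstance(project, str) and project:
        # A project is configured: remove any stale disable flag.
        os.environ.pop("WANDB_DISABLED", None)
        return True
    # No project configured: switch wandb off for this session.
    os.environ["WANDB_DISABLED"] = "true"
    return False
```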

Usage Context

  • This script is useful for dynamically configuring wandb based on a set of configuration parameters, especially in scenarios where different wandb settings are needed for different runs or experiments.

  • It automates the process of setting up wandb environment variables, enabling or disabling wandb tracking based on the provided configuration.

In summary, the script is a utility for configuring wandb environment variables in a Python environment based on a provided configuration, enabling or disabling wandb tracking as needed. This is particularly useful in machine learning workflows where experiment tracking needs to be managed programmatically.
