Model Analysis - Configuration Parameters

This is an analysis of the config.json file shipped with the Llama3 model.

These configuration parameters define the architecture and hyperparameters of the Llama3 language model.

The model uses a decoder-only transformer architecture with 32 hidden layers, each having 32 attention heads and a hidden size of 4096.

The model can process sequences up to 8192 tokens long and uses SiLU activation in the hidden layers.

The weights are initialised with a range of 0.02, and the model uses bfloat16 precision for its weights and activations.

The vocabulary size is set to 128,256 tokens, with specific IDs assigned for the beginning-of-sequence and end-of-sequence tokens.
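
Before walking through each field, it can help to load the file programmatically and confirm these values. The sketch below uses the Transformers AutoConfig class; the local path is an assumption, so substitute wherever the Llama3 files were downloaded, or the Hugging Face repository ID.

```python
# A minimal sketch: load config.json and inspect the key architecture parameters.
# "./models/Meta-Llama-3-8B" is a placeholder path -- point it at your local
# download, or pass the Hugging Face repo ID (e.g. "meta-llama/Meta-Llama-3-8B").
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./models/Meta-Llama-3-8B")

print(config.model_type)               # llama
print(config.num_hidden_layers)        # 32
print(config.num_attention_heads)      # 32
print(config.num_key_value_heads)      # 8
print(config.hidden_size)              # 4096
print(config.max_position_embeddings)  # 8192
print(config.vocab_size)               # 128256
```

The table below explains each parameter in turn.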

| Parameter | Value | Explanation |
| --- | --- | --- |
| architectures | LlamaForCausalLM | Specifies the architecture class used for the language model. This class is designed for causal language modeling tasks, where the model predicts the next token based on the previous tokens. |
| attention_bias | false | Indicates that attention bias, which can be used to mask out certain positions or modify attention weights, is not used in the model. |
| attention_dropout | 0.0 | The dropout probability for the attention layers. Dropout is a regularization technique that randomly sets some input units to 0 during training. In this case, dropout is not applied to the attention layers. |
| bos_token_id | 128000 | The ID of the special token used to indicate the beginning of a sequence. This token is added to the start of the input sequence. |
| eos_token_id | 128001 | The ID of the special token used to indicate the end of a sequence. This token is added to the end of the input sequence. |
| hidden_act | "silu" | The activation function used in the hidden layers of the model. In this case, the Sigmoid Linear Unit (SiLU) function is used, defined as x * sigmoid(x). |
| hidden_size | 4096 | The dimensionality of the hidden states in the model. This determines the size of the vectors the model uses to represent the input and output at each layer. |
| initializer_range | 0.02 | The range used for initialising the model's weights. The weights are initialised randomly within this range to break symmetry and encourage diverse representations. |
| intermediate_size | 14336 | The dimensionality of the intermediate (feed-forward) layer in the model. This layer is applied after the attention layer and expands the hidden state before projecting it back to the original hidden_size. |
| max_position_embeddings | 8192 | The maximum sequence length, in tokens, that the model can process. This limits the context size the model can attend to. |
| model_type | "llama" | Specifies the type of the language model. In this case, it is the "llama" model, a large language model developed by Meta. |
| num_attention_heads | 32 | The number of attention heads in each attention layer. Attention heads allow the model to attend to different parts of the input sequence simultaneously, capturing different relationships and patterns. |
| num_hidden_layers | 32 | The number of hidden layers (transformer blocks) in the model. Each layer consists of an attention mechanism followed by a feed-forward network. |
| num_key_value_heads | 8 | The number of key-value heads in each attention layer. Because this is smaller than num_attention_heads, the model uses grouped-query attention, in which each key-value head is shared by a group of four query heads. |
| pretraining_tp | 1 | The tensor parallelism degree used during pretraining. Tensor parallelism allows computation to be split across multiple devices or nodes; a value of 1 means it is not used. |
| rms_norm_eps | 1e-05 | The epsilon value used for RMSNorm (Root Mean Square Normalization) layers. RMSNorm normalizes activations based on their root mean square values; the epsilon is added for numerical stability. |
| rope_scaling | null | Indicates that RoPE (Rotary Position Embedding) scaling is not used. RoPE is a technique for encoding positional information in the attention mechanism. |
| rope_theta | 500000.0 | The base frequency (theta) for RoPE. This value determines the frequencies of the rotary position embeddings. |
| tie_word_embeddings | false | Indicates that word embeddings are not tied with the output layer. Tying word embeddings means sharing the weights between the input embedding layer and the output softmax layer. |
| torch_dtype | "bfloat16" | The data type used for the model's weights and activations. Brain Floating Point 16-bit (bfloat16) is a 16-bit floating-point format that offers better performance and memory usage than the standard 32-bit float. |
| transformers_version | 4.40.0.dev0 | The version of the Transformers library the model configuration was saved with. |
| use_cache | true | Enables caching of key-value pairs during inference for faster generation. Caching stores the computed key-value pairs for each layer, avoiding redundant computation when generating sequences. |
| vocab_size | 128256 | The size of the model's vocabulary, in tokens. This is the number of unique tokens the model can recognize and generate, including regular words, subwords, and special tokens. |
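
Several quantities follow directly from these values. The sketch below derives the attention head dimension, the grouped-query attention ratio and an approximate parameter count; it assumes the standard Llama decoder layout (separate gate, up and down feed-forward projections, RMSNorm weight vectors, and an untied output head), so treat the total as a back-of-the-envelope figure rather than an official count.

```python
# Back-of-the-envelope arithmetic from the config values above.
hidden_size = 4096
intermediate_size = 14336
num_hidden_layers = 32
num_attention_heads = 32
num_key_value_heads = 8
vocab_size = 128256

head_dim = hidden_size // num_attention_heads           # 128
gqa_group = num_attention_heads // num_key_value_heads  # 4 query heads share each KV head
kv_dim = num_key_value_heads * head_dim                 # 1024

# Attention projections per layer: Q and O are full width, K and V are kv_dim wide.
attn = 2 * hidden_size * hidden_size + 2 * hidden_size * kv_dim
# Feed-forward per layer: gate, up and down projections.
mlp = 3 * hidden_size * intermediate_size
# Two RMSNorm weight vectors per layer.
norms = 2 * hidden_size

per_layer = attn + mlp + norms               # ~218M per transformer block
embeddings = vocab_size * hidden_size        # ~525M input embeddings
lm_head = vocab_size * hidden_size           # untied output head (tie_word_embeddings: false)
final_norm = hidden_size

total = num_hidden_layers * per_layer + embeddings + lm_head + final_norm
print(f"{total / 1e9:.2f}B parameters")      # ~8.03B
```

The result, roughly 8.03 billion parameters, is consistent with the 8B variant of Llama3.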

config.json

The configuration is stored as a plain JSON file, config.json, alongside the model weights.
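
A reconstruction of the file, assembled from the parameter values documented above, is shown below; the copy distributed with the model may order its fields differently or include additional entries.

```json
{
  "architectures": ["LlamaForCausalLM"],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.0.dev0",
  "use_cache": true,
  "vocab_size": 128256
}
```

Each field is explained below.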

  1. "architectures": This parameter specifies the architecture class used for the language model, which is "LlamaForCausalLM". This indicates that the model is designed for causal language modeling tasks.

  2. "attention_bias": Set to false, indicating that attention bias is not used in the model.

  3. "attention_dropout": The dropout probability for the attention layers, set to 0.0, meaning no dropout is applied.

  4. "bos_token_id": The ID of the beginning-of-sequence token, set to 128000.

  5. "eos_token_id": The ID of the end-of-sequence token, set to 128001.

  6. "hidden_act": The activation function used in the hidden layers, set to "silu" (Sigmoid Linear Unit).

  7. "hidden_size": The dimensionality of the hidden states in the model, set to 4096.

  8. "initializer_range": The range used for initializing the model's weights, set to 0.02.

  9. "intermediate_size": The dimensionality of the intermediate (feed-forward) layer in the model, set to 14336.

  10. "max_position_embeddings": The maximum sequence length that the model can process, set to 8192 tokens.

  11. "model_type": Specifies the type of the language model, set to "llama".

  12. "num_attention_heads": The number of attention heads in each attention layer, set to 32.

  13. "num_hidden_layers": The number of hidden layers (transformer blocks) in the model, set to 32.

  14. "num_key_value_heads": The number of key-value heads in each attention layer, set to 8.

  15. "pretraining_tp": The tensor parallelism used during pretraining, set to 1.

  16. "rms_norm_eps": The epsilon value used for RMSNorm (Root Mean Square Normalization) layers, set to 1e-05.

  17. "rope_scaling": Set to null, indicating that RoPE (Rotary Position Embedding) scaling is not used.

  18. "rope_theta": The scaling factor for RoPE, set to 500000.0.

  19. "tie_word_embeddings": Set to false, indicating that word embeddings are not tied with the output layer.

  20. "torch_dtype": The data type used for the model's weights and activations, set to "bfloat16" (Brain Floating Point 16-bit).

  21. "transformers_version": The version of the Transformers library used, set to "4.40.0.dev0".

  22. "use_cache": Set to true, enabling caching of key-value pairs during inference for faster generation.

  23. "vocab_size": The size of the model's vocabulary, set to 128256 tokens.

