Phi 2.0 - Model Configuration
The first block of the Axolotl configuration file defines the model type. It comprises three main settings:
base_model
model_type
tokenizer_type
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
Below are explanations of the Hugging Face Transformers classes used within the Axolotl training script for the model and tokenizer types.
Reference: AutoModelForCausalLM - a class within the Hugging Face Transformers library
The AutoModelForCausalLM class is part of the Hugging Face Transformers library and is designed to provide a convenient way to instantiate and work with pre-trained models for causal language modelling tasks.
Causal language modelling, also known as autoregressive language modelling, is a type of language modelling task where the model predicts the next token in a sequence based on the previous tokens.
In other words, given a sequence of tokens, the model learns to predict the probability distribution of the next token. This is commonly used for tasks like text generation, where the model generates text by predicting one token at a time based on the previously generated tokens.
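To make next-token prediction concrete, here is a minimal sketch using GPT-2 purely as an illustration (the prompt text is arbitrary); any causal language model behaves the same way:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The distribution over the next token lives at the last sequence position
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))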
The AutoModelForCausalLM class is a subclass of _BaseAutoModelClass, which is the base class for all the auto model classes in the Transformers library. The purpose of the auto model classes is to provide a unified interface for loading and using pre-trained models for various tasks.
Here's how the AutoModelForCausalLM class works:
It has a class attribute _model_mapping that maps model names to their corresponding classes for causal language modelling. This mapping allows the class to automatically determine the appropriate model class based on the provided model name or path.
When you call the from_pretrained() method of AutoModelForCausalLM and provide a pre-trained model name or path, it automatically retrieves the corresponding model class from the _model_mapping based on the model name.
It then initializes and returns an instance of the retrieved model class, which can be used for causal language modelling tasks.
The advantage of using AutoModelForCausalLM is that you don't need to know the specific model class for a given pre-trained model. You can simply provide the model name or path, and the class will handle the instantiation of the appropriate model class for you.
For example, if you have a pre-trained GPT-2 model and want to use it for causal language modelling, you can do the following:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
This code will automatically load the GPT-2 model and return an instance of the appropriate model class for causal language modelling.
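Applied to this guide's configuration, the equivalent load for microsoft/phi-2 looks like the sketch below. Whether trust_remote_code is required depends on your transformers version, as recent releases include native Phi support:
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype="auto",       # keep the checkpoint's native dtype
    trust_remote_code=True,   # may only be needed on older transformers versions
)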
Overall, the AutoModelForCausalLM class provides a convenient and flexible way to work with pre-trained models for causal language modelling tasks, abstracting away the need to know the specific model classes and allowing you to focus on using the models for your desired task.
Reference: AutoTokenizer - a class within the Hugging Face Transformers library
The AutoTokenizer class is a powerful and versatile tool in the Hugging Face Transformers library that simplifies the process of instantiating the appropriate tokenizer for a given pretrained model.
It serves as a high-level interface that automatically selects and initializes the correct tokenizer class based on the provided pretrained model name or path.
Let's dive into the key aspects of the AutoTokenizer class and understand how it enhances the usability and flexibility of tokenization in natural language processing tasks:
Automatic Tokenizer Selection
The primary purpose of the AutoTokenizer class is to automatically determine the appropriate tokenizer class to use based on the pretrained model. It eliminates the need for users to manually specify the tokenizer class, saving time and reducing the chances of errors. The class leverages various methods to infer the tokenizer class, such as examining the model's configuration, using pattern matching on the model name or path, or utilizing a tokenizer configuration file.
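A quick way to see this selection in action is a sketch like the following (GPT-2 used as an illustration; the resolved class name may vary across transformers versions):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(type(tokenizer).__name__)  # e.g. GPT2TokenizerFast, selected automatically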
Pretrained Model Support
The AutoTokenizer class seamlessly integrates with pretrained models available on the Hugging Face Model Hub or saved locally. It accepts a pretrained_model_name_or_path parameter, which can be a model identifier, a path to a directory containing the necessary files, or a URL to a specific file. This flexibility allows users to easily load tokenizers associated with a wide range of pretrained models, enabling quick experimentation and transfer learning.
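For example, a tokenizer can be pulled from the Hub by identifier, saved, and reloaded from a local directory (the ./phi2-tokenizer path below is arbitrary):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")   # Hub identifier
tokenizer.save_pretrained("./phi2-tokenizer")                  # write files locally
tokenizer = AutoTokenizer.from_pretrained("./phi2-tokenizer")  # reload from disk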
Tokenizer Instantiation
The from_pretrained() class method is the primary entry point for instantiating tokenizers using the AutoTokenizer. It takes care of downloading and caching the required files, such as vocabulary files, if they are not already present locally. The method accepts various parameters to customize the tokenizer's behavior, such as specifying the tokenizer type, using a fast tokenizer variant, or providing additional keyword arguments.
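As a sketch of that customization, extra keyword arguments are forwarded to the underlying tokenizer's constructor; padding_side here is just one example of such a forwarded parameter:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "gpt2",
    use_fast=True,        # prefer the Rust-backed tokenizer (the default)
    padding_side="left",  # forwarded to the tokenizer's constructor
)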
Fast Tokenizers
The AutoTokenizer class supports the use of fast tokenizers, which are implemented in Rust and offer improved performance compared to their Python counterparts. By setting the use_fast parameter to True (the default), the class automatically selects the fast tokenizer variant if one is available for the given model. If a fast tokenizer is not available, it gracefully falls back to the standard Python-based tokenizer.
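The is_fast attribute makes the difference easy to verify; bert-base-uncased is used here purely as an example of a model that ships both variants:
from transformers import AutoTokenizer

fast = AutoTokenizer.from_pretrained("bert-base-uncased")                  # default: fast
slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)  # force Python
print(fast.is_fast, slow.is_fast)  # True False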
Trust Remote Code
The trust_remote_code parameter allows users to control whether the AutoTokenizer should trust and execute custom tokenization code defined in the model's repository. This feature is useful for models that require specific tokenization logic, but it should be used with caution and only for trusted repositories.
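A minimal sketch of the opt-in; "some-org/custom-model" is a placeholder, not a real repository:
from transformers import AutoTokenizer

# Placeholder repository id; enable trust_remote_code only for code you trust,
# since it downloads and executes Python from the model repository
tokenizer = AutoTokenizer.from_pretrained(
    "some-org/custom-model", trust_remote_code=True
)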
Tokenizer Configuration
The AutoTokenizer class utilizes configuration objects (PretrainedConfig) to determine the appropriate tokenizer class to instantiate. It first attempts to load the tokenizer configuration from a dedicated file (tokenizer_config.json) associated with the pretrained model. If the tokenizer configuration is not available, it falls back to using the model's configuration (AutoConfig) to infer the tokenizer class.
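One way to see what AutoTokenizer reads is to fetch and inspect that file directly. This sketch uses the huggingface_hub client (a dependency of Transformers) and assumes the repository ships a tokenizer_config.json, as microsoft/phi-2 does:
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="microsoft/phi-2", filename="tokenizer_config.json")
with open(path) as f:
    config = json.load(f)
print(config.get("tokenizer_class"))  # the class AutoTokenizer will instantiate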
Tokenizer Registration
The AutoTokenizer class provides a register() method that allows users to register new tokenizer classes. This feature is particularly useful for extending the AutoTokenizer to support custom or newly developed tokenizers. By registering a configuration class along with the corresponding slow and fast tokenizer classes, users can seamlessly integrate their own tokenizers into the AutoTokenizer ecosystem.
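A minimal sketch of the registration flow; MyConfig and MyTokenizer are hypothetical stand-ins, and a real tokenizer would implement the full PreTrainedTokenizer interface:
from transformers import AutoConfig, AutoTokenizer, PretrainedConfig, PreTrainedTokenizer

class MyConfig(PretrainedConfig):
    model_type = "my-model"  # hypothetical model type string

class MyTokenizer(PreTrainedTokenizer):
    pass  # a real implementation would define the required tokenization methods

AutoConfig.register("my-model", MyConfig)
AutoTokenizer.register(MyConfig, slow_tokenizer_class=MyTokenizer)
# AutoTokenizer can now resolve checkpoints whose config is MyConfig to MyTokenizer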
In summary, the AutoTokenizer class is a powerful tool that simplifies the process of initializing tokenizers for pretrained models. It abstracts away the complexities of manually selecting and instantiating tokenizer classes, allowing users to focus on their natural language processing tasks. With its automatic tokenizer selection, support for pretrained models, fast tokenizer variants, and extensibility through registration, the AutoTokenizer class greatly enhances the usability and flexibility of tokenization in the Hugging Face Transformers library.
The next step after setting the model type configurations is to configure the data loading and processing parameters.