Model Configuration
base_model
Specifies the Hugging Face model ID, or a local path, containing the model files (*.pt, *.safetensors, or *.bin).
base_model_ignore_patterns
Specifies an ignore pattern for model files in the repository, e.g., to skip an unwanted weight format.
base_model_config
Location of the configuration .json file, if it is not included in the base model repository.
model_revision
Pins a specific model revision (branch, tag, or commit hash) on the Hugging Face Hub.
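A minimal sketch of how these source options sit together in a config file; the model ID, ignore pattern, and revision below are illustrative placeholders rather than recommended values.

```yaml
# Illustrative values only; point these at your own repository.
base_model: NousResearch/Llama-2-7b-hf           # HF model ID or local path
base_model_ignore_patterns: "*.bin"              # skip an unwanted weight format in the repo
base_model_config: NousResearch/Llama-2-7b-hf    # only needed if config.json lives elsewhere
model_revision: main                             # branch, tag, or commit hash
```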
tokenizer_config
Optional override for the tokenizer configuration when it differs from the base model's.
model_type
Defines the type of model to load, e.g., AutoModelForCausalLM.
tokenizer_type
Corresponding tokenizer type, e.g., AutoTokenizer.
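For instance, a causal language model paired with its generic tokenizer class could be declared as below; the tokenizer_config override is assumed here only to show the option in context, not required.

```yaml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Assumed override: load tokenizer files from a different repo than the base model
tokenizer_config: NousResearch/Llama-2-7b-hf
```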
trust_remote_code
Allows executing custom modeling code bundled with the model repository; enable only for sources you trust.
tokenizer_use_fast
Indicates whether to use the use_fast option when loading the tokenizer.
tokenizer_legacy
Specifies whether to use the legacy tokenizer setting.
resize_token_embeddings_to_32x
Resizes model embeddings to multiples of 32 when new tokens are added.
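The tokenizer and embedding options above are plain booleans; the values in this sketch are examples, not defaults.

```yaml
trust_remote_code: false              # leave false unless you trust the model repository
tokenizer_use_fast: true              # pass use_fast when loading the tokenizer
tokenizer_legacy: true                # keep the tokenizer's legacy behavior
resize_token_embeddings_to_32x: true  # pad the embedding matrix to a multiple of 32 after adding tokens
```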
is_falcon_derived_model
Boolean flag indicating if the model is derived from Falcon.
is_llama_derived_model
Boolean flag indicating if the model is derived from Llama.
is_mistral_derived_model
Boolean flag indicating if the model is derived from Mistral.
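These flags let the trainer apply architecture-specific handling; set the one that matches the checkpoint's lineage. A sketch for a Mistral-derived model:

```yaml
# Example: a fine-tune based on a Mistral checkpoint
is_falcon_derived_model: false
is_llama_derived_model: false
is_mistral_derived_model: true
```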
model_config
Optional overrides for the base model configuration, including RoPE Scaling settings.
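As an example of a model_config override, RoPE scaling follows the Hugging Face rope_scaling schema; the type and factor values here are purely illustrative.

```yaml
model_config:
  rope_scaling:
    type: linear   # or: dynamic
    factor: 2.0    # scale factor applied to extend the context window
```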