Tokenizer Configuration Files

The tokenizer_config.json and tokenizer.json files serve different purposes in the tokenization process of the Llama 3 language model.

Let's clarify the difference between the two and how they interact:

tokenizer_config.json

  • This file contains the configuration settings for the tokenizer.

  • It defines the behavior and properties of the tokenizer, such as the special tokens, the maximum sequence length (model_max_length), and the names of the input tensors expected by the model (model_input_names).

  • The tokenizer_config.json file specifies how the tokenizer should handle and interpret the input text during the tokenization process.

  • It includes settings like the beginning-of-sequence (BOS) token, the end-of-sequence (EOS) token, and whether to clean up extra spaces when decoding token IDs back into text (clean_up_tokenization_spaces).

  • The tokenizer_config.json file also defines the mapping between special token IDs and their corresponding token content in the "added_tokens_decoder" section.
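
As a concrete illustration, here is a minimal sketch for inspecting these settings from Python. The local path is an assumption; point it at your own model directory, and note the field values shown in comments are only examples.

```python
import json

# Sketch: inspect a locally downloaded tokenizer_config.json.
# "Meta-Llama-3-8B/" is an assumed local path; adjust it to your setup.
with open("Meta-Llama-3-8B/tokenizer_config.json", encoding="utf-8") as f:
    config = json.load(f)

# Typical fields in a Hugging Face tokenizer_config.json.
print(config.get("bos_token"))                    # e.g. "<|begin_of_text|>"
print(config.get("eos_token"))                    # e.g. "<|end_of_text|>"
print(config.get("model_max_length"))             # maximum sequence length
print(config.get("clean_up_tokenization_spaces"))

# Mapping from special token IDs to their token content.
for token_id, entry in list(config.get("added_tokens_decoder", {}).items())[:5]:
    print(token_id, entry["content"])
```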

tokenizer.json

  • This file contains the serialized tokenizer itself: the vocabulary and merge rules used to convert input text into token IDs, together with the normalization, pre-tokenization, and post-processing steps.

  • It defines the mapping between each word, subword, or character in the vocabulary and its corresponding unique token ID.

  • The tokenizer.json file is used during the tokenization process to look up the token IDs for each word or subword in the input text.

  • It is a crucial component of the tokenizer and is loaded by the tokenizer implementation to perform the actual tokenization.
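
A similarly minimal sketch for peeking inside tokenizer.json (again, the local path and the example lookup string are assumptions):

```python
import json

# Sketch: look at the vocabulary stored in a local tokenizer.json.
with open("Meta-Llama-3-8B/tokenizer.json", encoding="utf-8") as f:
    tok = json.load(f)

# The "model" section holds the vocabulary (token string -> token ID)
# and, for BPE tokenizers such as Llama 3's, the merge rules.
vocab = tok["model"]["vocab"]
print(len(vocab))          # base vocabulary size (roughly 128k for Llama 3)
print(vocab.get("hello"))  # ID for one example entry, if it exists

# Special tokens layered on top of the base vocabulary.
for added in tok.get("added_tokens", [])[:3]:
    print(added["id"], added["content"])
```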

Interaction between the two files

  • The tokenizer_config.json file provides the configuration settings for the tokenizer, specifying how it should behave and handle special tokens.

  • The tokenizer.json file contains the actual vocabulary and mappings used by the tokenizer to convert input text into token IDs.

  • During the tokenization process, the tokenizer implementation loads both files:

    • It uses the tokenizer_config.json file to configure its behavior and special token handling.

    • It uses the tokenizer.json file to look up the token IDs for each word or subword in the input text.

  • The tokenizer applies the configuration settings from tokenizer_config.json while utilizing the vocabulary and mappings from tokenizer.json to perform the tokenization.
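
The sketch below shows how this interaction looks in practice with the Hugging Face transformers library. The meta-llama/Meta-Llama-3-8B checkpoint is an assumption (it is gated; any local directory containing both files works the same way):

```python
from transformers import AutoTokenizer

# Loads tokenizer_config.json (behavior, special token handling) and
# tokenizer.json (vocabulary and mappings) from the same repo or directory.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Behavior defined in tokenizer_config.json: special tokens and their IDs.
print(tokenizer.bos_token, tokenizer.bos_token_id)
print(tokenizer.eos_token, tokenizer.eos_token_id)

# Vocabulary from tokenizer.json: text -> token IDs and back again.
ids = tokenizer("The quick brown fox")["input_ids"]
print(ids)  # typically the BOS ID followed by the subword IDs
print(tokenizer.convert_ids_to_tokens(ids))
print(tokenizer.decode(ids, skip_special_tokens=True))
```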
