Phi 2.0
Phi 2.0 Model Details
Description: Phi-2 is a 2.7-billion-parameter Transformer model trained on a mix of synthetic NLP texts and filtered web data.
It is designed for common-sense reasoning, language understanding, and logical reasoning.
Intention: Phi-2 has not been fine-tuned with reinforcement learning from human feedback (RLHF). It is intended for research purposes, particularly for safety challenges such as reducing toxicity and understanding societal biases.
Formats: Phi-2 is optimised for the QA format, the chat format, and the code format, sketched below.
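As a minimal sketch of what those three formats look like in practice (the prompt contents here are illustrative examples, not from the source; the QA and chat scaffolding follows the Phi-2 model card):

```python
# Illustrative prompt strings for Phi-2's three supported formats.

# QA format: an "Instruct:" line followed by an "Output:" cue.
qa_prompt = "Instruct: Explain the difference between a list and a tuple.\nOutput:"

# Chat format: alternating named turns; the model completes the last speaker.
chat_prompt = "Alice: What is the capital of France?\nBob:"

# Code format: a signature plus a docstring; the model completes the body.
code_prompt = (
    "def print_prime(n):\n"
    '    """\n'
    "    Print all primes between 1 and n\n"
    '    """\n'
)
```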
Sample Code
Execution Modes: Four modes are supported, depending on the device (CUDA GPU or CPU) and the floating-point precision (FP16 or FP32).
Recommended Usage: A Python example demonstrates how to load the model, tokenize inputs, and generate outputs; a minimal sketch covering both points follows.
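A minimal sketch of that usage, assuming the standard Hugging Face transformers API and the microsoft/phi-2 checkpoint; adjust the device and dtype lines to pick any of the four execution modes:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick one of the four execution modes: CUDA/FP16, CUDA/FP32, CPU/FP16, CPU/FP32.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=dtype).to(device)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Tokenize a QA-format prompt and move the tensors to the chosen device.
inputs = tokenizer("Instruct: Write a haiku about mountains.\nOutput:", return_tensors="pt").to(device)

# Generate a continuation and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```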
The generic configuration file for fine-tuning Phi-2 with axolotl
Source: axolotl/examples/phi/phi2-ft.yml
We will now walk through populating the training configuration file; an abbreviated sketch of the kind of fields it opens with is shown below.
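For orientation only, a hypothetical excerpt, not the full phi2-ft.yml; the field names follow axolotl's config conventions, and the dataset path is a placeholder rather than a value from the source:

```yaml
# Abbreviated, illustrative sketch of an axolotl fine-tuning config for Phi-2.
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false

datasets:
  - path: ./data/train.jsonl   # placeholder; substitute your own dataset
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
num_epochs: 4
learning_rate: 0.00003
output_dir: ./phi2-ft-out
```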