
Phi‑2 (Microsoft)
Small language model focused on reasoning, not scale.
Train small, think hard. A 2.7B‑parameter model trained on carefully curated data, with an emphasis on reasoning, logic, and structured text rather than raw web scale.
Reasoning-first. Performs especially well on math, coding, and step‑by‑step reasoning compared to other models in the same size class.
Runs on cheap hardware. Needs roughly 5-8 GB of VRAM for FP16 inference; 4-bit or CPU builds run on 4 GB cards or laptops, just more slowly (a quantized-load sketch follows this list).
Compact and efficient. Much lighter than large frontier models, making it practical to run on modest GPUs or optimized CPU setups.
Plain setup. Single model, standard tokenizer, no tool use or external components required; a minimal loading sketch follows this list.
Research-friendly. Released for research and development use, with clear constraints around deployment and redistribution.
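
To make the "plain setup" point concrete, here is a minimal loading-and-generation sketch, assuming the Hugging Face transformers stack and the official microsoft/phi-2 Hub checkpoint. The Instruct:/Output: prompt style follows the published model card; exact memory use will vary with context length and batch size.

```python
# Minimal sketch: FP16 inference with Phi-2 via Hugging Face transformers.
# device_map="auto" requires the accelerate package; on CPU-only machines,
# drop it along with the fp16 dtype.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,  # weights alone are roughly 5-6 GB in fp16
    device_map="auto",
)

# Phi-2's model card documents an "Instruct: ... Output:" QA format.
prompt = "Instruct: Explain in one sentence why the sum of two odd numbers is even.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```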
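
For the 4 GB-card case mentioned above, a hedged sketch of a 4-bit load via bitsandbytes; this assumes bitsandbytes and accelerate are installed, and the actual footprint still depends on context length.

```python
# Hedged sketch: 4-bit quantized load of Phi-2 for ~4 GB GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 over 4-bit weights
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=quant,
    device_map="auto",
)
```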
Why pick it for Norman AI?
Phi‑2 is a strong choice when you want to test or compare reasoning ability in small models. It’s ideal for evaluations, structured tasks, and experiments where model size is limited but reasoning quality still matters.
