Green Fern

phi-2

Phi‑2 (Microsoft)

Small language model focused on reasoning, not scale.

  • Train small, think hard. A 2.7B‑parameter model trained on carefully curated data, with an emphasis on reasoning, logic, and structured text rather than raw web scale.

  • Reasoning-first. Performs especially well on math, coding, and step‑by‑step reasoning compared to other models in the same size class.

  • Runs on cheap hardware. Needs about 5-8 GB of VRAM for FP16 inference; 4-bit or CPU builds will run on 4 GB cards or laptops, just slower.

  • Compact and efficient. Much lighter than large frontier models, making it practical to run on modest GPUs or optimized CPU setups.

  • Plain setup. Single model, standard tokenizer, no tool use or external components required.

  • Research-friendly. Released for research and development use, with clear constraints around deployment and redistribution.
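The VRAM figures above follow directly from the parameter count. A back-of-the-envelope estimate for the weights alone (activations and KV cache add further overhead, which is why the practical range runs higher):

```python
PARAMS = 2.7e9  # Phi-2 parameter count

def weight_gib(bytes_per_param: float) -> float:
    """Approximate memory to hold the weights alone, in GiB."""
    return PARAMS * bytes_per_param / 1024**3

fp16 = weight_gib(2.0)   # 16-bit floats: roughly 5 GiB
int4 = weight_gib(0.5)   # 4-bit quantized: roughly 1.3 GiB
```

This is why FP16 fits comfortably in 5-8 GB while a 4-bit build squeezes onto a 4 GB card.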

Why pick it for Norman AI?

Phi‑2 is a strong choice when you want to test or compare reasoning ability in small models. It’s ideal for evaluations, structured tasks, and experiments where model size is limited but reasoning quality still matters.

# Conversation history in the standard chat-message format.
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant",
     "content": "Sure! Here are some ways to eat bananas and dragonfruits together"},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

# Send the conversation to phi-2 via the Norman client
# (assumed to be initialized elsewhere; call from an async context).
response = await norman.invoke(
    {
        "model_name": "phi-2",
        "inputs": [
            {
                "display_title": "Prompt",
                "data": messages
            }
        ]
    }
)
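If you want to run the same `messages` list against a raw Phi-2 checkpoint rather than through `norman.invoke`, the chat history must first be flattened into a single prompt string. A minimal sketch, where the role-prefix template is an assumption (a real deployment may apply its own chat template):

```python
def to_prompt(messages: list[dict]) -> str:
    """Flatten chat messages into one prompt string.

    Uses a simple hypothetical "Role: content" template; Phi-2
    deployments may expect a different format.
    """
    lines = [f"{m['role'].capitalize()}: {m['content']}" for m in messages]
    lines.append("Assistant:")  # cue the model to respond
    return "\n".join(lines)
```

For example, `to_prompt([{"role": "user", "content": "hi"}])` yields the two-line string `User: hi` followed by `Assistant:`.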