Qwen2.5-1.5B

Small, fast general-purpose language model that punches above its size.

  • Train wide, stay lean. A 1.5B-parameter model from the Qwen2.5 family, trained on a broad mix of high-quality web text, code, and instruction data.

  • Balanced capability. Strong at chat, summarization, classification, and light reasoning—more capable than most models in the sub-2B range.

  • Built for speed. Low memory usage and fast inference make it practical on CPUs, edge setups, and high-throughput services; see the local-inference sketch after this list.

  • Minimal overhead. Single model, standard tokenizer, no tool chains or special runtimes required.

  • Open and usable. Released under a permissive open license, safe for commercial and research use.
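
The sketch below illustrates the "minimal overhead" and CPU-friendliness points for local deployments. It is a minimal example under stated assumptions, not part of the Norman AI setup on this page: it assumes the upstream Hugging Face checkpoint Qwen/Qwen2.5-1.5B-Instruct and the transformers and torch packages.

# Minimal local-inference sketch (assumption: the upstream Hugging Face
# checkpoint "Qwen/Qwen2.5-1.5B-Instruct"; transformers and torch installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"                 # assumed upstream checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)     # standard tokenizer, no extras
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads in float32 on CPU by default
model.eval()

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize the benefits of small language models in two sentences."},
]

# Build the prompt with the model's own chat template, then generate on CPU.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))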

Why pick it for Norman AI?

Qwen2.5-1.5B is a solid default when you need an LLM that’s cheap, fast, and reliable. It’s a good fit for assistants, text processing, routing, and experimentation where latency and cost matter more than deep reasoning.
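
The snippet below shows a typical request through Norman's invoke API: a multi-turn chat history is passed as the Prompt input for the qwen2-5-1-5b model.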

# Multi-turn chat history in standard role/content format.
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant",
     "content": "Sure! Here are some ways to eat bananas and dragonfruits together."},
    {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"},
]

# Assumes an async caller and an already-configured Norman client named `norman`.
response = await norman.invoke(
    {
        "model_name": "qwen2-5-1-5b",
        "inputs": [
            {
                "display_title": "Prompt",   # label shown for this input
                "data": messages             # chat history defined above
            }
        ]
    }
)
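
The messages list uses the standard role/content chat format: the earlier assistant turn gives the model the prior exchange as context, and the final user message is the turn it will answer. The request body passes that chat history under the Prompt input's data field, with the model selected by model_name.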