
Qwen2.5-1.5B

Small, fast general-purpose language model that punches above its size.

  • Train wide, stay lean. A 1.5B-parameter model from the Qwen2.5 family, trained on a broad mix of high-quality web text, code, and instruction data.

  • Balanced capability. Strong at chat, summarization, classification, and light reasoning—more capable than most models in the sub-2B range.

  • Built for speed. Low memory usage and fast inference make it practical on CPUs, edge setups, and high-throughput services.

  • Minimal overhead. Single model, standard tokenizer, no tool chains or special runtimes required.

  • Open and usable. Released under a permissive open license, safe for commercial and research use.
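The low-memory claim above can be sanity-checked with simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A back-of-envelope sketch (activations and KV cache excluded; the 1.5B count is nominal):

```python
# Approximate weight memory for a 1.5B-parameter model at common precisions.
PARAMS = 1.5e9  # nominal parameter count

def weight_memory_gib(params: float, bytes_per_param: float) -> float:
    """Weight memory in GiB: params * bytes per parameter / 2^30."""
    return params * bytes_per_param / 2**30

fp16 = weight_memory_gib(PARAMS, 2)    # ~2.8 GiB
int8 = weight_memory_gib(PARAMS, 1)    # ~1.4 GiB
int4 = weight_memory_gib(PARAMS, 0.5)  # ~0.7 GiB
print(f"fp16 ≈ {fp16:.1f} GiB, int8 ≈ {int8:.1f} GiB, int4 ≈ {int4:.1f} GiB")
```

At 8-bit or 4-bit quantization the weights fit comfortably in a few GiB of RAM, which is what makes CPU and edge deployment practical.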

Why pick it for Norman AI?

Qwen2.5-1.5B is a solid default when you need an LLM that’s cheap, fast, and reliable. It’s a good fit for assistants, text processing, routing, and experimentation where latency and cost matter more than deep reasoning.


# Invoke Qwen2.5-1.5B through the Norman client (inside an async context).
response = await norman.invoke(
    {
        "model_name": "qwen2.5-1.5b",
        "inputs": [
            {
                "display_title": "Prompt",
                "data": "Summarize the main benefits of small language models in two sentences."
            }
        ]
    }
)
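Because the model is cheap and fast, it is often called many times in a loop. A minimal sketch of bounding concurrency with `asyncio.Semaphore`, assuming the client's `invoke` is awaitable as in the snippet above; the `StubClient` here is a hypothetical stand-in for the real `norman` client so the example is self-contained:

```python
import asyncio

class StubClient:
    """Hypothetical stand-in for the norman client, for illustration only."""
    async def invoke(self, request: dict) -> dict:
        await asyncio.sleep(0)  # simulate network latency
        return {"output": f"echo: {request['inputs'][0]['data']}"}

async def invoke_all(client, prompts, max_concurrency: int = 8):
    """Run many prompts with at most `max_concurrency` requests in flight."""
    sem = asyncio.Semaphore(max_concurrency)

    async def one(prompt: str) -> dict:
        async with sem:  # blocks while max_concurrency requests are pending
            return await client.invoke(
                {
                    "model_name": "qwen2.5-1.5b",
                    "inputs": [{"display_title": "Prompt", "data": prompt}],
                }
            )

    # gather preserves input order, so results line up with prompts.
    return await asyncio.gather(*(one(p) for p in prompts))

results = asyncio.run(invoke_all(StubClient(), ["hello", "world"]))
```

The semaphore keeps throughput high without overwhelming the service, and `asyncio.gather` returns results in the same order as the input prompts.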