Code (`cookbook/11_models/vllm/basic_stream.py`):

```python
from agno.agent import Agent
from agno.models.vllm import VLLM

agent = Agent(
    model=VLLM(id="Qwen/Qwen2.5-7B-Instruct", top_k=20, enable_thinking=False),
    markdown=True,
)

agent.print_response("Share a 2 sentence horror story", stream=True)
```
Start the vLLM server:

```shell
vllm serve Qwen/Qwen2.5-7B-Instruct \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --dtype float16 \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.9
```
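Once the server is up, it can be sanity-checked before running the agent: vLLM exposes an OpenAI-compatible HTTP API (port 8000 by default), so a request to `/v1/models` should list the served model. A minimal standard-library sketch, assuming the `vllm serve` command above is running locally (the helper names here are illustrative, not part of Agno or vLLM):

```python
import json
import urllib.request


def extract_model_ids(payload: dict) -> list:
    """Pull model ids out of an OpenAI-style /v1/models response body."""
    return [model["id"] for model in payload.get("data", [])]


def list_served_models(base_url: str = "http://localhost:8000") -> list:
    """Query the server's OpenAI-compatible /v1/models endpoint.

    Assumes the `vllm serve` command above is running on the default port.
    """
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return extract_model_ids(json.load(resp))


# Usage, with the server running:
#   list_served_models()  # expected to include "Qwen/Qwen2.5-7B-Instruct"
```

If the model id appears in the list, the Agno agent above should be able to connect and stream responses.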