Curl Responses Client

Refer to the trtllm-serve documentation for starting a server.

Source: NVIDIA/TensorRT-LLM.

#! /usr/bin/env bash

curl http://localhost:8000/v1/responses \
    -H "Content-Type: application/json" \
    -d '{
        "model": "TinyLlama-1.1B-Chat-v1.0",
        "input": "Where is New York?",
        "max_output_tokens": 16
    }'
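The server returns a JSON body. As a minimal sketch of pulling the generated text out of it, assuming the response follows the OpenAI Responses API shape (an `output` array of messages, each with a `content` array holding an `output_text` item — verify the exact shape against your server's output):

```shell
#! /usr/bin/env bash
# Hypothetical response body for illustration; a real run would capture it with
# response=$(curl ... http://localhost:8000/v1/responses ...)
response='{"output":[{"type":"message","content":[{"type":"output_text","text":"New York is a state in the northeastern United States."}]}]}'

# Extract the text of the first output item using only Python's standard library.
echo "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["output"][0]["content"][0]["text"])'
```

The same extraction can be done with `jq -r '.output[0].content[0].text'` if `jq` is installed.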