## Streaming Completions
Sometimes you want to stream AI responses back to your client as they are generated, reducing perceived latency and improving responsiveness. You can accomplish this with the Streaming API.
<Tabs>
<Tab title="NodeJS">
```js
const messages = [
  { role: "system", content: "You are a helpful assistant. You will talk like a pirate." },
  { role: "user", content: "What is the best way to train a parrot?" },
];
const stream = await modelClient.streamComplete(messages);
for await (const event of stream) {
  // The stream ends with a "[DONE]" sentinel rather than a JSON payload
  if (event.data === "[DONE]") {
    break;
  }
  for (const choice of JSON.parse(event.data).choices ?? []) {
    process.stdout.write(choice.delta?.content ?? "");
  }
}
```
</Tab>
<Tab title="Python">
```python
response = modelClient.stream_complete([
    SystemMessage(content="You are a helpful assistant. You will talk like a pirate."),
    UserMessage(content="What is the best way to train a parrot?"),
])
for update in response:
    print(update.choices[0].delta.content or "", end="", flush=True)
```
</Tab>
<Tab title=".NET">
```csharp
var completion = await modelClient.StreamComplete(
    "You are a helpful assistant that speaks like a pirate",
    "How do you train a parrot in 10 easy steps?"
);
await foreach (var update in completion)
{
    if (!string.IsNullOrEmpty(update.ContentUpdate))
    {
        Console.Write(update.ContentUpdate);
    }
}
```
</Tab>
</Tabs>