AI Tutor API supports real-time streaming responses, enabling progressive data retrieval for enhanced user experience. This feature is particularly valuable for applications requiring immediate feedback and interactive learning experiences.

Client-side streaming in AI Tutor API transmits data efficiently, displaying results as they become available instead of waiting for the complete response. This makes it ideal for real-time tutoring sessions and interactive educational content delivery.

Key Components of Streaming Implementation

Single-Use Token Authentication

Secure your requests with single-use tokens for client-side authentication.

Real-time Data Streaming

Receive and display progressive responses as the AI tutor generates content.

Client-side Integration

Easy-to-implement React components for handling streaming responses.

Flexible Input Processing

Support for various input formats in streaming requests.

Implementation Guide

First, obtain a single-use token from the authentication endpoint for secure client-side requests.
curl --request POST \
  --url https://aitutor-api.vercel.app/api/token
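The exact response shape is not shown above; assuming the endpoint returns JSON with a token field (an assumption to verify against the API reference), a minimal client-side helper might look like this:
// Sketch: fetch a single-use token from the client.
// Assumes the endpoint returns JSON shaped like { token: string } —
// confirm the actual response shape in the API reference.
async function getSingleUseToken(): Promise<string> {
  const response = await fetch("https://aitutor-api.vercel.app/api/token", {
    method: "POST",
  });
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`);
  }
  const data = (await response.json()) as { token: string };
  return data.token;
}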
Use the token to make a streaming request to the AI Tutor API:
curl --request POST \
  --url 'https://aitutor-api.vercel.app/api/v1/run/{your-workflow-id}/stream?token=pub_tok_339db872d1a99b6996707' \
  --header 'Content-Type: application/json' \
  --data '{"topic":"vision pro"}'

Use this React component to handle streaming responses:
"use client";

import { useCallback, useEffect, useState } from "react";
import { Spinner } from "./loaders";

export default function StreamingText({
  url,
  fallbackText,
  className,
}: {
  url: string;
  fallbackText: string;
  className?: string;
}) {
  const [loading, setLoading] = useState(false);
  const [result, setResult] = useState("");

  const getData = useCallback(async () => {
    setLoading(true);
    const response = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
    });

    if (!response.ok) {
      return null;
    }

    const data = response.body;
    if (!data) {
      setResult(fallbackText);
      return;
    }

    const reader = data.getReader();
    const decoder = new TextDecoder();
    let done = false;

    setLoading(false);

    while (!done) {
      const { value, done: doneReading } = await reader.read();
      done = doneReading;
      const chunkValue = decoder.decode(value);
      setResult((prev) => (prev ?? "") + chunkValue);
    }
  }, [url, fallbackText]);

  useEffect(() => {
    if (url) {
      getData();
    }
  }, []);

  return loading ? (
    <Spinner className={className} />
  ) : (
    <p className={className}>{result}</p>
  );
}
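For example, the component can be pointed at the streaming endpoint from the previous step (workflowId, token, and the fallback and class values are placeholders for your own):
// Render streamed AI tutor output; the url follows the curl example above.
<StreamingText
  url={`https://aitutor-api.vercel.app/api/v1/run/${workflowId}/stream?token=${token}`}
  fallbackText="The tutor is unavailable right now. Please try again."
  className="tutor-response"
/>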
The component shows a spinner until the first chunk arrives, appends each decoded chunk to the displayed text as it is received, and falls back to fallbackText if the request fails.

Best Practices for Streaming Implementation

  • Implement robust error handling for network issues and invalid responses.
  • Display an appropriate loading indicator while waiting for the initial response.
  • Manage single-use tokens carefully and request a fresh one for each call (see the sketch after this list).
  • Optimize your application to handle continuous data streams efficiently.
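
Because each token is valid for a single request, a simple refresh pattern is to fetch a fresh token immediately before building each streaming URL. A minimal sketch, reusing the getSingleUseToken helper from the implementation guide (workflowId is a placeholder):
// Request a new single-use token for every run, then build the stream URL.
async function buildStreamUrl(workflowId: string): Promise<string> {
  const token = await getSingleUseToken(); // fresh token per request
  return `https://aitutor-api.vercel.app/api/v1/run/${workflowId}/stream?token=${token}`;
}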

Benefits of Streaming in AI Tutor API

  • Real-time Feedback: Immediate display of AI tutor responses as they’re generated.
  • Enhanced UX: Better user experience with progressive content loading.
  • Resource Efficiency: Optimized data transmission and processing.
  • Interactive Learning: Support for real-time interactive tutoring sessions.
  • Scalability: Efficient handling of large responses through chunked transfer.