
The AI Tutor API offers developers a powerful and user-friendly way to integrate intelligent tutoring capabilities into their applications, enabling personalized learning experiences with just a few lines of code.

Launch your next AI project in hours, not months. Our platform handles hosting, security, and delivery so you can focus on building. Create workflows with any leading AI model, deploy them as API endpoints, and scale with confidence.

Key Features

50+ AI Models

Access GPT-5, Claude Sonnet 4.5, Gemini 2.5 Pro, Grok 3, Llama, Mistral, DeepSeek and more — all through a single unified API.

Real-time Streaming

Stream responses token-by-token as the AI generates content, fully compatible with the Vercel AI SDK and useChat.

Visual Workflow Builder

Design AI workflows with a no-code builder — set the model, template, instructions, and input variables, then deploy instantly.

Secure by Default

Bearer key authentication, single-use streaming tokens, server-derived rate limiting, and automatic credit enforcement on every request.

Web Search Built-in

Enable web search on any workflow — the AI will ground its response in real sources and return citations automatically.

Bring Your Own Key

Optionally use your own OpenAI, Anthropic, or Google API keys for direct provider access with zero markup.

Base URL

All API requests are made to:
https://aitutor-api.vercel.app

Authentication

API requests are authenticated using a secret key passed in the Authorization header:
Authorization: Bearer sk_your_secret_key
Secret keys are generated in your Settings dashboard. Each key is scoped to your organization and has a configurable rate limit (default: 10 requests/second).
Never expose your sk_ key in client-side code. For browser-based streaming, use a single-use stream token instead.
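One way to keep the `sk_` key server-side is a small authenticated helper that only ever runs on your server. This is an illustrative sketch, not part of the API: the `AITUTOR_API_KEY` environment variable name and the `callApi` helper are assumptions.

```typescript
// Server-side only: builds authenticated headers so the sk_ key
// never reaches the browser.
const BASE_URL = "https://aitutor-api.vercel.app";

export function authHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// Example server-only POST helper (do not import this in client bundles).
// The AITUTOR_API_KEY env var name is an assumption for this sketch.
export async function callApi(path: string, body: unknown): Promise<unknown> {
  const res = await fetch(`${BASE_URL}${path}`, {
    method: "POST",
    headers: authHeaders(process.env.AITUTOR_API_KEY ?? ""),
    body: JSON.stringify(body),
  });
  return res.json();
}
```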

API Endpoints

How It Works

Follow these steps to go from zero to a live AI endpoint.
1. In the dashboard, create a new workflow. Choose an AI model, write a template with {{variables}}, and optionally add system instructions.
2. Use the built-in Compose tab to test your workflow with sample inputs. Toggle streaming on or off. When you’re happy, publish it to make it available via API.
3. Go to Settings and generate a secret key (sk_...). This key authenticates all your API requests and is tied to your organization’s credit balance.
4. Make a POST request to /api/v1/run/{workflow_id} with your inputs. For streaming, first get a token from /api/v1/token, then call the /stream endpoint.
5. Integrate the response into your app — whether it’s a chatbot, content generator, learning platform, or anything else. Scale with confidence.
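The non-streaming run request (step 4) can be sketched as a small helper. The response shape (`success`, `result`) mirrors the Quick Start example below; the request-building is split out so it is easy to test, and the `AITUTOR_API_KEY` env var name is an assumption.

```typescript
const BASE_URL = "https://aitutor-api.vercel.app";

// Builds the run request for a workflow without sending it.
export function buildRunRequest(workflowId: string, inputs: Record<string, string>) {
  return {
    url: `${BASE_URL}/api/v1/run/${workflowId}`,
    init: {
      method: "POST" as const,
      headers: {
        Authorization: `Bearer ${process.env.AITUTOR_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(inputs),
    },
  };
}

// Runs the workflow and returns the result string.
export async function runWorkflow(
  workflowId: string,
  inputs: Record<string, string>,
): Promise<string> {
  const { url, init } = buildRunRequest(workflowId, inputs);
  const res = await fetch(url, init);
  const data = (await res.json()) as { success: boolean; result?: string };
  if (!data.success) throw new Error("Workflow run failed");
  return data.result ?? "";
}
```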

Quick Start

Run your first workflow in seconds:
curl --request POST \
     --url https://aitutor-api.vercel.app/api/v1/run/YOUR_WORKFLOW_ID \
     --header 'Authorization: Bearer YOUR_API_KEY' \
     --header 'Content-Type: application/json' \
     --data '{"topic": "quantum computing"}'
Response:
{
  "success": true,
  "result": "Quantum computing leverages quantum mechanical phenomena..."
}

Rate Limits & Credits

Each API key has a configurable rate limit (default: 10 requests per second) using a sliding window algorithm. Rate limit headers are included in every response:
  • x-ratelimit-limit — Your limit per window
  • x-ratelimit-remaining — Remaining requests
If the rate limit is exceeded, the API returns 429 Too Many Requests.
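When you hit a 429, the usual remedy is to retry with exponential backoff. A minimal sketch — the base delay and attempt count here are illustrative choices, not API requirements:

```typescript
// Delay before retry attempt n (0-based): 500ms, 1s, 2s, 4s, ...
export function backoffDelayMs(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Retries the request on 429 (rate limit) and 500 (server error) responses.
export async function withRetries(
  fn: () => Promise<Response>,
  maxAttempts = 4,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    const retryable = res.status === 429 || res.status === 500;
    if (!retryable || attempt + 1 >= maxAttempts) return res;
    await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
  }
}
```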
Every workflow run deducts credits from your organization’s balance based on token usage. If your balance reaches zero, API requests are blocked with a 402 Payment Required error.

Credit packs:
  • Starter Pack — 1,000 credits for $25
  • Pro Pack — 2,500 credits for $50 (best value)
Purchase credits in the Settings dashboard. You can also set a spend limit to cap monthly usage.
Stream tokens are single-use and expire after their TTL (default: 60 seconds, max: 300 seconds). They are consumed on first use and cannot be reused. Generate them server-side, then pass to the client for secure browser-based streaming.
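The token hand-off might look like the sketch below. The /api/v1/token request body and the `token` field in the response are assumptions based on the description above, not a documented schema:

```typescript
const BASE_URL = "https://aitutor-api.vercel.app";

// Clamp a requested TTL (seconds) to the documented maximum of 300.
// The lower bound of 1 second is an assumption.
export function clampTtl(requested: number): number {
  return Math.min(Math.max(requested, 1), 300);
}

// Server side: mint a single-use stream token, then hand it to the browser.
// Body fields and response shape are assumptions for this sketch.
export async function mintStreamToken(
  apiKey: string,
  workflowId: string,
  ttl = 60,
): Promise<string> {
  const res = await fetch(`${BASE_URL}/api/v1/token`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ workflowId, ttl: clampTtl(ttl) }),
  });
  const data = (await res.json()) as { token: string };
  return data.token;
}
```

The browser then uses the minted token (rather than the `sk_` key) to call the /stream endpoint; since tokens are single-use and auto-expire, a leaked token has a narrow blast radius.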

Supported Models

AI Tutor API provides access to 50+ models across all major providers:

OpenAI

GPT-5, GPT-5.2, GPT-4.1, GPT-4o, o3, o4-mini, and more

Anthropic

Claude Sonnet 4.5, Claude Opus 4.5, Claude Haiku 4.5

Google

Gemini 2.5 Pro, Gemini 2.5 Flash

xAI

Grok 3, Grok 3 Mini, Grok 4.1, Grok 4.1 Fast

Perplexity

Sonar, Sonar Pro, Sonar Reasoning, Sonar Reasoning Pro

Open Source

Llama 3, Mistral, DeepSeek, Qwen 2.5, Command R+, and more

Tips for Best Results

Use well-structured templates with specific {{variables}} to get focused, relevant responses. The more context you provide in the system instructions, the better the output.
Use GPT-5 or Claude Sonnet 4.5 for complex reasoning tasks. Use GPT-4.1 or Gemini 2.5 Flash for fast, cost-effective responses. Use Perplexity Sonar when you need web-grounded answers with citations.
For user-facing applications, always use streaming — it gives immediate feedback and feels dramatically faster than waiting for the full response.
Always check for success: false in responses. Implement retries for 429 (rate limit) and 500 (server error) responses with exponential backoff.
Keep sk_ keys server-side only. For client-side streaming, generate a short-lived token via /api/v1/token and pass it to the browser. Tokens are single-use and auto-expire.
Use the Statistics dashboard to track API calls, token usage, active models, and peak hours. Set a spend limit to prevent unexpected charges.

Why AI Tutor API?

  • Ship in Hours: Go from idea to live AI endpoint without managing infrastructure, model hosting, or prompt engineering pipelines.
  • 50+ Models, One API: Switch between OpenAI, Anthropic, Google, xAI, Perplexity, and open-source models without changing a line of code.
  • Production-Ready Security: Bearer authentication, single-use tokens, server-side rate limiting, and credit enforcement — all built in from day one.
  • Real-time Streaming: Deliver token-by-token responses compatible with the Vercel AI SDK, React, and any HTTP client.
  • Transparent Pricing: Pay only for what you use with simple credit packs. No hidden fees, no per-seat pricing, no minimum commitments.
  • Full Control: Visual workflow builder, customizable templates, spend limits, usage analytics, and bring-your-own-key support.