Migrate from ElevenLabs

Concept mapping

| ElevenLabs | Smallest Atoms |
| --- | --- |
| Agent (Conversational AI) | Agent (single_prompt workflow) |
| Conversation (WebSocket session) | Call (Agent WebSocket session) |
| Voice (voice_id) | TTS voice ID from the Lightning v3.1 catalog, or a cloned voice (prefix voice_) |
| conversation_config.tts + conversation_config.agent | Agent draft config: PATCH /agent/{id}/drafts/{draftId}/config |
| @elevenlabs/client, @elevenlabs/react | @smallest-ai/agent-sdk (browser) or raw WebSocket |
| xi-api-key header + signed URL | Authorization: Bearer sk_... or ?token=sk_... |

1. WebSocket URL

The primary interface you are porting.

| | ElevenLabs | Smallest Atoms |
| --- | --- | --- |
| URL | wss://api.elevenlabs.io/v1/convai/conversation | wss://api.smallest.ai/atoms/v1/agent/connect |
| Required query params | agent_id | agent_id |
| Optional query params | token (signed, for private agents) | token, mode (webcall or chat), sample_rate |
| Header auth on WebSocket handshake | not documented | Authorization: Bearer sk_... |
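
The Smallest connect URL and its query parameters can be assembled programmatically; a minimal sketch (the key and agent ID values are placeholders):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_connect_url(api_key: str, agent_id: str,
                      mode: str = "webcall", sample_rate: int = 24000) -> str:
    """Assemble the Smallest Atoms WebSocket URL with query-param auth."""
    base = "wss://api.smallest.ai/atoms/v1/agent/connect"
    query = urlencode({
        "token": api_key,
        "agent_id": agent_id,
        "mode": mode,
        "sample_rate": sample_rate,
    })
    return f"{base}?{query}"

url = build_connect_url("sk_test", "agent_123")
```

If you prefer header auth, connect to the bare URL with only agent_id and send Authorization: Bearer sk_... on the handshake instead.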

2. Authentication

WebSocket handshake

Pass the API key as a header or as the token query param. For Smallest, both options work for every agent. ElevenLabs requires a signed URL for private agents.

ElevenLabs (private agent, signed URL)

$ curl "https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=$AGENT_ID" \
>   -H "xi-api-key: $ELEVENLABS_API_KEY"
$ # Response: { "signed_url": "wss://api.elevenlabs.io/v1/convai/conversation?agent_id=...&token=..." }

Smallest Atoms

$ # No separate signed-URL endpoint. The API key works directly on the WebSocket.
$ # Either of these is valid:
$ #
$ # Header: Authorization: Bearer $SMALLEST_API_KEY
$ # Query:  wss://api.smallest.ai/atoms/v1/agent/connect?token=$SMALLEST_API_KEY&agent_id=$AGENT_ID

Create a Smallest API key at app.smallest.ai/dashboard/api-keys.

REST calls

| Provider | Header |
| --- | --- |
| ElevenLabs | xi-api-key: <your-api-key> |
| Smallest Atoms | Authorization: Bearer sk_<your-key> |
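
The header difference is small enough to isolate in two helpers; an illustrative sketch for REST calls from Python (key values are placeholders):

```python
def smallest_headers(api_key: str) -> dict:
    """REST auth header for Smallest Atoms: a standard Bearer token."""
    return {"Authorization": f"Bearer {api_key}"}

def elevenlabs_headers(api_key: str) -> dict:
    """REST auth header for ElevenLabs: the custom xi-api-key header."""
    return {"xi-api-key": api_key}
```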

3. Client to server messages

| ElevenLabs | Smallest Atoms |
| --- | --- |
| conversation_initiation_client_data | Not required. The Smallest server opens the session on WebSocket connect. |
| user_audio_chunk with { user_audio_chunk: "<base64>" } | input_audio_buffer.append with { type, audio: "<base64>" } |
| contextual_update with { type, text } | Not supported mid-session. Inject values before the session with pre-call API variables. |
| pong | Not required. Smallest does not use ping/pong. |
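
If your ElevenLabs client already produces base64 audio payloads, the translation is a field rename; a sketch using the message shapes from the table above:

```python
import base64
import json

def to_smallest_audio_message(elevenlabs_msg: dict) -> str:
    """Map ElevenLabs' user_audio_chunk payload to Smallest's
    input_audio_buffer.append; the base64 audio passes through unchanged."""
    return json.dumps({
        "type": "input_audio_buffer.append",
        "audio": elevenlabs_msg["user_audio_chunk"],
    })

pcm = b"\x00\x01" * 480  # 20ms of PCM16 at 24kHz (placeholder samples)
el_msg = {"user_audio_chunk": base64.b64encode(pcm).decode()}
msg = to_smallest_audio_message(el_msg)
```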

4. Server to client events

| ElevenLabs | Smallest Atoms |
| --- | --- |
| ping | Not emitted. |
| user_transcript (user_transcription_event.user_transcript) | transcript with role: "user". |
| agent_response (agent_response_event.agent_response) | transcript with role: "agent". |
| agent_response_correction | Not emitted. |
| audio (audio_event.audio_base_64 with alignment) | output_audio.delta with { type, audio: "<base64>" }. Alignment timings not included. |
| interruption (interruption_event.reason) | interruption (no reason field). |
| agent_chat_response_part (text deltas) | Not emitted. Audio streams; full text is available post-call. |

Smallest adds two events not present in the ElevenLabs protocol:

  • agent_start_talking: fires once per agent turn, before the first output_audio.delta of that turn.
  • agent_stop_talking: fires when the agent turn ends.
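
A minimal dispatcher covering this event mapping might look like the sketch below. The text field name on transcript events is an assumption (check the Realtime Agent API reference), and the handler wiring is illustrative rather than part of either SDK:

```python
import base64

def handle_event(event: dict, state: dict) -> None:
    """Route Smallest Atoms server events roughly where the
    equivalent ElevenLabs handlers would have run."""
    etype = event.get("type")
    if etype == "transcript":
        # Replaces user_transcript / agent_response, keyed by role.
        state.setdefault("transcript", []).append((event["role"], event["text"]))
    elif etype == "output_audio.delta":
        # Replaces the ElevenLabs audio event; no alignment data here.
        state.setdefault("audio", bytearray()).extend(base64.b64decode(event["audio"]))
    elif etype == "agent_start_talking":
        state["agent_talking"] = True   # new event; no ElevenLabs equivalent
    elif etype == "agent_stop_talking":
        state["agent_talking"] = False

state = {}
for ev in [
    {"type": "agent_start_talking"},
    {"type": "output_audio.delta", "audio": base64.b64encode(b"\x00\x00").decode()},
    {"type": "transcript", "role": "agent", "text": "Hello."},
    {"type": "agent_stop_talking"},
]:
    handle_event(ev, state)
```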

Full protocol: Realtime Agent API reference.

5. WebSocket SDK (JavaScript / TypeScript)

Most ElevenLabs Conversational AI integrations use the official browser client. The minimal port:

ElevenLabs

import { Conversation } from "@elevenlabs/client";

const conversation = await Conversation.startSession({ agentId: "..." });
conversation.addEventListener("agent_response", (e) => {
  console.log("agent said:", e.detail);
});

Smallest Atoms

import { AtomsAgent } from "@smallest-ai/agent-sdk";

const agent = new AtomsAgent({ apiKey: "sk_...", agentId: "..." });
await agent.connect();

For the full API surface (configuration, methods, events, push-to-talk, text input, error handling, smoke test), see the WebSocket SDK reference.

6. Node.js (raw WebSocket)

For server-side or non-browser JS runtimes.

ElevenLabs

const WebSocket = require("ws");

// 1. Get a signed URL (private agents). Public agents can skip this step
// and connect directly with ?agent_id=<id>.
const res = await fetch(
  `https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=${process.env.EL_AGENT_ID}`,
  { headers: { "xi-api-key": process.env.EL_API_KEY } }
);
const { signed_url } = await res.json();

// 2. Connect to the signed URL. No headers needed.
const ws = new WebSocket(signed_url);

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  if (event.type === "audio") {
    const audioBytes = Buffer.from(event.audio_event.audio_base_64, "base64");
    // play audioBytes
  }
});

Smallest Atoms

const WebSocket = require("ws");

const url = new URL("wss://api.smallest.ai/atoms/v1/agent/connect");
url.searchParams.set("token", process.env.SMALLEST_API_KEY);
url.searchParams.set("agent_id", process.env.SMALLEST_AGENT_ID);
url.searchParams.set("mode", "webcall");
url.searchParams.set("sample_rate", "24000");

const ws = new WebSocket(url.toString());

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  if (event.type === "output_audio.delta") {
    const audioBytes = Buffer.from(event.audio, "base64");
    // play audioBytes (24kHz PCM16 mono)
  }
});

7. Python (raw WebSocket)

import asyncio, base64, json, wave
import websockets

API_KEY = "sk_..."
AGENT_ID = "..."
URL = (
    f"wss://api.smallest.ai/atoms/v1/agent/connect"
    f"?token={API_KEY}&agent_id={AGENT_ID}&mode=webcall&sample_rate=24000"
)

async def main(wav_path: str):
    with wave.open(wav_path, "rb") as wf:
        pcm = wf.readframes(wf.getnframes())
    chunks = [pcm[i : i + 1920] for i in range(0, len(pcm), 1920)]  # 40ms at 24kHz PCM16

    async with websockets.connect(URL, max_size=None) as ws:
        ev = json.loads(await ws.recv())
        assert ev["type"] == "session.created"

        for chunk in chunks:
            await ws.send(json.dumps({
                "type": "input_audio_buffer.append",
                "audio": base64.b64encode(chunk).decode(),
            }))
            await asyncio.sleep(0.040)
        await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))

        out = bytearray()
        while True:
            ev = json.loads(await ws.recv())
            if ev["type"] == "output_audio.delta":
                out.extend(base64.b64decode(ev["audio"]))
            elif ev["type"] in ("agent_stop_talking", "session.closed"):
                break

    with wave.open("reply.wav", "wb") as wf:
        wf.setnchannels(1); wf.setsampwidth(2); wf.setframerate(24000)
        wf.writeframes(bytes(out))

asyncio.run(main("input.wav"))

Replace API_KEY and AGENT_ID with your own values.

8. Creating an agent (one-time setup)

ElevenLabs creates an agent with a single REST call. Smallest splits creation and configuration into four calls because config is versioned. See Agent Versioning for the full model.

ElevenLabs

$ curl -X POST "https://api.elevenlabs.io/v1/convai/agents/create" \
>   -H "xi-api-key: $ELEVENLABS_API_KEY" \
>   -H "Content-Type: application/json" \
>   -d '{
>     "name": "Support Agent",
>     "conversation_config": {
>       "tts": {"voice_id": "aMSt68OGf4xUZAnLpTU8", "model_id": "eleven_flash_v2"},
>       "agent": {"first_message": "Hello.", "prompt": {"prompt": "You are helpful."}}
>     }
>   }'

Smallest Atoms

$ export SMALLEST_API_KEY="sk_..."
$ BASE="https://api.smallest.ai/atoms/v1"
$
$ # 1. Create the agent.
$ AGENT_ID=$(curl -s -X POST "$BASE/agent" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY" \
>   -H "Content-Type: application/json" \
>   -d '{"name": "Support Agent", "workflowType": "single_prompt"}' \
>   | jq -r .data)
$
$ # 2. Create a draft off the current active version.
$ ACTIVE_VER=$(curl -s "$BASE/agent/$AGENT_ID" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY" | jq -r .data.activeVersionId)
$
$ DRAFT_ID=$(curl -s -X POST "$BASE/agent/$AGENT_ID/drafts" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY" \
>   -H "Content-Type: application/json" \
>   -d "{\"sourceVersionId\": \"$ACTIVE_VER\"}" | jq -r .data.draftId)
$
$ # 3. Patch the draft with prompt and LLM.
$ curl -X PATCH "$BASE/agent/$AGENT_ID/drafts/$DRAFT_ID/config" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY" \
>   -H "Content-Type: application/json" \
>   -d '{
>     "singlePromptConfig": {"prompt": "You are helpful."},
>     "slmModel": "gpt-4o"
>   }'
$
$ # 4. Publish and activate the draft.
$ curl -X POST "$BASE/agent/$AGENT_ID/drafts/$DRAFT_ID/publish" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY" \
>   -H "Content-Type: application/json" \
>   -d '{"label": "initial", "activate": true}'

The first call auto-publishes V1 with defaults. Future config changes flow through drafts and publish a new immutable version each time.
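
Every later config change repeats steps 2–4. As a sketch, that draft → patch → publish sequence can be laid out as data before sending anything (no requests are made here, and the IDs are placeholders):

```python
def versioned_update_plan(base: str, agent_id: str,
                          active_version_id: str, draft_id: str) -> list:
    """The request sequence behind every versioned config change:
    create a draft, patch it, then publish and activate it."""
    return [
        ("POST",  f"{base}/agent/{agent_id}/drafts",
                  {"sourceVersionId": active_version_id}),
        ("PATCH", f"{base}/agent/{agent_id}/drafts/{draft_id}/config",
                  {"singlePromptConfig": {"prompt": "You are helpful."},
                   "slmModel": "gpt-4o"}),
        ("POST",  f"{base}/agent/{agent_id}/drafts/{draft_id}/publish",
                  {"label": "initial", "activate": True}),
    ]

plan = versioned_update_plan("https://api.smallest.ai/atoms/v1",
                             "agent_123", "v1", "draft_1")
```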

9. Voices

ElevenLabs voice IDs (for example aMSt68OGf4xUZAnLpTU8) do not port. Pick a TTS voice ID from the Lightning v3.1 catalog:

$ curl "https://api.smallest.ai/waves/v1/lightning-v3.1/get_voices" \
>   -H "Authorization: Bearer $SMALLEST_API_KEY"

Full reference: GET voices. The Voice Cloning dashboard also lists available voice IDs in a UI.

For a custom voice, clone one from a short audio sample. Cloned voice IDs are prefixed with voice_.

To set the voice on an agent, include it in the draft config payload:

{
  "synthesizer": {"voiceConfig": {"voiceId": "<your-voice-id>"}}
}
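
A sketch of issuing that PATCH from Python's standard library; build_voice_patch is an illustrative helper, not part of any SDK, and the request is constructed but not sent:

```python
import json
import urllib.request

def build_voice_patch(base: str, agent_id: str, draft_id: str,
                      api_key: str, voice_id: str) -> urllib.request.Request:
    """PATCH request that sets the TTS voice on an agent draft.
    Send with urllib.request.urlopen(req) when ready."""
    return urllib.request.Request(
        url=f"{base}/agent/{agent_id}/drafts/{draft_id}/config",
        method="PATCH",
        data=json.dumps(
            {"synthesizer": {"voiceConfig": {"voiceId": voice_id}}}
        ).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_voice_patch("https://api.smallest.ai/atoms/v1",
                        "agent_123", "draft_1", "sk_test", "voice_abc")
```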

10. Unsupported in Smallest Atoms

| ElevenLabs feature | Status in Smallest |
| --- | --- |
| Character-level alignment on TTS audio (alignment.chars, char_durations_ms, char_start_times_ms) | Not available. output_audio.delta is raw PCM without per-character timing. |
| agent_response_correction event | Not emitted. The transcript stored in the call log is final. |
| contextual_update mid-session | Not supported. Inject values before the session via pre-call API variables. |
| agent_chat_response_part (text deltas) | Not emitted. Audio streams; full text is available post-call. |

11. Unsupported in ElevenLabs

| Smallest Atoms feature | Reference |
| --- | --- |
| Prompt versioning (drafts, publish, activate, rollback, per-version test calls, metric comparison) | Agent Versioning |
| Post-call disposition metrics extracted from the transcript, surfaced in the call log | Post-Call Metrics, developer guide |
| Outbound campaigns with audiences, scheduling, and retry control | Running Campaigns |

Next steps