At the heart of Sim RaceCenter's AI-powered broadcast system is a deceptively simple question: what should the camera show next?
In a live sim race, there are dozens of cars on track, battles forming and collapsing every lap, pit stops, incidents, and flag changes. A human broadcast director makes these decisions instinctively. Our AI Director does it through a two-phase architecture we call the Planner–Executor pattern.
This post walks through the actual code that powers this system.
The Two-Phase Architecture
The system splits the problem into two distinct AI calls with different responsibilities:
| Phase | Model | When | Job |
|---|---|---|---|
| Planner | Gemini 2.5 Pro | Once at session check-in | Generate a library of sequence templates tailored to this specific race |
| Executor | Gemini 2.5 Flash | Every sequence request (~every 10-30s) | Pick the best template for right now and fill in the variables |
The Planner does the expensive, creative work once. The Executor makes fast, cheap decisions continuously throughout the race.
Phase 1: The Planner
When the Director application checks in to a race session, it sends its capabilities — which intents it supports, what hardware is connected, and the session configuration. This triggers the Planner.
What the Planner Sees
The `buildPlannerPrompt` function constructs a detailed prompt that includes:
- The intent registry — every action the Director client can execute (`broadcast.showLiveCam`, `obs.switchScene`, `system.wait`, etc.)
- Hardware connections — which systems are actually online (OBS, Discord, simulator)
- Session configuration — drivers, their car numbers, assigned rigs, and OBS scenes
- The operator's existing sequences — uploaded as training examples so the model matches the operator's style
- Race context — session type (Practice / Qualify / Race), track, caution rules, field size
The prompt also specifies template categories based on session type. A Race session gets battle, leader, incident, caution, pit stop, and victory templates. Practice sessions get solo driver and scenic templates instead. Qualifying gets hot lap and timing comparison templates.
Restrictions
The Planner prompt encodes rules about what templates should not be generated:
- No pace car templates for sessions with local cautions (no pace car is deployed)
- No victory templates during practice sessions
- No caution templates during qualifying
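Taken together, the category and restriction rules above amount to a simple mapping from session configuration to allowed template categories. Here is an illustrative sketch of that mapping — the function name and category strings are assumptions; in the real system these rules live inside the Planner prompt text, not in code:

```typescript
type SessionType = 'Practice' | 'Qualify' | 'Race';

// Illustrative mapping of session type to template categories.
function templateCategories(session: SessionType, localCautionsOnly: boolean): string[] {
  let categories: string[] = [];
  switch (session) {
    case 'Race':
      categories = ['battle', 'leader', 'incident', 'caution', 'pit-stop', 'victory', 'pace-car'];
      break;
    case 'Practice':
      categories = ['solo-driver', 'scenic']; // no victory templates in practice
      break;
    case 'Qualify':
      categories = ['hot-lap', 'timing-comparison']; // no caution templates in qualifying
      break;
  }
  // Sessions with local cautions deploy no pace car, so drop those templates.
  if (localCautionsOnly) {
    categories = categories.filter((c) => c !== 'pace-car');
  }
  return categories;
}
```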
What the Planner Produces
The model returns a JSON array of `SequenceTemplate` objects — typically 8 to 20 per session. Each template is a reusable pattern with placeholder variables:

```typescript
interface SequenceTemplate {
  id: string;
  raceSessionId: string;
  name: string;
  applicability: string; // "when to use this template"
  priority: 'normal' | 'incident' | 'caution';
  durationRange: { min: number; max: number };
  steps: SequenceStep[]; // With ${variable} placeholders
  variables: SequenceVariable[];
  source: 'ai-planner' | 'operator-library' | 'hybrid';
}
```

Here's what a typical Battle Camera template looks like:
```json
{
  "name": "Battle Camera",
  "applicability": "Two drivers within 2 seconds of each other",
  "priority": "normal",
  "durationRange": { "min": 15000, "max": 35000 },
  "steps": [
    { "id": "step_1", "intent": "broadcast.showLiveCam",
      "payload": { "carNum": "${targetDriver}", "camGroup": "${cameraGroup}" } },
    { "id": "step_2", "intent": "system.wait",
      "payload": { "durationMs": "${durationMs}" } },
    { "id": "step_3", "intent": "broadcast.showLiveCam",
      "payload": { "carNum": "${secondDriver}", "camGroup": "${cameraGroup}" } },
    { "id": "step_4", "intent": "system.wait",
      "payload": { "durationMs": "${durationMs}" } }
  ],
  "variables": [
    { "name": "targetDriver", "type": "text", "required": true, "source": "cloud" },
    { "name": "secondDriver", "type": "text", "required": true, "source": "cloud" },
    { "name": "cameraGroup", "type": "text", "required": true, "source": "cloud" },
    { "name": "durationMs", "type": "number", "required": true, "source": "cloud" }
  ]
}
```

Notice the `${variable}` placeholders in the step payloads. The template defines what to do (switch camera, wait, switch again), while the who and how long are left open for the Executor.
Validation and Fallbacks
The `parseTemplates` function validates every template the model returns:

- Checks that required fields exist (`name`, `applicability`, `priority`, `steps`)
- Validates that `priority` is one of `normal`, `incident`, or `caution`
- Filters out steps with invalid intents (must exist in the `INTENT_REGISTRY`)
- Ensures every step has an ID
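The validation pass above can be sketched as follows — a Set-based `INTENT_REGISTRY` and the permissive raw-template shape are both assumptions, and this version rejects templates whose steps lack IDs rather than assigning new ones:

```typescript
// Illustrative intent registry; the real one lists every supported intent.
const INTENT_REGISTRY = new Set(['broadcast.showLiveCam', 'obs.switchScene', 'system.wait']);

interface RawTemplate {
  name?: string;
  applicability?: string;
  priority?: string;
  steps?: { id?: string; intent?: string; payload?: unknown }[];
}

function parseTemplates(raw: RawTemplate[]): RawTemplate[] {
  return raw.filter((t) => {
    // Required fields must exist.
    if (!t.name || !t.applicability || !t.priority || !t.steps) return false;
    // Priority must be one of the three known values.
    if (!['normal', 'incident', 'caution'].includes(t.priority)) return false;
    // Drop steps whose intent is not in the registry; keep the rest.
    t.steps = t.steps.filter((s) => s.intent && INTENT_REGISTRY.has(s.intent));
    // Every surviving step needs an ID.
    return t.steps.every((s) => Boolean(s.id));
  });
}
```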
If the Planner model fails or returns too few templates, the system falls back to a set of default templates — four reliable patterns (Battle Camera, Leader Coverage, Incident Response, Solo Driver) plus two field coverage templates for when the configured broadcast driver goes off-track or into the pits.
Templates are stored in Cosmos DB with a 7-day TTL and partitioned by `raceSessionId`.
Phase 2: The Executor
Every time the Director client needs a new sequence (roughly every 10–30 seconds during a live broadcast), it calls the `sequences/next` endpoint. This triggers the Executor.
Building the Decision Context
The Executor prompt is assembled from live race data:
Race State — built from the latest telemetry snapshot:
- Current flags (GREEN, YELLOW, RED)
- Leaderboard with top 20 cars (position, car number, driver name, last lap time, on/off track status)
- Average lap time
Battle Detection — the system finds cars fighting for position:
- If the Director client reports battles (from its direct simulator connection), those are used
- Otherwise, the Executor detects battles from the leaderboard — any cars within 1.0 second of each other
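The leaderboard fallback can be sketched as a single pass over adjacent cars — the entry shape and the gap field name here are assumptions, not the actual telemetry schema:

```typescript
interface LeaderboardEntry {
  position: number;
  carNum: string;
  gapToAheadSec: number; // time gap to the car one position ahead
}

// Flag any pair of adjacent cars separated by 1.0 second or less.
function detectBattles(leaderboard: LeaderboardEntry[]): [string, string][] {
  const battles: [string, string][] = [];
  for (let i = 1; i < leaderboard.length; i++) {
    if (leaderboard[i].gapToAheadSec <= 1.0) {
      battles.push([leaderboard[i - 1].carNum, leaderboard[i].carNum]);
    }
  }
  return battles;
}
```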
Session Context — from the Director's RaceContext:
- Session type, track name, series
- Caution rules (full course vs. local)
- Leader lap, laps remaining, time remaining
- Cars currently pitting
- Which car is currently on camera
Configured Broadcast Drivers — the operator's focus list, with real-time status:
- ON-TRACK, OFF-TRACK, or IN-PIT for each configured driver
- If the primary driver is off-track, the prompt explicitly tells the model to select a field coverage template
Template List — the full library generated by the Planner, presented as a numbered list with each template's name, applicability description, priority, duration range, and required variables.
Selection Guidelines
The Executor prompt includes strict rules:
- No caution/pace car templates when the flag is GREEN
- No victory templates during practice or qualifying
- No battle templates if no battles are detected
- No pit stop templates if no one is pitting
- Avoid repeating the same template or driver from the last sequence
- Use car numbers from the actual leaderboard
The Model's Response
The Executor (Gemini 2.5 Flash, chosen for speed) returns a simple JSON decision:
```json
{
  "templateIndex": 2,
  "variables": {
    "targetDriver": "5",
    "secondDriver": "8",
    "cameraGroup": "Chase",
    "durationMs": 10000
  },
  "durationMs": 30000
}
```

Three things happen here:

- `templateIndex` — which template from the library to use (by array index)
- `variables` — concrete values for every placeholder in the template
- `durationMs` — total sequence duration (clamped to the template's `durationRange`)
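The clamping step amounts to a one-liner — the helper name is an assumption:

```typescript
// Clamp the Executor's requested duration into the template's allowed range.
function clampDuration(requested: number, range: { min: number; max: number }): number {
  return Math.min(range.max, Math.max(range.min, requested));
}
```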
Warning
The `durationMs` appears in two places deliberately. The top-level value is the total sequence length. The value inside `variables` is the per-step camera hold time used by `system.wait` steps. If the per-step value is missing, every wait step receives the literal string `"${durationMs}"` instead of a number — the sequence fires all camera switches instantly with zero visible hold time.
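One defensive option — an assumption, not the shipped code — is to derive the per-step hold from the total before resolution, splitting it across the wait steps:

```typescript
// If the Executor omitted the per-step durationMs, split the total sequence
// duration evenly across the wait steps so "${durationMs}" always resolves
// to a number instead of surviving as a literal placeholder string.
function withDefaultHold(
  variables: Record<string, unknown>,
  totalMs: number,
  waitStepCount: number,
): Record<string, unknown> {
  if (variables['durationMs'] === undefined) {
    return { ...variables, durationMs: Math.floor(totalMs / Math.max(1, waitStepCount)) };
  }
  return variables;
}
```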
Variable Resolution
Once the Executor returns its decision, the `resolveTemplate` function performs straightforward string substitution:

```typescript
for (const [key, value] of Object.entries(step.payload)) {
  if (typeof value === 'string'
      && value.startsWith('${')
      && value.endsWith('}')) {
    const varName = value.slice(2, -1);
    resolvedPayload[key] = variables[varName] ?? value;
  } else {
    resolvedPayload[key] = value;
  }
}
```

For each step in the template, it iterates through the payload. Any string value matching the `${varName}` pattern gets replaced with the corresponding value from the Executor's decision. Non-placeholder values pass through unchanged. If a variable isn't provided, the placeholder is kept as-is (a signal that something went wrong).
Worked Example: A Battle Sequence
Let's trace a complete sequence from detection to execution.
Race State
The race is at Daytona. Flag is GREEN. The leaderboard shows cars #5 and #8 separated by 0.4 seconds battling for P3. The operator's configured driver (#14) is on track in P7.
Executor Decision
The model sees the battle in the leaderboard data, matches it to the "Battle Camera" template (index 2), and returns:
```json
{
  "templateIndex": 2,
  "variables": {
    "targetDriver": "5",
    "secondDriver": "8",
    "cameraGroup": "Chase",
    "durationMs": 10000
  },
  "durationMs": 30000
}
```

After Resolution
`resolveTemplate` produces a `PortableSequence` — the universal wire format consumed by the Director client:

```json
{
  "id": "seq_ai_1744646400000",
  "name": "Battle Camera",
  "priority": false,
  "steps": [
    { "id": "step_1", "intent": "broadcast.showLiveCam",
      "payload": { "carNum": "5", "camGroup": "Chase" } },
    { "id": "step_2", "intent": "system.wait",
      "payload": { "durationMs": 10000 } },
    { "id": "step_3", "intent": "broadcast.showLiveCam",
      "payload": { "carNum": "8", "camGroup": "Chase" } },
    { "id": "step_4", "intent": "system.wait",
      "payload": { "durationMs": 10000 } }
  ],
  "metadata": {
    "totalDurationMs": 30000,
    "generatedAt": "2026-04-14T12:00:00.000Z",
    "source": "ai-director",
    "templateId": "tmpl_abc123",
    "templateName": "Battle Camera"
  }
}
```

The Director client receives this and executes each step in order: switch to car #5's chase camera, hold for 10 seconds, switch to car #8, hold for 10 seconds. The broadcast shows a classic battle cut between two cars fighting for position.
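That step-by-step execution can be sketched as a loop over the steps, treating `system.wait` specially — the dispatch callback here is an illustration, not the actual Director client code:

```typescript
interface SequenceStep {
  id: string;
  intent: string;
  payload: Record<string, unknown>;
}

// Walk the sequence in order: wait steps hold the current shot,
// everything else is handed to its intent handler.
async function executeSequence(
  steps: SequenceStep[],
  dispatch: (step: SequenceStep) => void,
): Promise<void> {
  for (const step of steps) {
    if (step.intent === 'system.wait') {
      await new Promise((resolve) => setTimeout(resolve, step.payload['durationMs'] as number));
    } else {
      dispatch(step);
    }
  }
}
```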
The Wire Format: PortableSequence
Every sequence — whether generated by the AI, built from the operator's library, or injected from the command buffer — uses the same `PortableSequence` format:

```typescript
interface PortableSequence {
  id: string;
  name?: string;
  priority?: boolean; // If true, cancel-and-replace current
  steps: SequenceStep[];
  metadata?: {
    totalDurationMs?: number;
    generatedAt?: string;
    source?: 'ai-director' | 'command-buffer' | 'library';
  };
}
```

This single format means the Director client doesn't care how a sequence was created. It just executes the steps.
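The `priority` flag implies a cancel-and-replace policy on the client. A minimal sketch of that queue behavior — the class and its API are assumptions; a fuller client would also cancel the currently running sequence, not just pending ones:

```typescript
interface QueuedSequence {
  id: string;
  priority?: boolean;
}

class SequenceQueue {
  private queue: QueuedSequence[] = [];

  enqueue(seq: QueuedSequence): void {
    if (seq.priority) {
      // Priority sequences (incidents, cautions) preempt everything pending.
      this.queue = [seq];
    } else {
      this.queue.push(seq);
    }
  }

  next(): QueuedSequence | undefined {
    return this.queue.shift();
  }
}
```

An incident sequence arriving with `priority: true` would then displace whatever ordinary coverage was waiting its turn.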
What's Next
The Planner–Executor pattern is designed to evolve. Future directions include:
- Adaptive template generation — the Planner could analyze past broadcast sessions to learn which templates the operator uses most
- Mid-session replanning — regenerate templates when race conditions change dramatically (e.g., rain, red flag)
- Operator feedback loop — learn from template overrides to improve future selections
- Multi-camera templates — sequences that coordinate picture-in-picture and split-screen compositions