Compose OTIO

POST /postproduction/v1/compose-otio

What it does

Use this when your workflow already knows exactly which assets belong on the timeline and in what order. You send one explicit manifest of image, video, and audio assets plus a deterministic sequence definition, and the endpoint returns an OpenTimelineIO timeline artifact ready for export or downstream editorial tooling.

This is not an AI planning endpoint. The caller owns the cut plan. Compose OTIO simply validates that plan, applies stable timeline math, and serializes the result into OTIO with predictable clip names, trims, transitions, and track metadata.

How it works

Think of this endpoint as a deterministic timeline compiler. The request declares the project settings, the available folder items, and the exact editorial intent. The API validates those references against the asset manifest and emits a concrete OTIO timeline instead of inventing a plan for you.

Asset path values are relative manifest paths, not absolute workstation paths. If your downstream importer needs real local file URLs, provide folder.mediaRootPath and the API will derive OTIO target_url values from that root plus each asset's relative path.
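If your importer needs to know those derived URLs ahead of time, you can predict them client-side. A minimal Python sketch, assuming the API simply joins the root and each relative path into a file URL (the exact normalization the API applies is an assumption):

```python
def predict_target_url(media_root_path: str, relative_path: str) -> str:
    """Predict the OTIO target_url the API would derive from
    folder.mediaRootPath plus an asset's relative manifest path.
    The normalization here is an assumption, not the documented algorithm."""
    # Normalize Windows-style separators and strip a trailing slash.
    root = media_root_path.replace("\\", "/").rstrip("/")
    return f"file:///{root.lstrip('/')}/{relative_path}"

url = predict_target_url("C:/media/projects/forest-morning", "rushes/stream-walk.mp4")
# "file:///C:/media/projects/forest-morning/rushes/stream-walk.mp4"
```

Omit mediaRootPath entirely if you want the OTIO to carry only the relative manifest paths.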

Audio layout is caller-controlled. Keep a video clip's source audio embedded by default, or set embeddedAudio.separateTrack when you want that same source audio exported onto its own audio track. For regular audio layers, reuse the same trackId to merge clips onto one lane, for example one lane for narration and effects and another lane for BGM.

V1 keeps audio-layer controls intentionally small: timing via fromSec/toSec, optional trackId, semantic role, placement, and optional clipNameHint.

For music beds, placement: "loopToFit" already loops the source and trims the last repetition to the exact remaining duration. You do not need a separate mode such as loopToFitWithTrim for that behavior.
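As an illustration of that repeat-then-trim behaviour, here is a small Python sketch of the loopToFit math (the internal algorithm is an assumption; only the documented behaviour is modelled):

```python
def loop_to_fit(source_duration_sec: float, from_sec: float, to_sec: float) -> list[float]:
    """Sketch of loopToFit placement: repeat the source until the
    requested interval is filled, trimming the final repetition to the
    exact remaining duration rather than dropping it."""
    target = to_sec - from_sec
    full_loops = int(target // source_duration_sec)
    remainder = target - full_loops * source_duration_sec
    segments = [source_duration_sec] * full_loops
    if remainder > 1e-9:
        segments.append(remainder)  # last repetition is trimmed, not dropped
    return segments

# The 6 s music bed from the example below filling a 15 s interval:
print(loop_to_fit(6.0, 0.0, 15.0))  # [6.0, 6.0, 3.0]
```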

Default success transport is responseFormat=file, which returns the OTIO payload as a file-style attachment response. If your workflow prefers an application/json envelope, send responseFormat: "json" and the same artifact comes back inside the standard success contract.

What comes back

  • Deterministic OTIO output. The artifact preserves caller-declared ordering, trims, and transition math.
  • File mode by default. Success returns an attachment response with OTIO JSON payload and transport headers.
  • Optional JSON wrapper mode. Useful when your client wants one uniform JSON contract for artifacts and metadata.
  • Consistent validation failures. Errors are always returned as JSON envelopes even when success mode defaults to file transport.

Why use it?

  • Keep planning logic on the caller side. Ideal for editors, automation pipelines, and agents that already know the intended timeline.
  • Export OTIO without bespoke renderer code. You provide the manifest and intent; the API handles serialization and transport.
  • Validate timeline feasibility early. The endpoint catches missing assets, invalid trims, and over-limit compositions before export time.
  • Bridge into downstream editorial tooling. OTIO output is easy to hand off to importers, converters, or archive workflows.
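Because the endpoint validates references and trims before export, a caller can run the same checks locally before spending credits. A minimal Python sketch of that pre-flight validation (the helper name and error wording are illustrative, not part of the API):

```python
def prevalidate(folder: dict, intent: dict) -> list[str]:
    """Client-side sketch of the endpoint's basic validation: every
    referenced itemId must exist in the folder manifest and every trim
    must be well-ordered. Server-side error wording is an assumption."""
    known = {item["id"] for item in folder["items"]}
    errors: list[str] = []
    clips = intent.get("videoSequence", []) + intent.get("audioLayers", [])
    for clip in clips:
        if clip["itemId"] not in known:
            errors.append(f"unknown itemId: {clip['itemId']}")
        if clip["fromSec"] >= clip["toSec"]:
            errors.append(f"invalid trim on {clip['itemId']}: fromSec >= toSec")
    return errors
```

An empty list means the explicit plan at least references real assets with sane trims; limit checks (manifest size, total duration) still happen server-side.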

Examples

cURL example

curl -X POST 'https://api.creatornode.io/postproduction/v1/compose-otio' \
  -H 'Content-Type: application/json' \
  -H 'X-API-Key: YOUR_KEY' \
  -d '{
    "project": {
      "name": "Forest Morning Cut",
      "fps": 24,
      "resolution": { "w": 1920, "h": 1080 },
      "audioSampleRate": 48000
    },
    "folder": {
      "mediaRootPath": "C:/media/projects/forest-morning",
      "items": [
        { "id": "img-opening", "kind": "image", "path": "storyboards/forest-opening.png", "meta": { "w": 1920, "h": 1080 } },
        { "id": "vid-stream", "kind": "video", "path": "rushes/stream-walk.mp4", "meta": { "durationSec": 8, "fps": 24, "hasAudio": true } },
        { "id": "aud-music", "kind": "audio", "path": "audio/forest-piano.wav", "meta": { "durationSec": 6, "sampleRate": 48000, "channels": 2 } }
      ]
    },
    "intent": {
      "kind": "explicit",
      "videoSequence": [
        {
          "itemId": "img-opening",
          "fromSec": 0,
          "toSec": 2,
          "clipNameHint": "Opening Still",
          "transitionOut": { "type": "crossDissolve", "durSec": 0.5 }
        },
        {
          "itemId": "vid-stream",
          "fromSec": 1,
          "toSec": 5,
          "clipNameHint": "Forest Walk",
          "embeddedAudio": { "enabled": true, "separateTrack": true, "trackId": "nat", "role": "embeddedVideo" }
        }
      ],
      "audioLayers": [
        {
          "itemId": "aud-music",
          "trackId": "bgm",
          "fromSec": 0,
          "toSec": 15,
          "placement": "loopToFit",
          "role": "music",
          "clipNameHint": "Music Bed"
        }
      ]
    },
    "output": { "includeClipNames": true, "otioSchema": "Timeline.1" }
  }'

Response behaviour

200 OK
Content-Type: application/json; charset=utf-8
Content-Disposition: attachment; filename="Forest Morning Cut.otio"
X-Tier: free
X-OTIO-Schema: Timeline.1

{
  "OTIO_SCHEMA": "Timeline.1",
  "name": "Forest Morning Cut",
  "tracks": {
    "OTIO_SCHEMA": "Stack.1",
    "children": [
      { "OTIO_SCHEMA": "Track.1", "kind": "Video", "children": ["..."] },
      { "OTIO_SCHEMA": "Track.1", "kind": "Audio", "children": ["..."] }
    ]
  }
}

Tips & tricks

  • Keep the request explicit. This v1 endpoint does not infer edits, reorder assets, or invent missing timing decisions.
  • Use file mode unless you need uniform envelopes. It is the default transport and mirrors how downstream artifact workflows usually consume OTIO.
  • Validate your asset references early. Every itemId in the explicit plan must exist in the folder manifest.
  • Use mediaRootPath for local imports. Keep asset path values relative and let the API derive absolute file URLs only when you intentionally provide a local root.
  • Group clips onto a shared track with trackId. Audio layers that share the same trackId string are merged onto one OTIO track, sorted by fromSec. Omit it to keep each layer on its own track.
  • loopToFit already trims the last loop. It repeats an audio source until the requested interval is filled and shortens the final repetition when only part of the loop fits.
  • Apply gain and fades downstream. V1 does not expose per-layer gain or fade envelopes; keep mix automation in the editor or importer that consumes the OTIO.
  • Detach video source audio only when you need it. Set embeddedAudio.separateTrack to split a video clip's source audio out of the video lane and optionally give it a trackId so it joins a specific audio lane.
  • Watch free-tier duration and manifest size. Over-limit requests fail fast with typed JSON errors and upgrade recommendations.
  • See the interactive schema. Full OpenAPI reference: Compose OTIO docs.
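Several tips above concern audio lane grouping. A small Python sketch of the documented trackId merge rule (the auto-generated names for trackId-less layers are illustrative, not the API's actual naming):

```python
from collections import defaultdict
from itertools import count

def group_audio_layers(audio_layers: list[dict]) -> dict[str, list[dict]]:
    """Sketch of the trackId merge behaviour: layers sharing a trackId
    land on one track sorted by fromSec; layers without one each get
    their own track (names generated here are placeholders)."""
    anon = count(1)
    tracks: dict[str, list[dict]] = defaultdict(list)
    for layer in audio_layers:
        key = layer.get("trackId") or f"audio-{next(anon)}"
        tracks[key].append(layer)
    for clips in tracks.values():
        clips.sort(key=lambda c: c["fromSec"])  # deterministic order within a lane
    return dict(tracks)
```

For example, narration and effects layers tagged trackId "fx" merge into one lane in fromSec order, while an untagged music layer stays on a lane of its own.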

Cost & Limits

| Feature | Detail |
| --- | --- |
| Base cost | 4 credits per deterministic OTIO composition request |
| Input format | application/json (project + folder manifest + explicit intent) |
| Default success mode | File-style OTIO artifact response with attachment headers |
| Optional success mode | JSON wrapper when responseFormat is set to json |

Tier Limits

| Limit | Free | Premium |
| --- | --- | --- |
| Max manifest items | 20 | 200 |
| Max total timeline duration | 180 sec | 3600 sec |
| Response transport | file or json | file or json |
| Composition mode | explicit only | explicit only |

Use this endpoint when deterministic export matters more than generative planning. If your workflow starts from images or narration instead of an explicit cut plan, pair it with Describe Scenes and Scene Timestamps upstream.

Other Endpoints