
WebRTC Video

Provides low-latency video streaming over an end-to-end encrypted WebRTC connection. Once installed you can use the provided embedding instructions to embed the video widget for that robot in your own web application.


  • Typical latency of just 200ms
  • Allows setting the desired bit rate (in KB/s), e.g., to trade off between high definition and lower bandwidth cost
  • Multi-camera support
    • Can be easily arranged into layouts using CSS
  • Supports various video sources:
    • Video4Linux cameras, i.e., virtually all USB cameras
      • Allows you to select from the list of resolutions and frame rates supported by your cameras
    • ROS image topics (support for bayer encoding coming soon)
    • ROS 2 image topics
    • RTSP sources, e.g., from IP cameras
  • Utilises H.264 for video compression
  • Hardware acceleration on Nvidia platforms (e.g., Jetsons)
  • Robust against even heavy packet loss
  • Congestion control: automatically adjusts the bitrate to account for network conditions
  • Automatically reconnects after network loss
  • Works in all modern browsers (Chrome recommended)
  • Encrypted end-to-end
  • No bandwidth cost when sender and receiver are on the same network
  • Provides a test-video stream for ease of testing and development

To learn more about the security model of WebRTC and why it is safe, see, e.g., here.


Installation

Requires GStreamer 1.16 (Ubuntu 20.04) or later.

During the installation process, the Transitive agent will try to install all required dependencies into its sandbox environment. If this fails, or if your build and deployment process makes it preferable, you can pre-install them manually:

sudo apt install build-essential pkg-config fontconfig git gobject-introspection gstreamer1.0-x gstreamer1.0-libav gstreamer1.0-nice gstreamer1.0-plugins-bad gstreamer1.0-plugins-base-apps gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools libgstreamer1.0-0 libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev libgirepository1.0-dev libc-dev libcairo2 libcairo2-dev


If you are running Transitive inside a Docker container and want to use USB cameras, then be sure to add the following to your docker run command (or similarly for docker-compose):

--device=/dev/video0   # for each video device you may want to use
-v /run/udev:/run/udev # required by gst-device-monitor-1.0 to enumerate available devices and supported resolutions
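If you use docker-compose instead, the equivalent settings could look like the following sketch (the service and image names are placeholders; add one `devices` entry per camera):

```yaml
services:
  transitive-agent:              # placeholder service name
    image: my-transitive-image   # placeholder image name
    devices:
      - /dev/video0              # one entry per video device you want to use
    volumes:
      - /run/udev:/run/udev      # lets gst-device-monitor-1.0 enumerate devices
```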


Configuration

The easiest way to configure the capability is to use the UI in the Transitive portal, which lets you choose the video source to use as input, along with any parameters you can set on it, e.g., resolution and frame rate on v4l devices, plus bit rate. Once configured, the video is shown together with the attributes you need to add to the embedding HTML snippet in order to use the configuration you selected.

Alternatively, you can configure a default source (or multi-source layout) and parameters in your ~/.transitive/config.json file, e.g.:

"global": {
  "@transitive-robotics/webrtc-video": {
    "default": {
      "streams": [
        {
          "videoSource": {
            "type": "rostopic",
            "rosVersion": "1",
            "value": "/tracking/fisheye1/image_raw"
          },
          "complete": true
        },
        {
          "videoSource": {
            "type": "v4l2src",
            "value": "/dev/video0",
            "streamType": "image/jpeg",
            "resolution": {
              "width": "432",
              "height": "240"
            },
            "framerate": "15/1"
          },
          "complete": true
        },
        {
          "videoSource": {
            "type": "videotestsrc"
          },
          "complete": true
        }
      ]
    }
  }
}

and then add use-default=true as an attribute in the embedding HTML to use this instead.
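As a sketch, assuming the webrtc-video-device tag with placeholder id and host values (copy the real snippet, including any authentication attributes, from the portal), the embedding HTML with this attribute might look like:

```html
<!-- sketch: id and host are placeholders -->
<webrtc-video-device id="myrobot" host="portal.example.com" ssl="true"
  use-default="true"></webrtc-video-device>
```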


Styling

Each <video> element generated by the front-end web component has a unique class name webrtc-videoN, where N enumerates the elements starting at 0. This makes it easy to arrange and style these elements using CSS. For instance, the following CSS rules create a layout where one camera, e.g., the front camera, is shown large on top, with the left-, back-, and right-viewing cameras at the bottom.

webrtc-video-device video { position: absolute }
webrtc-video-device .webrtc-video0 { width: 960px; height: 720px; }
webrtc-video-device .webrtc-video1 { top: 720px; }
webrtc-video-device .webrtc-video2 { top: 720px; left: 640px; }
webrtc-video-device .webrtc-video3 { top: 720px; left: 320px; width: 320px; }

In addition, the div element immediately wrapping these video elements has the class name webrtc-video-wrapper. This makes it possible to apply various CSS layout features, such as flexbox or grid layouts. For example, the following creates a layout where the front-facing camera is large in the middle, the left and right cameras are to the sides on top, and the backward-facing camera is in the bottom left. The bottom right is left blank here, but would make for a good place to show a map component.

.webrtc-video-wrapper {
  display: grid;
  grid-gap: 10px;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr;
  grid-template-areas:
    "left front front right"
    "back front front .";
}
.webrtc-video0 { grid-area: front; }
.webrtc-video1 { grid-area: left; }
.webrtc-video2 { grid-area: right; }
.webrtc-video3 { grid-area: back; }
video {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

Front-end API

The component exposes an imperative API on the front-end that can be used to interact with the running web component, e.g., when using React.


const MyComp = () => {
  // Create a react ref that will be attached to the component once mounted
  const myref = useRef(null);

  // Manually get the current lag:
  if (myref.current) {
    console.log(myref.current.call('getLag'));
  }

  // Once the component is mounted, i.e., myref is bound, call the onLag
  // function of the imperative API to attach a listener for lag events
  useEffect(() => {
    myref.current?.call('onLag', (lag) => console.log('listener', lag));
  }, [myref.current]);

  return <div>
    <webrtc-video-device ref={myref} id="superbot" host="" ssl="true" />
  </div>;
};


  • getLag(): returns the current lag info consisting of:
    • lag: current absolute lag in seconds
    • gradient: change in lag
    • bps: current actual bitrate in bits per second
  • onLag(callback): register a listener that gets called each time a new lag object is available (currently 5 Hz)
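As a sketch of how the lag info could be used, a small helper can classify each sample arriving via onLag. The isDegrading function and its thresholds are illustrative, not part of the capability's API:

```javascript
// Illustrative helper, not part of the webrtc-video API: classify a lag
// sample of the shape returned by getLag()/onLag.
function isDegrading({ lag, gradient }) {
  // Flag streams that are far behind, or that are falling behind quickly.
  return lag > 2 || gradient > 0.5;
}

// Example samples, as they might arrive from onLag:
console.log(isDegrading({ lag: 0.2, gradient: 0.0, bps: 800000 })); // false
console.log(isDegrading({ lag: 3.1, gradient: 0.1, bps: 120000 })); // true
```

In a page using the React example above, this could be attached with myref.current?.call('onLag', (lag) => setWarning(isDegrading(lag))).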

Latest version: 0.10.24, published: 5/30/2023, 11:37:03 PM