WebRTC Video

Provides low-latency video streaming over an end-to-end encrypted WebRTC connection. Once installed, you can use the provided embedding instructions to embed the video widget for that robot in your own web application.

Features

  • Typical latency of just 200ms
  • Allows setting the desired bitrate (in KB/s), e.g., for choosing between high definition and lower bandwidth cost
  • Multi-camera support
    • Can be easily arranged into layouts using CSS
  • Supports various video sources:
    • Video4Linux cameras, i.e., virtually all USB cameras
      • allows you to select from a list of resolutions and frame rates supported by your cameras
    • ROS image topics (incl. Bayer-encoded ones)
    • ROS 2 image topics
    • RTSP sources, e.g., from IP cameras
    • custom GStreamer source pipelines
  • Uses H.264 for video compression
  • Hardware acceleration on Nvidia platforms (e.g., Jetsons), and Rockchip based platforms (e.g., Orange Pi, Firefly)
  • Robust against even heavy packet loss
  • Congestion control: automatically adjusts the bitrate to account for network conditions
  • Automatically reconnects after network loss
  • Works in all modern browsers (Chrome recommended)
  • Encrypted end-to-end
  • No bandwidth cost when sender and receiver are on the same network
  • Provides a test-video stream for ease of testing and development

To learn more about the security model of WebRTC and why it is safe, see, e.g., here.

Dependencies

Requires GStreamer 1.16 (Ubuntu 20.04) or later.
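
To check which GStreamer version is installed, you can run, e.g.:

gst-launch-1.0 --version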

During the installation process, the Transitive agent will try to install all required dependencies into its sandbox environment. If this fails, or if your build and deployment process makes it preferable, you can pre-install them manually:

sudo apt install build-essential pkg-config fontconfig git gobject-introspection gstreamer1.0-x gstreamer1.0-libav gstreamer1.0-nice gstreamer1.0-plugins-bad gstreamer1.0-plugins-base-apps gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools libgstreamer1.0-0 libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev libgirepository1.0-dev libc-dev libcairo2 libcairo2-dev

Docker

If you are running Transitive inside a Docker container and want to use USB cameras, be sure to add the following to your docker run command (or the equivalent for docker-compose):

-v /run/udev:/run/udev # required by gst-device-monitor-1.0 to enumerate available devices and supported resolutions
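
For example, a minimal sketch of such a docker run command; the image name my-transitive-image is a placeholder for whatever image you run the agent in, and depending on your setup you may also need to pass through the camera device itself:

# placeholder image name; replace with the image running your Transitive agent
docker run \
  -v /run/udev:/run/udev \
  --device=/dev/video0 \
  my-transitive-image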

Configuration

The easiest way to configure the capability is through the UI in the Transitive portal, which lets you choose the video source to use as input, any parameters you may be able to set on it, e.g., resolution and frame rate on v4l devices, plus the bitrate. Once configured, the video is shown together with the attributes you need to add to the embedding HTML snippet to use the configuration you selected.

Alternatively, you can configure a default source (or multi-source layout) and parameters in your ~/.transitive/config.json file, e.g.:

{
  "global": {
    ...
  },
  "@transitive-robotics/webrtc-video": {
    "default": {
      "streams": [
        {
          "videoSource": {
            "type": "rostopic",
            "rosVersion": "1",
            "value": "/tracking/fisheye1/image_raw"
          },
          "complete": true
        },
        {
          "videoSource": {
            "type": "v4l2src",
            "value": "/dev/video0",
            "streamType": "image/jpeg",
            "resolution": {
              "width": "432",
              "height": "240"
            },
            "framerate": "15/1"
          },
          "complete": true
        },
        {
          "videoSource": {
            "type": "videotestsrc"
          },
          "complete": true
        }
      ]
    }
  }
}

and then add use-default=true as an attribute in the embedding HTML to use this configuration instead.
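
For instance, reusing the attributes from the embedding examples below:

  <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
    jwt={jwt}
    use-default="true"
  />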

Styling

Each <video> element generated by the front-end web component will have a unique class name webrtc-videoN, where N enumerates the elements starting at 0. This makes it easy to arrange and style these elements using CSS. For instance, these CSS rules would create a layout where one camera, e.g., the front camera, is large on top, and at the bottom we have left, back, and right-viewing cameras.

webrtc-video-device video { position: absolute }
webrtc-video-device .webrtc-video0 { width: 960px; height: 720px; }
webrtc-video-device .webrtc-video1 { top: 720px; }
webrtc-video-device .webrtc-video2 { top: 720px; left: 640px; }
webrtc-video-device .webrtc-video3 { top: 720px; left: 320px; width: 320px; }

In addition, the div element immediately wrapping these video elements has the class name webrtc-video-wrapper. This makes it possible to apply various CSS layout features such as flexbox or grid layouts. For example, the following would create a layout where the front-facing camera is large in the middle, the left and right cameras are to the sides on top, and the backward-facing camera is in the bottom left. The bottom right is left blank here but would make for a good place to show a map component.

.webrtc-video-wrapper {
  display: grid;
  grid-gap: 10px;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr;
  grid-template-areas:
    "left front front right"
    "back front front .";
}
.webrtc-video0 { grid-area: front; }
.webrtc-video1 { grid-area: left; }
.webrtc-video2 { grid-area: right; }
.webrtc-video3 { grid-area: back; }
video {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

Front-end API

The component exposes an imperative API on the front-end that can be used to interact with the running web component, e.g., when using React.

Example:

import { useRef, useEffect } from 'react';

const MyComp = () => {
  // Create a react ref that will be attached to the component once mounted
  const myref = useRef(null);

  // Manually get the current lag:
  if (myref.current) {
    console.log(myref.current.call('getLag'));
  }

  // Once the component is mounted, i.e., myref is bound, call the onLag
  // function of the imperative API to attach a listener for lag events
  useEffect(() => {
      myref.current && setTimeout(() =>
          myref.current.call('onLag', (lag) => console.log('listener', lag)),
        2000);
    }, [myref.current]);

  // `jwt` is assumed to be in scope, e.g., generated by your backend
  return <div>
    <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
      jwt={jwt}
      count="1"
      timeout="1800"
      type="videotestsrc"
      ref={myref}
    />
  </div>;
};

Reference

  • getLag(): returns the current lag info consisting of:
    • lag: current absolute lag in seconds
    • gradient: change in lag
    • bps: current actual bitrate in bits per second
  • onLag(callback): register a listener that gets called each time a new lag object is available (currently 5Hz)
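
For illustration, assuming a bound ref as in the example above, the lag object returned by getLag could be consumed like this (field names as documented above):

const { lag, gradient, bps } = myref.current.call('getLag');
console.log(`lag: ${lag}s, trend: ${gradient}, bitrate: ${bps} bps`);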

RTSP streams (IP cameras)

To use an RTSP stream as video source, such as those provided by many IP cameras, set the type in your embedding HTML to rtsp, and set the source to the RTSP URL provided by your camera. For instance:

  <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
    jwt={jwt}
    count="1"
    timeout="1800"
    type="rtsp"
    source="rtsp://my-ip-camera:8554/cam"
  />

This assumes that your stream is already H.264-encoded, as is common with IP cameras, and that you do not wish to transcode it. This is by far the most CPU-efficient option, but it also means that the webrtc-video capability will not be able to provide congestion control or set the bitrate. If you do want to decode and re-encode the stream in order to regain congestion control and bitrate control, you can use type="rtsp-transcode" instead.

Custom Video Source Pipelines

In addition to various video sources, the capability supports the specification of custom GStreamer source pipelines. In your embedding HTML you can specify type custom and set as source your custom pipeline. For example:

  <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
    jwt={jwt}
    count="1"
    timeout="1800"
    type="custom"
    source="videotestsrc is-live=true ! video/x-raw,framerate=(fraction)15/1,width=640,height=480"
  />

This assumes that the sink of your pipeline can be fed into a videoconvert element for conversion to video/x-raw. If your pipeline produces a stream that is already H.264-encoded and you don't want to decode and re-encode this stream, then you can set the type to custom-h264. Note that in that case the capability will not implement any congestion control for you.

This is an advanced feature, only meant for users who are familiar with GStreamer pipelines. It is also more difficult to debug. We recommend testing your custom source pipeline first using gst-launch-1.0 and autovideosink or fakesink.
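
For example, to test the source pipeline from the snippet above locally:

gst-launch-1.0 videotestsrc is-live=true ! video/x-raw,framerate=(fraction)15/1,width=640,height=480 ! autovideosink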

Local Recording

The capability supports recording all outgoing video to disk in a rolling buffer. This feature, similar to the black-box recorders on airplanes, can be useful when investigating recent incidents after the fact. The buffer restarts each time a new session is started and currently records a maximum of ten minutes or 1 GB, whichever comes first. Add record="true" to your HTML embedding code to enable this. The recordings will be in /tmp/stream_*.mov.
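
For example, adding it to the test-stream snippet used earlier:

  <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
    jwt={jwt}
    count="1"
    timeout="1800"
    type="videotestsrc"
    record="true"
  />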

Custom Video Sink Pipelines

In many cases, streaming video is only one of several uses of a robot's video sources. When this is the case, it is useful to be able to "tee" the stream at several locations in the pipeline to feed it into auxiliary processing pipelines. Local recording is an example of this, but a very specific one. To support any other such applications, webrtc-video lets you specify additional sink pipelines that are injected after the encoding step of the pipeline. This is done per stream.

Example

  <webrtc-video-device id="superbot" host="transitiverobotics.com" ssl="true"
    jwt={jwt}
    bitrate="50"
    count="2"
    timeout="1800"
    type="v4l2src"
    source="/dev/video0"
    streamtype="image/jpeg"
    framerate="15/1"
    width="640"
    height="480"
    encodedpipe="splitmuxsink location=/tmp/video0.mp4 max-size-time=60000000000 max-files=10"
    type_1="v4l2src"
    source_1="/dev/video2"
    streamtype_1="image/jpeg"
    framerate_1="15/1"
    width_1="640"
    height_1="480"
    encodedpipe_1="splitmuxsink location=/tmp/video1.mp4 max-size-time=60000000000 max-files=10"
  />

Changelog

v0.15.0

  • New feature: ability to record locally to disk on robot/device

  • New feature: ability to specify custom source and sink pipelines

  • New feature: support for Bayer encoded ROS image topics

  • v0.15.1: Automatically recover from interruptions during updates

v0.14

  • New feature: hardware acceleration on Rockchip based boards
  • Fixed a bug preventing the connection from being established from mobile browsers
  • Fixed a bug preventing the embedding of video streams using ROS 2 topics as video source

Latest version: 0.15.1, published: 9/26/2023, 11:11:50 PM