In a previous entry, we discussed how to preview webcams. In this article, we’ll discuss a Server which saves both video and audio to a file, and streams video over the network.

On the other side of the network, we’ll build a small Client to get the video, decode it and display it. The Jetson does hardware decoding of the H.264 video streams.

Here’s a demonstration:

In the demonstration, we grab H.264 encoded video from a Logitech c920 webcam and the camera audio. We place the video and audio into a local multimedia file, and send the video out as a stream over the network.

The demonstration shell files are here:

gExampleServer.sh

and

gExampleClient.sh

gExampleServer

Here’s the command line that you run on the Server’s Terminal.

gst-launch-1.0 -vvv -e \
mp4mux name=mux ! filesink location=gtest1.mp4 \
v4l2src device=/dev/video0 do-timestamp=true \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! tee name=tsplit \
! queue ! h264parse ! omxh264dec ! videoconvert ! videoscale \
! video/x-raw, width=1280, height=720 ! xvimagesink sync=false tsplit. \
! queue ! h264parse ! mux.video_0 tsplit. \
! queue ! h264parse ! mpegtsmux ! tcpserversink host=$IP_ADDRESS port=5000 \
pulsesrc device=alsa_input.usb-046d_HD_Pro_Webcam_C920_A116B66F-02-C920.analog-stereo do-timestamp=true \
! audio/x-raw ! queue ! audioconvert ! voaacenc ! queue ! mux.audio_0

Where $IP_ADDRESS is the host's IP address.

There are a couple of GStreamer elements we use to facilitate the distribution of the video. The first, called a tee, is used to split the video pipeline and route it to different places: in our case the screen preview, a local video file, and the network itself.
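If you haven't used tee before, here's a minimal sketch (a self-contained test pipeline using videotestsrc, not part of the demo scripts) that splits one source into two display windows. Note the queue after each branch; without one, a slow branch can stall the whole pipeline.

gst-launch-1.0 videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! xvimagesink \
t. ! queue ! videoconvert ! ximagesink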

The second element, called a mux (multiplexer), can sometimes be thought of as a container. The first mux, called mp4mux, is a .MP4 file container where we store the video and the audio being collected from the webcam. The video is encoded in H.264, the audio is encoded as AAC. The mp4mux has a pad for the video (referenced as mux.video_0 in the pipeline, since we named the element mux) and a pad for the audio (mux.audio_0), and it prepares the two streams to go into a file. It's more complicated than that, of course, but we'll go easy here.
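To see the muxing in isolation, here's a sketch that only records to a file, with the preview and network branches stripped out. The device names and the output filename are assumptions carried over from the demo; adjust them for your hardware.

gst-launch-1.0 -e \
mp4mux name=mux ! filesink location=record-only.mp4 \
v4l2src device=/dev/video0 do-timestamp=true \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! h264parse ! queue ! mux.video_0 \
pulsesrc device=alsa_input.usb-046d_HD_Pro_Webcam_C920_A116B66F-02-C920.analog-stereo do-timestamp=true \
! audio/x-raw ! queue ! audioconvert ! voaacenc ! queue ! mux.audio_0

The -e flag matters here: it sends an EOS down the pipeline when you interrupt with Ctrl-C so mp4mux can finish writing the file's index; without it the resulting .mp4 may be unplayable.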

The second mux, called mpegtsmux, can be thought of as an element which combines media streams and packages them into an MPEG Transport Stream to be sent out over the network. We take the output of mpegtsmux and send it to a tcpserversink element, out over the network.
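Here's a stream-only sketch of that branch, again with the file and preview parts removed (device and caps are the same assumptions as above). The Client pipeline shown further down works against this unchanged.

gst-launch-1.0 -v \
v4l2src device=/dev/video0 do-timestamp=true \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! h264parse ! queue ! mpegtsmux ! tcpserversink host=$IP_ADDRESS port=5000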

As an aside, you'll encounter the term Real Time Streaming Protocol (RTSP), which is a network control protocol used to manage streaming media sessions. Since we're going to send our video stream out over TCP, we need to make sure that our video is "glued together" and arrives over the network in the proper order. That's why we put it into an MPEG transport stream container (mpegtsmux) to board the network train. If we used UDP instead of TCP, this wouldn't be an issue, and we could use other GStreamer mechanisms, such as the udpsink and rtph264pay elements.
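For completeness, here's a sketch of what a UDP/RTP version might look like; it isn't part of the demo scripts. Note that $CLIENT_IP_ADDRESS is the receiver's address: with udpsink the server pushes packets at the client, rather than waiting for a connection the way tcpserversink does.

Sender:

gst-launch-1.0 -v \
v4l2src device=/dev/video0 do-timestamp=true \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! h264parse ! rtph264pay config-interval=1 pt=96 \
! udpsink host=$CLIENT_IP_ADDRESS port=5000

Receiver:

gst-launch-1.0 -v \
udpsrc port=5000 caps="application/x-rtp, media=video, encoding-name=H264, payload=96" \
! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink sync=false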

Another thing worth pointing out is that the preview size is smaller than the size of the video being captured. This is set by the second caps filter, video/x-raw, width=1280, height=720, which follows the videoscale element. For the demo, we basically wanted different-sized windows on the server and the client to show that the client isn't just a screen clone.

In comparison to the Server, the Client is easy:

gst-launch-1.0 -vvv \
tcpclientsrc host=$IP_ADDRESS port=5000 \
! tsdemux \
! h264parse ! omxh264dec ! videoconvert \
! xvimagesink sync=false

The tcpclientsrc element is basically the match to the Server's tcpserversink. tsdemux takes the combined transport stream and separates it back out into video (and audio, if available). After that we decode the video and put it up on the screen.
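If you'd rather record on the client instead of displaying, a small variation on the same pipeline (a sketch, with a made-up output filename) demultiplexes the transport stream and re-muxes the H.264 into an MP4 without re-encoding:

gst-launch-1.0 -e \
tcpclientsrc host=$IP_ADDRESS port=5000 \
! tsdemux ! h264parse ! mp4mux ! filesink location=received.mp4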

Video File Issues

I’ve had a lot of issues with the video files being saved. In this particular setup, there are ‘freezes’ about every couple of seconds. The audio stays in sync, but there’s definitely a hitch in its giddy-up.

Posted by Real_G