Diffstat (limited to 'doc/muxers.texi')
-rw-r--r-- | doc/muxers.texi | 151
1 file changed, 113 insertions, 38 deletions
diff --git a/doc/muxers.texi b/doc/muxers.texi
index 4bb6d56a00..9b4743f12e 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1,10 +1,10 @@
@chapter Muxers
@c man begin MUXERS
-Muxers are configured elements in Libav which allow writing
+Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.
-When you configure your Libav build, all the supported muxers
+When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.
@@ -35,20 +35,20 @@ CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
-avconv -i INPUT -f crc out.crc
+ffmpeg -i INPUT -f crc out.crc
@end example
You can print the CRC to stdout with the command:
@example
-avconv -i INPUT -f crc -
+ffmpeg -i INPUT -f crc -
@end example
-You can select the output format of each frame with @command{avconv} by
+You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example
See also the @ref{framecrc} muxer.
@@ -56,40 +56,79 @@ See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc
-Per-frame CRC (Cyclic Redundancy Check) testing format.
+Per-packet CRC (Cyclic Redundancy Check) testing format.
-This muxer computes and prints the Adler-32 CRC for each decoded audio
-and video frame. By default audio frames are converted to signed
+This muxer computes and prints the Adler-32 CRC for each audio
+and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
The output of the muxer consists of a line for each audio and video
-frame of the form: @var{stream_index}, @var{frame_dts},
-@var{frame_size}, 0x@var{CRC}, where @var{CRC} is a hexadecimal
-number 0-padded to 8 digits containing the CRC of the decoded frame.
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
+@end example
+
+@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
+CRC of the packet.
-For example to compute the CRC of each decoded frame in the input, and
-store it in the file @file{out.crc}:
+For example to compute the CRC of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.crc}:
@example
-avconv -i INPUT -f framecrc out.crc
+ffmpeg -i INPUT -f framecrc out.crc
@end example
-You can print the CRC of each decoded frame to stdout with the command:
+To print the information to stdout, use the command:
@example
-avconv -i INPUT -f framecrc -
+ffmpeg -i INPUT -f framecrc -
@end example
-You can select the output format of each frame with @command{avconv} by
-specifying the audio and video codec and format. For example, to
+With @command{ffmpeg}, you can select the output format to which the
+audio and video frames are encoded before computing the CRC for each
+packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
-avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
+ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example
See also the @ref{crc} muxer.
+@anchor{framemd5}
+@section framemd5
+
+Per-packet MD5 testing format.
+
+This muxer computes and prints the MD5 hash for each audio
+and video packet. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a line for each audio and video
+packet of the form:
+@example
+@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
+@end example
+
+@var{MD5} is a hexadecimal number representing the computed MD5 hash
+for the packet.
+
+For example to compute the MD5 of the audio and video frames in
+@file{INPUT}, converted to raw audio and video packets, and store it
+in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f framemd5 out.md5
+@end example
+
+To print the information to stdout, use the command:
+@example
+ffmpeg -i INPUT -f framemd5 -
+@end example
+
+See also the @ref{md5} muxer.
+
@anchor{image2}
@section image2
@@ -120,28 +159,61 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg}, etc.
-The following example shows how to use @command{avconv} for creating a
+The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
-avconv -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example
-Note that with @command{avconv}, if the format is not specified with the
+Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
-avconv -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
+ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example
Note also that the pattern must not necessarily contain "%d" or
"%0@var{N}d", for example to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
-avconv -i in.avi -f image2 -frames:v 1 img.jpeg
+ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example
+The image muxer supports the .Y.U.V image file format. This format is
+special in that each image frame consists of three files, for
+each of the YUV420P components. To read or write this image file format,
+specify the name of the '.Y' file. The muxer will automatically open the
+'.U' and '.V' files as required.
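As an illustration only (not part of the patch above, and with an invented output name), the new .Y.U.V support could be exercised along these lines, reusing the @file{in.avi} input from the examples in this section; @code{-pix_fmt yuv420p} is included here because the format stores the YUV420P planes:

@example
ffmpeg -i in.avi -pix_fmt yuv420p -f image2 -frames:v 1 img.Y
@end example

The muxer would then create @file{img.U} and @file{img.V} alongside @file{img.Y}, as the added paragraph describes.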
+
+@anchor{md5}
+@section md5
+
+MD5 testing format.
+
+This muxer computes and prints the MD5 hash of all the input audio
+and video frames. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+hash.
+
+The output of the muxer consists of a single line of the form:
+MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
+the computed MD5 hash.
+
+For example to compute the MD5 hash of the input converted to raw
+audio and video, and store it in the file @file{out.md5}:
+@example
+ffmpeg -i INPUT -f md5 out.md5
+@end example
+
+You can print the MD5 to stdout with the command:
+@example
+ffmpeg -i INPUT -f md5 -
+@end example
+
+See also the @ref{framemd5} muxer.
+
@section MOV/MP4/ISMV
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
@@ -161,6 +233,9 @@ Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
@table @option
+@item -moov_size @var{bytes}
+Reserves space for the moov atom at the beginning of the file instead of placing the
+moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
@@ -171,7 +246,7 @@ Create fragments that contain up to @var{size} bytes of payload data.
Allow the caller to manually choose when to cut fragments, by calling
@code{av_write_frame(ctx, NULL)} to write a fragment with the packets
written so far. (This is only useful with other
-applications integrating libavformat, not from @command{avconv}.)
+applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table
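As an illustration only (not part of the patch, and with invented file names), the fragmentation options documented in the table above could be exercised along these lines:

@example
ffmpeg -i input.mp4 -c copy -movflags frag_keyframe fragmented.mp4
ffmpeg -i input.mp4 -c copy -frag_duration 10000000 fragmented.mp4
@end example

The first command starts a new fragment at each video keyframe; the second asks for fragments of roughly 10 seconds, since @code{-frag_duration} takes a value in microseconds.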
@@ -207,7 +282,7 @@ This option is implicitly set when writing ismv (Smooth Streaming) files.
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
-avconv -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
+ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
@section mpegts
@@ -236,11 +311,11 @@ Set the first PID for data packets (default 0x0100, max 0x0f00).
The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
-@code{service_provider} is "Libav" and the default for
+@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".
@example
-avconv -i file.mpg -c copy \
+ffmpeg -i file.mpg -c copy \
-mpegts_original_network_id 0x1122 \
-mpegts_transport_stream_id 0x3344 \
-mpegts_service_id 0x5566 \
@@ -258,19 +333,19 @@ Null muxer.
This muxer does not generate any output file, it is mainly useful for
testing or benchmarking purposes.
-For example to benchmark decoding with @command{avconv} you can use the
+For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
-avconv -benchmark -i INPUT -f null out.null
+ffmpeg -benchmark -i INPUT -f null out.null
@end example
Note that the above command does not read or write the @file{out.null}
-file, but specifying the output file is required by the @command{avconv}
+file, but specifying the output file is required by the @command{ffmpeg}
syntax.
Alternatively you can write the command as:
@example
-avconv -benchmark -i INPUT -f null -
+ffmpeg -benchmark -i INPUT -f null -
@end example
@section matroska
@@ -295,7 +370,7 @@ Specifies the language of the track in the Matroska languages form
@table @option
-@item STEREO_MODE=@var{mode}
+@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
@@ -333,7 +408,7 @@ Both eyes laced in one Block, Right-eye view is first
For example a 3D WebM clip can be created using the following command line:
@example
-avconv -i sample_left_right_clip.mpg -an -c:v libvpx -metadata STEREO_MODE=left_right -y stereo_clip.webm
+ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
@section segment
@@ -365,7 +440,7 @@ Wrap around segment index once it reaches @var{limit}.
@end table
@example
-avconv -i in.mkv -c copy -map 0 -f segment -list out.list out%03d.nut
+ffmpeg -i in.mkv -c copy -map 0 -f segment -list out.list out%03d.nut
@end example
@section mp3
@@ -394,12 +469,12 @@ Examples:
Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
-avconv -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
+ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example
Attach a picture to an mp3:
@example
-avconv -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover"
+ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover"
-metadata:s:v comment="Cover (Front)" out.mp3
@end example
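As a further illustration of the segment muxer hunk above (not part of the patch): a time-based split might look like the sketch below. It keeps the stream copy and @code{-map 0} of the existing example, but the @code{-segment_time} option and the 10-second value are assumptions of this sketch and do not appear in the visible hunks.

@example
ffmpeg -i in.mkv -c copy -map 0 -f segment -segment_time 10 out%03d.nut
@end example

With stream copy each segment can only begin at a keyframe, so the resulting segment lengths will only approximate the requested duration.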