FFmpeg
FFmpeg is a multi-purpose multimedia tool which can convert between an amazing variety of different file formats and audio and video [CoDec]s. Its development is done in common with that of [MPlayer].

FFmpeg is undergoing continual development. Hence, any copy included with your [distro|Distro] is guaranteed to be out of date. Instead, always get the latest version via SubVersion from the repository linked from the FFmpeg home page (below).

Basic usage:

ffmpeg ''input-spec'' ''~[input-spec ...]'' ''output-spec'' ''~[output-spec ...]'' ''~[mapping-options]''

where each ''input-spec'' consists of <tt>-i ''input-filename''</tt>, possibly preceded by options that apply to that input file (e.g. overriding the format, sample rate etc. if FFmpeg guesses wrong), and each ''output-spec'' consists of ''output-filename'', possibly preceded by options to be applied in generating that output file.

FFmpeg does not ''concatenate'' multiple input files; it ''multiplexes'' them. Thus, you can specify an input video-only file and an input audio-only file, and get a combined video-plus-audio output file. Or you can ''demultiplex'' the input into multiple output files, for example video-only into one output file and audio-only into another, or different encodings of the same video or audio input into different outputs.

The ''mapping-options'' allow the specification of which streams from which input file(s) are mapped onto which streams in the output file(s). These are only necessary if FFmpeg can't figure out the right thing to do.

!!Tips

!Extract an audio file from an MP4 or other video file:

<pre>
ffmpeg -i Videofile.mp4 -vn -acodec mp3 audiofile.mp3
</pre>

Result on Ubuntu 9.04:

<pre>
Unknown encoder 'mp3'
</pre>

Fail! You could follow the advice [in this bug report|https://bugs.launchpad.net/ubuntu/+source/ffmpeg/+bug/296922], but why bother?
Just do this:

<pre>
ffmpeg -i Videofile.mp4 -vn -acodec vorbis audiofile.ogg
</pre>

!Extract a single video frame into a JPEG file:

<pre>
ffmpeg -ss ''hh'':''mm'':''ss''.''cc'' -t 00:00:00.01 -i ''input-filename'' -f mjpeg ''output-name''.jpeg
</pre>

For example, to extract the frame at time 3 minutes and 51.04 seconds into the input video file:

<pre>
ffmpeg -ss 00:03:51.04 -t 00:00:00.01 -i my-doggie.mpg -f mjpeg my-doggie-thumbnail.jpeg
</pre>

!Generate a specified duration of silence:

<pre>
~NrChannels~=2
~SampleRate~=48000
~NrSeconds~=1
# above parameters can be changed as appropriate
ffmpeg -ar $~SampleRate -acodec pcm_s16le -f s16le -ac $~NrChannels \
    -i <(dd if=/dev/zero bs=$(($~SampleRate * $~NrChannels * 2)) count=$~NrSeconds) \
    silence.wav
</pre>

!Generate a static background suitable for a non-animated [DVD-Video|DVDVideo] menu

This takes a single still frame (probably best to stick to [JPEG] format; certainly [PNG] didn't work) and turns it into an [MPEG]-2 output movie with a silent soundtrack. The movie is of one-second duration, which is sufficient because it can be set to loop during the DVD authoring process:

<pre>
ffmpeg -loop_input -t 1.0 -i ''stillframename'' \
    -ar 48000 -f s16le -i <(dd if=/dev/zero bs=96000 count=1) \
    -target pal-dvd ''outputmoviename''
</pre>

where <tt>pal-dvd</tt> can be replaced with <tt>ntsc-dvd</tt> if authoring an NTSC disc rather than a PAL one.

!Concatenate two movies

This technique takes apart the video frames from the input movies and reassembles them into the output movie without decompressing and recompressing them. It uses the <tt>image2pipe</tt> container format to stream the frames. Unfortunately there doesn't seem to be an equivalent pipe container format for audio, so that ends up being re-encoded into the specified output format (which can of course be changed as required).
<pre>
audioparms="-f s16le -ar 48000 -ac 2" # choose appropriate intermediate audio format
ffmpeg \
    -i <(ffmpeg -v 0 -i ''inputfile1'' -f image2pipe -vcodec copy -y /dev/stdout;
         ffmpeg -v 0 -i ''inputfile2'' -f image2pipe -vcodec copy -y /dev/stdout) \
    $audioparms -i <(ffmpeg -v 0 -i ''inputfile1'' $audioparms -y /dev/stdout;
                     ffmpeg -v 0 -i ''inputfile2'' $audioparms -y /dev/stdout) \
    -vcodec copy -acodec pcm_s16le ''outputfile''
</pre>

Extending the example to concatenate more than two files is left as an exercise for the reader. :)

!Fix audio/video sync in a movie

In this example, 64 seconds (determined by trial and error while observing lip sync) was trimmed from the start of the audio track. The video track happens to come first in the list; the source movie is specified twice, once with the appropriate offset applied, and the <tt>-map</tt> option is used to select the appropriate audio and video streams to combine into the output movie. The first <tt>-map</tt> specification says that the first (video) output stream is to come from the first stream of the second input file (stream 1.0), while the second <tt>-map</tt> specification says that the second (audio) output stream is to come from the second stream of the first input file (stream 0.1). Note also the use of <tt>-vcodec copy</tt> and <tt>-acodec copy</tt> to ensure that no re-encoding of audio or video data takes place:

<pre>
ffmpeg \
    -ss 00:01:04.00 -i ''srcmovie'' \
    -i ''srcmovie'' \
    -vcodec copy -acodec copy ''dstmovie'' \
    -map 1.0 -map 0.1
</pre>

!Resize video for DVD

Suppose I have a film, with video frames of 608x224 pixels, that I want to put on a DVD. Allowable aspect ratios for DVD-Video are 4:3 or 16:9. Clearly I should use 16:9 as the closest fit to the original ratio, and add black bars at the top and bottom to pad out the video frame.
But there is also the complication that pixels in DVD-Video are non-square: even though I want the video displayed at a 16:9 ratio, the number of pixels I have to play with on a PAL DVD is 720x576, which doesn't match a 16:9 ratio. The trick is to do the calculation in two stages: first calculate the necessary resizing to fill as much as possible of a destination frame size of 720x405 pixels (which has a 16:9 ratio), then apply an additional vertical rescaling to stretch the height from 405 to 576 pixels.

So if the width of 608 pixels is rescaled to 720, then the height of 224 pixels must be correspondingly rescaled to (720 / 608) * 224 = 265 (to the nearest pixel), in order to avoid distorting the images. Then I apply another vertical scale factor of 576 / 405 for the non-uniform DVD-Video pixels, to come up with a height of 377 pixels; make it 378, because video encoding algorithms tend to prefer even dimensions.

This I then need to pad out to a final height of 576 pixels by adding black bars at the top and bottom. FFmpeg can do this with its "pad" filter, specified as <tt>-vf pad=''width'':''height'':''xoffset'':''yoffset''</tt>. To keep the video nicely centred on the screen, the bars should have equal heights of (576 - 378) / 2 = 99 pixels. So the complete FFmpeg command looks something like this:

<pre>
ffmpeg -i in.mpg -target pal-dvd -s 720x378 -vf pad=720:576:0:99 -aspect 16:9 out.mpg
</pre>

!!Links:

* [FFmpeg home|http://ffmpeg.mplayerhq.hu/]
* [libamr home page|http://www.penguin.cz/~utx/amr] -- needed for audio encoding if you're making 3GPP movies to play on cell phones. Recent versions of FFmpeg no longer expect the AMR source code to be inserted into the FFmpeg source tree.
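As a footnote to the multiplexing behaviour described under basic usage above, a hypothetical sketch (the filenames are invented, and this assumes FFmpeg's default stream selection, which with one video-only and one audio-only input combines them without needing any <tt>-map</tt> options):

```shell
# Multiplex: combine a video-only file and an audio-only file into one
# movie, copying both streams without re-encoding.
# (hypothetical filenames; an untested sketch)
ffmpeg -i video-only.m4v -i audio-only.wav \
    -vcodec copy -acodec copy combined.avi

# Demultiplex: split a combined movie back into a video-only output and
# an audio-only output in a single invocation; note that each output
# filename is preceded by the options that apply to it, per the usage
# pattern described above.
ffmpeg -i combined.avi \
    -an -vcodec copy video-only.m4v \
    -vn -acodec pcm_s16le audio-only.wav
```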
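Regarding the silence recipe above: the <tt>dd</tt> block size is simply the number of bytes in one second of raw audio, i.e. sample rate x channels x 2 bytes per 16-bit sample. A minimal sketch of the arithmetic (variable names match the recipe):

```shell
# Bytes per second of raw s16le audio:
# SampleRate samples/sec x NrChannels channels x 2 bytes/sample
NrChannels=2
SampleRate=48000
NrSeconds=1
BytesPerSecond=$(( SampleRate * NrChannels * 2 ))
TotalBytes=$(( BytesPerSecond * NrSeconds ))
echo "$TotalBytes"   # 192000, the dd bs value for one second of stereo
```

The <tt>bs=96000</tt> in the DVD-menu recipe is presumably the same calculation for one second of mono audio (48000 x 1 x 2).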
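The two-stage resize calculation above can be checked with a little shell arithmetic (integer maths, rounding to the nearest pixel; the variable names are just for illustration):

```shell
# Source frame and PAL DVD geometry from the resize example above
src_w=608; src_h=224
dvd_w=720; dvd_h=576
ratio_h=405            # height of a 16:9 frame at 720 pixels wide

# Stage 1: scale the width to 720, preserving the source aspect ratio
# (adding half the divisor before dividing rounds to the nearest pixel)
scaled_h=$(( (dvd_w * src_h + src_w / 2) / src_w ))             # 265

# Stage 2: stretch vertically for the non-square DVD pixels (405 -> 576),
# then bump to an even number of lines
dvd_scaled_h=$(( (scaled_h * dvd_h + ratio_h / 2) / ratio_h ))  # 377
even_h=$(( dvd_scaled_h + dvd_scaled_h % 2 ))                   # 378

# Black bars needed above and below to centre the picture
pad=$(( (dvd_h - even_h) / 2 ))                                 # 99

echo "scale to ${dvd_w}x${even_h}, pad=${dvd_w}:${dvd_h}:0:${pad}"
```

This reproduces the <tt>-s 720x378 -vf pad=720:576:0:99</tt> arguments used in the command above.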