FFmpeg is under continual development, so any copy included with your distro is likely to be out of date. Instead, get the latest version via Subversion from the repository linked from the FFmpeg home page (below).
ffmpeg input-spec [input-spec ...] output-spec [output-spec ...] [mapping-options]
where each input-spec consists of
-i input-filename
possibly preceded by options that apply to that input file (e.g. overriding the format or sample rate if FFmpeg guesses wrong), and each output-spec consists of
output-filename
possibly preceded by options to be applied in generating that output file.
FFmpeg does not concatenate multiple input files; it multiplexes them. Thus, you can specify a video-only input file and an audio-only input file, and get a combined video-plus-audio output file. Or you can demultiplex the input into multiple output files: for example, video-only into one output file and audio-only into another, or different encodings of the same video or audio input into different outputs.
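For instance, combining a video-only file with an audio-only file might look something like this (the filenames are hypothetical; -vcodec copy and -acodec copy keep both streams as-is rather than re-encoding them):

```shell
# Mux a video-only input and an audio-only input into one output file,
# copying both streams without re-encoding (hypothetical filenames).
ffmpeg -i video-only.m4v -i audio-only.wav -vcodec copy -acodec copy combined.avi
```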
The mapping-options allow the specification of which streams from which input file(s) are mapped onto which streams in the output file(s). These are only necessary if FFmpeg can't figure out the right thing to do.
ffmpeg -i Videofile.mp4 -vn -acodec mp3 audiofile.mp3
Result on Ubuntu 9.04:
Unknown encoder 'mp3'
You could follow the advice in this bug report, but why bother? Just do this:
ffmpeg -i Videofile.mp4 -vn -acodec vorbis audiofile.ogg
ffmpeg -ss hh:mm:ss.cc -t 00:00:00.01 -i input-filename -f mjpeg output-name.jpeg
For example, extract the frame at time 3 minutes and 51.04 seconds into the input video file:
ffmpeg -ss 00:03:51.04 -t 00:00:00.01 -i my-doggie.mpg -f mjpeg my-doggie-thumbnail.jpeg
NrChannels=2
SampleRate=48000
NrSeconds=1
# above parameters can be changed as appropriate
ffmpeg -ar $SampleRate -acodec pcm_s16le -f s16le -ac $NrChannels \
    -i <(dd if=/dev/zero bs=$(($SampleRate * $NrChannels * 2)) count=$NrSeconds) \
    silence.wav
This takes a single still frame (probably best to stick to JPEG format, certainly PNG didn't work) and turns it into an MPEG-2 output movie with a silent soundtrack. The movie is of one-second duration, which is sufficient because it can be set to loop during the DVD authoring process:
ffmpeg -loop_input -t 1.0 -i stillframename \
    -ar 48000 -f s16le -i <(dd if=/dev/zero bs=96000 count=1) \
    -target pal-dvd outputmoviename
where pal-dvd can be replaced with ntsc-dvd if authoring an NTSC disc rather than PAL.
This technique takes apart the video frames from the input movies and reassembles them into the output movie without decompressing and recompressing them. It uses the image2pipe container format to stream the frames. Unfortunately there doesn’t seem to be an equivalent pipe container format for audio, so that ends up being reencoded into the specified output format (which can of course be changed as required).
audioparms="-f s16le -ar 48000 -ac 2"  # choose appropriate intermediate audio format
ffmpeg \
    -i <( ffmpeg -v 0 -i inputfile1 -f image2pipe -vcodec copy -y /dev/stdout;
          ffmpeg -v 0 -i inputfile2 -f image2pipe -vcodec copy -y /dev/stdout ) \
    $audioparms -i <( ffmpeg -v 0 -i inputfile1 $audioparms -y /dev/stdout;
                      ffmpeg -v 0 -i inputfile2 $audioparms -y /dev/stdout ) \
    -vcodec copy -acodec pcm_s16le outputfile
Extending the example to concatenate more than two files is left as an exercise for the reader. :)
In this example, 64 seconds (determined by trial and error while watching for lip sync) is trimmed from the start of the audio track. The video track happens to come first in the stream list. The source movie is specified twice, once with the appropriate offset applied, and the -map option selects which audio and video streams to combine into the output movie: the first -map specification says that the first (video) output stream comes from the first stream of the second input file (stream 1.0), while the second says that the second (audio) output stream comes from the second stream of the first input file (stream 0.1). Note also the use of -vcodec copy and -acodec copy to ensure that no re-encoding of audio or video data takes place:
ffmpeg \
    -ss 00:01:04.00 -i srcmovie \
    -i srcmovie \
    -vcodec copy -acodec copy dstmovie \
    -map 1.0 -map 0.1
Supposing I have a film where the video frames are 608x224 pixels, that I want to put on a DVD. Allowable aspect ratios for DVD-Video are 4:3 or 16:9. Clearly I should use 16:9 as the closest fit to the original ratio, and add black bars at the top and bottom to pad out the video frame.
But there is also the complication that pixels in DVD-Video are non-square: even though I want the video displayed at a 16:9 ratio, the number of pixels I have to play with on a PAL DVD is 720x576, which doesn’t match a 16:9 ratio.
The trick is to do the calculation in two stages: first calculate the necessary resizing to fill as much as possible of a destination frame size of 720x405 pixels (which has a 16:9 ratio), then apply an additional vertical rescaling to stretch the height from 405 to 576 pixels.
So if the width of 608 pixels is rescaled to 720, then the height of 224 pixels must be correspondingly rescaled to (720 / 608) * 224 = 265 (to the nearest pixel), in order to avoid distorting the images. Then I apply another vertical scale factor of 576 / 405 for the non-uniform DVD-Video pixels, to come up with a height of 377 pixels—make it 378, because video encoding algorithms tend to prefer even dimensions.
This I then need to pad out to a final height of 576 pixels by adding black bars at the top and bottom. FFmpeg can do this with its “pad” filter, specified as -vf pad=width:height:xoffset:yoffset. To keep the video nicely centred on the screen, the bars should have equal heights of (576 - 378) / 2 = 99 pixels.
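The two-stage calculation can be checked with a short script (the variable names are mine; the arithmetic is exactly as described above):

```shell
#!/bin/sh
# Reproduce the two-stage rescaling arithmetic from the text.
# All numbers come from the worked example; nothing here is FFmpeg-specific.
SRC_W=608; SRC_H=224   # source frame size
DST_W=720              # target width on the DVD frame
PAL_H=576              # PAL DVD frame height
SQ_H=405               # height of a square-pixel 16:9 frame at width 720

# Stage 1: uniform scale to width 720 -> height (720 / 608) * 224, rounded
H1=$(awk "BEGIN { printf \"%d\", ($DST_W / $SRC_W) * $SRC_H + 0.5 }")
# Stage 2: stretch vertically by 576 / 405 for the non-square DVD pixels
H2=$(awk "BEGIN { printf \"%d\", $H1 * $PAL_H / $SQ_H + 0.5 }")
# Round up to an even number, since encoders prefer even dimensions
H_EVEN=$(( (H2 + 1) / 2 * 2 ))
# Equal black bars above and below to pad out to 576
PAD=$(( (PAL_H - H_EVEN) / 2 ))
echo "scaled size: ${DST_W}x${H_EVEN}, pad offset: $PAD"
```

Running it prints the 720x378 frame size and 99-pixel offset used in the command below.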
So the complete FFmpeg command looks something like this:
ffmpeg -i in.mpg -target pal-dvd -s 720x378 -vf pad=720:576:0:99 -aspect 16:9 out.mpg