\input texinfo @c -*- texinfo -*-

@settitle FFmpeg FAQ
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage

@top

@contents

@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?

Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?

No. Windows DLLs are not portable; they are bloated and often slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.

@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.

@section Which codecs are supported by Windows?

Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.

The following list of video codecs should work on most Windows systems:
@table @option
@item msmpeg4v2
.avi/.asf
@item msmpeg4
.asf only
@item wmv1
.asf only
@item wmv2
.asf only
@item mpeg4
Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
@item mpeg1video
.mpg only
@end table
Note that ASF files often have .wmv or .wma extensions in Windows. It should also
be mentioned that Microsoft claims a patent on the ASF format, and may sue
or threaten users who create ASF files with non-Microsoft software. It is
strongly advised to avoid ASF where possible.

The following list of audio codecs should work on most Windows systems:
@table @option
@item adpcm_ima_wav
@item adpcm_ms
@item pcm_s16le
always
@item libmp3lame
If some MP3 codec like LAME is installed.
@end table

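As a rough illustration, a command along the following lines (the file names
are hypothetical, and libmp3lame must be enabled in your FFmpeg build) produces
an AVI file using only codecs from the lists above:

@example
  ffmpeg -i input.mov -c:v msmpeg4v2 -c:a libmp3lame output.avi
@end example
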
@chapter Compilation

@section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}

This is a bug in gcc. Do not report it to us. Instead, please report it to
the gcc developers. Note that we will not add workarounds for gcc bugs.

Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...

@chapter Usage

@section ffmpeg does not work; what is wrong?

Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help, see
(@url{http://ffmpeg.org/bugreports.html}).

@section How do I encode single pictures into movies?

First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
  ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.

@file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc...

If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using the Bourne
shell syntax, symbolically links all files in the current directory
that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.

@example
  x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example

If you want to sequence them by oldest modified first, substitute
@code{$(ls -r -t *jpg)} in place of @code{*jpg}.

Then run:

@example
  ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.

@section How do I encode a movie to single pictures?

Use:

@example
  ffmpeg -i movie.mpg movie%d.jpg
@end example

The @file{movie.mpg} used as input will be converted to
@file{movie1.jpg}, @file{movie2.jpg}, etc...

Instead of relying on file format self-recognition, you may also use
@table @option
@item -c:v ppm
@item -c:v png
@item -c:v mjpeg
@end table
to force the encoding.

Applying that to the previous example:
@example
  ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example

Beware that there is no "jpeg" codec. Use "mjpeg" instead.

@section Why do I see a slight quality degradation with multithreaded MPEG* encoding?

For multithreaded MPEG* encoding, the encoded slices must be independent;
otherwise thread n would practically have to wait for thread n-1 to finish, so
it is quite logical that there is a small reduction in quality. This is not a bug.

@section How can I read from the standard input or write to the standard output?

Use @file{-} as the file name.

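For example, the following sketch (the file names are hypothetical, and the MP3
output requires an FFmpeg build with libmp3lame) reads from a pipe and writes to
the standard output; note that @option{-f} is required on output because the
format cannot be guessed from an extension:

@example
  cat input.wav | ffmpeg -i - -c:a libmp3lame -f mp3 - > output.mp3
@end example
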
@section -f jpeg doesn't work.

Try '-f image2 test%d.jpg'.

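As a complete command line, that could look like the following sketch (the input
file name is hypothetical):

@example
  ffmpeg -i input.mpg -f image2 test%d.jpg
@end example
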
@section Why can I not change the frame rate?

Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
Choose a different codec with the -c:v command line option.

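For example, the following sketch (the file names and the chosen rate are
hypothetical) re-encodes to MPEG-4, which accepts arbitrary frame rates:

@example
  ffmpeg -i input.avi -r 12.5 -c:v mpeg4 output.avi
@end example
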
@section How do I encode Xvid or DivX video with ffmpeg?

Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use '-c:v mpeg4' to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
force the fourcc 'xvid' to be stored as the video fourcc rather than the
default.

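A minimal sketch putting those options together (the file names are hypothetical):

@example
  ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
@end example
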
@section Which are good parameters for encoding high quality MPEG-4?

'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

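Put together as a two-pass run, this could look like the following sketch (the
file names and the bitrate are hypothetical):

@example
  ffmpeg -i input.avi -c:v mpeg4 -b:v 1000k -mbd rd -flags +mv4+aic \
         -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1 -an -f rawvideo -y /dev/null
  ffmpeg -i input.avi -c:v mpeg4 -b:v 1000k -mbd rd -flags +mv4+aic \
         -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 2 output.avi
@end example
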
@section Which are good parameters for encoding high quality MPEG-1/MPEG-2?

'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware the '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

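For instance, a single-pass MPEG-2 sketch along those lines (the file names and
the bitrate are hypothetical):

@example
  ffmpeg -i input.avi -c:v mpeg2video -b:v 4000k -mbd rd -trellis 2 \
         -cmp 2 -subcmp 2 -g 100 output.mpg
@end example
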
@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?

You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
material, and try '-top 0/1' if the result looks really messed-up.

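For example, a sketch keeping the interlacing in an MPEG-2 re-encode (the file
names are hypothetical, and the top-field-first choice depends on your source):

@example
  ffmpeg -i input.mpg -c:v mpeg2video -flags +ilme+ildct -top 1 output.mpg
@end example
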
@section How can I read DirectShow files?

If you have built FFmpeg with @code{./configure --enable-avisynth}
(only possible on MinGW/Cygwin platforms),
then you may use any file that DirectShow can read as input.

Just create an "input.avs" text file with this single line ...
@example
  DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
  ffmpeg -i input.avs
@end example

For ANY other help on Avisynth, please visit the
@uref{http://www.avisynth.org/, Avisynth homepage}.

@section How can I join video files?

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow video files to be
joined by merely concatenating them.

Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Additionally, you can use the @code{concat} protocol instead of @code{cat} or
@code{copy}, which will avoid creation of a potentially huge intermediate file.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Note that you may need to escape the character "|" which is special for many
shells.

Another option is to use named pipes, should your platform support them:

@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -qscale:v 2 -c:v mpeg4 -acodec libmp3lame -q:a 4 output.avi
@end example

Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
@code{tail} as seen below. Note that when piping through @code{tail} you
must use command grouping, @code{@{  ;@}}, to background properly.

For example, let's say we want to join two FLV files into an output.flv file:

@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -qscale:v 2 -y output.flv
rm temp[12].[av] all.[av]
@end example

@section -profile option fails when encoding H.264 video with AAC audio

@command{ffmpeg} prints an error like

@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example

Short answer: write @option{-profile:v} instead of @option{-profile}.

Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically, the AAC encoder also defines some profiles, none
of which are named @var{baseline}.

The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.

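For example, a sketch of a corrected command line (the file names are
hypothetical; libx264 is assumed to be enabled in your build, and older native
AAC encoders may additionally require @option{-strict experimental}):

@example
  ffmpeg -i input.mov -c:v libx264 -profile:v baseline -c:a aac output.mp4
@end example
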
@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.

Use @option{-dumpgraph -} to find out exactly where the channel layout is
lost.

Most likely, it is through @code{auto-inserted aconvert}. Try to understand
why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.

Then insert the correct @code{aconvert} explicitly in the filter graph,
specifying the exact format.

@example
aconvert=s16:stereo:packed
@end example

@section Why does FFmpeg not see the subtitles in my VOB file?

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles only appear later in the file,
they will not be initially detected.

Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.

The size of the initial scan is controlled by two options: @code{probesize}
(default ~5 MB) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.

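For example, the following sketch (the file names and the exact values are
hypothetical) raises both limits; note that these are input options and must be
placed before @option{-i}:

@example
  ffmpeg -probesize 50000000 -analyzeduration 100000000 -i input.vob -map 0 -c copy output.mkv
@end example
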
@chapter Development

@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?

Yes. Read the Developers Guide of the FFmpeg documentation. Alternatively,
examine the source code for one of the many open source projects that
already incorporate FFmpeg at (@url{projects.html}).

@section Can you support my C compiler XXX?

It depends. If your compiler is C99-compliant, then patches to support
it are likely to be welcome if they do not pollute the source code
with @code{#ifdef}s related to the compiler.

@section Is Microsoft Visual C++ supported?

No. Microsoft Visual C++ is not compliant with the C99 standard and does
not, among other things, support the inline assembly used in FFmpeg.
If you wish to use MSVC++ for your project, then you can link the MSVC++
code with libav* as long as you compile the latter with a working C
compiler. For more information, see the @emph{Microsoft Visual C++
compatibility} section in the FFmpeg documentation.

There have been efforts to make FFmpeg compatible with MSVC++ in the
past. However, they have all been rejected as too intrusive, especially
since MinGW does the job adequately. None of the core developers
work with MSVC++ and thus this item is low priority. Should you find
the silver bullet that solves this problem, feel free to shoot it at us.

We strongly recommend that you move from MSVC++ to the MinGW tools.

@section Can I use FFmpeg or libavcodec under Windows?

Yes, but the Cygwin or MinGW tools @emph{must} be used to compile FFmpeg.
Read the @emph{Windows} section in the FFmpeg documentation to find more
information.

To get help and instructions for building FFmpeg under Windows, check out
the FFmpeg Windows Help Forum at
@url{http://ffmpeg.arrozcru.org/}.

@section Can you add automake, libtool or autoconf support?

No. These tools are too bloated and they complicate the build.

@section Why not rewrite FFmpeg in object-oriented C++?

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.

@section Why are the ffmpeg programs devoid of debugging symbols?

The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug
information. Those binaries are stripped to create ffmpeg, ffplay, etc. If
you need the debug information, use the *_g versions.

@section I do not like the LGPL, can I contribute code under the GPL instead?

Yes, as long as the code is optional and can easily and cleanly be placed
under @code{#if CONFIG_GPL} without breaking anything. So, for example, a new
codec or filter would be OK under GPL while a bug fix to LGPL code would not.

@section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: @code{-lavdevice} must come before
@code{-lavformat}, @code{-lavutil} must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.

@example
  c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/example/Makefile} and @file{doc/example/pc-uninstalled} for
more details.

@section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using @code{extern "C"}.

See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}.

@section I'm using libavutil from within my C++ application but the compiler complains about 'UINT64_C' was not declared in this scope

FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them you have to append -D__STDC_CONSTANT_MACROS to your CXXFLAGS.

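For example, a hypothetical compile command could look like:

@example
  g++ -D__STDC_CONSTANT_MACROS $(pkg-config --cflags libavutil) -c myapp.cpp
@end example
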
@section I have a file in memory / an API different from *open/*read/ libc, how do I use it with libavformat?

You have to implement a URLProtocol, see @file{libavformat/file.c} in
FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer sources.

@section Where can I find libav* headers for Pascal/Delphi?

See @url{http://www.iversenit.dk/dev/ffmpeg-headers/}.

@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

See @url{http://www.ffmpeg.org/~michael/}.

@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?

Even though it is peculiar, being network-oriented, RTP is a container like any
other. You have to @emph{demux} RTP before feeding the payload to libavcodec.
In this specific case please look at RFC 4629 to see how it should be done.

@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

r_frame_rate is NOT the average frame rate; it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then r_frame_rate
will be 150, the least common multiple of the two.

@section Why is @code{make fate} not running all tests?

Make sure you have the fate-suite samples and the @code{SAMPLES} Make variable
or @code{FATE_SAMPLES} environment variable or the @code{--samples}
@command{configure} option is set to the right path.

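For example (the sample path is hypothetical):

@example
  make fate SAMPLES=/path/to/fate-suite
@end example
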
@section Why is @code{make fate} not finding the samples?

Do you happen to have a @code{~} character in the samples path to indicate a
home directory? The value is used in ways where the shell cannot expand it,
causing FATE to not find files. Just replace @code{~} by the full path.

@bye