[gimp-gap] switched to ffmpeg-0.6 libraries
- From: Wolfgang Hofer <wolfgangh src gnome org>
- To: commits-list gnome org
- Cc:
- Subject: [gimp-gap] switched to ffmpeg-0.6 libraries
- Date: Sun, 10 Oct 2010 13:13:39 +0000 (UTC)
commit 3ef9370d0df648a6b08b6b552ed8faa27061f38a
Author: Wolfgang Hofer <wolfgangh svn gnome org>
Date: Sun Oct 10 15:12:24 2010 +0200
switched to ffmpeg-0.6 libraries
ChangeLog | 170 ++++++++++++
NEWS | 8 +
configure.in | 125 +++++++---
docs/reference/txt/gap_gimprc_params.txt | 7 +
extern_libs/README_extern_libs | 18 +-
extern_libs/configure_options_ffmpeg.txt | 9 +-
extern_libs/ffmpeg.tar.gz | Bin 3259994 -> 4462285 bytes
gap/gap_morph_main.c | 2 +-
libgapvidapi/gap_vid_api_ffmpeg.c | 194 ++++++++++++---
vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c | 147 ++++++++++-
vid_enc_ffmpeg/gap_enc_ffmpeg_main.c | 425 ++++++++++++++++++++++--------
vid_enc_ffmpeg/gap_enc_ffmpeg_main.h | 13 +
vid_enc_ffmpeg/gap_enc_ffmpeg_par.c | 48 ++++-
13 files changed, 973 insertions(+), 193 deletions(-)
---
diff --git a/ChangeLog b/ChangeLog
index e9a41d2..5979f02 100755
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,173 @@
+2010-10-10 Wolfgang Hofer <hof gimp org>
+
+- GIMP-GAP has switched its main video encoder/decoder engine
+ from ffmpeg-0.5 to ffmpeg-0.6
+
+ ( Note that gimp-gap still supports compiling/linking with the
+ old ffmpeg-0.5 libraries.
+
+ To do so, just replace the file extern_libs/ffmpeg.tar.gz
+ with a copy from an older release that contains the 0.5 release
+ and then configure gimp-gap with the option:
+
+ --disable-ff-libx264
+ )
+
+
+ *** COMPATIBILITY WARNING:
+
+ Positioning to exact frame numbers in
+ videos that do not start with a keyframe, or that have read errors,
+ is not compatible with older ffmpeg-0.5 based GIMP-GAP releases.
+
+ libavcodec prints error logging messages to stderr like this:
+ [mpeg2video @ 0x8d7c290]warning: first frame is no keyframe
+ [mpeg2video @ 0x8d7ef50]mpeg_decode_postinit() failure
+
+
+ Decoding of the first few frames of a video that does not start with a keyframe
+ triggers codec-internal error handling and delivers different results
+ in different releases of the ffmpeg libraries.
+
+ In my tests with mpeg2 encoded .vob files (DVD stuff) ffmpeg-0.6 detects
+ and decodes some more leading non-key frames than ffmpeg-0.5 did.
+
+ BUT THIS BREAKS BACKWARDS COMPATIBILITY ON POSITIONING BY FRAME NUMBER !
+
+ Therefore storyboards that were created with older releases and include
+ references to frames in such videos may now refer to a different frame
+ when processed with the newer release.
+
+
+- video API feature: continue after read errors for the ffmpeg based implementation.
+
+ Some of my testvideos that could be read with ffmpeg-0.5 based decoding
+ now fail, because ffmpeg-0.6 seems to have more restrictive error detection,
+ at least when decoding mpeg2 videos.
+
+ The ffmpeg based GAP video API now supports a gimprc parameter
+ (video-gva-libavformat-continue-after-read_errors "yes")
+ that allows reading to continue after read errors, i.e. it skips the
+ broken frame and reads further frames after a read error.
+ The default value is "yes" (i.e. TRUE).
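+
+ A minimal sketch of this behaviour, written against the public ffmpeg-0.6
+ API only (the helper name and parameters below are illustrative and not
+ part of GIMP-GAP; the real implementation is in the patch to
+ libgapvidapi/gap_vid_api_ffmpeg.c further down):
+
+   #include <libavformat/avformat.h>
+   #include <libavcodec/avcodec.h>
+
+   /* decode the next video frame; optionally skip packets that fail to
+    * decode. returns 1 when a picture was decoded, 0 on EOF, -1 on error.
+    */
+   static int
+   decode_next_frame(AVFormatContext *fmt, AVCodecContext *codec,
+                     int video_stream_index, AVFrame *picture,
+                     int continue_after_read_errors)
+   {
+     AVPacket pkt;
+     int got_picture = 0;
+
+     while (got_picture == 0)
+     {
+       if (av_read_frame(fmt, &pkt) < 0)
+       {
+         return 0;                           /* EOF (or unreadable stream) */
+       }
+       if (pkt.stream_index != video_stream_index)
+       {
+         av_free_packet(&pkt);
+         continue;                           /* not a video packet */
+       }
+       if (avcodec_decode_video2(codec, picture, &got_picture, &pkt) < 0)
+       {
+         av_free_packet(&pkt);
+         if (continue_after_read_errors)
+         {
+           continue;                         /* skip the broken packet */
+         }
+         return -1;                          /* stop at the first read error */
+       }
+       av_free_packet(&pkt);
+     }
+     return 1;
+   }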
+
+
+
+
+- changed menu path for morphing tools
+ from: "<Image>/Video/Morph/"
+ to:   "<Image>/Video/Morphing/"
+ I made this change because in German settings (LANG=de_DE.UTF-8)
+ the Morph submenu was presented where the invocation of MovePath
+ (Bewegungspfad) was expected, and Morph was not present at all.
+ ==> TODO check the German translation for errors.
+
+
+
+ -- 2010-09-18 ----
+
+
+- video API fixed for usage with ffmpeg-0.6 (MJPEG bug)
+ Some codecs of the ffmpeg-0.6 release no longer deliver valid
+ input length information for a successfully decoded frame.
+ The fix assumes that the decoded frame size is the full packet length.
+ Note that the packet is already read with av_read_frame, which shall already
+ collect all data of one frame in the packet.
+ The case where multiple frames are included in one packet
+ cannot be handled after this fix.
+ But this theoretical case does not occur in my testvideos
+ and probably is not relevant at all (or probably is already handled
+ at a lower level internally in av_read_frame).
+ Note that the GAP video API was written before av_read_frame
+ was available and had to deal with collecting the data
+ for one frame itself.
+ (A minimal sketch of the workaround follows below.)
+
+
+ Furthermore, native seek support is now disabled in case the codec
+ just delivers DTS timecodes with value 0.
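+
+ A minimal sketch of the MJPEG length workaround mentioned above (the helper
+ below is illustrative and not the actual GIMP-GAP code; the real fix is in
+ the patch to libgapvidapi/gap_vid_api_ffmpeg.c further down):
+
+   #include <libavcodec/avcodec.h>
+
+   /* decode one packet and apply the ffmpeg-0.6 length workaround:
+    * when the codec reports a consumed byte count that differs from the
+    * packet size, trust the packet size, because av_read_frame already
+    * delivers exactly one frame per packet.
+    */
+   static int
+   decode_packet_full_length(AVCodecContext *codec, AVFrame *picture,
+                             AVPacket *pkt, int *got_picture)
+   {
+     int len = avcodec_decode_video2(codec, picture, got_picture, pkt);
+
+     if ((len > 0) && (len != pkt->size))
+     {
+       len = pkt->size;  /* assume the whole packet (= one frame) was consumed */
+     }
+     return len;
+   }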
+
+
+
+- gimp-gap configure.in was changed to detect current versions
+ of libx264 and libxvid.
+ It is now ready for ffmpeg-0.6 (and prepared for the latest snapshots).
+ (Both libs require stdint.h (or inttypes.h) to be included
+ before including their header files.)
+ xvid now depends on the pthread library,
+ therefore the option --enable-pthreads is added when ffmpeg is configured
+ with the option --enable-libxvid.
+
+ The older ffmpeg-0.5 fails to detect a properly installed current
+ version of the libx264 library. This is OK, because it requires an old
+ libx264 version (greater than or equal to 0.65 but less than 0.85) to compile.
+
+ gimp-gap still supports using the old ffmpeg-0.5,
+ but then you have to use the gimp-gap configure option
+
+ --disable-ff-libx264
+
+ Further note that the latest ffmpeg snapshots have already
+ dropped libfaad support after the ffmpeg-0.6 release.
+ To compile gimp-gap with a latest ffmpeg snapshot it is required
+ to configure with the option:
+
+ --disable-ff-libfaad
+
+
+- various bug fixes in the ffmpeg based video encoder.
+
+
+ o) added handling to compensate encoder frame latency.
+ Some codecs keep frames in an internal buffer and start
+ generating output only when this internal buffer is filled.
+ After feeding the last input frame N to the codec,
+ the obtained output frame is the one with number N minus the number of internally cached frames.
+ The fix continues feeding the last frame to the codec
+ multiple times until all relevant frames are written
+ to the output video file.
+
+ Without this fix, some frames at the end
+ were not written to the resulting video output file,
+ especially for codecs with high latency like libx264.
+ (A minimal sketch of this flush logic follows after this list of fixes.)
+
+ o) some codecs (x264) just pass through the pts information obtained from
+ the AVFrame struct (big_picture_codec).
+ The fix now sets a valid pts value when initializing
+ the AVFrame struct before encoding a video frame.
+ Note that an invalid pts could produce unusable videos;
+ it is also relevant for the playback order of the frames.
+ (A small pts-initialization sketch also follows after this list of fixes.)
+
+ o) fixed another bug relevant to colormodel conversion.
+ The fix applies to codecs that want BGR input
+ (don't know if there is such a codec).
+ Note that gap_gve_raw_RGB_drawable_encode was recently fixed to deliver RGB (not BGR).
+
+ o) qmin/qmax upper limit changed to 51
+
+ o) GUI parameter support for partition flags and codec flags that are new in ffmpeg-0.6:
+ parti4x4
+ parti8x8
+ partp4x4
+ partp8x8
+ partb8x8
+
+ MB-Tree RC // codec_FLAG2_MBTREE
+ PSY // codec_FLAG2_PSY
+ Compute SSIM // codec_FLAG2_SSIM
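+
+ A minimal sketch of the latency flush described above (illustrative only;
+ the helper name, the write_packet callback and the frame counters are
+ assumptions, the actual handling is part of the changes to
+ vid_enc_ffmpeg/gap_enc_ffmpeg_main.c in this commit):
+
+   #include <libavcodec/avcodec.h>
+
+   /* after the last input frame has been fed, keep encoding the same
+    * (last) picture until the codec has delivered as many output frames
+    * as input frames were fed, or until it has nothing more to deliver.
+    */
+   static void
+   flush_delayed_frames(AVCodecContext *codec, AVFrame *last_picture,
+                        uint8_t *outbuf, int outbuf_size,
+                        int frames_fed, int *frames_written,
+                        void (*write_packet)(uint8_t *buf, int size))
+   {
+     while (*frames_written < frames_fed)
+     {
+       int size = avcodec_encode_video(codec, outbuf, outbuf_size, last_picture);
+       if (size <= 0)
+       {
+         break;                       /* codec has nothing more to deliver */
+       }
+       write_packet(outbuf, size);    /* write one delayed output frame */
+       (*frames_written)++;
+     }
+   }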
+
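+ And the pts initialization idea as a small illustrative helper (it
+ corresponds to the pts assignment added in p_ffmpeg_write_frame_chunk
+ further down in this patch):
+
+   #include <libavcodec/avcodec.h>
+
+   /* give the AVFrame a monotonically increasing pts before each
+    * avcodec_encode_video() call, so that codecs which pass the pts
+    * through (e.g. x264) emit frames with valid, increasing timestamps.
+    */
+   static void
+   set_frame_pts(AVFrame *picture, int64_t encode_frame_nr)
+   {
+     picture->pts = encode_frame_nr - 1;  /* pts counted from 0; encode_frame_nr is assumed to start at 1 */
+   }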
+
+ * configure.in
+ * NEWS
+ * gap/gap_morph_main.c
+ * extern_libs/configure_options_ffmpeg.txt
+ * extern_libs/README_extern_libs
+ * extern_libs/ffmpeg.tar.gz # update from ffmpeg-0.5 to ffmpeg-0.6
+ * vid_enc_ffmpeg/gap_enc_ffmpeg_main.c [.h]
+ * vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c
+ * vid_enc_ffmpeg/gap_enc_ffmpeg_par.c
+ * libgapvidapi/gap_vid_api_ffmpeg.c
+ * docs/reference/txt/gap_gimprc_params.txt
+
2010-09-09 Wolfgang Hofer <hof gimp org>
- applied patch #625667 that fixed various spelling mistakes.
diff --git a/NEWS b/NEWS
index ca549e5..aabca56 100644
--- a/NEWS
+++ b/NEWS
@@ -55,6 +55,14 @@ Here is a short overview whats new in GIMP-GAP-2.7.0:
- support to run gimp_color_balance tool as animated filter
(added wrapper plug-in).
+- updated gimp-gap video API and ffmpeg-based video encoder
+ to support the libraries provided with the ffmpeg-0.6 release.
+ This includes various bugfixes related to video decoding/encoding
+ (but breaks backwards compatibility when seeking positions by frame number
+ in videofiles that do not start with a keyframe).
+
+ .. see ChangeLog for details.
+
- some bugfixes (See ChangeLog for details)
diff --git a/configure.in b/configure.in
index a5b8998..448792e 100755
--- a/configure.in
+++ b/configure.in
@@ -149,6 +149,17 @@ and OLD mpeg encoders.
AM_CONDITIONAL(GAP_UNIX_FRONTENDS, test "x$enable_unix_frontends" != "xno")
+
+dnl check for pthread library (required for libmpeg3 and recent versions of the xvid library)
+dnl the check result does not matter unless libmpeg3 or xvid is linked or built later on.
+pthread_err=""
+AC_CHECK_LIB(pthread, pthread_create,
+ [AC_CHECK_HEADER(pthread.h,
+ GAP_PTHREAD_LIB=" -lpthread",
+ pthread_err="pthread header file (pthread.h) not found")],
+ pthread_err="pthread library (libpthread) not found")
+
+
dnl check for bzip2 library (for ffmpeg matroskadec )
dnl the check result does not matter unless libavformat is linked or built later on.
FF_BZIP2=""
@@ -198,7 +209,6 @@ AC_ARG_ENABLE(ff_libfaac,
dnl
dnl check for faad library (useful for better ffmpeg audio support)
-dnl FAAC is an open source MPEG-4 and MPEG-2 AAC encoder, it is licensed under the LGPL
dnl the check result does not matter unless libavformat is linked or built later on.
FF_LIBFAAD=""
faad_warn=""
@@ -227,37 +237,78 @@ dnl
dnl check for x264 library (additional for ffmpeg video codec support)
-dnl ### TODO check are no longer sufficient for recent ffmpeg 2009.01.31.
-dnl ### check shall verify the condition "X264_BUILD >= 65
-dnl ### Note that the missing check it is not critical, since ffmpeg configure script has such a check
-dnl ### and turns off x264 support when older versions are detected, and prints an error like this:
-dnl ### ERROR: libx264 version must be >= 0.65.
+dnl ###
+dnl ### set X264_REQUIRED_VERSION="0.65" to support ffmpeg-0.5 (condition "X264_BUILD >= 65" for ffmpeg-0.5)
+dnl ### set X264_REQUIRED_VERSION="0.85" to support ffmpeg-0.6 (broken mjpeg decoder)
+dnl ### set X264_REQUIRED_VERSION="0.98" to support ffmpeg-2010-09-14 snapshot (for testing)
+dnl ### new libx264 versions (X264_BUILD >= 93) require including stdint.h (or inttypes.h)
+dnl ### before including the header file x264.h
+dnl ### therefore we provide this requirement as the 4th parameter of the AC_CHECK_HEADER macro.
+dnl ### but AC_CHECK_LIB now fails when checking for procedure x264_encoder_open
+dnl ### because x264_encoder_open is a define constant that refers to x264_encoder_open_<X264_BUILD>
+dnl ### therefore the check was changed to the procedure x264_encoder_close
+dnl ### (don't want an AC_CHECK_LIB check that works only for one hardcoded version.)
dnl the check result does not matter unless libavformat is linked or built later on.
+
+
+X264_REQUIRED_VERSION="0.85"
+X264_REQUIRED_INC='#ifdef HAVE_STDINT_H
+#include <stdint.h>
+#endif'
+
FF_LIBX264=""
x264_warn=""
AC_ARG_ENABLE(ff_libx264,
[ --disable-ff-libx264 configure libavformat without optional open source variant of H264 video codec])
+dnl if test "x$enable_ff_libx264" != "xno"; then
+dnl AC_CHECK_LIB(x264, x264_encoder_open_93,
+dnl [AC_CHECK_HEADER(x264.h,
+dnl FF_LIBX264="-lx264",
+dnl x264_warn="$NEW_LINE x264 header file (x264.h) not found (not critical, but no open H264 video codec support)",
+dnl $X264_REQUIRED_INC)],
+dnl x264_warn="$NEW_LINE x264 library (libx264) not found (not critical, but no open H264 video codec support)")
+dnl fi
if test "x$enable_ff_libx264" != "xno"; then
- AC_CHECK_LIB(x264, x264_encoder_open,
- [AC_CHECK_HEADER(x264.h,
+ if ! $PKG_CONFIG --atleast-version=$X264_REQUIRED_VERSION x264; then
+ x264_warn="$NEW_LINE x264 library (libx264) required version $X264_REQUIRED_VERSION not found (not critical, but no open H264 video codec support)"
+ else
+ AC_CHECK_LIB(x264, x264_encoder_close,
+ [AC_CHECK_HEADER(x264.h,
FF_LIBX264="-lx264",
- x264_warn="$NEW_LINE x264 header file (x264.h) not found (not critical, but no open H264 video codec support)")],
- x264_warn="$NEW_LINE x264 library (libx264) not found (not critical, but no open H264 video codec support)")
+ x264_warn="$NEW_LINE x264 header file (x264.h) not found (not critical, but no open H264 video codec support)",
+ $X264_REQUIRED_INC)],
+ x264_warn="$NEW_LINE x264 library (libx264) not found (not critical, but no open H264 video codec support)")
+ fi
fi
dnl check for xvid library (additional for ffmpeg open xvid video codec support)
dnl the check result does not matter unless libavformat is linked or built later on.
+
+XVID_REQUIRED_INC='#ifdef HAVE_STDINT_H
+#include <stdint.h>
+#endif'
+
xvid_warn=""
FF_LIBXVID=""
AC_ARG_ENABLE(ff_libxvid,
[ --disable-ff-libxvid configure libavformat without optional open source variant of xvid video codec])
+dnl if test "x$enable_ff_libxvid" != "xno"; then
+dnl AC_CHECK_LIB(xvidcore, xvid_global,
+dnl [AC_CHECK_HEADER(xvid.h,
+dnl FF_LIBXVID="-lxvidcore",
+dnl xvid_warn="$NEW_LINE xvid header file (xvid.h) not found (not critical, but no open xvid video codec support)")],
+dnl xvid_warn="$NEW_LINE xvid library (libxvidcore) not found (not critical, but no open xvid video codec support)")
+dnl fi
if test "x$enable_ff_libxvid" != "xno"; then
- AC_CHECK_LIB(xvidcore, xvid_global,
- [AC_CHECK_HEADER(xvid.h,
+ if test "x$pthread_err" != "x"; then
+ xvid_warn="$NEW_LINE $pthread_err $NEW_LINE xvid disabled because it depends on missing pthread library (not critical, but no open xvid video codec support)"
+ else
+ AC_CHECK_HEADER(xvid.h,
FF_LIBXVID="-lxvidcore",
- xvid_warn="$NEW_LINE xvid header file (xvid.h) not found (not critical, but no open xvid video codec support)")],
- xvid_warn="$NEW_LINE xvid library (libxvidcore) not found (not critical, but no open xvid video codec support)")
+ xvid_warn="$NEW_LINE xvid header file (xvid.h) not found (not critical, but no open xvid video codec support)",
+ $XVID_REQUIRED_INC)
+ fi
fi
@@ -290,6 +341,8 @@ parent_dir=`pwd`
cd "$pwd_dir"
extern_libs_dir="$pwd_dir/extern_libs"
+
+
dnl Test for video libavformat (FFMPEG)
dnl -----------------------------------
dnl the ffmpeg libs are both used as decoder implementations in the GAP Video API (GVA)
@@ -407,6 +460,10 @@ INFORMATION: old ffmpeg was moved to $FFMPEG_DIR-OLD
FFMPEG_LIBAVDEVICE_A="$FFMPEG_DIR/libavdevice/${LIBPREF}avdevice${LIBSUF}"
FFMPEG_LIBSWSCALE_A="$FFMPEG_DIR/libswscale/${LIBPREF}swscale${LIBSUF}"
dnl
+ dnl libavcore is required for recent development snapshots (such as ffmpeg-2010-09-14 that was used in a compile/link test)
+ dnl
+ FFMPEG_LIBAVCORE_A="$FFMPEG_DIR/libavcore/${LIBPREF}avcore${LIBSUF}"
+ dnl
dnl ffmpeg can be configured to use external codec libs x264, xvid ....
dnl options for ffmpeg ext libs configuration will be passed to ffmpeg/configure
dnl if some of those libs are installed (and not explicitly disabled)
@@ -416,8 +473,12 @@ INFORMATION: old ffmpeg was moved to $FFMPEG_DIR-OLD
FFMPEG_EXTLIBS="$FFMPEG_EXTLIBS -lws2_32"
fi
- GAP_VLIBS_FFMPEG=" $FFMPEG_LIBAVDEVICE_A $FFMPEG_LIBAVFORMAT_A $FFMPEG_LIBAVCODEC_A $FFMPEG_LIBAVUTIL_A $FFMPEG_LIBSWSCALE_A $FFMPEG_EXTLIBS "
- GAP_VINCS_FFMPEG=" -I$FFMPEG_DIR -I$FFMPEG_DIR/libavcodec -I$FFMPEG_DIR/libavformat -I$FFMPEG_DIR/libavutil -I$FFMPEG_DIR/libavdevice -I$FFMPEG_DIR/libswscale"
+ GAP_VLIBS_FFMPEG=" $FFMPEG_LIBAVDEVICE_A $FFMPEG_LIBAVFORMAT_A $FFMPEG_LIBAVCODEC_A $FFMPEG_LIBAVUTIL_A $FFMPEG_LIBSWSCALE_A "
+
+ dnl libavcore is required for recent development snapshots (such as ffmpeg-2010-09-14 that was used in a compile/link test)
+ dnl GAP_VLIBS_FFMPEG=" $GAP_VLIBS_FFMPEG $FFMPEG_LIBAVCORE_A"
+ GAP_VLIBS_FFMPEG=" $GAP_VLIBS_FFMPEG $FFMPEG_EXTLIBS "
+ GAP_VINCS_FFMPEG=" -I$FFMPEG_DIR -I$FFMPEG_DIR/libavcodec -I$FFMPEG_DIR/libavformat -I$FFMPEG_DIR/libavutil -I$FFMPEG_DIR/libavdevice -I$FFMPEG_DIR/libswscale -I$FFMPEG_DIR/libavcore"
vid_ffmpeg_warning="
$x264_warn
@@ -481,7 +542,7 @@ INFORMATION: old ffmpeg was moved to $FFMPEG_DIR-OLD
dnl configure ffmpeg libxvid usage (for optional open xvid codec support via libxvid)
if test "x$FF_LIBXVID" != "x"; then
- FFMPEG_CONFIGURE_OPTIONS="$FFMPEG_CONFIGURE_OPTIONS --enable-libxvid"
+ FFMPEG_CONFIGURE_OPTIONS="$FFMPEG_CONFIGURE_OPTIONS --enable-pthreads --enable-libxvid"
fi
dnl in case nonfree libs are used ffmeg 0.6 configure requires --enable-nonfree
@@ -602,16 +663,6 @@ libmpeg3_info_msg="
libmpeg3_dependencies_ok="yes"
-
-dnl check for pthread library (required for libmpeg3)
-dnl the check result does not matter unless libmpeg3 is linked or built later on.
-pthread_err=""
-AC_CHECK_LIB(pthread, pthread_create,
- [AC_CHECK_HEADER(pthread.h,
- GAP_PTHREAD_LIB=" -lpthread",
- pthread_err="pthread header file (pthread.h) not found")],
- pthread_err="pthread library (libpthread) not found")
-
if test "x$pthread_err" != "x"; then
libmpeg3_dependencies_ok="no"
fi
@@ -885,19 +936,27 @@ dnl the xvid codec lib is used in the AVI videoencoder plug-in for
dnl MPEG4 support xvid should compile on many operating systems,
dnl including UNIX and WINDOWS systems (ffmpeg has its own builtin
dnl MPEG4 codec implementation and does not depend on this codec)
-dnl
+xvid_err=""
+if test "x$pthread_err" != "x"; then
+ xvid_err="$pthread_err $NEW_LINE xvid disabled because it depends on missing pthread library"
+fi
AC_ARG_ENABLE(libxvidcore,
[ --disable-libxvidcore don't build with libxvidcore])
if test "x$enable_libxvidcore" != "xno"; then
dnl
dnl check for libxvidcore
dnl
- xvid_err=""
- AC_CHECK_LIB(xvidcore, xvid_encore,
- [AC_CHECK_HEADER(xvid.h,
+ dnl AC_CHECK_LIB(xvidcore, xvid_encore,
+ dnl [AC_CHECK_HEADER(xvid.h,
+ dnl GAP_VLIBS_XVIDCORE=" -lxvidcore",
+ dnl xvid_err="xvid header file (xvid.h) not found",
+ dnl $XVID_REQUIRED_INC)],
+ dnl xvid_err="xvid library (libxvidcore) not found")
+ dnl
+ AC_CHECK_HEADER(xvid.h,
GAP_VLIBS_XVIDCORE=" -lxvidcore",
- xvid_err="xvid header file (xvid.h) not found")],
- xvid_err="xvid library (libxvidcore) not found")
+ xvid_err="xvid header file (xvid.h) not found",
+ $XVID_REQUIRED_INC)
if test "x$xvid_err" = "x"; then
AC_DEFINE(ENABLE_LIBXVIDCORE, 1,
diff --git a/docs/reference/txt/gap_gimprc_params.txt b/docs/reference/txt/gap_gimprc_params.txt
index a5071e0..5435fcb 100644
--- a/docs/reference/txt/gap_gimprc_params.txt
+++ b/docs/reference/txt/gap_gimprc_params.txt
@@ -254,6 +254,13 @@ If you edit gimprc files by hand, you must do this before you startup GIMP.
#
(video-gva-libavformat-video-analyse-persistent "yes")
+# the API for ffmpeg video access can be configured how to handle
+# read errors when decoding video files with the gimprc parameter video-gva-libavformat-continue-after-read_errors
+# the default option ("yes") is to skip the current packet (i.e. the frame that can't be read/decoded)
+# and try to continue reading the next frame.
+# the other option ("no") is to stop when the first read error occurs
+# (available since 2010.10.02)
+(video-gva-libavformat-continue-after-read_errors "yes")
# gimp_gap frame fetcher configuration
# ------------------------------------
diff --git a/extern_libs/README_extern_libs b/extern_libs/README_extern_libs
index 5d69127..84dc99d 100755
--- a/extern_libs/README_extern_libs
+++ b/extern_libs/README_extern_libs
@@ -3,7 +3,7 @@ as sourcecode for convenient static linking.
CURRENT FFMPEG version is:
-- ffmpeg 0.5
+- ffmpeg 0.6
CURRENT LIBMPEG3 version is:
@@ -20,7 +20,9 @@ of those libs.
ffmpeg
--------------
-GIMP-GAP uses the libraries libavformat, libavcodec, libavutil, libavdevice and libswscale
+GIMP-GAP uses the libraries
+ libavformat, libavcodec, libavutil, libavdevice and libswscale
+ (libavcore -- new lib in latest snapshots of ffmpeg, not yet required)
that are part of ffmpeg.
Those libs build up the basic videofile support for
many MPEG based fileformats, both for read and write access
@@ -46,13 +48,19 @@ this GIMP-GAP distribution.
working.
GIMP-GAP currently supports
+ o) ffmpeg-0.6 basically works, tests are in progress
+
o) ffmpeg-0.5 successfully tested with many videoformats
- and
- o) ffmpeg-0.6 basically works, but fails on some videoformats like MJPEG encdoded AVI files
- (my Olympus SP560UZ Camera records videos in this format).
+ but no longer works with recent versions of libx264 (X264_BUILD 93)
+ (use gimp-gap configure option --disable-ff-libx264
+ when compiling/linking with ffmpeg-0.5)
+
+
newer ffmpeg GIT (or SVN) snapshots
may or may not compile, link and run with this GIMP-GAP release.
+ (a compile/link test was done with ffmpeg-2010-09-14 snapshot
+ that no longer supports libfaad; use --disable-ff-libfaad)
GIMP-GAP can be configured to be compiled without ffmpeg (in this case
diff --git a/extern_libs/configure_options_ffmpeg.txt b/extern_libs/configure_options_ffmpeg.txt
index 80cef8b..4830ed1 100644
--- a/extern_libs/configure_options_ffmpeg.txt
+++ b/extern_libs/configure_options_ffmpeg.txt
@@ -1,4 +1,4 @@
---enable-shared --enable-static --disable-mmx --enable-gpl
+--disable-shared --enable-static --disable-mmx --enable-gpl
# recent ffmpeg releases does no longer support --enable-liba52
# for audio /mp3 encoding ffmpeg recommands to link with the external libraries.
#
@@ -6,10 +6,11 @@
# are installed and adds further options automatically
#
# --enable-libfaac
-# --enable-libfaad
+# --enable-libfaad (versions after ffmpeg-0.6 have removed this option)
# --enable-libmp3lame
# --enable-libx264 (old name: --enable-x264)
-# --enable-libxvid (old name: --enable-xvid)
-#
+# --enable-libxvid --enable-pthreads (old name: --enable-xvid)
+# --enable-swscale
+# --enable-nonfree
# options for the ffmpeg configure
diff --git a/extern_libs/ffmpeg.tar.gz b/extern_libs/ffmpeg.tar.gz
old mode 100644
new mode 100755
index 494345a..b04410f
Binary files a/extern_libs/ffmpeg.tar.gz and b/extern_libs/ffmpeg.tar.gz differ
diff --git a/gap/gap_morph_main.c b/gap/gap_morph_main.c
index e22b8eb..09870ea 100644
--- a/gap/gap_morph_main.c
+++ b/gap/gap_morph_main.c
@@ -295,7 +295,7 @@ static void query (void)
{
/* Menu names */
- const char *menupath_image_video_morph = N_("<Image>/Video/Morph/");
+ const char *menupath_image_video_morph = N_("<Image>/Video/Morphing/");
gimp_plugin_menu_register (PLUG_IN_NAME, menupath_image_video_morph);
gimp_plugin_menu_register (PLUG_IN_NAME_TWEEN, menupath_image_video_morph);
diff --git a/libgapvidapi/gap_vid_api_ffmpeg.c b/libgapvidapi/gap_vid_api_ffmpeg.c
index d38fe96..002912d 100644
--- a/libgapvidapi/gap_vid_api_ffmpeg.c
+++ b/libgapvidapi/gap_vid_api_ffmpeg.c
@@ -53,6 +53,11 @@
#define AVMEDIA_TYPE_VIDEO CODEC_TYPE_VIDEO
#define AVMEDIA_TYPE_AUDIO CODEC_TYPE_AUDIO
#define AV_PKT_FLAG_KEY PKT_FLAG_KEY
+ static const char *GAP_FFMPEG_VERSION_STRING = "0.5";
+ static const char *PROCNAME_AVCODEC_DECODE_VIDEO = "avcodec_decode_video";
+#else
+ static const char *GAP_FFMPEG_VERSION_STRING = "0.6";
+ static const char *PROCNAME_AVCODEC_DECODE_VIDEO = "avcodec_decode_video2";
#endif
/* end ffmpeg 0.5 / 0.6 support */
@@ -73,6 +78,7 @@
#define MAX_TRIES_NATIVE_SEEK 3
#define GIMPRC_PERSISTENT_ANALYSE "video-gva-libavformat-video-analyse-persistent"
+#define GIMPRC_CONTINUE_AFTER_READ_ERRORS "video-gva-libavformat-continue-after-read_errors"
#define ANALYSE_DEFAULT TRUE
/* MAX_PREV_OFFSET defines how to record defered url_offest frames of previous frames for byte positions in video index
@@ -152,7 +158,6 @@ typedef struct t_GVA_ffmpeg
gboolean dummy_read; /* FALSE: read YUV + convert to RGB, TRUE: dont convert RGB */
gboolean capture_offset; /* TRUE: capture url_offsets to vindex while reading next frame */
gint32 max_frame_len;
- gint32 frame_len;
guint16 got_frame_length16; /* 16 lower bits of the length */
gint64 prev_url_offset[MAX_PREV_OFFSET];
gint32 prev_key_seek_nr;
@@ -187,6 +192,10 @@ typedef struct t_GVA_ffmpeg
struct SwsContext *img_convert_ctx;
+ gboolean continueAfterReadErrors; /* default TRUE: try to continue reading the next frame after read errors */
+ gint32 libavcodec_version_int; /* the ffmpeg libs version that was used to analyze the current video as integer LIBAVCODEC_VERSION_INT */
+ gint64 pkt1_dts; /* dts timecode offset of the 1st package of the current frame */
+
} t_GVA_ffmpeg;
@@ -329,6 +338,7 @@ p_wrapper_ffmpeg_open_read(char *filename, t_GVA_Handle *gvahand)
t_GVA_ffmpeg* handle;
AVInputFormat *iformat;
+
if(gap_debug) printf("p_wrapper_ffmpeg_open_read: START filename:%s\n", filename);
if(gvahand->filename == NULL)
@@ -336,12 +346,16 @@ p_wrapper_ffmpeg_open_read(char *filename, t_GVA_Handle *gvahand)
gvahand->filename = g_strdup(filename);
}
+
gvahand->decoder_handle = (void *)NULL;
gvahand->vtracks = 0;
gvahand->atracks = 0;
gvahand->frame_bpp = 3; /* RGB pixelformat */
handle = g_malloc0(sizeof(t_GVA_ffmpeg));
+ handle->continueAfterReadErrors = gap_base_get_gimprc_gboolean_value(GIMPRC_CONTINUE_AFTER_READ_ERRORS, TRUE);
+ handle->libavcodec_version_int = 0;
+ handle->pkt1_dts = AV_NOPTS_VALUE;
handle->dummy_read = FALSE;
handle->capture_offset = FALSE;
handle->guess_gop_size = 0;
@@ -462,7 +476,6 @@ p_wrapper_ffmpeg_open_read(char *filename, t_GVA_Handle *gvahand)
handle->inbuf_len = 0; /* start with empty buffer */
- handle->frame_len = 0; /* start with 0 frame length */
handle->inbuf_ptr = NULL; /* start with empty buffer, after 1.st av_read_frame: pointer to pkt.data read pos */
/* yuv_buffer big enough for all supported PixelFormats
@@ -915,6 +928,7 @@ p_url_ftell(ByteIOContext *s)
static t_GVA_RetCode
p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chunk_data)
{
+ static int ls_callNumber = 0;
t_GVA_ffmpeg *handle;
int l_got_picture;
int l_rc;
@@ -938,6 +952,7 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
,gvahand->height
);
+ ls_callNumber++;
l_got_picture = 0;
l_rc = 0;
l_len = 0;
@@ -948,6 +963,7 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
l_curr_url_offset = -1;
l_url_seek_nr = gvahand->current_seek_nr;
l_key_frame_detected = FALSE;
+ handle->pkt1_dts = AV_NOPTS_VALUE;
while(l_got_picture == 0)
{
@@ -974,15 +990,32 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
l_pktlen = av_read_frame(handle->vid_input_context, &handle->vid_pkt);
if(l_pktlen < 0)
{
- /* EOF reached */
- if (gap_debug)
+ if (l_pktlen == AVERROR_EOF)
{
- printf("p_wrapper_ffmpeg_get_next_frame: EOF reached (or read ERROR)"
- " (old)total_frames:%d current_frame_nr:%d all_frames_counted:%d\n"
- ,(int) gvahand->total_frames
- ,(int) gvahand->current_frame_nr
- ,(int) gvahand->all_frames_counted
- );
+ /* EOF reached */
+ if (gap_debug)
+ {
+ printf("get_next_frame: EOF reached"
+ " (old)total_frames:%d current_frame_nr:%d all_frames_counted:%d\n"
+ ,(int) gvahand->total_frames
+ ,(int) gvahand->current_frame_nr
+ ,(int) gvahand->all_frames_counted
+ );
+ }
+ }
+ else
+ {
+ /* EOF reached */
+ if (gap_debug)
+ {
+ printf("get_next_frame: ERROR:%d (assuming EOF)"
+ " (old)total_frames:%d current_frame_nr:%d all_frames_counted:%d\n"
+ ,(int) l_pktlen
+ ,(int) gvahand->total_frames
+ ,(int) gvahand->current_frame_nr
+ ,(int) gvahand->all_frames_counted
+ );
+ }
}
l_record_url_offset = -1;
@@ -1023,13 +1056,17 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
handle->inbuf_len = handle->vid_pkt.size;
l_pkt_size = handle->vid_pkt.size;
- handle->frame_len += l_pkt_size;
+ if((handle->pkt1_dts == AV_NOPTS_VALUE)
+ && (handle->vid_pkt.dts != AV_NOPTS_VALUE))
+ {
+ handle->pkt1_dts = handle->vid_pkt.dts;
+ }
+
if(gap_debug)
- { printf("using Packet data:%d size:%d handle->frame_len:%d dts:%lld pts:%lld AV_NOPTS_VALUE:%lld\n"
+ { printf("using Packet data:%d size:%d dts:%lld pts:%lld AV_NOPTS_VALUE:%lld\n"
,(int)handle->vid_pkt.data
,(int)handle->vid_pkt.size
- ,(int)handle->frame_len
, handle->vid_pkt.dts
, handle->vid_pkt.pts
, AV_NOPTS_VALUE
@@ -1039,9 +1076,12 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
if (gap_debug)
{
- printf("before avcodec_decode_video: inbuf_ptr:%d inbuf_len:%d\n",
- (int)handle->inbuf_ptr,
- (int)handle->inbuf_len);
+ printf("before %s: inbuf_ptr:%d inbuf_len:%d USING FFMPEG-%s\n"
+ , PROCNAME_AVCODEC_DECODE_VIDEO
+ , (int)handle->inbuf_ptr
+ , (int)handle->inbuf_len
+ , GAP_FFMPEG_VERSION_STRING
+ );
}
@@ -1077,12 +1117,11 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
avcodec_get_frame_defaults(&handle->big_picture_yuv);
- /* decode a frame. return -1 if error, otherwise return the number of
+ /* decode a frame. return -1 on error, otherwise return the number of
* bytes used. If no frame could be decompressed, *got_picture_ptr is
* zero. Otherwise, it is non zero.
*/
#ifdef GAP_USES_OLD_FFMPEG_0_5
- // printf("USING avcodec_decode_video FFMPEG-0.5\n");
l_len = avcodec_decode_video(handle->vid_codec_context /* AVCodecContext * */
,&handle->big_picture_yuv
,&l_got_picture
@@ -1090,7 +1129,6 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
,handle->inbuf_len
);
#else
- // printf("USING avcodec_decode_video2 FFMPEG-0.6\n");
l_len = avcodec_decode_video2(handle->vid_codec_context /* AVCodecContext * */
,&handle->big_picture_yuv
,&l_got_picture
@@ -1098,7 +1136,21 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
);
#endif
- /*if (gap_debug) printf("after avcodec_decode_video: l_len:%d got_pic:%d\n", (int)l_len, (int)l_got_picture);*/
+ if (gap_debug)
+ {
+
+ printf("get_next_frame(call:%d): "
+ "curr_frame_nr:%d Pkt data:%d size:%d dts:%lld pts:%lld l_len:%d got_pic:%d\n"
+ ,(int)ls_callNumber
+ ,(int) gvahand->current_frame_nr
+ ,(int)handle->vid_pkt.data
+ ,(int)handle->vid_pkt.size
+ , handle->vid_pkt.dts
+ , handle->vid_pkt.pts
+ , (int)l_len
+ , (int)l_got_picture
+ );
+ }
if(handle->yuv_buff_pix_fmt != handle->vid_codec_context->pix_fmt)
{
@@ -1123,29 +1175,76 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
if(l_len < 0)
{
- printf("p_wrapper_ffmpeg_get_next_frame: avcodec_decode_video returned ERROR)\n");
- l_rc = 2;
- break;
+ if(handle->continueAfterReadErrors == FALSE)
+ {
+ printf("get_next_frame: %s returned ERROR configured to break at read errors)\n"
+ ,PROCNAME_AVCODEC_DECODE_VIDEO
+ );
+ l_rc = 2;
+ break;
+ }
+ printf("get_next_frame: %s returned ERROR, "
+ "discarding packet dts:%lld and continueAfterReadErrors)\n"
+ , PROCNAME_AVCODEC_DECODE_VIDEO
+ , handle->vid_pkt.dts
+ );
+
+ handle->vid_pkt.size = 0; /* set empty packet status */
+ av_free_packet(&handle->vid_pkt);
+ handle->inbuf_len = 0;
+ handle->pkt1_dts = AV_NOPTS_VALUE;
+ continue;
+ }
+
+#ifndef GAP_USES_OLD_FFMPEG_0_5
+ /* fix for ffmpeg-0.6 MJPEG problem */
+ if(l_len > 0)
+ {
+ /* since ffmpeg-0.6 the length (l_len) returned from avcodec_decode_video2
+ * does not reliably indicate the number of handled input bytes
+ * (at least when mjpeg codec is used)
+ * therefore assume that the decoded frame size is the full packet length
+ * (note that packet is already read with av_read_frame and that shall already
+ * collect all data for one frame in the packet,
+ * The case where multiple frames are included in one packet can not be handled after this fix
+ * (I have no testvideos where this rare case occurs, and there is no way to implement
+ * such a solution without reliable length information from the codec)
+ */
+ if(l_len != handle->inbuf_len)
+ {
+ if(gap_debug)
+ {
+ printf("WARNING: (call %d) current_frame_nr:%d decoded length:%d differs from packaet length:%d\n"
+ ,(int)ls_callNumber
+ ,(int)gvahand->current_frame_nr
+ ,(int)l_len
+ ,(int)handle->inbuf_len
+ );
+ }
+ l_len = handle->inbuf_len;
+ }
}
+#endif
+
handle->inbuf_ptr += l_len;
handle->inbuf_len -= l_len;
if(l_got_picture)
{
l_frame_len = (l_len & 0xffff);
- handle->frame_len = (l_len - l_pkt_size);
l_record_url_offset = l_curr_url_offset;
l_key_frame_detected = ((handle->vid_pkt.flags & AV_PKT_FLAG_KEY) != 0);
if(gap_debug)
{
/* log information that could be relevant for redesign of VINDEX creation */
- printf("GOT PICTURE current_seek_nr:%06d pp_prev_offset:%lld url_offset:%lld keyflag:%d dts:%lld flen16:%d len:%d\n"
+ printf("GOT PICTURE current_seek_nr:%06d pp_prev_offset:%lld url_offset:%lld keyflag:%d dts:%lld dts1:%lld flen16:%d len:%d\n"
, (int)gvahand->current_seek_nr
, handle->prev_url_offset[MAX_PREV_OFFSET -1]
, l_record_url_offset
, (handle->vid_pkt.flags & AV_PKT_FLAG_KEY)
, handle->vid_pkt.dts
+ , handle->pkt1_dts
,(int)l_frame_len
,(int)l_len
);
@@ -1206,7 +1305,7 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
if (gap_debug)
{
- printf("p_wrapper_ffmpeg_get_next_frame: before img_convert via sws_scale\n");
+ printf("get_next_frame: before img_convert via sws_scale\n");
}
sws_scale(handle->img_convert_ctx
, handle->picture_yuv->data /* srcSlice */
@@ -1219,7 +1318,7 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
if (gap_debug)
{
- printf("p_wrapper_ffmpeg_get_next_frame: after img_convert via sws_scale\n");
+ printf("get_next_frame: after img_convert via sws_scale\n");
}
}
@@ -1340,7 +1439,7 @@ p_private_ffmpeg_get_next_frame(t_GVA_Handle *gvahand, gboolean do_copy_raw_chun
gvahand->current_frame_nr = gvahand->current_seek_nr;
gvahand->current_seek_nr++;
- if (gap_debug) printf("p_wrapper_ffmpeg_get_next_frame: current_frame_nr :%d\n"
+ if (gap_debug) printf("get_next_frame: current_frame_nr :%d\n"
, (int)gvahand->current_frame_nr);
/* if we found more frames, increase total_frames */
@@ -2208,7 +2307,6 @@ p_seek_private(t_GVA_Handle *gvahand, gdouble pos, t_GVA_PosUnit pos_unit)
int ret_av_seek_frame;
/* seek based on url_fseek (for support of old video indexes without timecode) */
seek_pos = vindex->ofs_tab[l_idx].uni.offset_gint64;
- // url_fseek(handle->vid_input_context->pb, seek_pos, SEEK_SET);
/* byte position based seek AVSEEK_FLAG_BACKWARD AVSEEK_FLAG_ANY*/
ret_av_seek_frame = av_seek_frame(handle->vid_input_context, handle->vid_stream_index
@@ -3995,7 +4093,6 @@ p_ffmpeg_vid_reopen_read(t_GVA_ffmpeg *handle, t_GVA_Handle *gvahand)
handle->vid_pkt.size = 0; /* REstart with empty packet */
handle->vid_pkt.data = NULL; /* REstart with empty packet */
handle->inbuf_len = 0; /* start with empty buffer */
- handle->frame_len = 0; /* start with 0 frame length */
handle->inbuf_ptr = NULL; /* start with empty buffer, after 1.st av_read_frame: pointer to pkt.data read pos */
/* RE-open for the VIDEO part */
@@ -4827,6 +4924,7 @@ p_set_analysefile_master_keywords(GapValKeyList *keylist
master_handle = (t_GVA_ffmpeg *)gvahand->decoder_handle;
int ii;
+ gap_val_set_keyword(keylist, "(libavcodec_version_int ", &master_handle->libavcodec_version_int, GAP_VAL_GINT32, 0, "\0");
gap_val_set_keyword(keylist, "(READSTEPS_PROBE_TIMECODE ", &master_handle->readsteps_probe_timecode, GAP_VAL_GINT32, 0, "\0");
gap_val_set_keyword(keylist, "(timestamp ", &master_handle->timestamp, GAP_VAL_GINT32, 0, "\0");
gap_val_set_keyword(keylist, "(video_libavformat_seek_gopsize ", &master_handle->video_libavformat_seek_gopsize, GAP_VAL_GINT32, 0, "\0");
@@ -4920,6 +5018,7 @@ p_save_video_analyse_results(t_GVA_Handle *gvahand)
"# (start+duration converted to frames:%d)\n"
"# (eof_timecode converted to frames:%d)\n"
"# (video-libavformat-seek-gopsize config:%d actual:%d)\n"
+ "# (ffmpeg-libs-version:%s)\n"
, master_handle->vid_stream->time_base.num
, master_handle->vid_stream->time_base.den
, master_handle->vid_input_context->start_time
@@ -4928,6 +5027,7 @@ p_save_video_analyse_results(t_GVA_Handle *gvahand)
, p_timecode_to_frame_nr(master_handle, master_handle->eof_timecode)
, gap_base_get_gimprc_int_value("video-libavformat-seek-gopsize", DEFAULT_NAT_SEEK_GOPSIZE, 0, MAX_NAT_SEEK_GOPSIZE)
, master_handle->video_libavformat_seek_gopsize
+ , GAP_FFMPEG_VERSION_STRING
);
}
fclose(fp_analyse);
@@ -4935,7 +5035,9 @@ p_save_video_analyse_results(t_GVA_Handle *gvahand)
keylist = gap_val_new_keylist();
/* setup key/value descriptions */
+ master_handle->libavcodec_version_int = LIBAVCODEC_VERSION_INT;
p_set_analysefile_master_keywords(keylist, gvahand, master_handle->count_timecode_steps);
+
/* save key/value data */
gap_val_rewrite_file(keylist, analysefile_name
@@ -4976,6 +5078,12 @@ p_get_video_analyse_results(t_GVA_Handle *gvahand)
analysefile_name = p_create_analysefile_name(gvahand);
master_handle = (t_GVA_ffmpeg *)gvahand->decoder_handle;
keylist = gap_val_new_keylist();
+
+ /* init version with 0
+ * when loading older analyze files that do not contain such version information
+ * the version will keep the initial 0 value after loading.
+ */
+ master_handle->libavcodec_version_int = 0;
p_set_analysefile_master_keywords(keylist, gvahand, READSTEPS_PROBE_TIMECODE);
/* init structures with some non-plausible values
@@ -5290,8 +5398,10 @@ p_probe_timecode_offset(t_GVA_Handle *master_gvahand)
gdouble avg_fstepsize;
int64_t prev_timecode;
gint32 l_countValidTimecodes;
+ gint32 l_countZeroTimecodes;
l_countValidTimecodes = 0;
+ l_countZeroTimecodes = 0;
master_handle->timecode_proberead_done = TRUE;
master_handle->timecode_step_abs_min = 99999999;
@@ -5318,7 +5428,22 @@ p_probe_timecode_offset(t_GVA_Handle *master_gvahand)
copy_handle->dummy_read = TRUE;
l_rc_rd = p_wrapper_ffmpeg_get_next_frame(copy_gvahand);
+
master_handle->timecode_offset_frame1 = copy_handle->vid_pkt.dts;
+// if(copy_handle->pkt1_dts != AV_NOPTS_VALUE)
+// {
+// master_handle->timecode_offset_frame1 = copy_handle->pkt1_dts;
+// }
+
+ if(gap_debug)
+ {
+ printf("GOT master_handle->timecode_offset_frame1:%lld copy_handle->vid_pkt.dts:%lld dts1:%lld\n"
+ , master_handle->timecode_offset_frame1
+ , copy_handle->vid_pkt.dts
+ , copy_handle->pkt1_dts
+ );
+ }
+
prev_timecode = copy_handle->vid_pkt.dts;
l_readsteps = 0;
@@ -5334,6 +5459,10 @@ p_probe_timecode_offset(t_GVA_Handle *master_gvahand)
{
l_countValidTimecodes++;
}
+ if (copy_handle->vid_pkt.dts == 0)
+ {
+ l_countZeroTimecodes++;
+ }
master_handle->timecode_steps[l_readsteps] = copy_handle->vid_pkt.dts - prev_timecode;
master_handle->timecode_step_abs_min =
MIN(abs(master_handle->timecode_steps[l_readsteps])
@@ -5358,7 +5487,8 @@ p_probe_timecode_offset(t_GVA_Handle *master_gvahand)
/* close the extra handle (that was opened for counting only) */
p_wrapper_ffmpeg_close(copy_gvahand);
- if (l_countValidTimecodes > 0)
+ if ((l_countValidTimecodes > 0)
+ && (l_countZeroTimecodes < 2))
{
p_analyze_stepsize_pattern(l_readsteps, master_gvahand);
}
@@ -5368,6 +5498,8 @@ p_probe_timecode_offset(t_GVA_Handle *master_gvahand)
* even if not present in the video.
* but unfortunately recent ffmpeg snapshots deliver AV_NOPTS_VALUE as dts
* for such videos. In this case native seek must be disabled.
+ *
+ * (but timecode 0 is not OK for framenumbers > 1)
*/
master_handle->timecode_steps_sum = 0;
master_handle->count_timecode_steps = 1;
diff --git a/vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c b/vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c
index 88928d9..c298271 100644
--- a/vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c
+++ b/vid_enc_ffmpeg/gap_enc_ffmpeg_gui.c
@@ -1010,6 +1010,25 @@ p_init_vid_checkbuttons(GapGveFFMpegGlobalParams *gpp)
gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_codec_FLAG2_BIT_RESERVOIR_checkbutton)
, gpp->evl.codec_FLAG2_BIT_RESERVOIR);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_codec_FLAG2_MBTREE_checkbutton)
+ , gpp->evl.codec_FLAG2_MBTREE);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_codec_FLAG2_PSY_checkbutton)
+ , gpp->evl.codec_FLAG2_PSY);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_codec_FLAG2_SSIM_checkbutton)
+ , gpp->evl.codec_FLAG2_SSIM);
+
+
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_partition_X264_PART_I4X4_checkbutton)
+ , gpp->evl.partition_X264_PART_I4X4);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_partition_X264_PART_I8X8_checkbutton)
+ , gpp->evl.partition_X264_PART_I8X8);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_partition_X264_PART_P8X8_checkbutton)
+ , gpp->evl.partition_X264_PART_P8X8);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_partition_X264_PART_P4X4_checkbutton)
+ , gpp->evl.partition_X264_PART_P4X4);
+ gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (gpp->ff_partition_X264_PART_B8X8_checkbutton)
+ , gpp->evl.partition_X264_PART_B8X8);
+
} /* end p_init_vid_checkbuttons */
/* --------------------------------
@@ -1375,7 +1394,7 @@ p_create_basic_options_frame (GapGveFFMpegGlobalParams *gpp)
/* the qmin spinbutton */
- adj = gtk_adjustment_new (1, 0, 31, 1, 10, 0);
+ adj = gtk_adjustment_new (1, 0, 51, 1, 10, 0);
spinbutton = gtk_spin_button_new (GTK_ADJUSTMENT (adj), 1, 0);
gpp->ff_qmin_spinbutton_adj = adj;
gpp->ff_qmin_spinbutton = spinbutton;
@@ -1403,7 +1422,7 @@ p_create_basic_options_frame (GapGveFFMpegGlobalParams *gpp)
gtk_misc_set_alignment (GTK_MISC (label), 0, 0.5);
/* the qmax spinbutton */
- adj = gtk_adjustment_new (1, 0, 31, 1, 10, 0);
+ adj = gtk_adjustment_new (1, 0, 51, 1, 10, 0);
spinbutton = gtk_spin_button_new (GTK_ADJUSTMENT (adj), 1, 0);
gpp->ff_qmax_spinbutton_adj = adj;
gpp->ff_qmax_spinbutton = spinbutton;
@@ -2510,7 +2529,7 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
int flags_row;
GtkWidget *flags_table;
- flags_table = gtk_table_new (8, 2, FALSE);
+ flags_table = gtk_table_new (8, 3, FALSE);
gtk_widget_show (flags_table);
gtk_container_add (GTK_CONTAINER (flags_frame), flags_table);
gtk_container_set_border_width (GTK_CONTAINER (flags_table), 2);
@@ -2535,6 +2554,12 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
(GtkAttachOptions) (0), 0, 0);
+ label = gtk_label_new (_("Partition X264:"));
+ gtk_misc_set_alignment (GTK_MISC (label), 0.0, 0.5);
+ gtk_widget_show (label);
+ gtk_table_attach (GTK_TABLE (flags_table), label, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
flags_row++;
@@ -2568,6 +2593,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_BPYRAMID);
+ /* the partition_X264_PART_I4X4 checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("I4x4"));
+ gpp->ff_partition_X264_PART_I4X4_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("enable 4x4 partitions in I-frames.(for X264 codec)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.partition_X264_PART_I4X4);
+
+
flags_row++;
@@ -2598,6 +2637,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_WPRED);
+ /* the partition_X264_PART_I8X8 checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("I8x8"));
+ gpp->ff_partition_X264_PART_I8X8_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("enable 8x8 partitions in I-frames.(for X264 codec)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.partition_X264_PART_I8X8);
+
+
flags_row++;
@@ -2632,6 +2685,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_MIXED_REFS);
+ /* the partition_X264_PART_P8X8 checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("P8x8"));
+ gpp->ff_partition_X264_PART_P8X8_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("enable 8x8, 16x8 and 8x16 partitions in P-frames.(for X264 codec)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.partition_X264_PART_P8X8);
+
+
flags_row++;
@@ -2663,6 +2730,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_8X8DCT);
+ /* the partition_X264_PART_P4X4 checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("P4X4"));
+ gpp->ff_partition_X264_PART_P4X4_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("enable 4x4, 8x4 and 4x8 partitions in P-frames.(for X264 codec)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.partition_X264_PART_P4X4);
+
+
flags_row++;
/* the use_memc_only checkbutton */
@@ -2693,6 +2774,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_FASTPSKIP);
+ /* the partition_X264_PART_B8X8 checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("B8x8"));
+ gpp->ff_partition_X264_PART_B8X8_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 2, 3, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("enable 8x8 16x8 and 8x16 partitions in B-frames.(for X264 codec)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.partition_X264_PART_B8X8);
+
+
flags_row++;
@@ -2738,6 +2833,19 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
G_CALLBACK (on_ff_gint32_checkbutton_toggled),
&gpp->evl.codec_FLAG2_SKIP_RD);
+ /* the use_MB_Tree ratecontrol checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("MB-Tree RC"));
+ gpp->ff_codec_FLAG2_MBTREE_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 1, 2, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("use macroblock tree ratecontrol (x264 only)"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.codec_FLAG2_MBTREE);
+
flags_row++;
/* the use_chunks checkbutton */
@@ -2769,6 +2877,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
G_CALLBACK (on_ff_gint32_checkbutton_toggled),
&gpp->evl.codec_FLAG2_NON_LINEAR_QUANT);
+
+ /* the use_PSY checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("PSY"));
+ gpp->ff_codec_FLAG2_PSY_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 1, 2, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("use psycho visual optimizations"), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.codec_FLAG2_PSY);
+
flags_row++;
/* the use_bit_reservoir checkbutton */
@@ -2785,6 +2907,20 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
&gpp->evl.codec_FLAG2_BIT_RESERVOIR);
+ /* the compute_SSIM checkbutton */
+ checkbutton = gtk_check_button_new_with_label (_("Compute SSIM"));
+ gpp->ff_codec_FLAG2_SSIM_checkbutton = checkbutton;
+ gtk_widget_show (checkbutton);
+ gtk_table_attach (GTK_TABLE (flags_table), checkbutton, 1, 2, flags_row, flags_row+1,
+ (GtkAttachOptions) (GTK_EXPAND | GTK_FILL),
+ (GtkAttachOptions) (0), 0, 0);
+ gimp_help_set_help_data (checkbutton, _("Compute SSIM during encoding, error[] values are undefined."), NULL);
+ g_object_set_data (G_OBJECT (checkbutton), GAP_ENC_FFGUI_GPP, (gpointer)gpp);
+ g_signal_connect (G_OBJECT (checkbutton), "toggled",
+ G_CALLBACK (on_ff_gint32_checkbutton_toggled),
+ &gpp->evl.codec_FLAG2_SSIM);
+
+
flags_row++;
@@ -2872,6 +3008,7 @@ p_create_expert_flags2_frame (GapGveFFMpegGlobalParams *gpp)
+
/* --------------------------------
* p_create_expert_options_frame
* --------------------------------
@@ -3236,7 +3373,7 @@ p_create_expert_options_frame (GapGveFFMpegGlobalParams *gpp)
gtk_misc_set_alignment (GTK_MISC (label), 0, 0.5);
/* the mb-qmin spinbutton */
- adj = gtk_adjustment_new (0, 0, 31, 1, 10, 0);
+ adj = gtk_adjustment_new (0, 0, 51, 1, 10, 0);
spinbutton = gtk_spin_button_new (GTK_ADJUSTMENT (adj), 1, 0);
gpp->ff_mb_qmin_spinbutton_adj = adj;
gpp->ff_mb_qmin_spinbutton = spinbutton;
@@ -3263,7 +3400,7 @@ p_create_expert_options_frame (GapGveFFMpegGlobalParams *gpp)
gtk_misc_set_alignment (GTK_MISC (label), 0, 0.5);
/* the mb-qmax spinbutton */
- adj = gtk_adjustment_new (31, 0, 31, 1, 10, 0);
+ adj = gtk_adjustment_new (31, 0, 51, 1, 10, 0);
spinbutton = gtk_spin_button_new (GTK_ADJUSTMENT (adj), 1, 0);
gpp->ff_mb_qmax_spinbutton_adj = adj;
gpp->ff_mb_qmax_spinbutton = spinbutton;
diff --git a/vid_enc_ffmpeg/gap_enc_ffmpeg_main.c b/vid_enc_ffmpeg/gap_enc_ffmpeg_main.c
index eadbec1..2fea114 100644
--- a/vid_enc_ffmpeg/gap_enc_ffmpeg_main.c
+++ b/vid_enc_ffmpeg/gap_enc_ffmpeg_main.c
@@ -76,7 +76,6 @@
#include <libgimp/gimp.h>
#include <libgimp/gimpui.h>
-//#include "imgconvert.h"
#include "swscale.h"
#include "gap-intl.h"
@@ -207,6 +206,9 @@ typedef struct t_ffmpeg_handle
struct SwsContext *img_convert_ctx;
+ int countVideoFramesWritten;
+ uint8_t *convert_buffer;
+ gint32 validEncodeFrameNr;
} t_ffmpeg_handle;
@@ -235,6 +237,7 @@ static void run(const gchar *name
static void p_debug_print_dump_AVCodecContext(AVCodecContext *codecContext);
static int p_av_metadata_set(AVMetadata **pm, const char *key, const char *value, int flags);
static void p_set_flag(gint32 value_bool32, int *flag_ptr, int maskbit);
+static void p_set_partition_flag(gint32 value_bool32, gint32 *flag_ptr, gint32 maskbit);
static void p_gimp_get_data(const char *key, void *buffer, gint expected_size);
@@ -259,6 +262,7 @@ static void p_close_audio_input_files(t_awk_array *awp);
static void p_open_audio_input_files(t_awk_array *awp, GapGveFFMpegGlobalParams *gpp);
static gint64 p_calculate_current_timecode(t_ffmpeg_handle *ffh);
+static gint64 p_calculate_timecode(t_ffmpeg_handle *ffh, int frameNr);
static void p_set_timebase_from_framerate(AVRational *time_base, gdouble framerate);
static void p_ffmpeg_open_init(t_ffmpeg_handle *ffh, GapGveFFMpegGlobalParams *gpp);
@@ -280,7 +284,7 @@ static t_ffmpeg_handle * p_ffmpeg_open(GapGveFFMpegGlobalParams *gpp
, gint video_tracks
);
static int p_ffmpeg_write_frame_chunk(t_ffmpeg_handle *ffh, gint32 encoded_size, gint vid_track);
-static uint8_t * p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb_buffer, gint vid_track);
+static void p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb_buffer, gint vid_track);
static int p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean force_keyframe, gint vid_track, gboolean useYUV420P);
static int p_ffmpeg_write_audioframe(t_ffmpeg_handle *ffh, guchar *audio_buf, int frame_bytes, gint aud_track);
static void p_ffmpeg_close(t_ffmpeg_handle *ffh);
@@ -960,6 +964,29 @@ p_set_flag(gint32 value_bool32, int *flag_ptr, int maskbit)
}
} /* end p_set_flag */
+
+/* --------------------------------
+ * p_set_partition_flag
+ * --------------------------------
+ */
+static void
+p_set_partition_flag(gint32 value_bool32, gint32 *flag_ptr, gint32 maskbit)
+{
+ if(value_bool32)
+ {
+ *flag_ptr |= maskbit;
+ }
+ else
+ {
+ gint32 clearmask;
+
+ clearmask = ~maskbit;
+ *flag_ptr &= clearmask;
+ }
+
+} /* end p_set_partition_flag */
+
+
/* --------------------------------
* p_gimp_get_data
* --------------------------------
@@ -1285,6 +1312,12 @@ gap_enc_ffmpeg_main_init_preset_params(GapGveFFMpegValues *epp, gint preset_idx)
epp->codec_FLAG2_PSY = 0; /* 0: FALSE */
epp->codec_FLAG2_SSIM = 0; /* 0: FALSE */
+ epp->partition_X264_PART_I4X4 = 0; /* 0: FALSE */
+ epp->partition_X264_PART_I8X8 = 0; /* 0: FALSE */
+ epp->partition_X264_PART_P8X8 = 0; /* 0: FALSE */
+ epp->partition_X264_PART_P4X4 = 0; /* 0: FALSE */
+ epp->partition_X264_PART_B8X8 = 0; /* 0: FALSE */
+
} /* end gap_enc_ffmpeg_main_init_preset_params */
@@ -1671,12 +1704,12 @@ p_open_audio_input_files(t_awk_array *awp, GapGveFFMpegGlobalParams *gpp)
} /* end p_open_audio_input_files */
/* -----------------------------
- * p_calculate_current_timecode
+ * p_calculate_timecode
* -----------------------------
- * returns the timecode for the current frame
+ * returns the timecode for the specified frameNr
*/
static gint64
-p_calculate_current_timecode(t_ffmpeg_handle *ffh)
+p_calculate_timecode(t_ffmpeg_handle *ffh, int frameNr)
{
gdouble dblTimecode;
gint64 timecode64;
@@ -1686,10 +1719,21 @@ p_calculate_current_timecode(t_ffmpeg_handle *ffh)
* seconds = (ffh->encode_frame_nr / gpp->val.framerate);
*/
- dblTimecode = ffh->encode_frame_nr * ffh->pts_stepsize_per_frame;
+ dblTimecode = frameNr * ffh->pts_stepsize_per_frame;
timecode64 = dblTimecode;
return (timecode64);
+} /* end p_calculate_timecode */
+
+/* -----------------------------
+ * p_calculate_current_timecode
+ * -----------------------------
+ * returns the timecode for the current frame
+ */
+static gint64
+p_calculate_current_timecode(t_ffmpeg_handle *ffh)
+{
+ return (p_calculate_timecode(ffh, ffh->encode_frame_nr));
}
@@ -1809,15 +1853,39 @@ p_ffmpeg_open_init(t_ffmpeg_handle *ffh, GapGveFFMpegGlobalParams *gpp)
/* initialize common things */
ffh->frame_width = (int)gpp->val.vid_width;
ffh->frame_height = (int)gpp->val.vid_height;
+ /* allocate buffer for image conversion, large enough for the uncompressed RGBA32 colormodel */
+ ffh->convert_buffer = g_malloc(4 * ffh->frame_width * ffh->frame_height);
ffh->audio_bit_rate = epp->audio_bitrate * 1000; /* 64000; */
ffh->audio_codec_id = CODEC_ID_NONE;
ffh->file_overwrite = 0;
+ ffh->countVideoFramesWritten = 0;
} /* end p_ffmpeg_open_init */
+/* ------------------
+ * p_choose_pixel_fmt
+ * ------------------
+ */
+static void p_choose_pixel_fmt(AVStream *st, AVCodec *codec)
+{
+ if(codec && codec->pix_fmts){
+ const enum PixelFormat *p= codec->pix_fmts;
+ for(; *p!=-1; p++){
+ if(*p == st->codec->pix_fmt)
+ break;
+ }
+ if(*p == -1
+ && !( st->codec->codec_id==CODEC_ID_MJPEG
+ && st->codec->strict_std_compliance <= FF_COMPLIANCE_INOFFICIAL
+ && ( st->codec->pix_fmt == PIX_FMT_YUV420P
+ || st->codec->pix_fmt == PIX_FMT_YUV422P)))
+ st->codec->pix_fmt = codec->pix_fmts[0];
+ }
+}
+
/* ------------------
* p_init_video_codec
@@ -1868,6 +1936,8 @@ p_init_video_codec(t_ffmpeg_handle *ffh
);
}
+ avcodec_get_context_defaults2(ffh->vst[ii].vid_stream->codec, AVMEDIA_TYPE_VIDEO);
+ avcodec_thread_init(ffh->vst[ii].vid_stream->codec, epp->thread_count);
/* set Video codec context in the video stream array (vst) */
ffh->vst[ii].vid_codec_context = ffh->vst[ii].vid_stream->codec;
@@ -1897,7 +1967,7 @@ p_init_video_codec(t_ffmpeg_handle *ffh
if (gap_debug)
{
- printf("VCODEC: id:%d (%s) time_base.num :%d time_base.den:%d float:%f\n"
+ printf("VCODEC internal DEFAULTS: id:%d (%s) time_base.num :%d time_base.den:%d float:%f\n"
" DEFAULT_FRAME_RATE_BASE: %d\n"
, video_enc->codec_id
, epp->vcodec_name
@@ -1912,10 +1982,16 @@ p_init_video_codec(t_ffmpeg_handle *ffh
}
-
video_enc->pix_fmt = PIX_FMT_YUV420P; /* PIX_FMT_YUV444P; PIX_FMT_YUV420P; PIX_FMT_BGR24; PIX_FMT_RGB24; */
+ p_choose_pixel_fmt(ffh->vst[ii].vid_stream, ffh->vst[ii].vid_codec);
+
+ /* some formats want stream headers to be separate */
+ if(ffh->output_context->oformat->flags & AVFMT_GLOBALHEADER)
+ {
+ video_enc->flags |= CODEC_FLAG_GLOBAL_HEADER;
+ }
if(!strcmp(epp->format_name, "mp4")
|| !strcmp(epp->format_name, "mov")
|| !strcmp(epp->format_name, "3gp"))
@@ -1923,6 +1999,12 @@ p_init_video_codec(t_ffmpeg_handle *ffh
video_enc->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
+ /* some formats want stream headers to be separate */
+ if(ffh->output_context->oformat->flags & AVFMT_GLOBALHEADER)
+ {
+ video_enc->flags |= CODEC_FLAG_GLOBAL_HEADER;
+ }
+
/* mb_decision changed in ffmpeg-0.4.8
* 0: FF_MB_DECISION_SIMPLE
* 1: FF_MB_DECISION_BITS
@@ -2107,6 +2189,12 @@ p_init_video_codec(t_ffmpeg_handle *ffh
video_enc->max_qdiff = epp->qdiff;
video_enc->qblur = (float)epp->qblur;
video_enc->qcompress = (float)epp->qcomp;
+#ifndef GAP_USES_OLD_FFMPEG_0_5
+ if (epp->rc_eq[0] != '\0')
+ {
+ video_enc->rc_eq = &epp->rc_eq[0];
+ }
+#else
if (epp->rc_eq[0] != '\0')
{
video_enc->rc_eq = &epp->rc_eq[0];
@@ -2116,6 +2204,7 @@ p_init_video_codec(t_ffmpeg_handle *ffh
/* use default rate control equation */
video_enc->rc_eq = "tex^qComp";
}
+#endif
video_enc->rc_override_count =0;
video_enc->inter_threshold = epp->inter_threshold;
@@ -2202,6 +2291,12 @@ p_init_video_codec(t_ffmpeg_handle *ffh
video_enc->deblockalpha = epp->deblockalpha;
video_enc->deblockbeta = epp->deblockbeta;
video_enc->partitions = epp->partitions;
+ p_set_partition_flag(epp->partition_X264_PART_I4X4, &video_enc->partitions, X264_PART_I4X4);
+ p_set_partition_flag(epp->partition_X264_PART_I8X8, &video_enc->partitions, X264_PART_I8X8);
+ p_set_partition_flag(epp->partition_X264_PART_P8X8, &video_enc->partitions, X264_PART_P8X8);
+ p_set_partition_flag(epp->partition_X264_PART_P4X4, &video_enc->partitions, X264_PART_P4X4);
+ p_set_partition_flag(epp->partition_X264_PART_B8X8, &video_enc->partitions, X264_PART_B8X8);
+
video_enc->directpred = epp->directpred;
video_enc->scenechange_factor = epp->scenechange_factor;
video_enc->mv0_threshold = epp->mv0_threshold;
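
The p_set_partition_flag() helper used above is not part of this hunk; the following is only a hypothetical sketch of what such a helper could look like, setting or clearing one x264 partition bit in the partitions bitmask according to the corresponding flag value:

#include <stdio.h>

/* hypothetical sketch of a helper like p_set_partition_flag (its body is not shown
 * in this hunk): set or clear maskbit in *partitions depending on flagValue */
static void
sketch_set_partition_flag(int flagValue, int *partitions, int maskbit)
{
  if (flagValue != 0)
  {
    *partitions |= maskbit;     /* enable analysis of this partition type */
  }
  else
  {
    *partitions &= ~maskbit;    /* disable analysis of this partition type */
  }
}

int main(void)
{
  int partitions = 0;

  sketch_set_partition_flag(1, &partitions, 0x010);   /* placeholder for X264_PART_P8X8 */
  sketch_set_partition_flag(0, &partitions, 0x001);   /* placeholder for X264_PART_I4X4 */
  printf("partitions mask: 0x%03x\n", partitions);    /* -> 0x010 */
  return 0;
}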
@@ -2343,6 +2438,21 @@ p_init_video_codec(t_ffmpeg_handle *ffh
/* dont know why */
/* *(ffh->vst[ii].vid_stream->codec) = *video_enc; */ /* XXX(#) copy codec context */
+ if (gap_debug)
+ {
+ printf("VCODEC initialized: id:%d (%s) time_base.num :%d time_base.den:%d float:%f\n"
+ " DEFAULT_FRAME_RATE_BASE: %d\n"
+ , video_enc->codec_id
+ , epp->vcodec_name
+ , video_enc->time_base.num
+ , video_enc->time_base.den
+ , (float)gpp->val.framerate
+ , (int)DEFAULT_FRAME_RATE_BASE
+ );
+
+    /* dump all parameters (standard values representing ffmpeg internal defaults) to stdout */
+ p_debug_print_dump_AVCodecContext(video_enc);
+ }
return (TRUE); /* OK */
@@ -2562,6 +2672,7 @@ p_ffmpeg_open(GapGveFFMpegGlobalParams *gpp
ffh->output_context->nb_streams = 0; /* number of streams */
+ ffh->output_context->timestamp = 0;
/* ------------ video codec -------------- */
@@ -2804,6 +2915,14 @@ p_ffmpeg_write_frame_chunk(t_ffmpeg_handle *ffh, gint32 encoded_size, gint vid_t
ffh->vst[ii].big_picture_codec->quality = ffh->vst[ii].vid_stream->quality;
ffh->vst[ii].big_picture_codec->key_frame = 1;
+ /* some codecs just pass through pts information obtained from
+ * the AVFrame struct (big_picture_codec)
+   * therefore init the pts field in this structure.
+ * (a valid pts is essential to get encoded frame results in the correct order)
+ */
+ //ffh->vst[ii].big_picture_codec->pts = p_calculate_current_timecode(ffh);
+ ffh->vst[ii].big_picture_codec->pts = ffh->encode_frame_nr -1;
+
encoded_dummy_size = avcodec_encode_video(ffh->vst[ii].vid_codec_context
,ffh->vst[ii].video_dummy_buffer, ffh->vst[ii].video_dummy_buffer_size
,ffh->vst[ii].big_picture_codec);
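
The pts stored in big_picture_codec above is a plain frame index counted in codec time_base units; before a packet is written it is rescaled to the stream time_base via av_rescale_q() (see the packet handling further down). A minimal sketch of that rescaling arithmetic with plain integers; the real av_rescale_q() additionally handles rounding and overflow, which is ignored here:

#include <stdio.h>
#include <stdint.h>

/* sketch of rescaling a timestamp a from time base bn/bd to time base cn/cd,
 * i.e. roughly what av_rescale_q(a, b, c) computes */
static int64_t
sketch_rescale(int64_t a, int bn, int bd, int cn, int cd)
{
  return (a * (int64_t)bn * (int64_t)cd) / ((int64_t)bd * (int64_t)cn);
}

int main(void)
{
  /* example: frame index 24 in a 1/25 codec time_base mapped to a 1/90000 stream time_base */
  printf("pts in stream time_base: %lld\n",
         (long long)sketch_rescale(24, 1, 25, 1, 90000));   /* -> 86400 */
  return 0;
}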
@@ -2857,6 +2976,8 @@ p_ffmpeg_write_frame_chunk(t_ffmpeg_handle *ffh, gint32 encoded_size, gint vid_t
pkt.data = ffh->vst[ii].video_buffer;
pkt.size = encoded_size;
ret = av_write_frame(ffh->output_context, &pkt);
+
+ ffh->countVideoFramesWritten++;
}
if(gap_debug) printf("after av_write_frame encoded_size:%d\n", (int)encoded_size );
@@ -2875,14 +2996,12 @@ p_ffmpeg_write_frame_chunk(t_ffmpeg_handle *ffh, gint32 encoded_size, gint vid_t
*
* conversion is done based on ffmpegs img_convert procedure.
*/
-static uint8_t *
+static void
p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb_buffer, gint vid_track)
{
AVFrame *big_picture_rgb;
AVPicture *picture_rgb;
- uint8_t *l_convert_buffer;
int ii;
- //int l_rc;
ii = ffh->vst[vid_track].video_stream_index;
@@ -2891,9 +3010,6 @@ p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb
picture_rgb = (AVPicture *)big_picture_rgb;
- /* allocate buffer for image conversion large enough for for uncompressed RGBA32 colormodel */
- l_convert_buffer = g_malloc(4 * ffh->frame_width * ffh->frame_height);
-
if(gap_debug)
{
printf("HAVE TO convert TO pix_fmt: %d\n"
@@ -2906,7 +3022,7 @@ p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb
/* init destination picture structure (the codec context tells us what pix_fmt is needed)
*/
avpicture_fill(picture_codec
- ,l_convert_buffer
+ ,ffh->convert_buffer
,ffh->vst[ii].vid_codec_context->pix_fmt /* PIX_FMT_RGB24, PIX_FMT_RGBA32, PIX_FMT_BGRA32 */
,ffh->frame_width
,ffh->frame_height
@@ -2925,7 +3041,7 @@ p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb
if(gap_debug)
{
- printf("before img_convert pix_fmt: %d (YUV420:%d)\n"
+ printf("before colormodel convert pix_fmt: %d (YUV420:%d)\n"
, (int)ffh->vst[ii].vid_codec_context->pix_fmt
, (int)PIX_FMT_YUV420P);
}
@@ -2979,7 +3095,6 @@ p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb
printf("DONE p_convert_colormodel\n");
}
- return (l_convert_buffer);
} /* end p_convert_colormodel */
/* --------------------
@@ -2987,19 +3102,19 @@ p_convert_colormodel(t_ffmpeg_handle *ffh, AVPicture *picture_codec, guchar *rgb
* --------------------
* encode one videoframe using the selected codec and write
* the encoded frame to the mediafile as packet.
+ * Passing NULL as drawable is used to flush one frame from the codec's internal buffer
+ * (typically required after the last frame has already been fed to the codec)
*/
static int
p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean force_keyframe, gint vid_track, gboolean useYUV420P)
{
- AVPicture *picture_codec;
+ AVFrame *picture_codec;
int encoded_size;
int ret;
- uint8_t *l_convert_buffer;
int ii;
ii = ffh->vst[vid_track].video_stream_index;
ret = 0;
- l_convert_buffer = NULL;
if(gap_debug)
{
@@ -3007,86 +3122,102 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
codec = ffh->vst[ii].vid_codec_context->codec;
- printf("p_ffmpeg_write_frame: START codec: %d track:%d frame_nr:%d\n"
+ printf("\n-------------------------\n");
+ printf("p_ffmpeg_write_frame: START codec: %d track:%d countVideoFramesWritten:%d frame_nr:%d (validFrameNr:%d)\n"
, (int)codec
, (int)vid_track
+ , (int)ffh->countVideoFramesWritten
, (int)ffh->encode_frame_nr
+ , (int)ffh->validEncodeFrameNr
);
- printf("\n-------------------------\n");
printf("name: %s\n", codec->name);
- printf("type: %d\n", codec->type);
- printf("id: %d\n", codec->id);
- printf("priv_data_size: %d\n", codec->priv_data_size);
- printf("capabilities: %d\n", codec->capabilities);
- printf("init fptr: %d\n", (int)codec->init);
- printf("encode fptr: %d\n", (int)codec->encode);
- printf("close fptr: %d\n", (int)codec->close);
- printf("decode fptr: %d\n", (int)codec->decode);
-
+ if(gap_debug)
+ {
+ printf("type: %d\n", codec->type);
+ printf("id: %d\n", codec->id);
+ printf("priv_data_size: %d\n", codec->priv_data_size);
+ printf("capabilities: %d\n", codec->capabilities);
+ printf("init fptr: %d\n", (int)codec->init);
+ printf("encode fptr: %d\n", (int)codec->encode);
+ printf("close fptr: %d\n", (int)codec->close);
+ printf("decode fptr: %d\n", (int)codec->decode);
+ }
}
/* picture to feed the codec */
- picture_codec = (AVPicture *)ffh->vst[ii].big_picture_codec;
+ picture_codec = ffh->vst[ii].big_picture_codec;
+  /* in case drawable is NULL
+   * we feed the previously handled picture (i.e. the last of the input)
+   * again and again to the codec.
+   * Note that this procedure typically is called with a NULL drawable
+   * until all frames in the codec's internal buffer are written to the output video.
+   */
- if ((useYUV420P == TRUE) && (ffh->vst[ii].vid_codec_context->pix_fmt == PIX_FMT_YUV420P))
+ if(drawable != NULL)
{
- if(gap_debug)
- {
- printf("USE PIX_FMT_YUV420P (no pix_fmt convert needed)\n");
- }
+ if ((useYUV420P == TRUE) && (ffh->vst[ii].vid_codec_context->pix_fmt == PIX_FMT_YUV420P))
+ {
+ if(gap_debug)
+ {
+ printf("USE PIX_FMT_YUV420P (no pix_fmt convert needed)\n");
+ }
- /* fill the yuv420_buffer with current frame image data */
- gap_gve_raw_YUV420P_drawable_encode(drawable, ffh->vst[0].yuv420_buffer);
+ /* fill the yuv420_buffer with current frame image data
+ * NOTE: gap_gve_raw_YUV420P_drawable_encode does not work on some machines
+      * and gives low quality results (therefore the useYUV420P flag is FALSE by default).
+ */
+ gap_gve_raw_YUV420P_drawable_encode(drawable, ffh->vst[0].yuv420_buffer);
- /* most of the codecs wants YUV420
+ /* most of the codecs wants YUV420
* (we can use the picture in ffh->vst[ii].yuv420_buffer without pix_fmt conversion
*/
- avpicture_fill(picture_codec
- ,ffh->vst[ii].yuv420_buffer
- ,PIX_FMT_YUV420P /* PIX_FMT_RGB24, PIX_FMT_RGBA32, PIX_FMT_BGRA32 */
- ,ffh->frame_width
- ,ffh->frame_height
- );
- }
- else
- {
- guchar *rgb_buffer;
- gint32 rgb_size;
-
- rgb_buffer = gap_gve_raw_RGB_drawable_encode(drawable, &rgb_size, FALSE /* no vflip */
- , NULL /* app0_buffer */
- , 0 /* app0_length */
- );
-
- if (ffh->vst[ii].vid_codec_context->pix_fmt == PIX_FMT_BGR24)
- {
- if(gap_debug)
- {
- printf("USE PIX_FMT_BGR24 (no pix_fmt convert needed)\n");
- }
- avpicture_fill(picture_codec
- ,rgb_buffer
- ,PIX_FMT_BGR24 /* PIX_FMT_RGB24, PIX_FMT_RGBA32, PIX_FMT_BGRA32 */
- ,ffh->frame_width
- ,ffh->frame_height
- );
- }
- else
- {
- l_convert_buffer = p_convert_colormodel(ffh, picture_codec, rgb_buffer, vid_track);
- }
+ avpicture_fill(picture_codec
+ ,ffh->vst[ii].yuv420_buffer
+ ,PIX_FMT_YUV420P /* PIX_FMT_RGB24, PIX_FMT_RGBA32, PIX_FMT_BGRA32 */
+ ,ffh->frame_width
+ ,ffh->frame_height
+ );
+ }
+ else
+ {
+ guchar *rgb_buffer;
+ gint32 rgb_size;
+
+ rgb_buffer = gap_gve_raw_RGB_drawable_encode(drawable, &rgb_size, FALSE /* no vflip */
+ , NULL /* app0_buffer */
+ , 0 /* app0_length */
+ );
+
+ if (ffh->vst[ii].vid_codec_context->pix_fmt == PIX_FMT_RGB24)
+ {
+ if(gap_debug)
+ {
+ printf("USE PIX_FMT_RGB24 (no pix_fmt convert needed)\n");
+ }
+ avpicture_fill(picture_codec
+ ,rgb_buffer
+ ,PIX_FMT_RGB24 /* PIX_FMT_RGB24, PIX_FMT_BGR24, PIX_FMT_RGBA32, PIX_FMT_BGRA32 */
+ ,ffh->frame_width
+ ,ffh->frame_height
+ );
+ }
+ else
+ {
+ p_convert_colormodel(ffh, picture_codec, rgb_buffer, vid_track);
+ }
- if(gap_debug)
- {
- printf("before g_free rgb_buffer\n");
- }
+ if(gap_debug)
+ {
+ printf("before g_free rgb_buffer\n");
+ }
- g_free(rgb_buffer);
+ g_free(rgb_buffer);
+ }
}
/* AVFrame is the new structure introduced in FFMPEG 0.4.6,
@@ -3094,7 +3225,8 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
*/
ffh->vst[ii].big_picture_codec->quality = ffh->vst[ii].vid_stream->quality;
- if(force_keyframe)
+ if((force_keyframe)
+ || (ffh->encode_frame_nr == 1))
{
/* TODO: howto force the encoder to write an I frame ??
* ffh->vst[ii].big_picture_codec->key_frame could be ignored
@@ -3119,11 +3251,27 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
);
}
+
+ /* some codecs (x264) just pass through pts information obtained from
+ * the AVFrame struct (big_picture_codec)
+   * therefore init the pts field in this structure.
+ * (a valid pts is essential to get encoded frame results in the correct order)
+ */
+ //ffh->vst[ii].big_picture_codec->pts = p_calculate_current_timecode(ffh);
+ ffh->vst[ii].big_picture_codec->pts = ffh->encode_frame_nr -1;
+
encoded_size = avcodec_encode_video(ffh->vst[ii].vid_codec_context
,ffh->vst[ii].video_buffer, ffh->vst[ii].video_buffer_size
,ffh->vst[ii].big_picture_codec);
+ if(gap_debug)
+ {
+ printf("after avcodec_encode_video encoded_size:%d\n"
+ ,(int)encoded_size
+ );
+ }
+
/* if zero size, it means the image was buffered */
if(encoded_size != 0)
{
@@ -3142,14 +3290,6 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, ffh->vst[ii].vid_stream->time_base);
}
-// if ((pkt.pts == 0) || (pkt.pts == AV_NOPTS_VALUE))
-// {
-// /* WORKAROND calculate pts timecode for the current frame
-// * because the codec did not deliver a valid timecode
-// */
-// pkt.pts = p_calculate_current_timecode(ffh);
-// }
-
if(c->coded_frame->key_frame)
{
@@ -3165,8 +3305,12 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
AVStream *st;
st = ffh->output_context->streams[pkt.stream_index];
+ if (pkt.pts == AV_NOPTS_VALUE)
+ {
+ printf("** HOF: Codec delivered invalid pts AV_NOPTS_VALUE !\n");
+ }
- printf("before av_write_frame video encoded_size:%d\n"
+ printf("before av_interleaved_write_frame video encoded_size:%d\n"
" pkt.stream_index:%d pkt.pts:%lld dts:%lld coded_frame->pts:%lld c->time_base:%d den:%d\n"
" st->pts.num:%lld, st->pts.den:%lld st->pts.val:%lld\n"
, (int)encoded_size
@@ -3182,11 +3326,15 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
);
}
- ret = av_write_frame(ffh->output_context, &pkt);
+ //ret = av_write_frame(ffh->output_context, &pkt);
+ ret = av_interleaved_write_frame(ffh->output_context, &pkt);
+
+ ffh->countVideoFramesWritten++;
+
if(gap_debug)
{
- printf("after av_write_frame encoded_size:%d\n", (int)encoded_size );
+ printf("after av_interleaved_write_frame encoded_size:%d\n", (int)encoded_size );
}
}
}
@@ -3199,10 +3347,10 @@ p_ffmpeg_write_frame(t_ffmpeg_handle *ffh, GimpDrawable *drawable, gboolean forc
fprintf(ffh->vst[ii].passlog_fp, "%s", ffh->vst[ii].vid_codec_context->stats_out);
}
- if(gap_debug) printf("before free picture structures\n");
-
-
- if(l_convert_buffer) g_free(l_convert_buffer);
+ if(gap_debug)
+ {
+ printf("before free picture structures\n\n");
+ }
return (ret);
} /* end p_ffmpeg_write_frame */
@@ -3238,26 +3386,34 @@ p_ffmpeg_write_audioframe(t_ffmpeg_handle *ffh, guchar *audio_buf, int frame_byt
{
AVPacket pkt;
- AVCodecContext *c;
+ AVCodecContext *enc;
av_init_packet (&pkt);
- c = ffh->ast[ii].aud_codec_context;
+ enc = ffh->ast[ii].aud_codec_context;
// pkt.pts = ffh->ast[ii].aud_codec_context->coded_frame->pts; // OLD
- if (c->coded_frame->pts != AV_NOPTS_VALUE)
+ if(enc->coded_frame && enc->coded_frame->pts != AV_NOPTS_VALUE)
{
- pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, ffh->ast[ii].aud_stream->time_base);
+ pkt.pts= av_rescale_q(enc->coded_frame->pts, enc->time_base, ffh->ast[ii].aud_stream->time_base);
}
+ pkt.flags |= AV_PKT_FLAG_KEY;
-// if ((pkt.pts == 0) || (pkt.pts == AV_NOPTS_VALUE))
-// {
-// /* calculate pts timecode for the current frame
-// * because the codec did not deliver a valid timecode
-// */
-// pkt.pts = p_calculate_current_timecode(ffh);
-// }
+
+// if (pkt.pts == AV_NOPTS_VALUE)
+// {
+// /* WORKAROUND calculate pts timecode for the current frame
+// * because the codec did not deliver a valid timecode
+// */
+// pkt.pts = p_calculate_current_timecode(ffh);
+// //pkt.pts = p_calculate_timecode(ffh, ffh->countVideoFramesWritten);
+// pkt.dts = pkt.pts;
+// //if(gap_debug)
+// {
+// printf("WORKAROND calculated audio pts (because codec deliverd AV_NOPTS_VALUE\n");
+// }
+// }
pkt.stream_index = ffh->ast[ii].audio_stream_index;
@@ -3266,18 +3422,19 @@ p_ffmpeg_write_audioframe(t_ffmpeg_handle *ffh, guchar *audio_buf, int frame_byt
if(gap_debug)
{
- printf("before av_write_frame audio encoded_size:%d pkt.pts:%lld dts:%lld\n"
+ printf("before av_interleaved_write_frame audio encoded_size:%d pkt.pts:%lld dts:%lld\n"
, (int)encoded_size
, pkt.pts
, pkt.dts
);
}
- ret = av_write_frame(ffh->output_context, &pkt);
+ //ret = av_write_frame(ffh->output_context, &pkt);
+ ret = av_interleaved_write_frame(ffh->output_context, &pkt); // seems not OK when pts/dts is invalid
if(gap_debug)
{
- printf("after av_write_frame audio encoded_size:%d\n", (int)encoded_size );
+ printf("after av_interleaved_write_frame audio encoded_size:%d\n", (int)encoded_size );
}
}
}
@@ -3475,6 +3632,11 @@ p_ffmpeg_close(t_ffmpeg_handle *ffh)
sws_freeContext(ffh->img_convert_ctx);
ffh->img_convert_ctx = NULL;
}
+ if(ffh->convert_buffer != NULL)
+ {
+ g_free(ffh->convert_buffer);
+ ffh->convert_buffer = NULL;
+ }
} /* end p_ffmpeg_close */
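
The conversion buffer freed here is now allocated once in p_ffmpeg_open_init and reused for every frame, instead of a g_malloc/g_free pair per frame inside p_convert_colormodel. A small sketch of that allocate-once / reuse / free-once pattern (struct and function names are invented for illustration):

#include <glib.h>

/* illustrative only; mirrors the convert_buffer handling of this commit:
 * allocate once at open time, reuse for every frame, free once at close time */
typedef struct {
  gint    frame_width;
  gint    frame_height;
  guchar *convert_buffer;   /* worst case RGBA32 = 4 bytes per pixel */
} SketchHandle;

static void
sketch_open(SketchHandle *h, gint width, gint height)
{
  h->frame_width    = width;
  h->frame_height   = height;
  h->convert_buffer = g_malloc(4 * width * height);
}

static void
sketch_close(SketchHandle *h)
{
  if (h->convert_buffer != NULL)
  {
    g_free(h->convert_buffer);
    h->convert_buffer = NULL;
  }
}

int main(void)
{
  SketchHandle h;

  sketch_open(&h, 720, 576);
  /* ... convert and encode many frames, reusing h.convert_buffer each time ... */
  sketch_close(&h);
  return 0;
}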
@@ -3757,6 +3919,7 @@ p_ffmpeg_encode_pass(GapGveFFMpegGlobalParams *gpp, gint32 current_pass, GapGveM
l_cur_frame_nr = l_begin;
ffh->encode_frame_nr = 1;
+ ffh->countVideoFramesWritten = 0;
while(l_rc >= 0)
{
gboolean l_fetch_ok;
@@ -3764,6 +3927,8 @@ p_ffmpeg_encode_pass(GapGveFFMpegGlobalParams *gpp, gint32 current_pass, GapGveM
gint32 l_video_frame_chunk_size;
gint32 l_video_frame_chunk_hdr_size;
+ ffh->validEncodeFrameNr = ffh->encode_frame_nr;
+
/* must fetch the frame into gimp_image */
/* load the current frame image, and transform (flatten, convert to RGB, scale, macro, etc..) */
@@ -3837,7 +4002,11 @@ p_ffmpeg_encode_pass(GapGveFFMpegGlobalParams *gpp, gint32 current_pass, GapGveM
}
/* encode AUDIO FRAME (audio data for playbacktime of one frame) */
- p_process_audio_frame(ffh, awp);
+ if(ffh->countVideoFramesWritten > 0)
+ {
+ p_process_audio_frame(ffh, awp);
+ }
+
}
else /* if fetch_ok */
@@ -3861,11 +4030,45 @@ p_ffmpeg_encode_pass(GapGveFFMpegGlobalParams *gpp, gint32 current_pass, GapGveM
break;
}
- /* advance to next frame */
+ /* detect regular end */
if((l_cur_frame_nr == l_end) || (l_rc < 0))
{
+      /* handle encoder latency (encoders typically hold some frames in internal buffers
+       * that must be flushed after the last input frame was fed to the codec).
+       * For codecs without frame latency,
+       * ffh->countVideoFramesWritten and ffh->encode_frame_nr
+       * should already be equal at this point, so no flush is required.
+       */
+ int flushTries;
+ int flushCount;
+
+ flushTries = 2 + (ffh->validEncodeFrameNr - ffh->countVideoFramesWritten);
+
+ for(flushCount = 0; flushCount < flushTries; flushCount++)
+ {
+ if(ffh->countVideoFramesWritten >= ffh->validEncodeFrameNr)
+ {
+ /* all frames are now written to the output video */
+ break;
+ }
+
+ /* increment encode_frame_nr, because this is the base for pts timecode calculation
+ * and some codecs (mpeg2video) complain about "non monotone timestamps" otherwise.
+ */
+ ffh->encode_frame_nr++;
+ p_ffmpeg_write_frame(ffh, NULL, l_force_keyframe, 0, /* vid_track */ l_useYUV420P);
+
+          /* continue encoding the AUDIO FRAME (audio data for the playback time of one frame)
+ */
+ if(ffh->countVideoFramesWritten > 0)
+ {
+ p_process_audio_frame(ffh, awp);
+ }
+
+ }
break;
}
+ /* advance to next frame */
l_cur_frame_nr += l_step;
ffh->encode_frame_nr++;
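
The flush handling added above drains the frames that the encoder still holds in its internal buffer after the last real input frame. A generic standalone sketch of that drain loop, independent of the ffmpeg API; the counters mirror validEncodeFrameNr, encode_frame_nr and countVideoFramesWritten from this commit, and sketch_flush_one_frame() is a hypothetical stand-in for the p_ffmpeg_write_frame(ffh, NULL, ...) call:

#include <stdio.h>

typedef struct {
  int validEncodeFrameNr;        /* number of real input frames fed to the encoder */
  int encode_frame_nr;           /* keeps growing during the flush, only for pts */
  int countVideoFramesWritten;   /* packets actually written to the container */
  int buffered;                  /* frames still held inside the (simulated) encoder */
} SketchEncoder;

/* hypothetical stand-in for p_ffmpeg_write_frame(ffh, NULL, ...) */
static void
sketch_flush_one_frame(SketchEncoder *enc)
{
  if (enc->buffered > 0)
  {
    enc->buffered--;
    enc->countVideoFramesWritten++;
  }
}

static void
sketch_flush(SketchEncoder *enc)
{
  int flushTries = 2 + (enc->validEncodeFrameNr - enc->countVideoFramesWritten);
  int flushCount;

  for (flushCount = 0; flushCount < flushTries; flushCount++)
  {
    if (enc->countVideoFramesWritten >= enc->validEncodeFrameNr)
    {
      break;                    /* all real frames are now in the output file */
    }
    enc->encode_frame_nr++;     /* pts base must keep increasing (mpeg2video checks this) */
    sketch_flush_one_frame(enc);
  }
}

int main(void)
{
  SketchEncoder enc = { 10, 10, 7, 3 };   /* encoder still buffers 3 of 10 frames */
  sketch_flush(&enc);
  printf("written:%d of %d (frame_nr now %d)\n",
         enc.countVideoFramesWritten, enc.validEncodeFrameNr, enc.encode_frame_nr);
  return 0;
}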
diff --git a/vid_enc_ffmpeg/gap_enc_ffmpeg_main.h b/vid_enc_ffmpeg/gap_enc_ffmpeg_main.h
index 3a872c5..369fbba 100644
--- a/vid_enc_ffmpeg/gap_enc_ffmpeg_main.h
+++ b/vid_enc_ffmpeg/gap_enc_ffmpeg_main.h
@@ -417,6 +417,11 @@ typedef struct {
gint32 codec_FLAG2_PSY;
gint32 codec_FLAG2_SSIM;
+ gint32 partition_X264_PART_I4X4;
+ gint32 partition_X264_PART_I8X8;
+ gint32 partition_X264_PART_P8X8;
+ gint32 partition_X264_PART_P4X4;
+ gint32 partition_X264_PART_B8X8;
} GapGveFFMpegValues;
@@ -535,7 +540,15 @@ typedef struct GapGveFFMpegGlobalParams { /* nick: gpp */
GtkWidget *ff_codec_FLAG2_NON_LINEAR_QUANT_checkbutton;
GtkWidget *ff_codec_FLAG2_BIT_RESERVOIR_checkbutton;
+ GtkWidget *ff_codec_FLAG2_MBTREE_checkbutton;
+ GtkWidget *ff_codec_FLAG2_PSY_checkbutton;
+ GtkWidget *ff_codec_FLAG2_SSIM_checkbutton;
+ GtkWidget *ff_partition_X264_PART_I4X4_checkbutton;
+ GtkWidget *ff_partition_X264_PART_I8X8_checkbutton;
+ GtkWidget *ff_partition_X264_PART_P8X8_checkbutton;
+ GtkWidget *ff_partition_X264_PART_P4X4_checkbutton;
+ GtkWidget *ff_partition_X264_PART_B8X8_checkbutton;
diff --git a/vid_enc_ffmpeg/gap_enc_ffmpeg_par.c b/vid_enc_ffmpeg/gap_enc_ffmpeg_par.c
index 8855834..ff45e86 100644
--- a/vid_enc_ffmpeg/gap_enc_ffmpeg_par.c
+++ b/vid_enc_ffmpeg/gap_enc_ffmpeg_par.c
@@ -97,8 +97,8 @@ p_set_master_keywords(GapValKeyList *keylist, GapGveFFMpegValues *epp)
gap_val_set_keyword(keylist, "(dct_algo ", &epp->dct_algo, GAP_VAL_GINT32, 0, "# algorithm for DCT (0-6)");
gap_val_set_keyword(keylist, "(idct_algo ", &epp->idct_algo, GAP_VAL_GINT32, 0, "# algorithm for IDCT (0-11)");
gap_val_set_keyword(keylist, "(strict ", &epp->strict, GAP_VAL_GINT32, 0, "# how strictly to follow the standards");
- gap_val_set_keyword(keylist, "(mb_qmin ", &epp->mb_qmin, GAP_VAL_GINT32, 0, "# min macroblock quantiser scale (VBR)");
- gap_val_set_keyword(keylist, "(mb_qmax ", &epp->mb_qmax, GAP_VAL_GINT32, 0, "# max macroblock quantiser scale (VBR)");
+ gap_val_set_keyword(keylist, "(mb_qmin ", &epp->mb_qmin, GAP_VAL_GINT32, 0, "# OBSOLETE min macroblock quantiser scale (VBR)");
+ gap_val_set_keyword(keylist, "(mb_qmax ", &epp->mb_qmax, GAP_VAL_GINT32, 0, "# OBSOLETE max macroblock quantiser scale (VBR)");
gap_val_set_keyword(keylist, "(mb_decision ", &epp->mb_decision, GAP_VAL_GINT32, 0, "# algorithm for macroblock decision (0-2)");
gap_val_set_keyword(keylist, "(b_frames ", &epp->b_frames, GAP_VAL_GINT32, 0, "# max number of B-frames in sequence");
gap_val_set_keyword(keylist, "(packet_size ", &epp->packet_size, GAP_VAL_GINT32, 0, "\0");
@@ -268,15 +268,35 @@ p_set_master_keywords(GapValKeyList *keylist, GapGveFFMpegValues *epp)
/* codec flags new in ffmpeg-0.6 */
- p_set_keyword_bool32(keylist, "(use_bit_reservoir ", &epp->codec_FLAG2_BIT_RESERVOIR, "# CODEC_FLAG2_BIT_RESERVOIR Use a bit reservoir when encoding if possible");
p_set_keyword_bool32(keylist, "(use_bit_mbtree ", &epp->codec_FLAG2_MBTREE, "# CODEC_FLAG2_MBTREE Use macroblock tree ratecontrol (x264 only)");
p_set_keyword_bool32(keylist, "(use_bit_psy ", &epp->codec_FLAG2_PSY, "# CODEC_FLAG2_PSY Use psycho visual optimizations.");
p_set_keyword_bool32(keylist, "(use_bit_ssim ", &epp->codec_FLAG2_SSIM, "# CODEC_FLAG2_SSIM Compute SSIM during encoding, error[] values are undefined");
+ p_set_keyword_bool32(keylist, "(parti4x4 ", &epp->partition_X264_PART_I4X4, "# X264_PART_I4X4 Analyze i4x4");
+ p_set_keyword_bool32(keylist, "(parti8x8 ", &epp->partition_X264_PART_I8X8, "# X264_PART_I8X8 Analyze i8x8 (requires 8x8 transform)");
+ p_set_keyword_bool32(keylist, "(partp4x4 ", &epp->partition_X264_PART_P8X8, "# X264_PART_P8X8 Analyze p16x8, p8x16 and p8x8");
+ p_set_keyword_bool32(keylist, "(partp8x8 ", &epp->partition_X264_PART_P4X4, "# X264_PART_P4X4 Analyze p8x4, p4x8, p4x4");
+ p_set_keyword_bool32(keylist, "(partb8x8 ", &epp->partition_X264_PART_B8X8, "# X264_PART_B8X8 Analyze b16x8, b8x16 and b8x8");
} /* end p_set_master_keywords */
+/* --------------------------------
+ * p_get_partition_flag
+ * --------------------------------
+ */
+static gint32
+p_get_partition_flag(gint32 partitions, gint32 maskbit)
+{
+ if((partitions & maskbit) != 0)
+ {
+ return 1;
+ }
+ return 0;
+
+} /* end p_get_partition_flag */
+
+
/* --------------------------
* p_debug_printf_parameters
* --------------------------
@@ -370,6 +390,28 @@ gap_ffpar_get(const char *filename, GapGveFFMpegValues *epp)
epp->twoPassFlag = TRUE;
}
+ /* The case where all partition flags are 0 but partitions is not 0
+   * can occur when loading presets from an older preset file
+   * that includes the partitions value as an integer but does not include the (redundant)
+ * flags for each single supported bit.
+ * In this case we must fetch the bits from the integer value.
+ * Note: if all values are present in the preset file, only the single bit representations
+ * are used.
+ */
+ if((epp->partition_X264_PART_I4X4 == 0)
+ && (epp->partition_X264_PART_I8X8 == 0)
+ && (epp->partition_X264_PART_P8X8 == 0)
+ && (epp->partition_X264_PART_P4X4 == 0)
+ && (epp->partition_X264_PART_B8X8 == 0)
+ && (epp->partitions != 0))
+ {
+ epp->partition_X264_PART_I4X4 = p_get_partition_flag(epp->partitions, X264_PART_I4X4);
+ epp->partition_X264_PART_I8X8 = p_get_partition_flag(epp->partitions, X264_PART_I8X8);
+ epp->partition_X264_PART_P8X8 = p_get_partition_flag(epp->partitions, X264_PART_P8X8);
+ epp->partition_X264_PART_P4X4 = p_get_partition_flag(epp->partitions, X264_PART_P4X4);
+ epp->partition_X264_PART_B8X8 = p_get_partition_flag(epp->partitions, X264_PART_B8X8);
+ }
+
if(gap_debug)
{
printf("gap_ffpar_get: params loaded: epp:%d\n", (int)epp);