Re: gstreamer camerabin



Filippo Argiolas wrote:
> 2009/4/23 Stefan Kost <ensonic hora-obscura de>:
>   
>> I wonder if you had a look at camerabin in gst-plugins-bad. It is
>> still being worked on, mostly by Nokia people, to be used on the Maemo
>> platform. The idea is to have a high-level image/video capture API.
>>     
>
> First of all, let's say I took a quick look and it seems pretty nice.
> It would be great to use something like that in cheese so that we
> could get rid of some of the messy (IMHO not that much) code in
> cheese-webcam.
>
> The thing I don't like that much is that it takes control of
> everything, from the v4l source to the videosink, aiming to manage the
> whole camera stack. I would have much preferred a camerasourcebin
> with some extended video-saving and photo-saving capabilities but with
> no video display feature. As far as I can tell there are a couple of
> entry points/properties for inserting custom videodisplay and
> postprocessing elements, but it loses most of GStreamer's versatility
> this way.
>   
Please note that it is in -bad, so the API can be totally thrown over :)
We're actually going to do that, sort of.
> My main concern is that, at least at first look, it could collide
> with my plans for the future of cheese.
> With the new effect selector/clutter display stuff I'm working on in
> my spare time (ever less of it lately,
> http://cgit.freedesktop.org/~fargiolas/cheese-stage/) the pipeline
> structure as it is now will be radically changed.
> There is a sketch in the TODO file in that repository, but it's
> quite outdated, so let me summarize it quickly.
>
> There will basically be two pipelines that will be dynamically
> linked and unlinked to the source bin using pad blocking.
> One, the effects preview, will be something like:
> videoscale ! videorate ! tee ! filter-1 ! sink-1
>                              ! filter-2 ! sink-2
>                              ...
>                              ! filter-n ! sink-n
> This will display several effect previews at the same time. The
> downrate and downscale elements will help remove some load from the
> CPU (for the old effects) or GPU (for gleffects).
>   
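As a concrete sketch of that fan-out (element names and caps are only
illustrative, the effects here are just examples): note that each tee
branch needs a queue, otherwise the branches block each other.

```shell
gst-launch-0.10 v4l2src \
  ! videoscale ! videorate \
  ! video/x-raw-yuv,width=160,height=120,framerate=10/1 \
  ! tee name=t \
  t. ! queue ! ffmpegcolorspace ! vertigotv ! ffmpegcolorspace ! xvimagesink \
  t. ! queue ! videobalance saturation=0 ! ffmpegcolorspace ! xvimagesink
```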
What about using videomixer (which has alpha, x/y position and z-order
properties on each pad) to merge all of those back into one video
stream? Then you can just plug the whole bin into the viewfinder-filter
slot.
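Something like this sketch would position each preview in the mixed
output (the "sink_%d" request pad name and the 160x120 tiling are my
assumptions, not tested code):

```c
/* Sketch: request one mixer pad per preview branch and place it.
 * n_effects and the tiling geometry are just examples. */
GstElement *mix = gst_element_factory_make ("videomixer", "mix");
GstPad *pad;
gint i;

for (i = 0; i < n_effects; i++) {
  pad = gst_element_get_request_pad (mix, "sink_%d");
  g_object_set (pad,
      "xpos", (i % 3) * 160,   /* three previews per row */
      "ypos", (i / 3) * 120,
      "zorder", i,
      NULL);
  /* link the src pad of filter-i's branch to `pad` here */
  gst_object_unref (pad);
}
```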

> When an effect is selected, the sourcebin src pad will be blocked and
> only a single display bin (basically effect + sink) will be relinked.
> This way there will be no need to stop, relink and restart the whole
> pipeline (as we do now), and the whole experience will be smoother.
>   
We could use the navigation interface to check which part of the video
the user clicked. But handling the message and doing the relinking
(inside your effect bin) would be up to cheese.
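To make the relinking concrete, a minimal pad-blocking sketch with the
0.10 API (the SwitchData struct and the bin/pad names are hypothetical
helpers of mine, not cheese code):

```c
/* Sketch: switch the display bin without restarting the pipeline.
 * The src pad of the source bin is blocked, the old effect bin is
 * unlinked, the new one linked, then the pad is unblocked again. */
typedef struct {
  GstPad *old_sink;     /* sink pad of the currently linked bin */
  GstPad *new_sink;     /* sink pad of the bin to switch to */
  GstElement *new_bin;
} SwitchData;

static void
pad_unblocked (GstPad *pad, gboolean blocked, gpointer user_data)
{
  /* nothing to do, data is flowing again */
}

static void
pad_blocked (GstPad *srcpad, gboolean blocked, gpointer user_data)
{
  SwitchData *sw = user_data;

  gst_pad_unlink (srcpad, sw->old_sink);
  gst_pad_link (srcpad, sw->new_sink);
  gst_element_set_state (sw->new_bin, GST_STATE_PLAYING);
  gst_pad_set_blocked_async (srcpad, FALSE, pad_unblocked, NULL);
}

/* kick off the switch, e.g. from the navigation/click handler: */
gst_pad_set_blocked_async (src_pad, TRUE, pad_blocked, sw);
```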
> Will this be achievable with camerabin? As far as I can tell, I don't think so.
> Correct me if I'm wrong. Development is still at an early stage, so it's
> better to take a decision about camerabin *now*.
>
> Cheers,
> Filippo
>   

Basically I don't want to push the cheese project to make a decision.
Nokia's plan behind camerabin is to come up with a framework for a
high-level video-capture bin. If, for example, we later add the
transcoding feature currently being sketched elsewhere, then apps
building on camerabin can easily benefit from it. Input like what you
wrote above is very valuable right now; we can only break the API as
long as it is in -bad. I'd like to know how camerabin could fit into
cheese, or what does not yet fit, so that we can make it fit and join
forces on a camera engine, while leaving space for project-specific
features.

Stefan


