Re: gstreamer camerabin



Filippo Argiolas wrote:
> On Mon, May 18, 2009 at 6:05 PM, Stefan Kost <ensonic hora-obscura de> wrote:
>> Filippo Argiolas wrote:
>>> 2009/4/23 Stefan Kost <ensonic hora-obscura de>:
>> Please note that it is in -bad and the api can be totally thrown over :)
>> We're actually going to do that, sort of.
>>> My main concern is that it could collide, at least at a first look,
>>> with my plan about the future cheese.
>>> With the new effect selector/clutter display stuff I'm working on in
>>> the spare time (ever less lately,
>>> http://cgit.freedesktop.org/~fargiolas/cheese-stage/) the pipeline
>>> structure as it is now will be radically changed.
>>> There is some sketch in the TODO file in that repository but it's
>>> quite outdated so let me summarize it quickly.
>>>
>>> There will be basically two pipelines that will be dynamically
>>> linked and unlinked to the source bin using pad blocking.
>>> One, the effects preview, will be something like:
>>> videoscale ! videorate ! tee ! filter-1 ! sink-1
>>>                              ! filter-2 ! sink-2
>>>                              ...
>>>                              ! filter-n ! sink-n
>>> This will display several effect previews at the same time. The
>>> downrate and downscale elements will help to remove some load from the
>>> CPU (for old effects) or GPU (for gleffects).
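>>>
>>> In code the preview bin could look roughly like this (an untested
>>> 0.10-style sketch; "source", "effect_names" and "n_effects" stand in
>>> for whatever the source bin and effect list end up being):
>>>
>>>   GstElement *pipe  = gst_pipeline_new ("preview");
>>>   GstElement *scale = gst_element_factory_make ("videoscale", NULL);
>>>   GstElement *rate  = gst_element_factory_make ("videorate", NULL);
>>>   GstElement *tee   = gst_element_factory_make ("tee", NULL);
>>>   gint i;
>>>
>>>   gst_bin_add_many (GST_BIN (pipe), source, scale, rate, tee, NULL);
>>>   gst_element_link_many (source, scale, rate, tee, NULL);
>>>
>>>   for (i = 0; i < n_effects; i++) {
>>>     /* every tee branch needs its own queue to decouple the sinks */
>>>     GstElement *queue  = gst_element_factory_make ("queue", NULL);
>>>     GstElement *filter = gst_element_factory_make (effect_names[i], NULL);
>>>     GstElement *sink   = gst_element_factory_make ("autovideosink", NULL);
>>>     GstPad *teepad, *sinkpad;
>>>
>>>     gst_bin_add_many (GST_BIN (pipe), queue, filter, sink, NULL);
>>>     gst_element_link_many (queue, filter, sink, NULL);
>>>
>>>     /* keep teepad around if you want to release it later */
>>>     teepad  = gst_element_get_request_pad (tee, "src%d");
>>>     sinkpad = gst_element_get_static_pad (queue, "sink");
>>>     gst_pad_link (teepad, sinkpad);
>>>     gst_object_unref (sinkpad);
>>>   }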
>>>
>> What about using the videomixer (which has alpha, x/y position and
>> z-order properties on each pad) to merge those all into one video
>> stream again? Then you could just plug the whole bin into the
>> viewfinder-filter slot.
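>>
>> Untested sketch of what I mean (everything lower-case here - mixer,
>> teepad, i, cols, THUMB_W/THUMB_H - is a placeholder); videomixer's
>> request pads expose xpos/ypos/zorder/alpha:
>>
>>   GstPad *mixpad = gst_element_get_request_pad (mixer, "sink_%d");
>>
>>   g_object_set (mixpad,
>>       "xpos",   (i % cols) * THUMB_W,  /* grid slot of this preview */
>>       "ypos",   (i / cols) * THUMB_H,
>>       "zorder", i,
>>       "alpha",  1.0,
>>       NULL);
>>   gst_pad_link (teepad, mixpad);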
> 
> Although that could be a viable path with the current cheese, it's not
> compatible with what I have in mind. Each sink is transformed into a
> clutter actor using ClutterGLXTexturePixmap.
> This opens a whole world of new user interaction approaches and experiments.
> We could build an animated effect selector on top of it, e.g. you
> select an effect and it zooms in to fullscreen. Having one separate
> sink for each effect is mandatory for everything to work at the
> moment.
> 
>>> When an effect is selected, the sourcebin src pad will be blocked and
>>> only a single display bin (basically effect + sink) will be relinked.
>>> This way there will be no need to stop, relink and play the whole
>>> pipeline (as we do now), and the whole experience will be smoother.
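>>>
>>> A rough sketch of that relink (untested; "cur_sinkpad" is the sink pad
>>> of the branch that is currently linked, and the display bin is assumed
>>> to expose a "sink" ghost pad):
>>>
>>>   static void
>>>   unblocked_cb (GstPad *pad, gboolean blocked, gpointer data)
>>>   {
>>>     /* nothing to do, data is flowing again */
>>>   }
>>>
>>>   static void
>>>   blocked_cb (GstPad *srcpad, gboolean blocked, gpointer data)
>>>   {
>>>     GstElement *display_bin = data;   /* basically effect + sink */
>>>     GstPad *sinkpad;
>>>
>>>     /* the pad is blocked here, so we can relink while PLAYING */
>>>     gst_pad_unlink (srcpad, cur_sinkpad);
>>>     sinkpad = gst_element_get_static_pad (display_bin, "sink");
>>>     gst_pad_link (srcpad, sinkpad);
>>>     gst_object_unref (sinkpad);
>>>
>>>     gst_pad_set_blocked_async (srcpad, FALSE, unblocked_cb, NULL);
>>>   }
>>>
>>>   /* called from the UI when the user picks an effect */
>>>   gst_pad_set_blocked_async (srcpad, TRUE, blocked_cb, display_bin);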
>>>
>> We could use the navigation iface to check what part of the video the
>> user clicked. But handling the message and doing the relinking (inside
>> your effect bin) would be up to cheese.
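>>
>> Something like this untested sketch (the mapping from coordinates to
>> the clicked preview is left out):
>>
>>   static gboolean
>>   nav_event_cb (GstPad *pad, GstEvent *event, gpointer data)
>>   {
>>     if (GST_EVENT_TYPE (event) == GST_EVENT_NAVIGATION) {
>>       const GstStructure *s = gst_event_get_structure (event);
>>       const gchar *type = gst_structure_get_string (s, "event");
>>
>>       if (type && g_str_equal (type, "mouse-button-press")) {
>>         gdouble x, y;
>>
>>         gst_structure_get_double (s, "pointer_x", &x);
>>         gst_structure_get_double (s, "pointer_y", &y);
>>         /* figure out which preview was clicked and start the relink */
>>       }
>>       return FALSE;   /* drop it, the event was meant for us */
>>     }
>>     return TRUE;
>>   }
>>
>>   gst_pad_add_event_probe (srcpad, G_CALLBACK (nav_event_cb), NULL);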
> 
> My idea is to use gstreamer for the video source, video processing and a
> little bit of video display. Once the image coming from the sink
> becomes a clutter texture (with a sort of xoverlay thing), everything
> that involves user interaction is handled by clutter.
> 
>>> Will this be achievable with camerabin? As far as I can tell, I don't think so.
>>> Correct me if I'm wrong. Development is still at an early stage, so it's
>>> better to take a decision *now* about camerabin.
>>>
>>> Cheers,
>>> Filippo
>>>
>> Basically I don't want to push the cheese project to make a decision.
>> Nokia's plan behind camerabin is to come up with a framework for a
>> high-level video-capture bin. If we later add e.g. the transcoding
>> feature currently being sketched elsewhere, then apps building on
>> camerabin can easily benefit from it. Input like what you wrote above is
>> very valuable right now. Only as long as it is in -bad can we break the
>> api. I'd like to know how it could fit into cheese, or what does not yet
>> fit, so that we can make it fit and join forces on a camera engine, while
>> leaving space for project-specific features.
> 
> I understand camerabin's purpose and I agree it's a great project, but
> as often happens with abstraction work, it's difficult to accommodate
> everyone's needs. My need here is tight integration between the
> animation/display/ui framework (clutter) and the processing framework
> (gstreamer). If camerabin manages everything from the source to the
> display, it loses some of the versatility I get from creating elements
> and linking them by hand.
> IMHO it could be split into two or three bins:
> - the source one, which could also take care of device detection (I
> posted a mail about this to gst-devel about a week ago where I explain
> why propertyprobe and autovideosink are not exactly optimal)
> - the photo/videosave one, which could do the photo saving and the
> video recording part
> - the viewfinder one, which would take care of displaying the video
> 
> This way I could hook up in the usual gstreamer way, adding e.g. my
> postprocessing stuff between the source and the save bin, or I could
> attach a tee element for my effect previewer and manage the display
> part on my own; see the sketch below.
> There could also be a virtual camerabin that links those 3 bins
> together, giving the same features the current implementation has.
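>
> As an untested sketch (the bin element names here are made up, this is
> just how I imagine wiring them up):
>
>   GstElement *src  = gst_element_factory_make ("camerasrcbin", NULL);
>   GstElement *fx   = gst_element_factory_make ("gleffects", NULL);
>   GstElement *tee  = gst_element_factory_make ("tee", NULL);
>   GstElement *save = gst_element_factory_make ("camerasavebin", NULL);
>
>   gst_bin_add_many (GST_BIN (pipe), src, fx, tee, save, NULL);
>   /* my postprocessing sits between the source and the save bin */
>   gst_element_link_many (src, fx, tee, NULL);
>   gst_element_link (tee, save);
>   /* further tee branches (queues omitted) would feed my own display */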

Actually, internally it is like this: there is a capture-source bin, a
viewfinder bin, an imagecapture bin and a videocapture bin. And you are right,
it would make sense to export these. Just imagine someone writing an
audio recorder - they would also need a capture source, a monitor bin and an
audiocapturebin (see the sketch below). I will think more about the api needed
on the sub-bins, and camerabin itself could become an optional api for simple
apps.
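
A purely hypothetical sketch of such an audio recorder on top of exported
sub-bins (none of these elements exist yet; queues omitted):

  GstElement *src     = gst_element_factory_make ("audiosrcbin", NULL);
  GstElement *tee     = gst_element_factory_make ("tee", NULL);
  GstElement *monitor = gst_element_factory_make ("monitorbin", NULL);
  GstElement *capture = gst_element_factory_make ("audiocapturebin", NULL);

  gst_bin_add_many (GST_BIN (pipe), src, tee, monitor, capture, NULL);
  gst_element_link (src, tee);
  gst_element_link (tee, monitor);   /* level meter / live monitoring */
  gst_element_link (tee, capture);   /* encoder ! muxer ! filesink */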

> 
> ciao,
> Filippo

Thanks for the discussion.

Stefan

