Re: YouTube plugin and libgdata



On Wed, 2010-04-14 at 11:09 +0200, Iago Toral wrote:
> On Wed, 14 Apr 2010 08:59:03 +0100, Philip Withnall
> <philip tecnocode co uk> wrote:
> > On Tue, 2010-04-13 at 13:43 +0200, Iago Toral wrote:
> >> Ups, actually I am working on the same thing... I already
> >> have the search() working....
> >> 
> >> On Tue, 13 Apr 2010 13:22:50 +0200, Víctor M. Jáquez L.
> >> <vjaquez igalia com> wrote:
> >> > Hi Philip,
> >> > 
> >> > On Wed, Apr 07, 2010 at 07:24:18PM +0100, Philip Withnall wrote:
> >> >> I've taken a look at the YouTube plugin in Grilo, and I'm wondering why
> >> >> it doesn't use libgdata[1]. libgdata's got a fairly stable YouTube
> >> > 
> >> > I'm trying to play around with this task, but I've got stuck on a simple
> >> > thing: how, in gdata, could I grab the list of categories available in
> >> > YouTube?
> >> 
> >> Looks like you can't; you have to parse it on your own, as we are doing
> >> right now...
> > 
> > That's correct. I didn't know this API existed before (the Totem plugin
> > doesn't deal with categories at all), but I'll add support for it to
> > libgdata in time for the 0.7 release. I've filed a bug where you can see
> > the progress[1].
> > 
> > For the moment, though, you'll have to continue to parse categories.cat
> > yourself, since it'll be a while until libgdata 0.7's out, and currently
> > only version 0.6.4 is on GNOME's external dependency list. 0.7 is going
> > to break API and ABI from 0.6.x.
> > 
> > Are there any other problems so far?
> 
> Yeah, based on my work so far I have some things to share:
> 
> For the browse() operations in Grilo we define a set of categories like this
> in the YouTube plugin:
> 
> root
>   ----- standard-feeds
>            ------- Top Rated
>            ------- Most Viewed
>            ------- ...
>   ----- categories
>            ------- Sports
>            ------- Trailers
>            ------- ....
> 
> When the user browses "root/standard-feeds" we show all the standard feeds and
> the metadata associated with them (like the total number of items available in
> each feed); the same goes for categories.
> 
> As a side note, it would also be nice to have an API to get the list of
> supported feeds (that would help us maintain this structure if feeds are added
> or removed), but this is not very important.

There's currently the GDataYouTubeStandardFeedType enum, but using that
doesn't remove all maintenance obligations. I don't think the standard
feeds are going to change that much, though, so I don't think
maintenance will be a problem.
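
For what it's worth, a small static table along these lines would keep the
mapping in one place (just a sketch off the top of my head; the table and
display names are made up, but the GDATA_YOUTUBE_* values are the enum
members):

    /* Hypothetical mapping between the plugin's "standard-feeds" children
     * and libgdata's GDataYouTubeStandardFeedType. */
    static const struct {
        const gchar *display_name;
        GDataYouTubeStandardFeedType feed_type;
    } standard_feeds[] = {
        { "Top Rated",   GDATA_YOUTUBE_TOP_RATED_FEED },
        { "Most Viewed", GDATA_YOUTUBE_MOST_VIEWED_FEED },
        { "Most Recent", GDATA_YOUTUBE_MOST_RECENT_FEED },
        /* ... remaining feeds ... */
    };

Adding or removing a feed is then a one-line change in the plugin.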

> One thing that I did notice, though, is that obtaining the number of items in
> a feed, for example, is noticeably slower than in our ad-hoc implementation
> (which just requests a URL like
> http://gdata.youtube.com/feeds/standardfeeds/top_favorites?start-index=1&max-results=1
> and extracts the total results count without even parsing the XML), and I
> wonder if the problem is that we are not using libgdata properly for this
> purpose. This is what I am doing to resolve the item count for each feed:
> 
>     /* Request at most one entry; we only need the feed's total result count. */
>     GDataQuery *query = gdata_query_new_with_limits (NULL, 0, 1);
>     feed = gdata_youtube_service_query_standard_feed (service, feed_type, query,
>                                                       NULL,  /* cancellable */
>                                                       NULL,  /* progress callback */
>                                                       NULL,  /* progress user data */
>                                                       NULL); /* error */
> 
>     if (feed) {
>       childcount = gdata_feed_get_total_results (feed);
>       g_object_unref (feed);
>     }
> 
> I know I should not use the sync version, and I am planning to change that,
> but our current implementation is also synchronous and it is clearly quicker,
> so I felt I should mention it.

That code should be fine; it should cause libgdata to do pretty much the
same as you're currently doing (though it will parse all the XML, rather
than just extracting the total result count immediately). Parsing the
XML should be fairly fast, so I don't know why the new code would be
slower. Have you got some figures?
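
If it helps, something as simple as this around each version would give us
comparable numbers (just a sketch using GTimer; wrap whichever call you are
measuring):

    /* Rough wall-clock timing of the query plus parsing. */
    GTimer *timer = g_timer_new ();

    feed = gdata_youtube_service_query_standard_feed (service, feed_type, query,
                                                      NULL, NULL, NULL, NULL);

    g_message ("Query took %f seconds", g_timer_elapsed (timer, NULL));
    g_timer_destroy (timer);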

> Another thing I noticed when comparing both versions is that the ad-hoc
> version has better response times and items are added to the UI more
> continuously. I think this happens because we hand items to the UI as soon as
> we have parsed them; with libgdata, if you parse an XML feed with 50 items you
> won't get any of them in the UI until all of them have been parsed. Also,
> libgdata parses a lot more information per item than our ad-hoc implementation
> did (or so I guess), which adds some penalty to the processing time.

Each query function in libgdata has a GDataQueryProgressCallback
parameter, which allows you to pass the query a function which will be
called for each entry as it's parsed. You should use this to replicate
the behaviour of your current code. Note that the callback is currently
automatically executed in an idle function, but I'm considering changing
this for 0.7 so that it's executed in the same thread as the query (for
consistency with the async completion callback). Which way works best
regarding Grilo's integration with the host app's main loop?
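
Roughly like this (an untested sketch; entry_parsed_cb and its body are mine,
but the parameters are those of GDataQueryProgressCallback):

    static void
    entry_parsed_cb (GDataEntry *entry, guint entry_key, guint entry_count,
                     gpointer user_data)
    {
        /* Called once per entry as libgdata parses it, so the item can be
         * handed to the UI immediately instead of waiting for the whole
         * feed. */
        g_message ("Parsed entry %u of %u: %s", entry_key, entry_count,
                   gdata_entry_get_title (entry));
    }

    feed = gdata_youtube_service_query_standard_feed (service, feed_type, query,
                                                      NULL,            /* cancellable */
                                                      entry_parsed_cb, /* progress callback */
                                                      NULL,            /* progress user data */
                                                      NULL);           /* error */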

libgdata's XML parsing code should be fairly fast (though I haven't
profiled it recently, and it won't be as fast as code which just
cherry-picks specific bits of data). This probably won't improve until
Google stabilises server-side support for partial response[1], which
I'll implement in libgdata as soon as possible afterwards.

Thanks,
Philip

[1]:
http://googlecode.blogspot.com/2010/03/making-apis-faster-introducing-partial.html

> Iago



