Re: YouTube plugin and libgdata

On Wed, 14 Apr 2010 18:44:17 +0100, Philip Withnall
<philip tecnocode co uk> wrote:
> On Wed, 2010-04-14 at 11:09 +0200, Iago Toral wrote:
>> On Wed, 14 Apr 2010 08:59:03 +0100, Philip Withnall
[...]
>> As a side note, it would be nice to have API also to get a list of supported 
>> feeds (that would help us with maintaining the structure if feeds are added 
>> or removed), but this is not very important.
> 
> There's currently the GDataYouTubeStandardFeedType enum, but using that
> doesn't remove all maintenance obligations. I don't think the standard
> feeds are going to change that much, though, so I don't think
> maintenance will be a problem.

Ok.

>> One thing that I did notice though is that obtaining the # of items in a feed,
>> for example, is noticeably slower than in our ad-hoc implementation (which
>> just invokes a URL like http://gdata.youtube.com/feeds/standardfeeds/top_favorites?start-index=1&max-results=1
>> and extracts the total-results count without even parsing the XML), and I wonder
>> if the problem is that we are not using libgdata properly for this purpose.
>> This is what I am doing to resolve the item count for each feed:
>> 
>>     GDataQuery *query = gdata_query_new_with_limits (NULL, 0, 1);
>>     feed = gdata_youtube_service_query_standard_feed (service,
>>                                                       feed_type,
>>                                                       query,
>>                                                       NULL,
>>                                                       NULL,
>>                                                       NULL,
>>                                                       NULL);
>>     g_object_unref (query);
>>
>>     if (feed) {
>>       childcount = gdata_feed_get_total_results (feed);
>>       g_object_unref (feed);
>>     }
>> 
>> I know I should not use the sync version (I am planning to change that), but
>> our current implementation is also synchronous and is clearly quicker, so I
>> felt I should mention it.
> 
> That code should be fine; it should cause libgdata to do pretty much the
> same as you're currently doing (though it will parse all the XML, rather
> than just extracting the total result count immediately). Parsing the
> XML should be fairly fast, so I don't know why the new code would be
> slower. Have you got some figures?

No, sorry, I don't have any numbers. Anyway, I changed the implementation to
request the category counts asynchronously in the background when the YouTube
plugin is started, so this is no longer a problem.

>> Another thing I noticed comparing both versions is that the ad-hoc version
>> has better response times, and the addition of items to the UI is more continuous.
>> I think this happens because we hand items to the UI as soon as we have parsed
>> them. With libgdata, if you parse an XML feed with 50 items you won't get any of
>> them in the UI until all of them have been parsed. Also, libgdata parses a lot
>> more information per item than our ad-hoc implementation did (or so I guess),
>> which probably adds some penalty to the processing time.
> 
> Each query function in libgdata has a GDataQueryProgressCallback
> parameter, which allows you to pass the query a function which will be
> called for each entry as it's parsed. You should use this to replicate
> the behaviour of your current code. 

Great, that's perfect for us!
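In case it helps anyone else on the list, this is roughly how I expect to wire
up the progress callback. It is only a sketch against my reading of the
current API (the callback and data names are mine, and error handling is
abbreviated):

```c
#include <gdata/gdata.h>

/* Called once per entry as it is parsed, so each item can be handed
 * to the UI immediately instead of waiting for the whole feed. */
static void
entry_parsed_cb (GDataEntry *entry, guint entry_key, guint entry_count,
                 gpointer user_data)
{
  g_print ("Parsed entry %u of %u: %s\n", entry_key + 1, entry_count,
           gdata_entry_get_title (entry));
}

/* Completion callback: all entries have been delivered by now. */
static void
query_done_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
  GError *error = NULL;
  GDataFeed *feed = gdata_service_query_finish (GDATA_SERVICE (source),
                                                result, &error);

  if (feed != NULL)
    g_object_unref (feed);
  else
    g_error_free (error);
}

static void
start_query (GDataYouTubeService *service)
{
  gdata_youtube_service_query_standard_feed_async (service,
                                                   GDATA_YOUTUBE_TOP_RATED_FEED,
                                                   NULL,  /* query */
                                                   NULL,  /* cancellable */
                                                   entry_parsed_cb,
                                                   NULL,  /* progress data */
                                                   query_done_cb,
                                                   NULL); /* user data */
}
```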

> Note that the callback is currently
> automatically executed in an idle function, but I'm considering changing
> this for 0.7 so that it's executed in the same thread as the query (for
> consistency with the async completion callback). Which way works best
> regarding Grilo's integration with the host app's main loop?

Grilo uses the application's main loop already, so I guess the current
implementation is fine from our perspective.

> libgdata's XML parsing code should be fairly fast (though I haven't
> profiled it recently, and it won't be as fast as code which just
> cherry-picks specific bits of data). This probably won't improve until
> Google stabilises server-side support for partial response[1], which
> I'll implement in libgdata as soon as possible afterwards.

Thanks Philip!

Iago

