[xml] Parsing big XML data received in chunks from libcurl



Hi the list,

I am using libcurl to fetch a large XML document.

In the CURLOPT_WRITEFUNCTION callback, I receive a piece of memory containing XML data.

The first time the callback is executed, we call xmlReaderNewMemory() to set up a reader. Then we call xmlTextReaderRead() in a loop while it returns 1.
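To make this concrete, here is a minimal sketch of that first call. The parse_state struct, its field names, and the "stream.xml" URL are my own illustration, not real code; note that the sketch creates the reader with xmlReaderForMemory() (the creation variant), since xmlReaderNewMemory() is the variant that reuses an already-existing reader:

    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>
    #include <libxml/xmlreader.h>

    /* Hypothetical state passed to the callback via CURLOPT_WRITEDATA. */
    typedef struct {
        xmlTextReaderPtr reader;  /* NULL until the first chunk arrives  */
        char *buf;                /* buffer currently handed to the reader */
        size_t len;
    } parse_state;

    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        parse_state *st = userdata;
        size_t total = size * nmemb;

        if (st->reader == NULL) {
            /* First chunk: keep our own copy, since the reader does not
               duplicate the memory it is given. */
            st->buf = malloc(total);
            if (st->buf == NULL)
                return 0;                 /* abort the transfer */
            memcpy(st->buf, ptr, total);
            st->len = total;
            st->reader = xmlReaderForMemory(st->buf, (int)st->len,
                                            "stream.xml", NULL, 0);
            if (st->reader == NULL)
                return 0;
        }
        /* (Rebuilding the buffer for later chunks is shown further down.) */

        /* Pull nodes while the reader still has complete ones to give us. */
        while (xmlTextReaderRead(st->reader) == 1) {
            /* ... process the current node ... */
        }
        /* With a truncated document we fall out of the loop here, because
           the rest of the data has not arrived yet. */

        return total;
    }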

Since the XML is split across chunks, the loop eventually fails because the reader needs data that has not arrived yet...

Thanks to xmlTextReaderByteConsumed(), we can tell how much of the buffer has already been consumed, and therefore which part has not been read yet.
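In code, continuing the parse_state sketch above, and assuming xmlTextReaderByteConsumed() returns an exact byte offset into the buffer we supplied:

    /* After the xmlTextReaderRead() loop stops: */
    long consumed = xmlTextReaderByteConsumed(st->reader);
    size_t leftover = st->len - (size_t)consumed;
    /* The unread tail starts at st->buf + consumed and is leftover bytes long. */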

The next time the callback is called, we build a new buffer (sketched after this list) containing:
* the data not yet read from the previous call
* the new data from the current call.
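Something like this, still continuing the sketch (ptr and total are the arguments of the current write callback invocation):

    size_t newlen = leftover + total;
    char *newbuf = malloc(newlen);
    if (newbuf == NULL)
        return 0;                                 /* abort the transfer */
    memcpy(newbuf, st->buf + consumed, leftover); /* unread tail        */
    memcpy(newbuf + leftover, ptr, total);        /* fresh data         */
    free(st->buf);
    st->buf = newbuf;
    st->len = newlen;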

This is where my problem lies. I am looking for a function that can switch the reader over to the new buffer so that parsing continues where it left off. I have tried xmlReaderNewMemory(), but it fails...
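Concretely, the reuse call I attempted looks like this; as far as I understand it, xmlReaderNewMemory() resets the reader to parse the new buffer as a complete document from its first byte, which would explain why it fails on a buffer that no longer begins at the document root:

    /* This is the reuse call that fails for me: */
    xmlReaderNewMemory(st->reader, st->buf, (int)st->len,
                       "stream.xml", NULL, 0);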

Maybe such a function does not exist, and maybe this whole idea of feeding successive buffers of the same XML document to the reader is a bad one.

Is there a better way? What would you advise?

Thanks a lot.
Regards.
David.



