Re: [Vala] Implicit lambdas/closures



A simpler implementation comes to mind.

Any function that wants to be re-entered halfway through by lambda-style continuation callbacks is implemented like this:

A header function sets up the local variables on the heap, following normal lambda behaviour, and then calls the main body, which takes the heap pointer and a goto-value.

The main body does a case-dispatch on the goto-value to jump to the right label halfway through the function.

For each label that serves as a lambda re-entry point, a stub function exists which calls the main body with the heap pointer and the right goto-value.

These stub functions become the various continuation entry callbacks that are passed to the subsystem that will issue the callback.

I never thought C goto would be so useful.

It allows the main function body to stay as one function and even allows loops to properly work.

And it involves less Vala freakery to implement.

Comments?

Sam



From: Sam Liddicott <sam liddicott com>
Sent: Monday, September 15, 2008 9:57 AM
To: vala-list gnome org
Subject: [Vala] Implicit lambdas/closures

I've been thinking a lot about how vala can make programming asynchronous rpc servers as simple as programming synchronous rpc servers; i.e. no need to worry about continuations/callbacks.

What I've come up with is implicit lambdas, without the need for ()=>{}, which in some cases would be nested, getting very ugly.

Simple rpc call handlers are self-contained and synchronous, e.g.

rpc_server_get_next(thing x) {
    return next(x);
}

But some rpc calls can only be satisfied by making other async calls:

rpc_server_check_credentials(creds c) {
    req r = auth_server.check_creds(c);
    return r.wait_for_response();
}

And unless you have a new thread or process for each call, this is going to block other rpc requests. Consider this case:

rpc_server_read_file(file f, int offset, int len) {
    return sysread(f, offset, len);
}

It should be possible to use asynchronous io on some modern systems, yet it is hardly worth allocating a new thread over.

Samba4 marks each server request as to whether or not it may be processed in an async manner; usually it can be.

If the server module chooses to do so, it also marks the request when it returns, so that a response is not sent right away. The module sends the response later, after one or a few callbacks from the async client requests used to fulfil the original server request.

However, the processing of async client responses is generally identical to the processing of the same responses done synchronously.

The general pattern looks like this:

server_do_things(stuff) {
    req = do_thing(stuff, otherstuff);
    req.add_callback(finish, stuff, otherstuff);
    if (stuff.may_async) {
        stuff.did_async = true;
        return OK;
    }
    // this also calls the finish callback
    req.wait_till_done();
    return (stuff.status);
}

The obvious candidate for the lambda is the finish function, giving something like this (assuming lambda local-vars support is complete):

server_do_things(stuff) {
    req = do_thing(stuff, otherstuff);
    req.add_callback( (req) => {
        stuff.status = req.receive(stuff);
        // do more stuff here
    });
    if (stuff.may_async) {
        stuff.did_async = true;
        return OK;
    }
    // this also calls the finish callback
    req.wait_till_done();
    return (stuff.status);
}

It's a little puzzling that the lambda appears halfway through, but it gets more complicated if the server has to issue more than one client request: the nested lambdas make the whole thing look like Lisp.

The idea of implicit lambdas is to allow the code to be laid out as if it were synchronous but still work asynchronously; see:

server_do_things(stuff) {
    req = do_thing(stuff, otherstuff);
    req.add_callback(finish, stuff, otherstuff);
    if (stuff.may_async) {
        stuff.did_async = true;
        return OK;
    } else {
        // this also calls the finish callback
        req.wait_till_done();
        return (stuff.status);
    }
finish:
    stuff.status = req.receive(stuff);
    // do more stuff here
    return (req.status);
}

finish: is a regular C-style label; but if its address is taken for a delegate, then it becomes simultaneously a lambda lasting until the end of the enclosing block, and also a call to that lambda.

Thus the code can be run into during normal execution, as well as apparently run into for async execution.

If such a thing were possible, the rpc glue would differ so as to make better use of it.

Here's a more complex case which unwinds a terrible state machine into an apparent linear function:

server_auth(stuff) {
    req = send_get_auth_types(stuff);
    // I'll talk about this block later...
    if (stuff.may_async) {
        req.add_callback(type);
        stuff.did_async = true;
        return;
    } else {
        req.wait_for_response();
    }
type:
    if (stuff.kerberos) {
        kreq = send_kerberos(krb);
        ...
krb:
        if (kreq.success) ...
    } else if (stuff.ntlm) {
    }
}

I said that implicit lambdas ought to last to the end of the containing block; but really, when that block finishes, a new implicit lambda should start in the outer block, as if it were declared in the same manner as discussed, and so on until the end of the enclosing function block is reached.

That way parts of the function can be deferred or executed immediately without a problem.

I don't know how such a lambda would cope with being in a loop.

As for the if block about which I said "// I'll talk about this block later...": clearly it's a pain to have to repeat that async/sync fixup code every time. But I think I've said enough to lay out the problem, and how lambdas could unwind horrible continuation chains and async state machines, so I'm open for comment.

Sam




