On Fri, 14 Aug 2009 13:57:01 -0700, Sam Danielson <samdanielson gmail com> wrote:
> > I don't think I understand what you mean. For reference types, there
> > is no technical difference between Foo and Foo?, except as the
> > marking of arguments and return values that must not be null (that's
> > checked in the method by simple assertions). You can't do any "cast"
> > there.
>
> By cast I mean that there must be some way to get a Foo out of a Foo?.
> When I encounter a Foo? I should (ideally) not be allowed to
> dereference it unless I have first ensured it is not null by
> transforming it into a Foo.
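[For comparison, the assertion-based checking described above can be sketched in plain C. This is a hand-written illustration, not actual valac output; the type and function names are invented:]

    #include <assert.h>
    #include <stddef.h>

    typedef struct { int value; } Foo;

    /* Non-nullable parameter "Foo foo": the generated code merely
     * asserts at entry; there is no compile-time guarantee. */
    int use_foo (Foo *foo)
    {
        assert (foo != NULL);   /* the "simple assertion" check */
        return foo->value;
    }

    /* Nullable parameter "Foo? foo": no assertion is emitted, so the
     * body (or caller) must test before dereferencing. */
    int use_foo_maybe (Foo *foo)
    {
        if (foo == NULL)
            return 0;           /* explicit null handling */
        return foo->value;
    }

[So the only difference between the two is where the null check happens and who is responsible for it; nothing statically prevents passing null to use_foo.]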
I see two problems with not allowing nulls in regular variables.

One is that a variable starts its life as null (or, even worse, uninitialised), and could be assigned to another variable or returned before having been assigned a value. A degree of code analysis would be needed to fix this, by reducing a variable's scope to only the minimal region where it's actually used. Taking the variable out of scope after its last use before a subsequent assignment also avoids some of the messing around with temporary variables that takes place. I believe most intelligent compilers do this sort of thing, some even going so far as to reallocate the variable (the address of a variable can actually change from one part of a function to another) so as to minimise the amount of stack space required, by overlaying several variables that are used independently. This, I suspect, is impossible when variables are assigned null at definition and not unreferenced until the very end of the scope in which they exist.

The other is that allowing nulls makes it possible to assign a memory allocation, for example, into an otherwise non-nullable variable and then test whether it is null or not before attempting to dereference it. Personally, I would prefer this to be allowed only within the conditional expression of an if or looping statement, but I suppose it's easier to allow it everywhere, at the expense of extra null checks (which can hopefully be stripped by the compiler; is there any hinting Vala can give to help the compiler do that sort of thing?).

Out of interest, has there been any discussion about actually making Vala a GCC front-end? GCC already has the mechanism to do most of this... I suppose it's useful being able to convert Vala to C and then distribute that, so it can be compiled on platforms without a Vala compiler. But it does mean the code needs to be analysed twice: once by Vala, and again by GCC.
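[The assign-then-test-in-the-conditional pattern described above has a direct analogue in idiomatic C. A minimal hand-written sketch, with an invented function name, of how the null test can live entirely inside the conditional expression that performs the assignment:]

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocate and test in the conditional expression itself, so the
     * null check happens exactly once, at the point of assignment. */
    char *dup_or_default (const char *src)
    {
        char *copy;
        if ((copy = malloc (strlen (src) + 1)) == NULL)
            return NULL;            /* the only place null escapes */
        strcpy (copy, src);
        assert (copy != NULL);      /* treated as non-nullable from here on */
        return copy;
    }

[After the conditional, copy is known non-null for the rest of the function, which is exactly the "restored to non-nullable status as soon as possible" behaviour being argued for.]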
> This means that the difference between Foo and Foo? is nonexistent
> from the programmer's point of view.
I was thinking about this in a delegate context (but the same could be applied to classes), and wondering if there's any sanity in being able to mark delegates that can function on a null instance (such as with a ? immediately following the delegate name). It could get a little sticky with some kinds of non-simple method (signals, any others?), but support for those could be added later.

In the simple delegate case, the delegate definition would have a function body just like any regular function, providing the default definition for the null case (I guess, since it has a function body, the nullable marker is probably somewhat redundant). Something like:

    delegate int SomeDelegate? (int i) { return 0; }

    int add_three (int i) { return i + 3; }

    SomeDelegate deleg = add_three;
    deleg(3) == 6

would end up being compiled as in:

    int SomeDelegate_isnull (int i) { return 0; }

    (deleg ?: SomeDelegate_isnull)(3) == 6

One reason for retaining the nullable marker on a delegate, even with a code block, would be the case of non-delegate methods of a nullable class, which already have function bodies. The null-case methods could be collected in a statically defined side-class, with the regular ones marked so that the compiler will apply the ?: look-aside instead of an assert() check. (I can't think of any case where ! is valid immediately after an identifier, so I think it would actually be safe to use that to refer to the null-case version of a method from a non-null variable.)

    class MyClass {
        int offset = 3;
        int method? (int i) { return i + offset; }
        int method! (int i) { return i; }
    }

    var mine;
    mine.method(3) == 3     // mine is still null
    mine = new MyClass();
    mine.method(3) == 6     // mine is non-null
    mine.method!(3) == 3    // force the null-case method

Just a thought... Dunno if that makes any sense in reality. Anyone?
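[The ?: look-aside proposed above is easy to express in C with function pointers. A hand-written sketch of what the generated code might look like, with the GNU "a ?: b" extension spelled out portably; the names are taken from the example above but the wrapper is invented for illustration:]

    #include <assert.h>
    #include <stddef.h>

    typedef int (*SomeDelegate) (int i);

    /* Default body for the null case, as given in the delegate definition. */
    int SomeDelegate_isnull (int i) { return 0; }

    int add_three (int i) { return i + 3; }

    /* Call through the delegate, falling back to the null-case body:
     * the "(deleg ?: SomeDelegate_isnull)(i)" look-aside, written portably. */
    int call_delegate (SomeDelegate deleg, int i)
    {
        return (deleg != NULL ? deleg : SomeDelegate_isnull) (i);
    }

[With this, call_delegate(NULL, 3) yields 0 and call_delegate(add_three, 3) yields 6, matching the behaviour sketched in the proposal; the cost over a plain indirect call is one null test per invocation.]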
> I see a Foo and it might need a null check, or it might not. If Vala
> could statically enforce that type Foo references an object, it would
> save lots of work. As it is, I treat Foo essentially as I would Foo?.
> The tutorial also says "These checks are performed at run time..."
> Well, an error message is better than a core dump, but that only
> changes debugging, not the way the language is used. Essentially the ?
> just causes runtime null-checks to be skipped. Why modify the type
> system for that?
I do quite strongly agree, personally... If someone wants a C pointer, they can use a C pointer (Vala supports C pointers just fine), or they can use a nullable variable (which adds the reference counting and automatic copying that's otherwise missing from C pointers). Keep non-nullable variables for when you don't want nulls sneaking in where they're not wanted, and make them fail fast by assert()ing values assigned to them, to aid debugging.

The only exception should be within conditional expressions, so a variable can be tested for null along with the assignment. But even then it should be restored to non-nullable status as soon as possible with an assert().

I suppose this is nothing new to many, but perhaps there's something here that is... I haven't seen this discussed in the time I've been watching the group, so I'm interested in hearing the thoughts of those "in the know" on these points...

-- 
Fredderic

Debian/unstable (LC#384816) on i686 2.6.29-2-686 2009 (up 22:45)