Wednesday, July 26, 2006

Google Tech Talk: Cornelia Brunner - 'On Girls, Boys and IT Careers'

google video


Comments in code

from the wiki: 'To Need Comments' and 'Too Much Documentation'

Class comments - OK.
Method comments - NO
Comments in code - NO


If you need a comment to specify the behaviour of an object or method, specify that behaviour in unit tests instead. That way you get runnable, automatic, one-click behaviour checks. If your test cases have good names, you won't need to read the test code at all: a brief look at the method names should tell you how the object or method behaves.
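
For example (a sketch in JUnit 3 style, using java.util.Stack only as a convenient guinea pig), the test names alone document the behaviour:

import java.util.EmptyStackException;
import java.util.Stack;
import junit.framework.TestCase;

// The test names describe the behaviour; no comment is needed where the stack is used.
public class StackBehaviourTest extends TestCase {

    public void testPopOnEmptyStackThrowsEmptyStackException() {
        Stack<String> stack = new Stack<String>();
        try {
            stack.pop();
            fail("pop() on an empty stack should throw");
        } catch (EmptyStackException expected) {
            // behaviour is documented by the test name
        }
    }

    public void testPopReturnsTheLastPushedElement() {
        Stack<String> stack = new Stack<String>();
        stack.push("first");
        stack.push("second");
        assertEquals("second", stack.pop());
    }
}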

Tim Ottinger writes here:


A comment is an apology for not choosing a more clear name, or a more reasonable set of parameters, or for the failure to use explanatory variables and explanatory functions. Apologies for making the code unmaintainable, apologies for not using well-known algorithms, apologies for writing 'clever' code, apologies for not having a good version control system, apologies for not having finished the job of writing the code, or for leaving vulnerabilities or flaws in the code, apologies for hand-optimizing C code in ugly ways. And documentation comments are no better. In fact, I have my doubts about docstrings.

If something is hard to understand or inobvious, then someone *ought* to apologize for not fixing it. That's the worst kind of coding misstep. It's okay if I don't really get how something works so long as I know how to use it, and it really does work. But if it's too easy to misuse, you had better start writing. And if you write more comment than code, it serves you right. This stuff is supposed to be useful and maintainable, you know?

Is there any use of comments that are not apologies? I don't think so. I can't think of one. Is there any good reason to write a comment? Only if you've done something "wrong".




Podcast: Scott Ambler - Advanced Agile Techniques

mp3 here


Tuesday, July 25, 2006

Interview with Andy Hunt - Practices of an Agile Developer

on perlcast


no null

I've seen too many NullPointerExceptions.
Apparently I am not the only one; see this post from Michael Feathers:


Passing null in production code is a very bad idea. It’s an example of what I call offensive programming – programming in a way that encourages defensive programming. The better way to code is to deal with nulls as soon as they happen and translate them to null objects or exceptions. Unfortunately, this isn’t common knowledge.
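
A minimal Java sketch of this idea (the Customer and CustomerRepository names are made up for illustration): translate the null at the boundary, either into an exception or into a Null Object:

class Customer {
    // Null Object: a harmless, do-nothing Customer instead of a null reference
    static final Customer UNKNOWN = new Customer("unknown");

    private final String name;

    Customer(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

class CustomerRepository {

    // Fail fast: translate an incoming null into an exception right away
    void save(Customer customer) {
        if (customer == null) {
            throw new IllegalArgumentException("customer must not be null");
        }
        // ... persist the customer ...
    }

    // Null Object: never hand a null back to the caller
    Customer findByName(String name) {
        Customer found = lookup(name);   // may be null internally
        return found != null ? found : Customer.UNKNOWN;
    }

    private Customer lookup(String name) {
        return null;                     // real lookup elided
    }
}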



Keith Ray explains how it is done in Objective-C:


In Cocoa / Objective-C programming, I still prefer an empty string over a null pointer, or an empty container-object over a null object, but the language does have a key difference: calling object methods on null-pointers does not normally crash! (But sometimes method parameters being null can cause a crash.)

Foo* aFoo = nil;
Bar* aResult = [aFoo someMethod: anArgument ];

calling someMethod (as above) on a nil object DOES NOTHING (it does not crash or throw an exception) and returns nil.

The Cocoa UI frameworks take advantage of this. For example, the objects that implement a Button or a Menu item would have two member variables: a pointer to an object, and a "selector" that identifies what method to call on the object. The method that handles a click (or whatever) would be written something like this:

[ targetObject performSelector: actionSelector withArgument: self ];

instead of like this:

if ( targetObject != nil ) {
[ targetObject performSelector: actionSelector withArgument: self ];
}

No need to check for a null targetObject. There would be a need to check for a null-selector (in the performSelector:withArgument: method), since the selector isn't actually an object.

People have objected that null acting like the "null object pattern" hides bugs, but too many times have I gotten web-apps sending me messages that a "null pointer exception has occurred", so I assert that null-pointer exceptions are not helping people find bugs as well as test-driven development, or thorough code-reviews and testing, would do. I expect that if null had the "null object" behavior, the web-app would have returned a partial or empty result rather than a crashing message, and that would be fine for most users.

If the "null = null object pattern" behavior is an expected and documented part of the language, it can work quite well. Cocoa and NextStep programmers are, and have been, very productive using a language with this behavior.


Achileas Margaritis makes a very good point:

No, the problem with nulls is not their existence, but that they do not have a separate type.

This is the real problem: null is not an object, so you cannot send any message to it (you cannot call any method on it). So if I declare a function doSomethingWith(Object somethingElse), then it is clear that I expect an object (and null is not an object!) as the argument of the method. (The same goes for the result of a function: Object provideSomething().)
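
To make the distinction concrete (a hypothetical sketch, not a standard Java class), a separate "maybe" type would put the possible absence of an object into the signature instead of hiding it behind a null reference:

abstract class Maybe<T> {

    static <T> Maybe<T> just(T value) {
        return new Just<T>(value);
    }

    static <T> Maybe<T> nothing() {
        return new Nothing<T>();
    }

    abstract boolean isPresent();

    abstract T get();

    // a value is present
    private static final class Just<T> extends Maybe<T> {
        private final T value;

        Just(T value) { this.value = value; }

        boolean isPresent() { return true; }

        T get() { return value; }
    }

    // explicitly "no value", instead of a null reference
    private static final class Nothing<T> extends Maybe<T> {
        boolean isPresent() { return false; }

        T get() { throw new IllegalStateException("no value"); }
    }
}

With such a type, a signature like Maybe<Something> provideSomething() states that the result may be absent, while Object provideSomething() keeps its promise of returning a real object.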

A colleague said that null is like a "joker" when playing cards. Nice metaphor, but null is an "evil" joker.


Monday, July 24, 2006

Wednesday, July 12, 2006

Java "new" for non-static iinner classes

By (over)using non-static inner classes, I ended up with the following code:

InnerClass created = outer_class_instance.new InnerClass(...);
The "new" operator is completely misleading: is no method(message) of the OuterClass.
It just couples the creation to the outer_class_instance scope.
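
A small self-contained example of the qualified "new":

public class Outer {

    private final String name;

    public Outer(String name) {
        this.name = name;
    }

    // non-static inner class: every instance is tied to an Outer instance
    public class Inner {
        public String describe() {
            // the inner object sees the state of its enclosing Outer
            return "inner of " + name;
        }
    }

    public static void main(String[] args) {
        Outer outer = new Outer("first");
        Outer.Inner inner = outer.new Inner();  // the qualified "new"
        System.out.println(inner.describe());   // prints "inner of first"
    }
}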

strange...


Functional Programming

Peter Norvig's "Python for Lisp Programmers"
Another "comparison of Python and Lisp"
James Edward Gray's "Higher Order Ruby"

Lisp Scripting for .Net
F#, an ML/OCaml variant for .Net (nice sample here)

First impressions:
- Python is more functional than object-oriented (watch out: lambdas cannot contain statements)
- Ruby is more Smalltalk-ish (I found it easier to use than Python)
- Lisp is more readable than ML/OCaml