Wednesday, October 04, 2006

Some bile on the death of agile, and another post about the flip side of agile as religion vs. agility at Google: Good Agile, Bad Agile

Monday, September 11, 2006

Liskov Substitution in Dynamic Languages

A nice discussion on Michael Feathers' blog.

Jim Weirich observes:

The original LSP is defined in terms of types and subtypes, not classes and subclasses. In Ruby, we try to remind people that type is not related to class. It seems to me that LSP still applies to dynamic languages, even if there is no direct language construct for type.

Also a link to dynamic/static type contracts (interfaces).

Thursday, September 07, 2006

Gmail Tricks

found here:

Gmail has an interesting quirk where you can add a plus sign (+) after your Gmail address, and it'll still get to your inbox. It's called plus-addressing, and it essentially gives you an unlimited number of e-mail addresses to play with. Here's how it works: say your address is yourname@gmail.com, and you want to automatically label all work e-mails. Add a plus sign and a phrase to make it yourname+work@gmail.com and set up a filter to label it work (to access your filters go to Settings->Filters and create a filter for messages addressed to yourname+work@gmail.com. Then add the label work).

More real world examples:

Find out who is spamming you: Be sure to use plus-addressing for every form you fill out online and give each site a different plus address.

Example: You could use yourname+sitename@gmail.com for sitename.com.
Then you can tell which site has given your e-mail address to spammers, and automatically send them to the trash.

Automatically label your incoming mail: I've talked about that above.

Archive your mail: If you receive periodic updates about your bank account balance or are subscribed to a lot of mailing lists that you don't check often, then you can send that sort of mail to the archives and bypass your Inbox.

Example: For the mailing list, you could give yourname+mailinglist@gmail.com as your address, and assign a filter that will archive mail to that address automatically. Then you can just check in on the archive once in a while if you want to catch up.

Wednesday, September 06, 2006

Software Development as Turkey

from Darren Hobbs' entry:

You can't achieve the same effect when roasting a turkey by doubling the temperature and halving the cooking time. Similarly, a project cannot have the number of people doubled and the duration halved and get the same result. The rate of knowledge crunching is not increased by adding more people.

Closures in Java

Another post with links.

From Debasish's blog, a link to Gilad Bracha's blog entry (with some nice comments).
Pick from the post:

One question that naturally arises is "what took you so long?".
Since the late 90s, I've brought the topic up now and again. At times, even I have reluctantly been convinced that it is too late, because we've done so many things that would have been easy with closures in different ways. This means the benefits aren't as high as in a language like Scheme, or Self, or Smalltalk. The cost is non-trivial, to be sure.

Pick from the comments:

Sun never cared to implement closures in Java. The reason they care now is that Microsoft introduced them in C#.


(1st enums, support for dynamic languages, now closures...) :D

And a link to jaggregate examples.
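For context on what the debate is about: the closest Java (as of Java 5) gets to a closure is an anonymous inner class capturing a final local. A minimal sketch, with invented names:

```java
import java.util.ArrayList;
import java.util.List;

public class ClosureSketch {
    // A one-method interface stands in for a function type.
    interface IntTransformer {
        int apply(int x);
    }

    // "map" written against the interface; with real closures the call
    // site would shrink to something like xs.map(x -> x * n).
    static List<Integer> map(List<Integer> xs, IntTransformer f) {
        List<Integer> result = new ArrayList<Integer>();
        for (int x : xs) {
            result.add(f.apply(x));
        }
        return result;
    }

    // Maps every element to x * n and sums the results; the anonymous
    // inner class "closes over" the final local n.
    static int sumTimes(int[] xs, final int n) {
        List<Integer> input = new ArrayList<Integer>();
        for (int x : xs) {
            input.add(x);
        }
        int sum = 0;
        for (int mapped : map(input, new IntTransformer() {
            public int apply(int x) {
                return x * n;
            }
        })) {
            sum += mapped;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumTimes(new int[] {1, 2}, 3)); // prints 9
    }
}
```

The anonymous-class ceremony around `apply` is exactly the boilerplate the closure proposals aim to remove.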

Wednesday, July 26, 2006

Google Tech Talk: Cornelia Brunner - 'On Girls, Boys and IT Careers'

google video

Comments in code

from wiki: 'to need comments' and 'too much documentation'

Class comments - OK.
Method comments - NO
Comments in code - NO

If you need a comment to specify the behaviour of an object/method, specify that behaviour in some unit-tests. That way you get runnable, automatic, one-click behaviour testing. If your test cases have good names, then you won't need to actually read the code in the tests; a brief look at the method names should say how an object/method reacts.
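A minimal sketch of that idea with an invented Counter class: the test names carry the documentation, written here with plain checks rather than a test framework:

```java
public class CounterSpec {
    // The object under specification (invented for this sketch).
    static class Counter {
        private int value = 0;
        void increment() { value++; }
        int value() { return value; }
    }

    static void check(boolean behaviourHolds, String testName) {
        if (!behaviourHolds) {
            throw new AssertionError(testName);
        }
    }

    // Each test name states one behaviour; reading the names replaces
    // reading a comment on Counter.
    static void aNewCounterStartsAtZero() {
        check(new Counter().value() == 0, "aNewCounterStartsAtZero");
    }

    static void incrementingTwiceYieldsTwo() {
        Counter counter = new Counter();
        counter.increment();
        counter.increment();
        check(counter.value() == 2, "incrementingTwiceYieldsTwo");
    }

    public static void main(String[] args) {
        aNewCounterStartsAtZero();
        incrementingTwiceYieldsTwo();
        System.out.println("all behaviours hold");
    }
}
```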

Tim Ottinger writes here:

A comment is an apology for not choosing a more clear name, or a more reasonable set of parameters, or for the failure to use explanatory variables and explanatory functions. Apologies for making the code unmaintainable, apologies for not using well-known algorithms, apologies for writing 'clever' code, apologies for not having a good version control system, apologies for not having finished the job of writing the code, or for leaving vulnerabilities or flaws in the code, apologies for hand-optimizing C code in ugly ways. And documentation comments are no better. In fact, I have my doubts about docstrings.

If something is hard to understand or inobvious, then someone *ought* to apologize for not fixing it. That's the worst kind of coding misstep. It's okay if I don't really get how something works so long as I know how to use it, and it really does work. But if it's too easy to misuse, you had better start writing. And if you write more comment than code, it serves you right. This stuff is supposed to be useful and maintainable, you know?

Is there any use of comments that are not apologies? I don't think so. I can't think of one. Is there any good reason to write a comment? Only if you've done something "wrong".

Podcast: Scott Ambler - Advanced Agile Techniques

mp3 here

Tuesday, July 25, 2006

Interview with Andy Hunt - Practices of an Agile Developer

on perlcast

no null

I've seen too many NullPointerExceptions.
Apparently I am not the only one; see the post from Michael Feathers:

Passing null in production code is a very bad idea. It’s an example of what I call offensive programming – programming in a way that encourages defensive programming. The better way to code is to deal with nulls as soon as they happen and translate them to null objects or exceptions. Unfortunately, this isn’t common knowledge.

Keith Ray explains how it is done in Objective-C:

In Cocoa / Objective-C programming, I still prefer an empty string over a null pointer, or an empty container-object over a null object, but the language does have a key difference: calling object methods on null-pointers does not normally crash! (But sometimes method parameters being null can cause a crash.)

Foo* aFoo = nil;
Bar* aResult = [aFoo someMethod: anArgument ];

calling someMethod (as above) on a nil object DOES NOTHING (it does not crash or throw an exception) and returns nil.

The Cocoa UI frameworks take advantage of this. For example, the object that implements a Button or a Menu item would have two member variables: a pointer to an object, and a "selector" that identifies what method to call on the object. The method that handles a click (or whatever) would be written something like this:

[ targetObject performSelector: actionSelector withArgument: self ];

instead of like this:

if ( targetObject != nil ) {
    [ targetObject performSelector: actionSelector withArgument: self ];
}

No need to check for a null targetObject. There would be a need to check for a null-selector (in the performSelector:withArgument: method), since the selector isn't actually an object.

People have objected that null acting like the "null object pattern" hides bugs, but too many times have I gotten web-apps sending me messages that a "null pointer exception has occurred", so I assert that null-pointer exceptions are not helping people find bugs as well as test-driven development, or thorough code-reviews and testing, would do. I expect that if null had the "null object" behavior, the web-app would have returned a partial or empty result rather than a crashing message, and that would be fine for most users.

If the "null = null object pattern" behavior is an expected and documented part of the language, it can work quite well. Cocoa and NextStep programmers are, and have been, very productive using a language with this behavior.

Achilleas Margaritis makes a very good point:

No, the problem with nulls is not their existence, but that there are not separate types.

This is the real problem: null is no object; you cannot send any message to it (you cannot call any method on it). So if I declare a function doSomethingWith(Object somethingElse), then it is clear that I expect an object (and null is no object!!!) as the argument for the method. (The same goes for the result of a function: Object provideSomething().)

A colleague said that null is like a "joker" when playing cards. Nice metaphor, but null is an "evil" joker.
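A sketch of the "translate nulls as soon as they happen" advice at a method boundary (names invented):

```java
public class NullBoundary {
    // Translate a null the moment it appears, instead of letting it
    // wander up the stack: reject it with an exception at the boundary.
    static String normalize(String maybeNull) {
        if (maybeNull == null) {
            throw new IllegalArgumentException("expected a String, got null");
        }
        return maybeNull.trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("  hello  ")); // prints "hello"
        try {
            normalize(null);
        } catch (IllegalArgumentException expected) {
            System.out.println("null rejected at the boundary");
        }
    }
}
```

The NullPointerException that would otherwise surface god-knows-where becomes a named, local failure.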

Wednesday, July 12, 2006

Java "new" for non-static inner classes

By (over)using non-static inner classes, I came to the following code:

InnerClass created = outerInstance.new InnerClass(...);
The "new" operator here is completely misleading: it looks like a message sent to outerInstance, but it is no method (message) of the OuterClass.
It just couples the creation to the outer-class-instance scope.
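For reference, a compilable sketch of the qualified-new syntax (names invented):

```java
public class OuterClass {
    private final String name;

    public OuterClass(String name) {
        this.name = name;
    }

    // Non-static inner class: every instance is tied to an enclosing
    // OuterClass instance and can read its fields.
    public class InnerClass {
        String describe() {
            return "inner of " + name;
        }
    }

    public static void main(String[] args) {
        OuterClass outer = new OuterClass("outer-1");
        // Qualified creation: "outer.new" reads like a message send to
        // outer, but it is pure syntax, not a method of OuterClass.
        InnerClass created = outer.new InnerClass();
        System.out.println(created.describe()); // prints "inner of outer-1"
    }
}
```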


Functional Programming

Peter Norvig's "Python for Lisp Programmers"
Another "comparison of Python and Lisp"
James Edward Gray's "Higher Order Ruby"

Lisp Scripting for .Net
F# ML/OCaml variant for .Net (nice sample here)

First feelings:
- Python is more functional than object oriented (watch out: lambdas cannot contain statements)
- Ruby is more Smalltalk-ish (I found it easier to use than Python)
- Lisp is more readable than ML/OCaml

Friday, June 23, 2006

Java 4s

Try this in java:

(4 == 4)
(new Integer(4) == new Integer(4))

Well, the first one is true, the second one is false. Why I need two instances of the number (integer) 4 in Java, I don't know. I couldn't find any example (excuse). After all, 4 is immutable: I cannot change its state, I cannot make a 5 out of it.

I could say that in Smalltalk or Ruby everything is an object, yada, yada... But Sun owns Java, they own the Integer class, so when I call the constructor I should always get the same instance.

And we have the same symptoms (changed behaviour) with booleans:

(false == false)
(new Boolean(false) == new Boolean(false))

WTF!? I can have a million false objects in Java. And for what? To be garbage collected, from autoboxing/unboxing? How hard is it to create a pool inside the Boolean class and to always return the FALSE object for new Boolean(false)?
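Worth noting: the pool asked for here does exist, just not in the constructors. The valueOf factory methods return shared instances (Integer.valueOf caches small values, and autoboxing goes through it; Boolean.valueOf always returns the shared TRUE/FALSE). A small demo:

```java
public class IdentityDemo {
    public static void main(String[] args) {
        // new always allocates, so identity fails:
        System.out.println(new Integer(4) == new Integer(4));         // false
        System.out.println(new Boolean(false) == new Boolean(false)); // false

        // The pooled factory methods behave the way the post asks for:
        System.out.println(Integer.valueOf(4) == Integer.valueOf(4));         // true (cached range)
        System.out.println(Boolean.valueOf(false) == Boolean.valueOf(false)); // true, always
    }
}
```

Identity (==) is still only safe inside the cached range for Integer; equals() remains the reliable comparison.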

Finally, there is another guy who feels my pain: "Java's new Considered Harmful. The problem stems from memory allocation and polymorphism". Well, actually he feels more pain, but that's his problem :D

Tuesday, June 06, 2006

NullObject and Visitor

I have a problem with nulls: they are not objects. I cannot handle a null like any other object: I have to stop, check if it is null or not, and only after that, go on.

The problem is only partially solved by throwing a (checked) exception: now I have to handle an exception instead of a null, which might otherwise have been propagated, unseen, unheard, through the stack, god-knows-where.

To the rescue comes NullObject: a special object that implements the interface and does nothing.
But how do we close the implementation and still leave room for improvements? The Visitor: "Visitor lets you define a new operation without changing the classes of the elements on which it operates." (Actually the NullObject should be provided, in my case, by an O/R mapping layer, but I want to extend this object for my needs.) In this case, every "normal" object should do the double dispatch:

accept(Visitor v) { v.visit(this); }

only the NullObject should do nothing:

NullObject.accept(Visitor v) { }

And in C# where we have delegates, the Visitor could be a function/delegate.
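Putting the pieces together, a compilable Java sketch of the combination (domain names invented):

```java
public class NullObjectVisitor {
    // The open part: new operations are added as visitors.
    interface Visitor {
        void visit(RealCustomer customer);
    }

    // The closed part: the element hierarchy.
    interface Customer {
        void accept(Visitor visitor);
    }

    // A "normal" object double-dispatches to the visitor...
    static class RealCustomer implements Customer {
        final String name;
        RealCustomer(String name) { this.name = name; }
        public void accept(Visitor visitor) { visitor.visit(this); }
    }

    // ...while the NullObject ignores every visitor, so callers never
    // write an if-null check.
    static class NullCustomer implements Customer {
        public void accept(Visitor visitor) { /* do nothing */ }
    }

    public static void main(String[] args) {
        final StringBuilder log = new StringBuilder();
        Visitor greeter = new Visitor() {
            public void visit(RealCustomer customer) {
                log.append("visited ").append(customer.name);
            }
        };
        new RealCustomer("alice").accept(greeter);
        new NullCustomer().accept(greeter); // no check, no effect
        System.out.println(log); // prints "visited alice"
    }
}
```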

-- apparently I am not the first one who thought of NullObject and Visitor.

Monday, May 29, 2006

No Pain, No Gain

jmockit is cool, but if "hacking" around a clean solution is so easy, when are we going to learn to avoid final classes and API calls ("My Application Is All API Calls")? When are we going to refactor our code to a clean OO solution: objects represent roles/responsibilities (a one-to-one relationship)?

If you are a climber, you take care of your equipment; you don't duct-tape your rope!!!

Wednesday, May 24, 2006

Play And Learn

from RubyQuiz you could learn something about the use of continuations and Amb operator for Constraint Processing. This is so much fun !!!

Arc Suggestions

Paul Graham posted the collected suggestions for (his) "new Lisp" here. Some nice points: concurrency, continuations, OO, macros... But I liked:

*** Dan Milstein: Concurrency

One problem which, IMHO, no popular language has come even
close to solving well is allowing a programmer to write multithreaded
code. This is particularly important in server-side programming, one of
Arc's major targets. I've written a good deal of multithreaded Java, and
the threading model is deeply, deeply wrong. As a programmer, there's
almost no way to write the kind of abstractions which let you forget
about the details. You're always sitting there, trying to work through
complicated scenarios in your head, visualizing the run-time structure of
your program.

I didn't see another way until I read John H. Reppy's "Concurrent
Programming in ML". Instead of building his concurrency constructs around
monitored access to shared memory, he builds them around a message passing
model (both synchronous and asynchronous). What's more, he provides
powerful means of capturing a concurrent pattern in an abstraction which
hides the details.

I highly recommend giving that book a read. Here's an example of some of
what you get (not the abstraction, actually, just the basic power of
message-passing over shared memory). The abstraction facilities are
complex enough that, like Lisp macros, a small example doesn't really
capture their power. I'm in no way familiar with concurrent extensions to
Lisp, so I'm not able to provide the code for how much harder it would be
in CL or Scheme. Assuming they were augmented with a shared memory model
(as Java is), which forces the programmer to deal with synchronized access
to memory, I can only imagine it would be significantly more complex.

A producer/consumer buffer. You want a buffer with a finite number of
cells. If a producer tries to add an element to a full buffer, it should
block until a consumer removes an element. If a consumer tries to remove
an element from an empty buffer, it should block until a producer adds

In Concurrent ML:

datatype 'a buffer = BUF of {
    insCh : 'a chan,
    remCh : 'a chan
  }

fun buffer () = let
      val insCh = channel() and remCh = channel()

      fun loop [] = loop [recv insCh]
        | loop buf = if (length buf > maxlen)
            then (send (remCh, hd buf); loop (tl buf))
            else (select remCh!(hd buf) => loop (tl buf)
                  or insCh?x => loop (buf @ [x]))
    in
      spawn loop;
      BUF{ insCh = insCh, remCh = remCh }
    end

fun insert (BUF{insCh, ...}, v) = send (insCh, v)

fun remove (BUF{remCh, ...}) = recv remCh


Translated into a Lisp-ish syntax (very easy to do from ML), this would
look something like:

(defstruct buffer
  ins-ch
  rem-ch)

(defun create-buffer ()
  (let ((ins-ch (make-channel))
        (rem-ch (make-channel)))
    (labels ((loop (buf)
               (cond ((null buf)
                      (loop (list (recv ins-ch))))
                     ((> (length buf) maxlen)
                      (send rem-ch (car buf))
                      (loop (cdr buf)))
                     (t (select
                         (rem-ch ! (car buf) => (loop (cdr buf)))
                         (ins-ch ? x => (loop (append buf (list x)))))))))
      (spawn #'loop)
      (make-buffer :ins-ch ins-ch :rem-ch rem-ch))))

(defun insert (b v)
(send (buffer-ins-ch b) v))

(defun remove (b)
(recv (buffer-rem-ch b)))

Key things to notice:

1) Language features:

Communication between threads *only* occurs over channel objects, which can
be thought of as one-element queues. In CML, channels are typed, but in
Lisp they probably wouldn't be.

send/recv: synchronous (blocking) communication over the channel. A thread
can attempt to send an object over the channel, and will then block until
another thread does a recv on the channel.

Creating a new thread is done via 'spawn', which takes a function as its
argument. (I can't remember what the signature of that function is
supposed to be -- clearly, in this case it can't be a function of no
arguments, but imagine it to be something like that).

Selective communication: the call to 'select' is one of the very powerful
features. It is like a concurrent conditional -- it simultaneously blocks
on a list of send/recv calls, and executes the associated code with
whichever call returns first (and then drops the rest of the calls). The !
syntax means an attempt to send, the ? means an attempt to receive. In
both cases, the '=>' connects the associated code to execute. (I haven't
really come up with a Lispy translation of that syntax).

I don't think select could be efficiently implemented without language
support. It requires a sort of 'partial blocking', which is tricky to
implement on top of normal blocking.

2) The Idiom

The buffer is implemented as a separate thread which has connections to two
channels and an internal list to keep track of the elements of the buffer.
This thread runs through a loop forever, taking the current state of the
buffer as its argument, and waiting on the channels in the body of its
code. It is tail-recursive.

Note the absolute lack of any code to deal with synchronization or locking.
You might notice the inefficient list mechanism (that append is going to
get costly in terms of new cons cells), and think that this is only safe
code because of the inefficient functional programming style. In fact,
that's not true! The 'loop' function could destructively modify a list
(or array) to which only it had access. There would still be no potential
for sync'ing problems, since only that one thread has access to the
internal state of the buffer, and it automatically syncs on the sends and
receives. It's only handling one request at a time, automatically, so it
can do whatever it wants during that time. It could even be safely
rewritten as a do loop.

What I find so enormously powerful and cool about this is that the
programmer doesn't need to worry about the run-time behavior of the system
at all. At all. The lexical structure of the system captures the run-time
behavior -- if there is mutating code inside of the 'loop' function, you
don't have to look at every other function in the file to see if it is
maybe modifying that same structure. This is akin to the power of lexical
scoping over global scoping. I have never seen concurrent code which lets
me ignore so much.

This really just scratches the surface of Concurrent ML (and doesn't touch
on the higher-level means of abstraction). But I hope it gives a sense of
how worthwhile a language it is to learn from.

3) Issues

I think that channels themselves would be fairly easy to implement on top
of the usual operating system threading constructs (without needing a
thread for each one). However, the style which this message-passing model
promotes can easily lead to a *lot* of threads -- if you have a lot of
buffers, and each of them has its own thread, things can get out of hand
quickly. I believe that Mr. Reppy has explored these very issues in his
implementation of CML.

Insofar as I have time (which, realistically, I don't) I would love nothing
more than to play around with implementing CML'ish concurrency constructs
in a new version of Lisp like Arc.
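For comparison with the buffer in the quote: Java's java.util.concurrent (since Java 5) ships a bounded buffer with the same put/take blocking contract, though the synchronisation is hidden inside a shared-memory queue rather than expressed with channels and select. A rough sketch (class and method names are mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BufferDemo {
    // Produces 0..n-1 into a bounded buffer from one thread while the
    // caller consumes them; returns the sum of everything consumed.
    static int pipe(final int n) throws InterruptedException {
        // Capacity 2: put() blocks when full, take() blocks when empty,
        // just like the CML buffer's insert/remove.
        final BlockingQueue<Integer> buffer = new ArrayBlockingQueue<Integer>(2);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < n; i++) {
                        buffer.put(i); // blocks while the buffer is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += buffer.take(); // blocks while the buffer is empty
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pipe(5)); // prints 10
    }
}
```

What Java still lacks here is CML's select: there is no way to block on several channels at once and take whichever is ready first.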

Google engEDU

A list of "all" (!?) educational videos from Google.

Tuesday, May 23, 2006

One More Thing

I really enjoyed the "Behaviour Driven Development" presentation which Dave Astels did at Google. And here is the "one more thing":

"I always thought that Smalltalk would beat Java, I just didn't know it would be called 'Ruby' when it did." - Kent Beck

Monday, May 22, 2006

Speed Kills

Found an Uncle Bob blog entry:

Speed is the prime destroyer of software systems. In our rush to get something to execute we make mess upon mess. We push and shove the code around in a frenzied effort to make something work. And then, once we achieve the desired behavior, we consider ourselves to be done, and move on to the next urgent task.

Of course we all realize that this is self-destructive behavior. We know that the more we rush the deeper the messes become, and the slower and slower we will go. We know that the only way to keep development going fast is to work carefully, deliberately, and slowly. We know that if we do this, we will keep our systems clean and well structured. We know that clean and well structured systems are easy to change. We know this. Yet we find it difficult to act on this knowledge.

The traditional productivity curve for software projects is a sigmoid. It starts very high, and remains high for the first few months. This is the honeymoon period when the team is cranking. They get lots of good work done, and they get it done quickly. But then the messes begin to build. Those messes slow us down. The productivity curve enters a steep and sudden decline. A few months later productivity has bottomed out and asymptotically approaches zero. This is the phase of the project where it takes forever to do even the simplest thing. This is the phase of the project in which the smallest possible estimate is 3 weeks or more.

As productivity slows to a near halt, the business responds in the only way it can - it adds more people to the project in the forlorn hope of increasing productivity. But these new people, eager to please their employers and peers, continue to rush, thereby adding even more corruption to the existing steaming pile. Productivity continues to decline as the sigmoid approaches zero at the limit.

The solution to this nearly ubiquitous problem is to act upon what we already know. That speed kills projects. Slow down. Do a good job. Keep the code clean. Write unit tests. Write acceptance tests. And watch how fast you go!

Saturday, May 20, 2006

Saturday Reading

actually, watching videos from TechTalk@Google

Peter Seibel, "Practical Common Lisp":

In the late 1920's linguists Edward Sapir and Benjamin Whorf hypothesized that the thoughts we can think are largely determined by the language we speak. In his essay "Beating the Averages" Paul Graham echoed this notion and invented a hypothetical language, Blub, to explain why it is so hard for programmers to appreciate programming language features that aren't present in their own favorite language. Does the Sapir-Whorf hypothesis hold for computer languages? Can you be a great software architect if you only speak Blub? Doesn't Turing equivalence imply that language choice is just another implementation detail? Yes, no, and no says Peter Seibel, language lawyer (admitted, at various times, to the Perl, Java, and Common Lisp bars) and author of the award-winning book _Practical Common Lisp_. In his talk, Peter will discuss how our choices of programming language influences and shapes our pattern languages and the architectures we can, or are likely to, invent. He will also discuss whether it's sufficient to merely broaden your horizons by learning different programming languages or whether you must actually use them.

Dave Astels, "Beyond Test Driven Development: Behaviour Driven Development":

Test Driven Development (TDD) has become quite well known. Many developers are getting benefit from the practice. But it is possible that we can get even more value. A new practice is getting attention these days: Behaviour Driven Development (BDD).

BDD removes all vestiges of testing and instead focuses on specifying the behaviour desired in the system being built. This talk will focus on Ruby and will introduce a new BDD framework: rSpec. The ideas, however, are language independent.

Wednesday, May 17, 2006

Architects Must Write Code

I found a blog entry here.

The comments are pretty interesting:

Bell Labs has a pattern repository: WebIndex of OrgPatterns (see ArchitectAlsoImplements, DevelopingInPairs). And my favorite:

Architecture, like a war plan, does not last longer than the first minute of the battle: the architect must be in the front line, coding, spiking, re-architecting...

Tuesday, May 16, 2006

Working Effectively with Legacy Code III

Jeremy D. Miller has some notes about the book.

Tip #1: Yes, go buy the book, and put it next to "Refactoring" (you know: beware of code smells)

Tip #2:

On most of the XP projects I've been on we've used an "Idea Wall," just a visible place to write down or post technical improvements.  Anytime we have some slack time we start pulling down tasks from the idea wall.  Occasionally we're able to outpace either testing or requirements analysis and we aren't really able to push any new code.  Whenever that happens we immediately pull things off of the idea wall.  One way to judge if your technical debt is piling up is to watch how crowded the idea wall is getting.  On the other hand, if something stays on the idea wall for a long time, it might not be that important after all.

Design never stops, not even for an older codebase

Thursday, May 11, 2006


Virtual Street Reality

Martin Fowler evaluates Ruby


It's still early days yet, but I now have a handful of project experiences to draw on. So far the results are firmly in favor of Ruby. When I ask the question "do you think you're significantly more productive in Ruby rather than Java/C#", each time I've got a strong 'yes'. This is enough for me to start saying that for a suitable project, you should give Ruby a spin. Which, of course, only leaves open the small question of what counts as 'suitable'.

Is Smalltalk (sorry, I meant Ruby) making a comeback?

Wednesday, May 10, 2006

Lambda the Ultimate - thread

What do you believe about Programming Languages (that you can't prove (yet))?

Interesting thread. You can find some nice gems in the flame-war dirt.

Should Mock Objects be considered harmful?

Robert Collins asks the question:

Should Mock Objects be considered harmful? As an optimisation for test suites they are convenient, but they mean you are not testing against something which can be verified to behave as the concrete interface is meant to, which can lead to Interface Skew.

Let's say we have an object A which uses an object B. In OO, A and B represent roles: object A does something in collaboration with B. A behaves "right" only if B behaves "right". So we have a behaviour contract between A and B. This is normally represented by some unit-tests for the B role which specify its behaviour. Based on that, we can test A: we "mock" the B role and see how A reacts. If we want to keep our implementation clean, A knows nothing about B's implementation; it knows only about its behaviour. (B is an interface, or A and B are implemented in a dynamic language.)

Once we have tested B's behaviour and A's behaviour, they "should" work together without errors. In practice this doesn't happen: usually B is not thoroughly tested, and the behaviour contract is broken. For this case we should have integration-tests: A and B play nicely together.

Testing.Kata: unit-test role B, unit-test role A, integration test A and B.
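A minimal hand-rolled version of the first two steps of that kata, with invented roles and no mocking framework:

```java
public class MockKata {
    // Role B: the collaborator's behaviour contract.
    interface Auditor {
        void record(String event);
    }

    // Role A: behaves "right" only if it tells its Auditor what happened.
    static class Account {
        private final Auditor auditor;
        private int balance = 0;

        Account(Auditor auditor) {
            this.auditor = auditor;
        }

        void deposit(int amount) {
            balance += amount;
            auditor.record("deposit:" + amount);
        }

        int balance() {
            return balance;
        }
    }

    // A hand-rolled mock standing in for role B: it records how A
    // collaborated with it, so the unit-test can inspect the behaviour.
    static class MockAuditor implements Auditor {
        String lastEvent = null;
        public void record(String event) {
            lastEvent = event;
        }
    }

    public static void main(String[] args) {
        MockAuditor mock = new MockAuditor();
        Account account = new Account(mock);
        account.deposit(40);
        System.out.println(account.balance()); // prints 40
        System.out.println(mock.lastEvent);    // prints deposit:40
    }
}
```

The integration-test step would then wire Account to a real Auditor implementation and check that the two play nicely together.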

Monday, May 08, 2006

Getters And Setters are Evil

Setters are evil because they allow you to have inconsistent objects, which cannot work until their collaborators have been set (see PicoContainer/Good Citizen).

Getters are evil because they allow you to extract the data from the object, instead of putting the action where the data is (see violating encapsulation, or Martin Fowler's "GetterEradicator").

So everything goes round and back to Allen Holub: "Why getter and setter methods are evil".
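A small sketch of the alternative (names invented): collaborators arrive through the constructor, and behaviour sits with the data, so the object needs neither setters nor getters:

```java
public class TellDontAsk {
    // All state arrives in the constructor: no half-initialised object,
    // no setter needed.
    static class Money {
        private final long cents;

        Money(long cents) {
            this.cents = cents;
        }

        // Behaviour lives with the data; callers never ask for getCents()
        // and do the arithmetic themselves.
        Money plus(Money other) {
            return new Money(cents + other.cents);
        }

        boolean covers(Money price) {
            return cents >= price.cents;
        }

        public String toString() {
            return cents + "c";
        }
    }

    public static void main(String[] args) {
        Money balance = new Money(150).plus(new Money(50));
        System.out.println(balance);                        // prints 200c
        System.out.println(balance.covers(new Money(100))); // prints true
    }
}
```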

Thursday, May 04, 2006

Violating Encapsulation

Dave Astels blogs about it.

Something I see all the time, on every team I've been involved with, is code like the following (classes are generalized from examples):

MyThing[] things = thingManager.getThingList();
for (int i = 0; i < things.length; i++) {
    MyThing thing = things[i];
    if (thing.getName().equals(thingName)) {
        return thingManager.delete(thing);
    }
}

This code is tightly coupled to the implementation of MyThing in that it gets the name property, knows that it's a string, etc. This is a classic approach from the old days of procedural programming. It is NOT OO.

How about:

MyThing[] things = thingManager.getThingList();
for (int i = 0; i < things.length; i++) {
    MyThing thing = things[i];
    if (thing.isNamed(thingName)) {
        return thingManager.delete(thing);
    }
}

I've seen this procedural approach a thousand times. The problem is that a lot of developers think "procedural": they never met Smalltalk/Lisp with its closures. That's why learning a new programming language is good: you will think differently, even if you never get a chance to use the "newly acquired" programming language.
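For completeness, the isNamed variant presumes MyThing owns the comparison. A hypothetical sketch of that side:

```java
public class Encapsulation {
    static class MyThing {
        private final String name;

        MyThing(String name) {
            this.name = name;
        }

        // The comparison lives inside the object; callers never learn
        // that the name is a String (or that there is a name field at all).
        boolean isNamed(String candidate) {
            return name.equals(candidate);
        }
    }

    public static void main(String[] args) {
        MyThing thing = new MyThing("widget");
        System.out.println(thing.isNamed("widget")); // prints true
        System.out.println(thing.isNamed("gadget")); // prints false
    }
}
```

If the naming rule later changes (case-insensitive, localized, whatever), only MyThing changes; the loop over things does not.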

Startup School

Paul Graham on "the hardest lessons for startups to learn".

Sunday, April 30, 2006

To interface or not to interface

I've posted some comments on Troy Brumley's blog here. He liked the metaphor that an interface is a "firewall": we use it in order to isolate "changes": I don't want to be affected by your changes, and you don't want to be affected by mine.

Actually an interface is a "wall", a very thin layer, which hides implementation details. And we should use it at framework, package and object level. In practice it doesn't happen: we "trust" our objects within a package, or within a framework.

An interface defines a "contract": somebody understands this set of messages. Why don't we have interfaces in dynamic languages? Because we don't need this wall; we are always talking to a duck. The implementation is always hidden. The contract resides in the unit-tests.


I tried to download wbloggar, but the site is down. Let's see how this is working...

Powered by Qumana

Friday, April 21, 2006

Before and After Ruby

found on the "journal of jess".

Also Torsten is wondering here:
What I don't understand is that people complain about Java and start with Ruby right after it.
What if they would take the next step and directly start using Smalltalk? I'm sure they would enjoy the wonderful world full of objects.
Well, I think that the Smalltalk environment is too scary for a Java developer: no code files, no cvs/subversion, everything is in an image file, etc.

When a Java developer plunges into Smalltalk, he has a "what now?" moment: everything is there, but he does not know where to start.

Ruby is the "way" to Smalltalk: easier to learn, but OO-imperfect (Ruby has "syntax", and not everything is an object).

Tuesday, April 18, 2006

The Quality of Many Eyes


Linux kernel development model implies that the developers can't directly add their changes to the main code branch, but publish their changes. Other developers can review and provide comments, and, more importantly, there is a dedicated person who reviews all the changes, asks for corrections or clarifications, and finally incorporates the changes into the main code branch. This model is extremely rare in producing commercial software, and in the open source software world only some projects use it. Linux kernel has been using this model from the beginning quite effectively.
Raise your hand: Who's doing code-reviews? Who's doing pair-programming?

Wednesday, April 12, 2006

Ruby and Lisp

found here:

So, Ruby was a Lisp originally, in theory.
Let's call it MatzLisp from now on. ;-)
– Matz

Larry Wall never understood Lisp
Guido Van Rossum once read a book on Lisp
Yukihiro Matsumoto once read a book on Lisp and understood some of it
– Keith Playford

Tuesday, April 11, 2006

Code Generation

Bill Venners here
Dave Thomas here

I don't like code generation. It feels to me like a step backwards, not forwards, to a solution.

Instead of saying "let's write a class/object with this functionality", we say "let's write another class and a template, to generate a file, which, compiled, gives us the needed object and functionality". Hmmm....

If the algorithm is so generic that we can express it in a template/file generator, why can't we write a class which through reflection generates the needed functionality?

How do you unit-test generated files? Do you test the output code, or do you test the new object? If you test the created object (the new functionality), then you have one additional step in unit-testing: compilation.

My point is that:
- Ruby is dynamic enough to let me do what I want without leaving the language
- Smalltalk is more "extreme" than Ruby: there you don't have source files!!!

Java/C# should be "dynamic" too. Through reflection and dynamic code generation they are already almost as flexible/generic as Ruby. But developers are always scared of the performance loss, so they prefer a "premature" optimization over a "clean" solution.
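The reflective alternative asked about above can be sketched in ruby. This is a hypothetical example (Record, Point and generate_accessors are invented names): instead of a template that generates accessor source files, the methods are generated at runtime with define_method.

```ruby
# Hypothetical sketch: generate boilerplate accessors at runtime
# instead of through a file-generating template.
class Record
  def self.generate_accessors(*names)
    names.each do |name|
      define_method(name) { instance_variable_get("@#{name}") }            # reader
      define_method("#{name}=") { |v| instance_variable_set("@#{name}", v) } # writer
    end
  end
end

class Point < Record
  generate_accessors :x, :y
end

pt = Point.new
pt.x = 1
pt.x  # => 1
```

There is no generated file and no extra compile step: the "generated" methods live on an ordinary object and can be unit-tested directly.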

Monday, April 10, 2006

Monday, April 03, 2006

Sunday, April 02, 2006

Saturday Reading

Bambi Meets Godzilla
The Rise of "Worse is Better"
Jao has some good blog entries.

It's not about the content.
It's about the "sparkle which starts the engine".

Wednesday, March 29, 2006

C# 3.0 Implicitly variable types

I have previously posted about C# 3.0 new features.

The good side of
var p = new Point(1,2) is that we don't have to write as much redundant code (like Point p = new Point(1,2)).

The bad side:
The compiler won't implicitly convert to: IPoint p = new Point(2,3).
Using p as a concrete class will lead to a lot of "concrete-class dependency" and, eventually, to bad design.

It is a pity that syntax is used as a crutch to enrich the language. Look at Lisp, Smalltalk and Ruby to see how easy it is to enrich the language and to create DSLs. (ruby sample: rake, and this article)
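To illustrate the point about DSLs without new syntax, here is a tiny rake-flavoured sketch (task, run and the task name are invented for this example): plain methods taking blocks are enough.

```ruby
# Minimal rake-like DSL sketch: register named tasks as blocks,
# run them later by name. No language extension needed.
TASKS = {}

def task(name, &block)
  TASKS[name] = block   # register the task body
end

def run(name)
  TASKS[name].call      # execute it on demand
end

task :greet do
  "hello from a DSL"
end

run(:greet)  # => "hello from a DSL"
```

The "language" here is just method calls and blocks, which is exactly why ruby needs no compiler changes to grow a DSL like rake.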

Another entry on Code Quality

I see "broken windows" every day.

Bill Venners blogs about it here.

Andy Hunt observes the psychological impact:

If you walk into a project that's in shambles—with bugs all over, a build that doesn't quite work—you're not going to have incentive to do your best work. But, if you go onto a project where everything is pristine, do you want to be the first one to make a bug?

Singleton as anti-pattern

In 99% of the cases the singleton pattern is used as a global variable.

But as I said before, the singleton's main problem is that it violates the Single Responsibility Principle: we have a factory + an object.

But in some object-oriented languages (ruby, Smalltalk), classes are object-factories.
In these languages each has its own responsibility. Clean: the class is the factory, the object carries its own responsibility.

In a way, "singleton" is an anti-pattern born of language limitations.
In C++/java/C# you need to hide the constructor and offer a static "getInstance()";
in ruby/Smalltalk you only need to override the new message/method.

In C++/java/C# you wonder "why can I instantiate objects from other classes, but not from this one?", while in ruby/Smalltalk the API usage is uniform and clean.
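The uniform ruby version can be sketched in a few lines (Configuration is an invented name for illustration): because the class object itself is the factory, "hiding the constructor" is just overriding the class-level new.

```ruby
# Minimal singleton sketch: override new and memoize the instance.
class Configuration
  def self.new
    @instance ||= super   # first call builds the object, later calls reuse it
  end
end

a = Configuration.new
b = Configuration.new
a.equal?(b)  # => true: the same object every time
```

Callers still say Configuration.new, exactly like for any other class, which is the uniformity the post argues for.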

Working Effectively with Legacy Code II

James Robertson blogs about a Michael Feathers talk here.
A 2nd entry here, about another topic.

Michael Feathers puts the finger on the wound:

"Utility classes" - ones with all static (Class) methods are a problem as well. Bottom line, the methods aren't where they belong. Preserve signatures, but create the appropriate methods in the appropriate classes.

Tuesday, March 14, 2006

More fun with ruby

Jim Weirich's presentation on Dependency Injection (in ruby).
If you are a ruby newbie, read Jim's guide for Java programmers.

And the cherry on the cake: Jim's presentation on continuations.
For more continuation goodies, see Sam Ruby's explanation, RubyGarden #1, RubyGarden #2 and IDEA Programmer's one. (or Bruce Tate article)

You might also check Seaside(Smalltalk), Borges(ruby) and DarkSide(Io) implementations. (no link, just google for them).

Reaction to James Gossling's rant about scripting languages

Very nicely put together here.

Sunday, March 12, 2006


Maybe "object-oriented" does not mean an object-based system like Smalltalk.
Maybe it means "in that direction".
When I say "my window is north-oriented", it does not mean that my window is at the North Pole, but that, somehow, strangely, it points toward that spot.

Maybe that's why we accept so many non-object imperfections: we are just standing at the window, seeking the real thing.

A Case against Static Calls

I don't like static calls. Why?
You cannot mock them.
You cannot apply "Inversion of Control" by specifying the class (role A) as a collaborator of the user (role B) in the user's constructor.

A static function belongs to a class, and a class is not an object (in most enterprise object-oriented languages). The function represents a responsibility which you cannot isolate or replace (mock). This makes testing and maintenance harder than they should be.

In an OO world, each responsibility should be materialized by an object (see the Single Responsibility Principle). But a class is not an object, so you cannot treat it as one.

The problem is that in some languages there is no uniform access: there is a discrimination between a class and an object, so they must be accessed in different ways.

Another issue is that a static method does not use the instance fields/methods of the class. A static method does not belong to the object but to the class. If the class defines the behaviour of its objects, but a static method does not belong to the objects, then it should be defined somewhere else.

That's why the singleton is an anti-pattern: the class defines the object's behaviour (responsibility 1) and its creation/life-cycle (responsibility 2).
We can find many other examples in Michael C. Feathers' book "Working Effectively With Legacy Code", where he suggests breaking the dependencies gradually: first make a method static, then move it to another class.
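The mocking argument above can be sketched in ruby (Mailer and Signup are invented names for illustration): when the collaborator arrives through the constructor, a test can hand in a fake; a hard-wired static call offers no such replacement point.

```ruby
# Constructor injection sketch: the Mailer role is a replaceable collaborator.
class Mailer
  def deliver(to)
    "mail sent to #{to}"
  end
end

class Signup
  def initialize(mailer = Mailer.new)   # the role is injected here
    @mailer = mailer
  end

  def register(user)
    @mailer.deliver(user)               # no hard-wired Mailer.deliver static call
  end
end

# a test can now substitute a fake object for the real mailer
fake = Object.new
def fake.deliver(to)
  "FAKE: #{to}"
end

Signup.new(fake).register("bob")  # => "FAKE: bob"
```

With a static Mailer.deliver call inside register, the fake could never be swapped in; that is exactly the isolation the post says static calls destroy.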

Closures "quote"

I found an interesting quote, which fully expresses my hesitation (wondering):
Anyone that really feels Ruby closures are a critical technical factor should be wondering why they're not developing in Smalltalk or CLisp.
Actually a lot of (all !?) the things I love about Ruby come from Smalltalk (and Lisp, but Smalltalk is closer to ruby, being an OO environment): closures, named arguments (OK, it's a hack in ruby, but it is going to be solved), continuations.
I saw a post saying that exception handling in Smalltalk is a library !!! Isn't that a pure thought !?

My biggest discomfort with Smalltalk is the image-based environment.
On one side, the image environment is what I need: when I develop in java, I am not looking for files, but for classes (I do type/function navigation).
On the other side, ruby is so lightweight: you write a script, and you can check it in, take it with you, etc. In Smalltalk, I need to extract the code from the image into a text file, do some stuff, and then reintegrate it elsewhere.

hmmmm, ruby or Smalltalk, where should I dig deeper?

Saturday, February 25, 2006

Concurrency-Oriented Programming (snooping)

Well, we have heard before that "the free lunch is over".
Here is a post about Erlang as "concurrency-oriented programming".
And here is one about an "Erlang vs Haskell comparison".

If you follow the link, some really interesting quotes appear:

Erlang is the specialist language narrowly focused on networked, highly-scalable concurrent applications.

I love Haskell because it forever twisted my brain into a different shape and I think I'm overall a much better coder now.
Maybe Io could also have some potential for concurrent programming (futures, coroutines, asyncs). Not at the same scale as Erlang, but who knows...

Wednesday, February 08, 2006

New Programming Language (Matz suggestion)

Here are Obie Fernandez's notes on RubyConf 2005.

Matz suggested Io, or Haskell, but also "old" (I would say "classical") programming languages were specified: Lisp, Scheme, Smalltalk.

I learned Lisp at university, but I didn't enjoy the prefix notation: (+ 2 3 4 5)
Smalltalk is always fun, but it is not a new mind-challenging concept (I would say that ruby is a file-based Smalltalk).

Io "is a small, prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk (all values are objects), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable)."
But for now it is very basic, only now approaching the 1.0 release, so libraries for the more "interesting" stuff are missing.

Haskell might be an interesting option. (darcs is written in it). Looking at the introduction,
I saw a qsort implementation, which looks very prolog-like:

qsort []     = []
qsort (x:xs) = qsort elts_lt_x ++ [x] ++ qsort elts_greq_x
  where
    elts_lt_x   = [y | y <- xs, y < x]
    elts_greq_x = [y | y <- xs, y >= x]

And here is a ruby equivalent:

def qsort(list)
  return [] if list.empty?

  x, *xs = *list
  smaller_than_x, bigger_than_x = xs.partition { |y| y < x }
  qsort(smaller_than_x) + [x] + qsort(bigger_than_x)
end

Once again, I was surprised by how flexible ruby is. I didn't know (maybe forgot) about the x, *xs = *list assignment possibility.

ps. here is a Japanese post about qsort in several languages.

Saturday, February 04, 2006

Working Effectively With Legacy Code (notes)

Notes from Michael C. Feathers book.

"Code without tests is bad code. It doesn't matter how well written it is; it doesn't matter how pretty or object oriented or well encapsulated it is. With tests, we can change the behaviour of our code quickly and verifiably. Without them, we really don't know if our code is getting better or worse."

...changes in a system: "Edit and Pray", "Cover and Modify"

A Seam is a place where you can alter behaviour in your program without editing in that place. Every seam has an enabling point, a place where you can make the decision to use one behaviour or another.
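A seam can be sketched in ruby (MessageSender/TestingSender are invented names): the deliver call is the seam, and the enabling point is the choice of which class to instantiate, not an edit to send_message.

```ruby
# Object seam sketch: behaviour at the seam changes by overriding in a
# subclass; the method containing the call is never edited.
class MessageSender
  def send_message(msg)
    deliver(msg)                # the seam: this call can be redirected
  end

  def deliver(msg)
    "delivered: #{msg}"
  end
end

class TestingSender < MessageSender
  def deliver(msg)              # enabling point: substitute behaviour here
    "logged only: #{msg}"
  end
end

TestingSender.new.send_message("hi")  # => "logged only: hi"
```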

Separate the new code (the change) from the old code. Contain the change in:
- a Sprout Method. Can also be static (static methods viewed as staging).
- a Sprout Class. The source class cannot be easily tested, so we isolate our change in another class, where we can test it.
- a Wrap Method: rename the old method and make it private/protected. The new method wraps the old functionality with the new functionality.
- a Wrap Class: Extract Implementer/Extract Interface.
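The Sprout Method idea can be sketched like this (InvoicePrinter is an invented example, not from the book): the new behaviour grows in its own method, testable in isolation, instead of being woven into the hard-to-test legacy method.

```ruby
# Sprout Method sketch: the change lives in format_line, which the
# legacy method merely calls.
class InvoicePrinter
  def print_invoice(items)                        # legacy entry point
    items.map { |i| format_line(i) }.join("\n")   # delegates to the sprout
  end

  def format_line(item)                           # the sprout: new, testable code
    "#{item[:name]}: #{item[:price]}"
  end
end

InvoicePrinter.new.format_line(name: "tea", price: 3)  # => "tea: 3"
```

Even if print_invoice is too tangled to get under test, the sprout can be verified on its own.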

"Good design is testable, and design that isn't testable is bad."
"When we depend directly on libraries that are out of our control, we are just asking for trouble."

"There is no decent place to sense what this code does."

"Command/query separation: a method should be either a command or a query, but not both. A command is a method that can modify the state of the object but that doesn't return a value. A query is a method that returns a value but that does not modify the object."
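The separation reads like this in code (Counter is an invented example): one method mutates, the other reports, and neither does both.

```ruby
# Command/query separation sketch.
class Counter
  def initialize
    @count = 0
  end

  def increment   # command: changes state, returns no useful value
    @count += 1
    nil
  end

  def count       # query: returns a value, changes nothing
    @count
  end
end

c = Counter.new
c.increment
c.count  # => 1
```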

"Programming is the art of doing one thing at a time"
Pair Programming: "It is a remarkably good way to increase quality and spread knowledge around a team"
"Let's face it, working in legacy code is surgery, and doctors never operate alone."

Tuesday, January 31, 2006

Code Quality

Michael Feathers : "The Bar is Higher Now"

and a more radical (language-style) one
Dave Astels: "Why Your Code Sucks"

both are very good.

Saturday, January 21, 2006

Checked Exceptions

Here is more hand-cuffing from Sun: Checked Exceptions (read here and here).

Why are there checked and unchecked exceptions?
Aren't both equally dangerous for your program?
And why checked? Why do people have to wrap and push an exception up to the point where checking makes sense?
Is my program safer if I use checked exceptions? Hmmm. I would understand it from a declarative point of view: I tell you, my API customer, that this function will throw an exception,
but how you handle it is your business, and not the compiler's. But eventually you and I will have to write TestCases for our code, and these specify/document my API and your code.

Haven't we learned anything from Smalltalk?

Dynamic vs. Strong Type Systems

Dave Hoover, the "Red Squirrel" makes/collects some good observations here
(Dynamic languages + TDD) == (power + safety) == confidence.
Yes. Free yourself from "hand-cuff" programming, and do your TDD.

Strong testing is fabulous, no doubt about it: the problem is that most developers don't have the discipline to write and update their test suites. The consequence is that, in the real world, there actually are benefits to static typing in complex projects. One way of thinking about static typing is as "just another test suite", except that it's a test suite developers are less likely to be lazy about or subvert.

No, no, no. The compiler is no tester. I am still surprised to see how many developers measure their progress in lines of code compiled, and not in features tested (see Running Tested Features). Having the code compile does not mean that a function will not throw an exception on a null pointer, or when an empty string comes in as a parameter. So where is the "test suite"? That the function accepts a string as a parameter, and every caller supplies a string? That is a very "thin" test suite.
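The null-pointer/empty-string point can be made concrete (greet is an invented function for illustration): a type signature says the argument is a string, but it says nothing about nil or the empty string, and only a test pins that behaviour down.

```ruby
# What a "String" type annotation would never check: the empty case.
def greet(name)
  raise ArgumentError, "name must not be empty" if name.nil? || name.empty?
  "Hello, #{name}!"
end

greet("Ana")        # => "Hello, Ana!"

begin
  greet("")         # type-correct input, still wrong: invisible to the compiler
rescue ArgumentError
  # only a test suite documents and enforces this behaviour
end
```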

The "safe language" argument appeals to fear, while the "flexible language" argument appeals to a sense of opportunity and adventure. Both are powerful motivations, so for a long time this argument has been a stalemate. Happily, that period is coming to an end. Two new factors have come into play: automated testing and transparency. Over the next five years they will turn the balance totally in the favor of more flexible languages.

:D I remember a quote about XP: most methodologies are based on "fear", but XP is a methodology based on "fun". And there is no safe language, there are only tested programs. If somebody asks at the end of the day what you have done, you say: "I have 30 specifications running, and I tested my code against them." Whether these specifications are right or wrong is another story, but at least you can say: I can prove that for these 30 cases, my code works.