Monday, May 29, 2006

No Pain, No Gain

JMockit is cool, but if "hacking" around a clean solution is so easy, when are we going to learn to avoid final classes and API calls ("My Application Is All API Calls")? When are we going to refactor our code to a clean OO solution, where objects represent roles/responsibilities (a one-to-one relationship)?

If you are a climber, you take care of your equipment; you don't duct-tape your cord !!!

Wednesday, May 24, 2006

Play And Learn

From RubyQuiz you can learn something about the use of continuations and the Amb operator for constraint processing. This is so much fun !!!

Arc Suggestions

Paul Graham posted the collected suggestions for (his) "new Lisp" here. Some nice points: concurrency, continuations, OO, macros... But I liked:

*** Dan Milstein: Concurrency

One problem which, IMHO, no popular language has come even
close to solving well is allowing a programmer to write multithreaded
code. This is particularly important in server-side programming, one of
Arc's major targets. I've written a good deal of multithreaded Java, and
the threading model is deeply, deeply wrong. As a programmer, there's
almost no way to write the kind of abstractions which let you forget
about the details. You're always sitting there, trying to work through
complicated scenarios in your head, visualizing the run-time structure of
your program.

I didn't see another way until I read John H. Reppy's "Concurrent
Programming in ML". Instead of building his concurrency constructs around
monitored access to shared memory, he builds them around a message passing
model (both synchronous and asynchronous). What's more, he provides
powerful means of capturing a concurrent pattern in an abstraction which
hides the details.

I highly recommend giving that book a read. Here's an example of some of
what you get (not the abstraction, actually, just the basic power of
message-passing over shared memory). The abstraction facilities are
complex enough that, like Lisp macros, a small example doesn't really
capture their power. I'm in no way familiar with concurrent extensions to
Lisp, so I'm not able to provide the code for how much harder it would be
in CL or Scheme. Assuming they were augmented with a shared memory model
(as Java is), which forces the programmer to deal with synchronized access
to memory, I can only imagine it would be significantly more complex.

A producer/consumer buffer. You want a buffer with a finite number of
cells. If a producer tries to add an element to a full buffer, it should
block until a consumer removes an element. If a consumer tries to remove
an element from an empty buffer, it should block until a producer adds one.

In Concurrent ML:

datatype 'a buffer = BUF of {
      insCh : 'a chan,
      remCh : 'a chan
    }

fun buffer () = let
      val insCh = channel() and remCh = channel()

      fun loop [] = loop [recv insCh]
        | loop buf = if (length buf > maxlen)
            then (send (remCh, hd buf); loop (tl buf))
            else (select remCh!(hd buf) => loop (tl buf)
                  or insCh?x => loop (buf @ [x]))
      in
        spawn loop;
        BUF{
          insCh = insCh,
          remCh = remCh
        }
      end

fun insert (BUF{insCh, ...}, v) = send (insCh, v)

fun remove (BUF{remCh, ...}) = recv remCh


Translated into a Lisp-ish syntax (very easy to do from ML), this would
look something like:

(defstruct buffer
  ins-ch
  rem-ch)

(defun create-buffer ()
  (let ((ins-ch (make-channel))
        (rem-ch (make-channel)))
    (labels ((loop (buf)
               (cond ((null buf)
                      (loop (list (recv ins-ch))))
                     ((> (length buf) maxlen)
                      (send rem-ch (car buf))
                      (loop (cdr buf)))
                     (t (select
                          (rem-ch ! (car buf) => (loop (cdr buf)))
                          (ins-ch ? x => (loop (append buf (list x)))))))))
      (spawn #'loop)
      (make-buffer :ins-ch ins-ch :rem-ch rem-ch))))

(defun insert (b v)
  (send (buffer-ins-ch b) v))

(defun remove (b)
  (recv (buffer-rem-ch b)))

(defun insert (b v)
(send (buffer-ins-ch b) v))

(defun remove (b)
(recv (buffer-rem-ch b)))

Key things to notice:

1) Language features:

Communication between threads *only* occurs over channel objects, which can
be thought of as one-element queues. In CML, channels are typed, but in
Lisp they probably wouldn't be.

send/recv: synchronous (blocking) communication over the channel. A thread
can attempt to send an object over the channel, and will then block until
another thread does a recv on the channel.

Creating a new thread is done via 'spawn', which takes a function as its
argument. (I can't remember what the signature of that function is
supposed to be -- clearly, in this case it can't be a function of no
arguments, but imagine it to be something like that).

Selective communication: the call to 'select' is one of the very powerful
features. It is like a concurrent conditional -- it simultaneously blocks
on a list of send/recv calls, and executes the associated code with
whichever call returns first (and then drops the rest of the calls). The !
syntax means an attempt to send, the ? means an attempt to receive. In
both cases, the '=>' connects the associated code to execute. (I haven't
really come up with a Lispy translation of that syntax).

I don't think select could be efficiently implemented without language
support. It requires a sort of 'partial blocking', which is tricky to
implement on top of normal blocking.

2) The Idiom

The buffer is implemented as a separate thread which has connections to two
channels and an internal list to keep track of the elements of the buffer.
This thread runs through a loop forever, taking the current state of the
buffer as its argument, and waiting on the channels in the body of its
code. It is tail-recursive.

Note the absolute lack of any code to deal with synchronization or locking.
You might notice the inefficient list mechanism (that append is going to
get costly in terms of new cons cells), and think that this is only safe
code because of the inefficient functional programming style. In fact,
that's not true! The 'loop' function could destructively modify a list
(or array) to which only it had access. There would still be no potential
for sync'ing problems, since only that one thread has access to the
internal state of the buffer, and it automatically syncs on the sends and
receives. It's only handling one request at a time, automatically, so it
can do whatever it wants during that time. It could even be safely
rewritten as a do loop.

What I find so enormously powerful and cool about this is that the
programmer doesn't need to worry about the run-time behavior of the system
at all. At all. The lexical structure of the system captures the run-time
behavior -- if there is mutating code inside of the 'loop' function, you
don't have to look at every other function in the file to see if it is
maybe modifying that same structure. This is akin to the power of lexical
scoping over global scoping. I have never seen concurrent code which lets
me ignore so much.

This really just scratches the surface of Concurrent ML (and doesn't touch
on the higher-level means of abstraction). But I hope it gives a sense of
how worthwhile a language it is to learn from.

3) Issues

I think that channels themselves would be fairly easy to implement on top
of the usual operating system threading constructs (without needing a
thread for each one). However, the style which this message-passing model
promotes can easily lead to a *lot* of threads -- if you have a lot of
buffers, and each of them has its own thread, things can get out of hand
quickly. I believe that Mr. Reppy has explored these very issues in his
implementation of CML.

Insofar as I have time (which, realistically, I don't) I would love nothing
more than to play around with implementing CML'ish concurrency constructs
in a new version of Lisp like Arc.

Google engEDU

A list with "all" (!?) educational videos from Google.

Tuesday, May 23, 2006

One More Thing

I really enjoyed the "Behaviour Driven Development" presentation which Dave Astels did at Google. And here is the "one more thing":

"I always thought that Smalltalk would beat Java, I just didn't know  it would be called 'Ruby' when it did." - Kent Beck

Monday, May 22, 2006

Speed Kills

Found one of Uncle Bob's blog entries:

Speed is the prime destroyer of software systems. In our rush to get something to execute we make mess upon mess. We push and shove the code around in a frenzied effort to make something work. And then, once we achieve the desired behavior, we consider ourselves to be done, and move on to the next urgent task.

Of course we all realize that this is self-destructive behavior. We know that the more we rush the deeper the messes become, and the slower and slower we will go. We know that the only way to keep development going fast is to work carefully, deliberately, and slowly. We know that if we do this, we will keep our systems clean and well structured. We know that clean and well structured systems are easy to change. We know this. Yet we find it difficult to act on this knowledge.

The traditional productivity curve for software projects is a sigmoid. It starts very high, and remains high for the first few months. This is the honeymoon period when the team is cranking. They get lots of good work done, and they get it done quickly. But then the messes begin to build. Those messes slow us down. The productivity curve enters a steep and sudden decline. A few months later productivity has bottomed out and asymptotically approaches zero. This is the phase of the project where it takes forever to do even the simplest thing. This is the phase of the project in which the smallest possible estimate is 3 weeks or more.

As productivity slows to a near halt, the business responds in the only way it can - it adds more people to the project in the forlorn hope of increasing productivity. But these new people, eager to please their employers and peers, continue to rush, thereby adding even more corruption to the existing steaming pile. Productivity continues to decline as the sigmoid approaches zero at the limit.

The solution to this nearly ubiquitous problem is to act upon what we already know. That speed kills projects. Slow down. Do a good job. Keep the code clean. Write unit tests. Write acceptance tests. And watch how fast you go!

Saturday, May 20, 2006

Saturday Reading

Actually watching videos, from TechTalk@Google:

Peter Seibel, "Practical Common Lisp":

In the late 1920's linguists Edward Sapir and Benjamin Whorf hypothesized that the thoughts we can think are largely determined by the language we speak. In his essay "Beating the Averages" Paul Graham echoed this notion and invented a hypothetical language, Blub, to explain why it is so hard for programmers to appreciate programming language features that aren't present in their own favorite language. Does the Sapir-Whorf hypothesis hold for computer languages? Can you be a great software architect if you only speak Blub? Doesn't Turing equivalence imply that language choice is just another implementation detail? Yes, no, and no says Peter Seibel, language lawyer (admitted, at various times, to the Perl, Java, and Common Lisp bars) and author of the award-winning book _Practical Common Lisp_. In his talk, Peter will discuss how our choices of programming language influences and shapes our pattern languages and the architectures we can, or are likely to, invent. He will also discuss whether it's sufficient to merely broaden your horizons by learning different programming languages or whether you must actually use them.

Dave Astels, "Beyond Test Driven Development: Behaviour Driven Development":

Test Driven Development (TDD) has become quite well known. Many developers are getting benefit from the practice. But it is possible that we can get even more value. A new practice is getting attention these days: Behaviour Driven Development (BDD).

BDD removes all vestiges of testing and instead focuses on specifying the behaviour desired in the system being built. This talk will focus on Ruby and will introduce a new BDD framework: rSpec. The ideas, however, are language independent.

Wednesday, May 17, 2006

Architects Must Write Code

I found a blog entry here.

The comments are pretty interesting:

Bell-Labs has a pattern repository: the WebIndex of OrgPatterns (see ArchitectAlsoImplements, DevelopingInPairs). And my favorite:

Architecture, like war plans, does not last longer than the first minute of the battle: the architect must be in the front line, coding, spiking, re-architecting...

Tuesday, May 16, 2006

Working Effectively with Legacy Code III

Jeremy D. Miller has some notes about the book.

Tip #1: Yes, go buy the book, and put it next to "Refactoring" (you know: beware of code smells)

Tip #2:

On most of the XP projects I've been on we've used an "Idea Wall," just a visible place to write down or post technical improvements.  Anytime we have some slack time we start pulling down tasks from the idea wall.  Occasionally we're able to outpace either testing or requirements analysis and we aren't really able to push any new code.  Whenever that happens we immediately pull things off of the idea wall.  One way to judge if your technical debt is piling up is to watch how crowded the idea wall is getting.  On the other hand, if something stays on the idea wall for a long time, it might not be that important after all.

Design never stops, not even for an older codebase

Thursday, May 11, 2006


Virtual Street Reality

Martin Fowler evaluates Ruby


It's still early days yet, but I now have a handful of project experiences to draw on. So far the results are firmly in favor of Ruby. When I ask the question "do you think you're significantly more productive in Ruby rather than Java/C#", each time I've got a strong 'yes'. This is enough for me to start saying that for a suitable project, you should give Ruby a spin. Which, of course, only leaves open the small question of what counts as 'suitable'.

Is Smalltalk (sorry, I meant Ruby) making a comeback?

Wednesday, May 10, 2006

Lambda the Ultimate - thread

What do you believe about Programming Languages (that you can't prove (yet))?

Interesting thread. You can find some nice gems in the flame-war dirt.

Should Mock Objects be considered harmful?

Robert Collins asks the question:

Should Mock Objects be considered harmful? As an optimisation for test suites they are convenient, but they mean you are not testing against something which can be verified to behave as the concrete interface is meant to, which can lead to Interface Skew.

Let's say we have an object A which uses an object B. In OO, A and B represent roles: object A does something in collaboration with B. A behaves "right" only if B behaves "right". So we have a behaviour contract between A and B. This is normally represented by some unit tests for the B role which specify its behaviour. Based on that, we can test A: we "mock" the B role and see how A reacts. If we want to keep our implementation clean, A knows nothing about B's implementation; it knows only about its behaviour. (B is an interface, or A and B are implemented in a dynamic language.)

Once we have tested B's behaviour and A's behaviour, they "should" work together without errors. In practice this doesn't always happen: usually B is not thoroughly tested, and the behaviour contract is broken. For this case we should have integration tests: A and B play nicely together.

Testing kata: unit-test role B, unit-test role A, integration-test A and B.

Monday, May 08, 2006

Getters And Setters are Evil

Setters are evil because they allow you to have inconsistent objects, which cannot work until their collaborators have been set (see PicoContainer's Good Citizen).

Getters are evil because they allow you to extract the data from the object, instead of putting the action where the data is (see "Violating Encapsulation", or Martin Fowler's "GetterEradicator").

So everything goes round and back to Allen Holub: "Why getter and setter methods are evil".

Thursday, May 04, 2006

Violating Encapsulation

Dave Astels blogs about it.

Something I see all the time, on every team I've been involved with, is code like the following (classes are generalized from examples):

MyThing[] things = thingManager.getThingList();
for (int i = 0; i < things.length; i++) {
    MyThing thing = things[i];
    if (thing.getName().equals(thingName)) {
        return thingManager.delete(thing);
    }
}

This code is tightly coupled to the implementation of MyThing in that it gets the name property, knows that it's a string, etc. This is a classic approach from the old days of procedural programming. It is NOT OO.

How about:

MyThing[] things = thingManager.getThingList();
for (int i = 0; i < things.length; i++) {
    MyThing thing = things[i];
    if (thing.isNamed(thingName)) {
        return thingManager.delete(thing);
    }
}

I've seen this procedural approach a thousand times. The problem is that a lot of developers think "procedurally": they never met Smalltalk/Lisp with its closures. That's why learning a new programming language is good: you will think differently, even if you never get a chance to use the newly acquired language.

Startup School

Paul Graham on "the hardest lessons for startups to learn".