Wednesday, November 23, 2005

Microsoft’s recommendations for Test Driven Development are wrong!

After Michael Feathers made his point, Jeremy D. Miller also blogged about it.

Please also have a look at the TDD discussion for some good points. I think that Microsoft only wants to sell its software; the message boils down to:
a. "we know better"
b. "our tools are better than other open-source tools"
c. "you cannot live without our tools"
d. "Buy our software !!!"

Finally, Jeremy made a really, really good point: don't be a sheep; think for yourself:

A.) We need to diversify our sources of knowledge
B.) We need to question any and all advice from Microsoft (or anyone else) and think for ourselves
C.) We need to have an active conversation with ourselves on best practices
P.S. Scott Bellware also comments here

Thursday, November 17, 2005

Notes on "Pragmatic Unit Testing"

Great Book! Go Buy It!

[my thoughts]

What Do I Want to Accomplish:
- Does it do what I want?
- Does it do what I want all the time?
- Can I depend on it?
- Does it document my intent?

Excuses For Not Testing
- "it takes too much time":
- If you wait too long to begin unit testing, it will definitely take too much time
- It's like trying to clear a couple of acres of land with a lawn mower.
If you start early on when there's just a field of grasses, the job is easy.
If you wait until later, when the field contains thick, gnarled trees and dense,
tangled undergrowth, then the job becomes impossibly difficult.
- It reclaims part of the time otherwise spent debugging code, reworking code, and isolating reported bugs
- "it's not my job to write tests"
- our job is "to create working code".

What to Test: The Right-BICEP

- Right: Are the results right?
- B: Are all the boundary conditions CORRECT?
- Conformance: Does the value conform to an expected format?
- Ordering: Is the set of values ordered or unordered as appropriate?
- Range: Is the value within reasonable minimum and maximum values?
- Reference: Does the code reference anything external that isn't under direct control of the code itself?
- Existence: Does the value exist (e.g. is not-null, nonzero, present in any set, etc.)?
- Cardinality: Are there exactly enough values?
- Time (absolute and relative) - Is everything happening in order? At the right time? In time?
- I: Can you check inverse relationships?
- C: Can you cross-check results using other means?
- E: Can you force error conditions to happen?
- P: Are performance characteristics within bounds?
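The Right-BICEP checks above can be sketched in code. Here is a minimal example, using plain Java assertions instead of JUnit so it stays self-contained; the `normalize` heading helper is hypothetical, not from the book.

```java
// A few Right-BICEP checks against a hypothetical compass-heading helper.
public class HeadingTest {
    // Normalize a compass heading into [0, 360).
    static int normalize(int degrees) {
        int d = degrees % 360;
        return d < 0 ? d + 360 : d;
    }

    public static void main(String[] args) {
        // Right: are the results right?
        assert normalize(90) == 90;
        // Boundary (Range): the edges of the valid interval.
        assert normalize(360) == 0;
        assert normalize(-1) == 359;
        // Inverse: adding a full turn must not change the heading.
        assert normalize(45 + 360) == normalize(45);
    }
}
```

Run with `java -ea HeadingTest` so the assertions are enabled.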

What Else Can Go Wrong?

In good object oriented design, you do not use a raw native type (e.g. an int or Integer) to store a bounded-integer value such as an age, or a compass heading.

Almost any indexing concept ... should be extensively tested. Here are a few ideas to get you started:
- Start and end index have the same value
- First is greater than last
- Index is negative
- Index is greater than allowed
- Count doesn't match actual number of items
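Each of the index cases above maps directly onto a test. A small sketch, built around a hypothetical `slice` helper (my own illustration, not code from the book):

```java
import java.util.Arrays;

// Walk through the index boundary cases from the list above.
public class SliceDemo {
    // Return elements [first, last) of the array, validating indices first.
    static int[] slice(int[] a, int first, int last) {
        if (first < 0 || last > a.length || first > last)
            throw new IndexOutOfBoundsException(first + ".." + last);
        return Arrays.copyOfRange(a, first, last);
    }

    // True if slice() rejects the given index pair.
    static boolean rejects(int[] a, int first, int last) {
        try { slice(a, first, last); return false; }
        catch (IndexOutOfBoundsException e) { return true; }
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        assert slice(a, 1, 1).length == 0;  // start == end: empty slice
        assert rejects(a, 2, 1);            // first greater than last
        assert rejects(a, -1, 2);           // negative index
        assert rejects(a, 0, 4);            // index greater than allowed
    }
}
```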

If you have to make assumptions about the state of the class and the state of other objects
or the global application, then you need to test your code to make sure that it is well-behaved if
those conditions are not met.

[I hate when funkyObject.DoStuff() works only when something else happened before. We should try to write stateless functions, since the name of a function usually does not state when it works and when it does not; basically, the object is lying. If functions have to be stateful, please make them private, so that clients don't see them.]

Make sure your method can stand up to "nothing" (null and empty inputs).

In most cases, the count of some set of values is only interesting in these 3 cases:
- 0 (Zero)
- 1 (One)
- More than one
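The zero/one/many rule means a test suite only has to exercise those three counts. A quick sketch with a hypothetical `describe` helper:

```java
import java.util.List;

// Exercise the three interesting cardinalities: zero, one, many.
public class CountCases {
    // Hypothetical helper: summarize how many items a list holds.
    static String describe(List<String> items) {
        switch (items.size()) {
            case 0:  return "empty";
            case 1:  return "one item";
            default: return "many items";
        }
    }

    public static void main(String[] args) {
        assert describe(List.of()).equals("empty");
        assert describe(List.of("a")).equals("one item");
        assert describe(List.of("a", "b", "c")).equals("many items");
    }
}
```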

[Always prepare/test for multi-threading access of your object.]

[A function name should represent the "happy" case, what the function does, but 90% of the test cases should specify what the function does when things go wrong]

[The object must always be able to react to any situation]

[Testing the boundary conditions makes me think of Prolog: define the boundaries, then define the "work" case.]

Properties of Good-Tests: A-TRIP
- Automatic
- Thorough
- Repeatable
- Independent
- Professional

When fixing any bug, ask yourself the key question:
Could this same kind of problem happen anywhere else?

"Don't expose your privates!" (might be a warning that another class is struggling to emerge)

"All Tests Pass All The Time"

[see resources for the "Pragmatic Unit Testing: Summary"]

[Design for Testability: Think "How am I going to test my software!?", do TDD, and everybody will thank you (including the QA department)]

[Test the class invariants (like a list is always sorted). There is something that we could check for every test scenario.]
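The sorted-list invariant can literally be asserted after every mutation. A minimal sketch (my own example of the idea, not from the book):

```java
import java.util.ArrayList;
import java.util.List;

// A list whose class invariant ("always sorted") is checked after every mutation.
public class SortedList {
    private final List<Integer> items = new ArrayList<>();

    // Insert the value at the position that keeps the list sorted.
    void add(int value) {
        int i = 0;
        while (i < items.size() && items.get(i) < value) i++;
        items.add(i, value);
        assert isSorted() : "class invariant violated";
    }

    // The invariant itself: items are in ascending order.
    boolean isSorted() {
        for (int i = 1; i < items.size(); i++)
            if (items.get(i - 1) > items.get(i)) return false;
        return true;
    }

    List<Integer> items() { return items; }
}
```

Every test scenario against this class can end with the same `isSorted()` check, regardless of what the scenario itself verifies.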

[Make the test code professional: refactor, avoid duplication, create helper classes/functions]

Who is supposed to check the validity of input data?
"Keep the barbarians out at the gate"

Tuesday, November 15, 2005

Java Generics are Rude

"Uncle Bob" talks about the new Java generics here and here

When I see the "pain", I need to ask myself why.
Did Sun think: "Oh, this is straightforward. We can do this. This will make the life of our customers (developers) much better."?

Some people are really obsessed with static typing. They really think static typing is safe.
In a recent discussion I said that dynamic typing requires more "common sense" than static typing.

The whole duck-typing approach (and the lack of interfaces) implies a more polite discussion between the
API provider and the API customer: a more careful choice of naming, more consistent API documentation. And probably the lack of IntelliSense improves this politeness, since the communication is based on mutual trust, and not on tool capabilities (but that's another story).

But the Java samples are really, really "rude". Instead of concentrating on the solution, I need
to understand the difference between the wildcard forms and the whole generics baggage. The generics don't show me the solution; they hide it from me. They are like a chatty guy who wastes my time with all kinds of nonsense.

Here is the James Robertson Smalltalk (civilized) solution:
initialize: aList
list := aList.

add: aThing
list add: aThing.

list do: [:eachItem | "code to do stuff here"].
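For contrast, a rough Java 5 rendering of the same three tiny operations; the `Holder` class is my own sketch, not code from the linked posts, and it exists only to show how much generics syntax surrounds the same idea:

```java
import java.util.Collection;
import java.util.List;

// Java counterpart to the Smalltalk snippet above: hold a list, add to it, walk it.
public class Holder<T> {
    private final List<T> list;

    public Holder(List<T> aList) { list = aList; }

    public void add(T aThing) { list.add(aThing); }

    // The bounded wildcard is exactly the kind of baggage the post complains about.
    public void addAll(Collection<? extends T> things) {
        for (T each : things) list.add(each);
    }

    public List<T> items() { return list; }
}
```

Usage is similarly noisy: `Holder<String> h = new Holder<String>(new ArrayList<String>());`.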

Saturday, November 12, 2005

Bill Gates vs. Steve Jobs...

in slide-presentations.
Funny, with a nuance of "serious": here.

Wednesday, November 09, 2005

Think Different (Visual Studio Rots the Mind)

Charles Petzold: Does Visual Studio Rot the Mind?

It's more about how our tools (IDE/Language/Vocabulary) influence our way of thinking and inherently our solutions.

See also the Behaviour-Driven-Development vs. Test-Driven-Development.
Psychology tells us that the words we use shape our thinking. By continually using the word "test" when talking about TDD, we place emphasis on the side effects rather than the goal. There has been ongoing discussion about this wording problem for a number of years in the Agile community but until now, nobody has put forth a proposal to fix it.
Note: I think that "test" as a name is a smell, since it reveals the implementation details. The real question is: are we testing or are we specifying?
Are we doing code-based, always-up-to-date specification, or are we testing an implementation? If we mean xUnit to be a DSL for specification, then the naming is bad. In NUnit/TestNG the problem is partially solved.

Or look at how the Java guys are asking for an IDE for Ruby (some are asking for IntelliSense for Ruby).

We should really learn a new language every year. If we speak differently, we think differently.
We should liberate our minds from these tool/framework/IDE/vocabulary boundaries.

And a highly interesting quote from Seattle Mind Camp:
Just more evidence that everything that was old is new again.

Friday, November 04, 2005

AppDirs for Linux

or simpler filesystem structure.

The Zero-Install system (sort-of, installer)
Rox Desktop (sort-of, desktop manager)

Thursday, November 03, 2005

Visual Studio Hacks

aka "Pimp your IDE" :)) here
and "Pimp your Windows Desktop" with stuff from here (some interesting, some not).

Wednesday, November 02, 2005

Behaviour Driven Development

or "How Unit-Testing went wrong, and how to fix it"

Dave Astel here and here.

Some blog entries from Wayne Allen here and here,
and from Chris Matts here and here.

A framework for ruby: RSpec.

Notes/Quotes on BDD

- no units but "facets of behaviour"
- level of granularity much smaller than that of the typical unit test
- TDD/BDD: you write specifications of what your code will have to do.
- it's all about specifying behaviour and not writing tests
- the idea of having a Test class for each of your production classes is ridiculously limiting. And the thought of testing each of your methods with its own test method (1-1 relationship) is laughable.