• Academic software

    A lot of very good software in academia is developed by PhD students or post-docs. Both groups have temporary contracts, and very often the software dies at the end of the contract.

    One can compare academic software with a Formula 1 race car: it is very fast, beautiful, technologically advanced. But it is very difficult to drive and needs frequent service. The only way of keeping it on the road is to have the driver and the designer working closely together.

    To make it possible to use the technology that is developed for the Formula 1 race car in practice, the car must be redesigned:

    1. It must be possible for anyone with a normal driver's license to drive it.
    2. It must be possible to drive it on different types of road, not only on a polished race track.
    3. It must be possible for a regular trained service engineer to service it.
    4. The service interval must be increased from 200 km to 25,000 km.

    It is obvious that in car design, these changes are not the responsibility of the designers in the Formula 1 team. There are other designers who specialize in normal cars.

    The equivalent concepts in software design are:

    1. The program must be made user-friendly. It must be usable by casual users who are not also computer hackers.
    2. It must be robust enough to work on real data as encountered by real users.
    3. Running and installation must be well documented.
    4. It must be able to run unsupervised on many data sets, and not break down after a limited time, e.g. through a seemingly unrelated upgrade to the operating system or a minor change to an underlying web service.

    Funnily enough, in many larger projects it is expected that the designers of the Formula 1 software will also do this redesign. This is in strong contrast with car design! In NBIC-II we will not make this mistake. Taking software the extra step will be one of the important tasks of our Central Engineering Team. And this team will contain specialists in industrial-strength software engineering.

    We will turn the Ferraris into Volkswagens. And we will be proud of them!

  • An Hourglass representing Research and Technology

    The programming we choose to do in our team is like the neck in the hourglass representing life science research and technology:

    • There are many grains of sand above us. Those represent all the software tools developed in life science research.
    • There is a large void below us. This represents the need for widely applicable tools in the life sciences.
    • At the bottom there are also grains of sand. They are well settled. These represent the current technology: commercially available tools and well-serviced open source packages.

    How does the sand get from the top to the bottom? Via the neck of development. It is narrow; only a few academic tools make it to the bottom. The flow through the neck is powered by:

    • push: a few academic groups that have the capability and capacity to make their tools available
    • pull: a few companies that look far ahead and are able to see and use the potential of an academic tool

    In our hourglass of life science tools, new sand is being added at the top all the time. And most of it spills over the rim of the top bulb after a while. Some tools never deliver what the author thought they would. Some are made to solve a single problem and are rightfully abandoned when that is done. But many tools are published and left as orphans. Only a selection of tools that promise to be useful for a larger audience ever make it to the neck.

    In practice, the neck is too narrow. There are many more valuable tools than are taken up. A team like ours can help to make the neck larger by making existing research tools applicable for wider use as a service to life scientists with a clear need (we call it professionalization). But it is sometimes hard to convince funding parties to pay for this. It is also hard to convince researchers to work on making their software better: professionalization does not generate new high-impact papers. We work on convincing the funding parties that it is better to professionalize existing successes than to reinvent them using research money. And we work on convincing the scientists that professionalization of their output will lead to higher citation scores on their existing publications.

    Science wants novelty. And the current Dutch funding climate is directed towards applied science, towards innovation in society. Look at the picture, and you can see that these are hard to combine. Innovation starts where novelty ends. The only way to make the combination is to include development.

    Photo by graymalkn on flickr

  • Decision tree for scientific programmers in bioinformatics

    This is one of the syndromes we're trying to fight in BioAssist...

  • Is my software any good?

    If you are not getting any user feedback for your software, there are two possible reasons.

    1. It is bad. Nobody uses it.
    2. It is good. Everyone is happy.

    If this happens to you, think back. Did you ever get any feedback before? How did you react?

    • Did you listen to your users and fix their problems?
    • Did you teach your users the way your software should be used?

    By answering these two questions you can figure out for yourself why you no longer get feedback. If you listened, and the stream of questions stopped, this probably means the users are now happy. If you attempted to correct their usage, most likely nobody uses it any more.

    You did remember to include your contact details, didn't you?

  • Let's go build some obsolete tools... and avoid being blamed.

    One of the first stages in the development of a new tool (software or hardware) is a functional specification. The functional specification matures in discussions between the developers (department of R&D) and the customer representatives (often the departments of marketing and sales).

    Of course a functional specification is useful: it is very hard to develop something new without an idea of how the new tool will be used and what it will be compared with. However, defining a functional specification can also be taken too far. In some organizations, the functional specifications are spelled out in the tiniest details. At the end of a long formal procedure, the book of specifications is signed like a contract between marketing and development. The development can only be started when the list of signatures is complete and will be performed in splendid isolation from the world of potential users. Why do organizations do specifications this way? It is often an attempt to separate the responsibilities of the departments, so that if anything fails the appropriate party can be blamed. If the final product does not meet the functional specifications, this can be blamed on the developers. And if the product does not succeed even though it meets all the functional specifications, this will be blamed on marketing.

    I have a serious problem with this approach: using this procedure, how can one ensure that the product will be useful? After two years without interaction, development may produce exactly what marketing asked for (making all deadlines and within budget), but the market has changed and no longer needs the designed product. Or technology has changed, and better specifications would have been within reach and are offered by the competition. Or maybe marketing truly made a mistake, and asked for something the world is not waiting for. In these cases, clearly the development department cannot be blamed! But if you develop an obsolete product this way, where does that leave the organization as a whole? And if this is not the best solution for the organization as a whole, will it be good for the development department? Even though everyone did exactly what was expected, people may be laid off because development costs cannot be recovered.

    The solution is, as often, to keep a middle road. Using, e.g., Agile or Lean development methods, developers can stay in constant communication with marketing. Iterative and modular design procedures can be used to verify that the new tool does what it should, without relying on the capacity of people to describe specifications in words beforehand. And because the communication with the market is not lost during the development process, the tool will have a significantly higher chance of actually being useful at the moment of introduction.

    Image (by QuiteLucid on Flickr): "a camel is a racing horse designed by a committee".

  • Never forget the real purpose of Agile! Bureaucracy is your enemy.

    Agile in software development primarily means that the customer must feel that you can respond immediately.

    To be able to deliver results quickly, guaranteed quality is essential. You must make sure that nothing you change breaks existing functionality.

    However, if you try to ensure your quality by adding release regulations and code review procedures, or by increasing formality by appointing a release manager, this defeats the agility goal. A two-minute bug fix becomes a two-week ordeal.

    Instead, you should ensure quality through automated testing. Continuous integration tools never get tired of running the same tests over and over again. They are always available when you need them, never make mistakes, and are fast. They are the agile quality guarantee.
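    As a concrete illustration, here is a minimal sketch of the kind of regression test a continuous integration server re-runs on every change. The `normalize` function is a hypothetical stand-in for any piece of existing functionality, not taken from a real project:

    ```python
    # A tiny, self-contained regression test suite. A CI server would run
    # this (and hundreds like it) automatically after every commit, so a
    # quick bug fix cannot silently break existing behaviour.

    def normalize(values):
        """Scale a list of numbers so they sum to 1.0."""
        total = sum(values)
        if total == 0:
            raise ValueError("cannot normalize an all-zero list")
        return [v / total for v in values]

    def test_normalize_sums_to_one():
        # Invariant that no future change may break.
        result = normalize([2.0, 3.0, 5.0])
        assert abs(sum(result) - 1.0) < 1e-9

    def test_normalize_rejects_all_zero_input():
        # The error behaviour is part of the guaranteed functionality, too.
        try:
            normalize([0, 0])
            assert False, "expected ValueError"
        except ValueError:
            pass

    test_normalize_sums_to_one()
    test_normalize_rejects_all_zero_input()
    ```

    Because the safety net is automatic rather than procedural, a two-minute fix stays a two-minute fix: the developer changes the code, the tests run, and the release can go out the same day.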

  • Scrum is for unknown roads

    There are two ways of car navigation: non-agile and agile. Non-agile navigation is when you take a sheet of paper with you with instructions like this:

    Drive 5 km. Turn left. At the third light, turn right; then turn left just before the blue building. Stop when you arrive at a gas station.

    Non-agile navigation works fine if there is a well-known path to the destination, and if you can't make a mistake. If anything changes on the path, if there are roadworks, if you make any mistake along the route: there is no contingency; it will be impossible to reach the destination. Furthermore: if there is nobody around who has been to the same destination (even several times), it will be impossible to get a detailed description of the route.

    Agile navigation is using road signs. To get to the San Francisco Museum of Modern Art, you first follow signs to California, then San Francisco, and when you arrive there, you follow signs for SFMOMA. The only thing you need to know is where you're going, and roughly where that is (recursively, if you want). This method of navigation is very robust: even if a street becomes a one-way street, or if the blue building is torn down, you will still be able to get to your destination. What's even better is that if, along the way, your exact goal changes, you will still be able to change your plans. On your way to San Francisco, you may still decide that you want to visit the Golden Gate instead.

    Now, of course, I am not trying to teach you how to navigate a car. I just want to use this as a metaphor for the development of software.

    How is software development like navigation? You normally do not exactly know what you need to develop. And only rarely can you exactly follow the tracks of somebody else. Software development is not like the known path that is suitable to non-agile navigation.

    Nevertheless, a lot of software is still developed using non-agile methods. Specifications are fixed completely before the development is started. The exact steps required are written up in detail. Contracts are signed. And then road blocks occur, and the project goes over budget and falls behind schedule. And when the project is delivered, the customer is not happy with the functionality, because either their ideas have changed, or they have not been able to express themselves accurately enough in the specifications.

    Agile software development has all of the advantages of agile navigation. You can go to unknown places. You can even change the details of your plans along the way. And it is very robust against unexpected road blocks. It is clearly the right choice.

    Agile software development is not easy. But it is less prone to utter failure than the traditional method. Let's learn to navigate on the road signs!

  • The difference between what people want and what they ask for

    A software shop like ours should deliver what customers want... but that may be difficult, because they often do not ask for what they want. This is because customers think they know what causes a problem, and they think they know the best way to solve it. They then formulate their request in an attempt to help us.

    An example: I once had customers asking me whether I could change my software so that it would round the numbers it used to position a robot. It would have been easy to satisfy that request, but I decided to ask why. This proved to be a good idea. I found out the customers were copying the numbers into another software package. Rather than doing what the customers asked, I ended up writing a direct interface to the other software. This made the users' lives much easier, without limiting the possibilities of the robot.

    We cannot blame customers for not knowing what is easy and what is difficult to implement. This cuts both ways. They may think that something is very easy, when in fact it is fundamentally very hard. But it also happens that they do not dare to ask a question they think is hard, when in fact it would be very easy.

    If you want to make the best possible software, you need to keep asking "why" until your user's report has been changed to "If I do A, I get B. But instead of B I would like to see C (because I need D)". This will help you to decide how customer satisfaction can be maximized. The maximum may be much higher than your customers expect.

  • Who would go welding without a drawing?

    Imagine a mechanical workshop. You come with a problem that they will solve. What do you expect? You expect to work together with them, roughly in the following five-step procedure:

    1. Specify. You will describe the problem in the form of a functional specification
    2. Design. They will make a drawing
    3. Pieces. They (possibly multiple people in parallel) will use the design to select and manufacture pieces
    4. Product. They will weld the pieces together into the tool you need
    5. Test. You test it, and it may require minor adjustments

    Now imagine a software workshop. You come with a problem that they will solve. What do you expect?

    1. Specify. You will describe the problem in the form of a functional specification
    2. Product. They will make the software tool
    3. Test. You alpha test, they fix, you beta test, they fix. Repeat until satisfactory

    How come this list contains only three instead of five steps? And why is the last step always causing so much pain? Could this difference be the origin of why so many software products fail? What is the real difference between software and hardware development?

    Rather than trying to solve the software problem at once, first imagine a hardware workshop that works like this:

    1. Specify. You will describe the problem in the form of a functional specification
    2. Product. They will weld materials together into a tool
    3. Test. You test, they fix, you test again, they fix. Repeat until satisfactory

    How likely would it be that this procedure is faster than the five step procedure? How obvious is it which pieces need to be welded together? Will this allow multiple people to work together on the product? And how much work is the testing for you? How much waste is produced in the process? You would not accept this kind of quackery! And my point: you should not accept it from a software workshop either.

    A good software workshop would use the same five steps the good mechanical workshop uses:

    1. Specify. You will describe the problem in the form of a functional specification
    2. Design. They will make a (modular) design
    3. Pieces. They (possibly multiple programmers in parallel) will use the design to build and select software modules 
    4. Product. They will use the modules to build the software you need
    5. Test. You test it, and it may require minor adjustments

    Investment in a design will result in a solution that truly solves your stated problem, without requiring endless iterations to get right. It may look like a slow approach, but that is deceptive: you will actually get a result that gets you where you want to be, even when you have new additional requirements in the future.

    I'd like to thank a former colleague at Bruker AXS who gave me this nice comparison. I know he wishes to remain anonymous on the 'net. Image credit: Dystopos on Flickr.