Stephane's thoughts corner…
Stephane Bailliez's thoughts on everything


November 30, 2006

Little things that make you smile

Filed under: Process,Software — stephane @ 10:41 pm

It’s not everyday that you see a short and sweet mailing list exchange like this:

Ken, pointing out that the number of mailing lists is increasing linearly at an impressive rate:
- Brian: “I think we should create a new mailing list to discuss this.”
- Steve: “No, I think we should have a meeting.”
- Leo: “Conference call!”
- Graham: “Conference!”
- Justin: “No, you don’t get it – we need to call a vote!”
- Brian: “+1”

Thanks for that piece of fun :)

September 19, 2006

Spring switching to Maven 2…riiiiiiight

Filed under: OpenSource,Process,Software — stephane @ 12:10 am

The first reaction when I read this was:

Another good piece of software that will lose thousands of hours to build problems.

Why? Because it will go through the same hurdles that large projects such as Apache Directory Server or Apache Geronimo go through just about every day.

Sylvain points at the Cocoon instructions and the most revealing part is actually:

To build Cocoon, use the following command:

$ mvn -Dmaven.test.skip=true install

In case of any failures, repeat command as necessary till you see the message:

BUILD SUCCESSFUL

which is not a joke, and which captures one of the most dramatic aspects of Maven: it is unreliable and unreproducible.

Sylvain already blogged about it extensively and revealed that we are using Ivy to manage our dependencies (which at this time number in the three digits). A project is indeed only a matter of importing a one-liner build.xml, declaring a name, a version and package information (for manifest purposes), and providing an ivy.xml.
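As a rough sketch of what such a per-project Ivy descriptor looks like (the organisation, module, configuration names and dependencies here are hypothetical examples, not our actual files), an ivy.xml might be:

```xml
<!-- ivy.xml — minimal sketch; names and revisions are hypothetical -->
<ivy-module version="1.0">
  <info organisation="com.example" module="my-webapp" revision="1.0"/>
  <configurations>
    <conf name="compile" description="needed to compile the module"/>
    <conf name="runtime" description="needed at runtime" extends="compile"/>
  </configurations>
  <dependencies>
    <!-- e.g. the servlet API in the compile configuration for webapps -->
    <dependency org="javax.servlet" name="servlet-api" rev="2.4"
                conf="compile->default"/>
    <dependency org="commons-logging" name="commons-logging" rev="1.1"
                conf="runtime->default"/>
  </dependencies>
</ivy-module>
```

The point being that the descriptor stays short and declarative: configurations map cleanly onto classpaths, which is exactly the kind of control a verbose POM makes painful.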

Interestingly, the most annoying elements at this time stem from the extra compatibility layer with Maven that I inserted a few months ago to offer a painless transition. I plan to get rid of it very soon, now that we are mostly in cruise-control mode for this build and I am starting to see patterns emerge and real needs show up.

Early on, when Maven was initially adopted (for no good reason, actually), I spent a lot of time getting the dependencies from a reliable source (i.e. not ibiblio or whatever, but directly from each project’s website), figuring out the dependencies and rewriting the POMs. Anything you get from an online repository is inherently badly written, whether it was written by a Maven user who just wanted the artifact on the repo, or by the developers of the module themselves, who don’t really know how to write a POM either: there is no valuable documentation on it, and it is extra-verbose since XML attributes seem not to exist in Maven XML land, so people tend to get lost, fail to update valuable information, and let totally useless information into the repository.

To partly describe the current build system without saying too much: I have roughly two build files. One is for a normal application that produces a jar; the other, for a web application producing both a jar and a war, imports the ‘normal application’ one, overrides one target and defines a few webapp-related ones.

  • A project therefore just has to import either the normal or the web application build file. A one-liner. I provide typical templates for build.xml, build.properties and ivy.xml, the latter with predefined configurations and a set of de-facto dependencies (for example servlet-2.4 in the compile configuration for webapps, etc…)
  • As of now each project also publishes its sources and javadocs.
  • The whole set of applications is described using a ‘virtual ivy’ file (no artifacts), which allows producing a graph/report of the dependencies or building them (I have another build for that which basically does subant calls).
  • It is built using an Ivy build list that determines the order of dependencies.
  • I have a few niceties; for example I have added a target ‘whoami’ which just prints out the name of the project using an <echo>, so you can easily see the order of the whole build without actually building it. It sounds stupid, but it is pretty helpful sometimes. :)
  • It of course runs the whole unit-test shebang on request and gives coverage information.
  • For webapps, it launches a pluggable servlet container against your war, in remote debug mode by default.
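The pattern described in the list above can be sketched roughly as follows; the file names, paths and project names are hypothetical, and the snippet assumes the Ivy Ant tasks are declared under the usual ivy namespace:

```xml
<!-- build.xml of an individual project: little more than a one-liner import
     (names and paths are hypothetical) -->
<project name="my-webapp" default="jar">
  <import file="../common/webapp-build.xml"/>
</project>
```

```xml
<!-- in the shared build file: the 'whoami' nicety, plus an illustrative
     master target using Ivy's buildlist to drive subant in dependency order -->
<target name="whoami" description="print the project name without building it">
  <echo message="${ant.project.name}"/>
</target>

<target name="build-all">
  <!-- orders the build files according to each project's ivy.xml dependencies -->
  <ivy:buildlist reference="ordered.projects">
    <fileset dir="projects" includes="*/build.xml"/>
  </ivy:buildlist>
  <subant buildpathref="ordered.projects" target="install"/>
</target>
```

Running `ant whoami` across the ordered list is what lets you print the whole build order without compiling anything.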

I’m still not satisfied (and people who know me also know that I’m never satisfied) with the current state of affairs, and things could be even better and simpler in many, many ways.

I just want to make that build simple and reliable. I don’t want it to be a kitchen sink that does everything (and nothing), so I’m being very careful not to add too many features that people would want (most of the time they just prove unneeded). I first want to make it build in a very predictable way: if something fails, I want to know where it fails and why, and as early as possible (without, again, writing a truckload of code, properties, conditions, …). This is basically THE goal. It should not make coffee; we have a couple of coffee machines for that. :o )

And for any large build, believe me, you don’t want to have something too smart.

The most difficult thing is to cater to different styles and to anticipate and/or control the way the build evolves. A codebase normally goes through a lot of changes in its initial phase, when versioning is much simpler; with time this tends to evolve toward components having their own versioning, branches and tags, and to stabilize. And it stabilizes only if you are able to control the growth phase.

That said, I’m looking forward to the next releases of Ivy (1.4) and Ant (1.7.0) to simplify things even further.

There are a couple of things that can however be improved on Ivy but I will keep that to another post.

As for Spring Framework, let’s just hope that Spring does not turn into Fall.
But good luck guys because you will need it.

My other area where I would like to focus afterwards will be integration and deployment and I think my dear friend Steve has a couple of things I want :)

Oh and btw, Steve is also finishing the 2nd edition of Java Development with Ant (aka jdwa), published by Manning. Believe me, you don’t want to miss that one when it comes out, and I expect it to have as much success as the first edition. It is satisfying sometimes to know that quantity is synonymous with quality.

May 28, 2006

Fair trade Software Development

Filed under: Business,OpenSource,Process,Software — stephane @ 11:34 pm

With the abuse of the opensource model by most IT companies, the huge failure rate of software projects and the low rates paid to software engineers, it has been decided to create a new standard based on the Fair trade principles.

Software engineers employed by affiliated companies will have a long-term minimum rate considerably above the long-run average market rate, based on the income needed to support community projects such as opensource projects. Companies that are certified to meet the standard may, for a fee, display the appropriate Fairtrade and opensource symbols on their marketing and business documents. (The fee supports the work of the national monitoring body.)

Although Fair trade software services are typically somewhat higher-priced than equivalent non-Fairtrade software services, for many services the difference is relatively small (as long as sufficient volumes are involved). This is because although a considerable premium may be paid to software engineers (often 50-100% above market rate), it forms only a small part of the final project price; the Standish Group reports that most of the price is determined by invalid requirements, unclear business objectives, poor management decisions, improper planning, lack of executive support and NIH/reinvent-the-wheel syndrome.

Fair trade is incentive-based and relies on consumer choice. Consumers are therefore given the opportunity to increase the standard of living and quality of life of software engineers working on opensource products through a sustainable, market-oriented approach.

Considering the huge success of fair trade goods around the world[1], this model could be financially viable. However, we suspect that so-called ‘opensource IT companies’ that have been surfing on the opensource wave, indirectly luring consumers with a similar idea of fairness and extorting huge rates from consumers (including governments) without actually contributing back to community projects, may have already doomed such a model by losing the trust and confidence of customers and making them believe that opensource does not work.

 

[1] See Communication of the European Commission on fair trade

A EUROBAROMETER survey, conducted on behalf of the European Commission in 1997, reported that 74% of the EU population say they would buy fair trade bananas if they were available in the shops alongside “standard” bananas. A total of 37% of EU consumers said they would be prepared to pay a premium of 10% above the price of normal bananas for bananas of equivalent quality produced according to fair trade standards.

Further analysis of the survey replies revealed that people with previous experience of fair trade products are much more likely to buy fair trade bananas, and would be willing to pay more for them. More than 9 out of 10 (93%) of consumers who had already bought fair trade goods would be prepared to buy fair trade bananas, and 7 out of 10 (70%) would pay a premium of at least 10% over the price of normal bananas.

PS: This stuff is a joke and was devised by myself after a few glasses of Champagne over lunch with a group of friends. Still I believe there is some truth in it. ;)

April 17, 2006

FBI $170M Software Project Failure: Virtual Case File

Filed under: Process,Software — stephane @ 1:03 pm

There is a very long IEEE Spectrum article, “Who Killed the Virtual Case File?”, detailing a spectacular software project failure, known as Virtual Case File (VCF). VCF was apparently supposed to be the cornerstone of knowledge management for FBI agents and intelligence analysts. It would host millions of records containing information on everything from witnesses, suspects, and informants to evidence such as documents, photos, and audio recordings, and thus answer the criticisms from the 9/11 commission report: “the FBI’s information systems were woefully inadequate. The FBI lacked the ability to know what it knew; there was no effective mechanism for capturing or sharing its institutional knowledge.”

The countdown to catastrophe:

  • September 2000 – FBI IT Upgrade Project, later called Trilogy, funded for US $379.8 million.
  • September 2001 – Robert S. Mueller III replaces Louis J. Freeh as FBI director a week before the terrorist attacks of 9/11.
  • October 2001 – Robert J. Chiaradio advises Mueller on software he dubs the Virtual Case File and brings Larry Depew aboard.
  • January 2002 – FBI receives an additional $78 million to accelerate Trilogy.
  • February 2002 – Joint Application Development planning sessions begin; Sherry Higgins hired.
  • August 2002 – Matthew Patton hired by SAIC as security engineer.
  • November 2002 – SAIC and FBI agree on baseline requirements; Patton [above] leaves SAIC.
  • December 2002 – FBI receives another $123.2 million to complete Trilogy.
  • September 2003 – GAO reports that FBI needs an enterprise architecture.
  • December 2003 – Zalmai Azmi becomes acting CIO; SAIC delivers VCF.
  • March 2004 – Arbitrator finds that of 59 problems, 19 were FBI changes to requirements and 40 were SAIC errors.
  • June 2004 – FBI asks SAIC to develop Initial Operating Capability (IOC) for $16.4 million; FBI contracts Aerospace Corp. to evaluate the VCF.
  • January 2005 – Field trials of IOC begin; Aerospace Corp. delivers its report.
  • February 2005 – Final Office of the Inspector General’s report on Trilogy comes out; Senate hearing, 3 February.
  • April 2005 – FBI officially kills the Virtual Case File.
  • May 2005 – Mueller announces a new software project called Sentinel.
  • December 2005 – Contract for phase one of Sentinel to be awarded.

Most of the article suggests that the catastrophe was due to strained relations between SAIC and the FBI, which produced flawed requirements. Anyone who has worked with the public sector may relate this to their own experience.

Patton’s descriptions of the 800-plus pages of requirements show the project careening off the rails right from the beginning.
[...]
“Instead, we had things like ‘there will be a page with a button that says e-mail on it.’ We want our button here on the page or we want it that color. We want a logo on the front page that looks like x. We want certain things on the left-hand side of the page.”
[...]
“The customer should be saying, ‘This is what we need.’ And the contractor should be saying, ‘Here’s how we’re going to deliver it.’ And those lines were never clear,” Higgins said. “The culture within the FBI was, ‘We’re going to tell you how to do it.’”

The interesting quote from the FBI:

Azmi [FBI CIO] insisted that SAIC should have clarified user needs in the JAD sessions rather than working with requirements that were not “clear, precise, and complete.”

The conclusion of the article:

Even so, Ken Orr, an IT systems architect and one of Mueller’s graybeards, remains skeptical. He rated Sentinel’s chances of success as very low. “The sheer fact that they made that kind of announcement about Sentinel shows that they really haven’t learned anything,” Orr said, from his office in Topeka, Kan. “To say that you’re going to go out and buy something and have it installed within a year, based on their track record,” isn’t credible.
[...]
“I would guess that it would be closer to 2010 or 2011 before they have the complete system up and running,” Orr said. “That’s assuming that you have a match between the software and the underlying requirements, which we know are subject to change.”

April 16, 2006

Waterfall 2006

Filed under: Process,Software — stephane @ 3:02 pm

I came across the Waterfall 2006 website. Too bad some abstracts are missing and you cannot get any slides; I’m sure it would have been very popular within the many organizations that would take it seriously ;).

I love the abstract of Working Harder, Not Smarter:

While this technique can often lead to great success on its own, it can be synergistically enhanced through the deployment of the Hot Potato program management methodology to deliver increased levels of promotion to the majority of stakeholders in a project delivery; the Hot Potato methodology, often seen in organizations which combine an up-or-out approach to career management with practicing 360-degree reviews, allows hermetically-sealed bubbles of incompetence to float up a hierarchy by the simple expedient of never being left holding the delivery baby.

Some other interesting ideas:
