Sunday, August 12, 2007

Original responses to '"No Silver Bullet" and Functional Programming'

These are the comments originally appended to the article. Many of them are thoughtful and worth reading in their own right.

As with the article, please do not submit this to reddit.

  1. Chris Morris Says:

    “Experts in several different languages were given a simplified spec taken from a real problem and asked to implement it.”

    So, was the use of Haskell an integral part of simplifying the original spec given to the experts in the first place? To me, the essential difficulties of programming are in figuring out what the hell to really build in the first place. None of the programming languages in that study helped them do that. They skipped the bulk of the essential difficulties and went right into studying the effects on the accidental portion.

  2. Craig Says:

    I really enjoyed that, looking forward to your next blog.

  3. Functional Lover Says:

    You know why? Because all the colleges have been brainwashed by Java. Damned be they.

  4. Paul Johnson Says:

    There has been some discussion of this at


  5. Jonathan Allen Says:

    From the study referenced:

    > 1. The NSWC experiment was conducted in a very short time-frame with very little direct funding. Thus many corners had to be cut, including significant simplification of the problem itself; the largest of the prototypes was only 1200 lines of code.

    > 2. The geo-server specification was by nature ambiguous and imprecise, thus leading to some variation in the functionalities of the developed prototypes. (On the other hand, the specification is probably typical of that found in practice, especially during requirements acquisition.)

    > 3. The participants were trusted entirely to report their own development metrics. In addition, not all of the participants attended the first meeting at NSWC; those who did attend were advantaged.

    > 4. No guidelines were given as to what exactly should be reported. For example, development times usually included documentation time, but there were major variations in the code-size/documentation-size ratios. In addition, different methods were used to count lines of code for each language, and it is not clear how one line of code for one language relates to that of another.

    > 5. The review panel had very little time to conduct a really thorough review. For example, none of the code was actually run by the panel; they relied on the written reports and oral presentations.

    The problem was grossly simplified, the code was rushed, the line count and time spent numbers questionable, and the final application never run.

    This is your proof?

    We don’t even know whether those 85 lines of Haskell actually work with good inputs, let alone whether they correctly respond to bad data.

  6. Mike Griffiths Says:

    I think you’re missing the point with regard to the whole process of software development. Even order-of-magnitude improvements in the process of writing code have no impact on the problem of writing the right code - defining right as “fit for the required purpose” - once the problem domain exceeds a remarkably small limit.

  7. v Says:

    While the arguments you present in the bulk of your post are convincing, the hard data presented hardly seems so. How does low LOC = high productivity, except in the weird world where CMM makes sense?

    What gets put down as code is but the distillation of all the knowledge stored in a programmer’s head, and functional languages do demand a higher overhead of that storage space than imperative ones (more for historical reasons, I admit, but there it is).

    Moreover, code building tools reduce the effort to actually input those high LOCs with automation that is improving by the day.

    Can you present a better quantitative metric than LOC counts for functional languages being an order of magnitude better?

  8. thomas lackner Says:

    You should link to information about the STM implementation in Haskell; it sounds very interesting.

    I refuse to comment on the constant debate about programmer efficiency, as I’m sure the same program would be 72 characters in K/Q!
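
    For readers who haven’t seen the STM mentioned above: GHC ships it in Control.Concurrent.STM. Here is a minimal sketch (illustrative only - the `transfer` function and account representation are this editor’s assumption, not from any cited source) of an atomic transfer between two shared variables:

    ```haskell
    import Control.Concurrent.STM

    -- Move an amount between two shared accounts atomically.
    -- If the balance check fails, `retry` blocks the transaction
    -- until another thread changes one of the TVars it has read.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      if balance < amount
        then retry
        else do
          writeTVar from (balance - amount)
          bal <- readTVar to
          writeTVar to (bal + amount)
    ```

    Run with e.g. `atomically (transfer a b 30)`: the whole block either commits as a unit or is retried, with no locks in user code.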

  9. hxa7241 Says:

    I have recently translated a minimal global illumination renderer from C++ to OCaml. It is about half the size — similar to the Ruby translation. The difference is in having class interfaces defined separately, and other small things.

    The implication that functional is so much more compact and simple than imperative is wrong. Look at the granularity of abstractions: they are practically the same. If you still code with the same sized building blocks it will demand similar effort.

    Also, well-made imperative code has many restrictions on state interaction. It is not so far away from functional code.

    The only foreseeable way to greatly improve productivity is with reuse: more and better libraries/frameworks and ways of using them. (As Brooks says.)

  10. Achilleas Margaritis Says:

    As always, benchmarks are biased.

    Getopt is a parser.

    You can write a command line parser in a few lines of code in C++ using a library like boost::Spirit. For example (code not tested, just an illustration):

    rule letter = range('a', 'z') | range('A', 'Z');
    rule digit = range('0', '9');
    rule alphanumeric = digit | letter;
    rule id = letter << *alphanumeric;
    rule num = +digit;
    rule cmdLine = *('-' << (id | num));

    There are many tasks in which functional programs are shorter and more concise, but these advantages are not due to the functional nature of a program, but due to design choices (better syntax, no semicolons, lambda functions etc). These advantages can happily exist in imperative languages too. The only difference is referential transparency, but, for me, it gets in the way.

    Can we please see an MVC application in Haskell where the model is cleanly separated from the view?

  11. Achilleas Margaritis Says:

    As always, results are biased.

    The command line parser could be written in C++ with a Spirit-like framework like this:

    rule id = letter << *alphanumeric;
    rule num = +digit;
    rule cmdline = *('-' << (id | num));

    The advantages of FP come from concepts that can also be used in imperative languages: lambda functions and closures (Smalltalk/Ruby), simplified syntax (Ruby), etc. On the other hand, in pure FPs many things are unnecessarily complex.

  12. anonymous Says:

    Your argument about the lack of state in functional programs is fundamentally flawed. FPs appear to have no state simply because they describe what would happen (sort of). However, in order for an FP to, well, run, someone, somewhere must pull a lever so that the wheels start turning according to the spec of the FP, and what ticks under the FP ultimately does have state. I am not suggesting that there are no gains from functional programming, but the notion you offer is bogus as it stands.
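
    The split the commenter is describing can be made concrete. In Haskell the pure description and the effectful execution are visibly separate pieces (a minimal sketch; the `total` and `report` names are illustrative, not from the study or the article):

    ```haskell
    -- A pure description: no state appears anywhere in this definition,
    -- and the result depends only on the input list.
    total :: [Int] -> Int
    total = foldr (+) 0

    -- The "lever" the commenter mentions: only when the runtime executes
    -- an IO action do real-world effects (and hence state) come into play.
    report :: IO ()
    report = print (total [1, 2, 3, 4])  -- prints 10
    ```

    The machinery underneath is stateful, as the commenter says; the claim at issue is only that the programmer-visible part above the IO boundary is not.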

  13. Edward Ocampo-Gooding Says:

    A popular excuse for why programmers don’t all immediately jump into functional programming is that it’s not as intuitive a paradigm as the procedural style.

    I’d like to see a psychology study that measures programming proficiency vs. time trained with “fresh” students unaware of either paradigm and have two separate groups be trained.

  14. You wish Says:

    And yet Lisp, which is considered a functional language, has been around since 1962 (it was spec’ed in 1958), and Brooks doesn’t point to it in his paper. Functional programming isn’t new, and it is certainly not a silver bullet.

  15. Sam Carter Says:

    The reason why the whole world isn’t using functional languages is very simple: shitty library support. The real silver bullet for most programmers is access to large, high-quality libraries, the primary examples being the .NET runtime and Java’s libraries. Programming tests that require writing 100% of the code from scratch are just flat-out inaccurate, because you are measuring the wrong thing. The bulk of modern development time isn’t writing code from scratch; it’s writing code that interfaces with libraries for networking, or XML parsing, or database handling, or whatever (see for a fuller treatment of this topic). The functional programming language community is more interested in marginalizing its access to external systems than in making it easier (research into monads being a great example of that).

  16. Larry O'Brien Says:

    I intended to post the URL of some thoughts on your post, but your comment engine diagnoses dashes in URLs as indicative of spam. (Not so, I think.) Anyhow,,guid,659c9535-6674-48f9-a4f9-8bc34fe724b5.aspx

  17. Paul Says:

    Comparing Haskell to C compares a variety of design parameters at once: strong versus weak-ish typing, garbage collection versus hand collection, high level data types versus low-level ones, different libraries etc.

    Based on my experience with functional languages and other high-level languages like Python, Ruby, Smalltalk, Javascript, Perl, C#, REXX etc., I would guess that those other factors are MUCH MORE likely to be relevant than just the functional programming paradigm.

    In addition, comparing lines of code for PROTOTYPING is pretty uninteresting. I’d like to compare lines of code for a functional application used by customers. What if the design decisions in a particular language are focused on robustness and maintainability?

  18. Larry O'Brien Says:

    I’ve posted a reply at my blog. Unfortunately, I can’t paste the exact permalink, which your comment engine wrongly insists is spam.

  19. Indeed You Wish Says:

    Certain problems are more easily solved with certain tools — let’s not fool ourselves, and believe FP solves all problems better.

  20. Jeff Says:

    You can make any language look good by comparing it to C++. If being functional is the magic bullet, then why do stateful languages like Python, Perl, Ruby, Lua, Lisp, etc. always do just as well as Haskell in this sort of comparison?

  21. Stuart Dootson Says:

    Unlike the rest of your commenters (as far as I can tell), I *have* used Haskell for real-world problems, and can confirm that (for the problems I’ve used it for), it is significantly more productive than imperative languages like Java, C++ or Ada.

    I don’t think it’s quite ready for the mainstream, but it’s definitely got promise.

  22. Not Silver But Still Very Good Says:

    I’m very sceptical about LOC as a measure. However, here’s a very recent informal data point, which I’m mentioning partly because it’s notable for the relatively controlled nature of the comparison: Lucian Wischik at Microsoft recently had to rewrite a 6,000 line F# program in C# (C# was required for unrelated reasons). It became 30,000 lines (references to comments by Lucian below). Now, this comparison is with essentially the same .NET libraries (F# adds a few minor libraries), the same (talented) programmer, the same .NET runtime, the same performance profile, the same problem specification, and the C# code has the advantage of being a reimplementation.

    See Lucian’s comments on the Google group at microsoft.public.dotnet.languages.csharp at

    More extensive analysis of the differences by Lucian at

  23. Josh S Says:

    “However, in order for a FP to, well, run, someone, somewhere, must pull a lever so that the wheels start turning according to the spec of the FP, and what ticks under the FP ultimately does have state.”

    That’s not the point. The point is that an FP utilizes a framework that *safely* translates from an abstract language without a concept of state to a concrete language that is basically nothing but state.

    Likewise, object oriented languages do not actually have objects — whether they are detected at compile time or run time, they ultimately map to swaths of memory and functions.

    It’s all smoke and mirrors. But it’s the magic of the smoke and mirrors that enhances our productivity. By building the funnel from a “safe”, “imaginary” language to the “dangerous”, “unproductive” one, we eliminate the problems inherent at the lower level.

  24. Joel Says:

    Functional programming is not usually adopted because so many real-world systems are almost entirely side-effects. Look at all the things a shell script does; it’s changing system state. What’s that JDBC app doing? Changing database state.

    In the world of web applications, you’re always doing I/O. It’s almost all I/O. And that is state (of your I/O streams).

    I’ve been looking at Erlang a lot recently. I have a serious jones for its concurrency constructs. They are as awesome as advertised. But man, you can’t do a decent thing with strings in that language. Seriously, it’s not much better than C. Python, Perl, even Lisp can do a lot of manipulation of the strings - things you have to do in parsing protocols and HTML and so on. But Erlang here is almost a non-starter. Ugh!

    Then there is the subject of libraries. Some major languages provide many things you don’t have to reimplement. But do the functional languages? Lisp has been around the longest, but try getting a GUI library that works on all of SBCL, CMUCL and CLisp. Or a decent GUI on Erlang at all. Even Paul Graham, champion of Lisp as Solution To Everything, admits that you need massive libraries to be useful - it’s a goal he has for Arc.

    At least Haskell has GTK+ bindings. I haven’t explored Haskell enough to comment on it. I just wonder why folks haven’t used it to do Yahoo! Store. If it’s that good, it will make someone money.

  25. fez Says:

    I would love to see a shootout between the various web frameworks (including any based on functional languages).

    Something like the following:
    - well-defined spec
    - basic CRUD functionality for the most part
    - some integration(s) with a third-party API (thus existing off the shelf libraries will be of use)

    Let the best coders from each language duke it out. Record time spent for each segment of app development & log all commits to a Subversion repository.

    Have a panel of neutral judges look at not just the time spent but also feature completeness and any other niceties the teams added in the time allotted, and give a score to each team.

  26. Ulf Wiger Says:

    More references were asked for. Here’s one:

    “Comparing C++ and Erlang for Motorola Telecoms Software”
    carried out by Heriot-Watt University together with Motorola.

    Two applications were re-written from C++ to Erlang, and one of the applications was benchmarked. The pure Erlang version was 2-3x faster, much more robust, and had 1/3rd to 1/18th the number of lines of code, depending on how you compare the libraries. Detailed analysis suggested:

    “- Code for successful case – saves 27%
    - Automatic memory management – saves 11%
    - High-level communications – saves 23%” (slide 31)

    Like all other studies, this one can certainly be debated. Does anyone have a reference to a study concluding that functional programming does NOT lead to a productivity increase?

  27. tndal Says:

    You omitted that ISI Relational Lisp scored the most rapid development time by far: less than half the time required by Haskell.

    Although the line count was greater, this version of Lisp cleaned the floor with Haskell as a productivity tool. And Relational Lisp is a mirror of Prolog.

  28. Jason Says:

    Some points:

    - LOC does not imply that less effort was involved in crafting the Haskell solution. Just less effort in typing it in.

    - Some problems have greater susceptibility to different approaches. One small “real world” problem is completely inadequate.

    - Double-blind experiments in this regard are, in fact, impossible, so this will remain a bench-race forever.


  29. Joel Reymont Says:

    Paul, you may be interested in the latest article in my blog. See “Re-birth of a trading platform”.

    Thanks, Joel

  30. Larry O'Brien Says:

    Jason: “LOC does not imply that less effort was involved in crafting the Haskell solution. Just less effort in typing it in.”

    Not so. In industrial systems, lines of code produced per month are essentially constant, regardless of language. Also, defect rates per KLOC are essentially constant, regardless of language (ref. Capers Jones’s work on “language levels”). (In small programs you definitely see greater variation.)

    I agree with your other points.

  31. Larry O'Brien Says:

    “Not Silver But Still Very Good” references Lucian’s claims. The threads have little substance. I’ve been following Lucian’s website, and I feel a grain of salt is called for. For instance, he wrote an F# program that displays a teapot in Direct3D; fair enough, but he makes it sound like the lighting and manipulation in 3D come from a few lines of F# when, in fact, the “teapot in a viewport” is canned functionality that can be done in a few lines of _any_ language.

  32. Ben Moseley Says:

    The thrust of your article is very close to the thrust of our “Out of the Tar Pit” paper which investigates FP (along with the relational model) as being very relevant to the Silver Bullet question - .

  33. Not Silver But Still Very Good Says:

    Re the teapot - you’ve got the wrong guy: Lucian works at Microsoft, and has never touched a DirectX teapot. I think you’re thinking of Flying Frog consultancy.

  34. Ulf Wiger Says:

    I totally disagree with the idea that LOC doesn’t matter. It’s not just a matter of the effort of typing in the code. Much of that extra code represents “unnecessary” detail that distracts the reader, hides the core logic, and also needs to be maintained. I’ve seen projects that had problems keeping “boilerplate” code consistent once the system got complex enough. Some projects resort to modeling languages that generate the boilerplate. Judged as programming languages, these tools are usually quite crappy, but their supporters defend them on the grounds that they are still a lot better than having to code the stuff by hand. Yet there are good programming languages that work at the same abstraction level as those modeling tools. We’ve also found that the learning curve for e.g. Erlang is much shorter than for e.g. UML or C++, contrary to the statement that FPLs would be more difficult to learn, or less intuitive.

    My own conclusions are based on 10 years of working in and around very large software projects, with code volumes on the order of hundreds of thousands of lines, or even millions. I’ve had the opportunity to review several projects using C++, UML, Java and Erlang. I get the feeling that many of the comments above come from very limited comparisons. That’s ok - you have to start somewhere. For me, it took 2-3 months to properly un-learn C++ and OO when I first started with Erlang.

  35. anonymous Says:

    Josh S:

    “That’s not the point. The point is that an FP utilizes a framework that *safely* translates from an abstract language without a concept of state to a concrete language that is basically nothing but state.”

    Brooks was talking about the complexity arising out of the sheer multitude of possible states (or combinations thereof). In FP, you retain (at least in part) this very complexity in order to be able to produce any useful behavior, and whether this complexity is state-based or not, is totally irrelevant.

    What’s left to argue about is whether complexity is significantly reduced by imposing the stateless view, and I for one am still a bit sceptical about any radical claims on that account, especially if well-designed programs/systems from both domains are compared.

    Another point that comes to mind is that, paradoxically, FP actually inhibits stateful programming when you need it (and you almost always do when designing and implementing systems) by making the complexity of combining states explicit. Inherently stateful models, on the other hand, let you hide some of this complexity, albeit at the still-present risk of coming up with something inconsistent.

    It pays to note that, compared to the current state of the art in PP/OOP, the FP way of combining state is clunky even with the help of monads and monad transformers, in case you wanted to bring those into the argument.
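
    For a concrete picture of the explicit state threading being criticized, here is a minimal sketch using `mapAccumL` from base’s Data.List (the `numberLines` function is this editor’s illustration, not anyone’s cited code):

    ```haskell
    import Data.List (mapAccumL)

    -- Thread a counter through a traversal without mutation: the
    -- accumulator's flow is explicit in the type of `step`, which is
    -- exactly the explicitness the comment describes as clunky (or,
    -- depending on your view, as honest).
    numberLines :: [String] -> [(Int, String)]
    numberLines = snd . mapAccumL step 1
      where
        step n s = (n + 1, (n, s))
    ```

    For example, `numberLines ["foo", "bar"]` yields `[(1, "foo"), (2, "bar")]`; an imperative version would hide the same counter in a mutable variable.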


thesz said...

"So, was the use of Haskell an integral part of simplifying the original spec given to the experts in the first place?"

No, that was done by NSWC, Naval Surface Warfare Center.

Jim said...

This blog’s information is very rich. I like it very much.