Saturday, February 22, 2014

A Review of the joint CNN and BBC production: "The War On Terror"

The War on Terror is the latest epic in the long-running World War franchise. The previous serial in the franchise, World War II, was slammed by the critics for its cardboard-cutout villains, unrealistic hero and poor plot-lines, although it actually achieved decent ratings.

The first season of Terror started with a retcon. At the end of World War II it looked like the Soviet Union had been set up as the Evil Empire for yet another World War, but the writers seem to have realised that replaying the same plot a third time wasn't going to wow the audience. So at the start of Terror we get a load of back-story exposition in which the Soviet Union has collapsed for no readily apparent reason, leaving America running a benevolent economic hegemony over the allies from the previous series and also its former enemies, Germany and Japan. There was also mention of a very one-sided Gulf War, apparently to emphasise that America's economic power was still matched by its military, even though it didn't seem to have anyone left to fight. Then in the second episode a bunch of religious fanatics from nowhere flew hijacked airliners into important buildings. While the premise may have been a bit thin, the episode managed a level of grandeur and pathos that the franchise hadn't achieved since the Pearl Harbour episode, with the special effects being used to build drama rather than just to provide huge fireballs. But after this promising start the rest of the season became increasingly implausible, with a buffoonish president launching two pointless wars on countries whose governments turned out to have almost nothing to do with the attack he was trying to avenge. The weak plot and unsympathetic characters make the last few episodes of the season hard to watch.

However in the second season the series grew a beard. The writers replaced the old president with a good-looking black guy who clearly wanted to do the right things, finally giving the audience someone to root for, and the focus switched sharply from armed conflict to corrupt politics. Instead of huge set-piece battles featuring ever more improbable weaponry, the drama now focuses on the political situation within America itself. The battles and weapons are still there of course, but they no longer drive the plot. Instead the president is shown as a tragic figure: he tries to stop wars, free prisoners and sort out his country's economic problems, but every time some combination of corporate executive, greedy banker and/or General Ripper blocks his reforms, sometimes with an obstructive bureaucrat thrown in for comic relief. He has his hands on the levers of power, but in contrast with his predecessor in World War II those levers don't seem to be connected to anything any more.

Although each episode stands on its own as a story, several plot arcs are becoming clearer as season 2 draws to a close. Events seem to presage the Fall of the Republic, a plot similar to the Star Wars prequel trilogy, but much better done. Whereas Lucas' Old Republic was destroyed by a single corrupt ruler who wanted to become The Emperor, the American Republic in Terror is being destroyed by the very things that made it strong in the previous series: its industrial capacity, financial power and military strength. This is most clearly seen in the episode Drone Strike, where the president was asked to authorise an attack by a remote-controlled aircraft against a suspected terrorist convoy on the other side of the world. America is one of the few countries with the technology and money to field these unmanned warplanes, and they have become an important part of American power. Then we saw the president's face as he was told that the supposed convoy had actually been a wedding party. At the end of the episode he was reduced to defending his actions at a press conference because the people who had got him into this mess were too powerful to sack.

At the same time there are stories of individual determination and hope set against the darker backdrop. The recent episode Watching the Watchers showed a soldier and a bureaucrat in different parts of the secret spy agency (or agencies; America seems to have several) independently deciding to rebel against the system they are part of by releasing embarrassing secrets to the public. The episode also revealed a hidden factor in previous plot lines. Fans are now reviewing old episodes, even back into the first season, looking for the throwaway lines and improbable coincidences which only now make sense.

The vision of the writers of Terror is now becoming clear: the real war on terror is not the one being fought with guns and robot aircraft, it is the one being fought in the shadows against a loose and ever-shifting coalition of rich, powerful individuals who have discovered that a terrorised population is willing to give them even more money and power, and who therefore want to keep it that way. The president's initiatives aren't being blocked by some grand secret conspiracy; it's just that all of these people know how to work together when they want to stop something happening. But this actually makes them more dangerous: in a conventional conspiracy story the hero just has to find the conspirators and unmask them, but that isn't going to happen in Terror. In one chilling scene a club of bankers get together for a party to laugh at the rest of the country for continuing to pay them huge amounts after they have wrecked the economy they were supposed to be running. A journalist sneaks in and tells the story, but it doesn't make any difference, because throwing a party is not a conspiracy.

So Terror goes into its third season in much better shape than it was at the end of the first. The writers have escaped from the constraints of set-piece battles between huge armies, and found instead a solid theme of individual heroism in a believable world of ambiguous morality and complex politics. It all makes for powerful drama and compelling viewing.

Friday, October 11, 2013

TV Resolution Fallacies

Every so often discussion of the ever-higher resolution of TV screens generates articles purporting to prove that you can't see the improvement unless you sit a few feet from the largest available screen. Most of these articles make the same three mistakes:

Fallacy 1: Normal vision is 20/20 vision

The term "20/20 vision" means only that you can see as well as a "normal" person. In practice it is the lower threshold below which vision is considered to need correction; most people can see better than this, with a few achieving 20/10 (that is, twice the resolution of 20/20).

Fallacy 2: Pixel size = Resolution

If a screen has 200 pixels per inch then its resolution, at best, is only 100 lines per inch, because with fewer than two pixels per line you cannot distinguish between one thick line and two separate lines. For the more technically minded, this is the spatial version of the Nyquist limit. Wikipedia has a very technical article, but the classic picture of a brick wall photographed at too low a resolution demonstrates the problem:

The pixel pitch is close to the height of a brick, which produces a moiré pattern: in some areas the pixels land on the middle of a brick, and in others on the white mortar.

So the resolution of the screen in the horizontal or vertical direction is half the pixel density. But it gets worse at any other angle, because the pixels are arranged in a grid. The diagonal neighbours of a pixel are √2 ≈ 1.4 times further apart than the horizontal and vertical ones, so the worst-case resolution is the pixel density divided by 2 × 1.4 = 2.8. Call it 3 for round numbers.

So the conclusion is that the actual resolution of the picture on your screen is about one third of its pixel density: it takes roughly three pixels to reliably resolve one line.
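For concreteness, here is a minimal Haskell sketch of that arithmetic (the 200 pixels-per-inch figure is purely illustrative):

    -- Effective resolution of a pixel grid, following the argument above.
    -- Nyquist: one line pair needs two pixels, so halve the pixel density.
    -- Diagonals: neighbours are sqrt 2 times further apart, so divide again.

    horizontalResolution :: Double -> Double   -- lines per inch
    horizontalResolution ppi = ppi / 2

    worstCaseResolution :: Double -> Double    -- lines per inch
    worstCaseResolution ppi = ppi / (2 * sqrt 2)

    main :: IO ()
    main = do
      putStrLn ("Horizontal/vertical: " ++ show (horizontalResolution 200))
      putStrLn ("Worst case diagonal: " ++ show (worstCaseResolution 200))

Running this prints 100 lines per inch for the horizontal case and about 71 for the diagonal worst case, which rounds to the factor of 3 used below.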

Fallacy 3: Resolution beyond visual acuity is a waste

The argument here seems to be that if HDTV resolution is better than my eyesight then getting HDTV is a complete waste and I would be better off sticking to my normal standard definition TV.

Clearly this is wrong: as long as my visual resolution outperforms my TV, I will get a better picture by switching to a higher definition format.

So when does HDTV become worth it?

20/20 vision is generally considered to be a resolution of 1 arc-minute. If we use the naive approach embodying all three fallacies, then one pixel on a 40 inch HDTV screen subtends 1 arc-minute at a distance of 62 inches, so some articles on the subject have claimed that unless you sit closer than that you don't get any benefit.

However on that 40 inch screen a standard definition pixel will be roughly twice the size (depending on which standard and what you do about the 4:3 aspect ratio on the 16:9 screen), so it will subtend 1 arc-minute at around 124 inches (just over 10 feet). Applying the worst-case factor of 3 from Fallacy 2, with 20/20 vision you will be able to separate two diagonal lines one pixel apart out to about 30 feet, and with 20/10 vision that goes out to 60 feet. So if you sit less than 30 feet from a 40 inch screen then you will get a visibly better picture with HDTV than with standard definition.

And what about Ultra HD?

By the same arithmetic, with 20/20 vision you can just about distinguish two diagonal lines one pixel apart on a 40 inch HDTV screen from 15 feet away, and from 30 feet if you have 20/10 vision. So if you sit closer to the screen than that then you will get a better picture with Ultra HD. And of course Ultra HD sets are often bigger than 40 inches. If you have a 60 inch set then the difference is visible up to 23 feet away with 20/20 vision, and 46 feet with 20/10.
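Those figures are easy to check. Below is a back-of-envelope Haskell sketch under the assumptions used in this post (16:9 screens, 1 arc-minute acuity for 20/20 vision, and the worst-case factor of 3 from Fallacy 2); the screen sizes are the ones discussed above:

    arcMinute :: Double
    arcMinute = pi / (180 * 60)    -- one arc-minute, in radians

    -- Pixel pitch in inches for a 16:9 screen of the given diagonal size.
    pixelPitch :: Double -> Double -> Double
    pixelPitch diagonal hPixels = (diagonal * 16 / sqrt (16^2 + 9^2)) / hPixels

    -- Distance in inches at which a feature of the given size subtends 1 arc-minute.
    acuityDistance :: Double -> Double
    acuityDistance size = size / tan arcMinute

    main :: IO ()
    main = do
      -- One 1080p pixel on a 40 inch screen: ~62 inches.
      putStrLn (show (acuityDistance (pixelPitch 40 1920)) ++ " in")
      -- Worst-case (diagonal) features span ~3 pixels: ~15.6 feet.
      putStrLn (show (acuityDistance (3 * pixelPitch 40 1920) / 12) ++ " ft")
      -- The same comparison on a 60 inch screen: ~23 feet.
      putStrLn (show (acuityDistance (3 * pixelPitch 60 1920) / 12) ++ " ft")

The 124 inch and 30 foot standard-definition figures follow by doubling the pixel size.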

So higher resolutions are not just marketing hype.

Final point: Compression artifacts

Digital TV signals are compressed to fit into the available bandwidth, and this shows up as compression artifacts: if there is a lot of movement across the image then you may see it become slightly blocky, and if you freeze the image then you can often see a kind of halo of ripples around sharp edges. Higher definition pictures are encoded with more data, so these artifacts are reduced. So even without the increased resolution you may still see an improved picture in a higher resolution format.

Friday, May 24, 2013

Elevator pitch for Haskell short enough for an elevator ride

Greg Hale has written an "elevator pitch" for Haskell. While it is certainly a good piece of advocacy, it is quite long, and therefore not an elevator pitch. The idea of an elevator pitch is something you can deliver in the 30 seconds or so that you find yourself sharing an elevator with a potential investor.

I've been looking for an effective Haskell elevator pitch for some years now, but the best I could come up with was that you can deliver software better, faster and cheaper because you need fewer lines of code. That just sounds like hype.

However I think I've now got something better. Here it is:

Conventional languages make the programmer construct both a control flow and a data flow for the program. There is no way to check that the two are consistent, and any time they are inconsistent you get a bug. In Haskell the programmer just specifies the data flow; the control flow is up to the compiler. That simplifies the program, cutting down the work and completely preventing a big class of errors.
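As a toy illustration (my example, not part of the pitch): to sum the squares of the numbers from 1 to 10 in Haskell you specify only the data flow, and the compiler decides the order in which values are computed:

    -- Pure data flow: no loop counter, no statement ordering to get wrong.
    sumOfSquares :: [Integer] -> Integer
    sumOfSquares xs = sum (map (^ 2) xs)

    main :: IO ()
    main = print (sumOfSquares [1 .. 10])   -- prints 385

The equivalent loop in a conventional language interleaves control flow (initialise, test, increment) with data flow (accumulate), and an inconsistency between the two is exactly the kind of bug the pitch is talking about.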

Monday, April 1, 2013

This post originally appeared as a response to this article in Forbes:

Thanks for this article; it's good to see some opinions on this subject backed up with numbers. I still think you are wrong, though.

First, your comparison with the US dollar ignores the effect of fractional reserve banking, which multiplies the ratio of GDP to monetary base by a factor of around 5. Taking that into account, US GDP is only around ten times its monetary base. Still a lot more than Bitcoin, I concede.

More importantly, Bitcoin is not a normal new currency. A normal new currency is launched by a government with a territory, citizens, a tax base and a GDP. All of these give those trading the currency some clues to the fundamental value of each unit. Bitcoin has no territory, citizens or tax base. It has a GDP, but one that depends on how much it is used, and usage seems to be growing. A better way to think of Bitcoin (as I argue here: http://paulspontifications.blogspot.co.uk/2013/01/bitcoin-as-disruptive-technology.html) is as a disruptive technology: at the moment it is principally of use to those who are poorly served by the incumbent financial industry, but as it improves it will increasingly move up-market by taking business from the incumbents. As it does so the Bitcoin GDP will increase by multiple orders of magnitude, and so therefore will the value of each Bitcoin.

A bubble is defined by the "bigger sucker" theory: the price will keep going up because there will always be someone willing to pay even more, because the price will keep going up. Bitcoin investment, on the other hand, is driven by a rational expectation that Bitcoin use will increase. If one has a rational expectation that Bitcoin GDP will support a much higher price in a few years' time then buying it now looks like a sensible investment. It might also collapse in a pile of bits, but as a speculative investment it's certainly worth taking a position in.

Disclaimer: I own some Bitcoins, and I'll think about selling in a couple of years.

Saturday, February 16, 2013

On Having an E-book Reader

Back in 2011 I wrote about the reasons why I wasn't getting an e-book reader. I had found that books generally cost more in e-book format than in dead-tree format, and I was nervous about the digital restrictions management (DRM) that e-books came with. These concerns were only increased when I read about Linn Nygaard, who had her Amazon account closed (and all her e-books effectively confiscated) for some unexplained violation of Amazon policy that was probably committed by the previous owner of her second-hand Kindle. The fact that her account was restored after 24 hours of world-wide outrage didn't reassure me; fame is fickle, and relying on it as an ally against a giant corporation would be unwise.

However as a result of that fiasco a number of articles were posted about the removal of DRM from e-books using Calibre, which is an open-source program for managing your electronic library on your computer and converting between formats. You have to download and manually install some plugins in addition to the standard distribution, but once they are installed you just add a DRMed book to your Calibre library, and it automatically gets the DRM stripped out.

In parallel with this, a long-running legal case between the US Department of Justice and a number of e-book publishers resulted in a settlement under which prices have dropped considerably, and are now significantly cheaper than the paperback price.

So that was my two main objections to an e-book reader dealt with: I could now share books with family in a similar way to a paper copy, and I was no longer paying the publisher extra to leave out the paper. So I asked for a Kindle for my birthday last year.

I'm very happy with it. I've downloaded some classics from the Gutenberg project, and also picked up some more obscure books like the updated edition of "Code and Other Laws of Cyberspace" that I have been wanting to read for a long time. Specialist books like this seem to be a lot cheaper in e-book format, presumably because so much of the cost of the dead-tree version is taken up by the overheads of short-run printing and storage. But even Anathem was only £2.99 (although it seems to be rather more when I look at it now).

My wife tried out my Kindle as well, and asked for one for Christmas. When Amazon proved unable to deliver in time I bought one from the local branch of Waterstones bookshop. This turned out to be a mistake: Kindles bought from Waterstones have the nice varied "screensaver" pictures swapped for a fixed bit of Waterstones branding that can't be changed (I've come across some instructions for replacing that image by logging into the Kindle using TCP over USB, but that particular back door seems to have been closed now).

Wednesday, January 30, 2013

Bitcoin as a Disruptive Technology

One of the most important ways of thinking about how technological and commercial forces create change is Clayton Christensen's notion of a Disruptive Technology. According to Christensen, new technologies generally start by serving a niche market that is currently under-served by the incumbent technology (and the companies that sell it). The new technology is ignored by the incumbents because they can't see how to make money from it: it doesn't fit their big profitable customers, and the niche market is too small to be interesting. Meanwhile the new technology matures and improves to the point where it can be used by the big customers, and then suddenly the incumbent technology (and the companies that sell it) is obsolete.

Like any model of a complex situation this doesn't cover every angle, but it's interesting to look at Bitcoin from this perspective. Christensen's approach in his book is to look at the problem from the point of view of a manager in an incumbent company (who sees a disruptive technology as a threat) or in a start-up company (who sees it as an opportunity). I'm simply going to look at the major market forces, and in particular the niche markets that Bitcoin might serve better than the incumbents, and the possible paths out from those markets.

The Underworld

An obvious group of people who are served poorly by the incumbent financial system are criminals. Unlike most markets this is a matter of deliberate design. Over the past century governments have systematically regulated all forms of financial transactions in ways that make it difficult to hide ill-gotten gains, or for that matter legitimately-gotten gains that might incur taxes. Banks are legally required to know who their customers are and report any transactions that seem suspicious, or merely large. For people who move millions of dollars around there are methods to avoid such scrutiny, but these are themselves expensive; the necessary accountants and lawyers don't come cheap. Hence there is a significant group of people who have a pressing need to avoid the scrutiny that comes when you move tens of thousands of dollars around the world, but who can't afford the infrastructure used by those who move millions.

Traditionally these people have used cash, but that is hard to do across national borders because you have to physically move the stuff, making it vulnerable to interception by the authorities or thieves. So Bitcoin, with its ability to slide across national borders without trace or cost, is very attractive.

The Grey Market

A related group are those who deal in stuff that is legal in some jurisdictions but not in others. Porn and gambling are two major businesses here. Certain drugs also fit into this category, but you can't move the product over wires, so it is vulnerable to conventional methods of interdiction (although that doesn't stop everyone).

Governments trying to control porn and gambling have generally followed the money. This tends not to work well with porn because there is too much available for free. But gambling needs to move money to work, and so authorities in several countries have attacked it at this point. Hence Bitcoin is very attractive in this market as well; punters can easily convert their local currency into Bitcoins, and if they manage to win something then they can convert their winnings back almost as easily.

The Unbanked

This is a catch-all term for people who have no access to conventional bank accounts, and hence have to deal in cash or barter.

Financial solutions for these people have traditionally been expensive and piecemeal. Moving money long distance is done by wire transfer, with hefty charges and the need to physically pick it up. Cash is difficult to hide from brigands and corrupt officials. Credit can be available, but only at punitive interest.

Things have improved; across Africa variations on themes of microfinance and mobile phone banking are changing the lives of millions, but they are still vulnerable. Local laws can limit access, and accounts in local currency are vulnerable to inflation. A stable currency that can be easily hidden and transferred quickly over long distances could meet a real demand, although it still can't provide credit. Mobile phone credit is already serving this role in some places, so something designed for the job should be readily adopted.

Actually holding Bitcoins requires rather more computing power than many third-world mobile phones can provide. But that is unlikely to be a problem for long. If MPESA can have an app, then so can Bitcoin.

Conclusion

Bitcoin looks like a classic disruptive technology: it has multiple niches in markets that are under-served by conventional money, and the grey market and the unbanked provide a ready path up to higher-value markets in places that are now reasonably well served by cash and credit cards. The black market will also provide market pull for Bitcoin uptake, but if that were the only early market niche then the mere use of Bitcoins would raise strong suspicion of illegal activity. The presence of legitimate, or at least lawful, uses of Bitcoin provides a rationale for those offering to extend conventional services to Bitcoin users and plausible deniability for those whose Bitcoins have in fact come from illegal operations.

Accordingly we should expect to see Bitcoin become a strong force in finance over the next decade.

Software has CivEng Envy

There is a school of thought which says that developing software should be like constructing a building. To make a building you have an architect draw blueprints, and these blueprints are then handed over to a builder who constructs what the architect has specified. According to this school of thought the problem with the software industry is that it doesn't create blueprints before it starts building the software. They look with envy at the world of civil engineering, where suspension bridges and tunnels and tall buildings routinely get finished on budget and schedule.

This is a superficially attractive idea; software is indeed difficult, and it would indeed be a foolish project manager on a building site who directed the builders to start laying the foundations before the plans had been finalised. But on a closer examination it starts to fall apart.

Suppose that a big software project does indeed need something analogous to a blueprint before starting on the coding. What, exactly, is a blueprint? What purpose does it serve? And where would that fit into the software lifecycle?

A blueprint for a building is a precise and complete specification of everything that will go into the building. The builder has the problem of assembling what the blueprint shows, but there is no ambiguity and no variation can be permitted. This is because buildings are safety critical infrastructure. The Hyatt Regency walkway collapse was a horrible example of what can happen when someone makes a seemingly innocuous change to the plans for a building. So before a building is constructed the plans have to be approved by a structural engineer who certifies that the building is indeed going to stay up, and by an electrical engineer who certifies that it isn't going to electrocute anyone or catch fire due to an electrical fault, and by a bunch of other engineers with less critical specialties, like air conditioning. The details matter, so the blueprints have to specify, literally, every nut and bolt, their dimensions, the metal they are made from and the torque to which they should be tightened (most of these things are standardised rather than being written down individually for every nut and bolt, but they are still part of the specification). Without this the structural engineer cannot tell whether the building is structurally sound. Similarly the electrical engineer must know about every piece of wire and electrical device. So by the time the blueprints are handed to the builder inventiveness and creativity are neither required nor allowed.

The only artefact in software development that specifies how the software will operate to this degree of precision is the source code. Once the source code has been written, it is to be executed exactly as written: inventiveness and creativity in its execution are neither required nor allowed. But those who promote the idea of "software blueprints" seem to think that something else, something more abstract, can be a blueprint, and that once these blueprints have been drawn the "construction" of the software (that is, turning these blueprints into source code) can proceed in an orderly and planned fashion, putting one line of code after the next in the manner of a bricklayer putting one brick on top of another.

But when you look at the artefacts that these people proffer, it is clear that they are nothing like precise enough to act as blueprints; they are more like the artist's impressions, sketched floor plans and cardboard models that architects produce during the early phases of design. These artefacts can help people understand how the building will be used and how it will fit into its environment, but they are not blueprints.

(By the way, the old chestnut about unstable software requirements being like "deciding that a half-constructed house ought to have a basement" fails for the same reason. The problem with unstable requirements is real, but the analogy is wrong.)

But if the blueprint for software is the source code, then the builder for software is the compiler. This should not be a surprise: when computer scientists encounter a task that does not require inventiveness or creativity, their first instinct is to automate it. If so, it is really civil engineering that should be envious of software engineering.

Software is unlike buildings in other ways too:
  • Buildings have very few moving parts and little dynamic behaviour, whereas software is all about dynamic behaviour (and buildings with dynamic behaviour often have problems).
  • Novelty in buildings is rare. I work in a three-storey steel-frame office block, on an estate of very similar three-storey steel-frame office blocks. Software, on the other hand, is almost always novel. If software to do a job is already available then it will be reused; I run Fedora Linux; I don't write my own operating system from scratch.
So please can we drop this half-baked analogy between writing software and civil engineering?