Sunday, July 12, 2009

How many copies is Jammie Thomas-Rasset responsible for?

So Jammie Thomas-Rasset did indeed download and share music using P2P. Others have commented on the disproportionate damages awarded in this case. Here I want to address a narrower question: how many copies should she be charged for?

The RIAA argues that Thomas-Rasset was sharing 1,700 audio files, and that 2.3 million P2P users had access to those files. This is "the equivalent of standing on a street corner with approximately 150 CDs and giving away free copies to anyone who asks".

But clearly Thomas-Rasset didn't actually create 2.3 million copies of those 1,700 files. Even if she had, the people who downloaded the files must share some responsibility, since they actively chose to make copies (unlike Thomas-Rasset, who merely chose to make them available). The RIAA seems to believe that because we don't know how many people actually downloaded these tracks from Thomas-Rasset's computer rather than from someone else's, statutory damages are justified.

But this raises an important question: if a million people decide to share and copy some music track over P2P, how many copies do they make? The answer is quite obvious: one million users, one million copies. Nobody is going to fill their disks with thousands of copies of the same song, and even if they did it wouldn't make any difference.

(Actually quite a few users will have also made temporary copies on MP3 players and the like, but since doing that with a CD you own is fair use, that seems pretty irrelevant.)

So at most Thomas-Rasset is guilty of having made 1,700 illegal copies of 1,700 songs. The fact that these songs were also available for others to download is irrelevant to the damages argument. Maybe 1,000 people made copies of these songs, maybe none. Civil law (which this is) is based on the balance of probabilities, and on the balance of probabilities the total responsibility of Thomas-Rasset for the copies made by the network of which she was a part is one copy of each song. These songs are sold for download for a dollar each, which suggests that, absent punitive damages, Thomas-Rasset's true liability should be $1,700. Even with triple damages for deliberate actions, that would still only be $5,100.
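The arithmetic is simple enough to check directly. A minimal sketch, using the post's own figures (1,700 tracks, the dollar-a-download price as a proxy for per-copy value):

```python
# Back-of-envelope check of the liability figures argued above.
tracks = 1_700
price_per_track = 1.00   # the roughly one-dollar download price, in dollars

actual_liability = tracks * price_per_track
treble_damages = 3 * actual_liability   # "triple damages for deliberate actions"

print(actual_liability)  # 1700.0
print(treble_damages)    # 5100.0
```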

But of course, this isn't about justice for artists, this is about the business model of the RIAA members.

Tuesday, June 16, 2009

Vote Windows, for No Tax On Sheep!

Once upon a time, so I have heard, there was an Australian politician who ran on a platform of "No Tax On Sheep". None of the other candidates would have dreamed of trying to tax sheep. Nevertheless, our hero won by a landslide.

It's a nice trick: claim a common virtue, not because it differentiates you, but because it casts a subtle doubt on whether your competition has the same virtue. It involves no actual lies, and you don't even need to name the competitors; they are all equally suspect, and must either ignore you or issue hasty and unconvincing claims to be just as good.

Advertisers love this one. In advertising "knocking copy" is the term for pointing out the defects in your competition. But blatant knocking copy is usually counter-productive: potential customers see it as mean, and it also carries an implicit message that the product you are knocking is a significant competitor (otherwise why spend money knocking it). But the "no tax on sheep" strategy neatly sidesteps this; you don't name your competitors and you don't say anything bad about them. You merely imply that they are not as good as you.

So it's no surprise to see Microsoft using this strategy against Linux. The Microsoft page on Netbooks has this to say on why you want Windows on your Netbook:

For starters, you already know how to use it. Windows also works with more devices and applications, and offers security features to help keep your PC and personal information safe. Here's a whole list of other Windows benefits:

  • Windows makes your life simpler. Windows is easy to set up and use. You already know your way around, which means you can get started right away and easily find everything you need.

  • Windows just works with your stuff. With Windows, it's "plug and play"—meaning it works out-of-the-box with your PC applications and peripherals such as printers, digital still and video cameras, music players, and webcams.

  • Windows gives you peace of mind. Windows comes with features and tools that help keep your family and personal information safe. When you install Internet Explorer 8 as your Internet browser, you get additional built-in security features that help protect against deceptive and malicious websites.

  • Windows comes with more goodies. With Windows running on your netbook PC, you get access to additional software and services provided for free, like Windows Live services, which help with photo editing and organization, instant messaging, and family computing safety—all of which are just a download away.

I've highlighted the items which (in my opinion) apply to Linux as well. About the only thing I can't honestly claim is the "Windows is familiar" line. I can't deny that if you have only ever used Windows then Linux is going to be a bit different. It's not a huge difference, but there is no point pretending it's zero either.

So, I wonder, how could we turn this trick around? What can we say about Linux that casts doubt on Windows? To work, the implication has to be something that is truthy, regardless of how true it actually is. How about:
  • Won't send your files to the US government. Linux has no stealth functionality, and it wasn't written by people with big Government contracts.
  • No Blue Screen of Death! Linux was designed to be reliable.
  • Puts you in charge. You decide what software to install, and if you don't like it you can always remove it.
  • No extra fees just to get a feature you thought you had already bought.
But of course none of this will work in reality. Deceit of any kind and open source are fundamentally incompatible.

Saturday, June 13, 2009

An Optimistic View of Net Neutrality

The Net Neutrality argument has been puttering along for a few years in the US, and is now starting to get going over here in the UK as well.

On one side are the ISPs, most of whom are telephone and TV cable companies. They are used to being a choke point between the consumer and the rest of the telecoms world, and their managers all have MBAs. The latter point is important because analysing your position in the value chain is a big part of the MBA curriculum, and everyone knows that being a commodity provider is a bad place to be. So the ISPs are desperate to re-establish their choke-hold, which means doing something that forces the other players to treat with them rather than the competition. Hence they want the right to cut deals with, for instance, streaming video providers. If you want to see good quality TV over the Internet from the Foo Video Company, you have to go with an ISP that has a deal with Foo Video, which restricts your choice as a consumer and therefore allows the ISP to charge more. The ISPs also want to charge Foo Video on the grounds that Foo must want its end users to get good quality video and will therefore pay for the privilege.

Everyone else in the value chain (including the domestic Internet users) is up in arms at this prospect for the excellent reason that if the ISPs can capture some of the money flowing around this part of the economy then there will be less for everyone else. They want the ISPs to be required to stay neutral on the grounds that anything else will increase prices (which, from the point of the ISPs, is the point) and restrict new developments because start-up companies won't have the financial and market muscle to cut the necessary deals.

Unlike many commentators, I believe that in the long run this argument will be irrelevant: whatever various governments do on the subject will make little or no difference. I believe this because of Metcalfe's Law.

Bob Metcalfe's original formulation of his law said that the value of a network is proportional to the square of the number of users. In fact that is only true if everyone on the network is equally interested in communicating with everyone else. In reality that isn't so, but whatever the pattern of communication, the value of a network still rises more than linearly with its size, which is what matters.

The History of Metcalfe's Law 1: Peering versus Transit

Back in the late 1990s there was a big argument over network peering. There were big ISPs and little ISPs. Big ISPs had all the same costs as little ISPs in running points of presence for modem users to phone into, but they also had to run long-haul bandwidth terminated by big routers. A small ISP could hook into this infrastructure just by connecting to a big ISP, which gave the small ISPs a competitive advantage and left the big ISPs without a business case for running that vital long haul infrastructure.

At the time the only kind of connection between two ISPs was "peering", which means that two ISPs share a connection point where they exchange traffic. So the big ISPs implemented a new policy:
  1. Henceforth peering connections would only be made between ISPs of the same size, and would only carry traffic bound for customers of the other ISP.
  2. Small ISPs could pay for "transit" connections to big ISPs. A transit connection would carry traffic bound for anywhere.
There was a big argument over this change. Small ISPs complained that it would lead to the "balkanisation of the Internet", by which they meant that customers of ISP X might not be able to communicate with customers of ISP Y. But in practice that didn't happen. The peering/transit distinction continues to be the mechanism by which the Internet is organised commercially, but the Internet was never balkanised because Metcalfe's Law meant it was never in anyone's interest.

To see why this is, consider what happens to the value of a network cut in half. If the value of a network were purely proportional to its size then you would now have two networks each worth half as much; the total value would be unchanged. But Metcalfe's Law says the position is worse: the total value of the two halves is less than the value of one big network.
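The arithmetic is easy to sketch, assuming the pure square-law form of Metcalfe's Law (real traffic patterns give a gentler curve, but any super-linear growth yields the same conclusion):

```python
# Value of a network under a pure square-law Metcalfe model, compared
# with the same users split into two disconnected halves.
def metcalfe_value(users):
    return users ** 2

n = 1_000_000
whole = metcalfe_value(n)
halves = 2 * metcalfe_value(n // 2)   # two networks of n/2 users each

print(halves / whole)  # 0.5: the split destroys half the total value
```

Under a linear model the ratio would be 1.0; it is the super-linearity that makes splitting a network value-destroying.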

So the only way to make money from splitting a network in two is by capturing value faster than you destroy it. But there just isn't that much loose value floating around the Internet, and all the companies involved found that they made more money by keeping everyone connected. So they did.

The History of Metcalfe's Law 2: Walled Gardens

Another bright idea floating around in the early days of the commercial Internet was the Walled Garden. This was pioneered by CompuServe in the days before dial-up Internet access, but MSN and AOL were created with similar concepts. The basic idea was to provide a premium branded information service that would be exclusively available to dial-up subscribers. Internet access was added by these providers later because their customers demanded it, but the business model was based on selling a value-added subscription information service that would be better than the free Internet.

The whole idea flopped. Not enough subscribers were prepared to pay the premium. The basic problem was, and is, that as soon as you put up a paywall between your content and non-subscribers you split the network in two: there is the bit you own, and the rest of the network. Your bit is very small, and Metcalfe's Law says that it therefore has a much lower value. So nobody wants it.

Metcalfe's Law and Network Neutrality

So what about the network neutrality debate? If you googled for "balkanise Internet" a few years ago you got lots of articles from the 1990s arguing about ISP peering and transit fees. Today the same search yields just as many arguments about network neutrality.

The ISPs want to create a new version of the Walled Garden: information providers will pay to get better links to their end users, and certain protocols will get higher bandwidth than others. I believe that this will suffer the same fate as all the other attempts to fight Metcalfe's Law. An ISP that places artificial boundaries between Internet users (wherever they fit in the Internet's various value chains) is lessening the value of the Internet as a whole to those users. If it makes a significant difference to the users' experience then it is offering less value, and will lose market share as a result. If it doesn't make a difference then nobody will care. Either way it's not a paying proposition.

Therefore network neutrality will dominate because anything else is less valuable, and therefore will drive away customers.

There is one application where non network-neutral services may survive for a while, which is TV over the Internet. A DVD-quality video stream is around 5Mbits/sec, or about 2GBytes/hour, which is a bit too high to just transport over the Internet to a home internet connection. Never mind what happens when two people in the same household want to watch different things. So for a while it makes economic sense for ISPs to cut special deals with TV providers. But that's nothing new: the economic model is identical to cable TV. Existing cable TV companies would probably like to keep it that way, but their power to do so is very limited. Sooner or later some combination of improved ADSL, fibre to the curb/home/street and wireless connections will provide commodity bandwidth high enough to carry decent video, and cable TV companies will cease to exist.
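The bandwidth figure above is easy to verify. A quick sketch of the conversion, using decimal units:

```python
# Convert the DVD-quality stream rate quoted above into GBytes/hour.
mbits_per_sec = 5
bytes_per_hour = mbits_per_sec * 1_000_000 / 8 * 3600   # bits -> bytes, secs -> hours
gbytes_per_hour = bytes_per_hour / 1_000_000_000

print(gbytes_per_hour)  # 2.25, i.e. "about 2 GBytes/hour"
```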

Saturday, May 2, 2009

UK to store more "communications data"

The UK has a rather odd regime for intercepting on-line communications. The law distinguishes between the content of communications and "communications data" (such as a list of the telephone calls you have made). Content cannot be used in criminal investigations, although it is widely believed that the security services intercept a lot of it as part of their general signals intelligence activity. However "communications data" is available to pretty much any part of Government provided that a sufficiently senior person signs off on the request. No warrant is needed.

The UK Government is consulting on proposals to require ISPs to gather and store a lot more "communications data". At present they basically have to capture the IP header data: origin, destination, port number and protocol (TCP/UDP). Volumes and times are likely to be recorded, and of course your DHCP lease, so that a physical machine can be tied to the IP address. So with this they can see where you browsed, but not what you actually read. But the Government argues that this is increasingly insufficient. If it's investigating suspicious activity, such as a big purchase of fertilizer, it needs to find out who the suspect is communicating with. If the suspect uses an off-shore email service such as Gmail then the existing communications data regime can't get at this.
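To make "IP header data" concrete, here is a minimal sketch of what an ISP can read without opening the payload. The packet bytes are constructed purely for illustration, using reserved documentation addresses:

```python
import struct

# Build a hypothetical 20-byte IPv4 header, then parse out the fields
# that count as "communications data": source, destination, protocol.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,              # version 4, header length 5 words
    0,                         # DSCP/ECN
    40,                        # total length
    0, 0,                      # identification, flags/fragment offset
    64,                        # TTL
    6,                         # protocol: 6 = TCP
    0,                         # checksum (not validated here)
    bytes([192, 0, 2, 1]),     # source address (TEST-NET-1)
    bytes([198, 51, 100, 7]),  # destination address (TEST-NET-2)
)

fields = struct.unpack("!BBHHHBBH4s4s", header)
protocol = fields[6]
src = ".".join(str(b) for b in fields[8])
dst = ".".join(str(b) for b in fields[9])
print(src, dst, "TCP" if protocol == 6 else protocol)
```

Strictly speaking the port numbers live in the TCP or UDP header that follows, so even today's logging regime already peeks slightly past the IP header itself.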

So the Government's proposed solution is to require ISPs to capture and store more data, including "third party data", which seems to mean anything inside the packets that helps to identify the people you are communicating with.

The problem with this is that it's an impossible job. Most stuff that is even vaguely sensitive (like Gmail) is sent over HTTPS as a matter of course. Even if that could be overcome, communications data is simply too slippery a concept once you get away from IP headers. For instance, MMORPGs like Runescape and World of Warcraft (and for that matter Second Life) have "chat" facilities that use game location to determine who can "hear" you. Suppose a conspiracy decides to meet at the edge of a forest in Runescape: their location is exactly the sort of communications data that the Government wants to capture. Doing so requires someone to reverse engineer the existing Runescape protocol and then write software to extract just that part of the data. This software would then have to be run by every ISP in the UK, and the whole exercise repeated for every other multi-player game and interactive website.

This would be a huge effort, but it would still fail to capture data about any criminal who had a modicum of technical knowledge, because there are too many options for setting up channels that the Government cannot intercept. For instance encrypted messages could be left as Usenet posts. Freenet exists specifically for avoiding surveillance. And SSH tunnel services can let you use any Internet service without being tracked.

So in effect the Government is proposing a huge technical boondoggle to do the impossible. Not for the first time, and I'm sure it won't be for the last.

My full response is available here in PDF format.

Sunday, April 5, 2009

Bruce Sterling accused of sham marriage

I've just read on Bruce Sterling's blog that he and his Serbian wife have been accused of a "green card" marriage by the US immigration authorities. They basically have a couple of weeks to prove, to the authorities' satisfaction, that their marriage is not a sham. If they cannot do so then Jasmina Sterling (née Tesanovic) will be deported.

This is wrong on many levels. I've never met them, so I can't comment myself on the reality of their marriage, except to note that Bruce Sterling is a good-looking, wealthy, intelligent, over-achieving man, so I can't see that he needs to hire himself out as a husband-of-convenience, nor does it seem likely that he needed to offer a green card just to get a woman into his bed.

(Aside: I also think it truly ironic that an organisation that takes 6 months to decide on a 3-month visitor's visa allows only a couple of weeks for them to put together evidence that their marriage is real).

But US Immigration don't think like that. It seems from what Sterling wrote that they check up on marriages between Americans and non-citizens, and look for the paper trail that would be generated by people living together: joint ownership of homes, bank accounts and the like. And because of their somewhat nomadic existence it seems that Bruce and Jasmina don't have that paper trail.

Students of organisational behaviour will recognise an anti-pattern here, although I don't know if it's ever been written down. It looks like this:

Name: Chasing Reality

Context: Enforcement of rules that mandate an ill-defined threshold of performance.

Forces: People attempt to meet the performance threshold with minimum effort. The enforcing authority becomes concerned that their effectiveness is being eroded.

Supposed solution: The enforcing authority requires increasingly stringent evidence of performance. The rules about what is "sufficient" may also be kept secret or poorly defined to prevent people from engineering their evidence to follow the requirements.

Resulting context: The enforcing authority becomes increasingly unable to judge real performance, and instead becomes obsessed with the production of evidence. Success depends more on finding out what the real criteria are and meeting them than on actually performing the original task.

I have seen this pattern before, although never with such horrendous consequences. Some years ago a senior manager told me that he had a set of criteria for any application to spend money, but he would not tell anyone what they were for fear that all the applications would then meet those criteria. When I visited the US many years ago, before the visa waiver programme, I had to provide evidence of solvency to get a visa, but was not told what "evidence of solvency" looked like (I photocopied a year of bank statements; it did the trick).

Mr & Mrs Sterling are apparently required to meet a secret (or at least highly under-advertised) set of criteria for a "real" marriage. When the US first controlled immigration I expect it only had to be a legal marriage, but sham marriages were an obvious problem. So the Immigration people wanted to see wedding photos. So sham couples hired photographers. Then Immigration wanted to see you living together, so sham couples rented apartments. And so it goes on. The Immigration people are on an endless treadmill: whatever evidence they require will be produced, whether by sham couples or real ones. Now it's got to the point where sham couples will actually have better evidence of their reality than real couples, because they are more aware of the need to generate it. Or as Granny Weatherwax once put it, "Things that try to look like things often look more like things than things."

It's also ironic that a writer with a great talent for spotting trends and capturing the zeitgeist should be caught up in exactly the kind of insane socio-political trend that he writes about so entertainingly.

Saturday, January 31, 2009

Bad banking drives out good

I heard this quote from the CEO of Standard Chartered Bank on Radio 4's Today programme this morning, and it summarises the problem very well. (It's also a reference to Gresham's Law.)

It's become commonplace to simply blame the credit crunch on "stupid, greedy bankers". The trouble is, suppose you were a smart, frugal banker five years ago. Your exceptional brains let you see that subprime self-certified mortgages were a bad idea, and that anything built on them was therefore going to collapse. So you weren't going to invest in that part of the market.

The trouble is, you would have had years of below-average results. Your clients and employers would be looking at your competitors and pointing out how bad your results were. Even if your clients were prepared to listen to your explanations, many are under a legal obligation to maximise their investment returns, and their customers will be less sophisticated and correspondingly less willing to listen to complicated explanations about why their pension is so much smaller than their neighbours'.

So you get removed from your job, and someone else is put in who is willing to do what it takes to deliver market returns.

Looked at from an abstract point of view the game resembles a "Tragedy of the Commons". A banker who gets good returns while taking hidden risks is like the commoner who puts an extra cow on the common pasture. He gets richer while all his neighbours get a little poorer. So everyone else is forced to do likewise, even though everyone can see that collectively this is going to lead to ruin. And so the game goes.
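The incentive structure can be sketched as a toy payoff model. The numbers here are invented purely for illustration: each extra "cow" earns its owner a fixed private gain, while the grazing damage is shared equally among all the commoners.

```python
# Toy payoff model of the commons analogy used above.
N = 10       # commoners sharing the pasture
GAIN = 1.0   # private profit per extra cow
COST = 3.0   # damage to the pasture per extra cow, shared by all N

def payoff_change_for_owner(extra_cows=1):
    """Net change for the commoner who adds the cow."""
    return extra_cows * (GAIN - COST / N)

def payoff_change_for_village(extra_cows=1):
    """Net change summed over everyone."""
    return extra_cows * (GAIN - COST)

print(payoff_change_for_owner())    # positive: individually rational
print(payoff_change_for_village())  # negative: collectively ruinous
```

As long as the gain exceeds the owner's share of the cost (here 1.0 versus 3.0/10), adding a cow is individually rational even though every addition makes the village as a whole poorer, which is exactly the banker's position when the "cow" is hidden risk.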

The traditional solutions to the Tragedy of the Commons are:

  1. Property rights. Divide up the commons into individual plots and allocate them to the commoners. Failing that, give each commoner a tradeable right to put exactly one cow on the commons.
  2. Regulation. Make a law that each person can only put one cow on the commons.
The question is how to apply this to the finance industry. In this case cows are equivalent to risk. Banks have had "risk management" departments for years, but risk turns out to be very difficult to measure. It's as if our commoners have invisible cows: verifying the numbers is impossible, although I've previously posted a suggestion for making the cows more visible.

So without this, what can we do to prevent the next big bust? It doesn't look like there is anything. As memories of this bust fade and the people who experienced it retire, a new generation of bankers will come along with a new ingenious method for hiding risk, and once again bad banking will drive out the good until the whole thing collapses once more.

Monday, January 26, 2009

Wikipedia and Britannica are Converging

Is there a network equivalent of market forces? The crowdsource equivalent (or perhaps generalisation) of Adam Smith's "Invisible Hand"?

When I was in my early teens my school got the new edition of the Encyclopaedia Britannica in over 30 volumes costing around £1,000. It was a huge wealth of authoritative information on just about everything. I spent quite a few lunch hours just reading about stuff.

In the 90s Microsoft brought out Encarta on CD-ROM. It contained a lot less information, but it cost a lot less than £1,000. Britannica has been in trouble ever since. Now there is Wikipedia, which is free, a bit less authoritative than Britannica, but has even more information and a great deal more mindshare.

So Britannica has responded by taking a page out of Jimbo Wales's book; it's going to start accepting user-generated content, albeit with a stiff editorial barrier to acceptance into the core Encyclopaedia.

Meanwhile the latest controversy on Wikipedia is about whether user-generated content needs more controls. I say "more" because Wikipedia has always had some limits; locked pages, banned editors, and in extreme cases even the deletion of changes (as opposed to reversion, which preserves them in the page history). Increasingly sophisticated editorial policies have been put in place, such as "No Original Research" (originally put in to stop crank science, but now an important guiding principle).

When you look at Wikipedia's history there is a clear trend; every so often Wikipedia reaches a threshold of size and importance where its current level of openness doesn't work. At this point Jimbo Wales adds a minimal level of editorial control. Locked pages, for instance, were added because certain pages attracted unmanageably high levels of vandalism.

It seems that Wikipedia and Britannica are actually converging on the same model from different ends of the spectrum: Wikipedia started out completely open and is having to impose controls to manage quality, while Britannica started out completely closed and is having to allow in user-generated content because commissioning experts to write it is too expensive. Somewhere in the middle is a happy medium that gets good quality content without having to charge subscribers.

Not charging is important. In these days of the hyperlink, an encyclopedia is not just a source of knowledge, it is a medium of communication. If I want to tell people that they should read about the history of the Britannica then I refer to the Wikipedia page because very few people who read this are going to be able to follow a link to a Britannica article (even if I were a subscriber, which I am not).

Wikipedia is often said to have been inspired by the "open source" model, in that anyone can edit Wikipedia just as anyone can edit the Linux source code. In fact the cases are not parallel. The GPL allows me to download a copy of the Linux source code, hack it to my heart's content, and give copies of my new version to anyone who wants them. What it does not do is authorise me to pass my version off as the work of Linus Torvalds. Getting my changes into the official kernel means passing through a strict quality assurance process including peer review and extensive testing on multiple architectures.

So I think that this proposal to create "flagged revisions" for editorial review moves Wikipedia towards the open source model rather than away from it. Anyone will always be able to fork Wikipedia if they wish: the license guarantees it. But the official version will earn increasing trust as the quality assurance improves, just as the official version of the Linux kernel is trusted because of its quality assurance.

Wednesday, January 21, 2009

The next challenge for Linux

I was in the local branch of "Currys" (UK electrical and electronic goods chain) recently and had a look at the line-up of computers. They had some little netbooks, and taped next to each one was a little note saying something to the effect of "This runs Linux, so it won't run Windows software". It was a local version of a wider story about Linux:

  • People buy netbooks and then discover that Windows isn't part of the bundle, and they don't like it.
  • A teacher found a student handing out Linux CDs and accused him of pirating software.
  • A student accidentally bought a Dell laptop with Ubuntu instead of Windows, and complained that she couldn't submit the Word documents required by her college (the college says it is happy to accept documents in other formats) and that the installation CD for her broadband service wouldn't install, so she couldn't get on the Internet (Ubuntu connects to the Internet just fine without it). The local TV station picked up the story as a "consumer rights" case and was amazed to find a virtual lynch mob chasing them for being less than enthusiastic about Ubuntu. They quoted an expert saying that Ubuntu "isn't for everyone" and is more suited to tinkerers than people who just want to get stuff done.
Behind all this is what "everyone knows" about computers (everyone who isn't interested in computers for their own sake, that is):
  • There are two sorts of computers: Apple Macs, and PCs.
  • Apple Macs run Apple software. PCs run Windows software.
  • Windows is bundled with PCs. On expensive PCs the bundle might include Office as well.
  • If you want extra software, you buy it at a store, take it home and stick the CD in the drive.
This is how it's been for almost 20 years (25 if you count the MS-DOS era). An entire generation has grown up knowing that if it's a PC, it runs Windows. They know it in the same way they know the sky is blue: it's always been blue.

Of course those of us who use Linux know different. But for most people, Linux is something they might have heard about once or twice, but they didn't pay any attention and couldn't tell you anything about it. Outside its current user base and their immediate circle of friends and family, Linux has zero mindshare.

This is not another lament about Joe Sixpack being too stupid to understand Linux. The problem is not that Linux is too complicated; it's that Linux and Windows do things differently. Imagine someone who was raised on Linux; how would they react to Windows? Software installation would seem complicated and geeky, the single desktop would feel claustrophobic, and as for the idea of paying for software...

So I think we need to sort out a message to broadcast to the world and then focus on it. I suggest the following:
  • Linux is an alternative to Windows and Apple. It comes in several flavours.
  • Linux isn't easier or harder than Windows, but it is different, and it takes a while to get used to those differences.
  • Linux is bundled with a huge range of downloadable free software, including office, graphics and photographic packages.
So next time you see a story that misrepresents Linux, please send a correction with these three points.