Saturday, December 3, 2016

What duties do software developers owe to users?

I was reading this blog post, entitled "The code I’m still ashamed of". 

TL;DR: back in 2000 the poster, Bill Sourour, was employed to write a web questionnaire, aimed at teenage girls, that purported to advise the user about their need for a particular drug. In reality, unless you said you were allergic to it, the questionnaire always concluded that you needed the drug. Shortly afterwards, Sourour read about a teenage girl who had possibly committed suicide due to side effects of this drug. He is still troubled by this.

Nothing the poster or his employer did was illegal. It may not even have been unethical, depending on exactly which set of professional ethics you subscribe to. But it seems clear to me that there is something wrong in a program that purports to provide impartial advice while actually trying to trick you into buying medication you don't need. Bill Sourour clearly agrees.

Out in meatspace we have a clearly defined set of rules for this kind of situation. Details vary between countries, but if you consult someone about legal, financial or medical matters then they are generally held to have a "fiduciary duty" to you. The term derives from the Latin for "faithful". If X has a fiduciary duty to Y, then X is bound at all times to act in the best interests of Y. In such a case X is said to be "the fiduciary" while Y is the "beneficiary".

In many cases fiduciary duties arise in clearly defined contexts and have clear bodies of law or other rules associated with them. If you are the director of a company then you have a fiduciary duty to the shareholders, and most jurisdictions have a specific law for that case. But courts can also find fiduciary duties in other circumstances. In English law the general principle is as follows:
"A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence."
It seems clear to me that this describes precisely the relationship between a software developer and a user. The user is not in a position to create the program they require, so they use one developed by someone else. The program acts as directed by the developer, but on behalf of the user. The user has to trust that the program will do what it promises, and in many cases the program will have access to confidential information which could be disclosed to others against the user's wishes.

These are not theoretical concerns. "Malware" is a very common category of software, defined as:
any software used to disrupt computer or mobile operations, gather sensitive information, gain access to private computer systems, or display unwanted advertising.
Sometimes malware is illicitly introduced by hacking, but in many cases the user is induced to run the malware by promises that it will do something that the user wants. In that case, software that acts against the interests of the user is an abuse of the trust placed in the developer by the user. In particular, the potential for software to "gather sensitive information" and "gain access to private computer systems" clearly shows that the user must have a "relationship of trust and confidence" with the developer, even if they have never met.

One argument against my thesis came up when I posted a question about this to the Legal forum on Stack Exchange. The answer I got from Dale M argued that:

Engineers (including software engineers) do not have this [relationship of confidence] and AFAIK a fiduciary duty between an engineer and their client has never been found, even where the work is a one-on-one commission.
I agree that all current examples of a fiduciary duty involve a relationship in which the fiduciary is acting directly, unlike a software developer. The fiduciary has immediate knowledge of the circumstances of the particular beneficiary, and decides from moment to moment to take actions that may or may not be in the beneficiary's best interest. In contrast, a software developer is separated in time from the user, and may have little or no knowledge of the user's situation.

I didn't argue with Dale M because Stack Exchange is for questions and answers, not debates. However I don't think that the distinction drawn by Dale M holds for software. An engineer designing a bridge is not in a position to learn the private information of those who cross the bridge, but a software engineer is often in a position to learn a great deal about the users of their product. It seems to me that this leads inescapably to the conclusion that software engineers do have a relationship of confidence with the user, and that this therefore creates a fiduciary duty.

Of course, as Dale M points out, nobody has ever persuaded a judge that software developers owe a fiduciary duty, and it's likely that in practice it's going to be a hard sell. But to go back to the example at the top, I think that Bill Sourour, or his employer, did owe a fiduciary duty to the people who ran the questionnaire software he wrote, because they disclosed private information in the expectation of getting honest advice, and the fact that they disclosed it to a program instead of a human makes no difference at all.


Addendum: Scope of duty

This section looks at exactly what the scope of the fiduciary duty is. It doesn't fit within the main text of this essay, so I've put it here.

Fortunately there is no need for a change in the law regarding fiduciary duty. The existence of a fiduciary duty is based on the nature of the relationship between principal and agent, although in some countries specific cases such as company directors are covered by more detailed laws.

First it is necessary to determine exactly who the fiduciary is. So far I have talked about "the software developer", but in practice software is rarely written by a single individual. We have to look at the authority that is directing the effort and deciding what functions will be implemented. If the software is produced by a company then treating the company as the fiduciary would seem to be the best approach, although it might be more appropriate to hold a senior manager liable if they have exceeded their authority.

As for the scope, I'm going to take the fiduciary duties imposed on company directors and consider whether an analogous duty should apply to a software developer:

  • Duty of care: for directors this is the duty to inform themselves and take due thought before making a decision. One might argue that a software developer should have a similar duty of care when writing software, but this is already handled by the ordinary law of negligence. Elevating the application of normal professional skill to a fiduciary duty is not going to make life better for the users. However, there is one area where this might be applied: the lack of motive to produce secure software is widely recognised as a significant problem, and it is also an area where the "confidence" aspect of fiduciary duty overlaps with a duty of care. Therefore developers who negligently fail to consider the security aspects of their software should be considered to have failed in their fiduciary duty.
  • Duty of loyalty: for directors this is the duty not to use their position to further their private interests. For a software developer this is straightforward: the developer should not use their privileged access to the user's computer to further their private interests. So downloading information from the user's computer (unless the user explicitly instructs this to happen) should be a breach of fiduciary duty. So would using the processing power or bandwidth owned by the user for the developer's own purposes, for instance by mining bitcoins or sending spam.
  • Duty of good faith: the developer should write code that will advance the user's interests and act in accordance with the user's wishes at all times.
  • Duty of confidentiality: if the developer is entrusted with user information, for example because the software interfaces with cloud storage, then this should be held as confidential and not disclosed for the developer's benefit.
  • Duty of prudence: this does not map onto software development.
  • Duty of disclosure: for a director this means providing all relevant information to the shareholders. For a software developer, it means completely and honestly documenting what the software does, and particularly drawing attention to any features which a user might reasonably consider against their interests. Merely putting some general clauses in the license is not sufficient; anything that could reasonably be considered contrary to the user's interests should be prominently indicated in a way that enables the user to prevent it.
One gray area in this is software that is provided in exchange for personal data. Many "free" apps are paid for by advertisers who, in addition to the opportunity to advertise to the user, also pay for data about the users. On one hand, this involves the uploading of personal data that the user may not wish to share, but on the other hand it is done as part of an exchange that the user may be happy with. This comes under the duty of disclosure. The software should inform the user that personal data will be uploaded, and should also provide a detailed log of exactly what has been sent. Thus users can make informed decisions about the value of the information they are sending, and possibly alter their behavior when they know it is being monitored.
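
To make the duty of disclosure concrete, here is a minimal sketch of what a disclosure log might look like in code. The names and design are invented for illustration, not taken from any existing API; the idea is simply that nothing is transmitted until it has been recorded somewhere the user can read:

    import Data.Time.Clock (getCurrentTime)

    -- Sketch: every piece of personal data leaving the machine is
    -- appended to a user-readable log before it is sent anywhere.
    uploadWithDisclosure :: FilePath -> (String -> IO ()) -> String -> IO ()
    uploadWithDisclosure logFile send payload = do
      now <- getCurrentTime
      appendFile logFile (show now ++ " sent: " ++ payload ++ "\n")
      send payload  -- transmit only after the disclosure is recorded

    -- Example use, with a hypothetical advertising upload function:
    --   uploadWithDisclosure "disclosure.log" sendToAdNetwork "device-id=1234"

A user who reads the log can then make the informed decision described above.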


Monday, March 14, 2016

Letter to my MP about the Investigatory Powers Bill

I've just sent this email to my MP. Hopefully it will make a difference. I've asked for permission to post her reply.

---------------------------

Dear Ms Fernandes,

I am a resident of [redacted]. My address is [redacted]. I am writing to you a second time about the proposed Investigatory Powers Bill. I wrote to you about this on 5th November 2015 urging you to try to mitigate the worst aspects of this bill, and now I am writing to urge you to vote against this bill when it comes to Parliament.

I am deeply concerned about the powers that this bill would give to the Home Secretary. However in order to keep this email reasonably short I will concentrate on one particularly dangerous power.

If this bill becomes law then the Home Secretary would be able to order any "communications company" (the term could mean anyone involved in providing software or equipment that enables communication) to install any surveillance feature the Home Secretary wishes. The recipient of this order would be unable to appeal against it, and would be prevented from revealing the existence of the order. There is no sunset time on this gag clause: it will last as long as the Home Secretary and the security services wish to maintain it.

It is true that these orders will also have to be signed off by a judge, but that will only verify that the order complies with whatever procedures are in place at the time. Furthermore these judges will only ever hear one point of view on the reasonableness and proportionality of the orders, and this can only result in the erosion of these safeguards over time.


I want to illustrate the danger of this power to weaken security by showing how it would impact a common method of selecting encryption keys called Diffie-Hellman Key Exchange. This method is used by web browsers and email programs whenever they make a secure connection (e.g. to web addresses starting "https"). It is also used by "Virtual Private Networks" (VPNs) which are widely used by businesses to allow employees to work remotely, and I expect that Parliament has one to allow MPs to access their email. You may even be using it to read this.

I want to show that any attempt to intercept messages where Diffie-Hellman is used will greatly weaken it, and that this will worsen our security rather than improving it. I will show this by linking the NSA to the compromise of the Office of Personnel Management (OPM) in America last year.

I don't propose to explain the technical details of Diffie-Hellman. What it means is that two computers can exchange a few messages containing large random numbers, and at the end of this they will share a secret key without that key ever having been sent over the Internet.
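
For the technically curious, the whole exchange can be sketched in a few lines of Haskell. The numbers below are toys chosen for readability; real systems use primes thousands of bits long:

    -- Modular exponentiation by repeated squaring: b^e mod m.
    modExp :: Integer -> Integer -> Integer -> Integer
    modExp _ 0 _ = 1
    modExp b e m
      | even e    = half * half `mod` m
      | otherwise = b * modExp b (e - 1) m `mod` m
      where half = modExp b (e `div` 2) m

    main :: IO ()
    main = do
      let p = 23              -- public prime (toy size)
          g = 5               -- public generator
          a = 6               -- Alice's secret, never transmitted
          b = 15              -- Bob's secret, never transmitted
          bigA = modExp g a p -- Alice sends this over the network
          bigB = modExp g b p -- Bob sends this over the network
      -- Each side combines its own secret with the other's public
      -- value and arrives at the same shared key (here 2):
      print (modExp bigB a p, modExp bigA b p)

An eavesdropper sees only p, g and the two public values; recovering the shared key from those is believed to be computationally infeasible at realistic sizes.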

Suppose that a communications company provides software that uses Diffie-Hellman, and receives an order from the Home Secretary that they must make the encrypted messages available to law enforcement and the intelligence agencies. What are they to do? They never see the secret keys, so they must do one of the following:

1: Modify the software to send a copy of the chosen key to someone. This is far less secure, and also very obvious. Anyone monitoring the packets sent by the programs will instantly see it.

2: Modify the software to make the keys or the encryption weak in a non-obvious way so that the UK intelligence agencies can determine what the key is. For instance, the random numbers might be made more predictable in a subtle way.

These are the only two ways in which the communications company can comply with the order.

We have seen what happens when Option 2 is chosen, because this was done to Juniper Networks firewall product [see ref 1 below]. Someone deliberately inserted "unauthorised code" which weakened the encryption used by this product in a very specific and deliberate way. There is no possibility that this was an accidental bug. The responsible party is widely believed to be the NSA, because secret briefings released by Edward Snowden made reference to the ability to intercept data sent via this product [ref 2], and it would be much easier for the NSA to infiltrate an American company than for anyone else to do it.

However there is something important that happens when software is updated: hackers (including foreign governments) scrutinize the updates to see what has changed. Normally they find that the old version of the software had a security hole which is now patched, so the patch flags up a way to attack computers that haven't been updated yet. But in this case when Juniper issued an update to their firewall software these hackers found the security hole in the *new* software.

Doing this kind of analysis in a systematic way for many security products is a very large job. Doing it in secret requires the resources of a government. So now not only could the NSA intercept communications sent via Juniper firewalls, but so could an unknown number of foreign governments. The Chinese were almost certainly one of them. Other nations known to have invested in cyber-attack capabilities include Russia, Israel and North Korea (although the last is probably not as capable yet).

Juniper products are widely used by the US Government. This is likely to have been one of the ways in which the Office of Personnel Management (OPM) was penetrated last year [ref 3]. The Chinese government is the prime suspect in this hack, through which the attackers have obtained copies of the security clearance applications of everyone who has ever worked for the US government.

So it seems that the NSA, by introducing a supposedly secret "back door" into a widely used product, cleared the way for the Chinese to obtain secret files on everyone who has ever worked for their government, including all of their legislators and everyone who works at the NSA. Nice job breaking it, Hero!


Now it is true that this is circumstantial; we have no hard evidence that the Juniper back door was inserted by the NSA, no hard evidence that the Chinese found it, and no hard evidence that this contributed to the OPM hack. But each of these is a big possibility. Even if the OPM hack didn't happen in exactly that way, deliberately weakening security makes events like this much more likely. If the Home Secretary orders a company to introduce weakened security, that fact will become apparent to anyone with the resources to dig for it. Once armed with that fact, they can attack through the same hole.

Furthermore, we would never find out when a disaster like the OPM hack happens under the regime described in the Investigatory Powers bill.  Suppose that, thanks to the weakened security ordered by the Home Secretary, secret government files are obtained by a hostile power, and the communications company executives are called before a Parliamentary Inquiry to account for their negligence; how can they defend themselves if they are legally prohibited from revealing their secret orders?

More generally, we will never be allowed to learn about the negative effects of these secret orders. It would embarrass those who issued them, and they are exactly the people who would have to give permission for publication. So if Parliament passes this bill it will never be allowed to learn about the problems it causes, and hence never be able to remedy the mistake.

I have focused on only one of the measures in the Investigatory Powers bill here, but there are many others in the bill that cause me great concern. To go through the whole bill in this level of detail would make this email far longer, and I know that you have many calls on your time. I can only ask you to believe that there are many similar issues. For these reasons I must urge you to vote against the bill when it reaches the House of Commons.

Yours sincerely,

Paul Johnson.


[1] http://forums.juniper.net/t5/Security-Incident-Response/Important-Announcement-about-ScreenOS/ba-p/285554

[2] https://assets.documentcloud.org/documents/2653542/Juniper-Opportunity-Assessment-03FEB11-Redacted.txt

[3] https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach

Saturday, March 28, 2015

Google Maps on Android demands I let Google track me

Updated: see below.

I recently upgraded to Android 5.1 on my Nexus 10. One app I often use is Google Maps. This has a "show my location" button:

[Image: the "show my location" button]

When I clicked on this I got the following dialog box:

[Image: dialog box asking to turn on Google location reporting, offering only "Agree" and "Cancel"]

Notice that I have two options: I either agree to let Google track me, or I cancel the request. There is no "just show my location" option.

As a matter of principle, I don't want Google to be tracking me. I'm aware that Google can offer me all sorts of useful services if I just let it know every little detail of my life, but I prefer to do without them. But now it seems that zooming in on my GPS-derived location has been added to the list of features I can't have. There is no technical reason for this; it wasn't the case before. But Google has decided that, as the price of looking at the map of where I am, I now have to tell them where I am all the time.

I'm aware, of course, that my cellphone company knows roughly where I am and who I talk to, that my ISP knows which websites I visit and can see my email (although, unlike GMail, I don't think they derive any information about me from its contents), and that Google knows what I search for. But I can at least keep that information compartmentalised in different companies. I suspect that the power of personal data increases non-linearly with its volume and scope, so having one company know where I am and another company read my email means less loss of privacy than putting both location and email in the same pot.

Hey, Google, stop being evil!

Update: 20th April 2015

A few days ago a new update to the Google Maps app got pushed, and it's no longer demanding I let Google track me. In fact the offending dialogue box has been replaced by one with a "No, and stop pestering me" option, so this is an improvement on what they had before.

Way to go, Google!

Saturday, February 22, 2014

A Review of the joint CNN and BBC production: "The War On Terror"

The War on Terror is the latest epic in the long-running World War franchise. The previous serial in the franchise, World War II, was slammed by the critics for its cardboard-cutout villains, unrealistic hero and poor plot-lines, although it actually achieved decent ratings.

The first season of Terror started with a retcon. At the end of World War II it looked like the Soviet Union had been set up as the Evil Empire for yet another World War, but the writers seem to have realised that replaying the same plot a third time wasn't going to wow the audience. So at the start of Terror we get a load of back-story exposition in which the Soviet Union has collapsed for no readily apparent reason, leaving America running a benevolent economic hegemony over the allies from the previous series and also its former enemies, Germany and Japan. There was also mention of a very one-sided Gulf War, apparently to emphasize that America's economic power was still matched by its military, even though it didn't seem to have anyone left to fight. Then in the second episode a bunch of religious fanatics from nowhere flew hijacked airliners into important buildings. While the premise may have been a bit thin, the episode managed a level of grandeur and pathos that the franchise hadn't achieved since the Pearl Harbour episode, with the special effects being used to build drama rather than just having huge fireballs. But after this promising start the rest of the season became increasingly implausible, with a buffoonish president launching two pointless wars on countries whose governments turned out to have almost nothing to do with the attack he was trying to avenge. The weak plot and unsympathetic characters make the last few episodes of the season hard to watch.

However in the second season the series grew a beard. The writers replaced the old president with a good looking black guy who clearly wanted to do the right things, finally giving the audience someone to root for, and the focus switched sharply from armed conflict to corrupt politics. Instead of huge set-piece battles featuring ever-more improbable weaponry, the drama now focuses on the political situation within America itself. The battles and weapons are still there of course, but no longer driving the plot. Instead the president is shown as a tragic figure as he tries to stop wars, free prisoners and sort out his country's economic problems, but every time some combination of corporate executive, greedy banker and/or General Ripper will block his reforms, sometimes with an obstructive bureaucrat thrown in for comic relief. He has his hands on the levers of power, but in contrast with his predecessor in World War II those levers don't seem to be connected to anything any more.

Although each episode stands on its own as a story, several plot arcs are becoming clearer as season 2 draws to a close. Events seem to presage the Fall of the Republic, a plot similar to the Star Wars prequel trilogy, but much better done. Whereas Lucas' Old Republic was destroyed by a single corrupt ruler who wanted to become The Emperor, the American Republic in Terror is being destroyed by the very things that made it strong in the previous series: its industrial capacity, financial power and military strength. This is most clearly seen in the episode Drone Strike, where the president was asked to authorise an attack by a remote controlled aircraft against a suspected terrorist convoy on the other side of the world. America is one of the few countries with the technology and money to field these unmanned warplanes, and they have become an important part of American power.  Then we saw the president's face as he was told that the supposed convoy had actually been a wedding party.  At the end of the episode he was reduced to defending his actions at a press conference because the people who had got him into this mess were too powerful to sack.

At the same time there are stories of individual determination and hope set in contrast against the darker backdrop. The recent episode Watching the Watchers showed a soldier and a bureaucrat in different parts of the secret spy agency (or agencies; America seems to have several) independently deciding to rebel against the system they are a part of, by releasing embarrassing secrets to the public. At the same time the episode revealed a hidden factor in previous plot lines. Fans are now reviewing old episodes, even back into the first season, looking for the throwaway lines and improbable coincidences which only now make sense.

The vision of the writers of Terror is now becoming clear: the real war on terror is not the one being fought with guns and robot aircraft, it is the one being fought in the shadows against a loose and ever-shifting coalition of rich, powerful individuals who have discovered that a terrorised population is willing to give them even more money and power, and who therefore want to keep it that way. The president's initiatives aren't being blocked by some grand secret conspiracy; it's just that all of these people know how to work together if they want to stop something happening. But this actually makes them more dangerous: in a conventional conspiracy story the hero just has to find the conspiracy and unmask it, but that isn't going to happen in Terror. In one chilling scene a club of bankers get together for a party to laugh at the rest of the country for continuing to pay them huge amounts after they have wrecked the economy that they were supposed to be running. A journalist sneaks in and tells the story, but it doesn't make any difference, because throwing a party is not a conspiracy.

So Terror goes into its third season in much better shape than it was at the end of the first. The writers have escaped from the constraints of set-piece battles between huge armies, and found instead a solid theme of individual heroism in a believable world of ambiguous morality and complex politics. It all makes for powerful drama and compelling viewing.

Friday, October 11, 2013

TV Resolution Fallacies

Every so often discussion of the ever-higher resolution of TV screens generates articles purporting to prove that you can't see the improvement unless you sit a few feet from the largest available screen. Most of these articles make the same three mistakes:

Fallacy 1: Normal vision is 20/20 vision

The term "20/20 vision" means only that you can see as well as a "normal" person. In practice it is the lower threshold below which vision is considered to be in need of correction; most people can see better than this, with a few achieving 20/10 (that is, twice the resolution of 20/20).

Fallacy 2: Pixel size = Resolution

If a screen has 200 pixels per inch then its resolution, at best, is only 100 lines per inch, because otherwise you cannot distinguish between one thick line and two separate lines. For the more technically minded, this is the spatial version of the Nyquist limit. Wikipedia has a very technical article, but this picture demonstrates the problem:

[Image: moiré pattern in a low-resolution picture of a brick wall]

The pixel pitch is close to the height of a brick, leading to the moiré pattern: in some areas the pixels land on the middle of a brick, and in others on the white mortar.

So the resolution of the screen in the horizontal or vertical directions is half the pixel density. But it gets worse at any other angle, because the pixels are arranged in a grid. The diagonal neighbours of a pixel are 1.4 times further apart than the horizontal and vertical ones, so the worst-case resolution is the pixel density divided by 2 × 1.4 = 2.8. Call it 3 in round numbers.

So the conclusion is that the actual resolution of the picture on your screen is about one third of the pixel density.

Fallacy 3: Resolution beyond visual acuity is a waste

The argument here seems to be that if HDTV resolution is better than my eyesight then getting HDTV is a complete waste and I would be better off sticking to my normal standard definition TV.

Clearly this is wrong: as long as my visual resolution outperforms my TV then I will get a better picture by switching to a higher definition format.

So when does HDTV become worth it?

20/20 vision is generally considered to be a resolution of 1 arc-minute. If we use the naive approach embodying all three fallacies, then one pixel on a 40 inch HDTV screen subtends 1 arc-minute at a distance of 62 inches, so some articles on the subject have claimed that you get no benefit unless you sit closer than that.

However, on that 40 inch screen a standard definition pixel will be roughly twice the size (depending on which standard, and what you do about the 4:3 aspect ratio on the 16:9 screen), so it will subtend 1 arc-minute at around 124 inches (just over 10 feet). Applying the worst-case factor of 3 derived above, with 20/20 vision you will be able to separate two diagonal lines one pixel apart at a distance of 30 feet, and with 20/10 vision that goes out to 60 feet. So if you sit less than 30 feet from a 40 inch screen then you will get a visibly better picture with HDTV than standard definition.

And what about Ultra HD?

With 20/20 vision you can just about distinguish two diagonal lines one pixel apart on a 40 inch HDTV screen from 15 feet away, and 30 feet if you have 20/10 vision. So if you sit closer to the screen than that then you will get a better picture with Ultra HD. And of course Ultra HD sets are often bigger than 40 inches. If you have a 60 inch set then the difference is visible up to 23 feet away with 20/20 vision and 46 feet with 20/10.
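
If you want to check these numbers, the arithmetic is easy to capture in a few lines of Haskell. This sketch assumes 16:9 screens, takes a standard definition pixel as roughly twice an HD one (as above), and uses the worst-case factor of 3 derived earlier:

    -- Distance in inches at which a feature s inches across
    -- subtends one arc-minute.
    arcMinuteDistance :: Double -> Double
    arcMinuteDistance s = s / tan (pi / (180 * 60))

    -- Pixel pitch in inches for a 16:9 screen, given the diagonal
    -- in inches and the horizontal resolution in pixels.
    pixelPitch :: Double -> Double -> Double
    pixelPitch diagonal hPixels = diagonal * 16 / sqrt (16^2 + 9^2) / hPixels

    -- Viewing distance in feet inside which the next resolution
    -- step up is visible, using the worst-case 3-pixel feature size.
    thresholdFeet :: Double -> Double -> Double
    thresholdFeet diagonal hPixels =
      arcMinuteDistance (3 * pixelPitch diagonal hPixels) / 12

    main :: IO ()
    main = do
      print (thresholdFeet 40 960)   -- ~31 ft: HD beats SD on a 40" set
      print (thresholdFeet 40 1920)  -- ~16 ft: UHD beats HD on a 40" set
      print (thresholdFeet 60 1920)  -- ~23 ft: UHD beats HD on a 60" set

The small differences from the round numbers in the text come from rounding at each step.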

So higher resolutions are not just marketing hype.

Final point: Compression artifacts

Digital TV signals are compressed to fit into the available bandwidth. This shows up in compression artifacts; if there is a lot of movement across the image then you may see it become slightly blocky, and if you freeze the image then you can often see a kind of halo of ripples around sharp edges. Higher definition pictures are encoded with more data so that these artifacts are reduced. So even without the increased resolution you may still see an improved picture in a higher resolution format.

Friday, May 24, 2013

Elevator pitch for Haskell short enough for an elevator ride

Greg Hale has written an "elevator pitch" for Haskell. While it is certainly a good piece of advocacy, it is quite long, and therefore not an elevator pitch. The idea of an elevator pitch is something you can deliver in the 30 seconds or so that you find yourself sharing an elevator with a potential investor.

I've been looking for an effective Haskell elevator pitch for some years now, but the only thing I was able to come up with was that you can deliver software better, faster and cheaper because you need fewer lines of code. This just sounds like hype.

However I think I've now got something better. Here it is:

Conventional languages make the programmer construct both a control flow and a data flow for the program. There is no way to check that they are consistent, and any time they are inconsistent you get a bug. In Haskell the programmer just specifies the data flow: the control flow is up to the compiler. That simplifies the program, cutting down the work and completely preventing a big class of errors.
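
As a concrete (if trivial) illustration of the pitch, consider summing the squares of some numbers. In C you would write a loop: an explicit control flow threaded through the data flow, with a loop counter and an accumulator to get wrong. The Haskell version states only the data flow:

    -- Only the data flow is written down: numbers flow through a
    -- squaring step into a sum. The evaluation order that makes this
    -- happen is entirely the compiler's business.
    sumOfSquares :: [Int] -> Int
    sumOfSquares xs = sum (map (^ 2) xs)

    main :: IO ()
    main = print (sumOfSquares [1 .. 10])  -- prints 385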

Monday, April 1, 2013

This post originally appeared as a response to this article in Forbes:

Thanks for this article; it's good to see some opinions on this subject backed up with numbers. I still think you are wrong, though.

First, your comparison with the US dollar ignores the effect of fractional reserve banking, which multiplies the ratio of GDP to monetary base by a factor of around 5. Taking that into account, US GDP is only around ten times its monetary base. Still a lot more than Bitcoin, I concede.

More importantly, Bitcoin is not a normal new currency. A normal new currency is launched by a government with a territory, citizens, tax base and GDP. All of these give those trading the currency some clues to the fundamental value of each unit. Bitcoin has no territory, citizens or tax base. It has a GDP, but that is dependent on the amount it is used, and usage seems to be growing. A better way to think of Bitcoin (as I argue here: http://paulspontifications.blogspot.co.uk/2013/01/bitcoin-as-disruptive-technology.html) is as a disruptive technology; at the moment it is principally of use to those who are poorly served by the incumbent financial industry, but as it improves it will increasingly move up-market by taking business from the incumbents. As it does so the Bitcoin GDP will increase by multiple orders of magnitude, and so therefore will the value of each Bitcoin.

A bubble is defined by the "bigger sucker" theory: the price will keep going up because there will always be someone willing to pay even more, because the price will keep going up. Bitcoin investment, on the other hand, is driven by a rational expectation that Bitcoin use will increase. If one has a rational expectation that Bitcoin GDP will support a much higher price in a few years' time, then buying it now looks like a sensible investment. It might also collapse in a pile of bits, but as a speculative investment it's certainly worth taking a position in.

Disclaimer: I own some Bitcoins, and I'll think about selling in a couple of years.