Saturday, December 3, 2016

What duties do software developers owe to users?

I was reading this blog post, entitled "The code I’m still ashamed of". 

TL;DR: back in 2000 the poster, Bill Sourour, was employed to write a web questionnaire aimed at teenage girls that purported to advise the user about their need for a particular drug. In reality, unless they said they were allergic to it, the questionnaire always concluded that the user needed the drug. Shortly afterwards, Sourour read about a teenage girl who had possibly committed suicide due to side effects of this drug. He is still troubled by this.

Nothing the poster or his employer did was illegal. It may not even have been unethical, depending on exactly which set of professional ethics you subscribe to. But it seems clear to me that there is something wrong in a program that purports to provide impartial advice while actually trying to trick you into buying medication you don't need. Bill Sourour clearly agrees.

Out in meatspace we have a clearly defined set of rules for this kind of situation. Details vary between countries, but if you consult someone about legal, financial or medical matters then they are generally held to have a "fiduciary duty" to you. The term derives from the Latin for "faithful". If X has a fiduciary duty to Y, then X is bound at all times to act in the best interests of Y. In such a case X is said to be "the fiduciary" while Y is the "beneficiary".

In many cases fiduciary duties arise in clearly defined contexts and have clear bodies of law or other rules associated with them. If you are the director of a company then you have a fiduciary duty to the shareholders, and most jurisdictions have a specific law for that case. But courts can also find fiduciary duties in other circumstances. In English law the general principle is as follows:
"A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence."
It seems clear to me that this describes precisely the relationship between a software developer and a user. The user is not in a position to create the program they require, so they use one developed by someone else. The program acts as directed by the developer, but on behalf of the user. The user has to trust that the program will do what it promises, and in many cases the program will have access to confidential information which could be disclosed to others against the user's wishes.

These are not theoretical concerns. "Malware" is a very common category of software, defined as:
any software used to disrupt computer or mobile operations, gather sensitive information, gain access to private computer systems, or display unwanted advertising.
Sometimes malware is illicitly introduced by hacking, but in many cases the user is induced to run the malware by promises that it will do something that the user wants. In that case, software that acts against the interests of the user is an abuse of the trust placed in the developer by the user. In particular, the potential for software to "gather sensitive information" and "gain access to private computer systems" clearly shows that the user must have a "relationship of trust and confidence" with the developer, even if they have never met.

One argument against my thesis came up when I posted a question about this to the Legal forum on Stack Exchange. The answer I got from Dale M argued that:

Engineers (including software engineers) do not have this [relationship of confidence] and AFAIK a fiduciary duty between an engineer and their client has never been found, even where the work is a one-on-one commission.
I agree that all current examples of a fiduciary duty involve a relationship in which, unlike a software developer, the fiduciary is acting directly. The fiduciary has immediate knowledge of the circumstances of the particular beneficiary, and decides from moment to moment on actions that may or may not be in the beneficiary's best interest. In contrast, a software developer is separated in time from the user, and may have little or no knowledge of the user's situation.

I didn't argue with Dale M because Stack Exchange is for questions and answers, not debates. However I don't think that the distinction drawn by Dale M holds for software. An engineer designing a bridge is not in a position to learn the private information of those who cross the bridge, but a software engineer is often in a position to learn a great deal about the users of their product. It seems to me that this leads inescapably to the conclusion that software engineers do have a relationship of confidence with the user, and that this therefore creates a fiduciary duty.

Of course, as Dale M points out, nobody has ever persuaded a judge that software developers owe a fiduciary duty, and it's likely that in practice it's going to be a hard sell. But to go back to the example at the top, I think that Bill Sourour, or his employer, did owe a fiduciary duty to the people who ran the questionnaire software he wrote, because they disclosed private information in the expectation of getting honest advice, and the fact that they disclosed it to a program instead of a human makes no difference at all.


Addendum: Scope of duty

This section looks at exactly what the scope of the fiduciary duty is. It doesn't fit within the main text of this essay, so I've put it here.

Fortunately there is no need for a change in the law regarding fiduciary duty. The existence of a fiduciary duty is based on the nature of the relationship between principal and agent, although in some countries specific cases such as company directors are covered by more detailed laws.

First it is necessary to determine exactly who the fiduciary is. So far I have talked about "the software developer", but in practice software is rarely written by a single individual. We have to look at the authority that is directing the effort and deciding what functions will be implemented. If the software is produced by a company then treating the company as the fiduciary would seem to be the best approach, although it might be more appropriate to hold a senior manager liable if they have exceeded their authority.

As for the scope, I'm going to take the fiduciary duties imposed on company directors and consider whether an analogous duty should apply to a software developer:

  • Duty of care: for directors this is the duty to inform themselves and take due thought before making a decision. One might argue that a software developer should have a similar duty of care when writing software, but this is already handled through the ordinary law of negligence. Elevating the application of normal professional skill to a fiduciary duty is not going to make life better for users. However, there is one area where this might apply: the lack of incentive to produce secure software is widely recognised as a significant problem, and is also an area where the "confidence" aspect of fiduciary duty overlaps with a duty of care. Therefore developers who negligently fail to consider the security aspects of their software should be considered to have failed in their fiduciary duty.
  • Duty of loyalty: for directors this is the duty not to use their position to further their private interests. For a software developer this is straightforward: the developer should not use their privileged access to the user's computer to further their own interests. So downloading information from the user's computer (unless the user explicitly instructs this to happen) would be a breach of fiduciary duty. So would using the processing power or bandwidth owned by the user for the developer's own purposes, for instance by mining bitcoins or sending spam.
  • Duty of good faith: the developer should write code that will advance the user's interests and act in accordance with the user's wishes at all times.
  • Duty of confidentiality: if the developer is entrusted with user information, for example because the software interfaces with cloud storage, then this should be held as confidential and not disclosed for the developer's benefit.
  • Duty of prudence: This does not map onto software development.
  • Duty of disclosure: for a director this means providing all relevant information to the shareholders. For a software developer, it means completely and honestly documenting what the software does, and in particular drawing attention to any features which a user might reasonably consider against their interests. Merely putting some general clauses in the license is not sufficient; anything that could reasonably be considered contrary to the user's interests should be prominently indicated in a way that enables the user to prevent it.
One gray area in this is software that is provided in exchange for personal data. Many "free" apps are paid for by advertisers who, in addition to the opportunity to advertise to the user, also pay for data about the users. On one hand, this involves the uploading of personal data that the user may not wish to share, but on the other hand it is done as part of an exchange that the user may be happy with. This comes under the duty of disclosure. The software should inform the user that personal data will be uploaded, and should also provide a detailed log of exactly what has been sent. Thus users can make informed decisions about the value of the information they are sending, and possibly alter their behavior when they know it is being monitored.
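As a rough illustration of what such disclosure might look like in practice, here is a minimal Python sketch. It is my own illustration, not anything from a real product: the function name, endpoint and data fields are all hypothetical. The idea is simply that the software appends a record of exactly what it is about to send to a local, user-readable log before uploading it.

    # Hypothetical sketch of a "disclosure log": every piece of personal
    # data is written to a local, user-readable log before it is uploaded.
    # The endpoint and field names are invented for illustration.
    import json
    import time

    DISCLOSURE_LOG = "disclosure_log.jsonl"

    def upload_with_disclosure(endpoint: str, payload: dict) -> None:
        """Record exactly what is being sent, then (notionally) send it."""
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "endpoint": endpoint,
            "data": payload,
        }
        with open(DISCLOSURE_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")
        # ... the actual network call to the advertiser's endpoint would go here ...

    upload_with_disclosure("https://ads.example.com/collect",
                           {"age_range": "25-34", "app_version": "1.2"})

A user who reads the log can see every item that has left their machine, which is the informed-decision point made above.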


Monday, March 14, 2016

Letter to my MP about the Investigatory Powers Bill

I've just sent this email to my MP. Hopefully it will make a difference. I've asked for permission to post her reply.

---------------------------

Dear Ms Fernandes,

I am a resident of [redacted]. My address is [redacted]. I am writing to you a second time about the proposed Investigatory Powers Bill. I wrote to you about this on 5th November 2015 urging you to try to mitigate the worst aspects of this bill, and now I am writing to urge you to vote against this bill when it comes to Parliament.

I am deeply concerned about the powers that this bill would give to the Home Secretary. However in order to keep this email reasonably short I will concentrate on one particularly dangerous power.

If this bill becomes law then the Home Secretary would be able to order any "communications company" (the term could mean anyone involved in providing software or equipment that enables communication) to install any surveillance feature the Home Secretary wishes. The recipient of this order would be unable to appeal against it, and would be prevented from revealing the existence of the order. There is no sunset time on this gag clause: it will last as long as the Home Secretary and the security services wish to maintain it.

It is true that these orders will also have to be signed off by a judge, but that will only verify that the order complies with whatever procedures are in place at the time. Furthermore these judges will only ever hear one point of view on the reasonableness and proportionality of the orders, and this can only result in the erosion of these safeguards over time.


I want to illustrate the danger of this power to weaken security by showing how it would impact a common method of selecting encryption keys called Diffie-Hellman Key Exchange. This method is used by web browsers and email programs whenever they make a secure connection (e.g. to web addresses starting "https"). It is also used by "Virtual Private Networks" (VPNs) which are widely used by businesses to allow employees to work remotely, and I expect that Parliament has one to allow MPs to access their email. You may even be using it to read this.

I want to show that any attempt to intercept messages where Diffie-Hellman is used will greatly weaken it, and that this will worsen our security rather than improving it. I will show this by linking the NSA to the compromise of the Office of Personnel Management (OPM) in America last year.

I don't propose to explain the technical details of Diffie-Hellman. What it means in practice is that two computers can exchange a few messages containing large numbers derived from secret random values, and at the end of this exchange they will share a secret key without that key ever having been sent over the Internet.
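To make this concrete, here is a toy sketch in Python of the exchange (my illustration, using textbook-sized numbers; real systems use primes hundreds of digits long and vetted cryptographic libraries):

    # Toy sketch of Diffie-Hellman key exchange; values far too small
    # to be secure, but the shape of the exchange is the same.
    import secrets

    p, g = 23, 5   # publicly agreed prime and generator (toy values)

    # Each side picks a private random number; only g^x mod p is sent.
    alice_private = secrets.randbelow(p - 2) + 1
    bob_private = secrets.randbelow(p - 2) + 1

    alice_public = pow(g, alice_private, p)   # sent over the network
    bob_public = pow(g, bob_private, p)       # sent over the network

    # Each side combines the other's public value with its own secret.
    alice_key = pow(bob_public, alice_private, p)
    bob_key = pow(alice_public, bob_private, p)

    assert alice_key == bob_key   # shared key, never transmitted

Only the public values cross the network; the shared key is computed independently at each end.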

Suppose that a communications company provides software that uses Diffie-Hellman, and receives an order from the Home Secretary that they must make the encrypted messages available to law enforcement and the intelligence agencies. What are they to do? They never see the secret keys, so they must do one of the following:

1: Modify the software to send a copy of the chosen key to someone. This is far less secure, and also very obvious. Anyone monitoring the packets sent by the programs will instantly see it.

2: Modify the software to make the keys or the encryption weak in a non-obvious way, so that the UK intelligence agencies can determine what the key is. For instance, the random numbers might be made more predictable in a subtle way (a toy sketch of this appears below).

These are the only two ways in which the communications company can comply with the order.
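To illustrate Option 2, here is a toy Python sketch (again my own illustration, not any real product's code) of the same exchange with the private value drawn from a deliberately tiny, predictable range, and of how an eavesdropper who knows about the weakness can recover it:

    # Toy sketch of a deliberately weakened key exchange.
    import random

    p, g = 23, 5

    weak_rng = random.Random(1234)         # fixed seed: predictable output
    private = weak_rng.randrange(1, 16)    # far too few possible values
    public = pow(g, private, p)            # observed on the wire

    # An eavesdropper who knows about the weakness simply tries every
    # possible private value until the observed public value matches.
    for guess in range(1, 16):
        if pow(g, guess, p) == public:
            print("recovered private value:", guess)
            break

The weakness is invisible to a casual observer of the traffic, but trivial to exploit for anyone who knows, or discovers, that it is there.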

We have seen what happens when Option 2 is chosen, because this was done to Juniper Networks firewall product [see ref 1 below]. Someone deliberately inserted "unauthorised code" which weakened the encryption used by this product in a very specific and deliberate way. There is no possibility that this was an accidental bug. The responsible party is widely believed to be the NSA, because secret briefings released by Edward Snowden made reference to the ability to intercept data sent via this product [ref 2], and it would be much easier for the NSA to infiltrate an American company than for anyone else to do it.

However there is something important that happens when software is updated: hackers (including foreign governments) scrutinize the updates to see what has changed. Normally they find that the old version of the software had a security hole which is now patched, so the patch flags up a way to attack computers that haven't been updated yet. But in this case, when Juniper issued an update to their firewall software, these hackers found the security hole in the *new* software.

Doing this kind of analysis in a systematic way for many security products is a very large job. Doing it in secret requires the resources of a government. So now not only could the NSA intercept communications sent via Juniper firewalls, but so could an unknown number of foreign governments. The Chinese were almost certainly one of them. Other nations known to have invested in cyber-attack capabilities include Russia, Israel and North Korea (although the last is probably not as capable yet).

Juniper products are widely used by the US Government. This is likely to have been one of the ways in which the Office of Personnel Management (OPM) was penetrated last year [ref 3]. The Chinese government is the prime suspect in this hack, through which the attackers have obtained copies of the security clearance applications of everyone who has ever worked for the US government.

So it seems that the NSA, by introducing a supposedly secret "back door" into a widely used product, cleared the way for the Chinese to obtain secret files on everyone who has ever worked for their government, including all of their legislators and everyone who works at the NSA. Nice job breaking it, Hero!


Now it is true that this is circumstantial; we have no hard evidence that the Juniper back door was inserted by the NSA, no hard evidence that the Chinese found it, and no hard evidence that this contributed to the OPM hack. But each of these is a real possibility. Even if the OPM hack didn't happen in exactly that way, deliberately weakening security makes events like this much more likely. If the Home Secretary orders a company to introduce weakened security, that fact will become apparent to anyone with the resources to dig for it. Once armed with that fact, they can attack through the same hole.

Furthermore, we would never find out when a disaster like the OPM hack happens under the regime described in the Investigatory Powers Bill. Suppose that, thanks to the weakened security ordered by the Home Secretary, secret government files are obtained by a hostile power, and the communications company executives are called before a Parliamentary inquiry to account for their negligence; how can they defend themselves if they are legally prohibited from revealing their secret orders?

More generally, we will never be allowed to learn about the negative effects of these secret orders. It would embarrass those who issued them, and they are exactly the people who would have to give permission for publication. So if Parliament passes this bill it will never be allowed to learn about the problems it causes, and hence never be able to remedy the mistake.

I have focused on only one of the measures in the Investigatory Powers Bill here, but there are many others in the bill that cause me great concern. To go through the whole bill in this level of detail would make this email far longer, and I know that you have many calls on your time. I can only ask you to believe that there are many similar issues. For these reasons I must urge you to vote against the bill when it reaches the House of Commons.

Yours sincerely,

Paul Johnson.


[1] http://forums.juniper.net/t5/Security-Incident-Response/Important-Announcement-about-ScreenOS/ba-p/285554

[2] https://assets.documentcloud.org/documents/2653542/Juniper-Opportunity-Assessment-03FEB11-Redacted.txt

[3] https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach