Meet the Fellows – Andrew Adams

20 May 2015

Hello Andrew. Can you start by briefly introducing yourself and the research that you are involved in?

My usual description is that I am interested in too many things; I tend to be very broad in what I do. The core of what I am interested in is usually privacy and surveillance-related matters. More recently I have been getting more involved in security issues, at least partially because it is becoming very clear that privacy is very difficult to achieve without good security these days. But I am also interested in a range of other topics: I have done work on Massive Open Online Courses (MOOCs) and the future of digital education, on copyright issues and on cybersecurity, and I have also been involved in general security projects and security technologies, advising on the ethics of those.

In OpenForum Academy we have people who work in different fields but tend to share a common view of the world and a similar perspective on the benefits of “openness”. How would you say that your own research relates to other topic areas through the lens of “openness”?

I am interested in “openness” in a number of areas. For example, at the moment I have a project on Free Software: I am working on how organisational structure affects individuals who are involved in Free Software projects, and how people’s work involvement in Free Software might affect their personal involvement and vice versa. I am looking into whether people choose to get into Free Software because they are interested in doing it anyway, or whether they end up working on Free Software simply because that is what their employer tells them to do.

“Too often at the moment we give our data to an organisation and then it is opaque to the data subject what is done with their data.”

Another area of “openness” that I have been interested in for a long time is the Open Access movement, through which I know Peter Murray-Rust, who is also heavily involved. I am also interested in questions of openness as they relate to issues of privacy. Obviously, in some sense privacy is one of those things where I am interested in less openness: less openness of organisations over-sharing, and of individuals over-sharing their own and other people’s data. But I am also interested in openness as it pertains to the necessity of organisations being open about what they do with people’s data. Too often at the moment we give our data to an organisation and then it is opaque to the data subject what is done with their data. This is one of the things that they are struggling with in the new proposals for the Data Protection Regulation in the EU: how to take the principles we have about transparency, about what holders of your data will do with it, and make them feasible for people in the modern world, where so many companies and other organisations (public and private sector) gather our data, and where we give them our data either in return for something or because we have to (particularly in the case of government data). So I am interested in the openness of knowing what happens to the data, but also in closing it down, so that people do have control over their own data.

In terms of surveillance, I think we should know more about what government surveillance goes on and, in particular, I would like to see more openness in the justifications for surveillance and an open evaluation of how effective that surveillance is. A big problem with surveillance is that it is presented as a fait accompli and there is never any evaluation of how effective it is.

I had a look at the paper that you published in 2014, “Facebook Code: Social Network Sites Platform Affordances and Privacy”, where, if I understand correctly, you argue in favour of more regulation for clearer consent by users on the use of their personal data? I am sure you have followed some of the ongoing discussions in the EU on the new Data Protection Regulation, which is now facing significant pushback from a number of players, slowing down progress on the legislative debates. Do you think this is going to be resolved anytime soon, and what do you think the new rules are going to look like in a broad sense?

It is very, very murky. It is a very detailed political question as to how much there is a willingness to delay these things and get a good result, and how much there is a push to get something in place which will be a limited compromise. So far, certain members of the European Parliament, at least, have been very strong in support of some members of the European Commission who have said that “effective” reductions in protection from the earlier Directive will not be acceptable.

“Whenever we are looking at going to a website and engaging with them, we are in fast-thinking mode: we want whatever the site offers in general and that is why we are going to it and anything getting in our way when we go there is seen as an obstacle to be overcome.”

Now there is a little bit of wiggle room around the term “effective”, because some parts of the original Directive are very principled but are not effective in practice. One of the things they were originally trying to do was to make those principles effective in practice, while accepting that sometimes they are not going to be fully implementable, in which case it should at least be clear what the limits of their application are. But what we have seen is pushback from powerful and rich groups, particularly a number of American firms (although some European firms are involved as well) who have an interest in reducing the effectiveness of the proposed regulation, through lobbying of national governments that is then transmitted into the European Council.

In terms of whether we are going to get it through, it is too murky to tell at the moment, I think. It very much depends on how willing both the Commission’s and the Parliament’s people are to say: “No, we are not going to accept this much watering down. We will accept some suggestions on where things are not feasible, and put them properly into law so that you know what the actual limits are and you do not have principles which cannot really be put into practice.” But there is also the possibility that the Council will just say: “You have to accept these or it will not go forward.” There are pressures to get a new law in place and those may prevail. So we may get something that actually weakens what has been internationally regarded as one of the gold standards of data protection worldwide, which would be a great pity.

In these discussions the issue that you raised in your paper on user consent was one of the cornerstones of the debate, so I want to dig a little bit into this. I am wondering whether Internet users really care about this. I am sure you are familiar with the Directive on Privacy and Electronic Communications from 2009, which some people call the “Cookies Directive”, which added the requirement for all websites to inform users about cookies being stored and to get very explicit consent; that has resulted in a number of websites adding a pop-up when you visit the site. I think that a lot of people do not even read this and consider it more of a nuisance than anything else. Don’t you think that requiring explicit consent to the processing of personal data would result in something similar to this situation?

Well, I think that you might have slightly misunderstood my point in the Facebook Code paper. The point I was making there was more the other way around: that explicit consent is too easy to manipulate, as you just said with the cookies. The immediate desire of the user is to get the information or to engage with the site, whatever the site is offering them. We have very good work from the security arena, on the difficulty of achieving usable security, which says that anything that gets in the way of the immediate goal of the user will be seen as an obstacle, and they will look for a way around it. Now, in security that is a problem because it means people act in insecure ways. In privacy it is a problem not for the organisation, as in the security setting, but for the individual themselves. It is the “thinking, fast and slow” argument from Kahneman. Whenever we are looking at going to a website and engaging with them, we are in fast-thinking mode: we want whatever the site offers in general and that is why we are going to it and anything getting in our way when we go there is seen as an obstacle to be overcome. If that obstacle is that they want our consent to basically do whatever they like with our data, most people in most circumstances are going to be tempted and probably will give that consent.

“People become complacent and they change their attitude; it is not that they do not value their privacy, it is that they do not see a way of getting what they want in a way that is privacy-preserving.”

So what I am trying to argue in the Facebook Code paper is that we need stronger regulation restricting what organisations can even ask for, what they are allowed to do, and how much they are allowed to willy-nilly collect data they do not need and then spread it around to anybody else. The argument for stronger regulation is based on the very well-known psychological principles I put forward: when people are presented with these choices they will first make a fast-thinking choice, which is “I want the information, I want the service, or I want to buy the goods now, so I will give up my privacy now”. And then, longer term, even when they get to the slow-thinking side, they will tend to rationalise that decision and say: “well, yes, on balance it may be okay that I accepted this, because it was the only way of getting it”. People become complacent and they change their attitude; it is not that they do not value their privacy, it is that they do not see a way of getting what they want in a way that is privacy-preserving. That is why we need regulation. This is, in a general political sense, precisely where regulation is needed. You cannot rely on the market to solve these sorts of problems, because the market will always favour those with the money, those able to offer an initial fast-thinking benefit to the consumer to persuade them to give away something very valuable. It is odd: privacy is something that people say they value but then give up. We know this from the work of people like Alessandro Acquisti. But when it comes back to bite them, and it does, they do become aware of its value. Even then it is not something that we could easily put a direct value on, because the harm is long-term and we cannot see exactly where it came from.

We experience the harm of our privacy having been invaded in many ways. For instance, I personally have been a victim of so-called “identity theft”. I do not like that term; I prefer “impersonation fraud” as a description. When you are a victim of that, you feel the problems very significantly, but it can be very difficult, time consuming and sometimes even impossible to figure out whether there was something you could have done to prevent it. This is why we need regulation. The market will not solve the problem for us and individuals are not empowered to solve those problems, so we need stronger regulation in these areas: to say things like “the data subject actually owns the data”, and that it is not the company holding the data that has any level of ownership in it. They have a right to process it in certain circumstances, but that should not just include whatever is to their benefit; it should only include whatever is to mutual benefit or to the benefit of the data subject.

So what you are saying, if I understand correctly, is that the law should serve as a way to protect people from themselves, or from their future selves, if they do not have the long-term perspective to appreciate the benefits of their privacy?

Well, it is not so much protecting them from themselves, because you want to let people use these services; what you want to do is allow them to use these services and produce a reasonable outcome. I mean, I am a technophile, I like all this modern technology. What I do not like is the fact that a lot of this is irrevocable. Once you give data to people it is almost impossible to get it back, and almost impossible to control it.

“The market will not solve the problem for us and individuals are not empowered to solve those problems, so we need stronger regulation in these areas.”

We do not have any feasible levers to use once we have made that initial decision. So it is not so much protecting people from themselves as protecting people from exploitation. If we look historically we can see similar arguments in, say, the labour movements against unfair contracts of employment and unsafe working conditions. The same arguments were made in the Victorian era: people are free to take work in these dangerous environments, it is a free market, and if they do not want to risk life and limb working in a cotton mill, well, they can choose not to do so.

We all know how that turned out.

That is why we decided that we were going to regulate: we were going to say that nobody can create unsafe working environments, because it is not socially acceptable to do so. I think we need the same approach here. We need to regulate, allowing some uses of data because they are to the benefit of the individual and society in certain cases, but regulating that rather than leaving it a free-for-all dependent on the idea that once you have given your consent, that is it, and everything is fair game after that.

Are you not concerned that by removing the “sting” from what can happen if you give too much away, Internet users are not going to ask themselves that question anymore? I am thinking of initiatives that have tried to empower users to be more conscious about the data that they are giving away. Look at projects like the Mozilla privacy icons or the browser plugin Disconnect. They also have a system of privacy icons which allows you to have a quick look at the plugin when you open a new web page, and it will tell you in a visual, understandable way what the EULA contains and does not contain. Do you think that empowering users is another way to achieve this goal?

“The idea that all you need to do is give people information in understandable form seriously underestimates the cognitive load.”

I do not think so. The idea that all you need to do is give people information in understandable form seriously underestimates the cognitive load. Your average net user will visit 20 or 40 sites in any individual day. They will visit a number of news sites every day, or at least every week. And requiring them, every time they want to go somewhere new, to go through this thing, even just looking at something which says “this is what they say they will do”? No, that is too much. It puts too much load on people. It just does not reflect how people work. People are in fast-thinking mode when they want the information; they do not want to look at these things.

No, what we need to do is empower the users so that the users can actually make reasonable choices ahead of time. So they can say: “this is what I value, these are the sorts of things I am willing to give for this level of return”. For example, we have been working for a couple of years now with the Japanese telco KDDI on a system called Privacy Policy Manager, which takes something like the Vendor Relationship Management approach of saying “this is what I will accept”: giving users the legal authority to say “I will only give you this, and these are the limits on what you can do with it”. Now there is still an issue there if nobody on the other side of the equation is willing to offer the services within the limits that users want.
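To make that concrete, here is a minimal Python sketch of the kind of ahead-of-time policy matching described above. It is purely illustrative: the data categories, purposes and function names are hypothetical, and this is not the actual Privacy Policy Manager API or data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataRequest:
    """One item of data a site asks for, and the purpose it states."""
    category: str  # e.g. "email", "location" (illustrative categories)
    purpose: str   # e.g. "service_delivery", "third_party_marketing"

# The user's standing policy, written once in slow-thinking mode.
# Anything not explicitly allowed here is refused by default.
USER_POLICY = {
    ("email", "service_delivery"): True,
    ("location", "service_delivery"): True,
    ("location", "third_party_marketing"): False,
}

def evaluate(requests):
    """Split a site's requests into pre-approved items and items
    that need an explicit, conscious decision from the user."""
    granted, needs_decision = [], []
    for req in requests:
        if USER_POLICY.get((req.category, req.purpose), False):
            granted.append(req)
        else:
            needs_decision.append(req)
    return granted, needs_decision

# A site asks for more than the user pre-approved: the pre-approved
# item passes silently; the mismatch is surfaced, not auto-granted.
granted, needs_decision = evaluate([
    DataRequest("email", "service_delivery"),
    DataRequest("location", "third_party_marketing"),
])
print([r.category for r in granted])         # ['email']
print([r.category for r in needs_decision])  # ['location']
```

The point of such a design is that the deliberate, slow-thinking decisions are made once, when the policy is written; at browsing time only genuine mismatches are surfaced for a conscious choice.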

“What we need to do is empower the users so that the users can actually make reasonable choices ahead of time.”

But the idea is to create a playing field that gives users sufficient choice in the marketplace that their choice is not between having to work in an unsafe environment or starving. And that is what we have at the moment: most often, if you want to go online and do business, you are choosing between company A, which will take this set of data and misuse it, or company B, which will take a similar but not identical set of data and misuse it. Which set of data do you want to be misused?

That is the choice we have currently. So I do not think the answer is individually looking at the information each time you go to a site. No, you need to be able to set up basic principles, and then make individual choices when something comes up that does not meet your general requirements and you have to decide “is this important enough?” But you are still going to have that system set up by the slow thinking of “these are the things I value”. And people also have to be empowered to enforce those choices.

That is actually very interesting, this idea of setting out upfront what you are prepared to accept for any type of service. Apart from the one you just mentioned, I am not aware of any projects that do this on a global scale. Why is that? Do you think there is not enough demand from users or market incentive to create such a system?

Our work with KDDI involves looking at user interest, and there is significant user interest. However, one of the key things there is that when you ask people what they would pay for this, as always when you ask people what they will pay, they become very mean. They do not think they would pay very much, if anything, for this sort of service. That is partly because, when we look at other work we have done on privacy, people actually do not know the law very well: they think that it protects their privacy much better than it does, or they at least think that the law should protect their privacy much more than it does.

“People actually do not know the law very well: they think that it protects their privacy much better than it does, or they at least think that the law should protect their privacy much more than it does.”

So we come back to things like workplace safety and general consumer protection law. We spent much of the 20th century setting up consumer protection law, and we spent much of the 20th century setting up safe working environment law. We now need to set up a safe online environment law that says “this is the reasonable level I am prepared to accept”.

Before we close this interview, are there any interesting articles or resources that you would like to point to for our readers?

Yes: there is the Facebook Code paper that was brought up earlier in the interview. We are still in the process of writing up our recent experiments with users on the Privacy Policy Manager, but here is a link to the definition of the proposed service.

Thanks Andrew!