Data Privacy and Security: what are the real concerns?

Leo Cremonezi explores how much we value our privacy and the protection of our personal information.

We value our privacy and the protection of our personal information. We also value some control over who knows what about us. We certainly don’t want our personal information to be accessible to just anyone at any given time. But recent advances in information technology (IT) threaten privacy and have reduced our control over personal data, opening up the possibility of a range of negative consequences as a result of this access.
 

A few years ago, someone asked me which data would be the most costly if leaked. Of course, I didn’t know the answer then and I still don’t, but I’m sure the correct answer, if there is one, would change from time to time.

The 21st century has become the age of Big Data and advanced Information Technology allows for the storage and processing of exabytes of information. For companies, personal data about customers and potential customers are now also a key asset. At the same time, the value of privacy remains the subject of considerable controversy. The combination of the increasing power of new technologies and the decline in clarity and agreement on privacy gives rise to problems concerning policy and ethics.

Debates about privacy almost always revolve around new technology: brain imaging, drones, wearable sensors and sensor networks, social media, smartphones, Big Data, head-mounted displays and search engines. There are basically two reactions to this wave of new technology and its impact on personal information and privacy. The first is that we have zero privacy in the digital age and there is no way we can protect it, so we should get used to the new world and get over it. The alternative view is that our privacy is more important than ever and that we can, and must, attempt to protect it.

Geographically, too, Europe and the USA take distinct views on the subject: the former conceptualises issues of informational privacy in terms of ‘data protection’, the latter in terms of ‘privacy’. In discussing the relationship between privacy and technology, the notion of data protection is the more helpful, since it gives a relatively clear picture of what the object of protection is and by which technical means the data can be protected. At the same time, it invites answers to the question of why the data must be protected.

We generate loads of data when online. This is not only data explicitly entered by the user, but also numerous statistics on user behaviour: sites visited, links clicked, search terms entered. Data mining can be employed to extract patterns from such activities, which can then be used to make decisions about the user. These may only affect the online experience (advertisements shown), but, depending on which parties have access to the information, they may also impact the user in completely different contexts.
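As a minimal sketch of how such patterns can be extracted, consider co-occurrence counting over browsing sessions. The session data and page names below are entirely hypothetical; real data mining pipelines use far richer signals and models, but the principle of turning raw behaviour into actionable patterns is the same.

```python
from collections import Counter
from itertools import combinations

# Hypothetical clickstream: each entry is the set of pages one user
# visited in a single session.
sessions = [
    {"sports", "news", "weather"},
    {"sports", "news"},
    {"news", "weather"},
    {"sports", "news", "shop"},
]

# Count how often each pair of pages is visited together -- a very
# simple form of pattern mining over behavioural data.
pair_counts = Counter()
for visited in sessions:
    for pair in combinations(sorted(visited), 2):
        pair_counts[pair] += 1

# The most frequent pair suggests a behavioural pattern that could be
# used to decide, for instance, which advertisement to show next.
print(pair_counts.most_common(1))  # [(('news', 'sports'), 3)]
```

Even this toy example shows how quickly incidental behaviour (which pages were open together) becomes a reusable profile of the user.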

In particular, Big Data may be used in profiling the user, creating patterns of typical combinations of user properties, which can then be used to predict interests and behaviour. An innocent application is “you may also like …”, but, depending on the available data, more sensitive derivations may be made, such as most probable religion or credit score. These conclusions could then in turn lead to inequality or discrimination. When a user can be assigned to a particular group, even only probabilistically, this may influence the actions taken by others. For example, profiling could lead to refusal of insurance or a credit card, in which case profit is the main reason for discrimination.
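The probabilistic assignment described above can be sketched in a few lines. The records, attributes and group labels here are invented for illustration; real profiling systems use statistical models over millions of records, but the underlying move is the same: a new user inherits the label of past users they resemble.

```python
# Hypothetical training records: (attributes, group) pairs, of the kind
# that might feed decisions such as insurance or credit offers.
records = [
    ({"urban", "renter"}, "high_risk"),
    ({"urban", "renter"}, "high_risk"),
    ({"rural", "owner"}, "low_risk"),
    ({"urban", "owner"}, "low_risk"),
]

def group_probability(attributes, group):
    """Fraction of past users matching `attributes` who carry `group`."""
    matches = [g for attrs, g in records if attributes <= attrs]
    if not matches:
        return 0.0
    return matches.count(group) / len(matches)

# A new user sharing even one attribute with past "high_risk" users is
# assigned that label probabilistically -- discrimination by resemblance.
p = group_probability({"urban"}, "high_risk")  # 2 of 3 urban users
```

Note that the new user has done nothing individually to earn the label; the inference rests entirely on the behaviour of others in the same group.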

As users increasingly own mobile networked devices, more and more data is collected and transmitted. These devices typically contain a range of data-generating sensors, including GPS, movement sensors, and cameras, and may transmit the resulting data via the Internet or other networks. One particular example concerns location data. Many mobile devices have a GPS sensor that registers the user's location, but even without a GPS sensor, approximate locations can be derived, for example by monitoring the available wireless networks. As location data links the online world to the user's physical environment, with the potential of physical harm (stalking, burglary during holidays, etc.), such data-sets are often considered particularly sensitive.
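To see how location can be derived without GPS, here is a deliberately simplified sketch: the device's position is estimated as the centroid of Wi-Fi access points it can see. The access-point identifiers and coordinates are hypothetical; real Wi-Fi positioning services use large crowdsourced databases and signal-strength weighting, but the basic idea is the same.

```python
# Hypothetical database mapping access-point identifiers (BSSIDs) to
# known coordinates (latitude, longitude).
AP_LOCATIONS = {
    "aa:bb:cc:01": (51.5007, -0.1246),
    "aa:bb:cc:02": (51.5010, -0.1250),
    "aa:bb:cc:03": (51.5013, -0.1242),
}

def estimate_location(visible_bssids):
    """Approximate position as the average of visible APs' coordinates."""
    points = [AP_LOCATIONS[b] for b in visible_bssids if b in AP_LOCATIONS]
    if not points:
        return None  # no known access point in sight
    lat = sum(p[0] for p in points) / len(points)
    lon = sum(p[1] for p in points) / len(points)
    return (lat, lon)

# Merely scanning for networks -- no GPS involved -- already narrows the
# user's position down to a city block.
loc = estimate_location(["aa:bb:cc:01", "aa:bb:cc:02"])
```

The privacy point is that the sensor doing the locating here is not a location sensor at all, which is exactly why users underestimate what such data reveals.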

Many of these devices also contain cameras which, when applications have access, can be used to take pictures. These can be considered sensors as well, and the data they generate may be particularly private. For sensors like cameras, it is assumed that the user is aware when they are activated, and privacy depends on such knowledge. For webcams, a light typically indicates whether the camera is on, but this light may be manipulated by malicious software. In general, “reconfigurable technology” that handles personal data raises the question of user knowledge of the configuration.

The Future

But would it be possible for IT itself to solve privacy concerns? Whereas IT is typically seen as the cause of privacy problems, there are also several ways in which it can help resolve these issues. There are rules, guidelines or best practices that can be used for designing privacy-preserving systems. Such possibilities range from ethically-informed design methodologies to using encryption to protect personal information from unauthorised use.
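One concrete privacy-preserving technique of the kind referred to above is keyed pseudonymisation: replacing personal identifiers with keyed hashes, so that records can still be linked internally without the raw identifier ever being stored. The sketch below uses Python's standard `hmac` module; the key and e-mail address are hypothetical, and in practice the key would live in a secrets manager, not in source code.

```python
import hashlib
import hmac

# Hypothetical secret key -- in a real system this would never appear
# in the codebase.
SECRET_KEY = b"keep-this-in-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Deterministic, keyed hash of a personal identifier (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Same input and key always yield the same token, so analytics can join
# records on the token without ever storing the e-mail address itself.
token = pseudonymise("alice@example.com")
```

Because the hash is keyed, an attacker who obtains the tokens alone cannot simply re-hash a dictionary of candidate e-mail addresses to reverse them, which is the advantage over a plain unkeyed hash.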

What about the future? There are emerging technologies that may have an even more profound impact. Consider for example cognitive-computer interfaces, which could be directly connected to the brain. Not only would behavioural characteristics then be subject to privacy considerations; even one's thoughts would run the risk of becoming public, with the decisions of others being based upon them. Such developments therefore require further consideration of the reasons for protecting privacy. In particular, if brain processes could be influenced from the outside, autonomy would be a value to reconsider to ensure adequate protection.

Apart from evaluating IT against current moral norms, one also needs to consider the possibility that technological changes influence the norms themselves. Technology thus does not only influence privacy by changing the accessibility of information, but also by changing the privacy norms themselves. For example, social networking sites invite users to share more information than they otherwise might. This “oversharing” becomes accepted practice within certain groups. With future and emerging technologies, such influences can also be expected and therefore they ought to be taken into account when trying to mitigate effects.

Finally, it’s appropriate to note that not all social effects of IT concern privacy. Examples include the effects of social network sites on friendship, and the verifiability of general election results. Value-sensitive design approaches and impact assessments of IT should therefore not focus on privacy alone, since IT affects many other values as well.
