
Security v. Security – Tech Companies, Backdoors and Law Enforcement Authorities

Grab the popcorn, this is going to be fun!

The request for access to the information stored on the smartphone of one of the San Bernardino shooting suspects has intensified the debate on the implementation of backdoors enabling access to mobile devices for law enforcement purposes.

The issue is not whether law enforcement authorities, armed with a proper warrant, are entitled to search a mobile phone and access its content. That much is straightforward. They are.

What is at stake is Apple’s objection to a court order requiring it to provide the ongoing federal investigation with the means to access that information. More concretely, Apple has been ordered to write code modifying the iPhone software so as to bypass an important security function: the feature which automatically erases the device’s data after ten incorrect password attempts. This would enable the authorities to enter wrong credentials endlessly and eventually crack the device’s password by brute force, without risking the deletion of content, and thus to access and extract the information contained on the suspect’s iPhone.
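To see why the auto-erase limit matters so much, here is a minimal, purely illustrative sketch of a brute-force search over a four-digit passcode space; `check_pin` and the passcode are hypothetical stand-ins, not Apple’s actual verification routine. With only 10,000 combinations, an attacker who can try passcodes without limit succeeds almost instantly, which is exactly why the ten-attempt wipe exists.

```python
from itertools import product

def brute_force_pin(check_pin, digits=4):
    # Try every numeric PIN of the given length until one unlocks.
    # `check_pin` stands in for the device's passcode check; on a real
    # iPhone the auto-erase feature would wipe the data after ten
    # failures, which is the safeguard the court order asks Apple to disable.
    for attempt in product("0123456789", repeat=digits):
        candidate = "".join(attempt)
        if check_pin(candidate):
            return candidate
    return None

secret = "7294"  # hypothetical passcode, for illustration only
print(brute_force_pin(lambda pin: pin == secret))  # → 7294
```

A four-digit space is exhausted in milliseconds; longer alphanumeric passcodes grow the search space exponentially, which is why brute force is only practical once the attempt limit is removed.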

The use of new technologies to conduct criminal and terrorist activities has made it difficult to ignore the advantages of accessing communications carried over those technologies when investigating, preventing and combating crime. Law enforcement authorities point out that such access is particularly pertinent in the fight against terrorism, paedophilia networks and drug trafficking.

In this context, the use of encryption in communications has become the cornerstone of the debate. Investigative authorities want backdoors implemented in mobile devices in order to ensure access when necessary. Contrastingly, companies such as Apple refuse to retain access keys to such encrypted communications – and, consequently, to provide them upon request of law enforcement authorities.

Just recently, FBI Director James Comey told the US Senate Intelligence Committee that intelligence services are not interested in a ‘backdoor’ per se to secure devices. Instead, what is at stake is requiring companies to hand over the encrypted messages sent through those devices. James Comey is a wordplay habitué: he once said he wanted ‘front doors’ instead of ‘back doors’.

In the same line, White House Press Secretary Josh Earnest recently stated that, under the abovementioned court order, Apple is not being asked to redesign its products or to create a backdoor.

While these are, at the very least, very puzzling statements, they nevertheless clearly express the underlying motivation: a ban on encryption products without backdoors, and the implementation of backdoors.

Indeed, if companies can be required to undermine their own security and privacy features in order to provide access to law enforcement authorities, regardless of how legitimate the purpose and whatever designation one might find preferable, that is the very definition of a backdoor.

It never ceases to amaze me how controversial it seems to be, among free people living in a democracy, that the implementation of backdoors is – on both legal and technological grounds, and for the sake of everyone’s privacy and security – a very bad idea.

Well, the main argument supporting the concept is that such a technological initiative will chiefly help combat criminal activity. That is unquestionably a very legitimate purpose. And nobody opposing the implementation of backdoors actually argues otherwise.

However, it is a fact that backdoors would automatically make everyone’s communications less secure, exposing them to a greater risk of attacks by third parties and to further privacy invasions. Moreover, no real guarantees against the risk of ensuing abuse are ever provided. Those arguing in favour of access to information through backdoors fail to adequately frame the context: it is vaguely stated that such a mechanism will be used when necessary, without any strict definition. What is necessary, anyway? Would it depend on the relevance of the information at stake? Would it depend on the existence of alternative means, or on how burdensome those are?

At the very least, if Apple complies with the order, it is difficult to believe that similar requests will not immediately ensue. In fact, one may venture that they can be expected, and will certainly be encouraged, in the future. Ultimately, the cracking software thus created could be used and abused in future cases. And this is particularly worrisome considering the lack of a legal framework and of judicial precedent.

One may be tempted to sacrifice privacy in the interest of public security, and that is not in itself a wrongful viewpoint; I don’t know anyone who would disagree. Except that backdoors have their own limitations when it comes to fighting terrorism, for instance. It is harder to support backdoors as a means of preventing criminal activity when confronted with their inherent inefficiency and limitations, which seem to go unacknowledged by their supporters.

While companies may be forced to implement such backdoors to provide access to encrypted communications, there is a myriad of alternatives in the marketplace for criminals seeking encrypted products with no backdoors installed: encryption apps, file encryption, open source products, virtual private networks…

Let’s talk about ISIS, for instance. It has been alleged – without further demonstration – that the group has its own open source encrypted communications app. Therefore, apart from weakening the safety of communications for everybody relying on encrypted messaging apps, and considering the open source nature of the app ISIS is said to use, the implementation of backdoors would be pointless for the purpose intended to be achieved.

That said, one can easily understand Apple’s stance. Having built its reputation on the privacy and security provided by its devices, it is very risky from a commercial viewpoint to be asked to develop software that counters its core business. Indeed, in 2014 Apple modified its software precisely so that it would be unable to unlock its smartphones and access its customers’ encrypted data.

The fact that the company is now being asked to help law enforcement authorities by building a backdoor around a security function that prevents decryption of the device’s content appears to be just another way of achieving the same outcome. Under a different designation.

This goes well beyond requiring companies to comply with a lawful order and warrant to the extent they are able to: requesting private companies to create a tool intended to weaken the security of their own operating systems simply defies good sense. Indeed, it amounts to requiring (forcing?) private companies to create and deliver to law enforcement authorities hacking tools which put everyone’s privacy and cybersecurity at risk.

And if this becomes an accepted requirement in democratic systems, whether by precedent or through legislative changes, one can only wonder with what enthusiasm such news will be welcomed by repressive regimes eager to expand their surveillance powers.

From an EU viewpoint, considering how uncertain the future of the Privacy Shield framework is, and despite the existing divergences among EU Member States in respect of encryption, this whole case certainly does not solve any trust issues regarding the security of the data transferred to the US.

Truecaller: In the crossroad between privacy and data protection

Let me see, where am I uploading others’ personal information today?

As I have already made clear in a previous post, there is little I find more annoying than being bothered, at the end of a particularly stressful workday, by unrequested telemarketing or spam phone calls. And while I understand that, on the other side of the line, there is a person acting under the direct orientation – and possibly supervision – of his or her employer, these calls almost always feel like a test of one’s patience.

Therefore, mobile software or applications that identify certain numbers in advance, by replicating the Caller ID experience (as if the number were saved in your contact list) or by automatically blocking undesirable calls, have found here a market in which to succeed.

That said, you have certainly heard of, and possibly installed, apps such as Current Caller ID, WhosCall or Truecaller. Most probably, you find them quite useful for avoiding unwanted contacts.

As I have, on several occasions, unlisted my number from the Truecaller database, only to notice that it eventually ends up on that list all over again, I want to address today some specific issues regarding that app.

Truecaller is freely available on iOS and Android and is quite efficient at what it promises, providing its subscribers with a humongous database of previously identified contacts. In particular, it enables users to identify spam callers without being required to hang up first.

This efficiency is the result of the data provided by the millions of users who have downloaded the app on their smartphones.

How?

Well, it suffices that a user allows the app to access his or her contact list, as foreseen in the end user agreement – which many might not have read. Once this consent has been obtained, the information in the user’s address book is uploaded to Truecaller’s servers and made available to the rest of its subscribers.

Thanks to this crowd-sourced data system, you are able to identify unknown numbers.

Therefore, it suffices that another user has marked a given contact as spam for you to be able to immediately identify a caller as such and save yourself the patience-consuming contact. Indeed, quite unambiguously, if a number flagged as unwanted by others calls you, the screen of your smartphone will turn red and display the image of a shady figure wearing a fedora and sunglasses.

On the downside, if anybody has saved your name and number in their address book, it suffices that this one person has installed the Truecaller app on their mobile phone and accepted the abovementioned permission clause for your name and number to end up in that database.
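The crowd-sourcing mechanism just described can be sketched as a toy in-memory directory; the class, method names and spam threshold below are hypothetical illustrations of how such a service could work, not Truecaller’s actual design. Note how one user’s consent uploads other people’s numbers, and a handful of spam reports flags a number for every subscriber.

```python
from collections import defaultdict

class CallerDirectory:
    # Toy model of a crowd-sourced caller-ID service (hypothetical design).
    def __init__(self):
        self.names = {}                       # number -> name uploaded by some user
        self.spam_reports = defaultdict(int)  # number -> count of users flagging it

    def upload_contacts(self, address_book):
        # One user's consent uploads *other people's* numbers and names.
        self.names.update(address_book)

    def report_spam(self, number):
        self.spam_reports[number] += 1

    def identify(self, number, spam_threshold=3):
        # What any subscriber sees for an incoming call.
        return {
            "name": self.names.get(number, "Unknown"),
            "spam": self.spam_reports[number] >= spam_threshold,
        }

directory = CallerDirectory()
directory.upload_contacts({"+351900000001": "Alice Example"})  # fictional data
for _ in range(3):
    directory.report_spam("+351900000002")

print(directory.identify("+351900000001"))  # → {'name': 'Alice Example', 'spam': False}
print(directory.identify("+351900000002"))  # → {'name': 'Unknown', 'spam': True}
```

The point of the sketch is that Alice never installed anything, yet her name and number are now served to every subscriber – which is precisely the non-user problem discussed below.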

A new interface enables users to add information from their social media channels. Therefore, besides your contact information, if users activate the use of third-party social network services such as Facebook, Google+, LinkedIn or Twitter, Truecaller may upload, store and use the list of identifiers associated with those services linked to your contacts in order to enhance the results shared with other users.

Moreover, the app has recently been updated to introduce a search function, enabling you to search for any phone number and find any person’s contact details.

In the same line, Facebook – which is only the largest social network in the world – has decided to put the data its users provide to new uses with the app Facebook Hello. In that regard, users are required to grant it access to the information contained in their Facebook account; indeed, Hello uses Facebook’s database to provide the details of a caller. Contrastingly, other apps such as Contacts+ integrate information provided across different social networks.

While it is undeniably useful to identify the person behind an unknown number, this means that others will likewise be able to identify you when you contact them, even if they do not have your number.

Truecaller raises several privacy and data protection concerns. In fact, as names and telephone numbers enable the adequate identification of natural persons, there is no doubt that such information constitutes personal data.

Nevertheless, in Truecaller’s own words:

When you install and use the Truecaller Apps, Truecaller will collect, process and retain personal information from You and any devices You may use in Your interaction with our Services. This information may include the following: geo-location, Your IP address, device ID or unique identifier, device type, ID for advertising, ad data, unique device token, operating system, operator, connection information, screen resolution, usage statistics, version of the Truecaller Apps You use and other information based on Your interaction with our Services.

This is particularly problematic considering that Truecaller clearly and manifestly collects and processes the personal data of data subjects other than its users.

As for the information related to other persons, the policy states:

You may share the names, numbers and email addresses contained in Your device’s address book (“Contact Information”) with Truecaller for the purpose described below under Enhanced Search. If you provide us with personal information about someone else, you confirm that they are aware that You have provided their data and that they consent to our processing of their data according to our Privacy Policy.

This statement is, in my very modest opinion, absolutely ludicrous. Most people who have installed the service are not even aware of how it works, let alone that an obligation to notify an entire contact list and obtain individual consent rests upon them. In this context, it is paramount to take into consideration that, in the vast majority of cases, from the users’ viewpoint, the data at stake is collected and processed merely for personal purposes. Moreover, consent is usually defined as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

This basically amounts to saying that non-users have no control whatsoever over their personal data, as the sharing of their name and phone number depends on how many of their friends install Truecaller.

It is evident that Truecaller has no legal basis to process personal data from non-users of its service. These non-users are data subjects who most certainly have not unambiguously given their consent; the processing is not necessary for the performance of a contract to which the data subject is party; and no legal obligation, vital interest, or performance of a task carried out in the public interest is at stake.

In this regard, the possibility given to those who do not wish to have their names and phone numbers made available through the enhanced search or name search functionalities – excluding themselves from further queries by notifying Truecaller – is not even realistic. To begin with, one is required to be aware of the existence of the service, then to actively check whether one’s number is in its directory, and then to ask Truecaller to unlist it from its database. However, considering all my failed attempts, I am not sure whether this option is only available to users, or whether it simply does not prevent the number from being added to the service’s database all over again once another user who has the relevant contact in his address book allows such access.

Last, but not least – and this should not really come as any surprise – Truecaller has already been hacked in the past by a Syrian hacking group, resulting in unauthorized access to some (personal) data of users and non-users. This surely highlights the importance of carefully choosing the services to which we entrust our – and others’ – personal data.

All things considered, Truecaller is the obvious practical example of the saying: ‘If You’re Not Paying For It, You Are the Product Being Sold’.

Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook

It’s just a picture of them playing, they don’t mind. Like!

Similarly to what is happening in the courts of other EU Member States, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision which I addressed here, a court considered that a person in possession of a video containing intimate images of an ex-partner is under an obligation to guard it properly, and that the failure to take adequate safeguards is condemnable as a relevant omission.

That said, there is one particular decision, issued by a Portuguese appeal court last year, which I failed to address in time and which concerns the very specific image rights of children in the online context. Considering the amount of pictures that appear on my Facebook wall every time I log in to my account, and the concerns expressed in the upcoming GDPR regarding the collection and processing of data referring to minors under sixteen, I would like to address it today.

The court confirmed the decision of the court of first instance, issued within proceedings regulating the parental responsibilities of each parent, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It severely stated that children are not things or objects belonging to their parents.

One would expect that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering their specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news reporting this court decision was widely shared among my Facebook friends, most of them with children of their own. The same ones who happily share pictures of their own kids. And who haven’t shared any less information involving their children since then.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly favoured, or of which they are embarrassed, and are quick to demand its removal, yet never wonder whether it is ethical to publish a picture of, or information about, someone who is not able to give his or her consent. Shouldn’t we worry about what information children – our own, our friends’, our little cousins or nephews – would want to see about themselves online in the future?

Every time I log in to my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtime in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that a child had a serious illness – fortunately since overcome, but which received full graphic and descriptive Facebook coverage as it unfolded.

I have seen pictures in which my friends’ children appear almost naked or in unflattering poses, or in which it is clearly identifiable where they go to school or spend their holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, their parenthood is quite well established. I always think that, in the long run, this would permit the building of an extended and detailed profile by anybody who has access to such data. And, if you have read any of my other posts, you know by now that I am not exactly referring to Facebook friends.

More worryingly, these details about children’s lives are often displayed on the parents’ online profiles without any privacy settings implemented, perhaps due to simple distraction or unawareness. Consequently, anybody with a Facebook account can search for the intended person and access all the information contained in that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is the biggest love there is, and it is only human to want to proudly share his or her growth, development and achievements with relatives and friends. It has always been done, and now it is almost effortless and immediate, just a click away. In this regard, by forbidding the sharing of any picture or any information regarding the child, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that good sense and responsibility are particularly required in the online context, considering how easy it is to lose control over the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, information is no longer within our reach, as it can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure anonymity online, the amount of information published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing pictures and information about their children online makes me wonder how companies such as Facebook, and other platforms focused on user-generated content – which process data at the direction of the user and, consequently, unintentionally end up collecting and processing personal data regarding children below the age of sixteen – may be asked to comply with the new requirements of the GDPR in that regard.

If such processing is to be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court has understood it, parents are not entitled to dispose of their children’s image on social media, a funny conundrum arises: if parents cannot publish such information, they cannot authorize its publication either and, consequently, children and teenagers won’t be able to rely on their parents’ authorization to use social media.

The dangers of certain apps or how to put your whole life out there

Finding love, one data breach at a time.

One of my past flatmates was actively looking for love online. Besides registering on several websites to that end, I remember he also had several mobile applications (apps) installed on his smartphone. I think he actually subscribed to pretty much anything that could even remotely help him find love, but singled out Tinder as his main dating tool.

Another of my closest friends is a jogging addict – shout out, P. He has installed on his smartphone various apps which let him know how many steps he has taken on a particular day, the route undertaken and, via an external device, his heart rate, enabling him to monitor his progress.

What do my two friends have in common? Well, they use mobile apps to cover very specific necessities. And in this regard they are like almost anybody else.

Indeed, it is difficult to escape apps nowadays. Now that everyone (except my aunt) seems to have a smartphone, apps are increasingly popular for the most diverse purposes. For my former flatmate it was all about dating; for my friend, it is keeping track of his running progress. But their potential does not end there: receiving and sending messages, using maps and navigation services, getting news updates, playing games, dating or just checking the weather… Name a necessity or convenience, and there is an app for it.

On the downside, using apps usually requires providing more or less personal information for the specific intended purpose – something that has become so usual that most consider it a natural step, without giving it further consideration.

A detail of which most seem to be unaware is that apps allow for the massive collection and processing of personal – and sometimes sensitive – data. In fact, the nature and the amount of personal data accessed and collected raise serious privacy and data protection concerns.

For instance, in the case of my abovementioned flatmate, who was registered on several similar apps – and considering that he neither created fake accounts nor provided false information – each of them collected at least his name, age, gender, profession, location (from which it could be presumed where he worked, lived and spent time), sexual orientation, what he looks like (if he added a picture to his profiles), the frequency of his access to the app and, eventually, the success of his online dating life.

In fact, in Tinder’s own words:

Information we collect about you

In General. We may collect information that can identify you such as your name and email address (“personal information”) and other information that does not identify you. We may collect this information through a website or a mobile application. By using the Service, you are authorizing us to gather, parse and retain data related to the provision of the Service. When you provide personal information through our Service, the information may be sent to servers located in the United States and countries around the world.
Information you provide. In order to register as a user with Tinder, you will be asked to sign in using your Facebook login. If you do so, you authorize us to access certain Facebook account information, such as your public Facebook profile (consistent with your privacy settings in Facebook), your email address, interests, likes, gender, birthday, education history, relationship interests, current city, photos, personal description, friend list, and information about and photos of your Facebook friends who might be common Facebook friends with other Tinder users. You will also be asked to allow Tinder to collect your location information from your device when you download or use the Service. In addition, we may collect and store any personal information you provide while using our Service or in some other manner. This may include identifying information, such as your name, address, email address and telephone number, and, if you transact business with us, financial information. You may also provide us photos, a personal description and information about your gender and preferences for recommendations, such as search distance, age range and gender. If you chat with other Tinder users, you provide us the content of your chats, and if you contact us with a customer service or other inquiry, you provide us with the content of that communication.

Considering that Tinder makes available a catalogue of profiles of geographically nearby members, through which one can swipe right or left according to one’s personal preferences, with adequate analysis it is even possible to determine what type of person (by age, body type, hair colour) users find most attractive.

And because Tinder actually depends on having a Facebook profile, I guess that Facebook also gets a sense of the climate of your romantic life – mainly if you start adding and interacting with your new friends on that platform and, why not, changing your status accordingly.

In the specific case of Tinder, as it mandatorily requires a certain amount of Facebook information in order to function properly, these correlations are much easier to establish.

That said, a sweep conducted by 26 privacy and data protection authorities from around the world on more than 1,000 diverse apps – including Apple and Android apps, free and paid apps, public sector and private sector apps, ranging from games and health/fitness apps to news and banking apps – has made it possible to outline the main concerns at stake.

One of the issues specifically pointed out concerned the information provided to users/data subjects, as it was concluded that many apps did not have a privacy policy. In those cases, therefore, users were not properly informed – and thus not aware – of the collection, use or further disclosure of the personal information provided.

It is a fact that most of us do not read the terms and conditions made available, and most will subscribe to pretty much any service they are willing to use, regardless of what those terms and conditions actually state.

Nevertheless, a relevant issue in this regard is the excessive amount of data collected considering the purposes for which the information is provided, or how sneakily it is collected. For instance, even gaming apps such as solitaire, which seem far more innocuous, hide unknown risks, as many contain code enabling access to the user’s information or contact list, and some even allow the user’s browsing activities to be tracked.

This is particularly worrisome when sensitive data, such as health information, is at stake. This kind of data is easily collected through fitness-orientated apps, which are quite in vogue nowadays. Besides any additional personally identifiable information which you will eventually provide upon creating an account, among the elements which are most certainly collected one can find: name or username, date of birth, current weight, target weight, height, gender, workout frequency, workout settings and duration, heart rate. Also, if you train outdoors, geolocation will most certainly make it possible to assess the whereabouts of your exercising, from departure to arrival points – which will most probably coincide with your home address or its vicinity.

And if you are particularly proud of your running or cycling results, and willing to show all your friends what good shape you are actually in, chances are you can connect the app to your Facebook account and display that information in your profile, subsequently enabling Facebook to access the same logged information.

And things actually get worse when one considers that, as demonstrated by recent data breaches, the information provided by users is not even adequately protected.

For instance, if I remember correctly, due to a security vulnerability in Tinder – apparently since fixed – there was a time when users’ location data, such as longitude and latitude coordinates, was easily accessible. Which is quite creepy and dangerous, as it would facilitate stalking and harassment in real life – which is just as bad as when it happens online.
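To illustrate why raw coordinates are so sensitive: given a leaked latitude/longitude pair, anyone can compute how far a person is from any known point using the standard haversine formula. The coordinates below are arbitrary points in central Lisbon chosen for illustration, not real user data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points, in km.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical leaked coordinates versus a known landmark (central Lisbon):
d = haversine_km(38.7223, -9.1393, 38.7369, -9.1427)
print(round(d, 2), "km")  # roughly a couple of kilometres
```

Repeated over time, such distance computations pinpoint where someone lives, works and exercises, which is exactly what made the leaked coordinates so dangerous.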

Anyway, it is very easy to forget the amount of data we provide apps with. However, the correlations that can be made, the conclusions that can be inferred and the patterns that can be assessed mean that we share more information than we first realise, enabling a far more detailed profile of ourselves than most of us would feel comfortable with others knowing.

The limits of government surveillance according to the ECtHR

Limits? What do you mean by ‘limits’?

In two very recent judgments, the European Court of Human Rights (hereafter ECtHR) has made several essential points regarding surveillance conducted by public authorities and its relation to Article 8 of the European Convention on Human Rights (hereafter ECHR).

Article 8 provides that governmental interference with the right to privacy must meet two criteria: the interference must be conducted “in accordance with the law” and must be “necessary in a democratic society”. Such interference must aim to achieve the protection of the “interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”.

In previous cases regarding surveillance conducted by public authorities, the ECtHR had already concluded that any interference with the right to respect for private life and correspondence, as enshrined in Article 8 of the ECHR, must be strictly necessary for safeguarding the democratic institutions. However, it has now further clarified its interpretation.

In these recent decisions, the ECtHR concluded that the secret surveillance, as carried out in the manner described in the facts of the cases, violated Article 8 of the Convention.

The Roman Zakharov v. Russia decision, issued on 4 December 2015, concerned the allegations of the editor-in-chief of a publishing company that the laws requiring three mobile network operators to install equipment permitting the Federal Security Service (“the FSB”) to intercept all his telephone communications, without prior judicial authorisation, interfered with his right to the privacy of his telephone communications.

The Court considered that “a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” must be verified and the interception shall meet the requirements of necessity and proportionality.

The Szabó and Vissy v. Hungary decision, issued on 12 January 2016, concerned the allegations of members of a non-governmental organisation voicing criticism of the Government that the legislation enabling the police to search houses, postal mail, and electronic communications and devices, without judicial authorisation, for national security purposes, violated the right to respect for private life and correspondence.

The Court considered that: “the requirement ‘necessary in a democratic society’ must be interpreted in this context as requiring ‘strict necessity’ in two aspects. A measure of secret surveillance can be found as being in compliance with the Convention only if it is strictly necessary, as a general consideration, for the safeguarding the democratic institutions and, moreover, if it is strictly necessary, as a particular consideration, for the obtaining of vital intelligence in an individual operation. In the Court’s view, any measure of secret surveillance which does not correspond to these criteria will be prone to abuse by the authorities with formidable technologies at their disposal.” Consequently, it must be assessed if “sufficient reasons for intercepting a specific individual’s communications exist in each case”.

In both cases, by requiring surveillance activities to be individually targeted, the ECtHR has established that any indiscriminate interception is unacceptable. This is a most welcome position considering the well-known legislative instruments and initiatives intended to strengthen the legitimacy of massive monitoring programs in many EU Member States.

Monitoring of employees in the workplace: the not so private parts of a job in the EU private sector

Monitoring you? Us? 1)Copyright by MrChrome under the CC-BY-3.0

In a case concerning employees’ rights to the privacy of their correspondence and communications, the European Court of Human Rights (hereafter ECtHR) has ruled that employers are entitled to monitor their employees’ private online communications conducted through a messaging account provided for professional purposes.

The details of the case are as follows: the employment contract of a Romanian engineer was terminated by his employer, back in 2007, after the company he worked for found out that he was using messaging services, such as Yahoo Messenger, for personal contacts, namely with his brother and fiancée. The account had been created, at the employer’s request, strictly for professional purposes, and personal use was specifically forbidden by the company policy, of which the employee was made aware. The internal regulations established, inter alia, that “it is strictly forbidden to disturb order and discipline within the company’s premises and especially … to use computers, photocopiers, telephones, telex and fax machines for personal purposes.”

While the company considered that the employee had breached the company rules by using the service for personal purposes, and that the termination of the employment contract was thus justified, the employee argued that the termination decision was illegal because it was founded on a violation of his rights to respect for his private life and correspondence.

Among the pertinent legal instruments deemed applicable and referred to by the ECtHR are, obviously, the European Convention on Human Rights (hereafter ECHR), Directive 95/46/EC and the Art.29WP “Working document on the surveillance and the monitoring of electronic communications in the workplace”, which I have also addressed here in regard to the monitoring of employees.

The core issue at stake was whether, considering the factual context described, the employee could have had a reasonable expectation of privacy when communicating from the Yahoo Messenger account that he had registered at his employer’s request and considering that the employer’s internal regulations, of which he was aware, strictly prohibited employees from using the company’s computers and resources for personal purposes.

Taking into consideration that the use of the messaging service was allowed for professional purposes only, the Court deemed that it was not “unreasonable that an employer would want to verify that employees were completing their professional tasks during working hours.” (par. 59)

In this regard, it considered that “the employer acted within its disciplinary powers since, as the domestic courts found, it had accessed the Yahoo Messenger account on the assumption that the information in question had been related to professional activities and that such access had therefore been legitimate. The court sees no reason to question these findings.” Particularly relevant to the formation of that assumption was the fact that the employee had initially claimed that he had used the messaging account to advise the company’s clients. (par. 57)

Therefore, despite concluding that an interference with the applicant’s right to respect for private life and correspondence within the meaning of Article 8 of the ECHR indeed occurred, the ECtHR concluded that there has been no violation of such rights, because “the employer’s monitoring was limited in scope and proportionate”.

The claim that the employee’s right to privacy and the confidentiality of his correspondence had been violated was therefore dismissed.

This ruling is in line with the Benediktsdóttir v. Iceland case, in which the ECtHR concluded that the right to privacy and to correspondence has to be balanced against other rights, namely those of the employer.

However, the dissenting opinion of Judge Pinto de Albuquerque deserves particular mention. Based in particular on the very personal and sensitive nature of the employee’s communications, the absence of an Internet surveillance policy duly implemented and enforced by the employer, and the general character of the prior notice given to employees regarding the monitoring of their communications, it leads one to wonder whether the assessment of compliance with the necessity and proportionality principles could have been as straightforward as it first seemed. This is especially so considering that the employer also accessed the employee’s own personal account.

That said, the specific details of the case should not be overlooked, and rushed or generalised conclusions should be avoided.

As pointed out by Pinto de Albuquerque, in the absence of prior notice from the employer that communications are being monitored, the employee has a reasonable expectation of privacy. Moreover, a complete prohibition of employees’ personal use of the Internet is inadmissible, as is the practice of complete, automatic and continuous monitoring of employees’ Internet usage.

The fact that the employee was adequately informed of the internal regulations restricting the use of the messaging service for personal purposes, and that the employer had accessed the communications in the belief that they were professional in nature, are paramount elements in this context. In no way should this ruling be interpreted as a general faculty of employers to monitor or snoop on their employees’ private communications.

Indeed, as clearly put by the Art.29WP in the abovementioned document, the simple fact that monitoring or surveillance conveniently serves an employer’s interest cannot justify an intrusion into workers’ privacy.

In fact, as outlined by Judge Pinto de Albuquerque in his dissenting opinion: “if the employer’s internet monitoring breaches the internal data protection policy or the relevant law or collective agreement, it may entitle the employee to terminate his or her employment and claim constructive dismissal, in addition to pecuniary and non-pecuniary damages.”

Therefore, employers should take special care to provide appropriate information regarding the use that employees are allowed to make of the company’s means of communication, namely for personal purposes. Moreover, employers intending to monitor their employees’ activities should implement a proper and clear monitoring policy, restricted to what is necessary and proportionate to their interests and goals. It is of paramount importance that employees are able to understand the nature, scope and effects of the monitoring, namely how their communications are controlled, what content is accessed, how it is analysed, and what information is recorded and kept and for what purposes. In this context, data protection rules fully apply, namely conferring on employees the rights to access all the information held about them and to obtain a copy of such records.

And to completely prevent unpleasant surprises, a word of advice to employees: do not rely on your employer’s good judgement. Avoid altogether using means provided to you for professional purposes to conduct private activities or communications.

References   [ + ]

1. Copyright by MrChrome under the CC-BY-3.0

The ‘Safe Harbor’ Decision ruled invalid by the CJEU

Safe harbor?!? Not anymore.

Unfortunately, I had not had the time to address the ruling issued by the CJEU last October, by which the ‘Safe Harbour’ scheme, enabling transatlantic transfers of personal data from the EU to the US, was deemed invalid.

However, due to its importance, and because this blog is primarily intended to be about privacy and data protection, it would be shameful to finish the year without addressing the issue.

As you may be well aware, article 25(1) of Directive 95/46 establishes that the transfer of personal data from an EU Member State to a third country may occur provided that the latter ensures an adequate level of protection. According to article 25(6) of the abovementioned Directive, the EU Commission may find that a third country ensures an adequate level of protection (i.e., a level of protection of fundamental rights essentially equivalent to that guaranteed within the EU under the directive read in the light of the Charter of Fundamental Rights) by reason of its domestic law or of its international commitments.

On that basis, the EU Commission adopted its Decision 2000/520, by which it concluded that the “Safe Harbour Principles” issued by the US Department of Commerce ensure an adequate level of protection for personal data transferred from the EU to companies established in the US.

Accordingly, under this framework, Facebook has been transferring the data provided by its users residing in the EU from its subsidiary in Ireland to its servers located in the US, for further processing.

These transfers and, unavoidably, the Decision itself were challenged by way of a reference to the CJEU (judgment in Case C-362/14) following the complaint filed by Max Schrems, a Facebook user, before the Irish DPA and subsequently before the Irish High Court. The main argument was that, considering the access to electronic communications conducted by its public authorities, the US did not ensure adequate protection of the personal data thus transferred.

According to the AG’s opinion, “the access enjoyed by the United States intelligence services to the transferred data constitutes an interference with the right to respect for private life and the right to protection of personal data”.

Despite considering that a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU, the CJEU held that the decision fails to comply with the requirements established in Article 25(6) of the Directive and that the Commission did not make a proper finding of adequacy but merely examined the safe harbour scheme.

The fact that the scheme’s scope is restricted to adhering US companies, thus excluding public authorities, and the fact that national security, public interest and law enforcement requirements, to which US companies are also bound, prevail over the safe harbour principles, were deemed particularly decisive in the assessment of the scheme’s validity.

In practice, this enabled the US authorities to access the personal data transferred from the EU to the US and to process it in a way incompatible with the purposes for which it was transferred, beyond what was strictly necessary and proportionate to the protection of national security.

As a result, the Court concluded that enabling public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life.

The Court stated that the decision disregards the existence of such negative interference on fundamental rights, and that the lack of provision of limitations and effective legal protections violates the fundamental right to effective judicial protection.

Upon issuance of this ruling, the Art29WP met and concluded that data transfers from the EU to the US could no longer be legitimized by the ‘Safe Harbor’ decision and, if occurring, would be unlawful.

While its practical implications remain unclear, the ruling undoubtedly means that companies relying on the ‘Safe Harbor’ framework for the transfer of personal data from the EU to the US need to rely, instead, on another basis.

In this regard, considering that not all Member States accept the consent of the data subject or an adequacy self-assessment as a legitimizing legal ground for such cross-border transfers, Model Contractual Clauses incorporated into contracts and Binding Corporate Rules (BCR) for intragroup transfers seem to be the most reliable alternatives in certain cases.

Restrictions on data transfers are obviously also foreseen in the GDPR, which, besides BCRs, Standard Contracts and adequacy decisions, includes new data transfer mechanisms such as certification schemes.

You can find the complete version of the ruling here.

Opinion of the EDPS on the dissemination and use of intrusive surveillance technologies

We need some more surveillance here! 1)Copyright by Quevaal under the Creative Commons Attribution-Share Alike 3.0 Unported

In a recently published opinion, the EDPS addressed its concerns in regards of the dissemination and use of intrusive surveillance technologies, which are described as aiming “to remotely infiltrate IT systems (usually over the Internet) in order to covertly monitor the activities of those IT systems and over time, send data back to the user of the surveillance tools.”

The opinion specifically refers to surveillance tools which are designed, marketed and sold for mass surveillance, intrusion and exfiltration.

The data accessed and collected through intrusive surveillance tools may contain “any data processed by the target such as browsing data from any browser used on that target, e-mails sent and received, files residing on the hard drives accessible to the target (files located either on the target itself or on other IT systems to which the target has access), all logs recorded, all keys pressed on the keyboard (this would allow collecting passwords), screenshots of what the user of the target sees, capture the video and audio feeds of webcams and microphones connected to the target, etc.”

Therefore these tools may readily be used for human rights violations, such as censorship, surveillance, unauthorised access to devices, jamming, interception, or tracking of individuals.

This is particularly worrisome considering that software designed for intrusive surveillance has been known to have been sold as well to governments conducting hostile surveillance of citizens, activists and journalists.

As such tools are also used by law enforcement bodies and intelligence agencies, this is a timely document, considering the security concerns dictating the legislative amendments intended to be implemented in several Member States. Indeed, as pointed out by the EDPS, although cybersecurity must not be invoked to justify disproportionate impacts on privacy and the processing of personal data, intelligence services and the police may indeed adopt intrusive technological measures (including intrusive surveillance technology) in order to make their investigations better targeted and more effective.

It is evident that the principles of necessity and proportionality should dictate the use of intrusion and surveillance technologies. However, it remains to be assessed where to draw the line between what is proportionate and necessary and what is disproportionate and unnecessary. That is the core of the problem.

Regarding the export of surveillance and interception technologies to third countries, the EDPS considered that, despite not addressing all the questions concerning the dissemination and use of surveillance technologies, “the EU dual use regime fails to fully address the issue of export of all ICT technologies to a country where all appropriate safeguards regarding the use of this technology are not provided. Therefore, the current revision of the ‘dual-use’ regulation should be seen as an opportunity to limit the export of potentially harmful devices, services and information to third countries presenting a risk for human rights.”

As this document relates to the EU cybersecurity strategy and the data protection framework, I would recommend reading it to anyone interested in those questions. You can find the document here.

References   [ + ]

1. Copyright by Quevaal under the Creative Commons Attribution-Share Alike 3.0 Unported

© 2017 The Public Privacy
