
Security v. Security – Tech Companies, Backdoors and Law Enforcement Authorities

Grab the popcorn, this is going to be fun!


The request for access to the information stored on the smartphone of one of the San Bernardino shooting suspects has intensified the debate on the implementation of backdoors to give law enforcement access to mobile devices.

The issue is not whether law enforcement authorities, armed with a proper warrant, are entitled to search a mobile phone and access its content. That is a straightforward fact. They are.

What is at stake is Apple’s objection to a court order requiring it to provide the ongoing federal investigation with the means to access such information. More concretely, Apple has been required to write code modifying the iPhone software so as to bypass an important security feature: the function which automatically erases the device’s information after ten incorrect passcode attempts. Disabling it would enable the authorities to enter wrong passcodes endlessly and eventually crack the device’s passcode by brute force, without risking the deletion of its content, and thus to access and extract the information contained on the suspect’s iPhone.
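To see why that single feature matters so much, here is a back-of-the-envelope sketch (a toy illustration in Python, not Apple’s actual implementation): once the auto-erase protection is gone, a short numeric passcode falls to a trivial exhaustive search.

```python
from itertools import product

def brute_force(check_passcode, length=4):
    """Try every numeric passcode of the given length, in order.

    With the ten-attempt auto-erase disabled, nothing stops an attacker
    from walking the entire keyspace: a 4-digit passcode allows only
    10**4 = 10,000 combinations.
    """
    attempts = 0
    for digits in product("0123456789", repeat=length):
        guess = "".join(digits)
        attempts += 1
        if check_passcode(guess):
            return guess, attempts
    return None, attempts

# Hypothetical stand-in for the device's passcode check.
secret = "7291"
found, tries = brute_force(lambda guess: guess == secret)
```

In reality, hardware-enforced delays between attempts slow this down, but the keyspace itself remains tiny, which is precisely why the erase-after-ten-failures safeguard exists.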

The use of new technologies to conduct criminal and terrorist activities has made it difficult to ignore the advantages of accessing the communications carried over such technologies for the investigation, prevention and combating of criminal activities. Law enforcement authorities point out that this is particularly pertinent in the fight against terrorism, paedophilia networks and drug trafficking.

In this context, the use of encryption in communications has become a cornerstone of the debate. Investigative authorities want backdoors implemented in mobile devices in order to ensure access when necessary. Contrastingly, companies such as Apple refuse to retain access keys to such encrypted communications – and consequently to provide them upon request of law enforcement authorities.

Just recently, FBI Director James Comey told the US Senate Intelligence Committee that intelligence services are not interested in ‘backdoor’ access to secure devices per se. Instead, what is at stake is requiring companies to hand over the encrypted messages sent through those devices. James Comey is a wordplay habitué: he once said he wanted ‘front doors’ instead of ‘back doors’.

Along the same line, White House Press Secretary Josh Earnest recently stated that, under the abovementioned court order, Apple is not being asked to redesign its products or to create a backdoor.

While these are, at the very least, very puzzling statements, they nevertheless clearly express the underlying ambition: a ban on encryption products without backdoors and, consequently, the implementation of backdoors.

Indeed, if companies can be required to undermine their own security and privacy features in order to provide access to law enforcement authorities – regardless of how legitimate the underlying purpose, and whatever designation one might find preferable – that is the very definition of a backdoor.

It never ceases to amaze me how controversial it seems to be, among free people living in a democracy, that the implementation of backdoors is – on both legal and technological grounds, and for the sake of everyone’s privacy and security – a very bad idea.

Well, the main argument supporting the concept is that such a technological initiative will chiefly help combat criminal activities. That is unquestionably a very legitimate purpose. And nobody opposing the implementation of backdoors actually argues otherwise.

However, it is a fact that backdoors would automatically make everyone’s communications less secure and expose them to a greater risk of attacks by third parties and to further privacy invasions. Moreover, no real guarantees are ever provided in regard to the risk of abuse which could ensue. Those arguing in favour of access to information through backdoors fail to adequately frame the context. It is vaguely stated that such a mechanism will be used when necessary, without any strict definition. What is necessary, anyway? Would it depend on the relevance of the information at stake? Would it depend on the existence of alternative means, or on how burdensome those are?

At the very least, if Apple complies with the order, it is difficult to believe that more similar requests will not immediately ensue. In fact, one may venture that those can be expected, and will certainly be encouraged, in the future. Ultimately, the cracking software thus created could be used and abused in future cases. And this is particularly worrisome considering the lack of a legal framework and of a basis in judicial precedent.

One may be tempted to sacrifice privacy in the interest of public security, and that is not, in itself, a wrongful viewpoint. I don’t know anyone who would disagree with it – except when considering the very limitations of backdoors when it comes to fighting terrorism, for instance. It is harder to support backdoors as a means to prevent criminal activities when confronted with their inherent inefficiency and limitations, which seem to go unacknowledged by their supporters.

While companies may be forced to implement such backdoors in order to provide access to encrypted communications, there is a myriad of alternatives in the marketplace for criminals seeking encrypted products in which no such backdoors are installed: encryption apps, file encryption, open source products, virtual private networks…

Let’s talk about ISIS, for instance. It has been alleged – without further demonstration – that they have their own open source encrypted communications app. Therefore, apart from weakening the safety of the communications of everybody relying on encrypted messaging apps, and considering the open source nature of the app allegedly used by ISIS, the implementation of backdoors would be pointless for the purpose intended to be achieved.

That said, one can easily understand Apple’s stance. Having built its reputation on the privacy and security provided by its devices, it is very risky from a commercial viewpoint to be asked to develop software that counters its core business. Indeed, it modified its software in 2014 precisely so as to become unable to unlock its smartphones and access its customers’ encrypted data.

The fact that the company is now being asked to help law enforcement authorities by building a backdoor to get around a security function that prevents decryption of the device’s content appears to be just another way of achieving the same outcome. Under a different designation.

This now goes way further than requiring companies to comply with a lawful order and warrant to the extent they are able to: requesting private companies to create a tool intended to weaken the security of their own operating systems just goes beyond any good sense. Indeed, it amounts to requiring (forcing?) private companies to create and deliver to law enforcement authorities hacking tools which actually put everyone’s privacy and cybersecurity at risk.

And if this becomes a well accepted requirement in democratic systems, whether by precedent or through legislative changes, well, one can only wonder with what enthusiasm such news will be welcomed by repressive regimes eager to expand their surveillance powers.

From an EU viewpoint, considering how uncertain the future of the Privacy Shield framework is, and despite the existing divergences among EU Member States in respect of encryption, this whole case certainly does not solve any trust issues regarding the security of the data transferred to the US.

Truecaller: At the crossroads between privacy and data protection

Let me see, where am I uploading others’ personal information today?


As I have already made clear in a previous post, there is little I find more annoying than being bothered, at the end of a particularly stressful workday, by unrequested telemarketing or spam phone calls. And while I understand that, on the other side of the line, there is a person acting under the direct orientation – and possibly supervision – of his/her employer, these calls almost always feel like an endurance test for one’s patience.

Therefore, mobile software or applications enabling the prior identification of certain numbers – by replicating the Caller ID experience (as if the number were saved in your contact list) – or allowing for the automatic blocking of undesirable calls have found here a market in which to succeed.

That said, you have certainly heard of, and possibly installed, apps such as Current Caller ID, WhosCall or Truecaller. Most probably, you find them quite useful for avoiding unwanted contacts.

As I have, on several occasions, unlisted my number from the Truecaller database, only to keep noticing that it eventually ends up in that list all over again, I want to address today some specific issues regarding that app.

Truecaller is freely available on the iOS and Android platforms and is quite efficient at what it promises, providing its subscribers with a humongous database of previously identified contacts. In particular, it enables users to identify spam callers without being required to hang up to that end.

This efficiency is the result of the data provided by the millions of users who have downloaded the app on their smartphones.

How?

Well, it suffices that a user allows the app to access his/her contact list, as foreseen in the end user agreement – which many might not have read. Once this consent has been obtained, the information in the contact book is uploaded to Truecaller’s servers and made available to the rest of its subscribers.

Thanks to this crowd-sourced data system, you are able to identify unknown numbers.

Therefore, it suffices that another user has marked a given contact as spam for you to be able to immediately identify a caller as such and save yourself from the patience-consuming contact. Indeed, and quite undoubtedly, if a number qualified as unwanted by others calls you, the screen of your smartphone will turn red and present the image of a shady figure wearing a fedora and sunglasses.

On the downside, if anybody has saved your number and name in their address book, it suffices that this one person has installed the Truecaller app on their mobile phone and accepted the abovementioned permission clause for your number and name to end up in that database.
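The mechanic described above can be condensed into a few lines of Python – a toy model, emphatically not Truecaller’s actual implementation, but it shows why a single subscriber’s consent is enough to expose an entire address book of people who never agreed to anything:

```python
from collections import defaultdict

class CrowdSourcedDirectory:
    """Toy model of a crowd-sourced caller-ID database (hypothetical)."""

    def __init__(self):
        self.names = {}                       # number -> name, from uploaded address books
        self.spam_reports = defaultdict(int)  # number -> how many users flagged it

    def upload_address_book(self, contacts):
        # One subscriber's consent publishes every contact in their book.
        self.names.update(contacts)

    def report_spam(self, number):
        self.spam_reports[number] += 1

    def lookup(self, number):
        # A single spam report is enough to flag the caller for everyone.
        return {"name": self.names.get(number, "unknown"),
                "spam": self.spam_reports[number] > 0}

directory = CrowdSourcedDirectory()
# "Alice" and "Bob" never installed the app, yet one user's upload lists them.
directory.upload_address_book({"+351911111111": "Alice", "+351922222222": "Bob"})
directory.report_spam("+351922222222")
```

Note that the non-users in this sketch appear in no consent flow whatsoever – which is precisely the data protection problem discussed below.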

A new interface enables users to add information from their social media channels. Therefore, besides your contact information, if users activate the use of third-party social network services, such as Facebook, Google+, LinkedIn or Twitter, Truecaller may upload, store and use the list of identifiers associated with those services linked to your contacts in order to enhance the results shared with other users.

Moreover, the app has recently been updated to introduce a search function, thus enabling you to search for any phone number and find any person’s contact details.

Along the same line, Facebook – which is merely the largest social network around – has decided to put the wealth of data its users provide to new uses with the app Facebook Hello. In that regard, users are required to grant it access to the information contained in their Facebook account. Indeed, Hello uses Facebook’s database to provide details about a caller. Contrastingly, other apps, such as Contacts+, integrate information provided on different social networks.

While it is undeniably useful to identify the person behind an unknown number, this means that those same others will be able to identify you when you contact them, even if they do not have your number.

Truecaller raises several privacy and data protection concerns. In fact, as names and telephone numbers actually enable the adequate identification of natural persons, there is no doubt that such information constitutes personal data.

Nevertheless, in Truecaller’s own words:

When you install and use the Truecaller Apps, Truecaller will collect, process and retain personal information from You and any devices You may use in Your interaction with our Services. This information may include the following: geo-location, Your IP address, device ID or unique identifier, device type, ID for advertising, ad data, unique device token, operating system, operator, connection information, screen resolution, usage statistics, version of the Truecaller Apps You use and other information based on Your interaction with our Services.

This is particularly problematic considering that Truecaller clearly and manifestly collects and processes the personal data of data subjects other than its users.

As for the information related to other persons, it is said:

You may share the names, numbers and email addresses contained in Your device’s address book (“Contact Information”) with Truecaller for the purpose described below under Enhanced Search. If you provide us with personal information about someone else, you confirm that they are aware that You have provided their data and that they consent to our processing of their data according to our Privacy Policy.

This statement is, in my very modest opinion, absolutely ludicrous. Most people who have installed the service are not even aware of how it works, let alone that an obligation to notify an entire contact list and to obtain individual consent impends upon them. In this context, it is paramount to take into consideration that, in the vast majority of cases, from the users’ viewpoint, the data at stake is collected and processed merely for personal purposes. Moreover, consent is usually defined as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

This basically amounts to saying that non-users do not have any control over their personal data, as the sharing of their name and phone number will depend on how many of their friends actually install Truecaller.

It is evident that Truecaller has no legal basis for processing any personal data of non-users of its service. These non-users are data subjects who most certainly have not unambiguously given their consent; the processing is not necessary for the performance of a contract to which the data subject is party; and no legal obligation, vital interest or task carried out in the public interest is at stake.

In this regard, the possibility offered to those who do not wish to have their names and phone numbers made available through the enhanced search or name search functionalities – namely, to exclude themselves from further queries by notifying Truecaller – is not even realistic. To begin with, one is required to be aware of the existence of the service; subsequently, one must actively check whether one’s number is in its directory and then request Truecaller to unlist it from its database. However, considering all my failed attempts, I am not sure whether this option is only available to users, or whether it simply does not prevent a number from being added to the service’s database all over again once another user who has the relevant contact in his address book allows such access.

Last, but not least – and this should not really come as a surprise – Truecaller has already been hacked in the past by a Syrian hacking group, which resulted in unauthorized access to some (personal) data of users and non-users. This surely highlights the importance of users carefully choosing the services to which they entrust their – and others’ – personal data.

All things considered, Truecaller is the obvious practical example of the statement: ‘If You’re Not Paying For It, You Are the Product Being Sold’.

Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook

It's just a picture of them playing, they don't mind. Like!


Similarly to what is happening in other EU Member States’ courts, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision which I addressed here, a court considered that those who have in their possession a video containing intimate images of an ex-partner are under an obligation to guard it properly, and that the failure to adopt adequate safeguards is condemnable as a relevant omission.

That said, there is one particular decision, issued by a Portuguese appellate court last year, which I failed to address in due time and which concerns the very specific image rights of children in the online context. Considering the amount of pictures that appear on my Facebook wall every time I log in to my account, and the concerns expressed in the upcoming GDPR regarding the collection and processing of data referring to minors under sixteen, I would like to address it today.

The court at stake confirmed the decision of the court of first instance, issued within a process regulating the parental responsibilities of each parent, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It severely stated that children are not things or objects belonging to their parents.

One would have expected that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering children’s specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news reporting this court decision was widely shared among my Facebook friends, most of them with children of their own. The same ones who happily share pictures of their own kids. And who haven’t cut down on sharing information involving their children since then.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly favoured, or of which they are embarrassed, and are quick to demand its removal, yet never wonder whether it is ethical to publish a picture of, or information about, someone who is not able to give his/her consent. Shouldn’t we worry about what type of information children – our own, our friends’, our little cousins or nephews – will want to see about themselves online in the future?

Every time I log in to my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtimes in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that a child had a serious illness – fortunately since overcome, but one which received full graphic and descriptive coverage on Facebook as it unfolded.

I have seen pictures where my friends’ children appear almost naked or in unflattering poses, or where it is clearly identifiable where they go to school or spend their holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, their parenthood is quite well established. I always think that, in the long run, this would permit the building of an extended and detailed profile by anybody who has access to such data. And, if you have read any of my other posts, you know by now that I am not exactly referring to the Facebook friends.

More worryingly, these details of children’s lives are often displayed on the parents’ online profiles – perhaps due to simple distraction or unawareness – without any privacy settings implemented. Consequently, anybody with a Facebook account can look up the intended person and access all the information contained on that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is the biggest love there is, and it is only human to want to proudly share his or her growth, development and achievements with relatives and friends. It has always been done, and now it is almost effortless and immediate, just one click away. In this regard, by forbidding the sharing of any picture or any information regarding the child, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that some good sense and responsibility are particularly required in the online context, considering how easy it actually is to lose control over the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, the information is no longer within our reach, as it can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure anonymity online, the amount of information that is published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing pictures and information regarding their children online makes me wonder how companies such as Facebook – and other platforms focusing on user-generated content, which process data at the direction of the user and, consequently, unintentionally end up collecting and processing personal data regarding children below the age of sixteen – may be asked to comply with the new requirements of the GDPR in that regard.

If such processing is to be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court has understood it, parents are not entitled to dispose of their children’s image on social media, a funny conundrum is generated: if the parents cannot publish such information, they will not be able to authorize its publication either and, consequently, children/teenagers won’t be able to rely on their parents’ authorization to use social media.

The dangers of certain apps or how to put your whole life out there

Finding love, one data breach at a time.


One of my past flatmates was actively looking for love online. Besides having registered on several websites to that end, I remember he also had several mobile applications (apps) installed on his smartphone. I think he actually subscribed to pretty much anything that could even remotely help him find love, but singled out Tinder as his main dating tool.

Another of my closest friends is a jogging addict – shout out, P. He has installed on his smartphone various apps which enable him to know how many steps he has taken on a particular day, the route undertaken and, via an external device, his heart rate, all of which enables him to monitor his progress.

What do both of my friends have in common? Well, they use mobile apps to cover very specific needs. And in this regard they are like pretty much anybody else.

Indeed, it is difficult to escape apps nowadays. Now that everyone (except for my aunt) seems to have a smartphone, apps are increasingly popular for the most diversified purposes. For my former flatmate, it was all about dating. For my friend, it is keeping track of his running progress. But their potential does not end there: receiving and sending messages, using maps and navigation services, receiving news updates, playing games, dating or just checking the weather… You name a necessity or convenience, and there is an app for it.

On the downside, using apps usually requires providing more or less personal information for the specific intended purpose. This has become so usual that most consider it a natural step, without giving it further consideration.

In fact – and this is a detail most seem to be unaware of – apps allow for a massive collection and processing of personal, and sometimes sensitive, data. The nature and the amount of personal data accessed and collected raise serious privacy and data protection concerns.

For instance, in the case of my abovementioned flatmate, who was registered on several similar apps – and considering that he did not create fake accounts or provide false information – each of them collected at least his name, age, gender, profession, location (making it possible to presume where he worked, lived and spent time), sexual orientation, what he looks like (if he added a picture to his profiles), the frequency of his accesses to the app, and eventually the success of his online dating life.

In fact, in Tinder’s own words:

Information we collect about you

In General. We may collect information that can identify you such as your name and email address (“personal information”) and other information that does not identify you. We may collect this information through a website or a mobile application. By using the Service, you are authorizing us to gather, parse and retain data related to the provision of the Service. When you provide personal information through our Service, the information may be sent to servers located in the United States and countries around the world.
Information you provide. In order to register as a user with Tinder, you will be asked to sign in using your Facebook login. If you do so, you authorize us to access certain Facebook account information, such as your public Facebook profile (consistent with your privacy settings in Facebook), your email address, interests, likes, gender, birthday, education history, relationship interests, current city, photos, personal description, friend list, and information about and photos of your Facebook friends who might be common Facebook friends with other Tinder users. You will also be asked to allow Tinder to collect your location information from your device when you download or use the Service. In addition, we may collect and store any personal information you provide while using our Service or in some other manner. This may include identifying information, such as your name, address, email address and telephone number, and, if you transact business with us, financial information. You may also provide us photos, a personal description and information about your gender and preferences for recommendations, such as search distance, age range and gender. If you chat with other Tinder users, you provide us the content of your chats, and if you contact us with a customer service or other inquiry, you provide us with the content of that communication.

Considering that Tinder makes available a catalogue of profiles of geographically nearby members, among which one can swipe right or left according to one’s personal preferences, with the adequate analysis it is even possible to define what type of person (according to age, body type, hair colour) a user finds most attractive.
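How little analysis that actually takes can be sketched in a few lines – a toy example on made-up data, not Tinder’s actual analytics: the swipe-right rate per profile trait already works as a crude attractiveness signal.

```python
from collections import defaultdict

def preference_profile(swipe_history):
    """Estimate which traits a user finds attractive from swipe data.

    Hypothetical sketch: for each trait of the profiles shown, compute
    the fraction of times the user swiped right on a profile having it.
    """
    rights = defaultdict(int)
    totals = defaultdict(int)
    for traits, swiped_right in swipe_history:
        for trait in traits:
            totals[trait] += 1
            rights[trait] += int(swiped_right)
    return {trait: rights[trait] / totals[trait] for trait in totals}

# Made-up history: (traits of the profile shown, did the user swipe right?)
history = [
    ({"20s", "dark hair"}, True),
    ({"20s", "blonde"}, True),
    ({"30s", "dark hair"}, False),
    ({"20s", "dark hair"}, True),
]
profile = preference_profile(history)
```

Even this naïve frequency count, fed with thousands of swipes instead of four, yields exactly the kind of preference profile discussed above.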

And because Tinder actually depends on having a Facebook profile, I guess that Facebook also becomes aware of the overall climate of your romantic life. Especially if you start adding and interacting with your new friends on that platform and, why not, changing your status accordingly.

In the specific case of Tinder, as it mandatorily requires a certain amount of Facebook information in order to ensure its proper functioning, these correlations are much easier to draw.

That said, a sweep conducted by 26 privacy and data protection authorities from around the world on more than 1,000 diversified apps – including Apple and Android apps, free and paid apps, public sector and private sector apps, ranging from games and health/fitness apps to news and banking apps – has made it possible to outline the main concerns at stake.

One of the issues specifically pointed out referred to the information provided to users/data subjects, as it was concluded that many apps did not have a privacy policy. In those cases, users were therefore not properly informed – and hence not aware – of the collection, use or further disclosure of the personal information provided.

It is a fact that most of us do not read the terms and conditions made available. And most will subscribe to pretty much any service they are willing to use, regardless of what those terms and conditions actually state.

Nevertheless, a relevant issue in this regard is the excessive amount of data collected considering the purposes for which the information is provided, or how sneakily it is collected. For instance, even gaming apps, such as solitaire, which seem far more innocuous, hide unsuspected risks, as many contain code enabling access to the user’s information or contact list and even allowing the user’s browsing activities to be tracked.

This is particularly worrisome where sensitive data, such as health information, is at stake. This kind of data is easily collected through fitness-oriented apps, which are quite in vogue nowadays. Besides any additional personally identifiable information which you will eventually provide upon creating an account, among the elements most certainly collected one can find: name or user name, date of birth, current weight, target weight, height, gender, workout frequency, workout settings, the duration of your workouts and your heart rate. Also, if you train outdoors, geo-location will most certainly make it possible to assess the whereabouts of your exercising, from the departure to the arrival points – which will most probably coincide with your home address or its vicinity.
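To illustrate how directly logged workouts point at a home address, here is a deliberately naïve sketch on hypothetical coordinates (not any real app’s code): simply averaging the GPS points where outdoor runs begin tends to converge on the runner’s doorstep or its vicinity.

```python
def estimate_home(start_points):
    """Guess a likely home location from workout departure points.

    Toy sketch: the centroid of the (lat, lon) points where runs start.
    Real analyses would cluster the points, but the idea is the same.
    """
    lats = [lat for lat, _ in start_points]
    lons = [lon for _, lon in start_points]
    return (round(sum(lats) / len(lats), 4),
            round(sum(lons) / len(lons), 4))

# Departure points of four logged morning runs (made-up data).
runs = [(38.7223, -9.1393), (38.7225, -9.1391),
        (38.7221, -9.1395), (38.7227, -9.1389)]
home = estimate_home(runs)
```

Four short runs are enough for the centroid to land within a few dozen metres of the common starting point – which is exactly why route data deserves to be treated as sensitive.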

And, if you are particularly proud of your running or cycling results, and are willing to show all your friends what good shape you are actually in, chances are you can connect the app to your Facebook account and display that information on your profile, subsequently enabling Facebook to access the same logged information.

And things actually get worse when one considers that, as demonstrated by recent data breaches, the information provided by users is not even adequately protected.

For instance, if I remember correctly, due to a security vulnerability in Tinder – apparently since fixed – there was a time when users’ location data, such as longitude and latitude coordinates, was actually easily accessible. Which is quite creepy and dangerous, as it would facilitate stalking and harassment in real life – which is just as bad as when it happens online.

Anyway, it is very easy to forget the amount of data we provide apps with. However, the correlations that can be made, the conclusions that can be inferred and the patterns that can be assessed amount to sharing more information than we first realise, and enable a far more detailed profile of ourselves than most of us would feel comfortable with others knowing.

Monitoring of employees in the workplace: the not so private parts of a job in the EU private sector

Monitoring you? Us?


In a case concerning employees’ rights to privacy in their correspondence and communications, the European Court of Human Rights (hereafter ECtHR) has ruled that employers are entitled to monitor their employees’ private online communications conducted through a messaging account provided for professional purposes.

The details of the case are as follows: the employment contract of a Romanian engineer was terminated by his employer, back in 2007, after the company he worked for found out that he was using messaging services, such as Yahoo Messenger, for personal contacts, namely with his brother and his fiancée. The account had been created, at the employer’s request, strictly for professional purposes, and personal use was specifically forbidden by the company policy, of which the employee had been made aware. The internal regulations established, inter alia, that “it is strictly forbidden to disturb order and discipline within the company’s premises and especially … to use computers, photocopiers, telephones, telex and fax machines for personal purposes.”

While the company considered that the employee had breached the company rules by using the service for personal purposes, and that the termination of the employment contract was therefore justified, the employee argued that the termination decision was illegal because it was founded on a violation of his rights to respect for his private life and correspondence.

Among the pertinent legal instruments deemed applicable and referred to by the ECtHR are, obviously, the European Convention on Human Rights (hereafter ECHR), Directive 95/46/EC and the Art.29WP “Working document on the surveillance and the monitoring of electronic communications in the workplace”, which I have also addressed here, with regard to the issue of the monitoring of employees.

The core issue at stake was whether, in the factual context described, the employee could have had a reasonable expectation of privacy when communicating from the Yahoo Messenger account that he had registered at his employer’s request, given that the employer’s internal regulations, of which he was aware, strictly prohibited employees from using the company’s computers and resources for personal purposes.

Considering that the use of the messaging service was allowed solely for professional purposes, the Court deemed that it was not “unreasonable that an employer would want to verify that employees were completing their professional tasks during working hours.” (par. 59)

In this regard, it considered that “the employer acted within its disciplinary powers since, as the domestic courts found, it had accessed the Yahoo Messenger account on the assumption that the information in question had been related to professional activities and that such access had therefore been legitimate. The court sees no reason to question these findings.” Particularly relevant to the formation of that assumption was the fact that the employee had initially claimed that he had used the messaging account to advise the company’s clients. (par. 57)

Therefore, despite finding that an interference with the applicant’s right to respect for private life and correspondence within the meaning of Article 8 of the ECHR had indeed occurred, the ECtHR concluded that there had been no violation of that right, because “the employer’s monitoring was limited in scope and proportionate”.

The claim that the employee’s right to privacy and the confidentiality of his correspondence had been violated was therefore dismissed.

This ruling is in line with the one in the Benediktsdóttir v. Iceland case, in which the ECtHR concluded that the right to privacy and to correspondence has to be balanced against other rights, namely those of the employer.

However, the dissenting opinion of Judge Pinto de Albuquerque deserves particular mention. Considering the very personal and sensitive nature of the employee’s communications, the absence of an Internet surveillance policy duly implemented and enforced by the employer, and the general character of the prior notice given to employees regarding the monitoring of their communications, one is left to wonder whether the assessment of the necessity and proportionality principles could have been as straightforward as it first seemed. Particularly since the employer also accessed the employee’s own personal account.

That said, the specific details of the case should not be overlooked, and rushed or generalised conclusions should be avoided.

As pointed out by Pinto de Albuquerque, in the absence of prior notice from the employer that communications are being monitored, the employee has a reasonable expectation of privacy. Moreover, a complete prohibition of personal Internet use by employees is inadmissible. Furthermore, the practice of complete, automatic and continuous monitoring of employees’ Internet usage is also forbidden.

The fact that the employee was adequately informed of the internal regulations restricting the use of the messaging service for personal purposes, and that the employer had accessed the communications in the belief of their professional nature, are paramount elements in this context. In no way must this ruling be interpreted as granting employers a general faculty to monitor or snoop on their employees’ private communications.

Indeed, as clearly put by the Art.29WP in the above-mentioned document, the simple fact that monitoring or surveillance conveniently serves an employer’s interest cannot in itself justify an intrusion into workers’ privacy.

In fact, as outlined by Judge Pinto de Albuquerque in his dissenting opinion: “if the employer’s internet monitoring breaches the internal data protection policy or the relevant law or collective agreement, it may entitle the employee to terminate his or her employment and claim constructive dismissal, in addition to pecuniary and non-pecuniary damages.”

Therefore, employers should take special care to provide appropriate information regarding the use that employees are allowed to make of the company’s communication means, namely for personal purposes. Moreover, employers intending to monitor their employees’ activities should implement a proper and clear monitoring policy, restricted to what is necessary and proportionate to their interests and goals. It is of paramount importance that employees are able to understand the nature, scope and effects of the monitoring, namely how their communications are controlled, what content is accessed, how it is analysed, and what information is recorded and kept and for what purposes. In this context, data protection rules fully apply, namely granting employees the right to access all the information held about them and to obtain a copy of such records.

And to completely prevent unpleasant surprises, a word of advice to employees: do not rely on your employer’s good judgement. Avoid altogether using means provided to you for professional purposes to conduct private activities or communications.

References

1. Copyright by MrChrome under the CC-BY-3.0

Opinion of the EDPS on the dissemination and use of intrusive surveillance technologies

We need some more surveillance here!

In a recently published opinion, the EDPS expressed its concerns regarding the dissemination and use of intrusive surveillance technologies, which are described as aiming “to remotely infiltrate IT systems (usually over the Internet) in order to covertly monitor the activities of those IT systems and over time, send data back to the user of the surveillance tools.”

The opinion specifically refers to surveillance tools which are designed, marketed and sold for mass surveillance, intrusion and exfiltration.

The data accessed and collected through intrusive surveillance tools may contain “any data processed by the target such as browsing data from any browser used on that target, e-mails sent and received, files residing on the hard drives accessible to the target (files located either on the target itself or on other IT systems to which the target has access), all logs recorded, all keys pressed on the keyboard (this would allow collecting passwords), screenshots of what the user of the target sees, capture the video and audio feeds of webcams and microphones connected to the target, etc.”

Therefore, these tools may readily be used for human rights violations, such as censorship, surveillance, unauthorised access to devices, jamming, interception or tracking of individuals.

This is particularly worrisome considering that software designed for intrusive surveillance has been known to have been sold as well to governments conducting hostile surveillance of citizens, activists and journalists.

As these tools are also used by law enforcement bodies and intelligence agencies, this is a timely document, considering the security concerns driving the legislative amendments intended in several Member States. Indeed, as pointed out by the EDPS, although cybersecurity must not be invoked to justify a disproportionate impact on privacy and the processing of personal data, intelligence services and police may indeed adopt intrusive technological measures (including intrusive surveillance technology) in order to make their investigations better targeted and more effective.

It is evident that the principles of necessity and proportionality should dictate the use of intrusion and surveillance technologies. However, it remains to be assessed where to draw the line between what is proportionate and necessary and what is disproportionate and unnecessary. That is the core of the problem.

Regarding the export of surveillance and interception technologies to third countries, the EDPS considered that, despite not addressing all the questions concerning the dissemination and use of surveillance technologies, “the EU dual use regime fails to fully address the issue of export of all ICT technologies to a country where all appropriate safeguards regarding the use of this technology are not provided. Therefore, the current revision of the ‘dual-use’ regulation should be seen as an opportunity to limit the export of potentially harmful devices, services and information to third countries presenting a risk for human rights.”

As this document relates to the EU cybersecurity strategy and the data protection framework, I would recommend reading it to anyone interested in these questions. You can find the document here.

References

1. Copyright by Quevaal under the Creative Commons Attribution-Share Alike 3.0 Unported

From your hard drives to your SIM cards: how interesting are you?

Let’s see how we can hack these?

Just recently, the Investigatory Powers Tribunal (IPT), the Court that oversees British intelligence services’ activities, declared that the electronic mass surveillance of mobile phones and other private communications data retrieved from USA surveillance programs, such as Prism, conducted prior to December 2014, contravened Articles 8, referring to the right to private and family life, and 10, referring to freedom of expression, of the European Convention on Human Rights.

One is not so optimistic as to expect that this would suffice to make intelligence agencies cease sharing this kind of information, mainly because the same Court has already recognised that the current legal framework governing data collection by intelligence agencies no longer violates human rights.

However, the decision was still applauded by many with the expectation that, at least, large-scale uncontrolled surveillance activities would not be so bluntly practiced.

Let’s just say that such expectation did not last long.

According to this week’s revelations by Kaspersky Lab, it seems that the NSA was able to hide spying software in any hard drive produced by top manufacturers such as Toshiba, IBM and Samsung. Consequently, it has been able to monitor a large majority of personal, governmental and business computers worldwide (among them those of financial institutions, telecommunications, oil and gas, and transportation companies).

Similarly, The Intercept reported that the NSA and GCHQ were able to get access to the encryption keys embedded in mobile phone SIM cards manufactured by Gemalto, keys intended to protect the privacy of mobile communications. Normally, an encrypted communication, even if intercepted, is indecipherable. That ceases to be the case if the intercepting party holds the encryption key, as it can then decrypt the communication.
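To see why possession of the key is everything, here is a toy sketch in Python of a symmetric stream cipher (an illustrative XOR construction, not the actual ciphers used on GSM SIM cards; the key and message values are made up): the intercepted ciphertext is opaque on its own, but trivially readable for anyone holding the key.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR the data with a key-derived keystream.

    The same function encrypts and decrypts, since XOR is its own inverse.
    """
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Hypothetical key burned into the SIM at manufacture
sim_key = b"key-burned-into-the-SIM-at-manufacture"
plaintext = b"hello, this call is private"

ciphertext = xor_cipher(sim_key, plaintext)           # what an eavesdropper intercepts
assert ciphertext != plaintext                        # unreadable without the key
assert xor_cipher(sim_key, ciphertext) == plaintext   # with the stolen key: readable
```

The point of the Gemalto story is precisely this asymmetry: whoever copies the key at manufacture can decrypt every later interception without touching the handset, the carrier or a court.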

What awe-inspiring ways to circumvent the consent of telecommunications companies and the authorization of foreign governments! Isn’t it dignifying and trustworthy when intelligence services just behave as hackers?

Somehow, and unfortunately, such news almost lacks any element of surprise, considering, well, everything we already know, really… From Snowden’s revelations to the logic-challenging argumentation that followed Apple’s and Google’s plans regarding the encryption of communications…

That said, perhaps we should all feel flattered to be spied upon. After all, as a former NSA Director points out, the agency does not spy on “bad people” but on “interesting people”. Those of us pretty much convinced – as I am – of being just regular individuals can now be reassured by this extra boost of self-esteem.

A spy in your living room: ‘Tu quoque mi’ TV?

How smart are you?

So, it seems that the room we have for our privacy to bloom is getting smaller and smaller. We already knew that being at home did not automatically imply seclusion. Still, nosy neighbours were, for quite a long time, the only enemies of home privacy.

However, thicker walls and darker window blinds no longer protect us from external snooping as, nowadays, the enemy seems to hide in our living room or even bedroom.

Indeed, it seems that when we bought our super duper and very expensive Smart TV, we may actually have brought into our home a very sneaky and effective – although apparently innocent – spy.

As you may (or may not) already know, TVs with Internet connectivity allow for the collection of their users’ data, including voice recognition and viewing habits. Until a few days ago, many people would have praised those capabilities, as the voice recognition feature serves our convenience, i.e., it improves the TV’s response to our voice commands, and the collection of data is intended to provide a customised and more comfortable experience. Currently, I seriously doubt that most of us look at our TV screens the same way.

To start with, there was the realisation that usage information, such as our favourite programmes and online behaviour, along with other information never intended or expected to be collected, is in fact collected by LG Smart TVs in order to present targeted ads. And this happens even if the user actually switches off the option of having his data collected to that end. Worse, the data collected even extended to external USB hard drives.

More recently, the Samsung Smart TV was also put in the spotlight due to its privacy policy. Someone who had attentively read the Samsung Smart TV’s user manual shared the following excerpt online:

To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. (…)

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

And people seem to have abruptly woken up to the realisation that this voice recognition feature is not only directed at specific commands to allow better interaction between the user and the device: it may actually involve the capture and recording of personal and sensitive information from conversations taking place nearby. No need to be a techie to know that this does not amount to performance improvement. This is eavesdropping. And to make it worse, the data is transferred to a third party.

In the aftermath, Samsung clarified that it neither retains voice data nor sells the audio collected. It further explained that a microphone icon is visible on the screen when voice activation is turned on and that, consequently, no unexpected recording takes place.

Of course, you can now be more careful about what you say around your TV. But as users can activate or deactivate this voice recognition feature, my guess is that most will actually prefer to use the old remote control and keep the TV as dumb as possible. Just the idea that private conversations taking place in front of the TV screen might be involuntarily recorded is motivation enough.

Also, considering the personal data at stake (relating to an identified or identifiable person), these situations raise very relevant data protection concerns. Can it simply be accepted that the user consented to the Terms and Conditions of the TV purchased? Were these very significant terms made clear at any point? It is quite certain that users could not have foreseen, at the time of purchase, that such deep and extensive collection would actually take place. If so, such consent cannot be considered to have been freely given. It suffices to think that the features used for the collection of data are what make the TV smart in the first place and, therefore, the main reason for buying the product.

Moreover, is this collection strictly necessary for the intended service to be provided? When the data at stake involves data from other devices, or wording other than the voice commands, the answer cannot be positive. The transmission of personal data to third parties only makes all this worse, as it is not specified under what conditions data is transmitted or who that third party actually is. Adding to this, since these settings mostly come by default, they are certainly not privacy-friendly and amount to stealthy monitoring. Last but not least, it remains to be seen whether proper data anonymisation/pseudonymisation techniques are effectively put in place.
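On that last point, one common way pseudonymisation is implemented is by replacing direct identifiers with keyed hashes before the data leaves the device or reaches analytics. A minimal sketch, assuming a hypothetical device-identifier field and a made-up controller secret (standard-library Python, for illustration only):

```python
import hmac
import hashlib

# Secret held by the data controller, stored separately from the data.
# (Hypothetical value, for illustration.)
PSEUDONYMISATION_KEY = b"controller-secret-key"

def pseudonymise(device_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same device always maps to the same pseudonym, so viewing data can
    still be linked per device, but the original identifier cannot be
    recovered without the controller's secret key.
    """
    return hmac.new(PSEUDONYMISATION_KEY,
                    device_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical usage record before transmission to an analytics party
record = {"device_id": "TV-SN-0001", "programme": "evening news"}
record["device_id"] = pseudonymise(record["device_id"])
```

Note that this is pseudonymisation, not anonymisation: whoever holds the key can re-link the data to the device, which is precisely why pseudonymised data is still treated as personal data under EU data protection law.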

Nevertheless, these situations brought back into the spotlight the risks to privacy associated with personal devices in the Internet of Things era. As smart devices become more and more present in our households, we are smoothly losing privacy or, at least, our privacy faces greater risks. In fact, it is quite difficult to live nowadays without these technologies, which undoubtedly make our lives so much more comfortable and easier. It is time for people to realise that all this convenience comes at a cost. And a high one.


© 2017 The Public Privacy
