Security v. Security – Tech Companies, Backdoors and Law Enforcement Authorities

Grab the popcorn, this is going to be fun!

The request for access to the information stored on the smartphone of one of the San Bernardino shooting suspects has intensified the debate on the implementation of backdoors enabling law enforcement access to mobile devices.

The issue is not whether law enforcement authorities, armed with a proper warrant, are entitled to search a mobile phone and access its content. That much is straightforward. They are.

What is at stake is Apple’s objection to a court order requiring it to provide the ongoing federal investigation with the means to access such information. More concretely, Apple has been required to write code modifying the iPhone software so as to bypass an important security feature: the one that automatically erases the device’s content after ten incorrect password attempts. Disabling it would allow the authorities to enter wrong credentials endlessly and eventually crack the device’s password by brute force, without risking the deletion of its content, thus gaining access to the information stored on the suspect’s iPhone.
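To put the stakes in perspective, here is a back-of-the-envelope sketch, in Python, of how quickly a numeric passcode falls to brute force once the ten-attempt limit is out of the way. The ~80 ms per guess is the delay commonly attributed to iOS key derivation; treat it, and the keyspace assumptions, as illustrative rather than measured figures.

```python
# Back-of-the-envelope: time to brute-force a numeric passcode once the
# ten-attempt erase limit is disabled. 0.08 s per guess is an assumption
# (the commonly cited floor imposed by iOS key derivation), not a measurement.
SECONDS_PER_GUESS = 0.08

for digits in (4, 6):
    keyspace = 10 ** digits                    # e.g. 10,000 four-digit PINs
    worst_case = keyspace * SECONDS_PER_GUESS  # seconds to try them all
    print(f"{digits}-digit PIN: {keyspace:,} candidates, "
          f"worst case {worst_case / 3600:.1f} h")
```

On those assumptions, a four-digit PIN falls in well under an hour and even a six-digit one in about a day – which is precisely why the erase-after-ten-attempts feature matters.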

The use of new technologies to conduct criminal and terrorist activities has made it difficult to ignore the advantages of accessing communications carried over such technologies when investigating, preventing and combating criminal activity. Law enforcement authorities point out that such access is particularly pertinent in the fight against terrorism, paedophilia networks and drug trafficking.

In this context, the use of encryption in communications has become a cornerstone of the debate. Investigative authorities want backdoors implemented in mobile devices in order to ensure access when necessary. Companies such as Apple, by contrast, refuse to retain access keys to such encrypted communications – keys they would consequently have to hand over upon request of law enforcement authorities.

Just recently, FBI Director James Comey told the US Senate Intelligence Committee that intelligence services are not interested in a ‘backdoor’ per se to secure devices. Instead, what is at stake is requiring companies to hand over the encrypted messages sent through those devices. Comey is a wordplay habitué: he once said he wanted ‘front doors’ instead of ‘back doors’.

In the same vein, White House Press Secretary Josh Earnest recently stated that, under the abovementioned court order, Apple is not being asked to redesign its products or to create a backdoor.

While these are, at the very least, very puzzling statements, they nevertheless clearly express the underlying motivation: banning encryption products with no backdoors, and implementing backdoors.

Indeed, if companies can be required to undermine their own security and privacy features in order to give law enforcement authorities access – however legitimate the underlying purpose, and whatever concrete designation one might find preferable – that is the very definition of a backdoor.

It never ceases to amaze me how controversial it seems to be, among free people living in democracies, that the implementation of backdoors is – on both legal and technological grounds, and for the sake of everyone’s privacy and security – a very bad idea.

Well, the main argument supporting the concept is that such a technological initiative would chiefly help combat criminal activity. That is unquestionably a very legitimate purpose. And nobody opposing the implementation of backdoors actually argues otherwise.

However, it is a fact that backdoors would automatically make everyone’s communications less secure, exposing them to a greater risk of attacks by third parties and to further privacy invasions. Moreover, no real guarantees against the risk of ensuing abuse are ever provided. Those arguing in favour of access to information through backdoors fail to adequately frame the context: it is vaguely stated that such mechanisms would be used when necessary, without any strict definition. What is necessary, anyway? Would it depend on the relevance of the information at stake? On the existence of alternative means, or on how burdensome those are?

At the very least, if Apple complies with the order, it is difficult to believe that similar requests will not immediately ensue. In fact, one might risk saying that they can be expected, and will certainly be encouraged, in the future. Ultimately, this cracking software could be used and abused in future cases. And that is particularly worrisome considering the lack of any legal framework or basis in judicial precedent.

One may be tempted to sacrifice privacy in the interest of public security, and that is not a wrongful viewpoint in itself; I don’t know anyone who would disagree with it. But it is harder to support backdoors as a means of preventing criminal activity when confronted with their inherent inefficiency and limitations, which seem to go unacknowledged by their supporters.

While companies may be forced to implement such backdoors to provide access to encrypted communications, there is a myriad of alternatives in the marketplace for criminals seeking encryption products in which no such backdoors are installed: encryption apps, file encryption, open source products, virtual private networks…

Take ISIS, for instance. It has been alleged – without further demonstration – that the group has its own open source encrypted communications app. Given that open source nature, the implementation of backdoors would be pointless for the purpose intended to be achieved, apart from weakening the security of everybody else relying on encrypted messaging apps.

That said, one can easily understand Apple’s stance. Having built its reputation on the privacy and security provided by its devices, it is very risky from a commercial viewpoint to be asked to develop software that counters its core business. Indeed, Apple modified its software in 2014 precisely in order to make itself unable to unlock its smartphones and access its customers’ encrypted data.

The fact that the company is now being asked to help law enforcement authorities by building a backdoor around a security feature that prevents the decryption of a device’s content appears to be just another way of achieving the same outcome. Under a different designation.

This goes way beyond requiring companies to comply with a lawful order and warrant to the extent they are able to. Requesting private companies to create a tool intended to weaken the security of their own operating systems defies good sense: it amounts to requiring (forcing?) private companies to create and deliver hacking tools to law enforcement authorities – tools which actually put everyone’s privacy and cybersecurity at risk.

And if this becomes a well-accepted requirement in democratic systems, whether by precedent or through legislative change, well, one can only wonder with what enthusiasm such news will be welcomed by repressive regimes eager to expand their surveillance powers.

From an EU viewpoint, considering how uncertain the future of the Privacy Shield framework is, and despite the existing divergences among EU Member States in respect of encryption, this whole case certainly does not solve any trust issues regarding the security of data transferred to the US.

Safe Harbour 2.0 – Not really safe nor sound

Round 2!

So, back in 2013, the revelations of the massive and indiscriminate surveillance conducted by the US authorities prompted EU demands for the strengthening of the Safe Harbour mechanism.

As you may well be aware by now, the very lengthy negotiations between the EU and the US for a new EU-US Safe Harbour – christened the “EU-US Privacy Shield” and intended to replace the former Safe Harbour agreement – have apparently come to an end.

Which seems to be quite good news, considering how intricate those negotiations were.

Certainly, the approval of the Cybersecurity Information Sharing Act (CISA) – under which companies are encouraged to share ‘cyber threat’ indicators with the US government in exchange for absolution from liability for data security – did not help the case. Indeed, this undoubtedly poses a problem for the EU when such information includes European citizens’ personal data.

Similarly, the delays to the proposed Judicial Redress Act, which would allow European citizens to seek redress against the US if law enforcement agencies misused their personal data, only added to the existing complications.

The fact that negotiators were racing against the clock was another point of stress.

Time was pressing for companies which relied on the Safe Harbour framework to freely transfer data between the United States and the European Union. Indeed, last October, the Court of Justice of the EU ruled the Safe Harbour decision invalid (Case C-362/14). Consequently, companies have had to rely on other legal bases to justify transfers of personal data to the US.

Moreover, the Article 29 Working Party set the end of January as the turning point after which it would take all necessary and appropriate action if no alternative was provided.

The end of January indeed passed and at the beginning of February the conclusion of the negotiations was finally announced.

However, no bilateral agreement was really reached, as the new framework is based on “an exchange of letters” containing written binding assurances.

The US has indeed offered to address the concerns regarding its authorities’ access to personal data transferred under the Safe Harbour scheme by creating an entity tasked with controlling that such activity is not excessive. Moreover, access to information by public authorities will be subject to clear limitations, safeguards and oversight mechanisms.

That said, the conclusion of these negotiations represents good news. At least in theory. Certainly, in the EU Commission’s own words, the new framework “will protect the fundamental rights of Europeans where their data is transferred to the United States and ensure legal certainty for businesses”.

The EU Commission further stated that the new mechanism reflects the requirements set out by the European Court of Justice in its Schrems ruling, namely by providing “stronger obligations on companies in the U.S. to protect the personal data of Europeans and stronger monitoring and enforcement by the U.S. Department of Commerce and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities.”

Moreover, it said that the new mechanism “includes commitments by the U.S. that possibilities under U.S. law for public authorities to access personal data transferred under the new arrangement will be subject to clear conditions, limitations and oversight, preventing generalised access.”

It appears that mass and indiscriminate surveillance would constitute a violation of the agreement. However, it would still be permissible where targeted access is not possible.

Furthermore, “Europeans will have the possibility to raise any enquiry or complaint in this context with a dedicated new Ombudsperson.” This independent entity is yet to be appointed.

The cornerstones of the arrangement therefore seem to be the obligations incumbent on companies handling the personal data of EU data subjects, the restrictions on US government access, and the possibilities of judicial redress.

A joint annual review is intended to be put in place in order to monitor the functioning of the agreement.

Nevertheless, in spite of what is optimistically expected and what one is led to believe by the EU Commission’s own press release, one must wonder… What has really been achieved in practice?

To begin with, it seems that we are supposed to rely on a declaration by the US authorities setting out their interpretation regarding surveillance.

Unsurprisingly, many fail to see in what way this new framework is fundamentally different from the Safe Harbour, let alone how it complies with the requirements set out by the CJEU in the Schrems ruling. Hence, it is perhaps to be expected that the CJEU will invalidate it on the same grounds on which it invalidated the Safe Harbour framework.

While US access to EU citizens’ data is expected to be limited to what is necessary and proportionate, and as the devil is generally in the details, one must legitimately ask what is to be deemed necessary and proportionate as regards such surveillance.

It is indeed unavoidable to think that such a framework neither ensures the proper protection of the fundamental rights of Europeans where their data is transferred to the US, nor provides EU citizens with adequate legal means to redress violations, namely as regards possible interception by US security agencies.

Anyway, at the moment, the ‘Ombudsperson’ has not yet been set up by the US, nor has any adequacy decision been drafted by the EU Commission.

What does this mean in practice?

Well, as transfers to the United States cannot take place on the basis of the invalidated Safe Harbour decision, transfers of data to the US still lack a legal basis, and companies will have to rely on alternative legal bases, such as Binding Corporate Rules, Model Contract Clauses or the derogations in Article 26(1) of Directive 95/46/EC.

However, the EU data protection authorities (DPAs) did not exclude the possibility, in particular cases, of preventing companies from adopting new Binding Corporate Rules (BCRs) or from putting in place model contract clauses for new data transfer agreements. Whether personal data transfers to the United States can occur under these transfer mechanisms is still to be assessed. Yet the fact that data transferred under these methods is subject to surveillance by US national security agencies is the very issue that led the CJEU to rule the Safe Harbour framework invalid.

In the meantime, the Art.29WP expects to receive, by the end of February, the relevant documents in order to assess their content and whether they properly answer the concerns raised by the Schrems judgment.

It further outlined that any framework for intelligence activities should be guided by four ‘essential guarantees’:

A. Processing should be based on clear, precise and accessible rules: this means that anyone who is reasonably informed should be able to foresee what might happen with her/his data where they are transferred;
B. Necessity and proportionality with regard to the legitimate objectives pursued need to be demonstrated: a balance needs to be found between the objective for which the data are collected and accessed (generally national security) and the rights of the individual;
C. An independent oversight mechanism should exist, that is both effective and impartial: this can either be a judge or another independent body, as long as it has sufficient ability to carry out the necessary checks;
D. Effective remedies need to be available to the individual: anyone should have the right to defend her/his rights before an independent body.

That said, an ‘adequacy decision’ still has to be drafted and, after consultation of the Art.29WP, approved by the College of Commissioners. In parallel, the U.S. Department of Commerce is expected to implement the agreed-upon mechanisms.

So, let’s wait and see how it goes from here…

Truecaller: At the crossroads between privacy and data protection

Let me see, where am I uploading others’ personal information today?

As I have already made clear in a previous post, there is little I find more annoying than, at the end of a particularly stressful workday, being bothered by unrequested telemarketing or spam phone calls. And while I understand that, on the other side of the line, there is a person acting under the direct orders – and possibly supervision – of his/her employer, these calls almost always seem like an endurance test for one’s patience.

Therefore, mobile software or applications enabling the prior identification of certain numbers – replicating the Caller ID experience (as if the number were saved in your contact list) – or allowing for the automatic blocking of undesirable calls have found here a market in which to succeed.

That said, you have certainly heard of, and possibly installed, apps such as Current Caller ID, WhosCall or Truecaller. Most probably, you find them quite useful for avoiding unwanted contacts.

As I have, on several occasions, unlisted my number from the Truecaller database, only to notice that it eventually ends up in that list all over again, I want to address today some specific issues regarding that app.

Truecaller is freely available on iOS and Android platforms and is quite efficient at what it promises, providing its subscribers with a humongous database of previously identified contacts. In particular, it enables users to identify spam callers without even having to hang up first.

This efficiency is the result of the data provided by the millions of users who have downloaded the app on their smartphones.

How?

Well, it suffices that a user allows the app to access his/her contact list, as foreseen in the end user agreement – which many might never have read. Once this consent has been obtained, the information in the contact book is uploaded to Truecaller’s servers and made available to the rest of its subscribers.

Through this crowd-sourced data system, you are able to identify unknown numbers.

It therefore suffices that another user has marked a given number as spam for you to be able to immediately identify a caller as such and spare yourself the patience-consuming contact. Indeed, and quite undoubtedly, if a number qualified as unwanted by others calls you, the screen of your smartphone will turn red and present the image of a shady figure wearing a fedora and sunglasses.

On the downside, if anybody has saved your number and name in their address book, it suffices that this one person has installed the Truecaller app on their mobile phone and subscribed to the abovementioned permission clause for your number and name to end up in that database.
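To make the mechanics concrete, here is a minimal sketch of how a crowd-sourced caller-ID directory of this kind could work. All names, structures and thresholds are hypothetical – this is not Truecaller’s actual implementation – but it shows why a single user’s upload is enough to list a non-user.

```python
from collections import Counter, defaultdict

# Hypothetical server-side directory mapping a phone number to the names
# found in users' address books, plus spam reports. Illustrative only.
directory = defaultdict(lambda: {"names": Counter(), "spam_reports": 0})

def upload_address_book(contacts):
    """Runs when one user grants the app access to their contact list.
    Every entry -- including non-users' numbers -- lands on the server."""
    for number, name in contacts.items():
        directory[number]["names"][name] += 1

def report_spam(number):
    directory[number]["spam_reports"] += 1

def identify(number, spam_threshold=3):
    """What another subscriber sees when this number calls."""
    if number not in directory:
        return "unknown"
    entry = directory[number]
    name = entry["names"].most_common(1)[0][0] if entry["names"] else "unknown"
    flag = " [SPAM]" if entry["spam_reports"] >= spam_threshold else ""
    return name + flag

# One user uploading their address book suffices to list a non-user:
upload_address_book({"+351910000001": "Alice", "+351210000009": "Telemarketer"})
for _ in range(3):
    report_spam("+351210000009")

print(identify("+351910000001"))  # Alice -- even if Alice never installed the app
print(identify("+351210000009"))  # Telemarketer [SPAM]
```

Note that Alice never consented to anything: her listing is the by-product of someone else’s consent.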

A new interface enables users to add information from their social media channels. Therefore, besides your contact information, if users activate the use of third-party social network services, such as Facebook, Google+, LinkedIn or Twitter, Truecaller may upload, store and use the list of identifiers associated with those services linked to your contacts in order to enhance the results shared with other users.

Moreover, the app has recently been updated to introduce a search function, enabling you to look up any phone number and find the person behind it.

In the same vein, Facebook – which is only the largest social network around – has decided to put the amount of data its users provide to new uses with its app Facebook Hello. In that regard, users are required to grant it access to the information contained in their Facebook account; indeed, Hello uses Facebook’s database to provide the details of a caller. Other apps, such as Contacts+, instead integrate information provided across different social networks.

While it is undeniably useful to identify the person behind an unknown number, this means that those same others will be able to identify you when you contact them, even if they do not have your number.

Truecaller raises several privacy and data protection concerns. In fact, as names and telephone numbers enable the adequate identification of natural persons, there is no doubt that such information constitutes personal data.

Nevertheless, in Truecaller’s own words:

When you install and use the Truecaller Apps, Truecaller will collect, process and retain personal information from You and any devices You may use in Your interaction with our Services. This information may include the following: geo-location, Your IP address, device ID or unique identifier, device type, ID for advertising, ad data, unique device token, operating system, operator, connection information, screen resolution, usage statistics, version of the Truecaller Apps You use and other information based on Your interaction with our Services.

This is particularly problematic considering that Truecaller clearly and manifestly collects and processes the personal data of data subjects other than its users.

As for the information related to other persons, the policy states:

You may share the names, numbers and email addresses contained in Your device’s address book (“Contact Information”) with Truecaller for the purpose described below under Enhanced Search. If you provide us with personal information about someone else, you confirm that they are aware that You have provided their data and that they consent to our processing of their data according to our Privacy Policy.

This statement is, in my very modest opinion, absolutely ludicrous. Most people who have installed the service are not even aware of how it works, let alone that an obligation to notify an entire contact list and obtain individual consent rests upon them. In this context, it is paramount to take into consideration that, in the vast majority of cases, from the users’ viewpoint, the data at stake is collected and processed merely for personal purposes. Moreover, consent is usually defined as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

This basically amounts to saying that non-users have no control whatsoever over their personal data, as the sharing of their names and phone numbers depends on how many of their friends actually install Truecaller.

It is evident that Truecaller has no legal basis to process the personal data of non-users of its service. These non-users are data subjects who most certainly have not unambiguously given their consent; the processing is not necessary for the performance of a contract to which the data subject is a party; and no legal obligation, vital interest or task carried out in the public interest is at stake.

In this regard, the possibility given to those who do not wish to have their names and phone numbers made available through the enhanced search or name search functionalities – that of excluding themselves from further queries by notifying Truecaller – is not even realistic. To begin with, one is required to be aware of the existence of the service, then to actively check whether one’s number is in its directory, and finally to ask Truecaller to be unlisted from its database. However, considering all my failed attempts, I am not sure whether this option is only available to users, or whether it simply does not prevent one from being added to the service’s database all over again once another user with the relevant number in their address book allows such access.

Last but not least – and this should not really come as a surprise – Truecaller has already been hacked in the past by a Syrian hacking group, resulting in unauthorized access to some (personal) data of users and non-users alike. This surely highlights the importance of carefully choosing the services with which we entrust our – and others’ – personal data.

All things considered, Truecaller is the obvious practical example of the statement: ‘If You’re Not Paying For It, You Are the Product Being Sold’.

Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook

It’s just a picture of them playing, they don’t mind. Like!

Similarly to courts in other EU Member States, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision which I addressed here, a court considered that those in possession of a video containing intimate images of an ex-partner are under an obligation to guard it properly, and that the failure to take adequate safeguards is condemnable as a relevant omission.

That said, there is one particular decision, issued by a Portuguese appeal court last year, which I failed to address in due time and which concerns the very specific image rights of children in the online context. Considering the number of pictures that appear on my Facebook wall every time I log into my account, and the concerns expressed in the upcoming GDPR regarding the collection and processing of data referring to minors under sixteen, I would like to address it today.

The court confirmed the decision of the court of first instance, issued within proceedings regulating each parent’s parental responsibilities, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It sternly stated that children are not things or objects belonging to their parents.

One would expect that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering their specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news about this court decision was widely shared among my Facebook friends, most of them with children of their own. The same ones who happily share pictures of their own kids. And who haven’t shared any less information involving their children since.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly favoured, or of which they are embarrassed, and are quick to demand its removal – yet never wonder whether it is ethical to publish a picture of, or information about, someone who is not able to give his/her consent. Shouldn’t we worry about what type of information children – our own, our friends’, our little cousins or nephews – will want to see about themselves online in the future?

Every time I log into my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtime in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that a child had a serious illness – fortunately since overcome – which received full graphic and descriptive Facebook coverage as it unfolded.

I have seen pictures where my friends’ children appear almost naked or in unflattering poses, or where it is clearly identifiable where they go to school or spend their holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, their parenthood is quite well established. I always think that, in the long run, this would permit the building of an extended and detailed profile by anybody who has access to such data. And if you have read any of my other posts, you know by now that I am not exactly referring to their Facebook friends.

More worryingly, these details of children’s lives are often displayed on the parents’ online profiles – perhaps out of simple distraction or unawareness – without any privacy settings being implemented. Consequently, anybody with a Facebook account can look up the intended person and access all the information contained in that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is one’s greatest love, and it is only human to want to proudly share their growth, development and achievements with relatives and friends. It has always been done, and it is now almost effortless and immediate, at the distance of a click. In this regard, by forbidding the sharing of any picture or any information regarding the child, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that some good sense and responsibility are particularly required in the online context, considering how easy it actually is to lose control of the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, pictures are no longer within our reach, as they can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure anonymity online, the amount of information that is published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing online pictures and information regarding their children makes me wonder how companies such as Facebook, and other platforms focused on user-generated content – which process data at the direction of their users and, consequently, unintentionally end up collecting and processing personal data regarding children below the age of sixteen – may be asked to comply with the new requirements of the GDPR in that regard.

If such processing is to be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court has understood it, parents are not entitled to dispose of their children’s image on social media, a funny conundrum is generated. If parents cannot publish such information, they will not be able to authorize its publication either and, consequently, children and teenagers won’t be able to rely on their parents’ authorization to use social media.

Carbon Games or the scapegoat of a bad initiative

Blocking sites with no supervision whatsoever… what could possibly go wrong?

So… The implementation of a protocol to fight online piracy has led to the imposition of technical restrictions on access to the website of Carbon Games in Portugal.

Indeed, a few weeks ago, any person, player or consumer who tried to access the website was prevented from doing so by restrictions imposed by the Portuguese ISPs, namely Cabovisão, MEO, NOS and Vodafone. Any attempt to reach the site resulted in the following message: “the site that you’re trying to reach was blocked due to an order from the Regulator Agency”.

Those who are neither Portuguese nor familiar with the case might fail to grasp the myriad of underlying issues with this statement.

Therefore let me explain.

Last year, a protocol – more specifically, a ‘memorandum of understanding’ – was signed between content industry representatives and telecom operators, according to which the latter are required to restrict access to – i.e. block – websites with copyright-infringing content.

The parties involved include IGAC (Inspeção Geral das Atividades Culturais – General Inspection of Cultural Activities), DGC (Direcção Geral do Consumidor – Directorate-General for Consumers), APRITEL (Associação dos Operadores de Telecomunicações – Telecom Operators’ Association) and MAPINET (Movimento Cívico Anti-Pirataria na Internet – Civic Movement for Anti-Piracy on the Internet).

Following the long judicial process which ended with the Portuguese Intellectual Property Court ordering ISPs Vodafone, MEO and NOS to block access to The Pirate Bay, these entities felt that a faster and less expensive site-blocking mechanism was required – one that would not involve an individual judicial assessment of copyright infringements.

The abovementioned memorandum intends to frame the signatories’ cooperation in protecting copyright, while circumventing the limitation arising from ISPs having no duty to monitor the information they transmit or store – by attributing that monitoring obligation to IGAC instead.

Thus, under these newly incumbent responsibilities, IGAC is to collect and analyse infringement claims and to order ISPs to prevent access to legally protected content unlawfully made available online.

According to the memorandum, infringement claims must demonstrate the copyright owner’s lack of authorization regarding the works thus made available. Claims must also be accompanied by a document certifying that a request to remove the infringing content received no answer from the website administrator.

The specifics are as follows: websites which deal predominantly with making available copyright-protected works without the authorization of the rights holders are denounced by the entities representing the rights owners and, once the claim is confirmed by IGAC, telecom operators are notified to block the websites at stake. Denouncing claims are expected to be filed periodically (twice a month) through MAPINET, each referring to a block of up to 50 allegedly infringing websites. However, it is possible to file individual claims in situations particularly detrimental to copyright owners.

In this context, websites containing more than 500 non-authorized works, or distributing repositories in which at least two thirds of the copies are illegal, are deemed to be predominantly making available copyright-protected works without the rights holders’ authorization.
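For what it is worth, the reported ‘predominance’ test fits in a few lines. This is only a sketch of the criterion as described above – the function name and inputs are mine, not the memorandum’s:

```python
def predominantly_infringing(total_works, unauthorized_works):
    # The memorandum's reported test: more than 500 unauthorized works,
    # or unauthorized copies making up at least two thirds of the repository.
    return (unauthorized_works > 500
            or unauthorized_works / total_works >= 2 / 3)

# The arbitrariness discussed below, in two lines:
print(predominantly_infringing(100_000, 501))  # True  -> blocked
print(predominantly_infringing(100_000, 500))  # False -> fully operational
```

A single work either side of a hard threshold flips the outcome, which is exactly the kind of bright-line rule that invites both over- and under-blocking.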

The protocol has been diligently implemented in practice since its signature: as far as I am aware, up to 180 websites have been blocked under this procedure.

As the case regarding Carbon Games demonstrates, there are several flaws in this system.

To start with, it is important to clarify that Carbon Games is a US video game developer and that its website deals in games of which it is the original creator.

Secondly, this process is undertaken by several private entities and one public body, IGAC. While it is to be expected that the interests of private entities will not necessarily coincide with the general interest of the public, one would at least risk hoping that IGAC, within its recently established duty to analyse infringement claims, would not rush that analysis.

While one would expect the infringing nature of a website’s activity to be adequately assessed, it is evident that the system does not work properly, considering that Carbon Games legally produces video games and, all things considered, should have had its interests protected by this very initiative.

Additionally, the fact that ISPs are to be compensated for all the trouble that the implementation of this protocol may entail for them actually risks disincentivising the establishment of any internal system for assessing the legitimacy of the infringement claims raised.

Moreover, the threshold of 500 illegal works or two thirds of illegal copies seems absolutely arbitrary. What is the expected outcome of this decision? That websites containing 499 illegal works will remain fully operational? And if this really is the criterion, it makes the Carbon Games case all the more ludicrous.

One would expect that a website allegedly hosting illegal content would have the chance to counter-argue and present its defence. Apparently, that is not the case. In fact, judging from Carbon Games’ communication on its own website, it was not aware of any suspicion of infringing content, of any administrative proceedings, or of any blocking order prior to the effective blocking. Indeed, it seems that no mechanism has been put in place to review the wrongful blocking of websites.

In the meantime, it has been admitted that the blocking order was unduly given and, accordingly, all the providers of online services have been notified that the block should be annulled, thus restoring the proper functioning of the website.

I cannot help but wonder how such an error is even possible. Isn’t the list provided to IGAC supposed to be validated?

The efficacy of such an agreement is questionable, considering that it is quite easy to circumvent the technical restrictions implemented by the ISPs – by simply switching DNS servers, or by the website changing its domain. Still, users unaware of this are effectively prevented from accessing the content of blocked websites.
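As an illustration of just how thin DNS-level blocking is, here is a minimal sketch that asks a public resolver instead of the ISP’s. It assumes the third-party dnspython package is installed, and example.com stands in for any blocked domain:

```python
import dns.resolver  # third-party package: dnspython

# If the ISP's resolver refuses to answer for a blocked domain, pointing
# the query at any other resolver sidesteps the block entirely.
resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # a public resolver instead of the ISP's

for record in resolver.resolve("example.com", "A"):
    print(record.address)
```

In other words, the block only binds those who keep the default resolver their ISP hands them – typically the users least likely to be running pirate sites.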

More gravely, it seems that merely having a website, regardless of the legal nature of its content, is enough to be exposed to such mistakes. And the economic consequences can be quite worrisome, considering that an unjustified block deprives a platform of its customers’ access for an undefined period of time. In fact, in the Carbon Games case, it took up to two months (!!) to correct the error.

From my reading of the protocol, I honestly fail to see how the owner of a website facing an unfounded blocking order is expected to react and speedily regain its full functioning. Of course, there are proper judicial means, such as filing for an injunction. Nevertheless, considering that this entire ‘administrative’ procedure dispenses with any judicial assessment, it seems counterproductive to foresee judicial intervention only as a means of reacting to unfounded orders.

It is evident that creativity should be rewarded and incentivized through strong protection and enforcement of IP rights. However, it has been made evident that, without proper legal and judicial oversight, access to legitimate content can be unjustifiably restricted. And while the e-Commerce Directive already includes procedures for removing illegal content, considering this whole experience, this specific solution does not seem to be the right path.

The dangers of certain apps or how to put your whole life out there

Finding love, one data breach at a time.

One of my past flatmates was actively looking for love online. Besides registering on several websites to that end, I remember he also had several mobile applications (apps) installed on his smartphone. I think he actually subscribed to pretty much anything that could even remotely help him find love, but singled out Tinder as his main dating tool.

Another of my closest friends is a jogging addict – shout out, P. He has installed on his smartphone various apps which tell him how many steps he has taken on a particular day and the route undertaken, plus his heart rate via an external device, enabling him to monitor his progress.

What do my two friends have in common? Well, they both use mobile apps to cover very specific needs. And in this regard, they are like almost anybody else.

Indeed, it is difficult to escape apps nowadays. Now that everyone (except for my aunt) seems to have a smartphone, apps are increasingly popular for the most diverse purposes. For my former flatmate, it was all about dating. For my friend, it is keeping track of his running progress. But their potential does not end there: sending and receiving messages, using maps and navigation services, getting news updates, playing games, dating, or just checking the weather… You name a necessity or convenience, and there is an app for it.

On the downside, using apps usually requires providing more or less personal information for the specific intended purpose – something that has become so usual that most consider it a natural step, without giving it further thought.

In fact – a detail most seem to be unaware of – apps allow for the massive collection and processing of personal, and sometimes sensitive, data. The nature and amount of the personal data accessed and collected raise serious privacy and data protection concerns.

For instance, in the case of my abovementioned flatmate, who was registered on several similar apps – and considering that he neither created fake accounts nor provided false information – each of them collected at least his name, age, gender, profession, location (enabling one to presume where he worked, lived and spent time), sexual orientation, what he looks like (where he added a picture to his profiles), the frequency of his access to the app and, eventually, the success of his online dating life.

In fact, in Tinder’s own words:

Information we collect about you

In General. We may collect information that can identify you such as your name and email address (“personal information”) and other information that does not identify you. We may collect this information through a website or a mobile application. By using the Service, you are authorizing us to gather, parse and retain data related to the provision of the Service. When you provide personal information through our Service, the information may be sent to servers located in the United States and countries around the world.
Information you provide. In order to register as a user with Tinder, you will be asked to sign in using your Facebook login. If you do so, you authorize us to access certain Facebook account information, such as your public Facebook profile (consistent with your privacy settings in Facebook), your email address, interests, likes, gender, birthday, education history, relationship interests, current city, photos, personal description, friend list, and information about and photos of your Facebook friends who might be common Facebook friends with other Tinder users. You will also be asked to allow Tinder to collect your location information from your device when you download or use the Service. In addition, we may collect and store any personal information you provide while using our Service or in some other manner. This may include identifying information, such as your name, address, email address and telephone number, and, if you transact business with us, financial information. You may also provide us photos, a personal description and information about your gender and preferences for recommendations, such as search distance, age range and gender. If you chat with other Tinder users, you provide us the content of your chats, and if you contact us with a customer service or other inquiry, you provide us with the content of that communication.

Considering that Tinder makes available a catalogue of profiles of geographically nearby members, which one can swipe right or left according to one’s personal preferences, adequate analysis even makes it possible to determine what type of person (by age, body type, hair colour) each user finds most attractive.

And because Tinder actually depends on having a Facebook profile, I guess Facebook also becomes aware of the general climate of your romantic life. Mainly if you start adding and interacting with your new friends on that platform and, why not, changing your status accordingly.

In the specific case of Tinder, as it mandatorily requires a certain amount of Facebook information in order to ensure its proper functioning, these correlations are much easier to make.

That said, a sweep conducted by 26 privacy and data protection authorities from around the world, covering more than 1,000 apps of all kinds – Apple and Android apps, free and paid apps, public sector and private sector apps, ranging from games and health/fitness apps to news and banking apps – has made it possible to outline the main concerns at stake.

One of the issues specifically pointed out referred to the information provided to users/data subjects, as it was concluded that many apps did not have a privacy policy. In those cases, users were not properly informed – and therefore not aware – of the collection, use or further disclosure of the personal information provided.

It is a fact that most of us do not read the terms and conditions made available, and most will subscribe to pretty much any service they are willing to use, regardless of what those terms and conditions actually state.

Nevertheless, a relevant issue in this regard is the excessive amount of data collected relative to the purposes for which the information is provided, or the sneaky ways in which it is collected. For instance, even gaming apps such as solitaire, which seem far more innocuous, hide unknown risks: many contain code enabling access to the user’s information or contact list, and some even track the user’s browsing activity.

This is particularly worrisome when sensitive data, such as health information, is at stake. This kind of data is easily collected through fitness-oriented apps, which are quite in vogue nowadays. Besides any additional personally identifiable information you may eventually provide upon creating an account, the elements most certainly collected include: name or username, date of birth, current weight, target weight, height, gender, workout frequency, workout settings and duration, and heart rate. Also, if you train outdoors, geolocation will most certainly reveal the whereabouts of your exercising, from departure to arrival points – which will most probably coincide with your home address or its vicinity.
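To illustrate the kind of inference this enables, here is a toy sketch: averaging the start coordinates of a handful of logged outdoor runs already points at the runner’s front door. The coordinates below are made up.

```python
from statistics import mean

# Made-up start points of four logged morning runs (latitude, longitude).
run_start_points = [
    (38.7223, -9.1393),
    (38.7225, -9.1390),
    (38.7221, -9.1395),
    (38.7224, -9.1392),
]

# The start points cluster tightly; their centroid is a fair guess at home.
likely_home = (mean(lat for lat, _ in run_start_points),
               mean(lon for _, lon in run_start_points))
print(f"Likely home location: {likely_home[0]:.4f}, {likely_home[1]:.4f}")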

And if you are particularly proud of your running or cycling results, and willing to show all your friends what good shape you are actually in, chances are you can connect the app to your Facebook account and display that information on your profile, subsequently enabling Facebook to access the same logged information.

And things actually get worse considering that, as recent data breaches have demonstrated, the information provided by users is not even adequately protected.

For instance, if I remember it well, due to a security vulnerability in Tinder – one that has apparently since been fixed – there was a time when users’ location data, such as longitude and latitude coordinates, was actually easily accessible. Which is quite creepy and dangerous, as it would facilitate stalking and harassment in real life – every bit as bad as when it happens online.

Anyway, it is actually very easy to forget the amount of data we provide apps with. However, the correlations that can be made, the conclusions that can be inferred and the patterns that can be detected amount to sharing more information than we first realise, enabling a far more detailed profile of ourselves than most of us would feel comfortable with others knowing.

The limits of government surveillance according to the ECtHR

Limits? What do you mean by ‘limits’?

In two very recent judgments, the European Court of Human Rights (hereafter ECtHR) made several essential points regarding surveillance conducted by public authorities and its relation to Article 8 of the European Convention on Human Rights (hereafter ECHR).

Article 8 provides that governmental interference with the right to privacy must meet two criteria. First, the interference must be conducted “in accordance with the law”; second, it must be “necessary in a democratic society”. Such interference must aim to achieve the protection of the “interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”.

In previous cases regarding surveillance conducted by public authorities, the ECtHR had already concluded that any interference with the right to respect for private life and correspondence, as enshrined in Article 8 of the ECHR, must be strictly necessary for safeguarding democratic institutions. It has now further clarified that interpretation.

In these recent decisions, the ECtHR concluded that secret surveillance, as carried out in the manner described in the facts of the cases, violated Article 8 of the Convention.

The Roman Zakharov v. Russia judgment, issued on 4 December 2015, concerned allegations by the editor-in-chief of a publishing company that laws requiring three mobile network operators to install equipment permitting the Federal Security Service (“the FSB”) to intercept all his telephone communications, without prior judicial authorisation, interfered with his right to the privacy of his telephone communications.

The Court considered that “a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” must be verified, and that any interception must meet the requirements of necessity and proportionality.

The Szabó and Vissy v. Hungary judgment, issued on 12 January 2016, concerned allegations by members of a non-governmental organisation voicing criticism of the Government that legislation enabling the police to search houses, postal mail, and electronic communications and devices, without judicial authorisation, for national security purposes violated their right to respect for private life and correspondence.

The Court considered that: “the requirement ‘necessary in a democratic society’ must be interpreted in this context as requiring ‘strict necessity’ in two aspects. A measure of secret surveillance can be found as being in compliance with the Convention only if it is strictly necessary, as a general consideration, for the safeguarding the democratic institutions and, moreover, if it is strictly necessary, as a particular consideration, for the obtaining of vital intelligence in an individual operation. In the Court’s view, any measure of secret surveillance which does not correspond to these criteria will be prone to abuse by the authorities with formidable technologies at their disposal.” Consequently, it must be assessed if “sufficient reasons for intercepting a specific individual’s communications exist in each case”.

In both cases, by requiring surveillance activities to be individually targeted, the ECtHR has established that indiscriminate interception is unacceptable. This is a most welcome position, considering the well-known legislative instruments and initiatives intended to strengthen the legitimacy of mass monitoring programmes in many EU Member States.

Practical difficulties of the GDPR – the ‘right to be forgotten’ applied to online social platforms

Of all the legal challenges that the GDPR will present for businesses in general, I would like to address in this post the issues raised by its implementation with regard to social network platforms, which are quite popular nowadays.

Article 17 of the GDPR establishes the ‘right to erasure’ – or the right to be forgotten, as it has come to be referred to – which provides data subjects with the right to require from data controllers the erasure of their personal data, and the corresponding obligation of controllers to abide by such requests, without undue delay, when certain conditions are fulfilled.

Considering that infringing the ‘right to erasure’ may lead to significant economic sanctions, there is a risk that social platforms will be tempted to adopt a preventive approach, complying with all deletion requests regardless of their validity and thus erasing content on unfounded grounds. This is particularly worrisome because it may directly lead to the suppression of free speech online. Indeed, online businesses are not, and should not be deemed, competent to assess the legitimacy of such claims – a point that I have already tried to make here.

While it seems that a notice and takedown mechanism is envisaged, without much detail being provided on its practical enforceability, a particular issue in this context is that of the entities upon which the obligation rests. Indeed, the implementation of the ‘right to be forgotten’ can only be required from those who qualify as data controllers.

As data controllers are defined as the entities which determine the purposes and means of the processing of personal data, it is not clear whether online social platform providers qualify as such.

Considering the well-known Google Spain case, it is at least certain that search engines are deemed controllers in this regard. As you may well remember, the CJEU ruled that individuals, provided certain prerequisites are met, have the right to require search engines, such as Google, to remove certain results about them from searches based on the person’s name.

That said, it is questionable whether hosting platforms and online social networks focused on user-generated content, as is the case of Facebook, qualify as such, considering that the data processed depends on the actions of the users who upload the relevant information – in which case the users themselves would qualify as controllers. The language of Recital 15 of the GDPR on social networking is inconclusive in this regard.

The abovementioned Recital provides as follows:

This Regulation should not apply to processing of personal data by a natural person in the course of a purely personal or household activity and thus without a connection with a professional or commercial activity. Personal and household activities could include correspondence and the holding of addresses, or social networking and on-line activity undertaken within the context of such personal and household activities. However, this Regulation should apply to controllers or processors which provide the means for processing personal data for such personal or household activities.

This is no irrelevant issue, though. In practice, it amounts to enabling someone to require – and effectively compel – Twitter or Facebook to delete information about her/him even though it was provided by others.

And considering that any legal instrument is only as effective in practice as it is capable of being enforced, the definition of who is covered and must comply with it is unquestionably a paramount element.

As I remember reading elsewhere – I fail to remember where, unfortunately – one might wonder whether the intermediary liability regime foreseen in the e-Commerce Directive would be an appropriate mechanism for enforcing the right to erasure/right to be forgotten.

Articles 12-14 of the e-Commerce Directive indeed exempt information society services from liability under specific circumstances, namely when they act as a ‘mere conduit’ of information, when they engage in ‘caching’ (the automatic, intermediate and temporary storage of information), or when ‘hosting’ (i.e., storing information at the request of a recipient of the service).

Article 15 establishes that online intermediaries are under no general duty to monitor, or actively to seek, facts indicating illegal activity on their websites.

Taking into account the general liability regime for online intermediaries foreseen in the e-Commerce Directive (Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market), a distinction will perhaps apply according to the level of ‘activity’ or ‘passivity’ of platforms in managing the content provided by their users.

However, this regime does not fully clarify the extent of the erasure obligation. Will it be proportionate to the degree of ‘activity’ or ‘passivity’ of the service provider with regard to the content?

Moreover, it is not clear how the two regimes can be applied simultaneously. While the GDPR does not provide for any notice and takedown mechanism, and expressly states that it applies without prejudice to the e-Commerce Directive’s liability rules, the fact is that the GDPR imposes the ‘duty of erasure’ only on controllers. As intermediary liability rules entail accountability for the activities of third parties, this is not an easy requirement to overcome.

All things considered, the much-awaited GDPR has not even entered into force yet, but I already cannot wait for the next chapters.
