Month: February 2016

Security v. Security – Tech Companies, Backdoors and Law Enforcement Authorities

Grab the popcorn, this is going to be fun!


The request for access to the information stored on the smartphone of one of the San Bernardino shooters has intensified the debate on the implementation of backdoors enabling access to mobile devices for law enforcement purposes.

The issue is not whether law enforcement authorities, armed with a proper warrant, are entitled to search a mobile phone and access its content. That much is straightforward. They are.

What is at stake is Apple’s objection to a court order requiring it to provide the ongoing federal investigation with the means to access such information. More concretely, Apple has been required to write code modifying the iPhone software so as to bypass an important security function: the feature which automatically erases the device’s data after ten incorrect passcode attempts. Disabling it would enable the authorities to enter wrong passcodes endlessly and eventually crack the device’s passcode through brute force, without risking the deletion of its content, thus allowing them to access and extract the information contained on the suspect’s iPhone.
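To see why that auto-erase feature matters, consider some back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a four-digit passcode and roughly 80 ms of enforced key-derivation delay per guess, in line with what was publicly reported about iPhones of that era), not Apple specifications:

```python
# Rough sketch: why disabling the ten-try erase limit matters for brute force.
# All figures are illustrative assumptions, not Apple specifications.

GUESSES_4_DIGIT = 10 ** 4    # every possible four-digit passcode
GUESSES_6_ALNUM = 36 ** 6    # six lowercase-alphanumeric characters
SECONDS_PER_GUESS = 0.08     # assumed ~80 ms key-derivation delay per attempt

def worst_case_hours(guesses: int) -> float:
    """Hours needed to try every passcode once the erase limit is bypassed."""
    return guesses * SECONDS_PER_GUESS / 3600

print(f"4-digit PIN:         {worst_case_hours(GUESSES_4_DIGIT):.2f} hours")
print(f"6-char alphanumeric: {worst_case_hours(GUESSES_6_ALNUM):.0f} hours")
```

With the erase limit gone, a four-digit PIN falls in well under an hour; only a long alphanumeric passcode pushes brute force into impractical territory.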

The use of new technologies to conduct criminal and terrorist activities has made it difficult to ignore the advantages of accessing communications made through such technologies when investigating, preventing and combating crime. Law enforcement authorities point out that this is particularly pertinent in the fight against terrorism, paedophilia networks and drug trafficking.

In this context, the use of encryption in communications has become the cornerstone of the debate. Investigative authorities want backdoors implemented in mobile devices in order to ensure access when necessary. Contrastingly, companies such as Apple refuse to retain access keys to such encrypted communications – and, consequently, to provide them upon request of law enforcement authorities.

Just recently, FBI Director James Comey told the US Senate Intelligence Committee that intelligence services are not interested in ‘backdoor’ access to secure devices per se. Instead, what is at stake is requiring companies to provide the encrypted messages sent through those devices. James Comey is a wordplay habitué. He once said he wanted ‘front doors’ instead of ‘back doors’.

In the same line, White House Press Secretary Josh Earnest recently stated that, under the abovementioned court order, Apple is not being asked to redesign its products or to create a backdoor.

While these are, at the very least, very puzzling statements, they nevertheless clearly express the underlying motivation: the banning of encryption products without backdoors and the implementation of backdoors.

Indeed, if companies can be required to undermine their own security and privacy protection features in order to provide access to law enforcement authorities – regardless of how legitimate the underlying purpose, and whatever designation one might find preferable – that is the very definition of a backdoor.

It never ceases to amaze me how controversial it seems to be, among free people living in a democracy, that the implementation of backdoors is – on both legal and technological grounds, and for the sake of everyone’s privacy and security – a very bad idea.

Well, the main argument supporting the concept is that such a technological initiative would chiefly help the fight against criminal activities. That is unquestionably a very legitimate purpose. And nobody opposing the implementation of backdoors actually argues otherwise.

However, it is a fact that backdoors would automatically make everyone’s communications less secure and expose them to a greater risk of attacks by third parties and to further privacy invasions. Moreover, no real guarantees against the risk of the abuse which could ensue are ever provided. Those arguing in favour of access to information through backdoors fail to adequately frame the context. It is vaguely stated that such a mechanism would be used when necessary, without any strict definition. What is necessary, anyway? Would it depend on the relevance of the information at stake? Would it depend on the existence of alternative means, or on how burdensome those are?

At the very least, if Apple complies with the order, it is difficult to believe that similar requests will not immediately ensue. In fact, one might venture that they can be expected and will certainly be encouraged in the future. Ultimately, the cracking software thus created could be used and abused in future cases. And this is particularly worrisome considering the lack of a legal framework and of any judicial precedent.

One may be tempted to sacrifice privacy in the interest of public security. That is not a wrongful viewpoint in itself. I don’t know anyone who would disagree with it – except when considering the very limitations of backdoors when it comes to fighting terrorism, for instance. It is harder to support backdoors as a means of preventing criminal activities when confronted with their inherent inefficiency and limitations, which seem to go unacknowledged by their supporters.

While companies may be forced to implement such backdoors in order to provide access to encrypted communications, there is a myriad of alternatives in the marketplace for criminals seeking encrypted products with no such backdoors installed: encryption apps, file encryption, open source products, virtual private networks…
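Indeed, strong encryption is not a product that can be recalled; it is public mathematics. As a toy illustration (not a production scheme – real systems should use vetted cryptographic libraries), a few lines of standard-library Python implement a one-time pad, which is unbreakable when the key is truly random, kept secret, used only once and as long as the message:

```python
# Toy one-time pad: a reminder that strong encryption needs no special product.
# Illustrative only - do not hand-roll cryptography for real use.
import secrets

def encrypt(message: bytes) -> tuple[bytes, bytes]:
    """XOR the message with a fresh random key of equal length."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR again with the same key to recover the message."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = encrypt(b"meet at dawn")
assert decrypt(key, ct) == b"meet at dawn"
```

No backdoor mandate on device vendors can remove this capability from anyone willing to type a dozen lines of code.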

Let’s talk about ISIS, for instance. It has been alleged – without further demonstration – that they have their own open source encrypted communications app. Therefore, apart from weakening the safety of the communications of everybody relying on encrypted messaging apps, and considering the open source nature of the app used by ISIS, the implementation of backdoors would be pointless for the purpose it is intended to achieve.

That said, one can easily understand Apple’s stance. Having built its reputation on the privacy and security provided by its devices, it is very risky, from a commercial viewpoint, to be asked to develop software that runs counter to its core business. Indeed, Apple modified its software in 2014 precisely so as to become unable to unlock its smartphones and access its customers’ encrypted data.

The fact that the company is now being asked to help law enforcement authorities by building a backdoor to get around a security function that prevents the decryption of a device’s content appears to be just another way of achieving the same outcome. Under a different designation.

This now goes way beyond requiring companies to comply with a lawful order and warrant to the extent they are able to. Requesting private companies to create a tool intended to weaken the security of their own operating systems defies any good sense. Indeed, it amounts to requiring (forcing?) private companies to create and deliver to law enforcement authorities hacking tools which actually put everyone’s privacy and cybersecurity at risk.

And if this becomes an accepted requirement in democratic systems, whether by precedent or through legislative changes, well, one can only wonder with what enthusiasm such news will be welcomed by repressive regimes eager to expand their surveillance powers.

From an EU viewpoint, considering how uncertain the future of the Privacy Shield framework is, and despite the existing divergences among EU Member States in respect of encryption, this whole case certainly does not solve any trust issues regarding the security of the data transferred to the US.

Safe Harbour 2.0 – Not really safe nor sound

Round 2!


Back in 2013, the revelations of the massive and indiscriminate surveillance conducted by the US authorities prompted EU demands for the strengthening of the Safe Harbour mechanism.

As you may well be aware by now, the very lengthy negotiations between the EU and the US for the new EU-US Safe Harbour – christened the “EU-US Privacy Shield” and intended to replace the former Safe Harbour Agreement – have apparently come to an end.

Which seems to be quite good news, considering how intricate those negotiations were.

Certainly, the approval of the Cybersecurity Information Sharing Act (CISA) – under which companies are encouraged to share ‘cyber threat’ indicators with the US government in exchange for absolution from liability regarding data security – did not help the case. Indeed, this undoubtedly poses a problem for the EU when such information includes European citizens’ personal data.

Similarly, the delays in the proposed Judicial Redress Act, which would allow European citizens to seek redress against the US if law enforcement agencies misused their personal data, only added to the existing complications.

The fact that negotiators were racing against the clock was another source of stress.

Time was pressing for companies which rely on the Safe Harbour framework to freely transfer data between the United States and the European Union. Indeed, last October the Court of Justice of the EU ruled that the Safe Harbour decision was invalid (case C-362/14). Consequently, companies have had to rely on other legal bases to justify their transfers of personal data to the US.

Moreover, the Article 29 Working Party established the end of January as the turning point by which it would take all necessary and appropriate action if no alternative was provided.

The end of January indeed passed and at the beginning of February the conclusion of the negotiations was finally announced.

However, no bilateral agreement was actually reached, as the new framework is based on “an exchange of letters” with written binding assurances.

The US has indeed offered to address the concerns regarding its authorities’ access to personal data transferred under the Safe Harbour scheme by creating an entity tasked with ensuring that such activity is not excessive. Moreover, access to information by public authorities will be subject to clear limitations, safeguards and oversight mechanisms.

That said, the conclusion of these negotiations represents good news. At least in theory. Certainly, in the EU Commission’s own words, the new framework “will protect the fundamental rights of Europeans where their data is transferred to the United States and ensure legal certainty for businesses”.

The EU Commission further stated that the new mechanism reflects the requirements set out by the European Court of Justice in its Schrems ruling, namely by providing “stronger obligations on companies in the U.S. to protect the personal data of Europeans and stronger monitoring and enforcement by the U.S. Department of Commerce and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities.”

Moreover, it said that the new mechanism “includes commitments by the U.S. that possibilities under U.S. law for public authorities to access personal data transferred under the new arrangement will be subject to clear conditions, limitations and oversight, preventing generalised access.”

It appears that mass and indiscriminate surveillance would constitute a violation of the agreement. However, it would still be permissible where targeted access is not possible.

Furthermore, “Europeans will have the possibility to raise any enquiry or complaint in this context with a dedicated new Ombudsperson.” This independent entity is yet to be appointed.

The cornerstones of the arrangement therefore seem to be the obligations imposed on companies handling the personal data of EU data subjects, the restrictions on US government access and the judicial redress possibilities.

A joint annual review is intended to be put in place in order to monitor the functioning of the agreement.

Nevertheless, in spite of what is optimistically expected and what one is led to believe by the EU Commission’s own press release, one must wonder… What has really been achieved in practice?

To begin with, it seems that we are supposed to rely on a declaration by the US authorities on their interpretation regarding surveillance.

Unsurprisingly, many fail to see in what way this new framework is fundamentally different from the Safe Harbour, let alone how it complies with the requirements set out by the CJEU in the Schrems ruling. Hence, it is perhaps to be expected that the CJEU will invalidate it on the same grounds on which it invalidated the Safe Harbour framework.

While US access to EU citizens’ data is expected to be limited to what is necessary and proportionate, and as the devil is generally in the details, one must legitimately ask what is to be deemed necessary and proportionate with regard to such surveillance.

It is indeed unavoidable to think that such a framework neither ensures the proper protection of the fundamental rights of Europeans whose data is transferred to the US, nor provides EU citizens with adequate legal means to redress violations, namely regarding possible interception by US security agencies.

Anyway, at the moment, the ‘Ombudsperson’ has not yet been set up by the US, nor has any adequacy decision been drafted by the EU Commission.

What does this mean in practice?

Well, as transfers to the United States cannot take place on the basis of the invalidated Safe Harbour decision, transfers of data to the US still lack a legal basis, and companies will have to rely on alternative legal bases, such as Binding Corporate Rules, Model Contract Clauses or the derogations in Article 26(1) of the Data Protection Directive.

However, the EU data protection authorities (DPAs) did not exclude the possibility, in particular cases, of preventing companies from adopting new binding corporate rules (BCRs) or putting in place model contract clauses for new data transfer agreements. It remains to be assessed whether personal data transfers to the United States can occur under these transfer mechanisms. After all, the fact that data transferred under these methods is subject to surveillance by US national security agencies is the very issue which led the CJEU to rule the Safe Harbour framework invalid.

In the meantime, the Art.29WP expects to receive, by the end of February, the relevant documents in order to assess their content and whether they properly answer the concerns raised by the Schrems judgment.

It further outlined that any framework for intelligence activities should be guided by four ‘essential guarantees’:

A. Processing should be based on clear, precise and accessible rules: this means that anyone who is reasonably informed should be able to foresee what might happen with her/his data where they are transferred;
B. Necessity and proportionality with regard to the legitimate objectives pursued need to be demonstrated: a balance needs to be found between the objective for which the data are collected and accessed (generally national security) and the rights of the individual;
C. An independent oversight mechanism should exist, that is both effective and impartial: this can either be a judge or another independent body, as long as it has sufficient ability to carry out the necessary checks;
D. Effective remedies need to be available to the individual: anyone should have the right to defend her/his rights before an independent body.

That said, an ‘adequacy decision’ still has to be drafted and, after consultation of the Art.29WP, approved by the College of Commissioners. In parallel, the U.S. Department of Commerce is expected to implement the agreed-upon mechanisms.

So, let’s wait and see how it goes from here…

Truecaller: In the crossroad between privacy and data protection

Let me see, where am I uploading others’ personal information today?


As I have already made clear in a previous post, there is little that I find more annoying than being bothered, at the end of a particularly stressful workday, by unrequested telemarketing or spam phone calls. And while I understand that, on the other side of the line, there is a person acting under the direct orientation – and possibly supervision – of his/her employer, these calls almost always seem like an endurance test for one’s patience.

Therefore, mobile software or applications enabling the prior identification of certain numbers – by replicating the Caller ID experience (as if the number were saved in your contact list) – or allowing for the automatic blocking of undesirable calls have found here a market in which to succeed.

That said, you have most certainly heard of, and possibly installed, apps such as Current Caller ID, WhosCall or Truecaller. Most probably, you find them quite useful for avoiding unwanted contacts.

As I have, on several occasions, unlisted my number from the Truecaller database, but keep noticing that it eventually ends up in that list all over again, I want to address today some specific issues regarding that app.

Truecaller is freely available on the iOS and Android platforms and is quite efficient at what it promises, providing its subscribers with a humongous database of previously identified contacts. In particular, it enables users to identify spam callers without even having to pick up.

This efficiency is the result of the data provided by the millions of users who have downloaded the app on their smartphones.

How?

Well, it suffices that a user allows the app to access his/her contact list, as foreseen in the end user agreement – which many might not have read. Once this consent has been obtained, the information in the contacts book is uploaded to Truecaller’s servers and made available to the rest of its subscribers.

Thanks to this crowd-sourced data system, you are able to identify unknown numbers.

Therefore, it suffices that another user has marked a given contact as spam for you to be able to immediately identify a caller as such and save yourself from the patience-consuming contact. Indeed, and quite unmistakably, if a number flagged as unwanted by others calls you, the screen of your smartphone will turn red and present the image of a shady figure wearing a fedora and sunglasses.

On the downside, if anybody has saved your number and name in their address book, it suffices that this one person has installed the Truecaller app on their mobile phone and accepted the abovementioned permission clause for your number and name to end up in that database.

A new interface enables users to add information from their social media channels. Therefore, besides your contact information, if users activate the use of third-party social network services, such as Facebook, Google+, LinkedIn or Twitter, Truecaller may upload, store and use the list of identifiers associated with those services linked to your contacts in order to enhance the results shared with other users.

Moreover, it has recently been updated to introduce a search function, thus enabling you to search for any phone number and find any person’s contact.

In the same line, Facebook – which is only the largest social network – has decided to put the wealth of data its users provide to new uses with the app Facebook Hello. In that regard, users are required to grant it access to the information contained in their Facebook account. Indeed, Hello uses Facebook’s database to provide details of a caller. Contrastingly, other apps, such as Contacts+, integrate information provided on different social networks.

While it is undeniably useful to identify the person behind an unknown number, this means that those same others will be able to identify you when you contact them, even if they do not have your number.

Truecaller raises several privacy and data protection concerns. In fact, as names and telephone numbers actually enable the adequate identification of natural persons, there is no doubt that such information constitutes personal data.

Nevertheless, in Truecaller’s own words:

When you install and use the Truecaller Apps, Truecaller will collect, process and retain personal information from You and any devices You may use in Your interaction with our Services. This information may include the following: geo-location, Your IP address, device ID or unique identifier, device type, ID for advertising, ad data, unique device token, operating system, operator, connection information, screen resolution, usage statistics, version of the Truecaller Apps You use and other information based on Your interaction with our Services.

This is particularly problematic considering that Truecaller clearly and manifestly collects and processes the personal data of other data subjects besides its users.

As for the information related to other persons, the privacy policy states:

You may share the names, numbers and email addresses contained in Your device’s address book (“Contact Information”) with Truecaller for the purpose described below under Enhanced Search. If you provide us with personal information about someone else, you confirm that they are aware that You have provided their data and that they consent to our processing of their data according to our Privacy Policy.

This statement is, in my very modest opinion, absolutely ludicrous. Most people who have installed the service are not even aware of how it works, let alone that an obligation to notify an entire contact list and obtain individual consent rests upon them. In this context, it is paramount to take into consideration that, in the vast majority of cases, from the users’ viewpoint, the data at stake is collected and processed merely for personal purposes. Moreover, consent is usually defined as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed”.

This basically amounts to saying that non-users do not have any control over their personal data, as the sharing of their name and phone number will depend on how many friends actually install Truecaller.

It is evident that Truecaller has no legal permission to process any personal data from non-users of its service. These non-users are data subjects who most certainly have not unambiguously given their consent, the processing is not necessary for the performance of a contract in which the data subject is party, and no legal obligation nor vital interests nor the performance of a task carried out in the public interest are at stake.

In this regard, the possibility provided to those who do not wish to have their names and phone numbers made available through the enhanced search or name search functionalities to exclude themselves from further queries by notifying Truecaller is not even realistic. To begin with, one is required to be aware of the existence of the service, then to actively check whether one’s number is in its directory, and finally to ask Truecaller to unlist it from its database. However, considering all my failed attempts, I am not sure whether this option is only available to users, or whether it simply does not prevent one from being added to the service’s database all over again once another user with the relevant contact in his address book allows such access.

Last, but not least – and this should not really come as any surprise – Truecaller has already been hacked in the past by a Syrian hacking group, which resulted in unauthorized access to some (personal) data of users and non-users. This surely highlights the importance of users carefully choosing the services to which they entrust their – and others’ – personal data.

All things considered, Truecaller is the obvious practical example of the saying: ‘If You’re Not Paying For It, You Are the Product Being Sold’.

Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook


It’s just a picture of them playing, they don’t mind. Like!

Similarly to what is happening in other EU Member States’ courts, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision which I addressed here, a court considered that those who have in their possession a video containing intimate images of an ex-partner are under an obligation to guard it properly, and that failure to take adequate safeguards is condemnable as a relevant omission.

That said, there is one particular decision, issued by a Portuguese appeal court last year, which I failed to address in due time and which concerns the very specific image rights of children in the online context. Considering the amount of pictures that appear on my Facebook wall every time I log in to my account, and the concerns expressed by the upcoming GDPR regarding the collection and processing of data referring to minors under sixteen, I would like to address it today.

The court confirmed the decision of the court of first instance, issued within a process regulating the parental responsibilities of each parent, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It severely stated that children are not things or objects belonging to their parents.

One would expect that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering their specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news referring to this court decision was greatly shared among my Facebook friends, most of them with children of their own. The same ones who actually happily share pictures of their own kids. And who haven’t decreased the sharing of information involving their children since then.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly favoured, or of which they are embarrassed, and are quick to demand its removal, yet do not wonder whether it is ethical to publish a picture of, or information about, someone who is not able to give his/her consent. Shouldn’t we worry about what type of information children – our own, our friend’s, our little cousin or nephew – would want to see about themselves online in the future?

Every time I log in to my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtimes in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that the child had a serious illness – fortunately since overcome, but which received full graphic and descriptive Facebook coverage at the time of its development.

I have seen pictures where my friends’ children appear almost naked or in unflattering poses, or where it is clearly identifiable where they go to school or spend their holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, their parenthood is quite well established. I always think that, in the long run, this would permit the building of an extended and detailed profile by anybody who has access to such data. And, if you have read any of my other posts, you know by now that I am not exactly referring to the Facebook friends.

More worryingly, these details about the children’s lives are often displayed on the parents’ online profiles, perhaps due to simple distraction or unawareness, without any privacy settings being implemented. Consequently, anybody having a Facebook account can look for the intended person and have access to all the information contained on that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is the biggest love, and it is only human to want to proudly share his or her growth, development and achievements with relatives and friends. It has always been done, and now it is almost effortless and immediate, at the distance of a click. In this regard, by forbidding the sharing of any picture or any information regarding children, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that some good sense and responsibility are particularly required in the online context, considering how easy it actually is to lose control of the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, content is no longer within our reach, as it can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure anonymity online, the amount of information that is published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing online pictures and information regarding their children makes me wonder how companies such as Facebook, and other platforms focusing on user-generated content – which process data at the direction of the user and, consequently, unintentionally end up collecting and processing personal data regarding children below the age of sixteen – may be asked to comply with the new requirements of the GDPR in that regard.

If such processing is to be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court has understood it, parents are not entitled to dispose of their children’s image on social media, a funny conundrum is generated. If the parents cannot publish such information, they will not be able to authorise it either and, consequently, children/teenagers won’t be able to rely on their parents’ authorisation to use social media.

© 2023 The Public Privacy
