Tag: Data Protection

Truecaller: At the crossroads of privacy and data protection

Let me see, where am I uploading others’ personal information today?

As I have already made clear in a previous post, there is little I find more annoying than being bothered, at the end of a particularly stressful workday, by unsolicited telemarketing or spam phone calls. And while I understand that, on the other side of the line, there is a person acting under the direct instruction, and possibly supervision, of his or her employer, these calls almost always feel like a test of one’s patience.

Mobile software and applications that identify certain numbers in advance, either by replicating the Caller ID experience (as if the number were saved in your contact list) or by automatically blocking undesirable calls, have therefore found a ready market.

That said, you have most certainly heard of, and possibly installed, apps such as Current Caller ID, WhosCall or Truecaller. Most probably, you find them quite useful for avoiding unwanted calls.

As I have, on several occasions, unlisted my number from the Truecaller database, only to notice that it eventually ends up in that list all over again, I want to address today some specific issues regarding that app.

Truecaller is freely available on iOS and Android and is quite effective at what it promises, providing its subscribers with a huge database of previously identified contacts. In particular, it enables users to identify spam callers without having to hang up first.

This effectiveness is the result of the data provided by the millions of users who have installed the app on their smartphones.

How?

Well, it suffices that a user allows the app to access his or her contact list, as foreseen in the end user agreement, which many may never have read. Once this consent has been obtained, the information in the address book is uploaded to Truecaller’s servers and made available to the rest of its subscribers.

Thanks to this crowd-sourced data system, you are able to identify unknown numbers.

Therefore, it suffices that another user has marked a given contact as spam for you to be able to identify a caller as such immediately and spare yourself a patience-consuming call. Indeed, if a number flagged as unwanted by others calls you, the screen of your smartphone will turn red and display the image of a shady figure wearing a fedora and sunglasses.
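
To make the mechanism more concrete, here is a minimal, purely illustrative Python sketch of how a crowd-sourced caller ID lookup of this kind could work. The data, names and spam threshold are all hypothetical and do not reflect Truecaller’s actual implementation; the point is simply that one contributor’s upload is enough to expose a non-user’s name to every subscriber.

# Hypothetical sketch of a crowd-sourced caller ID directory.
# Every user who grants access to their address book contributes name
# entries; spam reports from any user are counted globally.

from collections import Counter, defaultdict

address_books = {
    "user_a": {"+351911111111": "Maria Silva", "+351922222222": "Pizza Place"},
    "user_b": {"+351911111111": "Maria", "+351933333333": "Telemarketer X"},
}

spam_reports = Counter({"+351933333333": 57})  # number -> how many users flagged it

SPAM_THRESHOLD = 10  # arbitrary cut-off for labelling a number as spam

def build_directory(books):
    """Merge every uploaded address book into one global directory."""
    directory = defaultdict(list)
    for contacts in books.values():
        for number, name in contacts.items():
            directory[number].append(name)
    return directory

def identify(number, directory):
    """Return the caller label any subscriber would see for an incoming number."""
    if spam_reports[number] >= SPAM_THRESHOLD:
        return "Suspected spam"
    names = directory.get(number)
    return names[0] if names else "Unknown"

directory = build_directory(address_books)
print(identify("+351911111111", directory))  # 'Maria Silva', even if you never saved it
print(identify("+351933333333", directory))  # 'Suspected spam'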

On the downside, if anybody has saved your number and name in their address book, it suffices that this one person has installed the Truecaller app on their mobile phone and accepted the abovementioned permission clause for your number and name to end up in that database.

A new interface enables users to add information from their social media channels. Therefore, besides your contact information, if users activate the use of third-party social network services, such as Facebook, Google+, LinkedIn or Twitter, Truecaller may upload, store and use the list of identifiers associated with those services and linked to your contacts in order to enhance the results shared with other users.

Moreover, the app has recently been updated to include a search function, enabling you to look up any phone number and find the person behind it.

Along the same lines, Facebook – which is, after all, the largest social network – has decided to put the data its users provide to new uses with its Facebook Hello app. In that regard, users are required to grant it access to the information contained in their Facebook account. Indeed, Hello uses Facebook’s database to provide details about a caller. By contrast, other apps, such as Contacts+, integrate information provided on different social networks.

While it is undeniably useful to identify the person behind an unknown number, this also means that others will be able to identify you when you contact them, even if they do not have your number saved.

Truecaller raises several privacy and data protection concerns. In fact, as names and telephone numbers enable the identification of natural persons, there is no doubt that such information constitutes personal data.

Nevertheless, in Truecaller’s own words:

When you install and use the Truecaller Apps, Truecaller will collect, process and retain personal information from You and any devices You may use in Your interaction with our Services. This information may include the following: geo-location, Your IP address, device ID or unique identifier, device type, ID for advertising, ad data, unique device token, operating system, operator, connection information, screen resolution, usage statistics, version of the Truecaller Apps You use and other information based on Your interaction with our Services.

This is particularly problematic considering that Truecaller clearly and manifestly collects and processes the personal data of data subjects other than its users.

As for the information related to other persons, the policy states:

You may share the names, numbers and email addresses contained in Your device’s address book (“Contact Information”) with Truecaller for the purpose described below under Enhanced Search. If you provide us with personal information about someone else, you confirm that they are aware that You have provided their data and that they consent to our processing of their data according to our Privacy Policy.

This statement is, in my very modest opinion, absolutely ludicrous. Most people who have installed the service are not even aware of how it works, let alone that an obligation to notify an entire contact list and obtain individual consent rests upon them. In this context, it is paramount to take into consideration that, in the vast majority of cases, from the users’ viewpoint, the data at stake is collected and processed merely for personal purposes. Moreover, consent is usually defined as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

This basically amounts to saying that non-users do not have any control over their personal data, as the sharing of their name and phone number depends on how many of their friends actually install Truecaller.

It is evident that Truecaller has no legal basis to process personal data of non-users of its service. These non-users are data subjects who most certainly have not unambiguously given their consent; the processing is not necessary for the performance of a contract to which the data subject is party; and no legal obligation, vital interest or task carried out in the public interest is at stake.

In this regard, the possibility given to those who do not wish to have their names and phone numbers made available through the enhanced search or name search functionalities to exclude themselves from further queries by notifying Truecaller is not even realistic. To begin with, one is required to be aware of the existence of the service, then to actively check whether one’s number is in its directory, and finally to ask Truecaller to be unlisted from its database. Moreover, considering all my failed attempts, I am not sure whether this option is only available to users, or whether it simply does not prevent a number from being added to the service’s database again once another user who has it in his address book allows such access.

Last, but not least, and this should not really come as a surprise, Truecaller has already been hacked in the past by a Syrian hacking group, which resulted in unauthorized access to some (personal) data of users and non-users. This surely highlights the importance of carefully choosing the services to which we entrust our own – and others’ – personal data.

All things considered, Truecaller is an obvious practical example of the saying: ‘If You’re Not Paying For It, You Are the Product Being Sold’.

Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook

It’s just a picture of them playing, they don’t mind. Like!

Similarly to courts in other EU Member States, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision I addressed here, a court considered that a person in possession of a video containing intimate images of an ex-partner is under an obligation to safeguard it properly, and that failure to do so constitutes a legally relevant omission.

That said, there is one particular decision, issued by a Portuguese appeal court last year, which I failed to address in time and which concerns the image rights of children in the online context. Considering the amount of pictures that appear on my Facebook wall every time I log into my account, and the concerns expressed in the upcoming GDPR regarding the collection and processing of data relating to minors under sixteen, I would like to address it today.

The court confirmed the decision of the court of first instance, issued in proceedings regulating the parental responsibilities of each parent, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It sternly stated that children are not things or objects belonging to their parents.

One would expect that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering their specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news item about this court decision was widely shared among my Facebook friends, most of them with children of their own. The same ones who happily share pictures of their own kids. And who have not shared any less information involving their children since then.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly flattered, or of which they are embarrassed, and are quick to demand its removal, yet never wonder whether it is ethical to publish a picture of, or information about, someone who is not able to give his or her consent. Shouldn’t we worry about what type of information children – our own, our friends’, our little cousins or nephews – will want to see about themselves online in the future?

Every time I log into my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtime in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that a child had a serious illness, which fortunately has since been overcome, but which received full graphic and descriptive coverage on Facebook as it unfolded.

I have seen pictures in which my friends’ children appear almost naked or in unflattering poses, or in which it is clearly identifiable where they go to school or spend their holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, the parentage is quite well established. I always think that, in the long run, this would permit the building of an extensive and detailed profile by anybody who has access to such data. And, if you have read any of my other posts, you know by now that I am not exactly referring to Facebook friends.

More worryingly, these details about children’s lives are often displayed on the parents’ profiles without any privacy settings being applied, perhaps out of simple distraction or unawareness. Consequently, anybody with a Facebook account can look up the person in question and access all the information contained in that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is one’s greatest love, and it is only human to want to proudly share his or her growth, development and achievements with relatives and friends. It has always been done, and now it is almost effortless and immediate, just a click away. In this regard, by forbidding the sharing of any picture or information regarding the child, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that good sense and responsibility are particularly required in the online context, considering how easy it is to lose control over the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, pictures and information are no longer within our reach, as they can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure full anonymity online, the amount of information that is published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing pictures and information about their children online makes me wonder how companies such as Facebook, and other platforms focused on user-generated content, which process data at the direction of the user and, consequently, inevitably end up collecting and processing personal data regarding children below the age of sixteen, may be expected to comply with the new requirements of the GDPR in that regard.

If such processing is only lawful if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court understood it, parents are not entitled to dispose of their children’s image on social media, a funny conundrum arises. If parents cannot publish such information, they cannot authorise its publication either and, consequently, children and teenagers will not be able to rely on their parents’ authorisation to use social media.

The dangers of certain apps or how to put your whole life out there

Finding love, one data breach at a time.

One of my former flatmates was actively looking for love online. Besides having registered on several websites to that end, I remember he also had several mobile applications (apps) installed on his smartphone. I think he actually subscribed to pretty much anything that could even remotely help him find love, but singled out Tinder as his main dating tool.

Another of my closest friends is a jogging addict – shout out to P. He has installed on his smartphone various apps which tell him how many steps he has taken on a particular day and the route undertaken, as well as, via an external device, his heart rate, all of which enable him to monitor his progress.

What do my two friends have in common? Well, they both use mobile apps to cover very specific needs. And in this regard they can relate to almost anybody else.

Indeed, it is difficult to escape apps nowadays. Now that everyone (except for my aunt) seems to have a smartphone, apps are increasingly popular for the most diverse purposes. For my former flatmate it was all about dating. For my friend, it is about keeping track of his running progress. But their potential does not end there. Receiving and sending messages, using maps and navigation services, getting news updates, playing games, dating or just checking the weather… You name a necessity or convenience, and there is an app for it.

On the downside, using apps usually requires providing more or less personal information for the specific intended purpose. This has become so usual that most consider it a natural step, without giving it further thought.

In fact, a detail that most seem to be unaware of is that apps allow for the massive collection and processing of personal – and sometimes sensitive – data. The nature and the amount of personal data accessed and collected raise serious privacy and data protection concerns.

For instance, in the case of my abovementioned flatmate, who was registered on several similar apps, and considering that he neither created fake accounts nor provided false information, each of them collected at least his name, age, gender, profession, location (making it possible to infer where he worked, lived and spent time), sexual orientation, what he looks like (if he added a picture to his profiles), how frequently he accessed the app and, eventually, how successful his online dating life was.

In fact, in Tinder’s own words:

Information we collect about you

In General. We may collect information that can identify you such as your name and email address (“personal information”) and other information that does not identify you. We may collect this information through a website or a mobile application. By using the Service, you are authorizing us to gather, parse and retain data related to the provision of the Service. When you provide personal information through our Service, the information may be sent to servers located in the United States and countries around the world.
Information you provide. In order to register as a user with Tinder, you will be asked to sign in using your Facebook login. If you do so, you authorize us to access certain Facebook account information, such as your public Facebook profile (consistent with your privacy settings in Facebook), your email address, interests, likes, gender, birthday, education history, relationship interests, current city, photos, personal description, friend list, and information about and photos of your Facebook friends who might be common Facebook friends with other Tinder users. You will also be asked to allow Tinder to collect your location information from your device when you download or use the Service. In addition, we may collect and store any personal information you provide while using our Service or in some other manner. This may include identifying information, such as your name, address, email address and telephone number, and, if you transact business with us, financial information. You may also provide us photos, a personal description and information about your gender and preferences for recommendations, such as search distance, age range and gender. If you chat with other Tinder users, you provide us the content of your chats, and if you contact us with a customer service or other inquiry, you provide us with the content of that communication.

Considering that Tinder makes available a catalogue of profiles of geographically nearby members, through which one can swipe right or left according to one’s personal preferences, with adequate analysis it is even possible to determine what type of person (by age, body type or hair colour) a user finds most attractive.
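
As a purely hypothetical illustration of that point, the short Python sketch below aggregates logged swipe events by profile attribute to estimate which traits a user ‘likes’ most often. The data, attribute names and field layout are invented and do not describe any real app’s data model; they merely show how easily such preferences can be inferred from a swipe log.

# Hypothetical swipe log: each entry is (direction, profile attributes).
from collections import Counter

swipes = [
    ("right", {"age_band": "25-29", "hair": "dark"}),
    ("left",  {"age_band": "35-39", "hair": "blond"}),
    ("right", {"age_band": "25-29", "hair": "blond"}),
    ("right", {"age_band": "25-29", "hair": "dark"}),
    ("left",  {"age_band": "30-34", "hair": "dark"}),
]

likes, seen = Counter(), Counter()
for direction, profile in swipes:
    for attribute, value in profile.items():
        seen[(attribute, value)] += 1
        if direction == "right":
            likes[(attribute, value)] += 1

# Like-rate per attribute value: a crude preference profile of the user.
for (attribute, value), total in seen.items():
    print(f"{attribute}={value}: liked {likes[(attribute, value)]}/{total} times")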

And because Tinder actually depends on having a Facebook profile, I guess that Facebook also becomes aware of the general climate of your romantic life, particularly if you start adding and interacting with your new acquaintances on that platform and, why not, changing your relationship status accordingly.

In the specific case of Tinder, as it mandatorily requires a certain amount of Facebook information in order to function properly, these correlations are much easier to make.

That said, a sweep conducted by 26 privacy and data protection authorities from around the world on more than 1,000 apps of all kinds (Apple and Android apps, free and paid apps, public sector and private sector apps, ranging from games and health/fitness apps to news and banking apps) made it possible to outline the main concerns at stake.

One of the issues specifically pointed out concerned the information provided to users/data subjects, as it was concluded that many apps did not have a privacy policy. In those cases, users were not properly informed – and therefore not aware – of the collection, use or further disclosure of the personal information provided.

It is a fact that most of us do not read the terms and conditions made available. And most of us will subscribe to pretty much any service we are willing to use, regardless of what those terms and conditions actually state.

Nevertheless, a relevant issue in this regard is the amount of data collected, which is often excessive considering the purposes for which the information is provided, or the sneaky way in which it is collected. For instance, even gaming apps such as solitaire, which seem far more innocuous, hide unsuspected risks, as many contain code enabling access to the user’s information or contact list and even allow the user’s browsing activities to be tracked.

This is particularly worrisome when sensitive data, such as health information, is at stake. This kind of data is easily collected through fitness-oriented apps, which are quite in vogue nowadays. Besides any additional personally identifiable information which you will eventually provide upon creating an account, the elements almost certainly collected include: name or user name, date of birth, current weight, target weight, height, gender, workout frequency, workout settings and duration, and heart rate. Also, if you train outdoors, geo-location will most certainly make it possible to assess the whereabouts of your exercise, from the departure to the arrival points, which will most probably coincide with your home address or its vicinity.

And, if you are particularly proud of your running or cycling results and willing to show all your friends what good shape you are in, chances are you can connect the app to your Facebook account and display that information on your profile, subsequently enabling Facebook to access the same logged information.

And things actually get worse considering that, as demonstrated by recent data breaches, the information provided by users is not always even adequately protected.

For instance, if I remember correctly, due to a security vulnerability in Tinder – apparently already fixed – there was a time when location data, such as the longitude and latitude coordinates of users, was easily accessible. That is quite creepy and dangerous, as it would facilitate stalking and harassment in real life, which is just as bad as when it happens online.
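
To show why exposed coordinates are more than an abstract risk, here is a small, self-contained Python example using the standard haversine formula to check how close a leaked position is to a known address. The coordinates are made up for illustration; with a handful of such readings over time, someone’s home or workplace is easily pinned down.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlambda = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlambda / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented example: a leaked user position versus a known office address in Lisbon.
leaked_position = (38.7223, -9.1393)
known_address = (38.7267, -9.1500)

distance = haversine_km(*leaked_position, *known_address)
print(f"Leaked position is {distance:.2f} km from the known address")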

Anyway, it is very easy to forget the amount of data we provide apps with. However, the correlations that can be made, the conclusions that can be inferred and the patterns that can be detected amount to sharing more information than we first realise, and enable a far more detailed profile of ourselves than most of us would feel comfortable with others having.

Practical difficulties of the GDPR – the ‘right to be forgotten’ applied to online social platforms

Of all the legal challenges that the GDPR will present for businesses in general, I would like to address in this post the issues raised by its implementation with regard to social network platforms, which are quite popular nowadays.

Article 17 of the GDPR establishes the ‘right to erasure’, or the right to be forgotten as it has come to be referred to, which provides data subjects with the right to require from data controllers the erasure of their personal data held by the latter, and imposes on the controller the consequent obligation to abide by that request, without undue delay, when certain conditions are fulfilled.

Considering that infringing the ‘right to erasure’ may lead to significant economic sanctions, there is a risk that social platforms will be tempted to adopt a preventive approach and comply with all deletion requests regardless of their validity, thus erasing content on unfounded grounds. This is particularly worrisome because it may directly lead to the suppression of free speech online. In any case, online businesses are not and should not be deemed competent to make any assessment of the legitimacy of such claims, a point that I have already tried to make here.

While it seems that a notice and take-down mechanism is envisaged, without much detail being provided as to its practical enforceability, a particular issue in this context relates to the entities upon which this obligation rests. Indeed, the implementation of the ‘right to be forgotten’ can only be required from those who qualify as data controllers.

As data controllers are defined as the entities that determine the purposes and means of the processing of personal data, it is not clear whether providers of online social platforms can be defined as such.

Considering the well-known Google Spain case, it is at least certain that search engines are deemed to be controllers in this regard. As you may well remember, the CJEU ruled that individuals, provided that certain prerequisites are met, have the right to require search engines, such as Google, to remove certain results about them subsequently presented in response to a search based on the person’s name.

That said, it is questionable whether hosting platforms and online social networks focused on user-generated content, as is the case of Facebook, qualify as such, considering that the data processed depends on the actions of the users who upload the relevant information and that, therefore, the users themselves arguably qualify as controllers. The language of Recital 15 of the GDPR about social networking is inconclusive in this regard.

The abovementioned Recital provides as follows:

This Regulation should not apply to processing of personal data by a natural person in the course of a purely personal or household activity and thus without a connection with a professional or commercial activity. Personal and household activities could include correspondence and the holding of addresses, or social networking and on-line activity undertaken within the context of such personal and household activities. However, this Regulation should apply to controllers or processors which provide the means for processing personal data for such personal or household activities.

This is not an irrelevant issue, though. In practice, it will determine whether someone can require, and effectively compel, Twitter or Facebook to delete information about him/her even though it was provided by others.

And considering that any legal instrument is only as effective in practice as it is capable of being enforced, the definition of who is covered by it and ought to comply with it is unquestionably a paramount element.

As I remember reading elsewhere – I fail to remember where, unfortunately – it has been asked whether the intermediary liability regime foreseen in the e-Commerce Directive would be an appropriate mechanism for the enforcement of the right to erasure/right to be forgotten.

Articles 12-14 of the e-Commerce Directive indeed exempt information society services from liability under specific circumstances, namely when they act as a ‘mere conduit’ of information, engage in ‘caching’ (the automatic, intermediate and temporary storage of information) or provide ‘hosting’ (i.e., store information at the request of a recipient of the service).

Article 15 establishes that online intermediaries are under no general obligation to monitor the information they transmit or store, or to actively seek facts indicating illegal activity on their websites.

Taking into account the liability regime for online intermediaries foreseen in the e-Commerce Directive (Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market), a distinction will perhaps apply according to the level of ‘activity’ or ‘passivity’ of the platforms in the management of the content provided by their users.

However, this regime does not fully clarify the extent of the erasure obligation. Will it be proportionate to the degree of ‘activity’ or ‘passivity’ of the service provider with regard to the content?

Moreover, it is not clear how the two regimes can be applied simultaneously. While the GDPR does not refer to any notice and take-down mechanism and expressly states that its application is without prejudice to the e-Commerce Directive’s liability rules, the fact is that the GDPR only imposes the ‘duty of erasure’ on controllers. As the intermediary liability rules concern accountability for the activities of third parties, the two regimes are not easy to reconcile.

All things considered, the much-awaited GDPR has not even entered into force yet, but I already cannot wait for the next chapters.

When the information asked from job applicants is simply too much…

We also need your credit card info, body size and a blood sample just for the application. (Image by Kathryn Decker under the Creative Commons Attribution 2.0 Generic licence)

I am currently looking for new professional opportunities and, in my quest, I have come across some very peculiar data collection practices in the context of recruitment processes.

From my full name, my ID number and my social security number to my complete address, all as mandatory information for applying for a certain job or filing a spontaneous application… I have pretty much been asked for everything. At this point, I would no longer be surprised to be asked for my bank account, my blood type or my voter number, which would be equally useless information for such a purpose.

And when this comes from big companies which actually ought to know better and have data protection policies implemented, it is all the more astonishing!

Perhaps this comes as a surprise to some, as I am prone to conclude from my recent experiences, but when personal data is collected as part of a recruitment process, data protection rules do apply.

With regard to the balance which ought to be struck between a potential employer’s need for information in order to select among applicants and the applicants’ right to respect for their private life, I think it is pretty straightforward that requiring the abovementioned elements is pointless and disproportionate in a recruitment process.

In fact, it amounts to collecting from job applicants information that is only necessary if a specific applicant is eventually appointed as an employee. Which only happens at a later stage.

Besides it being annoying to have to provide pointless personal information to a recruiter from whom one might never hear again, collecting irrelevant or excessive information is actually a breach of data protection rules.

Taking this into consideration, if you collect such unnecessary information in your recruitment processes and have received my application, you should seriously consider calling me for an interview. :o)

 


The ‘Safe Harbor’ Decision ruled invalid by the CJEU

Safe harbor?!? Not anymore.

Unfortunately, I hadn’t had the time to address the ruling of the CJEU issue last October, by which the ‘Safe Harbour’ scheme, enabling transatlantic transfers of data from the EU to the US, was deemed invalid.

However, due to its importance, and because this blog is primarily intended to be about privacy and data protection, it would be a shame to finish the year without addressing it.

As you may be well aware, article 25(1) of Directive 95/46 establishes that the transfer of personal data from an EU Member State to a third country may occur provided that the latter ensures an adequate level of protection. According to article 25(6) of the abovementioned Directive, the EU Commission may find that a third country ensures an adequate level of protection (i.e., a level of protection of fundamental rights essentially equivalent to that guaranteed within the EU under the directive read in the light of the Charter of Fundamental Rights) by reason of its domestic law or of its international commitments.

On that basis, the EU Commission adopted Decision 2000/520, by which it concluded that the “Safe Harbour Principles” issued by the US Department of Commerce ensure an adequate level of protection for personal data transferred from the EU to companies established in the US.

Accordingly, under this framework, Facebook has been transferring the data provided by its users residing in the EU from its subsidiary in Ireland to its servers located in the US, for further processing.

These transfers and, unavoidably, the Decision itself were challenged in the reference to the CJEU (judgment in Case C-362/14) following the complaint filed by Max Schrems, a Facebook user, before the Irish DPA and subsequently before the Irish High Court. The main argument was that, considering the access to electronic communications conducted by its public authorities, the US did not ensure adequate protection of the personal data thus transferred.

According to the AG’s opinion, “the access enjoyed by the United States intelligence services to the transferred data constitutes an interference with the right to respect for private life and the right to protection of personal data”.

While acknowledging that a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU, the CJEU considered that the Decision failed to comply with the requirements established in Article 25(6) of the Directive and that the Commission did not make a proper finding of adequacy, but merely examined the safe harbour scheme.

The facts that the scheme’s scope is restricted to adhering US companies, thus excluding public authorities, and that national security, public interest and law enforcement requirements, by which US companies are also bound, prevail over the safe harbour principles were deemed particularly decisive in the assessment of the scheme’s validity.

In practice, this enables the US authorities to access the personal data transferred from the EU to the US and process it in ways incompatible with the purposes for which it was transferred, beyond what is strictly necessary and proportionate to the protection of national security.

As a result, the Court concluded that legislation enabling public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life.

The Court further stated that the Decision disregards the existence of such interference with fundamental rights, and that the lack of any provision for limitations and effective legal remedies violates the fundamental right to effective judicial protection.

Upon issuance of this ruling, the Art29WP met and concluded that data transfers from the EU to the US could no longer be legitimized by the ‘Safe Harbor’ decision and, if occurring, would be unlawful.

While its practical implications remain unclear, the ruling undoubtedly means that companies relying on the ‘Safe Harbor’ framework for the transfer of personal data from the EU to the US need to rely, instead, on another legal basis.

In this regard, considering that not all Member States accept the consent of the data subject or an adequacy self-assessment as a legitimizing legal ground for such cross-border transfers, Model Contractual Clauses incorporated into contracts and Binding Corporate Rules (BCR) for intragroup transfers seem to be the most reliable alternatives in certain cases.

Restrictions on data transfers are obviously also foreseen in the GDPR, which, besides BCRs, Standard Contracts and adequacy decisions, includes new data transfer mechanisms such as certification schemes.

You can find the complete version of the ruling here.

Opinion of the EDPS on the dissemination and use of intrusive surveillance technologies

We need some more surveillance here! (Image by Quevaal under the Creative Commons Attribution-Share Alike 3.0 Unported licence)

In a recently published opinion, the EDPS addressed its concerns regarding the dissemination and use of intrusive surveillance technologies, which are described as aiming “to remotely infiltrate IT systems (usually over the Internet) in order to covertly monitor the activities of those IT systems and over time, send data back to the user of the surveillance tools.”

The opinion specifically refers to surveillance tools which are designed, marketed and sold for mass surveillance, intrusion and exfiltration.

The data accessed and collected through intrusive surveillance tools may contain “any data processed by the target such as browsing data from any browser used on that target, e-mails sent and received, files residing on the hard drives accessible to the target (files located either on the target itself or on other IT systems to which the target has access), all logs recorded, all keys pressed on the keyboard (this would allow collecting passwords), screenshots of what the user of the target sees, capture the video and audio feeds of webcams and microphones connected to the target, etc.”

These tools may therefore readily be used for human rights violations, such as censorship, surveillance, unauthorised access to devices, jamming, interception or tracking of individuals.

This is particularly worrisome considering that software designed for intrusive surveillance is known to have also been sold to governments conducting hostile surveillance of citizens, activists and journalists.

As such tools are also used by law enforcement bodies and intelligence agencies, this is a timely document, considering the security concerns driving the legislative amendments planned in several Member States. Indeed, as pointed out by the EDPS, although cybersecurity must not serve to justify a disproportionate impact on privacy and the processing of personal data, intelligence services and the police may indeed adopt intrusive technological measures (including intrusive surveillance technology) in order to make their investigations better targeted and more effective.

It is evident that the principles of necessity and proportionality should dictate the use of intrusion and surveillance technologies. However, it remains to be assessed where to draw the line between what is proportionate and necessary and what is disproportionate and unnecessary. That is the core of the problem.

Regarding the export of surveillance and interception technologies to third countries, the EDPS considered that, despite not addressing all the questions concerning the dissemination and use of surveillance technologies, “the EU dual use regime fails to fully address the issue of export of all ICT technologies to a country where all appropriate safeguards regarding the use of this technology are not provided. Therefore, the current revision of the ‘dual-use’ regulation should be seen as an opportunity to limit the export of potentially harmful devices, services and information to third countries presenting a risk for human rights.”

As this document relates to the EU cybersecurity strategy and the data protection framework, I would recommend reading it to anyone interested in those questions. You can find the document here.


The General Data Protection Regulation – Start the countdown!

Start the countdown. (Image by Julian Lim under the Creative Commons Attribution 2.0 Generic licence)

After years of lengthy drafting and negotiating, the European Commission, the European Parliament and the EU Council have, following the final negotiations between the three institutions (the so-called “trilogue negotiations”), at last reached a political agreement on the data protection reform package, which includes the General Data Protection Regulation (“GDPR”) and the Data Protection Directive for the police and criminal justice sector. The Civil Liberties (LIBE) Committee of the European Parliament also approved the text on 17 December.

Formal adoption by the European Parliament and the EU Council is still required, though, and is currently foreseen to take place at the beginning of 2016.

At this pace, and optimistically, the Regulation will finally be published sometime in the middle of 2016.

So let the countdown begin…
