
Bits and pieces of issues regarding the happy sharing of your children’s lives on Facebook


It’s just a picture of them playing, they don’t mind. Like!

Similarly to what is happening in other EU Member States' courts, Portuguese courts have been struggling with the application of traditional legal concepts to the online context. Just recently, in a decision which I addressed here, a court held that a person in possession of a video containing intimate images of an ex-partner is under an obligation to guard it properly, and that the failure to take adequate safeguards is punishable as a relevant omission.

That said, there is one particular decision, issued by a Portuguese appellate court last year, which I failed to address in time and which concerns the very specific image rights of children in the online context. Considering the amount of pictures that appear on my Facebook wall every time I log in to my account, and the concerns expressed by the upcoming GDPR regarding the collection and processing of data of minors under sixteen, I would like to address it today.

The appellate court confirmed the decision of the court of first instance, issued within proceedings regulating the parental responsibilities of each parent, which forbade a separated couple from divulging on social media platforms pictures or information identifying their twelve-year-old daughter. It sternly stated that children are not things or objects belonging to their parents.

One would expect that a court decision would not be necessary to reach the conclusion that children have the right to have their privacy and image respected and safeguarded, even from acts practised by their parents. In fact, one would hope that, in the online context, and considering children's specific vulnerability and the particular dangers facilitated by the medium of the Internet, their protection would be ensured primarily by their parents.

Ironically, the link to the news reporting this court decision was widely shared among my Facebook friends, most of them with children of their own. The same ones who happily share pictures of their own kids. And who have not shared any less information involving their children since then.

It is funny how some people get offended or upset when someone posts online a picture in which they are not particularly favoured or by which they are embarrassed, and are quick to demand its removal, yet never wonder whether it is ethical to publish a picture of or information about someone who is not able to give consent. Shouldn't we worry about what type of information children – our own, our friends', our little cousins or nephews – would want to see about themselves online in the future?

Every time I log in to my Facebook account, there is an array of pictures of birthday parties, afternoons by the sea, first days at school, promenades in the park, playtimes in the swimming pool, displays of leisure activities such as playing musical instruments or practising a sport… In one particular case, it was divulged that a child had a serious illness – fortunately since overcome – which received full graphic and descriptive coverage on Facebook as it unfolded.

I have seen pictures where my friends' children appear almost naked or in unflattering poses, or where it is clearly identifiable where they go to school or spend holidays. Many identify their children by name, age, school attended, extracurricular activities… In any case, their parenthood is quite well established. I always think that, in the long run, this would permit the building of an extended and detailed profile by anybody who has access to such data. And, if you have read any of my other posts, you know by now that I am not exactly referring to Facebook friends.

More worryingly, these details about the children's lives are often displayed on the parents' online profiles – perhaps out of simple distraction or unawareness – without any privacy settings being implemented. Consequently, anybody with a Facebook account can look up the person in question and access all the information contained on that profile.

I do not want to sound like a killjoy, a prude or a moralist. I get it, seriously, I do. A child is the biggest love, and it is only human to want to proudly share their growth, development and achievements with relatives and friends. It has always been done, and now it is almost effortless and immediate, just a click away. In this regard, by forbidding the sharing of any picture or any information regarding the child, the abovementioned decision seems excessive and unrealistic.

Nevertheless, one should not forget that good sense and responsibility are particularly required in the online context, considering how easy it actually is to lose control of the purposes to which published information is put beyond the initial ones. As many seem to forget, once uploaded to an online platform, information is no longer within our reach, as it can easily be copied or downloaded by others.

That said, while it is certainly impossible to secure anonymity online, the amount of information that is published should be controlled for security, privacy and data protection purposes.

Anyway, this common practice of parents sharing online pictures and information regarding their children makes me wonder how companies such as Facebook, and other platforms focused on user-generated content – which process data at the direction of the user and, consequently, unintentionally collect and process personal data of children below the age of sixteen – may be asked to comply with the new requirements of the GDPR in that regard.

If such processing is to be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility, and if, as the Portuguese court has understood it, parents are not entitled to dispose of their children's image on social media, a funny conundrum is generated. If the parents cannot publish such information, they will not be able to authorise its publication either and, consequently, children and teenagers will not be able to rely on their parents' authorisation to use social media.
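To make the conundrum concrete, a platform implementing the GDPR's parental-consent rule might gate sign-ups along the following lines. This is a minimal sketch under my own assumptions: the sixteen-year threshold follows the draft GDPR, and all names and the flow are hypothetical, not any platform's actual implementation.

```python
from dataclasses import dataclass

# Draft GDPR threshold: below sixteen, consent must be given or
# authorised by the holder of parental responsibility (Member States
# may set a lower age).
CONSENT_AGE = 16

@dataclass
class SignupRequest:
    age: int
    has_parental_authorisation: bool

def consent_is_valid(req: SignupRequest) -> bool:
    """Return True if the platform may rely on consent as a legal basis."""
    if req.age >= CONSENT_AGE:
        return True  # the data subject can consent on their own
    # Below the threshold, only parental authorisation remains -- and
    # this is the conundrum: if parents may not dispose of the child's
    # image online, the validity of that authorisation is itself doubtful.
    return req.has_parental_authorisation

print(consent_is_valid(SignupRequest(age=17, has_parental_authorisation=False)))  # True
print(consent_is_valid(SignupRequest(age=12, has_parental_authorisation=False)))  # False
```

The sketch makes the legal point visible in the last branch: if parental authorisation is itself legally questionable, there is no remaining path to lawful consent-based processing for under-sixteens.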

Practical difficulties of the GDPR – the ‘right to be forgotten’ applied to online social platforms

Of all the legal challenges that the GDPR will present for businesses in general, I would like to address in this post the issues raised by its implementation with regard to social network platforms, which are quite popular nowadays.

Article 17 of the GDPR establishes the 'right to erasure' – or the right to be forgotten, as it has come to be referred to – which provides data subjects with the right to require from data controllers the erasure of their personal data, and the controller's consequent obligation to abide by that request without undue delay when certain conditions are fulfilled.

Considering that infringing the 'right to erasure' may lead to significant economic sanctions, there is a risk that social platforms will be tempted to adopt a preventive approach by complying with all deletion requests regardless of their validity, thus erasing content on unfounded grounds. This is particularly worrisome because it may directly lead to the suppression of free speech online. Indeed, online businesses are not and should not be deemed competent to assess the legitimacy of such claims, a point that I have already tried to make here.
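The over-compliance risk can be made concrete: rather than deleting on receipt, a takedown pipeline could record the claimed ground and route unsubstantiated requests to human review. This is a hypothetical sketch – the grounds loosely paraphrase Article 17(1) and the workflow is my own illustration, not any platform's actual process.

```python
from enum import Enum
from typing import Optional

class ErasureGround(Enum):
    # Grounds loosely paraphrasing GDPR Article 17(1)
    NO_LONGER_NECESSARY = "data no longer necessary for the original purpose"
    CONSENT_WITHDRAWN = "consent withdrawn and no other legal basis"
    UNLAWFUL_PROCESSING = "data processed unlawfully"
    OBJECTION = "data subject objects and no overriding grounds exist"

def handle_request(ground: Optional[ErasureGround], substantiated: bool) -> str:
    """Triage an erasure request instead of deleting on receipt."""
    if ground is None:
        return "reject: no Article 17 ground claimed"
    if not substantiated:
        # Routing doubtful claims to review avoids the over-removal
        # (and free-speech) problem discussed above.
        return "escalate: human review required"
    return "erase without undue delay"

print(handle_request(None, False))
print(handle_request(ErasureGround.CONSENT_WITHDRAWN, True))
```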

While it seems that a notice and take down mechanism is envisaged, without much detail being provided as to its practical enforceability, a particular issue in this context is the question of the entities upon which such an obligation falls. Indeed, the implementation of the 'right to be forgotten' can only be required from those who qualify as data controllers.

As data controllers are defined as the entities which determine the purposes and means of the processing of personal data, it is not clear whether online social platform providers can be defined as such.

Considering the well-known Google Spain case, it is at least certain that search engines are deemed controllers in this regard. As you may well remember, the CJEU ruled that individuals, provided certain prerequisites are met, have the right to require search engines, such as Google, to remove certain results about them subsequently presented upon a search based on the person's name.

That said, it is questionable whether hosting platforms and online social networks focused on user-generated content, as is the case with Facebook, qualify as such, considering that the data processed depends on the actions of the users who upload the relevant information. On that view, the users themselves qualify as controllers. The language of Recital 15 of the GDPR about social networking is inconclusive in this regard.

The abovementioned Recital provides as follows:

This Regulation should not apply to processing of personal data by a natural person in the course of a purely personal or household activity and thus without a connection with a professional or commercial activity. Personal and household activities could include correspondence and the holding of addresses, or social networking and on-line activity undertaken within the context of such personal and household activities. However, this Regulation should apply to controllers or processors which provide the means for processing personal data for such personal or household activities.

This is not an irrelevant issue, though. In practice, it amounts to enabling someone to require, and effectively compel, Twitter or Facebook to delete information about him or her even though it was provided by others.

And considering that any legal instrument is only as efficient in practice as it is capable of being enforced, the definition of who is covered and ought to comply with it is unquestionably a paramount element.

As I remember reading elsewhere – I fail to remember where, unfortunately – it has been wondered whether the intermediary liability regime foreseen in the e-Commerce Directive would be an appropriate mechanism for the enforcement of the right to erasure/right to be forgotten.

Articles 12-14 of the e-Commerce Directive indeed exempt information society services from liability under specific circumstances, namely when they act as a ‘mere conduit’ of information, or engage in ‘caching’ (the automatic, intermediate and temporary storage of information), or when ‘hosting’ (i.e., storing information at the request of a recipient of the service).

Article 15 establishes that online intermediaries are under no general duty to monitor the information they transmit or store, or to actively seek facts indicating illegal activity on their websites.

Taking into account the general liability regime for online intermediaries foreseen in the e-Commerce Directive (Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market), a particular distinction will perhaps apply according to the level of 'activity' or 'passivity' of the platforms in the management of the content provided by their users.

However, this regime does not fully clarify the extent of the erasure obligation. Will it be proportionate to the degree of 'activity' or 'passivity' of the service provider with regard to the content?

Moreover, it is not clear how both regimes can be applied simultaneously. While the GDPR does not refer to any notice and take down mechanism and expressly states that it applies without prejudice to the e-Commerce Directive's liability rules, the fact is that the GDPR imposes the 'duty of erasure' only on controllers. As the intermediary liability rules instead concern accountability for the activities of third parties, reconciling the two is not easy.

All things considered, the much-awaited GDPR has not even entered into force yet, but I already cannot wait for the next chapters.

When the information asked from job applicants is simply too much…


We also need your credit card info, body size and a blood sample just for the application. (Image copyright by Kathryn Decker under the Creative Commons Attribution 2.0 Generic licence.)

I am currently looking for new professional opportunities and, in my quest, I have faced some very peculiar data collection policies in the context of some recruitment processes.

From being required to provide my full name, my ID number, my social security number and my complete address as mandatory information in order to apply for a certain job or to file a spontaneous application… I have pretty much been asked everything. At this point, I would no longer be surprised to be asked for my bank account, my blood type or my electoral number, all equally useless for such a purpose.

And when this comes from big companies which actually ought to know better and have data protection policies implemented, it is all the more astonishing!

Perhaps this may come as a surprise to some – as my recent experiences lead me to conclude – but when personal data is collected as part of a recruitment process, data protection rules do apply.

With regard to the balance which ought to be struck between a potential employer's need for information in order to select among applicants and the applicants' right to respect for their private life, I think it is pretty straightforward that requiring the abovementioned elements is pointless and disproportionate in a recruitment process.

In fact, it amounts to collecting from job applicants information that only becomes necessary if a specific applicant is eventually appointed as an employee. Which only happens at a later stage.

Besides it being annoying to be required to provide pointless personal information to a recruiter from whom one might never hear again, collecting irrelevant or excessive information is actually a breach of data protection rules.
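The data-minimisation point can be illustrated with a simple check that an application form requests only fields arguably necessary at that stage. This is a sketch under my own assumptions – the field catalogues are illustrative, not a legal list.

```python
# Fields arguably needed at each stage (illustrative catalogues only).
APPLICATION_FIELDS = {"name", "email", "cv", "cover_letter"}
HIRING_FIELDS = APPLICATION_FIELDS | {
    "id_number", "social_security_number", "full_address", "bank_account",
}

def excessive_fields(requested: set, stage: str) -> set:
    """Return the requested fields exceeding what the given stage requires."""
    allowed = HIRING_FIELDS if stage == "hiring" else APPLICATION_FIELDS
    return requested - allowed

form = {"name", "email", "cv", "id_number", "social_security_number"}
print(sorted(excessive_fields(form, "application")))
# ['id_number', 'social_security_number'] -- excessive at the application stage
```

The same form would pass at the hiring stage, which is exactly the temporal distinction the data protection rules draw.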

With this in mind, if you collect such unnecessary information in the context of recruitment processes and you have received my application, you should seriously consider calling me for an interview. :o)

 


The ‘Safe Harbor’ Decision ruled invalid by the CJEU


Safe harbor?!? Not anymore.

Unfortunately, I hadn’t had the time to address the ruling of the CJEU issue last October, by which the ‘Safe Harbour’ scheme, enabling transatlantic transfers of data from the EU to the US, was deemed invalid.

However, due to its importance, and because this blog is primarily intended to be about privacy and data protection, it would be shameful to finish the year without addressing the issue.

As you may be well aware, article 25(1) of Directive 95/46 establishes that the transfer of personal data from an EU Member State to a third country may occur provided that the latter ensures an adequate level of protection. According to article 25(6) of the abovementioned Directive, the EU Commission may find that a third country ensures an adequate level of protection (i.e., a level of protection of fundamental rights essentially equivalent to that guaranteed within the EU under the directive read in the light of the Charter of Fundamental Rights) by reason of its domestic law or of its international commitments.

Against this background, the EU Commission adopted Decision 2000/520, by which it concluded that the "Safe Harbour Principles" issued by the US Department of Commerce ensure an adequate level of protection for personal data transferred from the EU to companies established in the US.

Accordingly, under this framework, Facebook has been transferring the data provided by its users residing in the EU from its subsidiary in Ireland to its servers located in the US, for further processing.

These transfers – and, unavoidably, the Decision itself – were challenged by way of a reference to the CJEU (judgment in Case C-362/14) following the complaint filed by Max Schrems, a Facebook user, before the Irish DPA and subsequently before the Irish High Court. The main argument was that, considering the access to electronic communications conducted by its public authorities, the US did not ensure adequate protection of the personal data thus transferred.

According to the AG’s opinion, “the access enjoyed by the United States intelligence services to the transferred data constitutes an interference with the right to respect for private life and the right to protection of personal data”.

Despite considering that a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU, the CJEU found that the decision fails to comply with the requirements established in Article 25(6) of the Directive, and that the Commission did not make a proper finding of adequacy but merely examined the safe harbour scheme.

Particularly decisive in the assessment of the scheme's validity were the facts that its ambit is restricted to adhering US companies, thus excluding public authorities, and that national security, public interest and law enforcement requirements – to which US companies are also bound – prevail over the safe harbour principles.

In practice, this amounts to enabling the US authorities to access the personal data transferred from the EU to the US and to process it in ways incompatible with the purposes for which it was transferred, beyond what is strictly necessary and proportionate to the protection of national security.

As a result, the Court concluded that enabling public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life.

The Court stated that the decision disregards the existence of such negative interference on fundamental rights, and that the lack of provision of limitations and effective legal protections violates the fundamental right to effective judicial protection.

Upon issuance of this ruling, the Art29WP met and concluded that data transfers from the EU to the US could no longer be legitimised by the 'Safe Harbor' decision and, if occurring, would be unlawful.

While its practical implications remain unclear, the ruling undoubtedly means that companies relying on the 'Safe Harbor' framework for the transfer of personal data from the EU to the US need to rely, instead, on another basis.

In this regard, considering that not all Member States accept the consent of the data subject or an adequacy self-assessment as a legitimizing legal ground for such cross-border transfers, Model Contractual Clauses incorporated into contracts and Binding Corporate Rules (BCR) for intragroup transfers seem to be the most reliable alternatives in certain cases.

Restrictions on data transfers are obviously also foreseen in the GDPR, which, besides BCRs, Standard Contracts and adequacy decisions, includes new data transfer mechanisms such as certification schemes.

You can find the complete version of the ruling here.

A spy in your living room: ‘Tu quoque mi’ TV?


How smart are you?

So, it seems that the room we have for our privacy to bloom is getting smaller and smaller. We already knew that being at home did not automatically imply seclusion. Still, nosy neighbours were, for quite a long time, the only enemies of home privacy.

However, thicker walls and darker window blinds no longer protect us from external snooping as, nowadays, the enemy seems to hide in our living room or even bedroom.

Indeed, it seems that when we bought our super duper and very expensive Smart TV, we actually may have brought to our home a very sneaky and effective – although apparently innocent – spy.

As you may (or may not) already know, TVs with Internet connectivity allow for the collection of their users' data, including voice recognition data and viewing habits. Until a few days ago, many people would praise those capabilities, as the voice recognition feature serves our convenience – improving the TV's response to our voice commands – and the collection of data is intended to provide a customised, more comfortable experience. Currently, I seriously doubt that most of us look at our TV screens the same way.

To start with, there was the realisation that usage information, such as our favourite programmes and online behaviour, along with other information never intended or expected to be collected, is in fact collected by LG Smart TVs in order to present targeted ads. And this happens even if the user actually switches off the option of having his data collected to that end. Worse, the data collected even extended to the contents of external USB hard drives.

More recently, the Samsung Smart TV was also put in the spotlight due to its privacy policy. Someone who had attentively read the Samsung Smart TV's user manual shared the following excerpt online:

To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. (…)

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

And people seem to have abruptly woken up to the realisation that this voice recognition feature is not only directed at specific commands in order to allow for a better interaction between a user and the device: it may actually involve the capture and recording of personal and sensitive information from conversations taking place nearby. No need to be a techie to know that this does not amount to performance improvement. This is eavesdropping. And, to make it worse, the data is transferred to a third party.

In the aftermath, Samsung clarified that it does not retain voice data nor sell the audio collected. It further explained that a microphone icon is visible on the screen when voice activation is turned on and that, consequently, no unexpected recording takes place.

Of course you can now be more careful about what you say around your TV. But as users can activate or deactivate this voice recognition feature, my guess is that most will actually prefer to use the old remote control and to keep the TV as dumb as possible. I mean, just the idea of the possibility of private conversations taking place in front of your TV screen being involuntarily recorded is enough motivation.

Also, considering the personal data at stake (relating to an identified or identifiable person), these situations raise very relevant data protection concerns. Can it simply be accepted that the user consented to the Terms and Conditions upon acquiring the TV? Were these very significant terms made clear at any point? It is quite certain that users could not have foreseen, at the time of purchase, that such deep and extensive collection would actually take place. If so, such consent cannot be considered to have been freely given. It suffices to think that the features used for the collection of data are what make the TV smart in the first place and, therefore, the main reason for buying the product.

Moreover, is this collection strictly necessary for the intended service to be provided? When the data at stake includes data from other devices, or speech other than the voice commands, the answer cannot be positive. The transmission of personal data to third parties only makes all this worse, as it is not specified under what conditions data is transmitted or who that third party actually is. Adding to this, since these settings mostly come by default, they are certainly not privacy-friendly and amount to stealthy monitoring. Last but not least, it remains to be seen whether proper data anonymisation/pseudonymisation techniques are effectively put in place.
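On the last point, pseudonymisation would at least decouple transmitted voice data from a stable device identity. Below is a minimal sketch of one common technique, a keyed hash; the approach and all names are my own illustrative assumptions, not Samsung's actual implementation.

```python
import hashlib
import hmac

# Secret kept only by the controller; without it, the third-party
# recipient cannot link the pseudonym back to the device.
PSEUDONYMISATION_KEY = b"controller-side secret"

def pseudonymise_device_id(device_id: str) -> str:
    """Replace a stable device identifier with a keyed HMAC-SHA256 pseudonym."""
    return hmac.new(PSEUDONYMISATION_KEY, device_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical payload sent to the speech-to-text provider.
payload = {
    "device": pseudonymise_device_id("SMART-TV-SN-123456"),
    "audio_command": "volume up",  # only the recognised command, not ambient speech
}
print(len(payload["device"]))  # 64 hex characters instead of the serial number
```

Note that, under the GDPR's logic, such pseudonymised data remains personal data so long as the controller holds the key allowing re-identification.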

Nevertheless, these situations brought back into the spotlight the risks to privacy associated with personal devices in the Internet of Things era. As smart devices become more and more present in our households, we are smoothly losing privacy or, at least, our privacy faces greater risks. In fact, it is quite difficult to live nowadays without these technologies, which undoubtedly make our lives so much more comfortable and easier. It is time for people to realise that all this convenience comes at a cost. And a high one.

Game of drones or the not so playful side of the use of RPAS for recreational purposes


I am watching you. (Image copyright by Don McCullough under the Creative Commons Attribution 2.0 Generic licence.)

If one of the gifts you found underneath the Christmas tree was a drone – the term describes any type of aircraft that is automated and operates without a pilot on board, commonly known as an unmanned aerial vehicle (UAV); of the two types, those which autonomously follow pre-programmed flight routes and those operated through remotely piloted aircraft systems (RPAS), only the latter are currently authorised for use in EU airspace – and it happens to have a camera installed on it, you should prepare yourself to embrace your new status of data controller and face a new set of obligations regarding privacy and safety.

Indeed, whilst drones can be a lot of fun, there are serious considerations at stake which should not be ignored. Although drones were first used for military activities, they are increasingly used across the EU for civilian purposes – commercial, non-commercial and government non-military activities more effectively or safely performed by a machine, such as the monitoring of rail tracks, dams, dykes or power grids. The extensive range of their potential applications, the proliferation of UAVs with a camera, and the collection and subsequent use of data, namely by private individuals for personal and recreational purposes, raise concerns about the impact of these technologies on safety, security, privacy and the protection of personal data.

As a matter of fact, a drone in itself does not imply the collection and processing of any personal data until you attach a camera to it. However, drones are increasingly equipped with high-definition optical cameras and are therefore able to capture and record images of the public space. And while there are no apparent privacy concerns regarding the recording of landscapes, having a drone filming through the sky over your neighbourhood might lead to a very different conclusion. Drones have a high potential for collateral or direct intrusion upon privacy, considering the height at which they operate, which allows them to monitor a vast area and to capture large numbers of people or specific individuals. Although individuals may not always be directly identifiable, their identification may still be possible through the context in which the image is captured or the footage is recorded.

It must be noted that people might not even be aware that they are being filmed, or by whom, and, as a result, cannot take any steps to avoid being captured if such activity is not made public. People cannot be expected to know that the device is equipped with optical imaging and has recording capabilities. Moreover, because the amateur usage of a drone may not be visible, there is a high risk of it being directed to the covert and voyeuristic recording of neighbours' lives, homes and back gardens. How would you feel if a drone was constantly looming near your windows or in your backyard? Indeed, there is no guarantee regarding the legitimacy of the end to be achieved with the use of drones. Notwithstanding the fact that a drone may actually pose a threat to people's personal safety, belongings and property, considering that it may fall, its increasing popularity as a hobby raises the issue of discriminatory targeting, as certain individuals, such as children, young people and women, are particularly vulnerable to an insidious use of RPAS. This is particularly relevant considering that the images or footage are usually intended to be made publicly available, typically on platforms such as Youtube.

Furthermore, the recording may interfere with the privacy of individuals as their whereabouts, home or workplace addresses, activities and relationships are registered. In this context, the use of drones for hobby purposes may have a chilling effect on the use of the public space, leading individuals to adjust their behaviour for fear that their activities are being monitored.

Accordingly, the use of this type of aerial technology is covered by Articles 7 and 8 of the EU Charter of Fundamental Rights, which respectively establish the respect for private life and the protection of personal data. Taking into account the abstract nature of the concept of privacy, the main difficulty will be to define when a violation is at stake.

In addition, there are obviously data protection implications at stake where the drone is capturing personal data. EU data protection rules generally govern the collection, processing and retention of personal data. Directive 95/46/EC and the proposed General Data Protection Regulation apply to the collection, processing and retention of personal data, except where personal data is collected in the course of a purely personal or household activity. Hence, the recreational use of drones is a 'grey area' and stands almost unregulated due to this household exemption.

Nevertheless, due to the risks at stake, both to privacy and to data protection, the extent to which the ‘household‘ exemption applies in the context of a personal and private use must be questioned.

In a recent ruling, the CJEU concluded that the partial monitoring of the public space carried out by CCTV is subjected to the EU Directive 95/46, even if the camera capturing the images is “directed outwards from the private setting of the person processing the data”. As already analysed here, the CJEU considered that the processing of personal data involved did not fall within the ‘household exemption’ to data protection laws because the camera was capable of identifying individuals walking on a public footpath.

As RPAS operations may be quite similar to CCTV – but more intrusive, because drones are mobile, cover a larger territory and collect a vaster amount of information – it is no surprise that they may and should be subjected to the same legal obligations. Following this ruling, these technologies should be considered potentially privacy-invasive. Consequently, private operators of drones in public spaces should be ready to comply with data protection rules.

Of course, the footage needs to contain images of natural persons that are clear enough to lead to identification. Moreover, in my opinion, images captured only collaterally and incidentally should not defeat the household exemption. Otherwise, selfies unwillingly or unknowingly including someone in the background could not be freely displayed on Facebook without complying with data protection rules. For data protection rules to apply, the footage must constitute serious and systematic surveillance of individuals and their activities.

Therefore, where it does not involve disproportionate efforts, information about the activities being undertaken and about the data processing (such as the identity of the data controller, the purposes of the processing, the type of data, the duration of the processing and the rights of data subjects) shall be given to individuals (principle of transparency). Efforts should also be made to minimise the amount of data obtained (data minimisation). In addition, the controller might need to ensure that the personal data collected by the drone camera are anonymised, are used only for the original purpose for which they were collected (purpose limitation), are stored adequately and securely, and are not retained for longer than necessarily required.
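To make the anonymisation point more concrete, here is a minimal sketch of keyed pseudonymisation applied to identifiers extracted from footage metadata. It is purely illustrative: the function name, the secret key and the choice of HMAC-SHA256 are my assumptions, not anything prescribed by the Directive or the GDPR.

```python
import hashlib
import hmac

# Hypothetical example only: a key held solely by the data controller,
# used to replace direct identifiers with unlinkable pseudonyms before
# metadata is stored. Rotate and protect such a key in any real system.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(identifier, key=SECRET_KEY):
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without the key, the pseudonym cannot be linked back to the person,
    which limits the impact of an accidental disclosure of the records.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Usage: store the pseudonym instead of the name alongside the footage.
record = {"subject": "John Doe", "location": "public footpath"}
stored = {**record, "subject": pseudonymise(record["subject"])}
```

Note that keyed pseudonymisation of this kind remains reversible by whoever holds the key, so under EU rules the output is still personal data; genuine anonymisation would require irreversibly severing the link to the individual.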

In this context, individuals having their image captured and their activities recorded by the camera of a drone should be given guarantees regarding consent, proportionality and the exercise of their rights to access, correction and erasure.

That said, the legal rules governing the use of drones obviously differ depending on where in the EU you are geographically located. It is therefore important for individuals intending to operate a drone to get informed and educated about the appropriate use of these devices and about the safety, privacy and data protection issues at stake, in order to avoid unexpected liability.

References

1. Copyright by Don McCullough under the Creative Commons Attribution 2.0 Generic
2. The term drone is used to describe any type of aircraft that is automated and operates without a pilot on board, commonly described as unmanned aerial vehicles (UAV). There are two types of drones: those which autonomously follow pre-programmed flight routes and remotely piloted aircraft systems (RPAS). Only the latter are currently authorised for use in EU airspace.
3. Although drones were first used for military activities, they are increasingly used across the EU for civilian purposes. Civil use usually refers to those commercial, non-commercial and government non-military activities which are more effectively or safely performed by a machine, such as the monitoring of rail tracks, dams, dykes or power grids.

CCTV: household security or how to be a data controller at home

CCTV, walking the thin line of protecting yourself or becoming a data controller.1)Copyright by Nïall Green under the Creative Commons Attribution-Share Alike 1.0 Generic

Having suffered several attacks by persons unknown, in which the windows of the family home were broken on several occasions, Mr Ryneš, a Czech citizen, installed a CCTV camera under the eaves of his home. In a fixed position, the camera recorded the entrance to his home, the public footpath and the entrance to the house opposite. The system allowed only a visual recording, which was stored on a hard disk drive; once the drive reached full capacity, the device would record over the existing recording, erasing the old material. Although the images were not monitored in real time, this video surveillance system made it possible to identify two suspects, who were subsequently prosecuted.

However, despite the happy outcome, the operation of this camera system, installed by an individual on his family home for the purposes of protecting the property, health and life of the owner and his family, raised some questions due to the continuous recording of a public space.

One of the suspects challenged the legality of Mr Ryneš’s recording of the images. The Czech Data Protection Authority (hereafter DPA) considered that this operation infringed data protection rules because the data of persons moving along the street or entering the house opposite were collected without their consent; individuals were not informed of the processing of that personal data, of its extent and purpose, of by whom and by what means the personal data would be processed, or of who would have access to it; and the processing had not been reported to the Office, despite such notification being mandatory.

Mr Ryneš brought an action challenging that decision in court, which was dismissed. The case was appealed to the Czech Supreme Administrative Court, which referred questions to the Court of Justice of the European Union (hereafter CJEU) for a preliminary ruling.

In this context, in its judgment in Case C-212/13, the CJEU addressed the application of the ‘household exception’, for the purposes of Article 3(2) of Directive 95/46/EC, which refers to the data processing carried out by a natural person in the course of a purely personal or household activity.

The CJEU considered that the image of a person recorded by a camera constitutes personal data within the meaning of the Directive 95/46 inasmuch as it makes it possible to identify the person concerned.

Moreover, the Court considered that video surveillance falls within the scope of the above-mentioned Directive in so far as it constitutes automatic processing, i.e. an operation performed upon personal data, such as collection, recording or storage.

Considering that the main goal of this Directive is to guarantee a high level of protection of the fundamental rights and freedoms of natural persons, in particular their right to privacy, as foreseen in Article 7 of the EU Charter of Fundamental Rights, the CJEU recalled that derogations and limitations must be strictly necessary.

Therefore, the Court deemed that the ‘household exception’ must be narrowly construed, applying only where the data processing activity is carried out in a ‘purely’ personal or household context, such as correspondence or the keeping of address books, even if it incidentally concerns the private life of other persons.

In this context, the CJEU concluded as follows:

(…) the second indent of Article 3(2) of Directive 95/46 must be interpreted as meaning that the operation of a camera system, as a result of which a video recording of people is stored on a continuous recording device such as a hard disk drive, installed by an individual on his family home for the purposes of protecting the property, health and life of the home owners, but which also monitors a public space, does not amount to the processing of data in the course of a purely personal or household activity, for the purposes of that provision.

However, Mr Ryneš’s concerns, which motivated the installation of the camera, were not overlooked by the CJEU. Indeed, the Court noted that the Directive itself allows, where appropriate, the legitimate interests pursued by the controller to be taken into account, such as the protection of the property, health and life of his family and himself. This reflection is in line with the Opinion of the Article 29 Working Party in this regard, as security was mentioned as an example of a legitimate interest of the data controller.

This implies that, even if the household exception is not applicable in this very particular case, a CCTV recording activity such as the one in the proceedings may be lawful in the light of Article 7(f) of the Directive. That said, the referring court will now have to take this interpretative guidance into consideration and decide whether the recording and processing at stake were legitimate, for instance in the light of Article 10 of the instrument. It is possible that the Czech court will still find a breach of data protection rules, given that no information regarding the recording was provided to the public (individuals were not informed of the processing of their personal data, of its extent and purpose, of by whom and by what means it would be processed, or of who would have access to it) and that the processing was not reported to the Office.

This is particularly relevant considering that, precisely for security purposes, individuals are equipping their households with CCTV systems which capture public space. Only time will tell how this decision will be applied to individuals in practice. Most certainly, DPAs across the EU will update their recommendations regarding the balancing of the necessity of recording and storing the data to pursue an interest deemed legitimate against the fundamental rights and freedoms of the data subject.

At this point, it is to be expected that householders whose surveillance cameras capture public space will need to ensure that their collection and further use of any footage containing images of identifiable individuals comply with data protection requirements. They will thus, for instance, at least have to inform people of the monitoring and ensure that no footage is illegally retained.
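As an illustration of the retention point, here is a minimal sketch of a purge routine for a household CCTV archive, deleting footage older than a fixed retention window. The directory layout, the function name and the 30-day figure are all illustrative assumptions on my part; no DPA guidance endorsing a particular period is implied.

```python
import os
import time

# Illustrative retention window; the appropriate period depends on the
# purpose of the recording and any applicable DPA guidance.
RETENTION_DAYS = 30

def purge_old_footage(directory, retention_days=RETENTION_DAYS, now=None):
    """Delete files whose age exceeds the retention window.

    Returns the names of the deleted files so the purge can be logged,
    which also helps demonstrate compliance with storage limitation.
    """
    now = time.time() if now is None else now
    cutoff = now - retention_days * 24 * 3600
    deleted = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        # File modification time stands in for the recording timestamp.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
    return deleted
```

Run regularly (for instance from a scheduled task), such a routine ensures that recordings of passers-by are not kept longer than necessary, which is one of the obligations a householder-turned-controller would have to observe.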

References

1. Copyright by Nïall Green under the Creative Commons Attribution-Share Alike 1.0 Generic

The Sony data breach: when fiction meets reality?

You better believe SONY. You have been HACKED!

This is not the first time Sony has suffered a massive cyber attack. Back in 2011, due to vulnerabilities found in its data servers, a hack of its PlayStation online network service enabled the theft of names, addresses and credit card data belonging to 77 million user accounts.

A few days ago, Sony Pictures’ computer systems were hacked again, allegedly by a group of hackers calling themselves Guardians of Peace. As a consequence, a humongous amount of data was leaked to the Internet, including confidential details, such as medical information, salaries, home addresses and social security numbers, regarding 47,000 Sony employees and former employees, including Hollywood stars, as well as contracts, budgets, layoff strategies, scripts for movies not yet in production, full-length unreleased movies and thousands of passwords.

The reason remains unclear. Despite the denial of a North Korean representative regarding a possible involvement of that country, it is speculated that this attack is retaliation by the North Korean government against an upcoming Sony comedy, ‘The Interview’, starring Seth Rogen and James Franco, which depicts an assassination attempt against North Korea’s leader Kim Jong-un. If Hollywood comedies are now deemed a sufficient reason to conduct real-life cyber attacks, fiction and reality are meeting in a very wrong way.

In any event, considering the volume and the sensitive nature of the information disclosed, this may actually be one of the largest corporate cyber attacks ever made public.

It is a sharp reminder that hacking attacks can be directed at any company and can take many forms, all equally damaging. This attack demonstrates once again that not only critical infrastructure is at risk. Sony Pictures Entertainment is one of the largest studios in Hollywood and is really not the expected victim of a cyber attack. However, it was an easy prey, as its business decisions regarding information security had been publicly stated on previous occasions. Despite their ludicrous nature, I guess someone took those comments seriously.

Considerations about the absurdity of having a file directory named ‘Passwords’ aside, this attack highlights that data breaches are among the major threats that companies face nowadays. Cyber attacks are conducted against companies of all sizes. Large companies do eventually recover from these breaches; small businesses, by contrast, hardly ever pull through after suffering a cyber attack. It is therefore essential that businesses implement a solid cyber-security programme, namely by conducting regular self-hacking exercises to assess the vulnerabilities of their security systems in order to prevent a potential breach.

What about Sony? Well, the damages regarding its employees are incalculable, considering that their identities may be stolen, their bank accounts compromised and their houses robbed. Only time will tell if and how it will recover.


© 2017 The Public Privacy
