Tag: Data Collection

Meet Regin

Yes, You have been hacked and spied upon!

Regin is not like the ordinary viruses you might find on your computer. It is a recently discovered and powerful tool for cyber espionage between nation-states, as reported by the computer security firm Symantec and by its main competitor Kaspersky Lab.

Regin is described as a sophisticated cyber attack platform which operates much like a back-door Trojan and mainly affects Windows-based computers. It can be customized with different capabilities depending on the target and, of the five stages in which it operates, only the first is detectable.

Among its diversified range of features, Regin allows remote access to and control of a computer, enabling the attacker to copy files from the hard drive, recover deleted files, steal passwords, monitor network traffic, turn on the microphone or the camera, and capture screenshots.

According to the above-mentioned reports, Regin has been stealthily active since at least 2008 and has been used in systematic spying campaigns against a wide range of international targets, including government entities, Internet service providers, telecom operators, financial institutions, mathematical and cryptographic researchers, businesses large and small, and private individuals.

As for geographical incidence, Saudi Arabia and Russia appear to be Regin's main targets. Mexico, Iran, Afghanistan, India, Belgium and Ireland are among the other targeted countries.

The conclusions drawn in Symantec's report are unsettling, to say the least. Given the malware's high degree of technical competence, its development is likely to have taken months, if not years, to complete.

Regin is a highly-complex threat which has been used in systematic data collection or intelligence gathering campaigns. The development and operation of this malware would have required a significant investment of time and resources, indicating that a nation state is responsible. Its design makes it highly suited for persistent, long term surveillance operations against targets.

Therefore, the new million-dollar question is: who is behind its conception? Unfortunately, it is very difficult to find out who created or financed its development, because the culprits left little trace behind. However, it is well known that few countries are technologically advanced enough to engineer such an accurate tool or to conduct such a large-scale operation.

As a governmental instrument for mass surveillance, cyber espionage and intelligence gathering, Regin is not the only one of its kind. A few years ago, the world witnessed the rise of similar viruses, also of nation-state origin: Stuxnet, Duqu and Flame were three of the previously detected viruses employed to perform industrial sabotage or to conduct cyber espionage.

That said, this historical pattern of cyber attacks clearly shows that virtual wars are being fought on an almost invisible battlefield, cyberspace, where nation-states clash silently. Once the province of opportunistic criminals, viruses are now the weaponry of this cyber warfare.

But a state-sponsored cyber attack does not really come as a surprise. Governments have always spied on each other in order to obtain strategic, economic, political, or military advantage. The discovery of Regin merely confirms that investments continue to be made in the development of implacable instruments for espionage and intelligence gathering.

In this context, it is no coincidence that cyber security is increasingly regarded as a decisive part of any government's security strategy, as it involves protecting national information and infrastructure systems from major cyber threats.

And while these sophisticated attacks are conducted, sensitive information about individuals is accessed, stolen, collected and stored by unknown attackers. To what end? Well, it can be any, really…

Uber – How much privacy are you willing to sacrifice for convenience?

Let's rideshare all your data?

Ah, how convenient it is to need a ride and immediately have a car and a driver at our disposal, just a click away on our mobile phone… We used to call a taxi cab. Now it is much cooler: we call an Uber.

Uber is a San Francisco-headquartered company specializing in ridesharing services made available through a smartphone application. The particularity of the service is that Uber neither owns any cars nor hires any drivers. Indeed, Uber is a platform intended to put drivers and riders in touch, allowing people who have a car to make some extra money, and people who don't to access cheaper rides and to select the most suitable one among the several cars nearby.

Even if you live in a city where the service is not available, you probably know it from the protests held a few months ago by taxi drivers and taxi companies in some capitals where it was implemented, who denounce it as an anticompetitive business.

Competition matters aside, the Uber business model is built upon customers' personal data – information that could reasonably be used to identify them – and therefore raises privacy and data protection issues which cannot be ignored.

Indeed, in order to develop its customized services, Uber collects and processes a humongous amount of personal data from its customers, such as their name, e-mail address, mobile number, zip code and credit card information.

In addition, certain information – such as the browser used, the URL, all of the areas visited, and the time of day – may be automatically or passively collected while users visit or interact with the services. This data is referred to as ‘Usage Information’. In parallel, the IP address or other unique device identifier (for the computer, mobile or other device used to access the services) is collected.

Tracking information is also collected when the user travels in a vehicle requested via the Uber services: during the ride, the driver's mobile phone sends the customer's GPS coordinates to Uber's servers. It is important to note that most GPS-enabled mobile devices can currently pinpoint one's location to within about 50 feet!

This geo-location information is actually the core of the Uber business as it enables users to check which drivers are close to their location, to set a pick up location, and to ultimately allow users wishing so to share this information with others.

The amount of information regarding habits and movements, locations, destinations, workplaces and favourite social spots which can be inferred from a user's trip history and from the geo-location data tracked through mobile devices is, as a matter of fact, quite surprising… and impressively accurate.

For instance, back in 2012, in a post entitled 'Rides of Glory' – no longer available on its website but widely reproduced elsewhere – Uber was able to link rides taken between 10pm and 4am on a Friday or Saturday night, followed 4-6 hours later by a second ride picked up within 1/10th of a mile of the previous night's drop-off point, to 'one-night stands'.

I suppose this outcome makes most of us feel quite uncomfortable… It is one thing for our whereabouts to be known; it is quite another for such conclusions to be drawn from that information.

Most of us do not really think about the implications of randomly giving away personal data. We easily sign up for supermarket value cards in order to get discounts over our grocery bills, thus allowing the retailer to track our purchases and consumption habits.

Besides it being, at the very least, very unpleasant to have our sex lives revealed by the details of our rides home, there is indeed ample room for concern considering Uber's policy and recent practices.

Uber has a very broad privacy policy to which users actually give their consent when they download its app. Indeed, it establishes very few limits to the use of the collected data. According to its policy, Uber can use the ‘Usage Information’ for a variety of purposes, including to enhance or improve its services. In fact, to attain that goal, Uber may even supplement some of the information collected about its customers with records from third parties.

Quite recently, Uber announced an “in-depth review and assessment of [its] existing data privacy program”. This willingness to change is certainly not unrelated to the comments of a senior executive suggesting that Uber was planning to hire a team of opposition researchers to dig up dirt on its critics in the media, referring specifically to a female journalist – comments which were received with a wave of strong criticism.

Of course, this could have merely been a distasteful and off-the-record (because being off the record makes it all better) comment made in a fancy dinner party which does not represent the overall position of the company.

However, right afterwards the rumour emerged that Uber's internal tool called “God View”, which shows the real-time location of vehicles and of customers who have requested a car, along with access to account history, is easily accessible to employees without the rider's consent. As a matter of fact, it was reportedly used to access and track a reporter's movements.

These facts come as little surprise to those already familiar with Uber's very own promotional methods, which have included featuring, at launch parties, a screen showing in real time where certain customers were.

This pattern is a sharp reminder of the risks at stake when we give away our personal data for convenience, and of how much can be revealed by the amount of data we casually make available through an application on our mobile, tablet, computer or similar device.

Imagine now, for instance, that you have a specific condition requiring frequent visits to a hospital or a specialized medical centre, and that Uber were able to infer your health status as easily as it did users' nightly romantic encounters.

I hope that this situation will lead to the adoption of a very strict privacy policy which will end up raising the privacy standards for the entire industry.

But considering all this, I must ask: how much privacy are you willing to sacrifice for your convenience?

EU PNR – A plane not yet ready to fly

Plane Not Ready to fly!

The Civil Liberties, Justice and Home Affairs (LIBE) Committee of the European Parliament has recently discussed the Passenger Name Record (hereafter PNR) draft Directive according to which air carriers would be required, in order to help fight serious crime and terrorism, to provide EU Member States’ law enforcement bodies with information regarding passengers entering or leaving the EU.

This airline passenger information is usually collected during reservation and check-in procedures and relates to a large amount of data, such as travel dates, baggage information, travel itinerary, ticket information, home addresses, mobile phone numbers, frequent flyer information, email addresses, and credit card details.

Similar systems are already in place between the EU and the United States, Canada and Australia through bilateral agreements, allowing those countries to require EU air carriers to send PNR data regarding all persons who fly to and from them. The European Commission's proposal would now require airlines flying to and from the EU to transfer the PNR data of passengers on international flights to the Member State of arrival or departure.

Nevertheless, the negotiation of the proposed EU PNR data exchange scheme has been quite wobbly. The European Commission put forward its proposal in 2011, but it was rejected in 2013 by the abovementioned committee, on the grounds that it did not comply with the principle of proportionality and did not adequately protect personal data as required by the Charter of Fundamental Rights of the EU (hereafter CFREU) and by the Treaty on the Functioning of the EU (hereafter TFEU).

But concerns over possible threats to the EU’s internal security posed by European citizens returning home after fighting for the so-called “Islamic State” restarted the debate. Last summer, the European Council called on Parliament and Council to finalise work on the EU PNR proposal before the end of the year.

However, the ruling of the Court of Justice of the European Union last April regarding the EU's Data Retention Directive, which declared the mass-scale, systematic and indiscriminate collection of data a serious violation of fundamental rights, raises the question of whether these PNR exchange systems with third countries are actually valid under EU law.

Similarly, many wonder if the abovementioned ruling shouldn’t be taken into account in the negotiations of this draft directive considering that it also refers to the retention of personal data by a commercial operator in order to be made available to law enforcement authorities.

And there are, indeed, real concerns involved.

Of course, an effective fight against terrorism might require law enforcement bodies to access PNR data, namely to tackle the issue regarding ‘foreign fighters’ who benefit from EU free movement rights which allow them to return from conflict zones without border checks. For this reason, some Member States are very keen on pushing forward this scheme.

However, the most elemental principles of the rule of law and the most fundamental rights of innocent citizens (the vast majority of travellers) should not be overstepped.

For instance, as the proposal stands, PNR data could be retained for up to five years. Moreover, linking PNR data with other personal data will enable access to the data of innocent citizens, in violation of their fundamental rights.

As ISIS fighters are mostly well known to law enforcement authorities and to the secret services, it is questionable how reasonable and proportionate such unlimited access to this private information can be as a means of crime prevention. How effective would tracking people's movements be in the fight against extremism? Won't such widespread surveillance ultimately turn everyone into a suspect?

Moreover, from the airlines' point of view, recording such an amount of data would undoubtedly imply an excessive increase in costs and, therefore, an unjustifiable burden.

The European Data Protection Supervisor (EDPS) has already considered that such a system on a European scale does not meet the requirements of transparency, necessity and proportionality, imposed by Article 8 of the CFREU, Article 8 of the European Convention of Human Rights and Article 16 of the TFEU. Similarly, several features of the PNR scheme have been highly criticized by the Fundamental Rights Agency (FRA).

At the moment, the European Commission has financed national PNR systems in 15 Member States (Austria, Bulgaria, Estonia, Finland, France, Hungary, Latvia, Lithuania, the Netherlands, Portugal, Romania, Slovenia, Spain, Sweden, and the UK), which leads to a fragmented and incoherent landscape. This is a very onerous outcome for airlines and creates a need for harmonization among data exchange systems. Some MEPs therefore believe that the initiative is intended to circumvent the European Parliament's opposition to the Directive.

All things considered, it is legitimate to question whether the EU PNR will be finalized, as first intended, before the end of the year. Given the deep differences between MEPs and among Member States, that deadline appears increasingly unlikely to be met.

The EU external border’s security at travellers’ fingerprints

One fingerprint down, only nine to go! (Image copyright by Frettie under the Creative Commons Attribution 3.0 Unported licence.)

Last semester, the Council of the European Union and the European Parliament voiced technical, operational and financial concerns regarding the overall implementation of the ‘Smart Borders Package’. In this context, the European Commission initiated an exercise aimed at identifying the most adequate ways to implement it. The exercise would include a Technical Study, whose conclusions would subsequently be tested through a Pilot project.

The Technical Study, prepared by the European Commission, has been recently issued.

But let’s create some context here…

The EU is currently witnessing a very significant increase in the number of people crossing its borders, namely by air. We are talking about millions of people crossing every day, for the most diverse reasons, at the several points of the EU's external border. This very fact makes airports the most relevant way in and out of the EU.

Therefore, if border management, namely check-in procedures, is not duly modernized and supported by a proper legal and technical structure, longer delays and queues are to be expected. Added to this is a paramount security concern, due to the growing numbers of foreign fighters and refugees.

Indeed, under the current framework – the Schengen Borders Code – a thorough check at entry is required for all travellers crossing the external border, regardless of their level of risk or of how frequently they actually travel in and out of the EU. Furthermore, the period of time a traveller stays in the Schengen area is calculated based solely on the stamps affixed to the travel document.

So one of the main goals of the ‘Smart Borders’ initiative is to simplify and facilitate the entrance of “bona fide” travellers at the external borders, significantly shortening the waiting times and queues they have to face. Additionally, the initiative aims at preventing irregular border crossing and illegal immigration, namely through the detection of overstayers, i.e., people who have entered EU territory lawfully but have stayed longer than they were authorized to.

In this context, biometrics appear as a solution. The concept refers to metrics of human features, i.e., elements which are specific to the physical or psychological identity of a person and therefore allow that person to be identified; physiological biometrics, which are specifically at issue here, cover human characteristics and traits such as the face, fingerprints, eye retina, iris and voice. In fact, biometric technologies – technologies able to electronically read and process biometric data in order to identify and recognize individuals – are cheaper and faster than ever and are increasingly used in both the private and the public sector. They are mainly used in forensic investigation and access control systems, as they are considered an efficient tool for truthful identification and authentication.

Indeed, the use of biometric data for purposes other than law enforcement is currently being furthered at the EU level. The biometric systems first implemented were deployed in respect of third-country nationals, such as asylum or visa applicants, and of criminals. Eurodac is a large database of fingerprints of asylum applicants and illegal immigrants found within the EU, which helps the effective application of the Dublin Convention on handling asylum claims; the Visa Information System (VIS) allows Schengen States to exchange visa data; and the Schengen Information System (SIS, and its successor SIS II) is the largest information system for public security in Europe. In 2004, the use of biometrics was extended to the ePassport of the European Union.

Later on, in 2008, the European Commission issued a Communication entitled ‘Preparing the next steps in border management in the European Union’, suggesting the establishment of an Entry/Exit System and a Registered Traveller Programme.

Subsequently, in 2013, the European Commission submitted a ‘Smart Borders Package’, including three legislative proposals. In this regard, the proposal for an Entry/Exit System (hereafter EES) was intended to register entry and exit data of third country nationals crossing the external borders of the Member States of the European Union. Likewise, the proposal regarding a Registered Traveller Programme (hereafter RTP) aimed at offering an alternative border check procedure for pre-screened frequent third-country travellers, thus facilitating their access to the Union without undermining security. In parallel, the purpose of the third proposal was to amend accordingly the Schengen Borders Code.

The foremost aspiration of these instruments was better management of the external borders of the Schengen Member States, the prevention of irregular immigration, information regarding overstayers, and the facilitation of border crossing for frequent third-country national travellers.

Therefore, the EES would allow the time and place of entry and the length of stay to be recorded in an electronic database, consequently replacing the current system of stamping passports. In parallel, the RTP would allow frequent travellers from third countries to enter the EU subject to simplified border checks at automated gates.

Although generally considered a welcome initiative in terms of modernization, the package has nevertheless awakened some concerns regarding privacy and data protection. Indeed, the proposal focuses on the use of new technologies to facilitate travel for frequent travellers and to monitor the EU border crossings of third-country nationals. In practice, it means that hundreds of millions of EU residents and visitors will be fingerprinted and will have their faces electronically scanned.

Last year, the European Data Protection Supervisor (EDPS) adopted a very negative position regarding the proposal to introduce an automated biometrics-based EES for travellers in the region, calling it “costly, unproven and intrusive“. The data retention period in the EES, the choice of biometric identifiers, and the possibility of law enforcement authorities to access its database were among the main concerns raised.

As the proposed system would require ten fingerprints to confirm the identity of individuals at borders and to calculate the duration of their stay in the EU, the EDPS pointed to the unnecessary collection and excessive storage of personal information, considering that two or four fingerprints would be sufficient for identification purposes. The EDPS also expressed apprehension regarding the access to the EES database which would be granted to law enforcement authorities, even if the individuals registered were not suspects of any criminal offence. Questions were also raised regarding the possible exchange of information with third countries which do not have the same level of data protection.

Since then, the Technical Study – which I referred to at the beginning of this post – has been conducted in order to identify and assess the most suitable and promising options and solutions.

According to the document, one fingerprint alone can be used for verification, but it is acknowledged that a higher number of fingerprints could lead to better results in terms of accuracy, despite a more difficult implementation, “in particular, taking into account the difficulty of capturing more than 4 FPs [fingerprints] at land borders where limitations in enrolment quality and time may rise regarding the travellers in vehicle and use of hand-held equipment”. Nevertheless, the enrolment of four or eight fingerprints is recommended as one of the test cases of the pilot project.

Moreover, the study noted that “if facial image recognition would be used in combination with FPs [fingerprints], then it has a beneficial impact on both verification and identification in terms of speed and security leading to lower false rejection rate and reduction in number of FPs enrolled”. In addition, the Study has concluded that the use of facial identification alone is an option to be considered for EES and RTP.

That said, security concerns should not steal the limelight from the fact that biometric data are personal data. In fact, fingerprints can be qualified as sensitive data insofar as they can reveal ethnic information about the individual.

Therefore, biometric data can only be processed if there is a legal basis and the processing is adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed. In this context, the purpose limitation is a paramount principle. The definition of the purpose for which the biometric data are collected and subsequently processed is therefore a prerequisite to their subsequent use.

In parallel, the principles of accuracy, data retention and data minimisation have to be considered, as the data collected should be precise, proportionate and kept for no longer than is necessary for the purposes for which it was first collected.

Besides, the processing of biometric data must rest on a legitimate legal ground, such as the consent of the data subject, which must be freely given, specific and fully informed. The performance of a contract, compliance with a legal obligation and the pursuit of the data controller's legitimate interests may also constitute legal grounds to that effect.

It must be noted that the processing of biometric data raises these and other important privacy and data protection concerns that, more than often, are not acknowledged by the public.

To start with, biometric data in general, and fingerprint data in particular, are irrevocable due to their stability over time: unlike a password, a compromised fingerprint cannot be changed. This makes possible data breaches all the more dangerous.

In addition, the highly complex technologies able to electronically read and process biometric data, and the diversified methods and systems employed in collection, processing and storage, cannot ensure full accuracy, even though fingerprints do present a high level of precision. In fact, low quality of the data or of the extraction algorithms may lead to wrongful results and, therefore, to false rejections or false matches. This may have adverse consequences for individuals, namely given the irreversibility of decisions taken on the basis of a wrong identification.
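The trade-off between the two error types can be illustrated numerically. The sketch below uses made-up similarity scores, not real biometric data: a matcher accepts a comparison when its score reaches a threshold, and moving that threshold pushes the false rejection rate and the false match rate in opposite directions.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """A score >= threshold is accepted as a match.
    Returns (false_rejection_rate, false_match_rate)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, fmr

# Fabricated matcher scores: genuine attempts score high, impostors low.
genuine = [0.91, 0.84, 0.78, 0.95, 0.62, 0.88, 0.73, 0.90]
impostor = [0.10, 0.35, 0.22, 0.55, 0.48, 0.30, 0.65, 0.15]

for t in (0.5, 0.7, 0.9):
    frr, fmr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FRR={frr:.2f}, FMR={fmr:.2f}")
```

A strict threshold keeps impostors out but turns away genuine travellers at the gate; a lenient one does the reverse. No setting makes both errors vanish, which is why the quality of enrolment and of the extraction algorithms matters so much.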

Moreover, the risks associated with the storage of biometric data, and its possible linking with other databases, raise concerns about the security of the data and about uses incompatible with the purposes which initially justified the processing.

That said, we will have to wait for the results of the Pilot Project being developed by the eu-LISA Agency (the Agency for the Operational Management of large-scale IT Systems in the area of Freedom, Security and Justice), which is expected to be completed during 2015, in order to verify the feasibility of the options identified in the Technical Study.


Are you ready for the Internet of Things?

Everything is connected. (Image copyright by Wilgengebroed under the Creative Commons Attribution 2.0 Generic licence.)

Imagine a world where people receive information on their smartphone about the contents of their fridge; where cars involved in an accident call the emergency services, allowing help to be located and deployed more quickly; where cars suggest alternative routes to avoid traffic jams; where personal devices monitor the health of patients or check that elderly people take their regular medication; where washing machines turn on when energy demand on the grid is lowest; where alarm clocks and coffee machines are automatically reset when a morning appointment is cancelled; and where a smart oven can be remotely triggered to heat up dinner by the time you reach home…

If these scenarios once belonged to the sci-fi world, they are not so hard to picture nowadays. The momentum we are living through, and all the progress already woven into our lives, make it a near certainty that such a future is only a matter of time. Technological advancements are enabling achievements that once seemed impractical and are turning sci-fi scenarios into reality.

We are smoothly entering a new age… the age of the Internet of Things (hereafter IoT). Indeed, the IoT may already be happening around us. It suffices to think of all the quite recent changes that we already accept as ordinary.

But what is the IoT all about?

The IoT is a concept which refers to a reality where everyday physical objects are wirelessly connected to the Internet and are able, without human intervention, to sense and identify themselves to surrounding devices, creating a network of communication and interaction in which data is collected and shared. It is therefore associated with products that have machine-to-machine communication capabilities, which are called ‘smart’.
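Machine-to-machine communication of this kind is typically built on a publish/subscribe pattern, as in IoT protocols such as MQTT. The toy broker below is an in-memory stand-in, not a real network protocol implementation, but it illustrates the idea: devices publish readings on named topics, and any device subscribed to a topic receives them with no human in the loop. The topic names are invented for the example.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish/subscribe hub, in the spirit of MQTT."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every device listening on this topic.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
events = []

# A 'smartphone' subscribes to the fridge's inventory topic...
broker.subscribe("home/fridge/inventory", lambda t, p: events.append((t, p)))

# ...and the 'smart fridge' publishes a reading; no human intervenes.
broker.publish("home/fridge/inventory", {"milk_litres": 0.2, "eggs": 1})
```

The decoupling is the point: the fridge does not know or care who is listening, which is exactly what makes it easy to add ever more devices to the conversation.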

The high-tech evolution has made ‘smart’ more convenient and accessible, and has made the vast majority of us technologically dependent in several areas of our daily lives. Connected devices have proliferated around us. Consider, for instance, the number of smartphones and other smart devices that most of us can no longer conceive of life without, as they allow us to connect with the world as was never possible before.

Similarly, our domestic convenience and comfort have been expanded in ways that once belonged to the imaginary. Homes, housework and household activity can be fully automated, enabling us to remotely control lighting, alarm systems, heating or ventilation. The connection of domestic devices to the Internet is usually referred to as “home automation” or “domotics”.

In parallel, we are now capable of the ‘quantified self’, commonly defined as self-knowledge acquired through self-tracking with technology (for instance, pedometers or sleep trackers). One can now track biometrics such as insulin and cortisol, or record more mundane information about one's own habits and lifestyle, such as physical activity and caloric intake. This monitoring is increasingly done by wearables, i.e., computer-powered devices or equipment that can be worn by an individual, including watches, clothing, glasses and similar items. Google Glass, Android Wear and the Apple Watch are the most famous recent examples.

Scarily enough, the number of objects connected to the Internet already exceeds the number of people on earth. The European Commission claims that an average person currently has at least two objects connected to the Internet, a figure expected to grow to seven by 2015, with 25 billion wirelessly connected devices globally. By 2020, that number could double to 50 billion.

However, every time we add another device to our lives, we give away a little more of ourselves.

Consequently, along with its conveniences, and given the amount of data collection it enables so easily and cheaply, the idea of a hyper-connected world raises important concerns regarding privacy, security and data protection. In truth, while it is a relatively well-known fact that our mobile devices frequently send data off to the Internet, many of us do not understand the far-reaching implications of carrying around an always-on connection, let alone of having almost our entire lives connected to the Internet.

In fact, such objects will make it possible to access a humongous amount of personal data and to spread it around without any awareness or control on the part of the users concerned. From preferences, habits and lifestyle to sensitive data such as health or religious information, from geo-location and movements to other behaviour patterns, we will put a huge amount of information out there. In this context, cross-referencing the data collected by means of different IoT devices will allow the building of a very detailed user profile.

It is essential that users are given control over the data which directly refers to them and are properly informed of what purposes its processing might serve. Currently, it is in fact very common for the data generated to be processed without consent, or with poorly given consent. Quite often, further processing of the original data is not subject to any purpose limitation.

Moreover, as each device will be assigned an IP address in order to connect to the Internet, each one becomes an addressable, and thus attackable, endpoint. Indeed, with almost everything connected to the Internet, every device will be at risk of being compromised and hacked. Imagine that your car could be attacked and remotely controlled, or that a spying application could be installed on your TV. Imagine that your fridge could receive spam and send phishing e-mails. The data collected through medical devices could be exposed. After all, it is already easier to hack routers and modems than computers.

Last but not least, as IoT devices will be able to communicate with other devices, the security concerns multiply exponentially. Indeed, a single compromised device could lead to the vulnerability of all the other devices on the network.

Now imagine that your whole life is embedded in internet-connected devices… Think, for instance, of fridges, ovens, washing machines, air conditioners, thermostats, lighting systems, music players, baby monitors, TVs, webcams, door locks, home alarms and garage door openers, just to name a few. The diversity of connected devices is simply astonishing! We may thus reach the point where you will have to install a firewall for your toaster and set a password to secure your fridge.

From a business point of view, questions regarding the security setup and the software and operating system vulnerabilities of internet-connected devices also have to be answered. Indeed, companies are increasingly using smart industrial equipment and IoT devices and systems, from cars to cameras and elevators, from building management systems to supply chain management systems, from financial systems to alarm systems.

On another level, the security of nations’ critical infrastructures could also be at stake. Imagine, for instance, that the traffic system, the city’s electric grid or the water supply could be easily accessed by a third party with ill intentions.

Of course, the EU could not be indifferent to this emerging new reality and to the challenges it presents.

In 2012, the European Commission launched a public consultation, seeking input regarding a future policy approach to smart electronic devices and the framework required to ensure an adequate level of control over data gathering, processing and storing, without impairing the economic and societal potential of the IoT. As a result, the European Commission published its conclusions in 2013.

Last month, the European data protection authorities, assembled in the Article 29 Working Party, adopted an opinion regarding the IoT, according to which the expected benefits for businesses and citizens cannot come at the expense of privacy and security. Therefore, the EU Data Protection Directive 95/46/EC and the e-Privacy Directive 2002/58/EC are deemed to be fully applicable to the processing of personal data through the different types of devices, applications and services in the context of the IoT. The opinion addresses recommendations to several stakeholders participating in the development of the IoT, namely device manufacturers, application developers and social platforms.

More recently, at the 36th International Conference of Data Protection and Privacy Commissioners, data protection officials adopted a declaration on the Internet of Things and a resolution on big data analytics.

The aforementioned initiatives demonstrate the existing concerns regarding big data and the IoT and the intention to subject them to data protection laws. In this context, it is assumed that data collected through IoT devices should be regarded and treated as personal data, as it implies the processing of data relating to identified or identifiable natural persons.

This obviously requires valid consent from data subjects for its use. Parties collecting information through IoT devices therefore have to ensure that consent is fully informed, freely given and specific. The cookie consent requirement is also applicable in this context.

In parallel, data protection principles are deemed applicable in the IoT context. Therefore, according to the principle of transparency, parties using information from IoT devices have to inform data subjects about what data is collected, how it is processed, for which purposes it will be used and whether it will be shared with third parties. Similarly, the principle of purpose limitation, according to which personal data must be collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes, is also applicable. Furthermore, considering the data minimization principle, the data collected should not be excessive in relation to the purpose, nor retained longer than necessary.

Considering the vast number of stakeholders involved (device manufacturers, social platforms, third-party applications, device lenders or renters, data brokers or data platforms), a well-defined allocation of legal responsibilities is required. Therefore, a clear accountability of data controllers shall be established.

In this context, Directive 2002/58/EC is deemed applicable when an IoT stakeholder stores or gains access to information already stored on an IoT device, inasmuch as IoT devices qualify as “terminal equipment” (like smartphones and tablets) on which software or apps have been installed both to monitor the user’s environment through embedded sensors or network interfaces and to send the data collected by these devices to the various data controllers involved.

That said, one can only rejoice that enchantment with the possibilities of the IoT has not surpassed awareness of the existing vulnerabilities. But it remains to be seen how these and the other data protection and privacy requirements can be effectively implemented in practice.

We are certainly on the right path to dodging any black swan event. However, it won’t be that easy to find appropriate answers to the massive security issues that come along. And one should not forget that technology always seems to be one step ahead of legislation.

So, the big question to ask is:

Are we really ready for the Internet of Things?


The ‘risk-based’ approach to Data Protection, too risky for SMEs?

Balance is hard, very hard.

For those businesses which collect, process and exploit personal data, the draft of Chapter IV of the forthcoming EU General Data Protection Regulation is particularly relevant as it foresees the possible future compliance obligations of data controllers and data processors.

Considering the latest position of the Council of the European Union regarding this chapter, a ‘risk-based’ approach to compliance is a core element of the accountability principle itself [1].

In fact, the Article 29 Working Party [2] recently issued a statement supporting a ‘risk-based’ approach in the EU data protection legal framework.

But what is meant by the concept of a ‘risk-based’ approach?

It mainly refers to the consideration of any potential adverse effects associated with the processing, and implies different levels of accountability obligations for data controllers depending on the risks involved in each specific processing activity. It is therefore quite different from the ‘one size fits all’ approach initially proposed by the European Commission.

In this context, the respect and protection of data subjects’ rights (for instance, the rights of access, objection, rectification and erasure, and the rights to transparency, to data portability and to be forgotten) shall be granted throughout the data processing activities, regardless of the level of risk involved in those activities.

However, principles such as legitimacy, transparency, data minimization, data accuracy, purpose limitation and data integrity, and the compliance obligations incumbent upon controllers, shall be proportionate to the nature, scope, context and purposes of the processing.

This ‘risk-based’ approach is developed throughout Chapter IV, namely in the provisions related to the data protection by design principle [3], the obligation of documentation [4], the obligation of security [5], the obligation to carry out an impact assessment [6], and the use of certification and codes of conduct [7].

These accountability obligations, in each phase of the processing, will vary according to the type of processing and the risks to privacy and to other rights and freedoms of individuals.

In this context, the proportionality exercise will have an effect on the requirements of privacy by design [8], which consists of assessing the potential risks of the data processing and implementing suitable privacy and data protection tools and measures to address those risks before initiating the processing activities.

Besides, the introduction of the ‘risk-based’ approach is also likely to be relevant in respect of controllers not established in the EU, as they most likely won’t be required to designate a representative in the EU for occasional processing activities which are unlikely to result in a risk to the rights and freedoms of individuals [9].

Moreover, a ‘risk-based’ approach will also be implemented regarding the security of the processing, as technical and organisational measures adequate to the likelihood and severity of the risk to the rights and freedoms of individuals shall be adopted [10].

In parallel, it is foreseen that the obligation to report data breaches will be restricted to breaches likely to result in a high risk to the rights and freedoms of individuals. In this context, if the compromised data is encrypted, for instance, the data controller won’t be required to report a verified breach [11].

The weighing assessment is also expected to be relevant regarding the data protection impact assessment [12] required for processing activities likely to result in a ‘high risk’ to the rights and freedoms of individuals, such as discrimination, identity theft, fraud or financial loss.

Another important requirement is the consultation of a Data Protection Authority prior to the processing of personal data when the impact assessment indicates that the processing would result in a high degree of risk in the absence of mitigating measures taken by the controller [13].

Of course, “nothing is agreed until everything is agreed” and this chapter will be subject to further revision. There is, indeed, considerable room for improvement.

For instance, it is questionable whether a ‘risk-based’ approach actually makes data protection standards stronger, considering the inadequacy of the risk assessment methodology with regard to fundamental rights.

In parallel, the definition of ‘high risk’ is still too broad, encompassing almost all businesses operating online. Similarly, the impact assessment process presents itself as complex, burdensome and costly. At the current state of play, small businesses and start-ups are the most likely to be negatively affected by the administrative and financial burden that some of the abovementioned provisions will entail. This is quite ironic, considering that precisely this concern is at the core of the understanding that SMEs should be exempted from the obligation to assign a Data Protection Officer.

However, it is important for businesses to try to anticipate how the compliance requirements will be set in the future in order to be prepared for their implementation.

We will see in due time how onerous the regime will be. Whilst we do not know the exact content of the text that will eventually be adopted, it is evident now that substantive accountability obligations will be imposed upon businesses handling personal data.

References

1. See article 22 of the Council’s document.
2. The Article 29 Working Party gathers a representative of the supervisory authority designated by each EU Member State; a representative of the authority established for the EU institutions and bodies; and a representative of the European Commission.
3. See article 23.
4. See article 28.
5. See article 30.
6. See article 33.
7. See articles 38 and 39.
8. See article 23.
9. See article 25.
10. See article 30.
11. See articles 31 and 32.
12. See article 33.
13. See article 34.

Atlas or how tracking technology is getting smarter and more intrusive

Thumbs up on tracking everyone, everywhere.

Traditionally, an atlas is a collection of maps, typically of the earth. But the name is about to assume a much creepier meaning. It is now associated with a ‘people-based marketing’ model, meaning the tracking and mapping of consumers’ behaviour both online and offline, as they move across content, websites and apps with different devices.

I am referring to the new advertising platform called Atlas, recently announced by Facebook. The platform is an improved version of the Atlas Advertiser Suite, purchased from Microsoft in 2013, and is deemed far more relentless than cookie technology, which it aims eventually to replace.

Currently, marketers usually target and track the performance of online advertisements through cookies. However, cookies have been failing the marketing industry due to the very limited results they allow. Indeed, they are less and less reliable and increasingly ineffective, as browser settings and plug-ins can block them. Moreover, they are not as effective on smartphones and tablets, nowadays the main tools for accessing the Internet, as they are on desktop computers. In addition, they do not distinguish between users and devices.
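To make the cookie mechanism concrete, here is a minimal Python sketch of how an ad server traditionally tags a browser with an opaque identifier in a Set-Cookie header. The `visitor_id` name and one-year lifetime are illustrative assumptions, not any particular ad network’s implementation.

```python
import uuid
from http.cookies import SimpleCookie

def build_tracking_cookie():
    """Build a Set-Cookie header carrying an opaque visitor ID,
    the way an ad server traditionally tags a browser."""
    cookie = SimpleCookie()
    visitor_id = uuid.uuid4().hex  # random identifier, unique per browser
    cookie["visitor_id"] = visitor_id
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for ~1 year
    cookie["visitor_id"]["path"] = "/"
    return visitor_id, cookie.output(header="Set-Cookie:")

visitor_id, header = build_tracking_cookie()
print(header)
```

The identifier lives in one browser on one device, and vanishes as soon as the user clears cookies or a plug-in blocks the header — which is precisely why cookies cannot distinguish between users and devices.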

As a result, advertising companies, contrary to their best interests, are unable to figure out which advertisements are worthwhile and efficient.

Facebook, donning a red cape over its clinging blue suit, proposes to solve these issues with Atlas.

How?

Well, by taking advantage of the huge amount of data it collects about its members. After all, information such as where people live or go, the websites they visit, their preferences, interests and interactions is highly valuable for marketing purposes. Indeed, it enables marketing companies to target their advertisements more efficiently, according to contextual and behavioural profiling.

But how?

While logged in to a Facebook account, each user has one unique identifier which distinguishes him or her from all others. It is like a fingerprint.

Atlas will combine cookies with the unique Facebook individual identification to track users’ exposure to advertising across the web, linking their personal information to their browsing activity.
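The difference a persistent account identifier makes can be illustrated with a toy sketch. This is not Facebook’s actual pipeline; the event log, field names and IDs are invented for illustration. Grouping ad exposures by per-device cookie fragments one person into several profiles, while grouping by a stable account ID merges their devices into one.

```python
from collections import defaultdict

# Hypothetical ad-exposure log: each event carries a per-device cookie
# and, for logged-in users, a persistent account ID.
events = [
    {"cookie": "c-laptop-01", "account_id": "u42", "ad": "shoes"},
    {"cookie": "c-phone-07",  "account_id": "u42", "ad": "shoes"},
    {"cookie": "c-phone-07",  "account_id": "u42", "ad": "watch"},
    {"cookie": "c-tablet-03", "account_id": "u99", "ad": "shoes"},
]

def profiles_by(key):
    """Group the ads seen under the chosen identifier."""
    profile = defaultdict(list)
    for event in events:
        profile[event[key]].append(event["ad"])
    return dict(profile)

# Cookie view: u42's laptop and phone look like two unrelated people.
print(profiles_by("cookie"))      # three fragmented profiles
# Account-ID view: the same activity collapses into one profile per person.
print(profiles_by("account_id"))  # two unified, cross-device profiles
```

Linking browsing history to a login in this way is what turns device-level tracking into person-level tracking.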

In this context, marketers and advertisers will be able to match the list of individuals who they know have bought their product, through purchasing details, with the list of advertisements those individuals have seen online.

As a result, they will be able to evaluate to what extent their targeted advertisements on Facebook influence its members’ purchases and to assess which ones are successful.

Getting a cold feeling of discomfort regarding your privacy?

Do.

While this might be good news for advertisers and marketers in general, users already worried about their privacy and personal data will certainly find room for some additional concern.

Indeed, even if many other Internet companies, such as Google and Yahoo, collect data on individuals based on their web browsing and other online activities and use it to target ads, Facebook raises the stakes to a whole new level.

To start with, it distastefully shares data collected within its social network with third parties, advertisers and marketers. So information provided by its members in one context will be used in another.

In addition, by combining cookies with the Facebook ID, Atlas will make it possible to track the online activities of logged-in users across devices and to assess their reaction to advertising campaigns both on Facebook and on third-party websites and apps, on desktop and mobile devices alike.

Therefore, Atlas applies a user’s Facebook identity beyond Facebook’s walls, exposing users who are logged in across devices to a new persistent tracking mechanism which I can’t help but picture as a constant and undesirable online stalker.

Since advertisers who have purchased ad campaigns through Atlas can choose whether or not to include Facebook itself, Facebook’s primary intention is to demonstrate that advertisements placed on its website do work, i.e., that the online social behaviour and search habits of its members can be a faithful indicator of consumer interest and purchase intent. The aim is to attract the interest of advertisers and marketers in placing ads on its platform, with the argument that ads bought through Atlas will be more effective than those on other platforms, because they will use data collected through Facebook.

This will enable Facebook to establish a demand-side platform, where marketers will be able to buy ads which target Facebook’s members as they move across the Web, and even target users through real-time bidding. Once a user has logged into Facebook on a device, Atlas will be able to find that user and present personalized ads.

In this context, the core privacy concern is whether the data can be utilized while maintaining users’ privacy rights. Facebook pledges that the whole process will be anonymous and that it is not going to disclose personal information, such as users’ names or locations, to advertisers. Supposedly, marketers and advertisers won’t be able to access any details other than those they already know. Furthermore, marketers won’t be able to take Atlas’s cross-device tracking information out of the Facebook system.

Nonetheless, it conveniently failed to acknowledge that this kind of marketing is targeted at us as identified individuals, even though no real names are revealed.

Indeed, it belongs to an emerging strategy known as ‘onboarding’, which aims to link our offline life to our online activity. Instead of users’ actual names, Atlas targeting segments refer to age, gender and demographics and might eventually include political affiliations, credit card use and relationship status.

So Facebook’s policy regarding real names might not be as well intentioned as it was first presented. As users voluntarily submit authentic information just by using the social network on a regular basis, knowing its users’ real identities allows the building of detailed profiles of people.

It is already known that Facebook’s partners include Omnicom, Instagram and, possibly, Twitter.

Some consider that this aims to take down Google from its dominant position regarding online-display advertisements, taking advantage of the fact that Google’s targeting is primarily based on cookies, which don’t work on mobile phones and get confused across users and devices. Despite Facebook’s denials, Atlas will allow Facebook to build an advertisement network that would, like Google’s AdSense, extend its ads across the Web.

I don’t know a single person – except for my little cousin, who thrives on ad time on TV – who appreciates having every webpage they open filled with ads. And it gets worse when they are irrelevant!

Perhaps Atlas is just a way of ensuring that the advertisements users see are of more interest to them. Hopefully, it doesn’t mean that we will have to face a greater amount of ads.

Anyway, Facebook’s members can’t opt out of Facebook’s data capture mechanisms entirely, although they will be able to view and change the types of ads they are presented with through the Ad Preferences portal.

But while some may argue that Atlas is just a new tool to make ads more relevant to users, one shouldn’t ignore that users are being made more relevant to advertisers. We are the product. Perhaps for those who need to socialize online, Ello is not such a bad option after all…

 

♫ I just call to say…la la la ♪: The unromantic side of telemarketing

Not another one! [1]

Missed anonymous calls that leave you wondering who it may have been… Calls from unknown numbers at the most inconvenient moment… Wasting money returning the call… The displeasure of discovering, especially if we were expecting a specific important call, that it is only a marketing communication… The frustration of spending long and precious minutes repeating that we are not interested in whatever product the interlocutor is trying to sell…

It most certainly sounds familiar…

From my personal experience I can cite quite a few examples of unsolicited marketing, some of which could actually qualify as marketing harassment. Not the best publicity, if you ask me…

From evening calls, to anytime calls, from participating in a raffle only to be attacked by unwanted marketing initiatives, from registering in an online shopping website only to be contacted by financial institutions intending to sell you some credit card, from ordering a body lotion only to start receiving advertising of completely unrelated products…

I am specifically referring to business-to-consumer (B2C) advertising and marketing, through all the channels technologically available to promote companies’ commercial campaigns of products and services among individual buyers.

However, telemarketing is, in my very personal opinion, among the most annoying direct marketing initiatives. It gets worse when calls are repetitive, insistent, and even aggressive, as many of them usually are.

Worse than that? Well, I can easily point to having a salesperson ringing your doorbell right before or, even worse, during dinner time…

If the assumption that consumers’ purchases are usually based on personal emotions is correct, despite not being a marketing genius myself, I am pretty sure that bothering potential clients is never (ever!) the way to go when it comes to attracting consumers. As a matter of fact, I am certain that it can actually have the opposite effect. So, if you own a business and somehow your marketing campaign is not working, you might want to check this criterion.

Nevertheless, it is astonishing how frequently marketing initiatives are abusive and unlawful. The number of businesses that seem to be completely unaware of their responsibilities as data controllers never ceases to amaze me. I always fail to understand whether they actually ignore their duties or just pretend to, in order to take advantage of data subjects’ likely naivety on the matter.

Legal requirements, such as those foreseen in the e-Privacy Directive, i.e., the Directive on privacy and electronic communications, and in Directive 95/46, which is applicable because direct marketing requires the processing of personal data, are not suitably taken into consideration. It is as if some companies do not acknowledge that individuals have any rights over their personal data, including the absolute right to object to their personal data being used for marketing purposes.

However, while this is merely an inconvenience for me, as I know which reasoning to invoke and which means are required to quickly put a stop to any further annoyance, not everybody does. Sometimes it takes people months to get definitively rid of any undesirable contact.

The very basic requirement applicable to direct marketing – the prior consent of the data subject – seems to be easily overlooked, as many companies sell or share customers’ data without their authorisation. Most of the time, individuals do not even fully appreciate that they are giving their consent or what they are consenting to, or are not even given the possibility to refuse such use of their personal data.

This is particularly worrying considering all the changes which are on the way. If businesses keep ignoring or refusing to acknowledge the requirements they are bound to comply with, they will commit the offences, and suffer the sanctions, which will most likely be foreseen, for instance, in the future EU General Data Protection Regulation.

I have already had the opportunity to address some of those forthcoming changes here. These changes are, however, particularly restrictive regarding marketing initiatives.

All forms of marketing communications, including telemarketing and direct mail, will be subject to the individual’s consent. Indeed, the current ‘opt-out’ checkbox system will be replaced by an ‘opt-in’ permission method. This means that any communication which has not been the object of the data subject’s prior, free, explicit and informed consent will be forbidden.

The criterion of explicit consent requires a clear statement or an affirmative action. In this context, companies collecting information will have to ensure that the data subject is well aware of the specific purposes of the data collection, namely for marketing purposes.

In parallel, the data subject will be able to access the data collected without being charged any fee. Moreover, if a data subject decides to opt out of marketing communications, marketers will have to delete any records they hold, if requested. Marketers won’t be able to retain any details in that case, unless they can show legitimate grounds for retaining the data.

As a direct result, if companies cannot demonstrate that explicit consent for marketing purposes was previously given, they will have to delete the data. Databases and contact lists will most certainly be severely reduced.

The forthcoming changes will obviously make conducting marketing campaigns more difficult and will consequently require a shift in marketing strategies in order to remain compliant with the law.

As a consumer, I am always in favour of legislation which protects individuals against ambiguities related to the use of their personal information.

As a lawyer, I can only provide timely and relevant information that will help my clients comply with the law while (hopefully) simultaneously making a profit for their company.

The unpleasant side of non-compliance with the rules on direct marketing is not limited to bad publicity or reputation. Fines, legal action and financial damages also have strikingly negative effects on businesses. For this reason, companies should start preparing for the forthcoming changes in advance in order to avoid any surprises, save time and money, and make the most of the new situation.

References

1. Copyright by methodshop .com under the Creative Commons Licence – Attribution-ShareAlike 2.0 Generic.

© 2018 The Public Privacy
