Month: November 2014

Net Neutrality in the EU – A work still in progress

Which neutrality do you prefer? (Image: EFF-Graphics, CC BY 3.0 Unported)

Aiming to allow everyone to communicate with anybody globally, the net neutrality principle establishes that all content providers should have equal access to networks. In this context, it enables people to access and impart information, and it provides entrepreneurs with a proper platform to invest and develop new business models. Therefore, non-discrimination commitments are required from Internet service providers regarding users, content, devices and communications.

But it is easier said than done… In fact, net neutrality is not a straightforward principle, and it thus allows different interpretations. Perhaps the very nature of the concept can – at least partially – explain the difficulty of the institutional and political debates surrounding the legislative reforms in the telecommunications sector, both in the EU and in the USA.

On the EU side, the negotiations regarding the draft regulation laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent (the TSM proposal) have been quite tumultuous.

As you might well remember, it all began with the text proposed by the European Commission in 2013, which was claimed to fully implement the principle of net neutrality while actually stripping it of all real meaning. In fact, it foresaw an almost unlimited right of Internet service providers (hereafter ISPs) to manage Internet traffic.

Afterwards came the debates within the European Parliament, whose first reading ended successfully last April, resulting in a clear and strict interpretation of the net neutrality principle and a proper framework for ‘specialised services’. Indeed, according to the text, telecommunications operators would be allowed to develop access offers with an optimised quality of service for specific applications which would not run properly on the so-called ‘best-effort Internet’ (the model of the Internet that does not differentiate between ‘levels’ of content providers: all web authors, large and small, enjoy the same ability to produce content or services that can, via the Internet, reach an audience or customer base).

Currently, the debates are being held within the Council of the European Union which, alongside the European Parliament, is the EU co-legislator. However, the meeting of the EU Member States’ telecommunications ministers, held in Luxembourg last June, clearly demonstrated the major divisions existing among Member States.

Considering the most recent proposal of the Italian Presidency (see here and here), it was quite evident that Member States were heading toward a looser and weaker approach to net neutrality rules. The proposal consisted of a ‘principles-based approach’, intended not to inhibit innovation and to avoid the regulation becoming obsolete in the future.

However, the proposal did not address the principle of net neutrality but rather its opposite, as it set out principles for traffic management:

Clear principles for traffic management in general, as well as the obligation to maintain sufficient network capacity for the internet access service regardless of other services also delivered over the same access.

In fact, the very important definitions of net neutrality and specialised services were not included in the text.

According to the document of the Italian Presidency, “instead of a definition of net neutrality there could be a reference to the objective of net neutrality, e.g. in an explanatory recital, which would resolve the concerns that the definition might be at variance with the specific provisions.” However, clear provisions are required in order to ensure its full enforcement.

Specialised services, which refer to the types of content that operators could prioritise over others, were not regulated, but neither were they prohibited. That said, if they were not to be provided for in the text, the principle of non-discrimination should at least have been clearly stated instead. It was not.

In its place, the text foresaw that ISPs would be able to apply traffic management measures as long as they were transparent, proportionate and not anti-competitive. Measures “that block, slow down, alter, degrade or discriminate against specific content, applications or services, or specific classes thereof” could be applied under certain circumstances, such as to “prevent the transmission of unsolicited communications”, to address “temporary congestion control”, or to meet “obligations under a contract with an end-user to deliver a service requiring a specific level of quality to that end-user”.

Moreover, the proposal did not contain any reference to the Member States’ obligation to guarantee the right to freedom of expression, which must be ensured for end-users and content providers alike.

That said, this text raised confusion and concern. To start with, regarding unsolicited communications, it must be noted that an e-mail service is not an internet access service. Moreover, it should have been clarified that the prevention of temporary congestion should be an exception rather than established ‘by default’. Furthermore, the concept of a “contract with an end-user to deliver a service requiring a specific level of quality to that end-user” is not fully compatible with the ‘best effort’ Internet concept.

Last but not least, the text lacked a clear non-discrimination principle for Internet access providers. For instance, it did not refer to discrimination based on pricing, which would lead to a result where big telecommunications companies could pay for preferential treatment for their services, or have their services made accessible for free, while others with less financial capacity would end up being excluded through the throttling of their services.

As a result, ISPs would turn themselves into the gatekeepers of a market of customers accessible only to those companies willing to pay accordingly. This is a crucial point, because consumers will invariably prefer the websites or services made available for free.

The direct result of such a text was that telecoms operators would be able to discriminate between different users, their communications or the content accessed. Internet access providers, and not users, would therefore decide what applications and content could be freely used.

In an unfortunate coincidence, Günther Oettinger, the Digital Commissioner already well known for his ‘inside the box’ way of thinking, published the first post on his blog, arguing that full coverage of internet access in rural zones would finally be possible if telecommunications operators were allowed “to reap the benefit of their investments”.

Moreover, a letter sent from Jean-Claude Juncker and Frans Timmermans to the other commissioners is being interpreted as suggesting that the European Commission might change direction regarding its initial proposal.

In this context, the main challenge is to reconcile the open internet as an instrument for democratic expression, which promotes informed citizenship and plurality of opinions, with network operators’ own interest in managing their networks, namely through specialised services. ISPs should be entitled to manage traffic – namely, by offering customers internet access packages with different speeds and volumes – but traffic should be neither prioritised nor discriminated against based on the content, services, applications, or devices used.

More recently, the Italian Presidency appears to have distanced itself from its own proposals, alleging that

none of the compromise drafts, which had been developed at a technical level, has gathered enough consensus. Such drafts (…) are significantly different from the positions of the single Member States, including Italy, that has always chosen to act as a neutral mediator under the Presidency rather than imposing its own point of view.

This is just the consequence of the strong divergences that divide EU Member States, which are expected to be resolved at a political level.

In this context, the recent resolution adopted by the European Parliament does not come as a surprise as it stresses that

all internet traffic should be treated equally, without discrimination, restriction or interference, irrespective of its sender, receiver, type, content, device, service or application.

In these dark times for net neutrality, one can only hope for the right balance between net neutrality and reasonable traffic management to be found.

And as Christmas is getting closer, one can also wish for the EU and the USA to ultimately adopt compatible rules on guaranteeing an open internet. As announced recently, Barack Obama is taking strong positions in favour of Net Neutrality and is calling on the Federal Communications Commission (FCC) to adopt rules to prevent ISPs from blocking and slowing down content.


Meet Regin

Yes, you have been hacked and spied upon!

Regin is not like the regular viruses you can find on your computer. It is the most recently discovered powerful tool for cyber espionage between nation-states, as reported by the computer security firm Symantec and by its main competitor, Kaspersky Lab.

Regin is described as a sophisticated cyber attack platform, which operates much like a back-door Trojan, mainly affecting Windows-based computers. It can be customized with different capabilities depending on the target and, while it operates in five stages, only the first one is detectable.

Among its diverse range of features, Regin allows remote access to and control of a computer, enabling the attacker to copy files from the hard drive, recover deleted files, steal passwords, monitor network traffic, turn on the microphone or the camera, and capture screenshots.

According to the above-mentioned reports, Regin has been quietly circulating since at least 2008, and has been used in systematic spying campaigns against a wide range of international targets, namely government entities, Internet service providers, telecom operators, financial institutions, mathematical and cryptographic researchers, big and small businesses, and individuals.

As for the geographical incidence, Saudi Arabia and Russia appear to be the major targets of Regin. Mexico, Iran, Afghanistan, India, Belgium and Ireland were among the other targeted countries.

The conclusions drawn in Symantec’s report are, to say the least, unsettling. It states that, considering Regin’s high degree of technical competence, its development is likely to have taken months, if not years, to complete.

Regin is a highly-complex threat which has been used in systematic data collection or intelligence gathering campaigns. The development and operation of this malware would have required a significant investment of time and resources, indicating that a nation state is responsible. Its design makes it highly suited for persistent, long term surveillance operations against targets.

Therefore, the new million-dollar question is: who is behind its conception? Unfortunately, it is very difficult to find out who created or financed its development, because little trace of the culprits was left behind. However, it is well known that not all countries are technologically advanced enough to engineer such a precise tool or to conduct such a large-scale operation.

As a governmental instrument for mass surveillance, cyber espionage and intelligence gathering, Regin is far from being one of a kind. A few years ago, the world witnessed the rise of similar viruses, also of nation-state origin. Stuxnet, Duqu and Flame were three previously detected viruses employed to perform industrial sabotage or to conduct cyber espionage.

This historical pattern of cyber attacks clearly shows that virtual wars are being fought on an almost invisible battlefield, cyberspace, where nation-states clash silently. Once limited to opportunistic criminals, viruses are now the new weaponry in this cyber warfare.

But a state-sponsored cyber attack does not really come as a surprise. Governments have always spied on each other in order to obtain strategic, economic, political, or military advantage. The discovery of Regin just confirms that investments continue to be made in developing implacable instruments for espionage and intelligence gathering.

In this context, it is no coincidence that cyber security is increasingly regarded as a decisive part of any government’s security strategy, as it involves protecting national information and infrastructure systems from major cyber threats.

And while these sophisticated attacks are conducted, sensitive information about individuals is accessed, stolen, collected and stored by unknown attackers. To what end? Well, it can be any, really…

Uber – How much privacy are you willing to sacrifice for convenience?

Let’s rideshare all your data?

Ahhh, how convenient it is to need a ride and to immediately have a car and a driver at our disposal, just a click away on our mobile phone… We used to call a taxi cab. Now it is much cooler: we call an Uber.

Uber is a San Francisco-headquartered company specialising in ridesharing services, made available through a smartphone application. The particularity of the service is that Uber neither owns cars nor hires drivers. Indeed, Uber is a platform intended to put drivers and riders in touch, allowing people who have a car to make some extra money, and people who don’t to have cheaper rides at their disposal and to select the most suitable one among the several cars nearby.

If you live in a city where the service is not available, you most likely know it from the protests held a few months ago by taxi drivers and taxi companies in some capitals where it operates, who denounce it as an anticompetitive business.

Competition matters aside, the Uber business model is built upon customers’ personal data – information that could reasonably be used to identify them – and therefore raises privacy and data protection issues which cannot be ignored.

Indeed, in order to develop its customized services, Uber collects and processes a humongous amount of personal data from its customers, such as their name, e-mail address, mobile number, zip code and credit card information.

In addition, certain information – such as the browser used, the URL, all of the areas visited, and the time of day – may be automatically or passively collected while users visit or interact with the services. This data is referred to as ‘Usage Information’. In parallel, the IP address or other unique device identifier (for the computer, mobile or other device used to access the services) is collected.

Tracking information is also collected when the user travels in a vehicle requested via Uber’s services, as the driver’s mobile phone sends the customer’s GPS coordinates to Uber’s servers during the ride. It is important to note that most GPS-enabled mobile devices can currently pinpoint one’s location to within 50 feet!

This geo-location information is actually the core of the Uber business as it enables users to check which drivers are close to their location, to set a pick up location, and to ultimately allow users wishing so to share this information with others.

The amount of information regarding habits and movements, locations, destinations, workplaces and favourite social spots which can be inferred from a user’s trip history and from the geo-location data tracked through mobile devices is, as a matter of fact, quite surprising… and impressively accurate.

For instance, back in 2012, in a post entitled ‘Rides of Glory’ – no longer available on its website but widely reproduced elsewhere – Uber was actually able to link rides taken between 10pm and 4am on a Friday or Saturday night, and followed 4-6 hours later by a second ride from within 1/10th of a mile of the previous night’s drop-off point, to ‘one-night stands’.

I suppose this outcome makes most of us feel quite uncomfortable… It is one thing for our whereabouts to be known. It is quite another for such conclusions to be drawn from that information.

Most of us do not really think about the implications of randomly giving away personal data. We easily sign up for supermarket value cards in order to get discounts over our grocery bills, thus allowing the retailer to track our purchases and consumption habits.

Besides it being, to say the least, very unpleasant to have our sex lives revealed by the details of our rides home, there is indeed ample room for concern considering Uber’s policy and recent practices.

Uber has a very broad privacy policy to which users actually give their consent when they download its app. Indeed, it establishes very few limits to the use of the collected data. According to its policy, Uber can use the ‘Usage Information’ for a variety of purposes, including to enhance or improve its services. In fact, to attain that goal, Uber may even supplement some of the information collected about its customers with records from third parties.

Quite recently, it announced an “in-depth review and assessment of [its] existing data privacy program”. This willingness to change is certainly not unrelated to the comments of a senior executive, received with a wave of strong criticism, suggesting that Uber was planning to hire a team of opposition researchers to dig up dirt on its critics in the media, referring specifically to a female journalist.

Of course, this could have merely been a distasteful and off-the-record (because being off the record makes it all better) comment made in a fancy dinner party which does not represent the overall position of the company.

However, shortly afterwards, reports emerged that Uber’s internal tool called “god view”, which shows the real-time location of vehicles and of customers who have requested a car, as well as account history, is easily accessible to employees without riders’ consent. As a matter of fact, it was used to access and track a reporter’s movements.

These facts come as little surprise to those already familiar with Uber’s own promotional methods, which have included featuring, at launch parties, a screen showing in real time where certain customers were.

This pattern is a sharp reminder of the risks at stake when we give away our personal data for convenience, and of how much information is revealed by the data we make available, almost casually, through an application on our mobile, tablet, computer or similar devices.

Imagine now, for instance, that you have a condition requiring frequent visits to a hospital or a specialised medical centre, and that Uber were able to infer your health status as easily as it inferred its users’ nightly romantic encounters.

I hope that this situation will lead to the adoption of a much stricter privacy policy, one which will end up raising the privacy standards for the entire industry.

But considering all this, I must ask: how much privacy are you willing to sacrifice for your convenience?

EU PNR – A plane not yet ready to fly

Plane not ready to fly!

The Civil Liberties, Justice and Home Affairs (LIBE) Committee of the European Parliament has recently discussed the Passenger Name Record (hereafter PNR) draft Directive according to which air carriers would be required, in order to help fight serious crime and terrorism, to provide EU Member States’ law enforcement bodies with information regarding passengers entering or leaving the EU.

This airline passenger information is usually collected during reservation and check-in procedures and relates to a large amount of data, such as travel dates, baggage information, travel itinerary, ticket information, home addresses, mobile phone numbers, frequent flyer information, email addresses, and credit card details.

Similar systems are already in place between the EU and the United States, Canada and Australia through bilateral agreements, allowing those countries to require EU air carriers to send PNR data on all persons flying to and from them. The European Commission’s proposal would now require airlines flying to and from the EU to transfer the PNR data of passengers on international flights to the Member State of arrival or departure.

Nevertheless, the negotiation of the proposed EU PNR passenger data exchange scheme has been quite wobbly. The European Commission put forward its proposal in 2011, and it ended up being rejected, in 2013, by the above-mentioned committee, on the grounds that it did not comply with the principle of proportionality and did not adequately protect personal data as required by the Charter of Fundamental Rights of the EU (hereafter CFREU) and by the Treaty on the Functioning of the EU (hereafter TFEU).

But concerns over possible threats to the EU’s internal security posed by European citizens returning home after fighting for the so-called “Islamic State” restarted the debate. Last summer, the European Council called on Parliament and Council to finalise work on the EU PNR proposal before the end of the year.

However, the ruling of the Court of Justice of the European Union on the EU’s Data Retention Directive, last April, which declared the mass-scale, systematic and indiscriminate collection of data a serious violation of fundamental rights, raises the question whether these PNR exchange systems with third countries are actually valid under EU law.

Similarly, many wonder if the abovementioned ruling shouldn’t be taken into account in the negotiations of this draft directive considering that it also refers to the retention of personal data by a commercial operator in order to be made available to law enforcement authorities.

And there are, indeed, real concerns involved.

Of course, an effective fight against terrorism might require law enforcement bodies to access PNR data, namely to tackle the issue regarding ‘foreign fighters’ who benefit from EU free movement rights which allow them to return from conflict zones without border checks. For this reason, some Member States are very keen on pushing forward this scheme.

However, the most elemental principles of the rule of law and the most fundamental rights of innocent citizens (the vast majority of travellers) should not be overstepped.

For instance, as the proposal stands, the PNR data could be retained for up to five years. Moreover, the linking of PNR data with other personal data will enable the access to data of innocent citizens in violation of their fundamental rights.

As ISIS fighters are mostly well known to law enforcement authorities and to the secret services, it is questionable how reasonable and proportionate such unlimited access to this private information can be as a means of preventing crime. How effective would tracking people’s movements be in fighting extremism? Won’t such widespread surveillance ultimately turn everyone into a suspect?

Moreover, from the airlines’ point of view, recording such an amount of data would undoubtedly imply an excessive increase in costs and, therefore, an unjustifiable burden.

The European Data Protection Supervisor (EDPS) has already considered that such a system on a European scale does not meet the requirements of transparency, necessity and proportionality, imposed by Article 8 of the CFREU, Article 8 of the European Convention of Human Rights and Article 16 of the TFEU. Similarly, several features of the PNR scheme have been highly criticized by the Fundamental Rights Agency (FRA).

At the moment, the European Commission has financed national PNR systems in 15 Member States (Austria, Bulgaria, Estonia, Finland, France, Hungary, Latvia, Lithuania, the Netherlands, Portugal, Romania, Slovenia, Spain, Sweden, and the UK), which leads to a fragmented and incoherent landscape. This is a very onerous outcome for airlines and creates a need for harmonisation among data exchange systems. The initiative is therefore believed by some MEPs to be intended to circumvent the European Parliament’s opposition to the Directive.

Considering all this, it is legitimate to question whether the EU PNR scheme will be finalised, as initially intended, before the end of the year. Given the deep differences between MEPs and among Member States, it appears more and more unlikely that the deadline will be met.

The ‘One Stop Shop’ mechanism reloaded

Get all your data protection matters handled here!

The ‘one stop shop’ mechanism is one of the most heralded and yet most controversial features of the General Data Protection Regulation, whose draft is currently being negotiated within the Council of the European Union.

According to the most recent proposal of the Italian Presidency of the Council of the European Union, where the data protection compliance of a business operating across several EU Member States is in question, or where individuals in different EU Member States are affected by a personal data processing operation, the mechanism would allow businesses to deal only with the Data Protection Authority (DPA) of the country where they are established.

Cases of pure national relevance, where the specific processing is solely carried out in a single Member State or only involves data subjects in that single Member State would not be covered by the model. In such circumstances, the local DPA would investigate and decide on its own without having to engage with other DPAs.

These cases are, however, deemed to be the exception, as the mechanism aims for better cooperation among the DPAs of the different EU Member States concerned by a specific matter.

Therefore, in cross-border cases, the competence of the DPA of the EU Member State of the main establishment does not lead to the exclusion of the intervention of all the other supervisory authorities concerned by the matter. In fact, while the supervisory authority of the Member State where the company is established will take the lead of the process which will ensue, the other authorities would be able to follow, cooperate and intervene in all the phases of the decision-making process.

In this context, if no consensus is reached among the several authorities involved, the European Data Protection Board (hereafter EDPB) will decide on the binding measures to be implemented by the controller or processor concerned in all of its establishments in the EU. Similarly, the EDPB will have legally binding powers in case of failure to agree on which authority should take the lead.

Businesses operating across multiple EU jurisdictions and handling vast amounts of personal data would benefit greatly from this ‘one stop shop’ concept, which would reduce the number of regulators investigating the same cases. Indeed, as things presently stand, a company with operations in more than one EU Member State has to deal with 28 different data protection laws and regulators, which unavoidably leads to a lack of harmonisation and to legal uncertainty.

The Article 29 Working Party has already manifested its support for a ‘one stop shop’ mechanism under the proposed EU General Data Protection Regulation.

However, in the past, Member States have manifested numerous reservations regarding this mechanism. Among the main concerns expressed were the following: businesses would be able to ‘forum shop’ in order to ensure that their preferred DPA leads the process; a DPA would not be able to take enforcement action in another jurisdiction; individuals’ rights to an effective remedy under EU laws would not be appropriately recognised; authorities without the lead position would not be able to influence processes related to data protection breaches involving nationals of their Member States.

The practical implementation of the ‘one stop shop’ mechanism is one of the main obstacles to the Member States reaching an agreement on the wording of the new EU General Data Protection Regulation. Let’s hope, then, that the solution proposed by the Italian Presidency of the Council of the European Union gets closer to a suitable accommodation of the various concerns expressed by Member States.

The EU external border’s security at travellers’ fingerprints

One fingerprint down, only nine to go! (Image: Frettie, CC BY 3.0 Unported)

Last semester, the Council of the European Union and the European Parliament voiced technical, operational and financial concerns regarding the overall implementation of the ‘Smart Borders Package’. In this context, the European Commission initiated an exercise aimed at identifying the most adequate ways to implement it. The exercise would include a Technical Study, whose conclusions would subsequently be tested through a pilot project.

The Technical Study, prepared by the European Commission, has been recently issued.

But let’s create some context here…

The EU is currently witnessing a very significant increase in the number of people crossing its borders, notably by air. We are talking about millions of people crossing, for the most diverse reasons, every day, at several points of the EU’s external border. This very fact makes airports the most important way in and out of the EU.

Therefore, if border management, namely the check-in procedures, is not duly modernised and supported by a proper legal and technical structure, longer delays and queues are to be expected. Added to this, there is a paramount security concern, due to the growing numbers of foreign fighters and refugees.

Indeed, under the current framework – the Schengen Borders Code – a thorough check at entry is required of all travellers crossing the external border, regardless of their level of risk or how frequently they actually travel in and out of the EU. Furthermore, the period of time a traveller stays in the Schengen area is calculated based solely on the stamps affixed in the travel document.

So one of the main goals of the ‘Smart Borders’ initiative is to actually simplify and facilitate the entrance of “bona fide” travellers at the external borders, significantly shortening the waiting times and queues they have to face. Additionally, the initiative aims at preventing irregular border crossing and illegal immigration, namely through the detection of overstays, i.e., people who have entered the EU territory lawfully, but have stayed longer than they were authorized to.

In this context, biometrics (metrics related to human features, i.e., elements specific to a person’s physical or psychological identity which allow that person to be identified; physiological biometrics, the kind specifically considered here, include characteristics and traits such as the face, fingerprints, eye retina, iris and voice) appear as a solution. In fact, biometric technologies (technologies able to electronically read and process biometric data in order to identify and recognise individuals) are cheaper and faster than ever, and are increasingly used in both the private and the public sector. They are mainly used in forensic investigation and access control systems, as they are considered an efficient tool for truthful identification and authentication.

Indeed, the use of biometric data for purposes other than law enforcement is currently being furthered at the EU level. The first biometric systems were deployed in regard to third-country nationals, such as asylum or visa applicants (Eurodac4)Eurodac is a large database of fingerprints of applicants for asylum and illegal immigrants found within the EU. The database helps the effective application of the Dublin convention on handling claims for asylum. and VIS)5)The Visa Information System, which ‘VIS’ stands for, allows Schengen States to exchange visa data. and criminals (SIS and SIS II)6) The Schengen Information System, which ‘SIS’ stands for, is the largest information system for public security in Europe.. In 2004, their use was extended to the ePassport of the European Union.

Later on, in 2008, the European Commission issued a Communication entitled ‘Preparing the next steps in border management in the European Union’, suggesting the establishment of an Entry/Exit System and a Registered Traveller Programme.

Subsequently, in 2013, the European Commission submitted a ‘Smart Borders Package’, including three legislative proposals. In this regard, the proposal for an Entry/Exit System (hereafter EES) was intended to register entry and exit data of third-country nationals crossing the external borders of the Member States of the European Union. Likewise, the proposal regarding a Registered Traveller Programme (hereafter RTP) aimed at offering an alternative border check procedure for pre-screened frequent third-country travellers, thus facilitating their access to the Union without undermining security. In parallel, the purpose of the third proposal was to amend the Schengen Borders Code accordingly.

The foremost aims of these instruments were a better management of the external borders of the Schengen Member States, the prevention of irregular immigration, the gathering of information on overstayers, and the facilitation of border crossing for frequent third-country travellers.

Therefore, the EES would make it possible to record the time and place of entry and the length of stay in an electronic database and, consequently, to replace the current system of stamping passports. In parallel, the RTP would allow frequent travellers from third countries to enter the EU subject to simplified border checks at automated gates.
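To make the change concrete: replacing stamps with an electronic register essentially turns the calculation of a stay into simple date arithmetic. The sketch below is purely illustrative – the data model and the check against the standard 90-days-in-any-180 rule are my own simplification, not the actual EES design:

```python
from datetime import date, timedelta

# Hypothetical entry/exit log for one traveller: (entry, exit) date pairs,
# as an EES-style register might record instead of passport stamps.
stays = [
    (date(2014, 1, 10), date(2014, 2, 20)),   # 42 days, inclusive
    (date(2014, 4, 1), date(2014, 5, 31)),    # 61 days, inclusive
]

def days_present(stays, window_end, window_days=180):
    """Days spent in the area within the rolling window ending at window_end."""
    window_start = window_end - timedelta(days=window_days - 1)
    total = 0
    for entry, exit_ in stays:
        # Count only the part of each stay that falls inside the window.
        start = max(entry, window_start)
        end = min(exit_, window_end)
        if start <= end:
            total += (end - start).days + 1
    return total

def is_overstay(stays, on_day, limit=90):
    """Flag a traveller who exceeds the short-stay limit."""
    return days_present(stays, on_day) > limit

print(days_present(stays, date(2014, 5, 31)))  # 103
print(is_overstay(stays, date(2014, 5, 31)))   # True
```

Nothing here could be derived from stamps without manual counting; with an electronic record, the overstay check becomes a trivial query.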

Although generally considered a welcome initiative in terms of modernization, this has nevertheless awakened some concerns regarding privacy and data protection. Indeed, the proposal focuses on the use of new technologies to facilitate the travelling of frequent travellers and the monitoring of EU border crossings by third-country nationals. In practice, it means that hundreds of millions of EU residents and visitors will be fingerprinted and have their faces electronically scanned.

Last year, the European Data Protection Supervisor (EDPS) adopted a very negative position regarding the proposal to introduce an automated biometrics-based EES for travellers in the region, calling it “costly, unproven and intrusive”. The data retention period in the EES, the choice of biometric identifiers, and the possibility for law enforcement authorities to access its database were among the main concerns raised.

As the proposed system would require ten fingerprints to confirm the identity of individuals at borders and to calculate the duration of their stay in the EU, the EDPS pointed to the unnecessary collection and excessive storage of personal information, considering that two or four fingerprints would be sufficient for identification purposes. The EDPS also expressed apprehension regarding the access to the EES database which would be granted to law enforcement authorities, even where the individuals registered were not suspected of any criminal offence. Questions were also raised regarding the possible exchange of information with third countries which do not have the same level of data protection.

Since then, the Technical Study – which I referred to at the beginning of this post – has been conducted in order to identify and assess the most suitable and promising options and solutions.

According to the document, one fingerprint alone can be used for verification, but it is acknowledged that a higher number of fingerprints could lead to better results in terms of accuracy, despite a more difficult implementation, “in particular, taking into account the difficulty of capturing more than 4 FPs [fingerprints] at land borders where limitations in enrolment quality and time may rise regarding the travellers in vehicle and use of hand-held equipment”. Nevertheless, the enrolment of four or eight fingerprints is recommended as one of the test cases of the pilot project.

Moreover, the study noted that “if facial image recognition would be used in combination with FPs [fingerprints], then it has a beneficial impact on both verification and identification in terms of speed and security leading to lower false rejection rate and reduction in number of FPs enrolled”. In addition, the Study has concluded that the use of facial identification alone is an option to be considered for EES and RTP.

That said, concerns regarding security should not overshadow the fact that biometric data are personal data. In fact, fingerprints can be qualified as sensitive data insofar as they can reveal ethnic information about the individual.

Therefore, biometric data can only be processed if there is a legal basis and the processing is adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed. In this context, the purpose limitation is a paramount principle. The definition of the purpose for which the biometric data are collected and subsequently processed is therefore a prerequisite to their subsequent use.

In parallel, the accuracy, data retention and data minimisation principles have to be considered, as the data collected should be precise, proportionate and kept for no longer than is necessary for the purposes for which it was first collected.

Besides, the processing of biometric data must rest on a legitimate legal ground, such as the consent of the data subject, which must be freely given, specific and fully informed. In this context, the performance of a contract, compliance with a legal obligation and the pursuit of the legitimate interests of the data controller also constitute legal grounds to that effect.

It must be noted that the processing of biometric data raises these and other important privacy and data protection concerns that, more often than not, are not acknowledged by the public.

To start with, biometric data in general, and fingerprint data in particular, are irrevocable due to their stability over time. This makes potential data breaches all the more dangerous.

In addition, the highly complex technologies that electronically read and process biometric data, and the diversified methods and systems employed in their collection, processing and storage, cannot ensure full accuracy, even though fingerprints do present a high level of precision. In fact, low quality of the data or of the extraction algorithms may lead to wrongful results and, therefore, to false rejections or false matches. This might have adverse consequences for individuals, namely regarding the irreversibility of decisions taken on the basis of a wrong identification.
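The trade-off between false rejections and false matches can be illustrated with a toy model – the score distributions below are invented for demonstration, and real systems rely on minutiae-based matching algorithms rather than random scores. A matcher outputs a similarity score, and wherever the decision threshold is placed, one type of error is traded for the other:

```python
import random

random.seed(42)  # deterministic toy data

# Simulated similarity scores: genuine comparisons (same person) tend to
# score high, impostor comparisons (different people) tend to score low,
# but the two distributions overlap -- that overlap is what produces errors.
genuine = [random.gauss(0.80, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.40, 0.10) for _ in range(10_000)]

def error_rates(threshold):
    """False rejection rate (genuine scored too low) and
    false acceptance rate (impostor scored high enough)."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

# Raising the threshold lowers false matches but raises false rejections.
for t in (0.5, 0.6, 0.7):
    frr, far = error_rates(t)
    print(f"threshold={t}: FRR={frr:.3f}, FAR={far:.3f}")
```

No threshold eliminates both error types at once, which is precisely why the number and quality of the biometric samples enrolled matters so much.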

Moreover, the risks associated with the storage of biometric data and its possible linking with other databases raise concerns about the security of the data and about uses incompatible with the purposes which initially justified the processing.

That said, we will have to wait for the results of the Pilot Project being developed by the eu-LISA Agency 7)The acronym stands for Agency for the Operational Management of large-scale IT Systems in the area of Freedom, Security and Justice., which is expected to be completed during 2015, in order to verify the feasibility of the options identified in the Technical Study.

References   [ + ]

1. Copyright by Frettie under the Creative Commons Attribution 3.0 Unported
2. The concept refers to metrics related to human features, i.e., to elements which are specific to the physical or psychological identity of a person and, therefore, make it possible to identify that person. Physiological biometrics, which we are specifically considering in this context, refer to human characteristics and traits, such as face, fingerprints, eye retina, iris and voice.
3. Technologies which are able to electronically read and process biometric data, in order to identify and recognize individuals.
4. Eurodac is a large database of fingerprints of applicants for asylum and illegal immigrants found within the EU. The database helps the effective application of the Dublin convention on handling claims for asylum.
5. The Visa Information System, which ‘VIS’ stands for, allows Schengen States to exchange visa data.
6. The Schengen Information System, which ‘SIS’ stands for, is the largest information system for public security in Europe.
7. The acronym stands for Agency for the Operational Management of large-scale IT Systems in the area of Freedom, Security and Justice.

Are you ready for the Internet of Things?

Everything is connected. 1)Copyright by Wilgengebroed under the Creative Commons Licence – Attribution 2.0 Generic

Imagine a world where people would receive information on their smart phones about the contents of their fridge; where cars involved in an accident would call emergency services, allowing quicker location and deployment of help; where cars would suggest alternative routes to avoid traffic jams; where personal devices would make it possible to monitor the health of patients or to control the regular medication of elderly persons; where washing machines would turn on when energy demand on the grid is lowest and alarm clocks and coffee machines could automatically be reset when a morning appointment is cancelled; where a smart oven could be remotely triggered to heat up the dinner inside by the time you reach home…

While these scenarios once belonged to the world of sci-fi, it is not so hard to picture any of these technologies nowadays. The momentum we are living through, and all the progress already woven into our lives, brings the certitude that it is only a matter of time before we reach such a future. Technological advancements are enabling achievements that once may have seemed impractical and are turning sci-fi scenarios into reality.

We are smoothly entering a new age… the age of the Internet of Things (hereafter IoT). Indeed, the IoT may already be happening around us. It suffices to think of all the quite recent changes that we already accept as ordinary.

But what is the IoT all about?

The IoT is a concept which refers to a reality where everyday physical objects are wirelessly connected to the Internet and are able, without human intervention, to sense and identify themselves to surrounding devices, creating a network of communication and interaction that collects and shares data. It is therefore associated with products with machine-to-machine communication capabilities, which are called ‘smart’.

The high-tech evolution has made ‘smart’ more convenient and accessible and has made the vast majority of us technologically dependent in several areas of our daily lives. Connected devices have proliferated around us. Consider, for instance, the number of smart phones and other smart devices that most of us can no longer conceive of life without, as they allow us to connect with the world as never before.

Similarly, our domestic convenience and comfort have been expanded in ways that once belonged to the imaginary. Homes, housework and household activity can be fully automated, enabling us to remotely control lighting, alarm systems, heating or ventilation. The connection of domestic devices to the Internet is usually referred to as “home automation” or “domotics”.

In parallel, we are now capable of the ‘quantified self’, commonly defined as the self-knowledge acquired through self-tracking with technology (for instance, pedometers and sleep trackers). One can now track, for example, biometrics such as insulin and cortisol levels, or record more mundane information about one’s own habits and lifestyle, such as physical activity and caloric intake. This monitoring can increasingly be done through wearables, i.e., computer-powered devices or equipment that can be worn by an individual, including watches, clothing, glasses and similar items. Google Glass, Android Wear and the Apple Watch are the most famous recent examples.

Scarily enough, the number of objects connected to the Internet already exceeds the number of people on earth. The European Commission claims that an average person currently has at least two objects connected to the Internet, a figure expected to grow to seven by 2015, with 25 billion wirelessly connected devices globally. By 2020 that number could double to 50 billion.

However, every time we add another device to our lives, we give away another little piece of ourselves.

Consequently, along with its conveniences, and due to the easy and cheap collection of large amounts of data it allows, the idea of a hyper-connected world raises important concerns regarding privacy, security and data protection. Truth be told, while it is relatively well known that our mobile devices frequently send data off to the Internet, many of us do not understand the far-reaching implications of carrying around an always-on connection, let alone of having almost all of one’s life connected to the Internet.

In fact, such objects will make it possible to access a humongous amount of personal data and to spread it around without any awareness or control on the part of the users concerned. From preferences, habits and lifestyle to sensitive data such as health or religious information, from geo-location and movements to other behaviour patterns, we will put a huge amount of information out there. In this context, the crossing of data collected by different IoT devices will allow the building of very detailed user profiles.

It is essential that users are given control over the data which directly refers to them and are properly informed of the purposes its processing might serve. In fact, it is currently very common for the data generated to be processed without consent or with poorly informed consent. Quite often, further processing of the original data is not subject to any purpose limitation.

Moreover, as each device will be attributed an IP address in order to connect to the Internet, each one will be inherently insecure by its very nature. Indeed, with almost everything connected to the Internet, every device will be at risk of being compromised and hacked. Imagine that a hacking attack on your car or home could take control of the vehicle or install a spying application on your TV. Imagine that your fridge could receive spam and send phishing e-mails. The data collected through medical devices could be exposed. After all, it is already easier to hack routers and modems than computers.

Last but not least, as IoT devices will be able to communicate with other devices, the security concerns multiply exponentially. Indeed, a single compromised device could expose all the other devices on its network.

Now imagine that all your life is embedded in Internet-connected devices… Think, for instance, of fridges, ovens, washing machines, air conditioners, thermostats, lighting systems, music players, baby monitors, TVs, webcams, door locks, home alarms and garage door openers, just to name a few. The diversity of connected devices is just astonishing! We may reach the point where you will have to install a firewall for your toaster and set a password to secure your fridge.

From a business point of view, questions regarding the security setup and the software and operating-system vulnerabilities of devices connected to the Internet also have to be answered. Indeed, companies are increasingly using smart industrial equipment and IoT devices and systems, from cars to cameras and elevators, from building management systems to supply chain management systems, from financial systems to alarm systems.

On another level, the security of nations’ critical infrastructures could also be at stake. Imagine, for instance, that the traffic system, the city’s electric grid or the water supply could easily be accessed by a third party with ill intentions.

Of course, the EU could not be indifferent to this emerging new reality and to the challenges it presents.

In 2012, the European Commission launched a public consultation, seeking input regarding a future policy approach to smart electronic devices and the framework required to ensure an adequate level of control over data gathering, processing and storage, without impairing the economic and societal potential of the IoT. As a result, the European Commission published its conclusions in 2013.

Last month, the European data protection authorities, assembled in the Article 29 Working Party, adopted an opinion regarding the IoT, according to which the expected benefits for businesses and citizens cannot come at the detriment of privacy and security. Therefore, the EU Data Protection Directive 95/46/EC and the e-Privacy Directive 2002/58/EC are deemed fully applicable to the processing of personal data through different types of devices, applications and services in the context of the IoT. The opinion addresses some recommendations to several stakeholders participating in the development of the IoT, namely device manufacturers, application developers and social platforms.

More recently, at the 36th International Conference of Data Protection and Privacy Commissioners, the assembled data protection officials adopted a declaration on the Internet of Things and a resolution on big data analytics.

The aforementioned initiatives demonstrate the existing concerns regarding Big Data and IoT and the intention to subject them to data protection laws. In this context, it is assumed that data collected through IoT devices should be regarded and treated as personal data, as it implies the processing of data which relate to identified or identifiable natural persons.

This obviously requires valid consent from data subjects for its use. Parties collecting information through IoT devices therefore have to ensure that the consent is fully informed, freely given and specific. The cookie consent requirement is also applicable in this context.

In parallel, data protection principles are deemed to be applicable in the IoT context. Therefore, according to the principle of transparency, parties using IoT devices information have to inform data subjects about what data is collected, how it is processed, for which purposes it will be used and whether it will be shared with third parties. Similarly, the principle of purpose limitation, according to which personal data must be collected for specified, explicit and legitimate purposes and not be further processed in a way incompatible with those purposes, is also applicable. Furthermore, considering the data minimization principle, the data collected should not be excessive in relation to the purpose and not be retained longer than necessary.
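How those principles might translate into a concrete processing check can be sketched as follows – the record schema, purpose names and retention periods are entirely hypothetical, chosen only to illustrate purpose limitation and retention working together:

```python
from datetime import datetime, timedelta

# Hypothetical policy: which purposes data may be processed for,
# and how long data may be kept for each purpose (data minimisation
# in time). None of these values comes from any real regulation.
ALLOWED_PURPOSES = {"billing", "service_improvement"}
RETENTION = {
    "billing": timedelta(days=365),
    "service_improvement": timedelta(days=90),
}

def may_process(record, purpose, now=None):
    """Allow processing only for a declared, permitted purpose,
    and only while the data is within its retention period."""
    now = now or datetime.utcnow()
    if purpose not in ALLOWED_PURPOSES:
        return False                       # purpose not permitted at all
    if purpose not in record["declared_purposes"]:
        return False                       # purpose limitation: not what it was collected for
    age = now - record["collected_at"]
    return age <= RETENTION[purpose]       # retention limit

record = {
    "declared_purposes": {"billing"},      # what the user consented to
    "collected_at": datetime(2014, 1, 1),
}
print(may_process(record, "billing", now=datetime(2014, 6, 1)))              # True
print(may_process(record, "service_improvement", now=datetime(2014, 6, 1)))  # False
```

The point of the sketch is that purpose limitation is a gate checked at every use, not a one-off declaration at collection time.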

Considering the vast number of stakeholders involved (device manufacturers, social platforms, third-party applications, device lenders or renters, data brokers or data platforms), a well-defined allocation of legal responsibilities is required. Therefore, a clear accountability of data controllers shall be established.

In this context, Directive 2002/58/EC is deemed applicable when an IoT stakeholder stores or gains access to information already stored on an IoT device, inasmuch as IoT devices qualify as “terminal equipment” (smartphones and tablets), on which software or apps were previously installed both to monitor the user’s environment through embedded sensors or network interfaces, and to then send the data collected by these devices to the various data controllers involved…

That said, one can only rejoice that the enchantment with the possibilities of the IoT does not surpass the awareness of the existing vulnerabilities. But it remains to be seen how these and the other data protection and privacy requirements can be effectively implemented in practice.

We are certainly on the right track to dodge any black swan event. However, it won’t be that easy to find appropriate answers to the massive security issues that come along. And one should not forget that technology always seems to be one step ahead of legislation.

So, the big question to ask is:

Are we really ready for the Internet of Things?

References   [ + ]

1. Copyright by Wilgengebroed under the Creative Commons Licence – Attribution 2.0 Generic

National Security: The new responsibility of Tech

Let’s take a closer look at… everything!

Private tech companies are no longer expected only to aim at profit. No. Besides having been assigned the task of distinguishing between public and private interest, they are now required to act as watchdogs for the intelligence services.

I am referring today to the very interesting opinion article by Robert Hannigan, published in the Financial Times last week, which I highly recommend.

Hannigan is the new Director of GCHQ, the Government Communications Headquarters, i.e., the British electronic intelligence agency. It operates closely with the British security service, MI5; the overseas intelligence service, MI6; and the United States National Security Agency (NSA).

In the above-mentioned article, Hannigan called for “better arrangements for facilitating lawful investigation by security and law enforcement agencies than we have now” in order to find “a new deal between democratic governments and the technology companies in the area of protecting our citizens”.

He mainly referred to the radical group Islamic State, a.k.a. ISIS and ISIL, “whose members have grown up on the Internet” and are “exploiting the power of the web to create a jihadist threat with near-global reach.” In this context, he qualified tech companies as “the command and control networks of choice” for terrorists.

Basically, and summing it up: let’s all forget about Snowden’s revelations (which I have already addressed here) and see the big picture: because terrorists use social media websites, tech companies such as Facebook and Twitter ought to share all our private data with intelligence agencies to stop terrorism. As we all have a common enemy, let’s allow a more unimpeded sharing of our data between private technology companies and the intelligence community. In these dangerous times, who needs privacy anyway, right?

Coincidentally or not, these declarations came in the wake of Apple’s and Google’s sophisticated encryption initiatives regarding data on their mobiles and email systems. Indeed, encryption makes the collection of data off the wires more difficult. Unsurprisingly enough, these statements are also in line with the efforts of FBI Director James Comey.

However, despite seemingly being intended to be simultaneously inspiring, alarmist and paranoia-inducing, I couldn’t help noticing that the article is actually full of contradictions, which I assume were intended to go unacknowledged.

To begin with, the conclusion that techniques for encryption or anonymisation through mobile technology in fact help terrorists hide from the security services – or, as stated, “are the routes for facilitation of crime and terrorism” – is quite a far-fetched one. Terrorism was here long before new technologies as we know them and, unfortunately, terrorists have always found ways to hide their operations quite successfully.

As for the allusion that the leaking of information by Edward Snowden has actually helped the development of terror networks… Seriously? Of course, the problem was not mass surveillance in itself. The real issue was that those monitoring activities were revealed to the world.

Besides, the use of the Internet by radical groups for promotion, intimidation and the online recruitment of potential fighters is already a general concern. But the thing is, as these activities happen on social media platforms, everybody can actually see them. So where does the need for more direct and thorough access to social platform data come from? It is not as if secret terrorist operations are expected to be conducted on Facebook or Twitter. I mean, these companies are not exactly known for the security of their communications.

Moreover, nobody actually believes that privacy is an absolute right. The ECHR is quite clear on that. The right to privacy must always be balanced against other rights, freedoms and needs, such as the right to information, the freedom of expression and the need to ensure national security. However, I fail to see the balance between civil liberties and national security in Hannigan’s speech. Similarly, I fail to understand how free and secretive interference in our privacy – for security reasons, always, of course – can be lawful, and how its proportionality is ensured.

Likewise, why isn’t a prior court order appropriate for intelligence agencies’ requests for data? It should be up to the courts, not GCHQ or the tech companies, to decide when our personal data shall be shared with the intelligence services. Courts are the only guarantee of individuals’ rights and freedoms and of principles such as the necessity and proportionality of the measures taken. Tech companies cannot refuse such requests when they are based on a court order. So, when Hannigan calls for ‘better arrangements’ and ‘new deals’, it is very questionable what is truly meant.

That said, the consideration that users of social media platforms “do not want the media platforms they use with their friends and families to facilitate murder or child abuse” was just the cherry on top of a very bitter anniversary cake – the 25th anniversary of the world wide web – which Hannigan obviously did not fail to mention.

These arguments are not fit for a “mature debate on privacy in the digital age”. Indeed, fear, uncertainty and doubt (FUD) is quite a well-known strategy for influencing perception and misinforming the public.

For more regarding this brilliant-for-all-the-wrong-reasons article, check the following posts.

© 2019 The Public Privacy
