Tag: Security (page 2 of 3)

Meet Regin

Yes, you have been hacked and spied upon!

Regin is unlike the ordinary viruses you might find on your computer. It is the most recently discovered powerful tool for cyber espionage between nation states, as reported by the computer security firm Symantec and by its main competitor, Kaspersky Lab.

Regin is described as a sophisticated cyber attack platform, which operates much like a back-door Trojan, mainly affecting Windows-based computers. It can be customized with different capabilities depending on the target and, while it operates in five stages, only the first one is detectable.

Among its diversified range of features, Regin allows remote access to and control of a computer, enabling the attacker to copy files from the hard drive, recover deleted files, steal passwords, monitor network traffic, turn on the microphone or the camera, and capture screenshots.

According to the above-mentioned reports, Regin has been stealthily active since at least 2008 and has been used in systematic spying campaigns against a wide range of international targets, including government entities, Internet service providers, telecom operators, financial institutions, mathematical and cryptographic researchers, businesses large and small, and private individuals.

As for geographical incidence, Saudi Arabia and Russia appear to be Regin’s main targets. Mexico, Iran, Afghanistan, India, Belgium and Ireland are among the other targeted countries.

The conclusions drawn in Symantec’s report are, to say the least, unsettling. It states that, given the malware’s high degree of technical competence, its development is likely to have taken months, if not years, to complete.

Regin is a highly complex threat which has been used in systematic data collection and intelligence-gathering campaigns. The development and operation of this malware would have required a significant investment of time and resources, indicating that a nation state is responsible. Its design makes it highly suited for persistent, long-term surveillance operations against targets.

Therefore, the new million-dollar question is: who is behind its conception? Unfortunately, it is very difficult to find out who created or financed its development, because the culprits left little trace behind. However, it is well known that few countries are technologically advanced enough to engineer such a precise tool or to conduct an operation on such a large scale.

As a governmental instrument for mass surveillance, cyber espionage and intelligence gathering, Regin is far from unique. A few years ago, the world witnessed the rise of similar viruses, also of nation-state origin: Stuxnet, Duqu and Flame were three previously detected viruses employed for industrial sabotage or cyber espionage.

That said, this historical pattern of cyber attacks clearly shows that virtual wars are being fought on the almost invisible battlefield of cyberspace, where nation states clash silently. Once limited to opportunistic criminals, viruses are now the weaponry of this cyber warfare.

But a state-sponsored cyber attack does not really come as a surprise. Governments have always spied on each other in order to obtain strategic, economic, political, or military advantage. The discovery of Regin merely confirms that investments continue to be made in developing implacable instruments for espionage and intelligence gathering.

In this context, it is no coincidence that cyber security is increasingly regarded as a decisive part of any government’s security strategy, as it involves protecting national information and infrastructure systems from major cyber threats.

And while these sophisticated attacks are conducted, sensitive information about individuals is accessed, stolen, collected and stored by unknown attackers. To what end? Well, it could be anything, really…

EU PNR – A plane not yet ready to fly

Plane not ready to fly!

The Civil Liberties, Justice and Home Affairs (LIBE) Committee of the European Parliament has recently discussed the draft Passenger Name Record (hereafter PNR) Directive, under which air carriers would be required, in order to help fight serious crime and terrorism, to provide EU Member States’ law enforcement bodies with information on passengers entering or leaving the EU.

This airline passenger information is usually collected during reservation and check-in procedures and covers a wide range of data, such as travel dates, baggage information, travel itinerary, ticket information, home addresses, mobile phone numbers, frequent flyer information, email addresses, and credit card details.

Similar systems are already in place between the EU and the United States, Canada and Australia through bilateral agreements, allowing those countries to require EU air carriers to send PNR data on all persons flying to and from them. The European Commission’s proposal would now require airlines flying to and from the EU to transfer the PNR data of passengers on international flights to the Member State of arrival or departure.

Nevertheless, the negotiation of the proposed EU PNR data exchange scheme has been quite wobbly. The European Commission proposed the legal basis in 2011, but the above-mentioned committee rejected it in 2013, on the grounds that it did not comply with the principle of proportionality and did not adequately protect personal data as required by the Charter of Fundamental Rights of the EU (hereafter CFREU) and by the Treaty on the Functioning of the EU (hereafter TFEU).

But concerns over possible threats to the EU’s internal security posed by European citizens returning home after fighting for the so-called “Islamic State” restarted the debate. Last summer, the European Council called on Parliament and Council to finalise work on the EU PNR proposal before the end of the year.

However, the ruling of the Court of Justice of the European Union last April regarding the EU’s Data Retention Directive, which declared the mass-scale, systematic and indiscriminate collection of data a serious violation of fundamental rights, raises the question of whether these PNR exchange systems with third countries are actually valid under EU law.

Similarly, many wonder whether the above-mentioned ruling shouldn’t be taken into account in the negotiations of this draft directive, considering that it also concerns the retention of personal data by a commercial operator in order to be made available to law enforcement authorities.

And there are, indeed, real concerns involved.

Of course, an effective fight against terrorism might require law enforcement bodies to access PNR data, namely to tackle the issue of ‘foreign fighters’ who benefit from EU free movement rights, which allow them to return from conflict zones without border checks. For this reason, some Member States are very keen on pushing this scheme forward.

However, the most elemental principles of the rule of law and the most fundamental rights of innocent citizens (the vast majority of travellers) should not be overstepped.

For instance, as the proposal stands, PNR data could be retained for up to five years. Moreover, linking PNR data with other personal data would enable access to the data of innocent citizens, in violation of their fundamental rights.

As ISIS fighters are mostly well known to law enforcement authorities and secret services, it is questionable how reasonable and proportionate such unlimited access to this private information can be as a means of preventing crime. How effective would tracking people’s movements be in the fight against extremism? Won’t such widespread surveillance ultimately turn everyone into a suspect?

In addition, from the airlines’ point of view, recording such an amount of data would undoubtedly imply an excessive increase in costs and, therefore, an unjustifiable burden.

The European Data Protection Supervisor (EDPS) has already considered that such a system on a European scale does not meet the requirements of transparency, necessity and proportionality imposed by Article 8 of the CFREU, Article 8 of the European Convention on Human Rights and Article 16 of the TFEU. Similarly, several features of the PNR scheme have been highly criticized by the Fundamental Rights Agency (FRA).

At the moment, the European Commission has financed national PNR systems in 15 Member States (Austria, Bulgaria, Estonia, Finland, France, Hungary, Latvia, Lithuania, the Netherlands, Portugal, Romania, Slovenia, Spain, Sweden, and the UK), which leads to a fragmented and incoherent landscape. This is a very onerous outcome for airlines and creates a need for harmonization among data exchange systems. Some MEPs therefore believe the initiative is intended to circumvent the European Parliament’s opposition to the Directive.

All things considered, it is legitimate to question whether the EU PNR will be finalised, as initially intended, before the end of the year. Given the deep differences between MEPs and among Member States, it appears increasingly unlikely that the deadline will be met.

The EU external border’s security at travellers’ fingerprints

One fingerprint down, only nine to go! [1]

Last semester, the Council of the European Union and the European Parliament voiced technical, operational and financial concerns regarding the overall implementation of the ‘Smart Borders Package’. In this context, the European Commission initiated an exercise aimed at identifying the most adequate ways to implement it. This exercise would include a Technical Study, whose conclusions would subsequently be tested through a Pilot project.

The Technical Study, prepared by the European Commission, has been recently issued.

But let’s create some context here…

The EU is currently witnessing a very significant increase in the number of people crossing its borders, namely by air. I am talking about millions of people crossing, for the most diverse reasons, every day, at several points of the EU’s external border. This very fact makes airports the most important way in and out of the EU.

Therefore, if border management, namely check-in procedures, is not duly modernized and supported by a proper legal and technical structure, longer delays and queues are to be expected. Added to this, there is a paramount security concern, due to the growing number of foreign fighters and refugees.

Indeed, under the current framework – the Schengen Borders Code – a thorough check at entry is required for all travellers crossing the external border, regardless of their level of risk or how frequently they actually travel in and out of the EU. Furthermore, the period of time a traveller stays in the Schengen area is calculated based solely on the stamps affixed in the travel document.

So one of the main goals of the ‘Smart Borders’ initiative is to simplify and facilitate the entrance of “bona fide” travellers at the external borders, significantly shortening the waiting times and queues they face. Additionally, the initiative aims at preventing irregular border crossing and illegal immigration, namely through the detection of overstayers, i.e., people who have entered EU territory lawfully but have stayed longer than they were authorized to.

In this context, biometrics [2] appear as a solution. In fact, biometric technologies [3] are cheaper and faster than ever and are increasingly used in both the private and the public sector. They are mainly used in forensic investigation and access control systems, as they are considered an efficient tool for reliable identification and authentication.

Indeed, the use of biometric data for purposes other than law enforcement is currently being furthered at the EU level. The first biometric systems were deployed with regard to third-country nationals, such as asylum or visa applicants (Eurodac [4] and VIS [5]) and criminals (SIS and SIS II [6]). In 2004, their use was extended to the European Union ePassport.

Later on, in 2008, the European Commission issued a Communication entitled ‘Preparing the next steps in border management in the European Union’, suggesting the establishment of an Entry/Exit System and a Registered Traveller Programme.

Subsequently, in 2013, the European Commission submitted a ‘Smart Borders Package’, including three legislative proposals. In this regard, the proposal for an Entry/Exit System (hereafter EES) was intended to register entry and exit data of third country nationals crossing the external borders of the Member States of the European Union. Likewise, the proposal regarding a Registered Traveller Programme (hereafter RTP) aimed at offering an alternative border check procedure for pre-screened frequent third-country travellers, thus facilitating their access to the Union without undermining security. In parallel, the purpose of the third proposal was to amend accordingly the Schengen Borders Code.

The foremost goals of these instruments were better management of the external borders of the Schengen Member States, the prevention of irregular immigration, information on overstayers, and the facilitation of border crossing for frequent third-country national travellers.

Therefore, the EES would allow the time and place of entry and the length of stay to be recorded in an electronic database, replacing the current passport-stamping system. In parallel, the RTP would allow frequent travellers from third countries to enter the EU subject to simplified border checks at automated gates.
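To make the contrast with stamp-based arithmetic concrete, here is a minimal sketch of how an electronic entry/exit record could flag overstayers automatically. The record layout and the fixed 90-day authorised-stay limit are illustrative assumptions on my part, not the actual EES specification.

```python
from datetime import date, timedelta

# Illustrative only: the field layout below is invented, and the 90-day
# limit is the standard Schengen short-stay ceiling, used here as a fixed
# assumption rather than the per-traveller authorisation a real EES would hold.
AUTHORISED_STAY = timedelta(days=90)

def flag_overstayers(crossings, today):
    """crossings: iterable of (traveller_id, entry_date, exit_date_or_None).

    A missing exit date means the traveller is still inside the area,
    so the stay is measured up to 'today'."""
    flagged = []
    for traveller_id, entry, exit_date in crossings:
        stay_end = exit_date if exit_date is not None else today
        if stay_end - entry > AUTHORISED_STAY:
            flagged.append(traveller_id)
    return flagged

records = [
    ("A123", date(2014, 1, 10), date(2014, 2, 1)),  # left well within the limit
    ("B456", date(2014, 1, 5), None),               # never recorded an exit
]
print(flag_overstayers(records, today=date(2014, 6, 1)))  # ['B456']
```

The point of the sketch is simply that, unlike stamps, an electronic record turns the overstay check into a trivial query rather than a manual calculation at the booth.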

Although generally considered a welcome initiative in terms of modernization, it has nevertheless awakened some concerns regarding privacy and data protection. Indeed, the proposal focuses on the use of new technologies to facilitate travel for frequent travellers and to monitor the EU border crossings of third-country nationals. In practice, it means that hundreds of millions of EU residents and visitors will be fingerprinted and have their faces electronically scanned.

Last year, the European Data Protection Supervisor (EDPS) adopted a very negative position on the proposal to introduce an automated biometrics-based EES for travellers in the region, calling it “costly, unproven and intrusive”. The data retention period in the EES, the choice of biometric identifiers, and the possibility of law enforcement authorities accessing its database were among the main concerns raised.

As the proposed system would require ten fingerprints to confirm the identity of individuals at borders and to calculate the duration of their stay in the EU, the EDPS pointed to the unnecessary collection and excessive storage of personal information, considering that two or four fingerprints would be sufficient for identification purposes. The EDPS also expressed apprehension about the access to the EES database that would be granted to law enforcement authorities, even where the individuals registered were not suspected of any criminal offence. Questions were also raised regarding the possible exchange of information with third countries which do not have the same level of data protection.

Since then, the Technical Study – which I referred to at the beginning of this post – has been conducted in order to identify and assess the most suitable and promising options and solutions.

According to the document, one fingerprint alone can be used for verification, but it is acknowledged that a higher number of fingerprints could lead to better results in terms of accuracy, despite a more difficult implementation, “in particular, taking into account the difficulty of capturing more than 4 FPs [fingerprints] at land borders where limitations in enrolment quality and time may rise regarding the travellers in vehicle and use of hand-held equipment”. Nevertheless, the enrolment of four or eight fingerprints is recommended as one of the test cases of the pilot project.

Moreover, the study noted that “if facial image recognition would be used in combination with FPs [fingerprints], then it has a beneficial impact on both verification and identification in terms of speed and security leading to lower false rejection rate and reduction in number of FPs enrolled”. In addition, the Study has concluded that the use of facial identification alone is an option to be considered for EES and RTP.
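The study’s claim that combining facial recognition with fingerprints lowers the false rejection rate can be illustrated with a toy score-level fusion rule, one common way multi-modal biometric systems are built. The weights and threshold below are invented for illustration; real systems tune them against measured false-rejection and false-acceptance rates.

```python
# Toy score-level fusion: a fingerprint match score and a face match score
# are combined into one value before a single accept/reject threshold is
# applied. All numbers here are invented for illustration.

def verify(fp_score, face_score, threshold=0.6, w_fp=0.7, w_face=0.3):
    """Accept the traveller if the weighted sum of both match scores
    (each in the 0..1 range) clears the threshold."""
    fused = w_fp * fp_score + w_face * face_score
    return fused >= threshold

# A borderline fingerprint capture (e.g. a poor enrolment at a land border)
# can still verify when backed by a strong facial match:
print(verify(fp_score=0.55, face_score=0.90))  # True  (fused score 0.655)
print(verify(fp_score=0.55, face_score=0.40))  # False (fused score 0.505)
```

This is precisely why fusion reduces false rejections: a genuine traveller with one weak capture is not turned away as long as the other modality matches strongly.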

That said, security concerns should not overshadow the fact that biometric data are personal data. In fact, fingerprints can qualify as sensitive data insofar as they can reveal ethnic information about an individual.

Therefore, biometric data can only be processed if there is a legal basis and the processing is adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed. In this context, the purpose limitation is a paramount principle. The definition of the purpose for which the biometric data are collected and subsequently processed is therefore a prerequisite to their subsequent use.

In parallel, the accuracy, data retention and data minimisation principles have to be considered: the data collected should be accurate, proportionate and kept for no longer than necessary for the purposes for which it was first collected.

Besides, the processing of biometric data must rest on a legitimate legal ground, such as the consent of the data subject, which must be freely given, specific and fully informed. The performance of a contract, compliance with a legal obligation and the pursuit of the data controller’s legitimate interests may also constitute legal grounds to that effect.

It must be noted that the processing of biometric data raises these and other important privacy and data protection concerns that, more often than not, are not acknowledged by the public.

To start with, biometric data in general, and fingerprint data in particular, are irrevocable due to their stability over time. This makes potential data breaches all the more dangerous.

In addition, the highly complex technologies able to electronically read and process biometric data, and the diverse methods and systems employed in collection, processing and storage, cannot ensure full accuracy, even though fingerprints do present a high level of precision. In fact, low-quality data or extraction algorithms may lead to wrongful results and, therefore, to false rejections or false matches. This can have adverse consequences for individuals, namely the irreversibility of decisions taken on the basis of a wrong identification.

Moreover, the risks associated with the storage of biometric data and its possible linking with other databases raise concerns about the security of the data and about uses incompatible with the purposes which initially justified the processing.

That said, we will have to wait for the results of the Pilot project being developed by the eu-LISA Agency [7], expected to be completed during 2015, in order to verify the feasibility of the options identified in the Technical Study.

References

1. Copyright by Frettie under the Creative Commons Attribution 3.0 Unported
2. The concept refers to metrics related to human features, i.e., to elements which are specific to the physical or psychological identity of a person and which therefore allow that person to be identified. Physiological biometrics, which we are specifically considering in this context, refer to human characteristics and traits such as the face, fingerprints, retina, iris and voice.
3. Technologies which are able to electronically read and process biometric data, in order to identify and recognize individuals.
4. Eurodac is a large database of fingerprints of applicants for asylum and illegal immigrants found within the EU. The database helps the effective application of the Dublin convention on handling claims for asylum.
5. The Visa Information System, which ‘VIS’ stands for, allows Schengen States to exchange visa data.
6. The Schengen Information System, which ‘SIS’ stands for, is the largest information system for public security in Europe.
7. The acronym stands for Agency for the Operational Management of large-scale IT Systems in the area of Freedom, Security and Justice.

Are you ready for the Internet of Things?

Everything is connected. [1]

Imagine a world where people would receive information on their smartphones about the contents of their fridge; cars involved in an accident would call the emergency services, allowing for quicker location and deployment of help; cars would suggest alternative routes to avoid traffic jams; personal devices would allow patients’ health to be monitored or the regular medication of elderly persons to be controlled; washing machines would turn on when energy demand on the grid is lowest; alarm clocks and coffee machines would automatically reset when a morning appointment is cancelled; a smart oven could be remotely triggered to heat up the dinner inside by the time you reach home…

If these scenarios once belonged to the world of science fiction, it is not so hard to picture any of them nowadays. The momentum we are living through, and all the technology already woven into our lives, make it a near certainty that such a future is only a matter of time away. Technological advancements are enabling achievements that once seemed impractical and are turning sci-fi scenarios into reality.

We are smoothly entering a new age… the age of the Internet of Things (hereafter IoT). Indeed, the IoT may already be happening around us. It suffices to think of all the quite recent changes that we already accept as ordinary.

But what is the IoT all about?

The IoT is a concept which refers to a reality where everyday physical objects are wirelessly connected to the Internet and able, without human intervention, to sense and identify themselves to other surrounding devices, creating a network of communication and interaction in which data is collected and shared. It is therefore associated with products with machine-to-machine communication capabilities, which are called ‘smart’.

The high-tech evolution has made ‘smart’ more convenient and accessible, and has made the vast majority of us technologically dependent in several areas of our daily lives. Connected devices have proliferated around us. Consider, for instance, the number of smartphones and other smart devices that most of us can no longer conceive of life without, as they allow us to connect with the world as never before.

Similarly, our domestic convenience and comfort have been expanded in ways that once belonged to the imaginary. Homes, housework and household activity can be fully automated, enabling us to remotely control lighting, alarm systems, heating or ventilation. Domestic devices that can be connected to the Internet are usually referred to as “home automation” or “domotics”.

In parallel, we are now capable of the ‘quantified self’, commonly defined as self-knowledge acquired through self-tracking with technology (for instance, pedometers and sleep trackers). One can now track, for example, biometrics such as insulin and cortisol, or record more mundane information about one’s own habits and lifestyle, such as physical activity and caloric intake. This monitoring is increasingly done by wearables, i.e., computer-powered devices or equipment that can be worn, including watches, clothing, glasses and similar items. Google Glass, Android Wear and the Apple Watch are the most famous recent examples.

Scarily enough, the number of objects connected to the Internet already exceeds the number of people on Earth. The European Commission claims that an average person currently has at least two objects connected to the Internet, a figure expected to grow to seven by 2015, with 25 billion wirelessly connected devices globally. By 2020, that number could double to 50 billion.

However, every time we add another device to our lives, we give away a little more of ourselves.

Consequently, along with its conveniences, and given the amount of data it allows to be collected easily and cheaply, the idea of a hyper-connected world raises important concerns regarding privacy, security and data protection. Truth be told, while it is relatively well known that our mobile devices frequently send data to the Internet, many of us do not understand the far-reaching implications of carrying around an always-on connection, let alone of having almost our entire lives connected to the Internet.

In fact, such objects will make it possible to access a humongous amount of personal data and to spread it around without any awareness or control on the part of the users concerned. From preferences, habits and lifestyle to sensitive data such as health or religious information, from geolocation and movements to other behaviour patterns, we will put a huge amount of information out there. In this context, the crossing of data collected by means of different IoT devices will allow a very detailed user profile to be built.
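To see why cross-device linkage is worrying, consider a toy example in which individually innocuous records from three hypothetical devices, joined on a shared user identifier, add up to a rather intimate profile. All device names and fields here are invented.

```python
# Each source on its own reveals little; joined on a common identifier,
# the records combine into a detailed profile. Everything here is invented.
fitness_tracker = {"user42": {"resting_heart_rate": 58, "sleep_hours": 6.2}}
smart_fridge    = {"user42": {"weekly_alcohol_units": 14}}
car_navigation  = {"user42": {"frequent_destination": "clinic"}}

def link_profiles(*sources):
    """Merge per-user records from several device data sources."""
    profiles = {}
    for source in sources:
        for user, fields in source.items():
            profiles.setdefault(user, {}).update(fields)
    return profiles

merged = link_profiles(fitness_tracker, smart_fridge, car_navigation)
print(merged["user42"])
```

None of the three records is sensitive in isolation; the merged view, hinting at health, drinking habits and medical visits, plainly is.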

It is essential that users are given control over the data which directly refers to them and are properly informed of the purposes its processing might serve. In fact, it is currently very common for the data generated to be processed without consent, or with poorly obtained consent. Quite often, further processing of the original data is not subject to any purpose limitation.

Moreover, as each device will be assigned an IP address in order to connect to the Internet, each one will be inherently insecure by its very nature. Indeed, with almost everything connected to the Internet, every device will be at risk of being compromised and hacked. Imagine that your car or home could be subjected to a hacking attack that takes control of the vehicle or installs a spying application on your TV. Imagine that your fridge could receive spam and send phishing e-mails. The data collected through medical devices could be exposed. After all, it is already easier to hack routers and modems than computers.

Last but not least, as IoT devices will be able to communicate with other devices, the security concerns multiply exponentially. Indeed, a single compromised device could expose all the other devices on the network.

Now imagine that all your life is embedded in Internet-connected devices… Think, for instance, of fridges, ovens, washing machines, air conditioners, thermostats, lighting systems, music players, baby monitors, TVs, webcams, door locks, home alarms and garage door openers, just to name a few. The diversity of connected devices is just astonishing! We may reach the point where you will have to install a firewall for your toaster and set a password to secure your fridge.

From a business point of view, questions regarding the security setup and the software and operating system vulnerabilities of Internet-connected devices also have to be answered. Indeed, companies are increasingly using smart industrial equipment and IoT devices and systems, from cars to cameras and elevators, from building management systems to supply chain management systems, from financial systems to alarm systems.

On another level, the security of nations’ critical infrastructures could also be at stake. Imagine, for instance, that the traffic system, the city’s electric grid or the water supply could be easily accessed by a third party with ill intentions.

Of course, the EU could not be indifferent to this emerging new reality and to the challenges it presents.

In 2012, the European Commission launched a public consultation, seeking inputs regarding a future policy approach to smart electronic devices and the framework required in order to ensure an adequate level of control of the data gathering, processing and storing, without impairing the economic and societal potential of the IoT. As a result, the European Commission published, in 2013, its conclusions.

Last month, the European data protection authorities, assembled in the Article 29 Working Party, adopted an opinion on the IoT, according to which the expected benefits for businesses and citizens cannot come at the expense of privacy and security. Therefore, the EU Data Protection Directive 95/46/EC and the e-Privacy Directive 2002/58/EC are deemed fully applicable to the processing of personal data through the different types of devices, applications and services in the context of the IoT. The opinion addresses recommendations to several stakeholders participating in the development of the IoT, namely device manufacturers, application developers and social platforms.

More recently, at the 36th International Conference of Data Protection and Privacy Commissioners, data protection officials and privacy commissioners adopted a declaration on the Internet of Things and a resolution on big data analytics.

The aforementioned initiatives demonstrate the existing concerns regarding Big Data and IoT and the intention to subject them to data protection laws. In this context, it is assumed that data collected through IoT devices should be regarded and treated as personal data, as it implies the processing of data which relate to identified or identifiable natural persons.

This obviously requires a valid consent from data subjects for its use. Parties collecting IoT device information therefore have to ensure that consent is fully informed, freely given and specific. The cookie consent requirement is also applicable in this context.

In parallel, data protection principles are deemed to be applicable in the IoT context. Therefore, according to the principle of transparency, parties using IoT devices information have to inform data subjects about what data is collected, how it is processed, for which purposes it will be used and whether it will be shared with third parties. Similarly, the principle of purpose limitation, according to which personal data must be collected for specified, explicit and legitimate purposes and not be further processed in a way incompatible with those purposes, is also applicable. Furthermore, considering the data minimization principle, the data collected should not be excessive in relation to the purpose and not be retained longer than necessary.

Considering the vast number of stakeholders involved (device manufacturers, social platforms, third-party applications, device lenders or renters, data brokers or data platforms), a well-defined allocation of legal responsibilities is required. Therefore, clear accountability of data controllers shall be established.

In this context, Directive 2002/58/EC is deemed applicable when an IoT stakeholder stores or gains access to information already stored on an IoT device, inasmuch as IoT devices qualify as "terminal equipment" (like smartphones and tablets), on which software or apps were previously installed both to monitor the user's environment through embedded sensors or network interfaces and to send the data collected by these devices to the various data controllers involved…

That said, one can only rejoice that the enchantment with the possibilities of the IoT does not surpass the awareness of the existing vulnerabilities. But it remains to be seen how these and the other data protection and privacy requirements can be effectively implemented in practice.

We certainly are on the right track to dodge any black swan event. However, it won't be easy to find appropriate answers to the massive security issues that come along. And one should not forget that technology always seems to be one step ahead of legislation.

So, the big question to ask is:

Are we really ready for the Internet of Things?


National Security: The new responsibility of Tech

Let's take a closer look on... everything!


Private tech companies are no longer expected only to aim at profit. No. Besides having been assigned the task of distinguishing public from private interest, they are now required to act as watchdogs for the intelligence services.

I am referring today to the very interesting opinion article by Robert Hannigan, published in the Financial Times last week, which I highly recommend.

Hannigan is the new Director of GCHQ, which stands for Government Communications Headquarters, meaning the British electronic intelligence agency. It operates closely with the British security service, MI5; the overseas intelligence service, MI6; and the United States National Security Agency (NSA).

In the above-mentioned article, Hannigan called for “better arrangements for facilitating lawful investigation by security and law enforcement agencies than we have now” in order to find “a new deal between democratic governments and the technology companies in the area of protecting our citizens”.

He mainly referred to the radical group Islamic State, a.k.a. ISIS and ISIL, “whose members have grown up on the Internet” and are “exploiting the power of the web to create a jihadist threat with near-global reach.” In this context, he qualified tech companies as “the command and control networks of choice” for terrorists.

Basically, and summing it up: let's all forget about Snowden's revelations (which I already addressed here) and see the big picture. Because terrorists are using social media websites, tech companies such as Facebook and Twitter ought to share all our private data with intelligence agencies to stop terrorism. As we all have a common enemy, let's allow a more undisturbed sharing of our data between the intelligence community and private technology companies. In these dangerous times, who needs privacy anyway, right?

Coincidentally or not, these declarations came in the wake of Apple's and Google's sophisticated encryption initiatives regarding data on their mobile devices and email systems. Indeed, encryption makes the collection of data off the wires more difficult. Unsurprisingly enough, these statements are also in line with the efforts of FBI Director James Comey.

However, I couldn't help noticing that the article, despite being seemingly intended to be simultaneously inspiring, alarmist and paranoia-inducing, is actually full of contradictions, which I assume were meant to go unacknowledged.

To begin with, the conclusion that techniques for encryption or anonymisation through mobile technology in fact help terrorists hide from the security services – or, as stated, "are the routes for facilitation of crime and terrorism" – is quite far-fetched. Terrorism has been around long before new technologies as we know them and, unfortunately, terrorists have always found ways of hiding their operations quite successfully.

As for the allusion that the leaking of information by Edward Snowden actually helped the development of terror networks… Seriously? Of course, the problem was not mass surveillance in itself; the real issue was that those monitoring activities were revealed to the world.

Besides, the use of the Internet by radical groups for promotion, intimidation and online recruitment of potential fighters is already a general concern. But the thing is, as these activities in fact happen on social media platforms, everybody can actually see them. So, where does the need for more direct and thorough access to social platform data come from? It is not as if secret terrorist operations are expected to be conducted on Facebook or Twitter. I mean, these companies are not really known for the security of their communications.

Moreover, nobody actually believes that privacy is an absolute right. The ECHR is quite clear on that. The right to privacy shall always be balanced against other rights, freedoms and needs, for instance the right to information, freedom of expression and the need to ensure national security. However, I fail to see the balance between civil liberties and national security in Hannigan's speech. Similarly, I fail to understand how free and secretive interference in our privacy – for security reasons, always, of course – can be lawful and how its proportionality is ensured.

Likewise, why isn't a prior court order appropriate for intelligence agencies' requests for data? It should be up to the courts, not GCHQ or tech companies, to decide when our personal data shall be shared with the intelligence services. Courts are the only guarantee of individuals' rights and freedoms and of principles such as the necessity and proportionality of the measures taken. Tech companies cannot refuse these requests when they are based on a court order. So, when Hannigan calls for "better arrangements" and "new deals", what is truly meant is very questionable.

That said, the consideration that users of social media platforms "do not want the media platforms they use with their friends and families to facilitate murder or child abuse" was just the cherry on top of a very bitter anniversary cake, the 25th anniversary of the world wide web, which Hannigan obviously hasn't failed to mention.

These arguments are not fit for a "mature debate on privacy in the digital age". Indeed, fear, uncertainty and doubt (FUD) is quite a well-known strategy for influencing perception and misinforming the public.

For more regarding this brilliant-for-all-the-wrong-reasons article, check the following posts.

Uncle Sam is watching EU

I know what you're doing!


Surveillance is commonly defined as the (often surreptitious and illegal) monitoring of the behaviours and activities of people for the most diverse ends, normally including supervision, influence or manipulation, control or protection.

Mass surveillance, therefore, means watching over an entire population or a substantial fraction of it, and is usually conducted by governments, or by corporations on their behalf, allegedly in order to fight terrorism, protect national security or combat child pornography, just to mention some of the justifications.

I still remember the worldwide chilling feeling that followed Edward Snowden's revelations, published by The Guardian back in the summer of 2013, regarding the extent and scope of the surveillance programme known as PRISM, conducted by the NSA (National Security Agency).

That feeling still remains and the worldwide debates that followed concerning the illegality of the measures taken and the violation of privacy rights and civil liberties are not about to end any time soon.

The news according to which some technology and telecommunications companies granted the NSA direct access to their servers or handed over detailed reports about their customers' databases most certainly didn't help.

Despite the denials from the companies concerned that ensued, mass surveillance has become, since then, a concern of the EU.

First, the surveillance measures undertaken affected the fundamental rights of European citizens, namely their right to privacy and to protection of personal data.

Moreover, the surveillance programmes conducted by the USA highlighted the connection between state or government surveillance and the processing of data by private companies.

In addition, the disclosure of large-scale intelligence data collection programmes negatively affected trust in the transatlantic relationship.

And, in this regard, there is quite a lot at stake.

Indeed, both parties have concluded several agreements regarding the exchange of personal data for the purposes of law enforcement, including the prevention and combating of terrorism and other forms of serious crimes. These are the Mutual Legal Assistance Agreement, the Agreement on the use and transfer of Passenger Name Records (PNR), the Agreement between Europol and the US and the Agreement on the processing and transfer of Financial Messaging Data for the purpose of the Terrorist Finance Tracking Program (TFTP).

In addition, the legal basis for the exchanges for commercial purposes between the EU and the USA is provided by the Safe Harbour Decision, which concerns transfers of personal data from the EU to companies established in the U.S. which have adhered to the Safe Harbour Principles. Efforts to negotiate amendments to the program have been ongoing since the fall of 2013.

Besides, the EU and the USA are currently negotiating the ‘umbrella agreement’, a framework agreement on data protection regarding the transfer and processing of data in the field of police and judicial cooperation.

Last, but not least, mention should also be made of the ongoing negotiations for the controversial Transatlantic Trade and Investment Partnership (TTIP), the world's biggest trade agreement.

While it is supposed to increase trade and investment, there is noteworthy apprehension about its potential negative impact on privacy. But, as it is being negotiated behind closed doors, it is yet to be known how justified these concerns are, particularly in the light of ACTA (the Anti-Counterfeiting Trade Agreement), which would have allowed intrusive surveillance of all of our Internet usage, regardless of whether we had actually infringed anyone's copyright, and which the European Parliament rejected in 2012 for that very reason. All things considered, the EU Ombudsman's recommendations are therefore much welcomed.

In this context, the documents very inconveniently released by Edward Snowden revealed that the USA accessed the SWIFT database, the biggest store of financial transactions in the world, thus accessing millions of personal financial records, in the margin of the Terrorist Finance Tracking Program (TFTP). The TFTP agreement allows the U.S. Treasury to access some data stored in Europe by the international bank transfer company SWIFT (Society for Worldwide Interbank Financial Telecommunication) for the prevention, investigation, detection and prosecution of conduct pertaining to terrorism or terrorist financing.

Last November, the European Commission released a communication in which it shared its concerns regarding the protection of personal data within the existing instruments.

The European Parliament has already called for the "immediate suspension" of Safe Harbour, considering that its principles do not provide adequate protection for EU citizens, and for the immediate suspension of the TFTP agreement until a "thorough investigation has been concluded".

Meanwhile, leaders from the EU and the USA reiterated their commitment in a joint statement.

Although Jean-Claude Juncker has pressed for the "conclusion of negotiations on the reform of Europe's data protection rules, as well as the review of the Safe Harbour arrangement with the U.S.", Andrus Ansip, who is slated to become the European Commission's Vice-President for the Digital Single Market, affirmed, during a European Parliament confirmation hearing, that, unless the differences are resolved, the USA–EU Safe Harbour could be suspended. Ansip said that "we have to be absolutely sure that the national security exception will be used as an exception, not on a regular basis."

It is beyond any doubt that the plea of terrorism or national security concerns falls apart in the face of revelations that the NSA collects data related to international trade and monitors the telecommunications of the leaders of Brazil and Germany. It is evident that those are mere excuses to conduct this kind of surveillance in the name of less honourable goals.

As if this wasn't enough, documents delivered by Edward Snowden, and recently released by The Intercept, show that the agency has undercover agents embedded in foreign companies for the purpose of extending its surveillance reach.

That said, transparency reports, presenting statistics on governments' requests for data, could be a useful tool to disclose the scope and scale of surveillance. However, governments are obviously not that keen on reporting their surveillance activity, and they will make sure to exempt information related to "national security" from such reports.

It doesn't come as a surprise that technology companies such as Facebook, Yahoo, Google and Microsoft are now investing in barriers, mainly through the refusal of access requests and the encryption of internal traffic, to make it harder for government intelligence agencies to "snoop around". Even though some concerns have been raised regarding the impact on police investigations, namely of paedophilia suspects, it is questionable whether they are completely justified, mainly because there are several other ways to access the information stored. For instance, information stored in the cloud will still be "easily" accessible.

Nevertheless, these and similar companies are businesses and shouldn't be assigned the role of guardians of individuals' rights. It is all very wrong, and very reminiscent of totalitarian regimes, when governments themselves are attacking the most private parts of our lives.

Encryption measures have led some to the conclusion that governments should be entitled to a golden key (a back-door access) in order to unlock and access individuals' communications. The main viewpoint is that, by allowing this, personal safety and national security could be properly ensured…

That said, it might not come as the most surprising event that Russia is requiring social network companies, such as Facebook and Twitter, to store the personal data of its citizens on servers based within the country's borders or face being blocked, without a prior court ruling. Conveniently, the initiative – which represents an open door to enforcing censorship – is even presented as a necessary remedy against foreign threats and USA spying.

It is difficult not to wonder, and worry, whether this is the first step towards the blocking of all websites with user-generated content, an already proven effective means of controlling the right to information, freedom of expression and any democratic expression.

In this context, the hypothesis that the European Commission (DG Home) has been collaborating with the USA administration on the EU data protection reform raises some deep and justified concerns, mainly if we consider that the former EU Home Affairs Commissioner, Cecilia Malmström, is very likely soon to be confirmed by the European Parliament as the EU's new trade commissioner, conducting the TTIP negotiations on the EU side. But then again, if it is true that the European Commission knew about PRISM all along… Conspiracy theories apart, Cecilia Malmström denied the allegations at the hearing with the Members of the European Parliament.

Of course, according to the principle of conferral, or attributed powers, the EU may only exercise competences conferred on it by the Treaties to attain the objectives set out therein (see Article 5(2) TEU). This means that competences not conferred upon the Union in the Treaties remain with the Member States (see Article 4 TEU). National security is deemed an essential State function and the sole responsibility of each Member State.

Considering that matters related to national security are usually exempted from surveillance activity reports, I guess that it all comes full circle, after all…

And while one can be glad that the UN issued a report stating that mass surveillance violates human rights, one is also entitled to be sceptical regarding its effects on government programmes.



A World of Data = Big Data x Little Privacy

Next evolution, Humongous Data?


With massive amounts of our personal data now being routinely entered, collected, stored and exchanged, data security and privacy breaches are almost inevitable. In particular, large-scale attacks leading to the theft of millions of individuals' data are becoming more and more common nowadays.

With technology at our fingertips, we are sharing more and more information online and by electronic means. From sensors that fit into our cars to wearables, from cloud computing to social networking interaction, from digital pictures and videos to cell phone GPS signals, from online purchase transactions to sign-up processes, and from the telecommunications and insurance sectors to the medical and banking ones, we leave traces of information with every move we make.

The massive volume of data generated and gathered is popularly referred to as "Big Data". The concept commonly describes an amount of information so large, complex, unstructured, diverse and fast-moving that it is difficult to process using traditional database and software techniques. Billions to trillions of records of millions of people are now measured in new units such as petabytes and exabytes. The golden era of the gigabyte is long gone.

So what is so special about Big Data?

The analysis that can be done with Big Data enables the establishment of correlations across large populations that can be useful to individuals. It creates a remarkable opportunity for society worldwide in any field you can think of, ranging from crime-rate predictions to medical research, from public health to national security, and from marketing to risk analysis. Companies and governments no longer have to rely on sampling: they have access to the entire digitized knowledge of the digital age, a myriad of data points collected for unrelated purposes and updated in real time.

For instance, a few years ago, Google was able to predict flu outbreaks faster than what was possible using hospital admission records, just by analyzing clusters of search terms by region in the United States. All with algorithms! Quite impressive, huh?

In our enthusiasm to share and bond with others, and to live up to the conveniences allowed by new technologies as the world grows more and more connected, we are quite easy-going when it comes to giving away information about ourselves. Businesses know that. And they are continuously developing new means to collect information about their customers.

Why wouldn’t they?

They can look for hidden patterns, trends or other insights that will enable them to better mould their products and services to customers, anticipate demand or improve performance. Big Data can certainly bring the kind of knowledge that allows innovative improvements for businesses… from which all of us will ultimately benefit. As a result, personal data is consistently collected and traded, the new currency of the Internet economy.

For instance, have you noticed how frequently, after you have searched for a certain type of good or service on Google, matching advertising appears on the right side of your Gmail window the next time you open it?

But the astonishing advantages coming from the analysis of Big Data are tempered by concerns over privacy and data protection.

I believe that many of us don’t think much about the implications of easily sharing and giving away personal details online nowadays. After all, how many of us actually read the consent form regarding the use of our personal data?

But it is important to reflect on a few points which, I assume, won't leave anybody comfortable after consideration.

Consider, for instance, that some retailers are able, through the analysis of purchasing habits, to predict such intimate details as the pregnancy of a customer, and that the ensuing marketing activities may disclose that information against the customer's will.

Consider, for example, that with such a volume of data and such powerful analytical mechanisms, the combination of datasets might lead to the identification of individuals, despite the anonymisation of certain elements.
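To make this concrete, here is a minimal Python sketch of a so-called linkage attack, using entirely made-up data (all names, values and field names below are hypothetical): a dataset stripped of names is re-identified by joining its quasi-identifiers (postcode, birth date, gender) against a public register.

```python
# Hypothetical example: a "linkage attack", where an anonymised dataset is
# re-identified by joining its quasi-identifiers against a public register.

anonymised_health_records = [
    {"zip": "02138", "birth": "1945-07-21", "gender": "F", "diagnosis": "diabetes"},
    {"zip": "90210", "birth": "1980-03-02", "gender": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Roe", "zip": "02138", "birth": "1945-07-21", "gender": "F"},
    {"name": "John Doe", "zip": "60601", "birth": "1971-11-09", "gender": "M"},
]

def reidentify(records, roll):
    """Attach names to 'anonymous' records via the (zip, birth, gender) key."""
    index = {(p["zip"], p["birth"], p["gender"]): p["name"] for p in roll}
    matches = []
    for r in records:
        key = (r["zip"], r["birth"], r["gender"])
        if key in index:
            matches.append((index[key], r["diagnosis"]))
    return matches

print(reidentify(anonymised_health_records, public_voter_roll))
# → [('Jane Roe', 'diabetes')]: the "anonymous" record now has a name.
```

A single overlap of a few innocuous attributes is enough; no name ever needed to be in the health dataset for the diagnosis to become attributable to a person.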

Consider, now, that the data may contain biases, inaccuracies, obsolete or missing information and flawed correlations, which unavoidably affect the predictions and conclusions resulting from its analysis, and that decisions that can affect your welfare will still be taken based on those predictions and conclusions.

Consider also that more and more of the data collected about us doesn't come directly from us.

At last, consider that hospital records of national health system patients could be sold for insurance purposes.

Scary, at the very least…

The good or bad news is that Big Data analysis isn’t as efficient as many would like or fear it to be.

The risk of biases inherent in the data, and of false correlations and associations, is great and increases as bigger volumes of data are analyzed.
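The point about false correlations can be illustrated with a small, self-contained Python sketch using purely synthetic data (no real dataset involved): generate thousands of random "variables" and one random "target", and some variable will almost inevitably correlate strongly with the target by sheer chance.

```python
# Illustrative sketch with purely synthetic data: among thousands of random
# "variables", some will correlate strongly with a random "target" by chance.
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
weeks = 20
target = [random.random() for _ in range(weeks)]       # e.g. weekly flu cases
candidates = [[random.random() for _ in range(weeks)]  # e.g. 10,000 search terms
              for _ in range(10_000)]

best = max(abs(pearson(c, target)) for c in candidates)
print(f"strongest |correlation| found in pure noise: {best:.2f}")
# Typically a value above 0.6, even though every series is random.
```

The more variables you screen against the same target, the more impressive the best spurious match looks, which is exactly why Big Data correlations need out-of-sample validation before anyone acts on them.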

For instance, Google's model for predicting the spread of flu ended up overestimating the phenomenon by almost a factor of two.

Regarding public security, Big Data hasn't proven able to detect patterns or anomalies that could help prevent acts of terror either.

Not so reliable after all…

Nevertheless, one cannot escape Big Data. We live so entangled in it that it is more and more usual to talk about an "Internet of Things". Good things can come from it. But nobody can be entirely sure that it will be used for legitimate purposes.

In parallel to the enthusiasm of connecting and sharing, there is an increasing concern surrounding the lack of privacy.

In this context, there might indeed be a big place in the market for privacy products. And the seeds are being planted now. Just recently, Google announced that data encryption will come as a default setting in the next Android operating system, known as Android Lollipop, which will make it impossible for anyone to gain access to the data without the consent of the owner. This initiative is in line with the announcement made by Tim Cook, the CEO of Apple, regarding the privacy policy of the company. Both guarantee that even the police won't be able to gain access to the user's personal information. It is, however, worth mentioning that Apple's upgraded security feature will only protect data and information stored on the iOS device itself, and not data stored in the iCloud service.
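The principle behind these announcements can be sketched in a few lines of Python. To be clear: this is a toy XOR construction for demonstration only, not real cryptography and nothing like the actual Android or iOS implementations; it just shows why encrypted data at rest is meaningless without the owner's key.

```python
# Toy illustration only (NOT real cryptography): device-side encryption means
# the bytes sitting in storage are unreadable without the owner's key.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a keystream derived from `key`. The operation is
    symmetric: applying it twice with the same key recovers the original."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

secret = b"contacts, photos, messages"
stored = keystream_xor(b"owner-passcode", secret)           # what sits on disk
print(stored != secret)                                     # True: unreadable at rest
print(keystream_xor(b"owner-passcode", stored) == secret)   # True: owner's key recovers it
print(keystream_xor(b"wrong-guess", stored) == secret)      # False: any other key fails
```

This is why a "golden key" debate follows naturally: once encryption is on by default and the key never leaves the device, there is simply nothing useful to hand over from the stored data alone.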

The advantages resulting from Big Data analysis will only be realized if users' privacy expectations are appropriately met and their data protection rights are respected. However, finding the right balance between all the interests at stake (those of the individuals concerned, those of businesses and, ultimately, the general public interest) might not be an easy end to achieve, namely in the field of health research.

The Article 29 Working Party recently issued a statement on the impact of the development of Big Data on the protection of individuals with regard to the processing of their personal data in the EU, where it found “no reason to believe that the EU data protection principles are no longer valid and appropriate for the development of Big Data.” Nevertheless, it envisaged the possibility of “further improvements to make [the principles] more effective in practice” in the context of Big Data.

In my opinion, data protection principles shall be deemed applicable, as they refer to fairness, transparency and, ultimately, trust. For that reason, the "notice and consent" and "purpose limitation" models should be preserved as much as possible, and data ought to be anonymized to the point where re-identification is precluded.

This week, the European Commission and the Big Data Value Association, an industry-led organisation acting on behalf of companies including ATOS, Nokia Solutions and Networks, Orange, SAP and Siemens, committed to a public-private partnership (PPP) that aims to support research and innovation in Big Data technologies and infrastructures while ensuring privacy and security.

No statistics can predict what uncertainties the future holds for Big Data… However, in these fast-changing times of information and communications technology, we will surely know soon enough…

Ello! Here to stay?

Ello, the new kid on the social networks' block.


It may come as a surprise, as I am writing openly on a blog, but I am not the most sociable person in this online world. In fact, my online interactions are mainly limited to an increasingly neglected Facebook account, some comments written here and there on blog posts or news items that particularly interest me, and this recently created blog.

Regarding Facebook, I don't log in as often as I used to. And the truth is I find it less interesting with each visit, due to the ad-filled pages and the endless requests from friends to play games. Not only am I trying to spend more time offline, but I also find the whole concept of sharing (showing off?), following, liking and commenting on bits of other people's lives very tiring at times. I recognise that this is mostly due to bad management of my account. As I realized recently, I don't even know 90% of my friends that well, and I honestly couldn't care less about their lives, worries or interests.

However, it is an undeniable source of information and feedback on the most varied subjects, through the specific groups and communities created. Moreover, it has enabled me to find lost friends and to keep in touch with friends and family members living abroad, without having to spend hours on the phone or Skype. In that context, it makes it possible for people to share moments and to be part of each other's lives in a way that would be very difficult otherwise. Besides, it has allowed me to get to know better people with whom I wasn't that close, making me grow fonder of them or, instead, killing any good impression I might once have had.

Nonetheless, I am more and more drawn to traditional means of communication, for instance getting together and talking. I intend to spend only meaningful time online, namely engaging in rewarding conversations with people who share the same interests as me.

So, when I first heard about Ello, the new social networking platform everybody was talking about, my first question was: what is the point of it? My second thought was: it won't last. The history of social networks is full of unsuccessful chronicles: Friendster, MySpace, Diaspora or AppleSeed, just to mention a few. The secret of Facebook's longevity is its most relevant feature: one can actually find almost everybody there, and it feeds people's curiosity and egocentric tendencies.

In Ello's current beta phase, you have to receive an invitation from a registered user in order to access the platform, and each user can only send up to five invitations. This not only compels users to carefully select future friends, but also prevents a sharp and fast expansion of the network, which would threaten its normal management. However, it will be just a matter of time before it loses its restricted nature…

Having received an invitation to join Ello, I succumbed to curiosity and created an account… just to see what the fuss was all about! I was not looking for another social network to be on, but I was willing to replace Facebook with a platform that would give me the same benefits without being so annoying.

Regarding the registration process itself, I must point out that identical usernames are not allowed. When I tried to use my real name, it was rejected, both in its full and partial versions, because someone else had taken it previously. As a result, I had no option but to pick a pseudonym. I would have preferred to use my real name, regardless of the fact that it might cause identity confusion.

The direct consequence of this is that, if someone wants to add a friend, he or she needs to know that person's username. The use of pseudonyms made up just for registration makes it difficult to find friends on the platform. On the bright side, it certainly helps to keep undesirable wannabe friends away. But it is nevertheless ironic, considering all the buzz surrounding Facebook's real-names policy, which affected people who prefer to use pseudonyms. While I don't believe that Facebook's policy is unrelated to the recently announced ad network Atlas (which I will address in a future post), I must say that I am not convinced by Ello's policy either. Google Plus, for instance, had a similar policy and dropped it. However, the same username policy is successfully applied on Twitter and Instagram…

Anyway, what is Ello really about? Well, as any other social network platform, it is intended to enable the connection and the sharing of content among users. However, it comes with the promise that user’s data won’t be sold for marketing purposes and paid advertising won’t be allowed.

Regarding the design itself, I wasn't expecting anything special, really. As long as it wasn't bluish, I would be flexible. I enjoyed the monochrome concept; however, I found the design exaggeratedly minimalist and not very user-friendly. Somehow, knowing that it was created by artists and designers, I was expecting more creativity.

One feature that struck me negatively is that all the information displayed in each profile is public within the website's community. Of course, I am fully aware that Facebook itself is far from being the gatekeeper of privacy or a paradigm of any other value. It suffices to remember the sneaky privacy changes (or the ones made to please users), the experiment conducted on users' data, and the removal of post-mastectomy campaign photographs or pictures of breastfeeding women considered obscene. More recently, there is the controversial ad network called Atlas. But, I mean, it is a business and profit is its aim. No surprise there. As is commonly said: if you are not paying for it, you are the product. Proper information and transparency on how, what and why things are done are, in my opinion, the main issues. Nevertheless, I do enjoy the apparent privacy of being able to share information among a pre-selected group of friends.

On Ello, users can unilaterally add 'friends' (acquaintances whose lives they are interested in) and 'noise' (random popular users), who can then be followed through a newsfeed-like menu. It is fairly easy for users to delete their Ello account if they want to opt out of the service. However, one must be aware that this action is irrevocable and the content will be lost forever. So dramatic!

In a 'wtf' section, one can find some elements intended to introduce Ello to the new user. In this regard, its manifesto is quite engaging, as it reads as follows:

Your social network is owned by advertisers.
Every post you share, every friend you make, and every link you follow is tracked, recorded, and converted into data. Advertisers buy your data so they can show you more ads. You are the product that’s bought and sold.
We believe there is a better way. We believe in audacity. We believe in beauty, simplicity, and transparency. We believe that the people who make things and the people who use them should be in partnership.
We believe a social network can be a tool for empowerment. Not a tool to deceive, coerce, and manipulate — but a place to connect, create, and celebrate life.
You are not a product.

Having navigated around the platform for a little while, I must admit that advertisements were nowhere to be seen. So far, so good… However, despite being a hopeless romantic, I remain unconvinced by this starry-eyed new concept of celebrating life online.

To start with, it is unclear how the website will make money. Let's not forget that other social network platforms, like Facebook or Tumblr, similarly started without advertising but, profit being the goal, that was not a workable business model. According to Ello, profit will eventually come from special features offered in exchange for a small amount of money (well, if they are paid for, it is not an offer anymore, just saying…) in order to customize the user experience. This is not a new concept: it is called the freemium business model and is used by Evernote, for instance. That makes sense and is utterly acceptable. After all, Ello has to capitalize somehow. Nevertheless, if the number of users continues to increase, I have serious doubts that those little charges will be sufficient to run the servers.

What is worrying, instead, is that, according to some provisions of its Privacy Policy, Ello is not everything it claims to be.

Although it might have escaped the most distracted and laziest of us (not everybody reads privacy policies), Ello does collect users' personal information, namely information about which pages are accessed, about the device used, information that is sent to it directly or posted on its website, and the addresses of websites that refer the user. It also stores the name and e-mail address that users register with. In addition, Ello collects and stores an anonymized version of users' IP addresses, and uses Google Analytics to gather and aggregate general information about users' behaviour, although it offers the option to opt out of Google Analytics and commits to respecting "Do Not Track" browser settings. It also states that it may use or share the anonymous data collected for any purpose.
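For context, "anonymizing" an IP address of the kind mentioned above typically means zeroing the host bits: Google Analytics' IP-anonymization feature, for instance, is documented as zeroing the last octet of an IPv4 address and the last 80 bits of an IPv6 address. A minimal Python sketch of that truncation (my own illustration, not Ello's actual implementation):

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Zero the host bits of an IP address: the last octet for IPv4
    (/24 prefix kept), the last 80 bits for IPv6 (/48 prefix kept).
    This mirrors the truncation Google Analytics documents for its
    IP-anonymization feature."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    # strict=False lets us pass a host address rather than a network address
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("203.0.113.42"))          # -> 203.0.113.0
print(anonymize_ip("2001:db8:1234:5678::1")) # -> 2001:db8:1234::
```

Note that, as the post goes on to argue, this kind of truncation narrows the address down to a network rather than a single machine; it does not guarantee that the user cannot be re-identified when the data is matched against other datasets.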

Although Ello reiterates that it won't sell information about users to any third party, including advertisers, data brokers, search engines, or anyone else, it may share some personal information with third parties under several circumstances. Users' consent, legal compliance and the fulfilment of requirements under contracts concluded with third-party service providers are among the exceptions foreseen.

It is quite strange that, while it considers the collection and sale of personal information for advertising purposes unethical, Ello broadly collects user data for non-advertising ends. Moreover, given the broad wording of the exceptions foreseen, it establishes the sharing of user data as a rule rather than an exception.

Bearing in mind that advertising can be quite positive, as it provides useful information about products and services that users may be interested in, I am not sure that it is the biggest of their concerns. Indeed, the door is left open for the privacy violations that come along with online tracking. Furthermore, anonymisation of the data does not ensure that an individual won't be identifiable when it is subsequently matched against other data. Additionally, Ello gives no guarantee regarding the deletion of information stored in backups when posted content or a personal account is deleted. As for the foreseen possibility of sharing information with future affiliated companies, it simply means that the data collected and stored by Ello will be made available to businesses to which users never handed over their data.

Only time will tell if Ello is here to stay… But considering the above-mentioned devil in the details, one may conclude that privacy just seems to be the newest marketing slogan, regardless of whether it is actually ensured or not.
