Tag: Security (page 1 of 3)

Security v. Security – Tech Companies, Backdoors and Law Enforcement Authorities

Grab the popcorn, this is going to be fun!

The request for access to the information stored on the smartphone of one of the San Bernardino shooting suspects has intensified the debate on the implementation of backdoors enabling access to mobile devices for law enforcement purposes.

The issue is not whether law enforcement authorities, by means of a proper warrant, are entitled to search a mobile phone and access its content. That is a straightforward fact. They are.

What is at stake is Apple’s objection to a court order requiring it to provide the ongoing federal investigation with the means to access such information. More concretely, Apple has been required to write code modifying the iPhone software so as to bypass an important security function: the feature that automatically erases the device’s information after ten incorrect password attempts. Disabling it would enable authorities to enter wrong credentials endlessly and eventually crack the device’s password through brute force, without risking the deletion of its content, and thus access and extract the information contained on the suspect’s iPhone.
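
To grasp why that single feature matters, consider the size of the search space: a four-digit numeric passcode admits only 10,000 combinations. The sketch below is a toy illustration in Python, with a hypothetical passcode-checking oracle standing in for the device (not an actual attack on any real hardware); it shows how quickly unthrottled guessing succeeds once the erase-after-ten-failures limit is gone.

```python
from itertools import product

def brute_force(check_passcode):
    """Try every 4-digit combination until check_passcode accepts one."""
    for attempt, digits in enumerate(product("0123456789", repeat=4), start=1):
        guess = "".join(digits)
        if check_passcode(guess):
            return guess, attempt
    return None, 10_000  # exhausted the whole space without a match

# Hypothetical oracle standing in for the device's passcode check.
secret = "4831"
print(brute_force(lambda g: g == secret))  # -> ('4831', 4832)
```

Even with a modest per-attempt delay, exhausting 10,000 combinations is a matter of minutes rather than years, which is precisely why the auto-erase limit exists.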

The use of new technologies to conduct criminal and terrorist activities has made it difficult to ignore the advantages of accessing communications made through such technologies for the investigation, prevention and combating of crime. Law enforcement authorities point out that such access is particularly pertinent in the fight against terrorism, paedophilia networks and drug trafficking.

In this context, the use of encryption in communications has become the cornerstone of the debate. Investigative authorities want backdoors implemented in mobile devices in order to ensure access when necessary. By contrast, companies such as Apple refuse to retain access keys to such encrypted communications, and consequently to provide them upon request of law enforcement authorities.

Just recently, FBI Director James Comey told the US Senate Intelligence Committee that intelligence services are not interested in ‘backdoor’ access to secure devices per se. Instead, what is at stake is requiring companies to hand over the encrypted messages sent through those devices. James Comey is a wordplay habitué: he once said he wanted ‘front doors’ instead of ‘back doors’.

Along the same lines, White House Press Secretary Josh Earnest recently stated that, under the abovementioned court order, Apple is not being asked to redesign its products or to create a backdoor.

While these are, at the very least, very puzzling statements, they nevertheless clearly express the underlying motivation: a ban on encryption products without backdoors, and the implementation of backdoors.

Indeed, if companies can be required to undermine their own security and privacy protection features in order to provide access to law enforcement authorities, then, regardless of how legitimate the underlying purpose is and whatever designation one might find preferable, that is the very definition of a backdoor.

It never ceases to amaze me how controversial it seems to be, among free people living in a democracy, that the implementation of backdoors is, on both legal and technological grounds and for the sake of everyone’s privacy and security, a very bad idea.

Well, the main argument supporting the concept is that such a technological initiative will chiefly help combat criminal activities. That is unquestionably a very legitimate purpose. And nobody opposing the implementation of backdoors actually argues otherwise.

However, it is a fact that backdoors would automatically make everyone’s communications less secure and expose them to a greater risk of attacks by third parties and to further privacy invasions. Moreover, no real guarantees are ever provided against the risk of abuse that could ensue. Those arguing in favour of access to information through backdoors fail to adequately frame the context. It is vaguely stated that such a mechanism will be used when necessary, without any strict definition. What is necessary, anyway? Would it depend on the relevance of the information at stake? Would it depend on the existence of alternative means, or on how burdensome those are?

At the very least, if Apple complies with the order, it is difficult to believe that similar requests will not immediately ensue. In fact, one may venture to say that they can be expected and will certainly be encouraged in the future. Ultimately, the creation of this cracking software could be used and abused in future cases. And this is particularly worrisome considering the lack of a legal framework and of a judicial precedent basis.

One may be tempted to sacrifice privacy in the interest of public security, and that is not a wrongful viewpoint in itself. I don’t know anyone who would disagree with it. It becomes harder, however, to support backdoors as a means of preventing criminal activities, in the fight against terrorism for instance, when confronted with their inherent inefficiency and limitations, which seem to go unacknowledged by their supporters.

While companies may be forced to implement such backdoors to provide access to encrypted communications, there is a myriad of alternatives in the marketplace for criminals seeking encryption products in which no such backdoors are installed: encryption apps, file encryption, open source products, virtual private networks…

Let’s talk about Isis, for instance. It has been alleged, without further demonstration, that the group has its own open source encrypted communications app. Therefore, apart from weakening the security of everybody relying on encrypted messaging apps, the implementation of backdoors would be pointless for the purpose intended, precisely because of the open source nature of the app Isis allegedly uses.

That said, one can easily understand Apple’s stance. Having built its reputation on the privacy and security provided by its devices, it is very risky, from a commercial viewpoint, to be asked to develop software that undermines its core selling point. Indeed, in 2014 the company modified its software precisely so that it could no longer unlock its smartphones and access its customers’ encrypted data.

The fact that the company is now being asked to help law enforcement authorities by building a backdoor around a security function that prevents decryption of the device’s content appears to be just another way of achieving the same outcome, under a different designation.

This now goes way beyond requiring companies to comply with a lawful order and warrant to the extent they are able to: requesting private companies to create a tool intended to weaken the security of their own operating systems defies good sense. Indeed, it amounts to requiring (forcing?) private companies to create and deliver to law enforcement authorities hacking tools which actually put everyone’s privacy and cybersecurity at risk.

And if this becomes a well-accepted requirement in democratic systems, whether by precedent or through legislative changes, one can only wonder with what enthusiasm such news will be welcomed by repressive regimes eager to expand their surveillance powers.

From an EU viewpoint, considering how uncertain the future of the Privacy Shield framework is, and despite the existing divergences among EU Member States in respect of encryption, this whole case certainly does not solve any trust issues regarding the security of the data transferred to the US.

The dangers of certain apps or how to put your whole life out there

Finding love, one data breach at a time.

One of my past flatmates was actively looking for love online. Besides registering on several websites to that end, I remember he also had several mobile applications (apps) installed on his smartphone. I think he actually signed up for pretty much anything that could even remotely help him find love, but singled out Tinder as his main dating tool.

Another of my closest friends is a jogging addict (shout out, P.). He has installed on his smartphone various apps which let him know how many steps he has taken in a particular day and the route undertaken, as well as his heart rate via an external device, enabling him to monitor his progress.

What do my two friends have in common? Well, they both use mobile apps to cover very specific necessities. And in this regard they are just like almost everybody else.

Indeed, it is difficult to escape apps nowadays. Now that everyone (except for my aunt) seems to have a smartphone, apps are increasingly popular for the most diversified purposes. For my former flatmate it was all about dating. For my friend, it is about keeping track of his running progress. But the potential of apps does not end there: receiving and sending messages, using maps and navigation services, getting news updates, playing games, dating or just checking the weather… You name a necessity or convenience, and there is an app for it.

On the downside, using apps usually requires providing more or less personal information for the intended purpose. This has become so usual that most consider it a natural step, without giving it further consideration.

In fact, and this is a detail most seem to be unaware of, apps allow for a massive collection and processing of personal, and sometimes sensitive, data. The nature and the amount of personal data accessed and collected raise serious privacy and data protection concerns.

For instance, in the case of my abovementioned flatmate, who was registered on several similar apps, and considering that he did not create fake accounts or provide false information, each of them collected at least his name, age, gender, profession, location (enabling one to presume where he worked, lived and spent time), sexual orientation, what he looks like (if he added a picture to his profiles), the frequency of his accesses to the app, and eventually the success of his online dating life.

In fact, in Tinder’s own words:

Information we collect about you

In General. We may collect information that can identify you such as your name and email address (“personal information”) and other information that does not identify you. We may collect this information through a website or a mobile application. By using the Service, you are authorizing us to gather, parse and retain data related to the provision of the Service. When you provide personal information through our Service, the information may be sent to servers located in the United States and countries around the world.
Information you provide. In order to register as a user with Tinder, you will be asked to sign in using your Facebook login. If you do so, you authorize us to access certain Facebook account information, such as your public Facebook profile (consistent with your privacy settings in Facebook), your email address, interests, likes, gender, birthday, education history, relationship interests, current city, photos, personal description, friend list, and information about and photos of your Facebook friends who might be common Facebook friends with other Tinder users. You will also be asked to allow Tinder to collect your location information from your device when you download or use the Service. In addition, we may collect and store any personal information you provide while using our Service or in some other manner. This may include identifying information, such as your name, address, email address and telephone number, and, if you transact business with us, financial information. You may also provide us photos, a personal description and information about your gender and preferences for recommendations, such as search distance, age range and gender. If you chat with other Tinder users, you provide us the content of your chats, and if you contact us with a customer service or other inquiry, you provide us with the content of that communication.

Considering that Tinder makes available a catalogue of profiles of geographically nearby members, among which one can swipe right or left according to one’s personal preferences, with adequate analysis it is even possible to determine what type of person (by age, body type, hair colour) a user finds most attractive.
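
The amount of analysis required for such inference is minimal. As a hedged sketch (invented swipe logs and attribute names, not Tinder’s actual data model), tallying the attributes of right-swiped profiles already exposes a user’s “type”:

```python
from collections import Counter

def preferred_attribute(swipes, attribute):
    """Most common value of `attribute` among right-swiped profiles."""
    liked = [profile[attribute] for profile, direction in swipes
             if direction == "right"]
    value, count = Counter(liked).most_common(1)[0]
    return value, count

# Invented swipe log: (profile, swipe direction) pairs.
swipes = [
    ({"age": 29, "hair": "brown"}, "right"),
    ({"age": 34, "hair": "brown"}, "right"),
    ({"age": 27, "hair": "blond"}, "left"),
    ({"age": 31, "hair": "brown"}, "right"),
]
print(preferred_attribute(swipes, "hair"))  # -> ('brown', 3)
```

A frequency count is the crudest possible technique; anyone holding the real swipe logs can do far better.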

And because Tinder actually depends on having a Facebook profile, I guess that Facebook also becomes aware of the average climate of your romantic life. Especially if you start adding and interacting with your new friends on that platform and, why not, changing your status accordingly.

In the specific case of Tinder, as it mandatorily requires access to a certain amount of Facebook information in order to function properly, these correlations are much easier to draw.

That said, a sweep conducted by 26 privacy and data protection authorities from around the world, covering more than 1,000 diversified apps (Apple and Android apps, free and paid apps, public sector and private sector apps, ranging from games and health/fitness apps to news and banking apps), made it possible to outline the main concerns at stake.

One of the issues specifically pointed out concerned the information provided to users/data subjects, as it was concluded that many apps did not have a privacy policy. In those cases, users were not properly informed, and therefore not aware, of the collection, use, or further disclosure of the personal information provided.

It is a fact that most of us do not read the terms and conditions made available, and most will sign up for pretty much any service they wish to use, regardless of what those terms and conditions actually state.

Nevertheless, a relevant issue in this regard is the excessive amount of data collected considering the purposes for which the information is provided, or how sneakily it is collected. For instance, even gaming apps such as solitaire, which seem far more innocuous, hide unsuspected risks, as many contain code enabling access to the user’s information or contact list, and even allow tracking of the user’s browsing activities.

This is particularly worrisome when sensitive data, such as health information, is at stake. This kind of data is easily collected through fitness-orientated apps, which are quite in vogue nowadays. Besides any additional personally identifiable information you will eventually provide upon creating an account, the elements most certainly collected include: name or user name, date of birth, current weight, target weight, height, gender, workout frequency, workout settings and duration, and heart rate. Also, if you train outdoors, geolocation will most certainly reveal the whereabouts of your exercising, from the departure to the arrival point, which will most probably coincide with your home address or its vicinity.
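
How trivially a home address falls out of such logs can be sketched as follows (made-up coordinates, not any real app’s data format): the most frequent workout start point, rounded to roughly street-level precision, is a strong candidate for where the user lives.

```python
from collections import Counter

def likely_home(start_points, precision=3):
    """Return the most frequent start location, rounded to `precision`
    decimals (three decimals of latitude/longitude is roughly 100 m)."""
    rounded = [(round(lat, precision), round(lon, precision))
               for lat, lon in start_points]
    (location, count), = Counter(rounded).most_common(1)
    return location, count

# Made-up workout start coordinates for illustration.
runs = [
    (48.8567, 2.3510),  # leaves from home
    (48.8566, 2.3512),
    (48.8568, 2.3509),
    (45.7640, 4.8357),  # one run while travelling
]
print(likely_home(runs))  # -> ((48.857, 2.351), 3)
```

Ten lines of code and a handful of logged runs are enough; a real dataset with timestamps would also reveal when the home is likely empty.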

And if you are particularly proud of your running or cycling results, and willing to show all your friends what good shape you are actually in, chances are you can connect the app to your Facebook account and display that information on your profile, subsequently enabling Facebook to access the same logged information.

And things actually get worse considering that, as demonstrated by recent data breaches, the information provided by users is not even always adequately protected.

For instance, if I remember well, due to a security vulnerability in Tinder, apparently since fixed, there was a time when users’ location data, such as longitude and latitude coordinates, was actually easily accessible. That is quite creepy and dangerous, as it would facilitate stalking and harassment in real life, which is every bit as bad as when it happens online.

Anyway, it is very easy to forget the amount of data we provide apps with. However, the correlations that can be made, the conclusions that can be inferred and the patterns that can be assessed amount to sharing more information than we first realise, and enable a far more detailed profile of ourselves than most of us would feel comfortable with others knowing.

The limits of government surveillance according to the ECtHR

Limits? What do you mean by ‘limits’?

In two very recent judgments, the European Court of Human Rights (hereafter ECtHR) made several essential points regarding surveillance conducted by public authorities and its relation to Article 8 of the European Convention on Human Rights (hereafter ECHR).

Article 8 provides that governmental interference with the right to privacy must meet two criteria. First, the interference must be conducted “in accordance with the law” and must be “necessary in a democratic society”. Such interference must aim to achieve the protection of the “interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”.

In previous cases regarding surveillance conducted by public authorities, the ECtHR had already concluded that any interference with the right to respect for private life and correspondence, as enshrined in Article 8 of the ECHR, must be strictly necessary for safeguarding the democratic institutions. However, it has now further clarified its interpretation.

In these recent decisions, the ECtHR concluded that the secret surveillance, as carried out in the manner described in the facts of the cases, violated Article 8 of the Convention.

The Roman Zakharov v. Russia decision, issued on 4 December 2015, concerned the allegations of the editor-in-chief of a publishing company that laws requiring three mobile network operators to install equipment permitting the Federal Security Service (“the FSB”) to intercept all his telephone communications, without prior judicial authorisation, interfered with his right to the privacy of his telephone communications.

The Court considered that “a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” must be verified and the interception shall meet the requirements of necessity and proportionality.

The Szabó and Vissy v. Hungary decision, issued on 12 January 2016, concerned the allegations of members of a non-governmental organisation critical of the Government that legislation enabling the police to search houses, postal mail, and electronic communications and devices without judicial authorisation, for national security purposes, violated the right to respect for private life and correspondence.

The Court considered that: “the requirement ‘necessary in a democratic society’ must be interpreted in this context as requiring ‘strict necessity’ in two aspects. A measure of secret surveillance can be found as being in compliance with the Convention only if it is strictly necessary, as a general consideration, for the safeguarding the democratic institutions and, moreover, if it is strictly necessary, as a particular consideration, for the obtaining of vital intelligence in an individual operation. In the Court’s view, any measure of secret surveillance which does not correspond to these criteria will be prone to abuse by the authorities with formidable technologies at their disposal.” Consequently, it must be assessed if “sufficient reasons for intercepting a specific individual’s communications exist in each case”.

In both cases, by requiring surveillance activities to be individually targeted, the ECtHR established that any indiscriminate interception is unacceptable. This is a most welcome position, considering the well-known legislative instruments and initiatives intended to strengthen the legitimacy of massive monitoring programmes in many EU Member States.

The ‘Safe Harbor’ Decision ruled invalid by the CJEU

Safe harbor?!? Not anymore.

Unfortunately, I did not have the time to address the ruling issued by the CJEU last October, by which the ‘Safe Harbour’ scheme, enabling transatlantic transfers of personal data from the EU to the US, was deemed invalid.

However, due to its importance, and because this blog is primarily intended to be about privacy and data protection, it would be a shame to finish the year without addressing the issue.

As you may well be aware, Article 25(1) of Directive 95/46 establishes that the transfer of personal data from an EU Member State to a third country may occur provided that the latter ensures an adequate level of protection. According to Article 25(6) of the abovementioned Directive, the EU Commission may find that a third country ensures an adequate level of protection (i.e., a level of protection of fundamental rights essentially equivalent to that guaranteed within the EU under the Directive, read in the light of the Charter of Fundamental Rights) by reason of its domestic law or of its international commitments.

On that basis, the EU Commission adopted Decision 2000/520, by which it concluded that the “Safe Harbour Principles” issued by the US Department of Commerce ensure an adequate level of protection for personal data transferred from the EU to companies established in the US.

Accordingly, under this framework, Facebook has been transferring the data provided by its users residing in the EU from its subsidiary in Ireland to its servers located in the US, for further processing.

These transfers and, unavoidably, the Decision itself were challenged by a reference to the CJEU (judgment in Case C-362/14) following the complaint filed by Max Schrems, a Facebook user, before the Irish DPA and subsequently before the Irish High Court. The main argument was that, considering the access to electronic communications conducted by US public authorities, the US did not ensure adequate protection of the personal data thus transferred.

According to the AG’s opinion, “the access enjoyed by the United States intelligence services to the transferred data constitutes an interference with the right to respect for private life and the right to protection of personal data”.

While considering that a third country cannot be required to ensure a level of protection identical to that guaranteed in the EU, the CJEU found that the decision fails to comply with the requirements established in Article 25(6) of the Directive, and that the Commission did not make a proper finding of adequacy but merely examined the safe harbour scheme.

The fact that the scheme’s ambit is restricted to adhering US companies, thus excluding public authorities, and that national security, public interest and law enforcement requirements, to which US companies are also bound, prevail over the safe harbour principles, was deemed particularly decisive in the assessment of the scheme’s validity.

In practice, this would amount to enabling the US authorities to access the personal data transferred from the EU to the US and to process it in a way incompatible with the purposes for which it was transferred, beyond what was strictly necessary and proportionate to the protection of national security.

As a result, the Court concluded that enabling public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life.

The Court stated that the decision disregards the existence of such negative interference on fundamental rights, and that the lack of provision of limitations and effective legal protections violates the fundamental right to effective judicial protection.

Upon issuance of this ruling, the Art29WP met and concluded that data transfers from the EU to the US could no longer be legitimised by the ‘Safe Harbor’ decision and, if occurring, would be unlawful.

While its practical implications remain unclear, the ruling undoubtedly means that companies relying on the ‘Safe Harbor’ framework for the transfer of personal data from the EU to the US need to rely, instead, on another legal basis.

In this regard, considering that not all Member States accept the consent of the data subject or an adequacy self-assessment as a legitimizing legal ground for such cross-border transfers, Model Contractual Clauses incorporated into contracts and Binding Corporate Rules (BCR) for intragroup transfers seem to be the most reliable alternatives in certain cases.

Restrictions on data transfers are obviously also foreseen in the GDPR, which, besides BCRs, Standard Contracts and adequacy decisions, includes new data transfer mechanisms such as certification schemes.

You can find the complete version of the ruling here.

Opinion of the EDPS on the dissemination and use of intrusive surveillance technologies

We need some more surveillance here!

In a recently published opinion, the EDPS addressed its concerns regarding the dissemination and use of intrusive surveillance technologies, which are described as aiming “to remotely infiltrate IT systems (usually over the Internet) in order to covertly monitor the activities of those IT systems and over time, send data back to the user of the surveillance tools.”

The opinion specifically refers to surveillance tools which are designed, marketed and sold for mass surveillance, intrusion and exfiltration.

The data accessed and collected through intrusive surveillance tools may contain “any data processed by the target such as browsing data from any browser used on that target, e-mails sent and received, files residing on the hard drives accessible to the target (files located either on the target itself or on other IT systems to which the target has access), all logs recorded, all keys pressed on the keyboard (this would allow collecting passwords), screenshots of what the user of the target sees, capture the video and audio feeds of webcams and microphones connected to the target, etc.”

These tools may therefore readily be used for human rights violations, such as censorship, surveillance, unauthorised access to devices, jamming, interception, or tracking of individuals.

This is particularly worrisome considering that software designed for intrusive surveillance is known to have been sold to governments conducting hostile surveillance of citizens, activists and journalists.

As such tools are also used by law enforcement bodies and intelligence agencies, this is a timely document, considering the security concerns dictating the legislative amendments planned in several Member States. Indeed, as pointed out by the EDPS, although cybersecurity must not serve as a pretext for disproportionate impact on privacy and the processing of personal data, intelligence services and the police may indeed adopt intrusive technological measures (including intrusive surveillance technology) in order to make their investigations better targeted and more effective.

It is evident that the principles of necessity and proportionality should dictate the use of intrusion and surveillance technologies. However, it remains to be determined where to draw the line between what is proportionate and necessary and what is disproportionate and unnecessary. That is the core of the problem.

Regarding the export of surveillance and interception technologies to third countries, the EDPS considered that, despite not addressing all the questions concerning the dissemination and use of surveillance technologies, “the EU dual use regime fails to fully address the issue of export of all ICT technologies to a country where all appropriate safeguards regarding the use of this technology are not provided. Therefore, the current revision of the ‘dual-use’ regulation should be seen as an opportunity to limit the export of potentially harmful devices, services and information to third countries presenting a risk for human rights.”

As this document relates to the EU cybersecurity strategy and the data protection framework, I would recommend reading it to anyone interested in these questions. You can find the document here.


From your hard drives to your SIM cards: how interesting are you?

Let’s see how we can hack these.

Just recently, the Investigatory Powers Tribunal (IPT), the court that oversees the British intelligence services’ activities, declared that the electronic mass surveillance of mobile phones and other private communications data retrieved from US surveillance programmes such as Prism, as conducted prior to December 2014, contravened Articles 8 (right to private and family life) and 10 (freedom of expression) of the European Convention on Human Rights.

One is not so optimistic as to expect this to suffice to make intelligence agencies cease sharing this kind of information, mainly because the same court has already recognised that the current legal framework governing data collection by intelligence agencies no longer violates human rights.

However, the decision was still applauded by many with the expectation that, at least, large-scale uncontrolled surveillance activities would not be so bluntly practiced.

Let’s just say that such expectation did not last long.

According to this week’s revelations by Kaspersky Lab, it seems that the NSA was able to hide spying software in hard drives produced by top manufacturers such as Toshiba, IBM and Samsung. Consequently, it has been able to monitor a large majority of personal, governmental and business computers worldwide (including those of financial institutions, telecommunications, oil and gas, and transportation companies).

Similarly, The Intercept reported that the NSA and GCHQ were able to obtain the encryption keys used on mobile phone SIM cards manufactured by Gemalto, keys intended to protect the privacy of mobile communications. Normally, an encrypted communication, even if intercepted, would be indecipherable. That ceases to be the case if the intercepting party has the encryption key, as it can then decrypt the communication.
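
The underlying point is basic symmetric cryptography: whoever holds the key reads the traffic. The toy sketch below (a naive XOR stream in Python, emphatically not the actual SIM card algorithms) illustrates why possession of the key turns interception into plain reading.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt by XOR-ing with a repeating key
    (the same operation works in both directions)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"hypothetical-sim-key"  # made-up key for illustration
ciphertext = xor_cipher(b"meet me at noon", key)

# Interception without the key yields scrambled bytes...
print(ciphertext == b"meet me at noon")   # -> False
# ...but with the key, decryption is immediate.
print(xor_cipher(ciphertext, key))        # -> b'meet me at noon'
```

Real SIM cards use proper ciphers rather than XOR, but the asymmetry is the same: stealing the key once defeats the encryption of every communication protected by it.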

What awe-inspiring ways to circumvent the consent of telecommunications companies and the authorisation of foreign governments! Isn’t it dignifying and trustworthy when intelligence services just behave like hackers?

Somehow, and unfortunately, such news almost lacks any element of surprise, considering, well, everything we already know… From the Snowden revelations to the logic-challenging argumentation that followed Apple’s and Google’s plans regarding the encryption of communications…

That said, perhaps we should all feel flattered to be spied upon. After all, as a former NSA Director once pointed out, the agency does not spy on “bad people” but on “interesting people”. Those of us pretty much convinced, as I am, of being just regular individuals must now be reassured by this extra boost of self-esteem.

The impact of the attack against Charlie Hebdo on our rights and freedoms

This will be the excuse for more intrusion.

I do not particularly appreciate the work of the satirical magazine Charlie Hebdo. I frequently find it distasteful and offensive. And yet I do like to live in a society where others are able to freely express themselves and where I am able to openly dislike or disagree with them. That is what freedom of expression is about. Of course it is not an absolute right and, of course, when the critique touches sensitive issues, such as religion, race, sexual orientation or gender, someone will most certainly get offended. But giving offence is not the main purpose of satire. As history shows us, this kind of critique has prompted reflections, discussions and cultural, political and social changes.

That said, a cold-blooded attack was conducted against the headquarters of the magazine, in Paris, and 12 innocent people were killed, over the drawing of cartoons. I cannot help lingering on the absurdity of these words as I write them. And feeling, all over again, the shock, the incredulity, the anger, the frustration, the revolt, the hope. And the fear. The fear of this invisible enemy who is able to strike anywhere, at any time, against anybody. The very same feelings that are awakened each time a terrorist attack occurs.

Looking at the solidarity marches held in Paris, it is impossible to overlook the unifying effect of this particular attack. It has united those in favour of freedom of speech, freedom of information, and, ultimately, the rule of law and democratic ideals. Values that are so deeply anchored in our mindsets and yet so frequently put at risk. On the other side, it has ignited one of the most powerful and basic feelings: fear. The same fear which has empowered anti-immigration movements with a fresh wave of arguments, increased xenophobia and fed the conflation of concepts such as Muslims, Islamism, extremism and terrorism. Strange as it may be, this event has joined in solidarity conflicting ideals that would otherwise never stand side by side. And this is where the scission happens.

In fact, when individuals feel insecure and threatened, intolerance towards minorities – cultural, ethnic and religious, for instance – arises. It has happened before. It has been happening more frequently due to the economic crisis. And it happened again a few days ago, considering the almost immediate surge in popularity of some extreme-right political parties on social networks.

Moreover, fear does not only compel individuals to peacefully accept the sacrifice of others’ rights and freedoms in order to preserve their own privileges and liberties. In the name of an allegedly greater value, such as national security, individuals also tend to more easily allow, without questioning, restrictions on their own civil and fundamental rights. Anything to feel safe again, or at least to live in the comfort of that illusion.

Times like these, when such emotions and beliefs so vividly oppose a common threat, are therefore treacherous. One particular danger lies in the appearance of legitimacy from which certain not-so-legitimate political ideals and governmental initiatives may benefit.

For instance, in the wake of the abovementioned attack, the French government notified the European Commission of the impending publication of decrees allowing websites that advocate or promote terrorist practices or ideals to be blocked without the intervention of a judge.

In this particular case, I sincerely fail to see any relation between the attack itself and such online activities, or to perceive how such decrees will somehow help to prevent any similar attacks in the future. However, it is most certainly a first step towards taking control over the content of online communications and achieving the desired Internet governance. In the wake of Edward Snowden’s revelations, it has already been made clear how interesting our communications can be to some intelligence services. Of course, if censorship can ever be defensible, it is particularly in this case. Nevertheless, it is a very hazardous path. Where to draw the limit? What guarantees do we have that this is not just the first step of the staircase? When will surveillance measures be enough?

Furthermore, as the fight against terrorism falls primarily within their competence, and in what seems to be the result of heated emotions and haste, some EU Member States are already developing extra security measures. No surprise here. Following a terrorist attack, it is quite common for governments to push for increased surveillance.

I have to admit that I am very sceptical about the efficiency of more intrusive government surveillance. I do believe that surveillance needs to be conducted in order to tackle terrorism. But the police and the intelligence services already conduct surveillance activities which allow for the identification of people involved in terrorist activities. For instance, the brothers Chérif and Saïd Kouachi, the authors of the attack against Charlie Hebdo, were already known to the security services, and this did not prevent the horrific murder of those people. Moreover, Charlie Hebdo was already known as a potential target, as it had been firebombed in 2011.

So the argument that more invasive powers of surveillance on a larger scale, which would imply treating everyone as a suspect, are required in order to prevent future attacks is very unconvincing. Surveillance must be targeted and limited, and the competence of courts regarding restrictions on individuals’ fundamental rights cannot be diluted.

Considering the existing fear, it is very easy to turn terrorist attacks into the perfect excuse for the practice of mass surveillance and full government control over the Internet. However, this would get us dangerously close to the very same political regimes we are so proud to differ from. Contrary to what some of us might think or say, we do not want to risk living in a society where we are all monitored and afraid to express ourselves. Mass surveillance does not only violate our privacy; it also undermines our ability to speak freely. In this context, the line into censorship can be smoothly crossed. Which is the opposite of what Charlie Hebdo actually stands for.

I mean, if this attack was primarily directed at the freedom of expression of a democratic country, counter-attacking that same freedom of expression – albeit in its online manifestation – does seem a little odd. Shouldn’t we aim for precisely the opposite: to protect the very rights and freedoms that have been attacked? Our freedoms are not protected by further limitations.

At the EU level, border management, internal security, the travelling of “foreign fighters” and online terrorist propaganda were already very vivid concerns. In the wake of the Charlie Hebdo attacks, the European Commission has pledged to present a new programme to fight terrorism. Under the present scenario, it is very likely that the discussions regarding an EU PNR (Passenger Name Record) system will be boosted.

Only time will tell to what extent these terrorist attacks were able to affect our core values. But in the aftermath, it seems that, if the intention of the attack was to undermine our fundamental rights, the attackers may, in the long run, be successful.

 

The Sony data breach: when fiction meets reality?

You better believe SONY. You have been HACKED!

It is not the first time that Sony has suffered a massive cyber attack. Back in 2011, due to vulnerabilities found in its data servers, a hack of its PlayStation online network service enabled the theft of names, addresses and credit card data belonging to 77 million user accounts.

A few days ago, Sony Pictures computer systems were hacked again, allegedly by a group calling themselves Guardians of Peace. As a consequence, a humongous amount of data was leaked to the Internet, including confidential details – such as medical information, salaries, home addresses and social security numbers – regarding 47 thousand Sony employees and former employees, including Hollywood stars, as well as contracts, budgets, layoff strategies, scripts for movies not yet in production, full-length unreleased movies and thousands of passwords.

The reason remains unclear. Despite the denial of a North Korean representative regarding a possible involvement of that country, it is being speculated that this attack is a retaliation by the North Korean government against an upcoming Sony comedy, ‘The Interview’, starring actors Seth Rogen and James Franco, which depicts an assassination attempt against North Korea’s leader Kim Jong-un. If Hollywood comedies are now deemed a sufficient reason to conduct cyber attacks in real life, fiction and reality are meeting in a very wrong way.

Anyway, considering the volume and the sensitive nature of the information disclosed, this may actually be one of the largest corporate cyber attacks ever known.

It is a sharp reminder that hacking attacks can be directed at any company and can take many forms, all equally damaging. This attack demonstrates once again that not only critical infrastructure is at risk. Sony Pictures Entertainment is one of the largest studios in Hollywood. It is really not the expected victim of a cyber attack. However, it was an easy prey, as its business decisions regarding information security had been publicly stated on previous occasions. Despite their ludicrous nature, I guess someone took those comments seriously.

Considerations regarding the absurdity of having a file directory named ‘Passwords’ aside, this attack underlines that data breaches are among the major threats that companies face nowadays. Cyber attacks are conducted against companies of all sizes. Large companies do eventually recover from these breaches. Small businesses, by contrast, rarely pull through after suffering a cyber attack. It is therefore essential that businesses implement a solid cyber-security programme, including regular self-hacking exercises to assess the vulnerabilities of their security systems in order to prevent a potential breach.
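Even the most elementary part of such a programme would have helped here: credentials should never sit in plaintext files. A minimal sketch in Python – assuming nothing about Sony’s actual systems – of the standard alternative, storing only salted scrypt hashes so that a leaked database does not directly expose any password:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted scrypt hash; only this pair is stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("letmein", salt, digest)
```

An attacker who steals the salt/digest pairs still faces a deliberately expensive brute-force search per password, instead of a directory that simply hands everything over.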

What about Sony? Well, the damage to its employees is incalculable, considering that their identities may be stolen, their bank accounts compromised and their houses robbed. Only time will tell if and how it will recover.

© 2017 The Public Privacy
