Category: Search Engines

The ‘right to be forgotten’
extended to

Forgetting everywhere


As you may well remember, the Court of Justice of the European Union ruled, in the judgment better known as the ‘right to be forgotten’ decision, that individuals, provided that certain prerequisites are met, have the right to require search engines, such as Google, to remove certain results about them from the list presented in response to a search based on the person’s name (you can read more here, here, here, here and here). According to the ruling, the original information will still be accessible under other search terms or by direct access to the publisher’s original website.

In this context, in cases where the criteria for de-listing are met and the search engine does not remove the links requested by the data subject, the latter may complain to his or her national data protection authority or judicial authority.

Against this background, last week the Article 29 Working Party, which gathers representatives of the 28 national data protection authorities (hereafter DPAs) of the EU Member States, adopted Guidelines on the implementation of the judgment. This did not really come as a surprise, as the Working Party had already announced its decision to establish a common approach to the right to be forgotten.

Indeed, the ruling left many questions unanswered. For instance, it remained unclear whether Google was obliged to block requested names on its European domains only or on all of its search domains. It was also left unanswered how the balance between the relevant rights and interests at stake could be achieved and how the concept of ‘public figure’ could be defined.

According to the Article 29 Working Party, the ruling refers to search engines only as “data controllers” and does not apply to the original source of information: the information is not to be removed entirely, just from search indexes.

Furthermore, users who try to visit Google’s website are directed to localised editions of the company’s search service, and Google’s current de-listing practice consists in removing results from the European versions of its search engine, but not from the international ‘.com’ version. In this regard, the Article 29 Working Party considers that Google has therefore failed to implement the abovementioned ruling effectively. Indeed, it considers that limiting de-listing to EU domains, on the grounds that users tend to access search engines via their national domains, does not sufficiently guarantee the rights of data subjects, and it concluded that de-listing should be conducted on all relevant domains, including ‘.com’. This conclusion is certainly in line with a recent position of a French court, which decided that Google should remove a link to a defamatory article for a particular search on both its ‘.fr’ and ‘.com’ domains.
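The practical difference between the two approaches can be pictured with a minimal, purely hypothetical sketch (the function, domain and URL names are my own, not Google’s actual system) of domain-scoped versus global de-listing of results for a name-based query:

```python
# Hypothetical sketch: domain-scoped vs. global de-listing.
# All names, URLs and data structures are illustrative, not Google's system.

EU_DOMAINS = {"google.fr", "google.de", "google.es"}

# De-listing requests that were granted: (name, url) pairs.
delisted = {("jane doe", "http://example.com/old-notice")}

def search(query, domain, index, scope="eu"):
    """Return results for a name query, dropping de-listed links.

    scope="eu"     -> remove only on EU domains (Google's criticised practice)
    scope="global" -> remove on every domain (the WP29's position)
    """
    results = index.get(query, [])
    blocked = {url for (name, url) in delisted if name == query}
    if scope == "global" or domain in EU_DOMAINS:
        results = [url for url in results if url not in blocked]
    return results

index = {"jane doe": ["http://example.com/old-notice",
                      "http://example.com/recent-interview"]}

# Under EU-only scope, the de-listed link survives on the .com domain;
# under global scope, it is removed everywhere.
print(search("jane doe", "google.com", index, scope="eu"))
print(search("jane doe", "google.com", index, scope="global"))
```

The sketch only illustrates why the Working Party considered EU-only de-listing an incomplete remedy: the same query on the ‘.com’ domain still surfaces the removed link.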

In addition, the document clarifies that Google is not required to block links if searches lead to the same result without using the individual’s name and that, although all individuals have a right to data protection under EU law, DPAs should focus on claims where there is a clear link between the data subject and the EU.

Moreover, referring to the notice posted by Google at the bottom of search results, stating that “some results may have been removed under data protection law in Europe”, the Working Party deems that informing users that the list of results to their queries is not complete has no legal basis under data protection law, and that such a notice is acceptable only if it does not allow users to conclude that results relating to a particular individual have been de-listed.

Likewise, because there is no legal basis for such routine communication under EU data protection law, and in order to avoid a ‘Streisand effect’, the Working Party considers that search engines should not, as a general practice, inform the webmasters of the pages affected by removals that some web pages cannot be accessed from the search engine in response to a specific name-based query. However, it accepts that contacting the original publisher of the content targeted by a de-listing request may be appropriate when more information is required in order to take a decision.

Furthermore, the guidelines establish a list of 13 common criteria which the data protection authorities should apply when handling complaints following refusals of de-listing by search engines.

It is also stated that no single criterion is determinative and that, in most cases, more than one will have to be taken into consideration. However, each criterion has to be applied in the light of the interest of the general public in having access to the information.

The Working Party further concluded that the impact of de-listing on individuals’ rights to freedom of expression and access to information will prove, in practice, to be very limited, and that the balance between the public interest and the rights of the data subject will have to be assessed on a case-by-case basis.

The abovementioned list includes guidance on what may constitute ‘public life’, considering that, while details of the private life of a public figure may be de-listed, information regarding his or her public role or activities should remain available for search:

It is not possible to establish with certainty the type of role in public life an individual must have to justify public access to information about them via a search result. However, by way of illustration, politicians, senior public officials, business-people and members of the (regulated) professions can usually be considered to fulfil a role in public life. There is an argument in favour of the public being able to search for information relevant to their public roles and activities.

A good rule of thumb is to try to decide whether the public having access to the particular information – made available through a search on the data subject’s name – would protect them against improper public or professional conduct. It is equally difficult to define the subgroup of ‘public figures’. In general, it can be said that public figures are individuals who, due to their functions/commitments, have a degree of media exposure.

In addition, search engines are called upon to be more transparent regarding their de-listing criteria. This is a legitimate concern, as Google has so far released only very limited and abstract information in this regard.

Although these guidelines are not legally binding, they reflect a consensus position of the national regulators and will therefore certainly influence the enforcement decisions taken by the Member States’ data protection authorities. Considering all the issues surrounding the implementation of the ruling, these guidelines are undoubtedly useful. Nonetheless, it remains to be seen whether Google will actually follow the guidance and extend de-listing to ‘.com’ as well.

In my personal opinion, it is quite outlandish and unrealistic, from both a legal and a technical point of view, to assume that the ‘right to be forgotten’ can become a global reality rather than just a European one. Considering the overlapping jurisdictions that govern the Internet, it is striking that so little attention is paid to the fact that such global de-listing creates a divergence, at the international level, between the EU Member States and the rest of the world.

The Internet should not be subjected to geographical boundaries. Assuming that the rules of one country can actually apply worldwide online comes close to web censorship. In fact, the EU is not alone in its aim for the global implementation of its rules: Russia and China, for instance, have quite global ambitions regarding Internet governance. Of course, one may argue that the EU’s motivations are legitimate, intended to protect individuals’ private sphere, and do not amount to censorship. But considering that the alleged legitimacy of the measures is the most frequent argument invoked to justify censorship, is this really the example the EU wants to set?

The Google Affair: Forget Me, Forget Me Not – Part II

I can forget you, you know?


Bad pictures, dire comments, past scandals. The internet never seems to forget. How convenient it would be to be able to suppress any record of past events which might adversely affect our honour and dignity or simply expose our private life in an undesirable way.

According to the recent ruling of the ECJ, anyone wishing to have personal information deleted from a search engine’s index will have to submit a request to the search engine service provider.

This decision is undoubtedly good news for those who would like to suppress information about themselves from the internet, but it is likely to cause problems in practice.

It is unquestionable that people have a right to privacy, and that that privacy extends to information about them. They thus have the right to control their personal data and can ask for its deletion when they no longer have an interest in its processing or storage and there is no legitimate interest in its being kept by a data controller.

The issues regarding the qualification of search engines services providers as ‘data controllers’ have already been raised here and here. Unfortunately, our concerns don’t end there.

According to this decision, search engines have to deal with individual complaints; otherwise, they can be challenged before the judicial courts or a supervisory authority.

In order to implement the decision, Google had to devise a mechanism (an online form available on its website) to receive requests for link removal, and it was swamped by the number of requests for personal data deletion received from nationals of the 28 EU countries. This represents a major additional administrative burden for Google and will affect all other web intermediaries as well, as the repercussions of this ruling extend well beyond Google.

Search engine service providers will then decide on the balance of the rights at stake, weighing up whether it is in the public interest for that information to remain.

Does it make any sense to establish this principle when there is no evident and easy answer as to what privacy means nowadays, or as to where we should draw the line between what belongs exclusively to the private sphere and what belongs to the public domain?

It is a fact that Google has been removing links to copyright-infringing content at the request of copyright holders. In those cases, however, there is no individual assessment and the process is mostly automated, which is a very different scenario from what the ruling entails. Indeed, each request has to be considered individually and demands an appraisal that a search engine is not fit to make. Complying with this ruling could thus end up being very costly, because this kind of assessment cannot be carried out by algorithms.

As much as I may agree with the motivation of the ruling, this just doesn’t make any sense. The ECJ thus passes the responsibility of finding the right balance over to private entities – businesses whose primary concern is profit – although they have neither the expertise nor the legitimacy to act as a legal authority. As a general principle, the deletion of websites, or of search links, should be decided by a legitimate entity vested with public authority, ultimately a court.

In any event, pertinent questions arise regarding the interpretation of the rules set out in the judgment. What exactly is a public figure? What qualifies as being in the public interest? How much time has to pass before personal data is no longer relevant? How can a search engine determine whether data is inaccurate or its processing excessive or disproportionate? How will the rights of the publisher be safeguarded in the internal process of a private company?

In the absence of any rigorous criteria to balance the rights of the data subject against the search engine’s economic interests and the public interest, search engine services providers lack a precise test to apply when assessing requests from data subjects for removing links to websites containing their personal data.

The advisory committee set up by Google might not be the most adequate solution. The agreement reached by the national data protection authorities to form a subcommittee to establish uniform handling of such requests is most welcome.

In order to avoid any likelihood of liability for breaching data protection law and for the unlawful processing of data, search engine service providers might prudently remove links, regardless of any public interest in their disclosure, rather than weigh the balance of rights in every request. And who could possibly blame them?

The most evident and worrying outcome is that our ability to find information about other individuals is substantially reduced.

But it leaves room for other discrepancies as well. Will links deleted from the specific index created for a particular country (e.g. where a specific national language is required) still be available within searches carried out in other countries? Or will only search engines with no connection to the EU be able to serve the results?

We might be dangerously heading toward a tiered and fragmented internet, with search results in Europe being less complete than elsewhere. What about the free and open internet, then?

And what about managing the potential interests of third parties also concerned by the deleted information, who may likewise have personal data published but who, on the contrary, wish to be easily found online through the search engine’s results?

The Advocate General has already warned (paragraph 134 of the Opinion) that the suppression of legitimate public domain information would amount to censorship. For the sake of full access to information and freedom of expression, it is imperative that Google remain a neutral platform. To that end, empowering a search engine service provider with the competences of a censor is hardly a desirable result.

The main search engines operating in Europe have already met, in Brussels, with the Article 29 Working Party, which brings together data protection authorities from across the EU and intends to issue EU-wide guidelines in order to achieve a unified implementation of the ruling.

Considering the possible scenarios foreseen, these guidelines are very much welcome. However, it seems that we are now trying to correct a system that was wrongly built from the start. Indeed, according to its press release, the DPAs asked the search engines, for instance, to explain their de-listing process and the criteria applied when balancing their economic interest, the public interest in accessing information and the rights of the person who wants the search results de-listed.

Google has now planned public hearings, in an alleged quest for transparency, in order to stimulate debate on the implementation of the ruling.

More recently, at the WP29 plenary meeting of 16 and 17 September, the European data protection authorities decided to put in place a network of contact persons in order to develop common case-handling criteria for the complaints presented to the data protection authorities following search engines’ refusals to ‘de-list’ links from their results.

Now that the confusion has started, one may wonder where all this will end.



The Google Affair: Forget Me, Forget Me Not – Part I

Am I forgetting something?


The ruling better known as the ‘right to be forgotten’ decision is in no danger of being forgotten any time soon.

To make a long story short, the ECJ ruled that the operator of a search engine – in that particular case, Google – is obliged to remove from the list of results displayed, following a search made on the basis of a person’s name, links to web pages published by third parties and containing information relating to that person, even where the publication is otherwise lawful and the information factually correct.

I understand (and welcome) the motivation behind the judgment. It is comforting to know that individuals, where there is no legitimate – and consequently stronger – public interest involved, may be able to move on from their past. As it is very difficult to confine our lives to the offline world, Google and search engines in general are, nowadays, the primary instrument for finding information about everything and anyone, to the point that ‘google’ is commonly used as a verb referring to an online search conducted on that search engine. Without search engines, information would not, in the vast majority of cases, be so easily accessible.

And that is certainly the main issue at stake. As stated by the Court:

(…) processing of personal data, such as that at issue in the main proceedings, carried out by the operator of a search engine is liable to affect significantly the fundamental rights to privacy and to the protection of personal data when the search by means of that engine is carried out on the basis of an individual’s name, since that processing enables any internet user to obtain through the list of results a structured overview of the information relating to that individual that can be found on the internet — information which potentially concerns a vast number of aspects of his private life and which, without the search engine, could not have been interconnected or could have been only with great difficulty — and thereby to establish a more or less detailed profile of him. (paragraph 80 of the ruling)

So a ‘right to be forgotten’ might as well be a necessary principle in an unforgetting world.

But, truth be told, the principle of giving citizens more control over their personal data – including its deletion – is not exactly the new right in town. It existed in several Member States’ legislation long before the ECJ’s decision, although without such a sticky nickname. Furthermore, it is foreseen in Directive 95/46 on data protection, which remains applicable to the Internet as it has developed since. And a related provision is expected to be included in the General Data Protection Regulation, intended to replace Directive 95/46/EC and currently being negotiated at the Council of the European Union. The principle that individuals should be able to move on from their past is therefore nothing new.

What was indeed a surprise, and not a good one I must admit, was the conclusion that Google, while processing information published by third parties, acts, and is liable, as a controller, and shall delete links to articles lawfully published by third parties, even though those articles remain available on the latter’s websites.

One must admit that this conclusion is in line with the obligations that Article 6(2) of the Directive foresees for controllers when processing personal data: the controller shall weigh its own interests against those of third parties and those of the data subject.

That conclusion is also in line with the observations of the Court in the ASNEF and FECEMD ruling (Joined Cases C-468/10 and C-469/10 ASNEF and FECEMD [2011] ECR I-0000, paragraphs 44–45) regarding the relevance of the data’s previous appearance in public sources.

And, setting aside all the questions (as mentioned here and here) that unavoidably arise when a search engine is considered a ‘controller’, the ECJ tried to establish criteria for the removal of links and laid down a balancing test to that end.

Therefore, search engines shall erase data deemed inadequate, irrelevant or no longer relevant, or excessive in the light of the time that has elapsed (paragraph 94 of the ruling). According to the ECJ, even accurate data, although initially lawfully published, can “in the course of time become incompatible with the directive” (paragraph 93 of the ruling). These key criteria – “inadequate, irrelevant or no longer relevant, excessive in relation to the purposes of the processing” – shall be balanced against the economic interests of the operator of the search engine and the interest of the general public in finding that information upon a search relating to the data subject’s name (paragraph 97 of the ruling).

However, as rightly pointed out by the ECJ:

(…) that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question. (paragraph 97 of the ruling)

The preponderant public interest is consequently the ultimate test for ensuring that information which ought to remain widely available is not removed.

I personally doubt that the Court succeeded in defining the rigorous criteria necessary to achieve a fair balance between the rights to privacy and personal data protection and the freedoms of expression and information. In fact, the ECJ provides very little legal certainty, as it merely portrays an unclear balancing act between the rights of the data subject, the search engine’s economic interests and the public interest, to be determined on a case-by-case basis.

And this can have very serious repercussions, considering that the removal of links from the list of results can affect the legitimate interests of the publishers of the information and of internet users potentially interested in having access to that information.

As pointed out by the Advocate General:

The data protection problem at the heart of the present litigation only appears if an internet user types the data subject’s name and surnames into the search engine, thereby being given a link to the newspaper’s web pages with the contested announcements. In such a situation the internet user is actively using his right to receive information concerning the data subject from public sources for reasons known only to him. (paragraph 130 of the Opinion)

And he added:

In contemporary information society, the right to search information published on the internet by means of search engines is one of the most important ways to exercise that fundamental right. This right undoubtedly covers the right to seek information relating to other individuals that is, in principle, protected by the right to private life such as information on the internet relating to an individual’s activities as a businessman or politician. An internet user’s right to information would be compromised if his search for information concerning an individual did not generate search results providing a truthful reflection of the relevant web pages but a ‘bowdlerized’ version thereof. (paragraph 131 of the Opinion)

It most certainly does not help that the ECJ establishes that the data subject’s rights protected by Articles 7 and 8 of the Charter override, as a general rule, that interest of internet users (paragraph 81 of the ruling). Does this mean that the right to privacy is the general principle, subject only to sporadic exemptions in which freedom of expression and the right to information exceptionally take precedence?

This priority does not seem to be established either in the Charter or in the European Convention on Human Rights. One would rather expect a data subject’s right to the protection of his private life to be balanced against other fundamental rights, namely freedom of expression and freedom of information. Additionally, the conclusion that inclusion in a search engine’s list of results constitutes a more significant interference with the data subject’s fundamental right to privacy than the publication on the web page itself (paragraph 87 of the ruling) seems to forget that, nowadays, freedom of expression is not just about the right to publish, but also about being found online. We have reached a point where not being found through a search engine and not existing online are pretty much the same thing.

Therefore, the fact that the ruling applies to search engines but not to news websites or other journalistic activities, owing to the exemptions for the “media” under European data protection law, so that the original information itself may remain, does not represent a satisfactory guarantee of the right to information at a time when people have access to both newsworthy information and publishing tools.

Considering the challenges at stake, it is very unlikely that a search engine service provider will satisfactorily balance all these conflicting interests. The general fear that the implementation of this ruling will entail a sacrifice of freedom of expression and information might not be mere alarmism.

It is worth questioning whether and how this ruling will affect the negotiations for a General Data Protection Regulation currently ongoing at the Council of the European Union. The Italian Presidency circulated to the DAPIX working group a note entitled ‘Right to be forgotten and the Google judgment’ to examine how the future legislation on the right to deletion should be developed.

This creates some confusion, considering that it is for the Council of the European Union and the European Parliament, as co-legislators, to make the law as it will stand in the future, and for the ECJ to interpret the law as it exists.



The Google Affair: To be or not to be a Data Controller – Part II

Data controller or Data processor?


With an evident and unfortunate confusion between the roles of ‘data controllers’ and ‘data processors’ (as analysed previously), the European Court of Justice (ECJ), in its famous ruling, qualifies Google as a data controller even though it merely reproduces existing information.

This has far reaching implications and raises a number of complex and problematic issues. In this post, we will only be able to address some questions.

Indeed, considering search engine service providers to be data controllers for the purposes of EU law entails that, when they process personal data, they are obliged to respect the rights and freedoms of data subjects and to comply with the set of principles and obligations foreseen in Directive 95/46, and that they face liability should they fail to do so.

This obviously forces a shift in search engines services providers’ responsibilities regarding the individuals whose information they process.

A very pertinent and practical question must be asked: how can search engine service providers fulfil the obligations incumbent on controllers, as provided in Articles 6, 7 and 8 of the Directive, in relation to personal data sourced from web pages hosted on third-party servers?

If an internet search engine service provider is to be considered a controller, it must guarantee that the personal data processed is adequate, relevant and not excessive in relation to the purposes for which it was collected, that it is kept up to date, and that it is stored for no longer than is necessary for those purposes.

Considering that the search engine merely locates information made available by third parties, indexes it automatically and makes it accessible to internet users according to a particular order of preference, it remains to be explained how a search engine service provider will be able to appraise its compliance with those requirements. Isn’t the publisher of the website concerned in a better position to conduct that assessment?

As to the criteria concerning the legitimacy of processing data made available on the internet, including personal data, in the absence of the data subject’s consent, it is unquestionable that internet search engines serve legitimate interests. Indeed, they allow easy and quick access to information and contribute to the dissemination of the information uploaded on the internet.

But what happens when the search engine processes information that falls within a special category of data (e.g. personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, and data concerning health or sex life), the processing of which is prohibited unless, for instance, the data subject has given his explicit consent?

It is important to reflect on the hypothesis raised by the Advocate General in his Opinion:

if internet search engine service providers were considered as controllers of the personal data on third-party source web pages and if on any of these pages there would be ‘special categories of data’ referred to in Article 8 of the Directive (e.g. personal data revealing political opinions or religious beliefs or data concerning the health or sex life of individuals), the activity of the internet search engine service provider would automatically become illegal, when the stringent conditions laid down in that article for the processing of such data were not met.

Would the processing of special categories of data by search engines be deemed to be illegal if the requirements for the processing of such data on third-party source web pages were not met? Is that even a conceivable scenario?

Additionally, considering that, in order to be lawful, and if no other criterion is applicable, the processing of personal data must be carried out with the consent of the data subject, it is only legitimate to ask how search engine service providers can obtain consent from data subjects with whom they have never been in contact.

Furthermore, one must wonder about the effectiveness of the exercise of the data subjects’ right of access to data, as foreseen in the abovementioned Directive, and about Google’s ability to comply satisfactorily with that obligation, given that its activity of caching web pages for relevance-based index ranking depends on an algorithm that performs the content analysis automatically. Given this notion of relevance, for example, it will be impossible in practice to distinguish between people who share the same name.

I guess we all just have to wait to see how all the implications will work in practice…

The Google Affair: To be or not to be a Data Controller – Part I

Is Google a Data Controller?

What am I?

It would be very difficult – and to some extent odd – to start a blog intended to express my viewpoints on privacy issues and related matters without mentioning the already thoroughly commented ruling, better known as the “right to be forgotten” decision, in which the European Court of Justice (ECJ) concluded that, in order to comply with the rights laid down in Directive 95/46:

the operator of a search engine is obliged to remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person, also in a case where that name or information is not erased beforehand or simultaneously from those web pages, and even, as the case may be, when its publication in itself on those pages is lawful.

It might not come as a surprise – after all, I have just created a blog – but I also have some comments that I would like to share in this ever-growing, globalized and interconnected technological world, where freedom of expression (still) prevails.

However, I do not intend, at least today, to reflect on how the ‘right to be forgotten’ may or may not work in practice.

Instead, I would like to refer to a recent speech by Martine Reicherts, who stated that internet search engine service providers, such as Google, have a big responsibility in ensuring that personal data is handled properly.

And it is precisely this new notion of ‘responsibility’ regarding search engine service providers that I find disturbing. After all, until very recently, internet search engine service providers’ liability for the third-party content they transfer and/or store had always been quite restricted. I therefore have some doubts and concerns about this sudden and innovative understanding of internet search engine service providers as data ‘controllers’.

One should not forget that the role and legal position of internet search engine service providers have not been expressly regulated in EU legislation.
Their activity consists in locating information made available on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it accessible to internet users according to a particular order of preference.

In this context, we agree with the ECJ’s understanding that an internet search engine provider processes personal data (where those data relate to an identified or at least identifiable subject) within the meaning of Directive 95/46, regardless of the fact that it also carries out the same operations in respect of other types of information and does not distinguish between the latter and personal data.

However, in this same context, the internet search engine service provider cannot be automatically considered as ‘controller’ of the processing of such personal data!

A controller is, according to the aforementioned Directive:

the natural or legal person […] which alone or jointly with others determines the purposes and means of the processing of personal data.

A special reference to the processing of personal data by information society services that act as selection intermediaries is absent from the Data Protection Directive.

In this particular case, since its service is based on completely automated search algorithms which filter data and deliver results according to relevance and popularity, Google is neither aware of nor exercises any control over the data included in the publisher’s webpage. In this particular context, Google is not able to distinguish between personal data and other data. Moreover, it is not able to change the content held on the host servers.

It is therefore evident that Google acts as a data processor, i.e., an information society intermediary between content providers and internet users, merely storing information and supplying an information location tool.

As stated by the Article 29 Data Protection Working Party:

[t]he principle of proportionality requires that to the extent that a search engine provider acts purely as an intermediary, it should not be considered to be the principal controller with regard to the content related processing of personal data that is taking place. In this case the principal controllers of personal data are the information providers.

I thus strongly support the view expressed by the Advocate General in his Opinion on this case.

Surprisingly, the ruling points in the opposite direction, and Google was held responsible, under data protection law, for the results it returns. The fact that the processing of data is conducted entirely by complex algorithms, without any control on Google’s part, was absolutely disregarded by the ECJ.

Instead, the ECJ stated that search engines bear a responsibility of their own, additional to and distinct from that of the publishers who load the data on an internet page. Indeed, according to the ruling, search engines may be held liable for making outdated information accessible merely because it is still available on a publisher’s website:

that activity of search engines plays a decisive role in the overall dissemination of those data in that it renders the latter accessible to any internet user making a search on the basis of the data subject’s name, including to internet users who otherwise would not have found the web page on which those data are published.

This is just too far-fetched…

© 2023 The Public Privacy
