In today’s data economy, more and more decisions are driven by algorithms with little to no human intervention, in both the private and public sectors: from the services we are offered and the people we date to our creditworthiness and, in some instances, our supposed proneness to welfare and tax fraud.
Ever since the Facebook-Cambridge Analytica fallout, the black box problem of AI algorithms has irreversibly become political on both sides of the Atlantic, with high-level officials ranging from German Chancellor Merkel to European Commission Executive Vice-President Vestager taking a firm stance on how transparent online platforms should be about the way their algorithms work.
Governing algorithms is inherently a balancing exercise. On the one hand, private persons (both individuals and entities) who are subjected to an algorithm-produced decision have a right to be informed about its modus operandi, including the rationale behind such decision-making, the types of data processed, and the potential risk of error. On the other hand, algorithms give businesses a competitive advantage, and disclosing their functioning would go directly against the interests of a company whose entire business model relies on them.
While AI tools still exist in a relative legal vacuum, this blog post explores: 1) the extent of the protection granted to algorithms as trade secrets, and its exceptions of overriding public interest; 2) how the new generation of regulations at the EU and national levels attempts to provide algorithm transparency while preserving trade secrecy; and 3) why the latter development is not a futile endeavour.
The Default Regime of Trade Secrets and Public Interest Exceptions
Algorithms as Trade Secrets
Among digital rights organisations and governments, transparency of automated decision-making (ADM) has increasingly been regarded as a panacea against opaque computing systems (see here and here); however, these calls have often constituted high-level rhetorical positioning. This poses a problem, since most of the complex algorithms dominating our lives (including those developed by Google and Facebook) are proprietary, i.e. shielded as trade secrets, while only a negligible minority of algorithms are open source.
Under Article 2 of the EU Trade Secrets Directive, to warrant protection, the information has to 1) be secret; 2) have commercial value due to its secrecy; and 3) be subject to reasonable steps to keep it secret. This third criterion is the most tangible and the easiest for businesses to demonstrate. To satisfy it, companies adopt non-disclosure agreements, include clauses banning reverse engineering in their licensing agreements, or limit the number of possible licences altogether so as not to undermine secrecy.
However, the protection granted by the Directive is not absolute. Article 1(2)(b), bolstered by Recital 11, concedes that secrecy will take a back seat if the ‘Union or national rules require trade secret holders to disclose, for reasons of public interest, information, including trade secrets, to the public or to administrative or judicial authorities for the performance of the duties of those authorities’.
There are no guidelines to aid judges or public authorities as to where to strike the balance when the private interest in keeping an algorithm under wraps conflicts with the more ephemeral public interest. What is clear is that the types of prevailing public interest are not enumerated in the Directive itself and are effectively left to the CJEU and national courts to interpret.
Union Competition Rules as Overriding Public Interest
With regard to trade secrets in general, in the Microsoft case, the Court of First Instance (now the General Court) held that a refusal by Microsoft to share interoperability information with a competitor constituted a breach of Article 102 TFEU. Recognising the value of Microsoft’s trade secrets, the Court relied on the ‘exceptional circumstances’ doctrine developed earlier in Magill and IMS Health, and re-established that ‘the mere fact of holding intellectual property rights could [not] in itself constitute objective justification for the refusal’, as otherwise ‘the raison d’être of the exception which that case-law thus recognises in favour of free competition’ would be defeated (see Microsoft, para 690).
As to algorithms specifically, in the Google Shopping case, the European Commission found that Google had abused its dominant position by demoting rival comparison shopping services in its search results, thus violating Article 102 TFEU. Although the trade secrets remained protected from the public and from competitors, Google had to disclose its PageRank parameters to the Commission as the administrative authority for the performance of its investigative duties. It is possible that a similar examination will take place in the recently launched probe into Amazon’s treatment of third-party sellers.
Public Sector Transparency as Overriding Public Interest
Within the EU, public bodies are increasingly being subjected to higher levels of scrutiny and accountability. Since October 2020, algorithm transparency has been implemented ex ante at the local level, with the cities of Amsterdam, Helsinki and Nantes establishing beta versions of registers describing the algorithms used in their city administrations. To ensure that the AI employed by public services is human-centred, the registers explain, inter alia, how data is processed, what risks are involved, and whether the tools are subject to human oversight.
Moreover, in the decisions of some Member States’ oversight bodies and courts (e.g., in France and Italy), when an algorithm is used by the public administration, it is viewed as an administrative act and is hence subject to disclosure.
In France, the 2016 Law for a Digital Republic (see Article 4, bolstered by the detailed implementing Decree n° 2017-330) established a right of access to administrative documents, with a caveat for trade secrets. Under the Decree, whenever a public body subjects residents to algorithmic processing, the latter are entitled to be informed of: 1) the degree to which algorithmic processing contributes to decision-making; 2) the data processed; 3) the processing parameters; and 4) the operations to which such processing is applied. The information should be communicated to an individual upon request, in intelligible language and without infringing on secrets protected by law (see Article R. 311-3-1-2).
However, later, in 2018, when the Constitutional Council was examining the bill harmonising French data protection law with the GDPR, it held that if a public body cannot communicate the operating principles of an algorithm without harming protected secrets [including commercial secrets], no decision can be taken on the sole basis of that algorithm (see para 70). In other words, on the French Constitutional Council’s interpretation, if a public body relies solely on an algorithm for its decision, trade secrecy may not be invoked as an excuse not to disclose the algorithm’s functioning.
The Italian courts offer similar reasoning. In 2017, the Lazio Regional Administrative Court stated that an algorithm amounted to a digital administrative act and that citizens thus had the right to access it. In 2019, the Administrative Supreme Court of Italy went further and conducted a weighing of interests, concluding that when an algorithm is used in public administration, IP right holders cannot expect secrecy, since the public’s right to ‘full knowability of the algorithm used and the criteria applied’ prevails.
However, more recently, the same court recognised holders of commercial/technical secrets as ‘parties with conflicting interests’ within the meaning of Article 22(1)(c) of the Italian Administrative Procedure Act, as opposed to those who request access to the algorithm in order to verify its correct functioning.
Sanctioning the Use of Algorithms without Disclosure
Sometimes, national courts will satisfy themselves with reasoning at the level of fundamental human rights, without turning to the interpretation of any sector-specific regulations.
For instance, in February 2020, the District Court of The Hague held that the System Risk Indication algorithm that the Dutch government used to detect fraud in areas such as benefits, allowances, and taxes violated the right to privacy (Article 8 ECHR), inter alia, because it was not transparent enough: the government had neither publicised the risk model and the indicators of which it is composed, nor submitted them to the Court (para 6.49). Unlike in the Italian and French cases mentioned above, the court did not order disclosure of the algorithm’s workings to rectify the potential wrongs and ensure transparency; rather, the very fact that the algorithm was not transparent to begin with undermined the residents’ right to privacy.
Somewhat similarly, in the recent case against the food delivery app Deliveroo, the Court of Bologna did not seek to analyse how the challenged algorithm functioned. The Court held that the algorithm used by the app to determine the ‘reliability’ of a rider had discriminatory outcomes. And just as in The Hague judgment above, the Court noted that the criteria governing the functioning of the algorithm were neither explained on the app beyond the generic factors of reliability and participation, nor submitted by the defendant company to the Court, which precluded a more in-depth examination of the issue (see page 17).
Thus, in the absence of established case-law, national courts will at times interpret existing laws and regulations as granting the public a right of access that overrides the IP rights of the owners, and will demand disclosure. Elsewhere, they will place the burden of proof on the owners of algorithms but will not explicitly order disclosure. In the latter case, 1) the courts are inadvertently impairing the development of the public’s right of access in its nascent stages; and 2) by analysing only the outcomes produced by the algorithms, they risk handing down potentially ill-informed judgments based on the limited data available.
The Modern Era of Balanced Regulations
Trade Secrets and Right to Explanation
Although the GDPR does not contain a separate article dedicated to the interplay with trade secrets, Recital 4 mentions respecting the ‘freedom to conduct a business’. Recital 63 is more specific to the subject, stating that the data subject’s right of direct access to personal data should not adversely affect, inter alia, trade secrets.
As to data subjects’ right to explanation, it is epitomised in Article 22 GDPR on automated decision-making, read together with Article 15(1)(h) on the right of access and Recital 71. At least where such a decision has a legal or similarly significant effect on the data subject, the latter is entitled to ‘meaningful information about the logic involved’ in the decision-making.
However, the article applies narrowly to decisions ‘based solely’ on automated processing, meaning that if a human being reviews the decision and takes account of other factors in making the final call, the decision falls outside the scope of the article (see EDPB Guidelines). The UK data protection authority, the ICO, offers a potentially more generous interpretation, saying that a decision will not fall outside the scope of Article 22 just because a human has ‘rubber-stamped’ it.
Article 22 nonetheless remains one of the least enforceable provisions of the GDPR. Some scholars (see, e.g., Wachter) question the existence of such a right to explanation altogether, arguing that if the right does not withstand balancing against trade secrets, it is of little value.
Despite this, among the 2020 series of lawsuits challenging the legality of the algorithms used by gig-economy operators (see the Deliveroo case above), the recent Uber case relies specifically on Article 22 GDPR. There, the applicants are requesting that the Amsterdam District Court dissect Uber’s advanced fraud-detection software, Mastermind. In what the applicants’ lawyer described as a ‘Kafkaesque situation’ (see para 50), the algorithm used was fully automated (no meaningful human involvement) and produced legal effects for the applicants (termination of their employment with Uber), with no possibility of appealing the decision (Uber relied on the decision as final).
Thus, national lawsuits are one suitable way to test the boundaries of Article 22 taken together with Article 15, and to provide an answer to the question that has divided scholars and practitioners alike. Another is new national legislation in specific sectors.
For instance, in May 2019, the Polish legislature introduced the first law in the EU effectively expanding Article 22 to all bank loans. While Article 22 would normally apply only to solely automated algorithms used by banks, and only to large loans (so as to satisfy the ‘legal effect’ requirement), under the amended Banking Act the loan provider is obliged to give a written ‘explanation of their assessment of the applicant’s creditworthiness’, including information on the factors, among them the applicant’s personal data, that influenced that assessment. While this law is reportedly not yet being applied in practice, it serves as a starting point in a much-needed trend towards lowering the threshold of applicability of Article 22.
Trade Secrets and Transparency for EU Businesses on Online Platforms
In 2019, to ensure competition in the platform economy, the European Parliament and the Council adopted the Platform-to-Business (P2B) Regulation. To create a level playing field between businesses, the Regulation for the first time mandates that platforms disclose to businesses the main parameters of the ranking systems they employ, i.e. ‘algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools’, while recognising the protection of algorithms under the Trade Secrets Directive (Article 1(5)).
It is not easy to determine how the Regulation proposes to balance these rights, but the description given should at least be based on actual data on the relevance of the ranking parameters used (Recital 27). The recent Guidelines on ranking transparency by the European Commission interpret the ‘main parameters’ to mean ‘what drove the design of the algorithm in the first place’ (para 41). Finally, providers may no longer refuse disclosure on the sole ground that they have never revealed any of the algorithm’s parameters in the past or that the information is commercially sensitive (para 82).
Since the Regulation only entered into force in July 2020, and the European Commission provided its first guidelines only in December 2020, what merits close attention is how the Member States will enforce the provision and what kind of ‘disclosure’ the platforms will provide.
Trade Secrets and Media Diversity in Germany
Another ambitious legal instrument, albeit at the national level, aims at a level playing field among the media outlets that rely on online platforms. The German Interstate Media Law, which entered into force in October 2020, transposes the revised Audiovisual Media Services Directive, but also goes well beyond the Directive in tackling automated decision-making that leads to the prioritisation and recommendation of content.
Under the law, media intermediaries (i.e. online platforms) are to keep the following information permanently available: 1) the criteria that determine the accessibility of content on a platform; and 2) the central criteria governing the aggregation, selection and presentation of content and their recommendation systems, including information on the functioning of the algorithms used (§ 93).
This obligation to ‘explain the algorithm’ makes it the first national law that, in ensuring fairness for all journalistic and editorial offerings, also aims more generally at diversity of opinion and information in the digital space – a distinct human rights dimension. If the provision proves enforceable, it might serve as an example for other Member States to emulate.
Trade Secrets and the Draft DSA
On 15 December 2020, the long-awaited package of the draft Digital Markets Act (DMA) and Digital Services Act (DSA), which had been promised to tackle the black box problem of algorithms, was published.
While the draft DMA is silent on the issue – possibly because the P2B Regulation covers it sufficiently – the draft DSA refers to algorithms under the umbrella term ‘recommender systems’, defined in Article 2(o) as a ‘fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service, including as a result of a search initiated by the recipient or otherwise determining the relative order or prominence of information displayed’. Under Article 29(1), the recipients of the service have the right to know the main parameters of recommender systems, as well as options to influence or modify those parameters, including at least one option that is not based on profiling.
Lastly, the draft DSA grants the newly introduced Digital Services Coordinators, the Commission, and vetted researchers (under conditions to be specified) powers of data access to ensure compliance with the DSA. The core of this right, however, is undermined by Article 31(6), which effectively allows platforms to refuse such access on the basis of trade secrecy concerns.
This shows that although addressing algorithms in a horizontal instrument is a move in the right direction, to make it enforceable the final DSA, as well as any ensuing guidelines, should differentiate between three tiers of disclosure: 1) full disclosure – granting supervisory bodies a right of access to all confidential information, which IP owners may not refuse; 2) limited disclosure – granting vetted researchers a right of access limited in time and scope, with legal guarantees for the protection of trade secrets; and 3) explanation of main parameters – granting individuals information in accessible language, without prejudice to trade secrets.
Conclusions
With the ever-rising preponderance of algorithms in our societies, the approach of regulators has also evolved. In the European Union, this shift has been driven largely by the strengthening of human rights guarantees and of competition rules, which together make EU residents the most automation-wary users in the world.
Looking at the trends of regulatory development, one does detect a slow but positive shift. By default, the EU Trade Secrets Directive shields the commercial algorithms owned by businesses from any disclosure, with only a vague exception of overriding public interest. Under this regime, the relationship is mutually exclusive: either trade secrets or the public interest prevails in any given case of conflict.
While there are no guidelines on what might constitute a public interest, drawing on similar wording in human rights instruments, one assumes a high threshold for such exceptions. The scarce case-law suggests that ‘public interest’ has been interpreted by national courts to include access to the acts of public administration, and by the Commission and the CJEU to include the EU competition rules.
Other national cases have revealed a questionable approach of examining algorithmic outputs for signs of human rights violations without ordering disclosure of the rationale behind the algorithm, if not to the public then at least to the court. While this approach protects trade secrecy, in the long run it risks damaging both the IP owners, whose product may be wrongfully sanctioned, and the users, who are kept in the dark.
More recent legal developments aim to ensure a somewhat utopian co-existence of the two opposing interests. At the Union level, these include the GDPR (although it references trade secrets only in passing), the P2B Regulation as the first instrument to explicitly set the goal of algorithm transparency without prejudice to trade secrets, and the recent DSA proposal, which sets horizontal rules of transparency for individual users while showing distinct deference to trade secrecy. At the national level, such an initiative is embodied in the German Interstate Media Law, which sets out the transparency obligations that platforms owe to media outlets.
Unlike the exceptional disclosure regime, which sets a high bar for a public interest to defeat a legitimate IP right, clear and intelligible explanations of algorithm parameters, falling below the threshold of disproportionate disclosure, can preserve both rights. In the framework of ever-developing human rights, the rights to freedom of information and privacy in the 21st century should encompass the right of the general public to be well informed about the algorithms that affect their lives and businesses, however slightly.