This post argues that because children are rarely mentioned explicitly in the positions taken by the Parliament, the Council and the Commission, it is not clear what effect the AI Act will have on children in AI contexts.
The negotiations for the EU’s Artificial Intelligence Act are in full swing and there is plenty of debate about the proposed Regulation. One issue that has largely been forgotten is the protection of children. Should there be some specific rules for protecting children in AI contexts? At the moment, the EU legislator seems to say no. Even though the proposal for the AI Act has evolved in directions that take into account fundamental rights, the articles of the Regulation have clearly not been written with children’s protection in mind.
Many actors have emphasised children’s rights in the digital environment. Perhaps most importantly, in 2021 the UN Committee on the Rights of the Child adopted a general comment on children’s rights in the digital environment. It points the way towards more effective protection of children’s digital lives. Certain guiding principles should inform the implementation of children’s rights: non-discrimination, the best interests of the child, the right to life, survival and development, and respect for the views of the child. In addition, the comment stipulates that the evolving capacities of children should be respected.
The Commission’s proposal
In the Commission’s original proposal, children were mentioned mainly in the recitals. The proposal was based on the idea of regulating AI systems as products whose safety needs to be ensured, not so much on a fundamental rights logic. Hence, it is understandable that end users, citizens or children were not the main focus. However, if the EU really wants people to trust that AI is used in a way that is safe and respects fundamental rights, as the Commission states on page 1, then those rights require effective protection.
The United Nations Convention on the Rights of the Child was mentioned in the Commission’s proposed recital 28, which stated that children’s rights need to be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons. This principle was, however, not backed up by any detailed rules in the proposed articles.
The Commission’s proposal also included a mention of children in recital 15. It stated that AI can be misused and provide tools for manipulative, exploitative and social control practices. Such practices should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the rights of the child. Here, the need to take into account the special needs of children makes an explicit, yet very general, appearance.
Recitals do not constitute binding EU law but articles do. The strongest rules that would protect children were included in articles 5 and 9 in the Commission’s proposal. According to article 5, the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement would be forbidden unless they were used for the targeted search for specific potential victims of crime, including missing children. Proposed article 9 on risk management systems, on the other hand, stipulated that when implementing a risk management system, specific consideration needs to be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.
Hence, the Commission’s proposal included two mentions of children at the article level. These would undoubtedly provide some protection for children’s rights. However, the overall impression is of a Regulation that pays no specific attention to the protection of children in AI environments.
The Council’s approach
The Council’s general approach from November 2022 did not change this. In fact, one interesting detail was added that points in the opposite direction. The recitals included a new recital 5a, which referred to United Nations General Comment No. 25 on children’s rights in the digital environment. Here the Council proposed that the AI Act would not affect national laws on the protection of minors insofar as they are not specific to AI systems. This means that national legislators would still be able to legislate on children’s rights – but not in a way that targets AI systems. The Council clearly wanted all AI rules to be harmonized in the EU, so that no Member State could impose more stringent or more lenient rules. In recital 5a the Council consequently maintained that even the need to protect children’s rights would not provide room for national legislation on AI.
The Parliament’s position
The logic of the AI Act took a new turn in the European Parliament. In June 2023, it adopted its negotiating position. Many recitals and certain articles stressing the need to protect fundamental rights were added. The nature of the whole Act began to change from a product safety orientation to a fundamental rights driven approach.
For instance, article 29a introduced the idea of a fundamental rights impact assessment for high-risk AI systems. Deployers of such systems would need to conduct an assessment of the system’s impact in the specific context of use. Here, the Parliament stipulated that the specific risk of harm likely to impact marginalised persons or vulnerable groups is one of the issues to be addressed. Even though children are not mentioned, it is possible to interpret them as belonging to vulnerable groups. As the fundamental rights impact assessment could be one effective way of regulating high-risk systems, it is important that vulnerable groups are included in the list. However, more effective protection of children’s rights would undoubtedly follow from explicitly mentioning children in the article.
The Parliament also took a stand on biometric identification systems. The Council would not have forbidden them, but the Parliament proposed a general ban on biometric identification systems in the EU. Hence, the possibility of using such systems in the search for missing children had to go as well.
One significant issue in the future application of the AI Act will be the definition of high-risk systems. The rules that apply to them will likely form the bulk of the Regulation. It is therefore important to note that the Parliament indirectly included children on the list of considerations that the Commission should take into account when designating use cases of AI systems as high-risk. According to the Parliament, the Commission’s assessment should evaluate the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system due to, among other things, knowledge, economic or social circumstances, or age. Here, too, a children’s rights lawyer would probably like to see children mentioned explicitly in order for their protection to be effective, but including vulnerability and age provides at least some leeway for interpreting childhood as a relevant factor.
The Parliament also introduced a completely new article 4b on AI literacy. Even though it is difficult to predict what it would require in practice, it nevertheless puts forward the idea that Member States have a responsibility to promote AI literacy. The proposed article also stipulates that providers and deployers of AI systems should ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. This should take into account their technical knowledge, experience, education and training, and the context the AI systems are to be used in, considering the persons or groups of persons on which the AI systems are to be used. Implicitly this may mean that the developmental stage and capacities of children also need to be considered. Such an approach would echo the requirement to promote data literacy in UN General Comment No. 25 on children’s rights in the digital environment. However, the proposed article 4b again does not explicitly mention the protection or empowerment of children.
Conclusion
The overall assessment of the evolving AI Act from the perspective of children’s rights is that the legislation is mostly being developed in a positive direction. The weakness of the respective positions taken by the Commission, the Council and the Parliament lies in the fact that children are rarely mentioned explicitly. This raises doubts about the effect that the AI Act will have on children in AI contexts. Nowhere in the proposed drafts of the Regulation can we read references to, for instance, the best interests of the child, the child’s right to development or respect for the views of the child.
It is important to note that the AI Act will be an instrument for full harmonization. This means that Member States will not be able to impose their own rules on AI systems. Children’s rights cannot be protected more, or in different ways, than the AI Act allows. For this reason alone, one would hope for more consideration of children’s protection in the Regulation.
Likewise, the tendency to leave important issues in the recitals is open to critique. It is well known that recitals are often neglected in everyday legal life. An illustrative example is the prohibition on using automated decision-making or profiling on children in the General Data Protection Regulation (GDPR). Recital 71 GDPR states that automated decision-making “should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision. Such measure should not concern a child.” Yet the current interpretation is that because the ban was not included in the articles of the GDPR, it is not a ban at all. Recitals are soft law, and when disagreements on interpretation start, the softer argument rarely wins the day.
The AI Act is far from ready and many unresolved issues remain. One conclusion can be drawn, though. The Act will most likely include more fundamental rights considerations than was originally expected, but it will not focus on children. If the EU wants to promote the protection of children’s rights in digital environments, more needs to be done.