Author: Fouad Abdelrazek (LLD Candidate)
Research Group: Law, Technology and Design Thinking
Thus, the deletion of such data could
significantly impair the efficiency and effectiveness of AI models and,
accordingly, could have severe implications for the economy in the long run.[2]
On the other hand, protecting
personal data is a matter of utmost importance, as it is considered a fundamental
human right.[3] Hence,
regulating technology is crucial to protecting individuals. Nevertheless, it is
important to ensure that such regulation does not become an obstacle to
development but rather supports it.
Achieving a balance between these two
interests is complex and requires careful consideration and implementation of
appropriate policies and regulations. In my opinion, it also depends on the
role of judges in interpreting the text of those regulations. Judicial
interpretation ensures that regulations can be applied effectively to emerging
technologies while preserving the flexibility necessary to promote innovation
and development.
This significant role is clearly visible in the interpretation of
the right to be forgotten (RTBF), especially with regard to AI applications. The RTBF
is one of the most powerful rights granted by the General Data Protection Regulation
(GDPR), under the name of the right to erasure (Article 17). It gives EU and EEA
residents the power to control their personal data.[4]
However, the concept of the RTBF presents a
significant challenge in terms of its definition and implementation in relation
to AI applications. This is because the requirement for data deletion, which is
a fundamental aspect of the RTBF, is not easily applicable to AI systems. Unlike
humans, AI systems and applications do not “forget” data in the same way, and
the data deletion process in AI contexts is far more complex.[5] As a result, various
conflicts and debates have emerged concerning the interpretation of the RTBF in
the context of AI, making it a topic of significant academic interest.
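To make the technical point concrete, the following minimal Python sketch (hypothetical data and an illustrative model choice, using NumPy and scikit-learn) shows why deleting a record from a stored dataset does not, by itself, make a trained model “forget” it; removing the record’s influence typically requires retraining or a dedicated machine-unlearning technique:

# Minimal sketch, assuming hypothetical data and an illustrative model:
# deleting a training record from storage does not change a model that
# was already trained on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # stand-in for records containing personal data
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)
weights_before = model.coef_.copy()

# "Erase" one data subject's record from the stored dataset.
X_erased, y_erased = np.delete(X, 0, axis=0), np.delete(y, 0)

# The trained model is untouched: its weights still encode the deleted record.
assert np.array_equal(weights_before, model.coef_)

# Actually removing the record's influence requires retraining on the reduced
# dataset (or applying a dedicated machine-unlearning method).
model_retrained = LogisticRegression().fit(X_erased, y_erased)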
There is an ongoing debate over competing interpretations of what “erasing”
data means, each posing a different level of implementation difficulty. A strict
interpretation would demand erasing all copies of the data and removing them
from any derived or aggregated representations, to the extent that it becomes
impossible to recover the data by any known technical means; this may not be
feasible with some technologies. A more nuanced and pragmatic interpretation
could permit encrypted copies of the data to persist as long as they remain
indecipherable to unauthorized parties. A gentler and even more pragmatic
interpretation could permit unencrypted copies to persist as long as they are
no longer publicly visible in indices, database queries, or search engine
results.[6]
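The middle interpretation corresponds to a practice sometimes called “crypto-shredding”: personal data is stored only in encrypted form, and “erasure” consists of destroying the decryption key rather than hunting down every copy of the ciphertext. A minimal Python sketch, assuming the third-party cryptography package and illustrative data:

# Crypto-shredding sketch: destroying the key renders any surviving
# ciphertext copies indecipherable, which the middle interpretation
# may accept as erasure.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"data subject's personal record")

# "Erasure" = destroying the key, not the ciphertext itself.
del key

# A leftover copy of the ciphertext can no longer be read: decryption with
# any other key fails, and recovering the plaintext is infeasible by design.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("ciphertext persists, but the personal data cannot be recovered")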
Here, judges have an essential role in
interpreting the definition of the RTBF and in directing organizations on
how to execute the verdict. This interpretation will directly impact AI.[7]
Roughly speaking, there are two methods of
interpreting a legal text: textualism and purposivism.
Textualism sticks to the statute's text, whereas purposivism
(or intentionalism) also considers text-external purposes and the legislator's intentions.[8]
In this context, can judges’ emotional biases affect
their interpretation of the RTBF, leading them to decrease or increase the
deletion of personal data for the sake of the economy?
People might unconsciously favour evidence that aligns
with their existing viewpoints while disregarding or devaluing evidence that
contradicts them.[9] From a classical legal realist perspective, the
judge's decision can be biased without the judge knowing it.[10] Despite judges' claims that
their emotions do not impact their decisions,[11] it is unlikely that
emotions cease to operate when judges take the bench. Emotions are a significant
source of intuition, and their impact on decision-making is robust and
valuable.[12] As one judge has expressly put it, “Judges,
being flesh and blood, are subject to the same emotions and human frailties as
affect other members of the species.”[13]
Hence, the question of how judges interpret the RTBF in
the context of AI is complex and multifaceted. The
European Court of Justice’s (ECJ) judgment in Google Spain (C‑131/12) suggests that each
RTBF case should be interpreted in its own context (judgment addressing
Question 3, para. 99). This provides judges with considerable interpretive leeway in
determining the meaning of the RTBF in every case. However, this
leeway may lead to different interpretations in similar cases.
Judges have to emphasize one of the two methods of
interpreting a legal text when defining the RTBF. However, each method raises
different challenges for implementing the RTBF with respect to AI.
On the one hand, under textualism, where the judge must
adhere strictly to the statute's text, the text unequivocally calls for the erasure
of the individual’s personal data. This may seem harmful to the economy: it
could lead to the erasure of massive amounts of data on which AI depends for
its efficiency. However, are such verdicts technically executable in the first
place? In some cases, it is very difficult to ensure that personal data has
actually been erased from a model.[14] Naturally, though, such
an interpretation will increase trust in the judicial system, encouraging individuals,
in turn, to entrust their personal data to these organizations.
On the other hand, a purposive approach might
lead to a very broad reading of the text, which may undermine
the trust between individuals and the judicial system. Through the lens of
purposive interpretation, the RTBF may be read such that data is not
necessarily physically destroyed or overwritten; rather, it is merely made
inaccessible or not readily retrievable through normal means. In practical
terms, this could mean that data marked for deletion in databases still exists
in some form and is merely concealed, awaiting potential overwriting in the
future.[15] This will not lead to the
actual erasure of personal data. Consequently, individuals will be more
reluctant to give their personal data to these organizations, which will affect
the efficiency and accuracy of AI and, in turn, negatively impact the economy.
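In database practice, this reading matches the widespread “soft delete” pattern. A minimal sketch, assuming a hypothetical schema and using Python's built-in sqlite3 module, shows how a record can vanish from ordinary queries while the personal data physically remains:

# Soft-delete sketch: the row is flagged rather than destroyed, so it
# disappears from normal queries while the underlying data persists.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO users (name) VALUES ('data subject')")

# "Erasure" under the purposive reading: mark the record as deleted.
db.execute("UPDATE users SET deleted = 1 WHERE id = 1")

# The record is invisible to ordinary application queries...
assert db.execute("SELECT * FROM users WHERE deleted = 0").fetchall() == []

# ...yet the personal data still exists, merely concealed and awaiting
# potential overwriting.
assert db.execute("SELECT name FROM users WHERE id = 1").fetchone() == ("data subject",)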
In conclusion, implementing the RTBF in the context of
AI requires a nuanced and balanced approach. Considering this challenge, it would
be useful if the Court of Justice of the European Union (CJEU) established clearer
guiding criteria for judges to follow when interpreting the RTBF and its
implementation, aiming to reach a balance between people's interests and the
economy, especially in the context of AI. Although the ECJ has presented its
opinion, it remains debatable in practice whether that opinion was right. From this
perspective, the lack of clear criteria for the RTBF, coupled with the rising
number of cases and the variety of courts that handle them, will result in
significant differences in how the RTBF is interpreted in similar cases.
Clear criteria would make judgments unified and consistent, fostering trust
and fairness and avoiding conflicts and negative economic impacts.
[1] Mangini, V., Tal, I., & Moldovan, A. N. (2020, August). An empirical study
on the impact of GDPR and right to be forgotten – organisations and users
perspective. In Proceedings of the 15th International Conference on
Availability, Reliability and Security (pp. 1–9).
[2] Salami, E. (2023). Artificial Intelligence: The end of legal protection of
personal data and intellectual property? Research on the countering effects of
data protection and IPR on the regulation of Artificial Intelligence systems.
[3] Rodotà, S. (2009). Data protection as a fundamental right. In
Reinventing data protection? (pp. 77–82). Dordrecht: Springer Netherlands.
[4] Post, R. C. (2017). Data privacy and dignitary privacy: Google
Spain, the right to be forgotten, and the construction of the public sphere.
Duke LJ, 67, 981.
[5] Villaronga, E. F., Kieseberg, P., & Li, T. (2018). Humans
forget, machines remember: Artificial intelligence and the right to be
forgotten. Computer Law & Security Review, 34(2), 304–313.
[6] Sandra, I. A. The enforcement of right to be forgotten at the EU
level by using search engines.
[7] Aghion, P., Jones, B. F., & Jones, C. I. (2018). Artificial intelligence
and economic growth. In The economics of artificial intelligence: An agenda
(pp. 237–282). University of Chicago Press. The Bank of America business site
states that “AI will contribute more than $15 trillion to the global economy
by 2030”: https://business.bofa.com/en-us/content/economic-impact-of-ai.html#
[8] Aalto-Heinilä, M. (2016). Fairness in statutory interpretation:
Text, purpose or intention?. International Journal of Legal Discourse, 1(1),
193–211.
[9] Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon
in many guises. Review of general psychology, 2(2), 175–220.
[11] Maroney, T. A. (2011). Emotional regulation and judicial behavior.
Calif. L. Rev., 99, 1485.
[12] Wistrich, A. J., & Rachlinski, J. J. (2017). Implicit bias in judicial
decision making: How it affects judgment and what judges can do about it.
Chapter 5.
[13] Maroney, T. (2016). The emotionally intelligent judge: A new (and
realistic) ideal. Revista Forumul Judecatorilor, 61.
[14] Graves, L., Nagisetty, V., & Ganesh, V. (2020). Does AI
Remember? Neural Networks and the Right to be Forgotten.
[15] Villaronga, E. F., Kieseberg, P., & Li, T. (2018). Humans forget,
machines remember: Artificial intelligence and the right to be forgotten.
Computer Law & Security Review, 34(2), 304–313.