Artificial intelligence (AI) systems have become important tools supporting decision-making in many fields, from commerce to public services. Because these systems are used in critical processes such as recruitment, credit assessment, and insurance risk analysis, they may contain biases that disadvantage certain individuals or groups. In this article, we discuss what bias in AI is, the areas in which it is most prevalent, the risks it brings, and possible solutions.

What is Bias/Algorithmic Discrimination in Artificial Intelligence?

Bias in AI, also referred to as algorithmic discrimination, can be defined as systematic errors that cause AI systems to treat certain individuals or groups unfairly compared to others.

One of the best-known examples is Amazon's automated assessment algorithm, developed in 2014 to speed up recruitment and increase efficiency. The algorithm was designed to analyze job applications and identify the most suitable candidates. In 2018, however, investigations revealed that the system discriminated against female candidates. Despite attempts to fix the problem, Amazon canceled the project because it could not fully eliminate the bias in the algorithm.

Areas Where Bias is Frequently Seen

Bias in AI systems is most evident in the following examples: 

- Recruitment algorithms that screen job applications and discriminate on grounds such as gender or health,

- Autonomous vehicles that detect light-skinned pedestrians more accurately than dark-skinned pedestrians,

- AI systems used in loan or mortgage assessment that reach discriminatory conclusions based on factors such as a customer's place of residence or ethnicity.

Causes of Bias in Artificial Intelligence

There are various causes of bias in AI systems. The training data used in these systems' learning processes is one of the most important sources of bias: if this data reflects inequalities and prejudices present in society, AI models can learn those biases and produce similarly skewed results.
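In practice, this is something that can be checked before training. The short sketch below (with hypothetical column names and figures) illustrates the idea: compare how a positive outcome is distributed across groups in the training data itself.

```python
# A minimal sketch of a training-data audit; the dataset and the
# column names ("gender", "hired") are hypothetical stand-ins for
# real historical records.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})

# Positive-label rate per group in the training data itself.
rates = df.groupby("gender")["hired"].mean()
print(rates)  # F: 0.33, M: 0.80

# A large gap here is a warning sign: a model trained on this data
# will tend to learn and reproduce the same disparity.
```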

Another important cause lies in choices made in the design of the model and the algorithms used. How an algorithm is structured and what weights it assigns can steer the system toward certain outcomes, so biases can be introduced into the system even unintentionally.
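The toy scoring rule below (entirely hypothetical, not any real system's method) makes this concrete: two applicants with identical incomes can receive different decisions once the designer increases the weight on a proxy feature such as neighborhood risk.

```python
# A toy linear credit score; `w_zip` is the designer's weighting
# choice for a neighborhood-risk proxy feature (all values are
# hypothetical, for illustration only).
def credit_score(income: float, zip_risk: float, w_zip: float) -> float:
    return 0.7 * income - w_zip * zip_risk

applicants = [
    {"income": 0.6, "zip_risk": 0.9, "group": "A"},  # flagged neighborhood
    {"income": 0.6, "zip_risk": 0.1, "group": "B"},  # same income, other area
]

for w_zip in (0.1, 0.5):  # two possible design choices
    print(f"zip-code weight = {w_zip}")
    for a in applicants:
        score = credit_score(a["income"], a["zip_risk"], w_zip)
        # The 0.3 approval threshold is arbitrary.
        print(f"  group {a['group']}: score={score:.2f}, approved={score > 0.3}")
```

With the lower weight both applicants are approved; with the higher weight, applicant A is rejected despite an identical income, purely because of the weighting choice.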

It is also worth noting that hallucinations in generative AI models, i.e., outputs that are not real or do not correspond to the training data, can emerge as a form of bias. Such misleading content can propagate false or negative stereotypes about certain groups.

In conclusion, the causes of bias in AI are complex, ranging from data quality to the design of generative models. These causes therefore need to be carefully examined, and appropriate remedies developed, to ensure that AI systems are fair and reliable.

Bias Mitigation and Solution Strategies

Various approaches and solutions have been proposed to address bias and prevent discrimination in AI systems:

- Processing of Special Categories of Personal Data: The processing of sensitive personal data for the purposes of bias detection and correction in high-risk AI systems is permitted by Article 10(5) of the EU AI Act, subject to appropriate safeguards.

- Data Source Analysis: The data sources used to train AI systems should be analyzed to identify and reduce potential biases (e.g., biases tied to factors such as zip code, gender, or age).

- Testing ("Fairness Testing") and Auditing: AI outputs need to be regularly tested and audited for potential bias; a minimal sketch of such a test follows this list.

- Transparency: Users should be informed about how the algorithms work.

- Human Oversight: Informed human oversight should be ensured at every stage of the development, implementation and use of high-risk AI systems.
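As flagged in the testing item above, one common fairness test computes the disparate impact ratio: each group's selection rate divided by the best-off group's rate. The sketch below runs it over hypothetical model decisions; the 0.8 threshold is borrowed from the US "four-fifths rule" purely as an illustrative benchmark.

```python
# A minimal fairness-testing sketch over hypothetical model decisions.
from collections import defaultdict

# (group, decision) pairs, e.g. from a hiring model's test run.
decisions = [
    ("men", 1), ("men", 1), ("men", 0), ("men", 1),
    ("women", 1), ("women", 0), ("women", 0), ("women", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

rates = {g: selected[g] / totals[g] for g in totals}
reference = max(rates, key=rates.get)  # best-off group as baseline

for group, rate in rates.items():
    ratio = rate / rates[reference]
    flag = "  <-- below 0.8, review for bias" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f}{flag}")
```

In real audits the same calculation would run on a held-out test set, repeated for every protected attribute, and be only one of several metrics (e.g., alongside error-rate comparisons across groups).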

The Role of the EU AI Act

The EU AI Act addresses the problem of bias in AI systems from various angles and introduces important regulations on the issue. It contains several provisions aimed at reducing the risk of bias and preventing discrimination, especially in high-risk AI systems. The highlights of these provisions are as follows:

- The training, validation and testing datasets used in the development of high-risk AI systems must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. In particular, they must reflect the geographic, contextual, behavioral or functional settings specific to the system's intended context of use.

- High-risk AI systems should be designed to be sufficiently transparent so that users can interpret the system's outputs.

- Certain AI applications with a high potential to produce biased or discriminatory outcomes are prohibited. For example: (i) AI systems that materially distort behavior and are likely to cause harm by exploiting people's vulnerabilities due to age, disability or socioeconomic status; (ii) social scoring systems that assess or classify people based on their social behavior or personal characteristics over a period of time, leading to discriminatory treatment; (iii) AI systems that assess a person's risk of offending based solely on profiling or personality traits.

- Processing of sensitive personal data for the purpose of detecting bias in high-risk AI systems is exceptionally permitted, subject to appropriate safeguards for fundamental rights and freedoms and to certain other conditions.

Evaluation from Turkey's Perspective

In Turkey, there is as yet no dedicated legal regulation on artificial intelligence, nor a specific legal framework that directly addresses bias. However, existing rules such as the principle of equality in the Constitution (Art. 10) and the Personal Data Protection Law (KVKK) make it possible to address practices involving algorithmic discrimination within the current legal framework. In particular, the KVKK's requirement that personal data be processed "in accordance with the law and good faith" (Art. 4/2-a) can be invoked against data processing that may lead to discrimination in AI systems. Compared to the GDPR, however, Turkey has not yet enacted detailed rules on algorithmic decision-making and automated processing.

Bias in AI systems is a significant problem that hinders the realization of the technology's benefits and can lead to serious social injustice. Ultimately, developing fair, transparent and unbiased AI systems is crucial both for protecting individual rights and for increasing public trust in AI technologies. Enacting legal regulation on this issue in Turkey would be a critical step toward supporting ethical and responsible AI use.