
Discrimination in Artificial Intelligence

BM Mainul Hossain
23 Oct 2022

In recent times, as in other parts of the world, Artificial Intelligence (AI) has begun to make its mark in Bangladesh. Curious minds across the country are exploring how to take advantage of AI applications. Generally, AI is described as the simulation of human intelligence in machines that mimic the learning and problem-solving capabilities of humans.

Several AI-based applications have found wide acceptance with sustainable business models. Governments across the world are also adopting AI-based solutions to deliver citizen services and to enforce law and order.

For example, many organisations use intelligent chatbots to offer services and answer queries that would otherwise be handled by human agents. Image processing is being used to analyse medical images and detect diseases with formidable accuracy.

Although the technology has not reached the level seen in the movies, both public and private organisations are using AI-enabled surveillance technologies to predict crime and recommend optimal police presence.

Moreover, techniques like facial recognition are being used not only to tag people on social media, but also to pick a suspect out of a crowd, to record employee attendance in offices, and even to grant access to restricted areas. From voice-driven personal assistants to delivery drones to self-driving cars, AI-based applications are making their way into various aspects of life.

AI and one of its variants, Machine Learning (ML), are being used for predictions, recommendations, selections and a variety of decision-making activities. For example, a bank can use AI to decide whether an applicant will be granted a loan, a university can use it to decide whether a student will be granted a waiver, and a marketing campaign can use it to decide whether a customer will be offered a sale.

Unfortunately, AI algorithms are not free from discrimination, just as racial, gender and other types of discrimination are present in human decision-making.

AI only mimics the cognitive skills of intelligent beings, and it learns from previous data. If that history is full of discrimination and bias, then AI will make biased decisions as well: it simply outputs what it has learned from the data and the patterns it finds there. So it is not surprising to see such discriminatory decisions made by machines.
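
For readers who want a concrete picture, the short Python sketch below uses entirely fabricated data and a hypothetical loan-approval task (not any system described in this article) to show how a model trained on biased historical decisions simply reproduces that bias.

```python
# A minimal sketch with fabricated data: a toy loan-approval model trained on
# hypothetical, historically biased records reproduces the bias it was shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: an income score and a binary group attribute (0 or 1).
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)

# Historical decisions: approvals depended on income, but group 1 was approved
# less often even at the same income -- i.e., the history itself is biased.
approved = (income + rng.normal(0, 5, n) - 8 * group) > 50

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two applicants with identical income receive different approval probabilities
# purely because of the group attribute the model learned from history.
test = np.array([[55, 0], [55, 1]])
print(model.predict_proba(test)[:, 1])  # group 1 scores noticeably lower
```

The model has no notion of fairness of its own; it only recovers the pattern present in the historical decisions it was given.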

In October 2019, researchers discovered that an algorithm used on more than 200 million patients in US hospitals to determine which of them would likely require additional medical attention substantially favoured white patients over black patients. For a variety of reasons, black patients with the same conditions had, on average, paid less for healthcare than white patients. Because the algorithm estimated medical need from previous payments, it ended up discriminating.

In the past, an algorithm used by tech giant Amazon for hiring was found to be biased against women. The algorithm learned to favour men over women because it was trained on the resumes submitted over the previous few years, the majority of which came from male candidates.

A team of researchers from Carnegie Mellon University claimed a few years ago that Google’s algorithm showed prestigious job advertisements to men far more often than to women. The team found that Google showed the ads 1,852 times to the male group, but just 318 times to the female group.

Some of this discrimination may be intentional and some unintentional. Either way, it is clear that algorithmic discrimination is a reality.

To reduce bias and discrimination in AI algorithms, it is necessary to understand what data is being fed to the system. Data should be pre-processed and cleaned so that the algorithm does not become biased against a particular race, gender, age group, or other characteristic while making decisions.
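
As one illustration of such pre-processing, the hypothetical sketch below drops the protected attribute from the inputs, reweights the historical examples so that each group-and-outcome combination carries equal weight, and then checks the predicted approval rate per group. It is a toy example of one technique, not a complete fairness recipe.

```python
# A minimal sketch of one pre-processing idea: exclude the protected attribute,
# reweight the biased history, and check approval rates per group afterwards.
# All data and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)
approved = (income + rng.normal(0, 5, n) - 8 * group) > 50  # biased history

X = income.reshape(-1, 1)  # protected attribute excluded from the features

# Reweight so each (group, outcome) combination carries equal total weight.
weights = np.ones(n)
for g in (0, 1):
    for y in (False, True):
        mask = (group == g) & (approved == y)
        weights[mask] = n / (4 * mask.sum())

model = LogisticRegression().fit(X, approved, sample_weight=weights)
pred = model.predict(X)

# Simple demographic-parity check: approval rates per group should be close.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Real systems need more care than this, for example when other features act as proxies for the protected attribute, which is why human review remains essential.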

Some governance is required to regulate the model used to make or aid decisions. If necessary, human intervention should be introduced to evaluate the model to ensure non-discrimination and to shape the nature of the algorithm.

In December 2021, the attorney general of Washington DC announced a bill to ban algorithmic discrimination. In his letter to the chairman of the Council of the District of Columbia, he wrote, “Algorithms are tools that use machine learning and personal data to make predictions about individuals. Increasingly, algorithms are used to determine eligibility for opportunities in employment, housing, education, and public accommodations like healthcare, insurance, and credit. But all too often, algorithms reflect and replicate historical bias, exacerbating existing inequalities and harming marginalized communities.”

The world is now talking about Explainable AI, also referred to as Interpretable AI or Explainable ML. With some AI or ML approaches, even the designers are unable to explain how an algorithm reached a particular conclusion. Explainable AI stresses solutions whose decisions humans can understand, which can certainly help in verifying an algorithm’s fairness.
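
As a small illustration of the idea (again with fabricated data, and not tied to any particular product), the sketch below uses permutation importance, one common interpretability check, to see which inputs actually drive a trained model’s decisions; a large importance for the protected attribute would be a warning sign.

```python
# A minimal sketch of one interpretability check on fabricated data:
# permutation importance shows which features drive the model's predictions,
# helping to verify that the protected attribute is not doing the work.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)
approved = (income + rng.normal(0, 5, n) - 8 * group) > 50  # biased history

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

result = permutation_importance(model, X, approved, n_repeats=20, random_state=0)
for name, score in zip(["income", "group"], result.importances_mean):
    print(f"{name}: importance = {score:.3f}")
# A non-trivial importance for "group" is a red flag worth investigating.
```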

In Bangladesh, it is not very hard to predict that the use of AI is going to flourish in the very near future, especially in line with the country’s focus on Smart Bangladesh.

With the ongoing digitalisation, digital systems and services are producing enormous amounts of data. Undoubtedly, the country will use that data in its future decision-making processes. While AI and ML will be important tools for a future smart Bangladesh, it is also important to be careful and to train those tools with appropriate data. It will be wise to keep the discriminatory tendencies of AI in mind while formulating various policies and strategies.

Also, it will not hurt to consider a Fair Bangladesh along with Smart Bangladesh. Smartness and fairness can easily go together.

Dr BM Mainul Hossain is an associate professor at Dhaka University’s Institute of Information Technology. He can be reached at [email protected]
