Published October 12, 2021. Updated October 20, 2021.

Extracting Bias from Artificial Intelligence

Our Algorithms, Alas, Reflect Our Societal Biases

At the beginning of the 21st century, artificial intelligence (AI) still seemed like the stuff of science fiction. Today, legions of interfaces rely on algorithms, whether on social networks, online shopping sites or the Internet of Things. Avoiding AI is becoming increasingly difficult. On the contrary, every walk of life is looking to algorithms to improve services to citizens, patients and even students.

Biased AIs

The use of these algorithms is not without its challenges, and experts are noticing and decrying algorithmic biases. It is important to note that the machines do not discriminate on purpose. Rather, they reproduce biased judgments that remain strongly present in our societies despite the advances of recent decades. One example: job ads on social networks are displayed according to the user's gender, with engineering or construction jobs shown to men and nursing or teaching jobs shown to women.

Users of Facebook, Twitter and other platforms also experience unfair algorithmic decisions. Although the GAFAM companies employ staff to moderate what is said online, these employees cannot analyze the millions of interactions that take place every second. As a result, artificial intelligences punish terms that go against a site's policy. Yet this can lead to absurd situations, since they cannot take context into account.

For example, defamatory language about LGBTQI2S+ people is generally banned. This led to an advocacy group denouncing such language... being censored for hateful content. The algorithm was unable to understand that many activist associations are trying to reclaim these slurs in order to strip them of their homophobic power.

Unfairness in Health Care and in Court

The most striking examples, however, have come from health care in the US. Algorithms are indeed capable of assisting medical staff in diagnosing a patient. Yet one artificial intelligence was shown to have disadvantaged Black patients with kidney failure, keeping them from transplants they needed. A bias, then, that is both dangerous and poorly explained.

This points to one of the main criticisms of AI: it is difficult to understand how it reaches its decisions. For example, algorithms have proven able to identify a patient's ethnicity from X-rays. Yet even the study's authors do not grasp how the AIs achieve a success rate of at least 80%, and as high as 99% for some. This raises serious questions in a field where systemic discrimination already costs lives.

Racial bias is probably the most severe, according to a report from the Institute for Human-Centered Artificial Intelligence at Stanford University. Its analysis of the past five years shows that the risks have not decreased; quite the contrary. Facial recognition AIs analyzing a Black person's face have offered users videos about primates... This is a concern in the justice system, among other fields, as it relies more and more on algorithms to set sentences and fines. Justice personnel will therefore need to maintain a human approach despite the machine's suggestions.

Technical and Human Solutions

In fact, the solution to countering these biases will have to come from those who designed the algorithms, because machines have as much difficulty as humans in recognizing their own misjudgments. We need to understand, as Aurélie Jean argues, that algorithmic science is not Manichean. There are certainly modifications to be made to combat bias, but it should be remembered that algorithms are tools. Fundamentally neutral, they can help humanity as much as harm it.

So what can be done to eliminate these biases? Some proposals are technical. Large technology companies already run bug bounty programs for their operating systems and web browsers: hunters spend their time looking for security flaws and reporting them so that the companies can fix them with updates. What if such an initiative were applied to AIs? Specialists could analyze an algorithm's decisions, see where it errs, and help improve it, as in the sketch below.
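
As a rough illustration of what such a "bias hunter" might automate, here is a minimal Python sketch of a counterfactual probe: it swaps a single demographic attribute while holding everything else constant and reports any change in the model's decision. The toy hiring model, the attribute names and the scoring rule are hypothetical stand-ins, not any real company's system.

```python
# Toy, deliberately biased hiring model standing in for a real system.
def toy_hiring_model(applicant):
    score = applicant["experience"] * 10
    if applicant["gender"] == "woman":  # the flaw a bias hunter should catch
        score -= 15
    return "hire" if score >= 40 else "reject"

def counterfactual_probe(model, applicant, attribute, values):
    """Swap one demographic attribute, hold everything else constant,
    and collect the model's decision for each variant."""
    decisions = {}
    for value in values:
        variant = {**applicant, attribute: value}  # copy with one field changed
        decisions[value] = model(variant)
    return decisions

profile = {"experience": 4, "gender": "man"}
results = counterfactual_probe(toy_hiring_model, profile, "gender", ("man", "woman"))
if len(set(results.values())) > 1:
    print("Potential bias to report:", results)
    # -> Potential bias to report: {'man': 'hire', 'woman': 'reject'}
```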

To counter gender bias in hiring algorithms, experts have established a statistical definition of equality. It gives a green light to an AI that treats gender data in a balanced way and a red light to one that shows bias. Such a data-driven approach could easily fit into an AI pipeline and be adapted to race as well. Still, developers have to want such mechanisms to be implemented; many prefer to jealously guard their program's secrets. Will laws then be needed to hold AI companies accountable for discrimination?
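
The sources do not spell out the exact statistic these experts use; a common choice is demographic parity, sketched below with the US "four-fifths rule" as the threshold. The rule, group labels and numbers are assumptions for illustration only.

```python
# Demographic-parity "traffic light": compare hiring rates across groups
# and flag the model when the ratio falls below the assumed threshold.
def parity_light(decisions, threshold=0.8):
    """decisions: list of (group, hired) pairs -> ('green' or 'red', rates)."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [hired for g, hired in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    ratio = min(rates.values()) / max(rates.values())
    return ("green" if ratio >= threshold else "red"), rates

# Illustrative batch: 60% of men hired versus 35% of women.
batch = ([("men", True)] * 60 + [("men", False)] * 40
         + [("women", True)] * 35 + [("women", False)] * 65)
light, rates = parity_light(batch)
print(light, rates)  # -> red (women hired at 35% vs 60%, ratio 0.58 < 0.8)
```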

Another part of the solution lies in representativeness. An overwhelming majority of AI designers are white men, so should we be surprised that biases affect women and people of other ethnicities? Bringing these groups into the companies that produce algorithms could greatly reduce such biases. So could increasing the presence of non-white faces in training databases to improve facial recognition, among other things; a simple audit of that kind is sketched below.
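
One concrete form this can take is auditing a dataset's composition before training. The sketch below simply tallies group labels in a hypothetical face-image set; the labels, proportions and 15% flagging cutoff are invented for illustration.

```python
from collections import Counter

def representation_report(labels):
    """Tally group labels (one per training image) into shares of the total."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

# Hypothetical, visibly skewed face dataset.
dataset = ["white"] * 800 + ["black"] * 80 + ["asian"] * 70 + ["other"] * 50
for group, share in representation_report(dataset).items():
    flag = "  <- underrepresented" if share < 0.15 else ""  # arbitrary cutoff
    print(f"{group}: {share:.0%}{flag}")
```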

Illustration: Fakurian Design on Unsplash

References:

Eikholt, Sebas. "Mise en garde des experts dans l'exploitation de l'IA et ses préjugés cachés." Smarthealth. Last updated September 23, 2021. https://smarthealth.live/fr/2021/09/23/les-experts-mettent-en-garde-contre-les-prejuges-lies-a-lia-dans-les-soins-de-sante/.

Grison, Thibault. "IA et modération des réseaux sociaux : un cas d'école de 'discrimination algorithmique'." The Conversation. Last updated September 9, 2021. https://theconversation.com/ia-et-moderation-des-reseaux-sociaux-un-cas-decole-de-discrimination-algorithmique-166614.

"Intelligence Artificielle : Arme De Discrimination Massive ?" Magazine Décideurs. Last updated December 11, 2020. https://www.magazine-decideurs.com/news/intelligence-artificielle-arme-de-discrimination-massive.

Jean, Aurélie. "La science des algorithmes n'est pas manichéenne." Le Point. Last updated May 23, 2021. https://www.lepoint.fr/invites-du-point/aurelie-jean-la-science-des-algorithmes-n-est-pas-manicheenne-23-05-2021-2427670_420.php.

Leprince-Ringuet, Daphné. "Repérer les biais algorithmiques de l'IA grâce aux programmes de bug bounty ?" ZDNet France. Last updated March 16, 2021. https://www.zdnet.fr/actualites/reperer-les-biais-algorithmiques-de-l-ia-grace-aux-programmes-de-bug-bounty-39919477.htm.

Maquet, Clémence. "Intelligence artificielle : quelle approche des biais algorithmiques ?" Siècle Digital. Last updated May 11, 2021. https://siecledigital.fr/2021/05/11/intelligence-artificielle-quelle-approche-des-biais-algorithmiques/.

Mélin, Anna. "Les biais algorithmiques : un défi majeur dans nos sociétés numérisées." Villes Internet. Last updated July 16, 2021. https://www.villes-internet.net/site/les-biais-algorithmiques-un-defi-majeur-dans-nos-societes-numerisees/.

"These Algorithms Look at X-Rays - and Somehow Detect Your Race." Wired. Last updated August 5, 2021. https://www.wired.com/story/these-algorithms-look-x-rays-detect-your-race/.

Prades, Arnaud. "Pour éviter les biais de l'IA, mieux vaut s'intéresser au facteur humain qu'aux questions techno." JDN. Last updated June 30, 2021. https://www.journaldunet.com/solutions/dsi/1503581-pour-eviter-les-biais-de-l-intelligence-artificielle-mieux-vaut-s-interesser-au-facteur-humain-qu-aux-questions-technologiques/.

Pérignon, Christophe. "« Un $ % de programme sexiste » : comment détecter et corriger les biais des IA." The Conversation. Last updated March 15, 2021. https://theconversation.com/un-de-programme-sexiste-comment-detecter-et-corriger-les-biais-des-ia-156874.

Richard, Philippe. "Comme les êtres humains, l'IA repère difficilement ses erreurs d'appréciation." Techniques de l'Ingénieur. Last updated September 13, 2021. https://www.techniques-ingenieur.fr/actualite/articles/comme-les-etres-humains-lia-repere-difficilement-ses-erreurs-dappreciation-99116/.

Trujillo, Elsa. "Aux Etats-Unis, un algorithme accusé d'avoir défavorisé des patients noirs pour des greffes de rein." BFMTV. Last updated October 27, 2020. https://www.bfmtv.com/tech/aux-etats-unis-un-algorithme-accuse-d-avoir-defavorise-des-patients-noirs-pour-des-greffes-de-rein_AN-202010270161.html.

"VIDEO. Les Algorithmes Qui Nous Entourent Sont-ils Racistes Et Sexistes ?"  Franceinfo. Last updated August 19, 2021. https://www.francetvinfo.fr/societe/droits-des-femmes/video-les-algorithmes-qui-nous-entourent-sont-ils-racistes-et-sexistes_4721831.html.

Zignani, Gabriel. "'Les juges, policiers et avocats vont devoir s'approprier la justice algorithmisée'." La Gazette des Communes. Last updated July 21, 2021. https://www.lagazettedescommunes.com/757552/les-juges-policiers-et-avocats-vont-devoir-sapproprier-la-justice-algorithmisee/.

