[CHRONICLE] Generative AI and inclusiveness: between discriminatory biases and considerable potential

Artificial intelligence is revolutionizing our daily lives and has enjoyed a genuine boom over the past two years. Yet almost everyone has already come across a generated image that reflected discriminatory prejudice, and this raises crucial questions of ethics and inclusiveness. Biases rooted in cultural and social stereotypes already influence some of the results generated by AI. Paradoxically, however, generative AI also holds immense potential to promote diversity.

Obvious biases in generative AI results

Examples that speak for themselves

Image generation can be highly prone to discriminatory bias. When an image-generating AI like OpenAI’s DALL-E is asked to generate an image of a rich person, a homeless person or a terrorist, the results may reflect gender or ethnic bias.

 


Requests sent to DALL-E: “A nurse”, “A florist”

 

The same is true of text generation, particularly in recruitment: when a generative AI tool writes a job description, it risks incorporating gender stereotypes associated with certain professions. More generally, AI also shows its limits when interacting with marginalized groups, such as people with disabilities.

 

There are many examples of this in generative AI, but also in traditional AI: speech recognition systems struggle to understand people with speech impairments, excluding these users from many voice technologies. Emotion recognition algorithms can fail to correctly interpret the facial expressions of people with autism or with certain diseases and disorders (e.g. paralysis). Similarly, skin diagnosis tools may struggle to provide a reliable diagnosis on black skin.

 

These biases are not the result of chance. They stem from a set of factors that influence the way AIs work.

 

Multiple factors leading to AI errors

The causes are varied: sometimes the technology itself is at fault, sometimes it is our own human biases that lead artificial intelligence astray (a short illustrative sketch follows the list below):

  • Data selection is one of the main factors. To return to the earlier example, skin diagnosis tools are trained mostly on images of white skin, which explains their difficulties with other skin tones.
  • Machine learning models, moreover, are not aware of cultural and ethnic differences: they simply learn from the patterns observed in their data.
  • Model architecture also plays a role. It is not designed to recognize or correct biases natively; it simply maximizes predictive performance on the available data.
  • Finally, the use of RLHF (Reinforcement Learning from Human Feedback), which involves asking people to rate the model's outputs during training, can introduce new biases through the raters' own preferences.
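To make this concrete, here is a minimal, purely illustrative Python sketch: the toy corpus, the professions and the counting logic are invented for the example, but they show how a system that only learns co-occurrence statistics reproduces whatever imbalance its training data contains.

```python
from collections import Counter

# Toy, entirely hypothetical corpus: a model has no notion of fairness,
# it only reproduces the statistical patterns present in its training data.
corpus = [
    "the nurse said she would help",
    "the nurse said she was tired",
    "the nurse said he would help",
    "the engineer said he fixed the bug",
    "the engineer said he was late",
    "the engineer said she fixed the bug",
]

def pronoun_counts(profession: str) -> Counter:
    """Count which pronoun follows mentions of a profession in the toy corpus."""
    counts = Counter()
    for sentence in corpus:
        if profession in sentence:
            counts["she" if " she " in f" {sentence} " else "he"] += 1
    return counts

# A generator trained on such data will tend to complete "The nurse said ..."
# with the majority pronoun: skewed data in, skewed completions out.
for profession in ("nurse", "engineer"):
    print(profession, dict(pronoun_counts(profession)))
```

The same logic applies, at a vastly larger scale, to the billions of sentences and images used to train commercial models.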

 

These biases are primarily the responsibility of the companies developing these technologies: they decide which data is used to train their models and how those models are adjusted to minimize bias. As users, however, we also have a role to play. When interacting with these models, it is important to point out biases when they appear and to push for greater transparency and accountability from AI creators. It is also essential to remember that these biases exist and to question the results. Conversely, some companies are already working towards a more inclusive approach.

 

The potential of AI to promote inclusiveness and diversity

A turning point that has already begun

 

AI can also be used to create more representative content, filling the gaps left by the biases of human creators. We ran the test: when we asked DALL-E to generate an image of Renaissance poets, it rewrote our prompt, injecting keywords to add diversity. In this way, OpenAI's models compensate for the lack of representativeness in their training data through prompt injection, or prompt rewriting, even if this means biasing the results in their own way (a simplified sketch of the idea follows below).

 

Request sent to DALL-E: “Renaissance poets”
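For readers curious about what such prompt rewriting might look like under the hood, here is a deliberately simplified, hypothetical Python sketch; the attribute lists and wording are ours, not OpenAI's, and production systems are far more sophisticated.

```python
import random

# Hypothetical illustration of "prompt rewriting": before the image model sees
# the request, the provider appends attributes intended to broaden representation.
DIVERSITY_ATTRIBUTES = {
    "gender": ["a woman", "a man", "a non-binary person"],
    "origin": ["West African", "East Asian", "South Asian", "European",
               "Middle Eastern", "Latin American"],
}

def rewrite_prompt(user_prompt: str) -> str:
    """Inject randomly sampled diversity keywords into the prompt actually sent to the generator."""
    gender = random.choice(DIVERSITY_ATTRIBUTES["gender"])
    origin = random.choice(DIVERSITY_ATTRIBUTES["origin"])
    return f"{user_prompt}, depicted as {gender} of {origin} descent"

print(rewrite_prompt("A Renaissance poet writing at a desk"))
# e.g. "A Renaissance poet writing at a desk, depicted as a woman of East Asian descent"
```

This also makes the trade-off mentioned above visible: the rewriting corrects the statistics of the output, but it does so by adding its own bias to the user's request.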

 

But these are not the only possible uses. Generative AI models can be trained on datasets specifically designed to include a diversity of voices and perspectives:

  • The Pile is a massive text dataset assembled by the non-profit AI research group EleutherAI, which seeks to include diverse sources to minimize bias.
  • Similarly, No Language Left Behind, developed by Meta, specializes in translation for under-resourced languages and dialects (a minimal usage sketch follows this list).
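As an illustration of the second point, the sketch below shows how one of the publicly released NLLB-200 checkpoints can be used to translate into an under-resourced language; the model name, language codes and example sentence are assumptions based on the public Hugging Face release, not part of the original article.

```python
from transformers import pipeline  # Hugging Face Transformers

# Minimal sketch assuming Meta's distilled NLLB-200 checkpoint published on
# Hugging Face; language codes follow the FLORES-200 convention.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",   # English
    tgt_lang="wol_Latn",   # Wolof, one of the low-resource languages covered
)

result = translator("Artificial intelligence should speak every language.")
print(result[0]["translation_text"])
```

Such models do not remove bias by themselves, but they extend coverage to languages that mainstream systems have long ignored.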

 

These are just some of the initiatives helping to move towards more inclusive uses of AI.

 

Tipping the scales in favor of inclusivity

One of the most striking initiatives is Helan's See My Pain video, which uses AI to translate visually the pain of people suffering from mental disorders. Artificial intelligence here helps others understand the difficulties these people face on a daily basis, a real step forward for those who are often misunderstood by society.

 

AI doesn’t stop there: it is also an excellent tool for helping people with disabilities or disorders affecting their senses to integrate into society.

 

  • Numerous research projects have been launched, such as a study by Hong Kong researchers on vision models to assist the visually impaired.
  • This is also the case for Apple Intelligence, which integrates AI into our everyday tools, for example by scanning our surroundings to provide precise descriptions.

 

Apple Intelligence

 

 

To continue in this direction, legislation is also entering the fray. The AI Act, the European regulation on artificial intelligence, has been adopted. It encourages companies to develop AI systems that respect principles of fairness and promotes the involvement of stakeholders, including representatives of marginalized groups, in development processes, so that all perspectives are taken into account in the design of AI systems.

 

Generative AI is still evolving, and its story is far from over. If we keep moving in this direction, we can hope that artificial intelligence will become a lever for greater inclusivity. The only condition? That companies and users follow the same path.

 


 

Featured Photo: Andres Siimon/Unsplash

 

Marion Scala

Marion Scala is Innovation Program Manager, Hi'Tech Luxury by Micropole. She supports luxury brands in their digital transformation through new customer experiences. She spent many years working for the LVMH Group, and in particular for Guerlain.

Benjamin Aubron

Benjamin Aubron has over 8 years' experience in digital innovation, particularly in the luxury and beauty sectors. Today, as part of the Hi'Tech Luxury program, he contributes his expertise in technology scouting and monitoring in the field of Generative Artificial Intelligence.
