Insight

AI & LGBTQ+: Opportunities and challenges

Kubrick Head of Commercial Analytics, Joseph Woodman, explores what the rise of Artificial Intelligence (AI) could mean for bias against the LGBTQ+ community.

AI is revolutionizing our world, affecting everything from work to social interactions. Recently, the rise of Large Language Models has brought AI into the hands of billions, with profound implications for everyone, including LGBTQ+ individuals.

It is not an exaggeration to say that AI is on the path to fundamentally reshaping huge aspects of our society, from how we work to how we interact. This journey is not new; AI has been shaping our lives for the last decade. But in the past year or so, the rise of Large Language Models has given a face to this force, putting direct access to AI into the hands of billions of people.

This unprecedented access has affected all communities, including the lives of LGBTQ+ individuals. While AI offers promising opportunities to enhance the well-being of the LGBTQ+ community, it also poses significant risks that need to be addressed.

One of the primary risks AI presents to the LGBTQ+ community is the potential for bias and discrimination. AI systems learn from data, and if that data reflects existing societal biases, the AI will likely perpetuate and amplify them. This is known as algorithmic bias (or feedback loop bias).

AI models are prone to bias in the same way that we humans are, if not more so. But unlike humans, who have a moral compass and can challenge their own views, AI cannot – therefore robust ethical guidelines are not merely desirable for AI, they are a must. For instance, facial recognition technology has been criticized for its inaccuracies, particularly in relation to non-white and non-cisgender individuals. These inaccuracies can lead to misidentification and profiling, which are particularly dangerous for LGBTQ+ individuals who may already face heightened scrutiny and prejudice. We only need to look at the teething problems of AI-generated answers in the news to see how the biased nature of some online sources can lead to unfavorable answers and continue to polarize people's viewpoints, further exacerbating the scrutiny and prejudice LGBTQ+ individuals already face.

Furthermore, AI’s role in surveillance can be particularly concerning. In regions where LGBTQ+ identities are criminalized or heavily stigmatized, AI-driven surveillance systems can be used to track and target individuals based on their online activities or social networks. This not only invades privacy but also places individuals at risk of violence, discrimination, and legal repercussions.

Another risk is the potential for AI to inadvertently ‘out’ LGBTQ+ individuals. Algorithms designed to analyze and predict user behavior might reveal sensitive information about someone’s sexual orientation or gender identity without their consent. For example, targeted advertising based on user behavior could expose someone’s identity to others, leading to unwanted disclosure and potential harm.

Despite these risks, AI also holds substantial promise for improving the lives of LGBTQ+ individuals. One significant opportunity lies in mental health support. AI-powered chatbots and virtual therapists can provide a safe and accessible means for LGBTQ+ individuals to seek help and support, especially in areas where there is a shortage of LGBTQ+ friendly mental health services. These AI systems can offer immediate, anonymous assistance, making mental health care more inclusive and supportive. A great example of this is The Trevor Project, which uses AI to support crisis counsellor training.

Moreover, AI can aid in creating more inclusive digital spaces. For instance, content moderation algorithms can be trained to recognize and address homophobic and transphobic language, thereby reducing online harassment. AI can also help develop more inclusive virtual environments, such as video games and social platforms, where LGBTQ+ identities are represented accurately and respectfully.

To ensure that AI serves as a force for good in the lives of LGBTQ+ individuals, it is crucial to address its risks while leveraging its opportunities. The EU AI Act, for instance, prohibits the use of AI systems to infer emotions, and deems it 'unacceptable' in terms of risk to perform any biometric categorization, e.g. analyzing biometric data to infer sex life, sexual orientation, race, political opinion, etc. Anything that falls into these categories is subject to a high degree of oversight, risk assessment, and mitigation to manage concerns. There are some exceptions, however: real-time biometric analysis can be used to identify people in certain investigations, e.g. human trafficking or other criminal activities. While there is risk in these exceptions, this could in theory be a valuable crime-prevention mechanism, given the higher volume of violent crimes against the LGBTQ+ and minority communities. If you'd like to know more about the AI Act, Kubrick's very own Simon Duncan explores the reality of the Act coming into effect and reflects on its journey in a LinkedIn Insight.

The journey of AI ethics and AI regulation is still in its infancy, but at its heart it is vital that we create and enforce robust ethical guidelines for AI development and deployment, ensure diversity in AI research teams, and actively seek input from LGBTQ+ communities to understand their unique needs and concerns.

By fostering collaboration between technologists, policymakers, and LGBTQ+ advocates, it is possible to create AI systems that not only avoid harm but also actively contribute to the well-being and empowerment of LGBTQ+ individuals. Through careful consideration and inclusive design, AI can become a powerful tool for advancing equality and inclusion in our increasingly digital world.

Thank you to Joe Woodman for writing such an insightful article, as well as a special thank you to Molly Slann, Mirela Gyurova, Dana James-Edwards and Tracey Young for their support and guidance in the creation of this article.

Find out how we can help

Get in touch if you want to partner with us, become a Kubrick consultant, or join our internal team.
