
AI is revolutionising our world, affecting everything from work to social interactions. Recently, the rise of Large Language Models has brought AI into the hands of billions, with profound implications for everyone, including LGBTQ+ individuals.

It is not going too far to say that Artificial Intelligence (AI) is on the path to fundamentally reshaping huge aspects of our society, from how we work to how we interact. This journey is not new - AI has been shaping our lives for the last decade - but in the past year or so, the rise of Large Language Models has given a face to this force, putting AI directly into the hands of billions of people.

This unprecedented access has affected every community, including LGBTQ+ individuals. While AI offers promising opportunities to enhance the well-being of the LGBTQ+ community, it also poses significant risks that need to be addressed.

One of the primary risks AI presents to the LGBTQ+ community is the potential for bias and discrimination. AI systems learn from data, and if that data reflects existing societal biases, the AI will likely perpetuate and even amplify them. This is known as algorithmic bias (or, where a system's outputs feed back into its future training data, feedback loop bias).
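To make this mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python - the sentences, labels, and classifier below are invented purely for illustration and are not drawn from any real system. A naive word-count classifier trained on skewed moderation labels ends up treating an identity term itself as a signal of "toxicity":

```python
# A minimal, invented illustration of algorithmic bias (a sketch, not a
# real system): a naive word-count classifier trained on skewed labels
# learns to treat an identity term itself as a signal of "toxicity".
from collections import Counter

# Hypothetical training data. The skew: benign sentences mentioning the
# identity term were mislabelled toxic, mirroring biased human decisions.
train = [
    ("you are wonderful", 0),
    ("have a great day", 0),
    ("so happy for you", 0),
    ("i am glad you came", 0),
    ("i am gay and happy", 1),   # mislabelled by biased moderators
    ("proud to be gay", 1),      # mislabelled by biased moderators
    ("you are awful", 1),
    ("i hate you", 1),
]

toxic_words, clean_words = Counter(), Counter()
for text, label in train:
    (toxic_words if label else clean_words).update(text.split())

def word_toxicity(word: str) -> float:
    """Fraction of this word's training occurrences that were seen
    under the 'toxic' label (0.5 means no evidence either way)."""
    t, c = toxic_words[word], clean_words[word]
    return t / (t + c) if t + c else 0.5

# The identity term only ever appeared under the toxic label, so the
# model now scores it as maximally toxic, while a neutral word does not.
print(word_toxicity("gay"))    # 1.0 - pure learned bias
print(word_toxicity("happy"))  # 0.5 - seen in both contexts
print(word_toxicity("hate"))   # 1.0 - genuinely toxic signal
```

The point of the sketch is that the model cannot tell a genuinely toxic word apart from an identity term that was simply over-represented in toxic labels - it faithfully reproduces whatever bias its training data contains.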

AI models are prone to bias in the same way (if not more so) that we humans are. But unlike humans, who have a moral compass and can challenge their own views, AI cannot - which is why robust ethical guidelines for AI are not merely desirable, they are a must[1]. For instance, facial recognition technology has been criticised for its inaccuracies, particularly in relation to non-white and non-cisgender individuals[2][3]. These inaccuracies can lead to misidentification and profiling, which are particularly dangerous for LGBTQ+ individuals who may already face heightened scrutiny and prejudice. We only need to look at the teething problems of AI-generated answers in the news[4] to see how the biased nature of some online sources can lead to unfavourable answers and continue to polarise people's viewpoints - further exacerbating the scrutiny and prejudice LGBTQ+ individuals already face.

Furthermore, AI's role in surveillance can be particularly concerning. In regions where LGBTQ+ identities are criminalised or heavily stigmatised, AI-driven surveillance systems[5] can be used to track and target individuals based on their online activities or social networks. This not only invades privacy but also places individuals at risk of violence, discrimination, and legal repercussions.

Another risk is the potential for AI to inadvertently ‘out’ LGBTQ+ individuals[6]. Algorithms designed to analyse and predict user behaviour might reveal sensitive information about someone’s sexual orientation or gender identity without their consent. For example, targeted advertising based on user behaviour could expose someone's identity to others, leading to unwanted disclosure and potential harm.

Despite these risks, AI also holds substantial promise for improving the lives of LGBTQ+ individuals. One significant opportunity lies in mental health support. AI-powered chatbots and virtual therapists can provide a safe and accessible means for LGBTQ+ individuals to seek help and support, especially in areas where there is a shortage of LGBTQ+ friendly mental health services. These AI systems can offer immediate, anonymous assistance, making mental health care more inclusive and supportive. A great example of this is The Trevor Project, which uses AI to support crisis counsellor training[7].

Moreover, AI can aid in creating more inclusive digital spaces. For instance, content moderation algorithms can be trained to recognise and address homophobic and transphobic language, thereby reducing online harassment. AI can also help develop more inclusive virtual environments, such as video games and social platforms, where LGBTQ+ identities are represented accurately and respectfully.
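As an illustration of what testing such a moderation system for fairness might involve, here is a hedged sketch of a counterfactual audit - a common fairness-testing idea, not any platform's actual process. The `toxicity_score` stub, templates, and term lists below are hypothetical stand-ins: the idea is to feed the model otherwise identical sentences that differ only in an identity term, and to flag large score gaps as evidence of learned bias.

```python
# Hypothetical counterfactual audit: swap identity terms into identical
# benign templates and compare moderation scores. A large gap means the
# model reacts to the identity term itself rather than actual toxicity.

TEMPLATES = [
    "i am {}",
    "my friend is {}",
    "being {} is part of who i am",
]
IDENTITY_TERMS = ["gay", "lesbian", "trans", "straight", "cisgender"]

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a real moderation model. This crude
    keyword heuristic mimics a model that learned biased shortcuts."""
    learned_shortcuts = {"gay", "trans"}  # bias absorbed from training data
    return 0.9 if learned_shortcuts & set(text.split()) else 0.1

def audit() -> None:
    """Report the score gap across identity terms for each template."""
    for template in TEMPLATES:
        scores = {term: toxicity_score(template.format(term))
                  for term in IDENTITY_TERMS}
        gap = max(scores.values()) - min(scores.values())
        flag = "BIAS?" if gap > 0.2 else "ok"
        print(f"{flag:5} gap={gap:.2f} template={template!r}")

if __name__ == "__main__":
    audit()  # every benign template here shows a large gap: learned bias
```

In practice the stub would be replaced by the real model under test; the value of an audit like this is that it surfaces bias on clearly benign sentences before a system is deployed.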

To ensure that AI serves as a force for good in the lives of LGBTQ+ individuals, it is crucial to address its risks while leveraging its opportunities. The EU AI Act[8], for instance, prohibits the use of AI systems to infer emotions in settings such as the workplace and education, and classifies as 'unacceptable' in risk terms any biometric categorisation that uses biometric data to infer characteristics such as sex life, sexual orientation, race, or political opinion. Anything that falls into these categories is subject to significant oversight, risk assessment, and mitigation to manage concerns. There are some exceptions, however: real-time biometric analysis can be used to identify people in certain investigations, e.g. human trafficking or other serious criminal activity. Whilst these exceptions carry risk, they could in theory serve as a valuable crime-protection mechanism, given the disproportionate volume of violent crime against LGBTQ+ and minority communities. If you'd like to know more about the AI Act, Kubrick's very own Simon Duncan explored the reality of the Act coming into effect and reflected on its journey in a LinkedIn Insight[9].

The journey of AI ethics and regulation is still in its infancy, but at its heart it is vital that we create and enforce robust ethical guidelines for AI development and deployment, ensure diversity in AI research teams, and actively seek input from LGBTQ+ communities to understand their unique needs and concerns.

By fostering collaboration between technologists, policymakers, and LGBTQ+ advocates, it is possible to create AI systems that not only avoid harm but also actively contribute to the well-being and empowerment of LGBTQ+ individuals. Through careful consideration and inclusive design, AI can become a powerful tool for advancing equality and inclusion in our increasingly digital world.

Thank you to Joe Woodman for writing such an insightful article, as well as a special thank you to Molly Slann, Mirela Gyurova, Dana James-Edwards and Tracey Young for their support and guidance in the creation of this article.

[1] https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes

[2] https://www.bbc.co.uk/news/technology-33347866

[3] https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

[4] https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8

[5] https://www.forbes.com/sites/forbestechcouncil/2024/02/02/artificial-intelligence-the-new-eyes-of-surveillance/

[6] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733821/

[7] https://www.thetrevorproject.org/blog/the-trevor-project-launches-new-ai-tool-to-support-crisis-counselor-training/

[8] https://artificialintelligenceact.eu/

[9] https://www.linkedin.com/feed/update/urn:li:activity:7201590835996504064