As technology lawyers we frequently draft bespoke documents and work with AI and machine learning companies, but because we personally oversee what we do and give personal support, we are confident that our approach is informed, unbiased and focused. However, as the world comes to rely more on AI, or as we use more technology in our businesses and remove human oversight, can we be confident that the resulting advice, work or output is unbiased, accurate and workable?

As more chatbots such as ChatGPT, Zendesk and Freshworks develop, it is important to note that AI systems can reflect the biases present in the data they are trained on; if that data is the world wide web, it will clearly contain conflicting material to draw from. According to an article by USC Viterbi, researchers from the USC Information Sciences Institute (ISI) studied the ConceptNet and GenericsKB databases to see whether their data was biased, and found bias in up to 38.6% of the facts used by AI. If the training data contains biased or unrepresentative information, the AI model may inadvertently learn and perpetuate those biases.

Concerns about bias in AI systems, including bias against the LGBTQ+ community, have been raised and studied by researchers and organisations. It is crucial that AI technologies are developed and trained using diverse and inclusive datasets to mitigate biases and promote fairness, and ongoing efforts are being made to address bias in AI and create more equitable systems. However, will this be enough, and when is it likely to be secure? As we know, hateful material, discrimination and conflicting data will sadly always be out there for AI to draw from unless it is trained or restricted carefully.

There are, for example, potential negative implications of AI embedding bias in social media platforms, including its impact on the LGBTQ+ community. If the AI systems used in social media algorithms are biased, several concerning outcomes could result:

  1. Discriminatory Content: Biased AI algorithms may promote or amplify discriminatory content against the LGBTQ+ community, leading to the spread of hate speech, misinformation, or harmful stereotypes.
  2. Visibility and Representation: Biased algorithms might limit the visibility and representation of LGBTQ+ individuals and their stories, exacerbating existing marginalization and hindering efforts for inclusion and acceptance.
  3. Misclassification and Targeting: Biased AI algorithms may misclassify LGBTQ+ content or users, leading to incorrect targeting or exclusion from relevant information, resources, or support networks.
  4. Online Harassment: If biased AI algorithms fail to adequately address online harassment or hate speech targeting LGBTQ+ individuals, it can contribute to an unsafe online environment and have detrimental effects on their mental well-being.

Addressing bias in AI algorithms and ensuring the fair and equitable treatment of all individuals, including the LGBTQ+ community, is crucial. It requires ongoing efforts to improve data collection, increase diversity in AI development, and implement robust evaluation and accountability mechanisms to minimize bias and promote inclusivity. What should we be considering?

  1. Community-Sourced Data: It’s crucial to collect data directly from the LGBT+ community to avoid relying solely on data generated by majority groups.
  2. Data Scrutiny: Conduct a thorough analysis of the training data to identify any biases or imbalances.
  3. Inclusive and Ethical AI Development: It’s essential to involve diverse teams with expertise and real experiences related to the LGBT+ community. 
  4. Regular Bias Audits: audit AI systems regularly so that they can be developed and re-trained as biases are identified.
  5. Transparent Documentation: make the development process clear and open, including data sources, training methods, and any steps taken to address biases.
  6. Continuous Monitoring: this includes monitoring performance metrics across different demographic groups, regularly re-evaluating the system’s impact, and addressing any emerging biases promptly.
  7. External Review and Regulation: in the UK we should ideally be encouraging external review and audits of AI systems by independent organisations.
  8. Ethical Guidelines and Standards: Develop and adhere to ethical guidelines and standards for AI development that explicitly address bias and discrimination against the LGBT+ community. These guidelines should emphasize the importance of fairness, inclusivity, and respect for human rights.
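Several of the steps above, in particular regular bias audits and continuous monitoring, can be made concrete with even very simple tooling. The sketch below compares the rate of favourable outcomes an AI system produces for different demographic groups and flags groups that fall below a chosen fairness threshold. The group labels, sample data and the 80% ("four-fifths rule") threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a group-wise bias audit: compare selection rates
# across demographic groups and flag large disparities.
# Group names, data and threshold below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome 1 = favourable."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-treated group's rate (the common 'four-fifths rule')."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Example audit data (hypothetical logged decisions):
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)   # {"A": 0.75, "B": 0.25}
flags = disparate_impact_flags(rates)  # group B is flagged
```

In practice an audit would use real logged decisions and statistically robust comparisons, but even a simple check like this, run regularly, surfaces disparities that warrant investigation and re-training.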

It’s crucial to recognize that bias elimination is an ongoing effort, and it requires collaboration between AI developers, data scientists, ethicists, and communities affected by AI systems. By proactively addressing biases, promoting inclusivity, and ensuring transparency, we can work towards developing AI systems that treat everyone fairly and respectfully.

Social Media AI Bias – does this exist, why, and what impact does it have on LGBT+ persons and businesses?

We are at the very beginning of unpacking the real-world consequences of gender, sexual orientation and even age bias in these AI tools, which may be suppressing the visibility of photos, videos or other content for the wrong reasons. In terms of photos, for example, investigations by US bodies are filled with stories of women being “shadowbanned” for opaque reasons; for many, this harms their businesses and livelihoods. A woman in a gym photo advertising a gym, for instance, may be deemed sexual where a man in the same situation is not. “Objectification of women seems deeply embedded in the system,” said Leon Derczynski, a professor of computer science at the IT University of Copenhagen, who specialises in online harm. If you are seeking to advertise a business on the web and social media, this is a significant drawback.

Shadowbanning refers to the suppression of content or images: a decision by a social media platform to limit the reach of a post or account. This has potential consequences for businesses, whether or not they are paying for campaigns, because it restricts their outreach.

Does this really matter?

AI bias can have a significant impact on LGBTQ individuals and their businesses, so yes, it really does need to be carefully considered:

  1. Discriminatory Algorithms can result in discriminatory outcomes for LGBTQ businesses. For example, advertising algorithms might disproportionately show or promote products and services to certain demographic groups while neglecting or excluding LGBTQ businesses and their target audience.
  2. Invisibility and Underrepresentation of LGBTQ businesses in search results, online directories, or business listings. This can make it more challenging for LGBTQ-owned businesses to gain visibility and attract customers, leading to potential disadvantages in terms of customer reach, growth, and revenue.
  3. Targeted Marketing often relies on demographic data and patterns to identify consumers and target marketing efforts. If the algorithms are biased, they might overlook or inaccurately categorise LGBTQ individuals, resulting in missed marketing opportunities for LGBTQ businesses. This can hinder their ability to effectively reach and engage with their target audience.
  4. Loan and Credit Discrimination may result in discriminatory outcomes for LGBTQ-owned businesses, directors or persons, leading to unequal access to capital and financial opportunities. This can hinder their growth and sustainability.
  5. Employment Bias: AI systems are used in various stages of the hiring process, including resume screening and candidate evaluation. If these systems are biased, they can perpetuate discrimination against LGBTQ individuals, potentially affecting the recruitment and employment opportunities of LGBTQ-owned businesses.
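One way concerns like the employment-bias point above are examined in practice is to compare error rates across groups, for example how often a CV-screening model wrongly rejects genuinely qualified candidates (its false negative rate) from each group. The sketch below uses hypothetical data and group names purely for illustration.

```python
# Hedged illustration: compare a screening model's false negative rate
# (qualified candidates wrongly rejected) across two hypothetical groups.

def false_negative_rate(records):
    """records: list of (actually_qualified, predicted_pass) booleans."""
    qualified = [r for r in records if r[0]]
    if not qualified:
        return 0.0
    missed = sum(1 for actual, predicted in qualified if not predicted)
    return missed / len(qualified)

# Hypothetical screening outcomes per group:
by_group = {
    "group_a": [(True, True), (True, True), (True, False), (False, False)],
    "group_b": [(True, False), (True, False), (True, True), (False, False)],
}
fnr = {g: false_negative_rate(recs) for g, recs in by_group.items()}
gap = abs(fnr["group_a"] - fnr["group_b"])  # a large gap signals possible bias
```

A persistent gap between groups on a metric like this is exactly the kind of evidence a bias assessment or regulator would expect a deployer of hiring AI to look for.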

Addressing AI bias and ensuring fair treatment for LGBTQ businesses requires a proactive approach from AI developers, businesses, and policymakers. It involves conducting bias assessments, promoting diversity and inclusion in AI development, and implementing transparent and accountable AI practices. By actively working to eliminate bias and promote fairness, we can help create an environment where LGBTQ businesses can thrive and succeed on equal footing.

Can lawyers help in the evolving technology to avoid biases?

We believe as lawyers we can and we try to play a crucial role in addressing AI bias and its implications:

  1. Compliance: We as lawyers must help ensure that AI systems and their deployment comply with relevant laws and regulations, and that any output has considered and correctly assessed the legal landscape relating to data protection, privacy, discrimination, and fair treatment. This analysis is undertaken before we release documents or give advice or direction.
  2. Risk Assessment and Mitigation: We should be assessing at every step the legal risks associated with AI bias and help develop strategies to mitigate those risks. We can evaluate potential liabilities arising from biased AI systems and assist in implementing risk management frameworks to minimise legal exposure for the developers or businesses using them. This can involve drafting policies, terms of service, and consent forms that explicitly address AI bias and its potential impact.
  3. Ethical and Responsible AI Development: As lawyers we provide guidance on ethical considerations in AI development. We can advise on best practices and help develop guidelines and policies that promote fairness, inclusivity, and non-discrimination. Lawyers can also assist in incorporating ethical frameworks into AI development processes, ensuring that the resulting systems align with legal and ethical standards.
  4. Advice, Support and Policy Development: We advocate for the creation and implementation of laws and policies that address AI bias, and we strongly suggest that external regulation and audits are considered for compliance. Lawyers really must engage in policy discussions, provide expert input, and contribute to the development of regulations or guidelines relating to AI and discrimination. Lawyers can also work with organisations and policymakers to raise awareness of the legal implications of AI bias.
  5. Litigation and Remedies: In cases where AI bias leads to discriminatory outcomes or harm, we will continue to represent affected individuals or businesses in legal proceedings and help them pursue appropriate remedies.

It is important for us, and for HR departments and managers too, to stay updated on the evolving legal landscape relating to AI bias and discrimination, as this technology will have an ever greater impact on staff, customers and businesses as it develops. By combining legal expertise with a deep understanding of AI technologies, we believe that as lawyers we can contribute to the development of responsible AI systems and help protect the rights and interests of individuals and communities affected by AI bias.

Magali Gruet, 2022, ‘That’s just common sense: USC researchers find bias in up to 38.6% of facts used by AI’ (online), available from