Mini Essay

The Impact of AI on Disparities Faced by Women and Minority Groups in a Structurally Unjust Society

 

Artificial Intelligence is “the branch of computer science concerned with modeling intelligent human behavior on a computer” (Adam, p. 355). AI has become a powerful, rapidly evolving force that permeates nearly every aspect of our lives. However, it is crucial to recognize that alongside its transformative potential, Artificial Intelligence also introduces difficult and controversial challenges, particularly in the context of the structurally unjust society we live in. This essay critiques and analyzes the ways AI impacts gender and minority disparities from an intersectional feminist perspective, shedding light on how it both reflects and exacerbates existing biases, impedes equitable access and opportunities, and necessitates proactive measures to rectify these inequalities.

 

Artificial Intelligence systems, particularly those built on machine learning algorithms, are often trained on massive datasets that encapsulate historical, systemic, and societal prejudices and biases. As Lin and Chen emphasize in their study “Artificial Intelligence in a Structurally Unjust Society,” AI is highly prone to “replicating pre-existing social injustices, such as sexism and racism” (Lin & Chen, p. 2). These biases can manifest in numerous ways, profoundly influencing the real-life experiences of women, BIPOC communities, and other minority or marginalized groups. A prime example of this phenomenon is found in the Reuters article “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Amazon’s AI-driven recruiting tool was designed to streamline the hiring process but was eventually abandoned due to its inherent bias favoring male candidates. The system had been trained on resumes submitted to Amazon over a ten-year period, which came predominantly from male candidates, clearly showcasing how AI can perpetuate and intensify existing sexist biases. As Lin and Chen describe, this “AI recruiting tool downgrades resumes containing the keyword ‘women’s,’ such as ‘women’s college’ and ‘women’s chess club,’ resulting in a preference for male candidates” (Dastin 2018, as cited in Lin & Chen, p. 2).

 

Furthermore, AI’s reliance on historical data can inadvertently reinforce traditional gender roles, biases, and stereotypes. This perpetuates a lack of representation and opportunities for women, racialized people, and other minority groups in fields historically dominated by cisgender white men, thereby widening already existing disparities.

 

In a structurally unjust society, access to AI technologies and opportunities to benefit from them is inherently unequal. Women, Queer people, and other minority groups frequently encounter barriers when entering technology or AI-related fields, whether as users or developers. The lack of diversity in AI research and development can result in technologies that inadequately address the needs and perspectives of these underrepresented groups, and may even deepen their marginalization. AI-driven decision-making processes can further hinder equitable access to resources, opportunities, and just treatment for marginalized people. For instance, “Another algorithm used to judge the risk for recidivism tends to falsely identify Black defendants as future criminals (Angwin et al. 2016); and the algorithms behind one global search engine tend to represent women of color with degrading stereotypes” (Noble 2018, as cited in Lin & Chen, p. 2). When trained on historical data, these algorithms inadvertently perpetuate systemic discriminatory practices that have been deeply ingrained in society for decades.

 

Proactive measures are imperative to mitigate the exacerbation of disparities by AI in structurally unjust societies. This necessitates data-driven policymaking and rigorous oversight to ensure that AI systems are developed and employed fairly and responsibly. Furthermore, transparent and accountable AI development processes can facilitate the early identification and rectification of biases in algorithms. Promoting diversity and inclusivity in AI development stands as another pivotal step. Encouraging women and minorities to participate in Artificial Intelligence research and development can lead to technologies that genuinely address the needs and perspectives of these marginalized groups. Bringing social scientists into AI-related fields to collaborate with AI engineers could also help reduce the disparities that AI generates or exacerbates. In addition, investments in educational and training programs can facilitate access to AI-related opportunities for underrepresented individuals.

 

In summary, Artificial Intelligence, when operating within a structurally unjust society, presents significant and difficult challenges for women, BIPOC communities, and other minority groups that have historically been underrepresented. It not only reflects but also intensifies existing societal biases, hindering the pursuit of equitable access and opportunities. Addressing this demands a concerted effort from all stakeholders, including policymakers, industry leaders, and society at large, to ensure that Artificial Intelligence is developed and deployed through an intersectional feminist lens, in a manner that upholds fairness, equity, and justice for all people. The consequences of failing to do so may well perpetuate and aggravate the disparities faced by women and many other minority groups, undermining progress towards a more just, fair, and compassionate society.






Bibliography  

 

Adam, A. (1995). A Feminist Critique of Artificial Intelligence. European Journal of Women’s Studies, 2(3), 355-377. https://doi.org/10.1177/135050689500200305

 

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

 

Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912

Lin, T.-A., & Chen, P.-H. C. (2022). Artificial intelligence in a structurally unjust society. Feminist Philosophy Quarterly, 8(3/4).

 
