In this article, Deepali Khanna, Managing Director of the Rockefeller Foundation’s Asia Regional Office, discusses the need for unbiased data and AI technologies in decision-making systems to avoid sexism, racism, ableism, and other forms of discrimination.

https://asiatimes.com/2021/09/toward-an-inclusive-ai-future-for-women/

Read the full article at the link above.

Problem:

AI has become a pervasive technology across business and society. Because AI systems are designed by humans, they inherit human biases. When such algorithms are applied to social and economic problems, the result can be discrimination: in decision-making, they may cause algorithmic harm to marginalized communities or deprive them of opportunities.

A machine can process and analyze large volumes of data, but if that data encodes gender stereotypes, the results will reflect the same bias.
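To see how this happens mechanically, here is a minimal sketch with entirely hypothetical data (the feature names, records, and scoring function are illustrative, not taken from the article or any real system). A naive scorer trained on skewed historical decisions simply reproduces the skew:

```python
# Hypothetical illustration: bias in training data propagates to a model.
# Imagine past hiring decisions that favored applicants from "college_a";
# a naive frequency-based scorer learns and repeats that preference.

from collections import Counter

# Hypothetical past hiring records: (feature, hired?)
history = [
    ("college_a", True), ("college_a", True), ("college_a", True),
    ("college_a", False),
    ("college_b", True), ("college_b", False),
    ("college_b", False), ("college_b", False),
]

def learn_hire_rate(records):
    """Estimate P(hired | feature) from historical decisions."""
    hired = Counter(f for f, h in records if h)
    total = Counter(f for f, h in records)
    return {f: hired[f] / total[f] for f in total}

scores = learn_hire_rate(history)
# The "model" mirrors the historical skew, regardless of individual merit:
print(scores)  # {'college_a': 0.75, 'college_b': 0.25}
```

Nothing in the scorer is overtly discriminatory; the disparity comes entirely from the data it was trained on, which is the core problem the article describes.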

Examples:

Amazon had to shelve its AI recruiting tool because it failed to rate candidates in an unbiased manner and began to downgrade women’s résumés. Women’s World Banking found that credit-scoring AI systems commissioned by global financial service providers excluded women from loans and other financial services.

Short-term solutions:

Long-term solutions:

Need Help?