Gender bias in China’s AI bots: Reflecting real-life inequalities and bringing embarrassment to Xi’s government
Accusations of gender bias in China’s AI bots have caused Beijing considerable embarrassment. The bias these systems display reflects the prevailing conditions for women in mainland China.
According to an article in the South China Morning Post (SCMP), the prevalence of gender bias in AI bots, including Baidu’s Ernie Bot, China’s most popular chatbot, has come as a shock to the global community.
The gender bias was reported in Baidu’s Ernie Bot, China’s answer to OpenAI’s ChatGPT. Evidence of the disparity came to the fore when the bot was asked to generate an image of a nurse taking care of the elderly: Ernie Bot depicted a woman. Conversely, when asked to depict a professor teaching mathematics or a boss reprimanding employees, the bot portrayed men. This pattern of gendered representation was consistent across other AI models in China, including search engines and medical assistants. Ironically, China had claimed just two months earlier that Ernie Bot had garnered more than 200 million users as it sought to remain the country’s most popular ChatGPT-like chatbot amid increasingly fierce competition.
China’s tech industry has taken serious note of these biases, calling for better-balanced data and model redesigns to promote equal rights. Facing embarrassment, Chinese researchers and industry professionals have begun searching for solutions. According to Gao Yang, an associate professor at the Beijing Institute of Technology, these models were trained on data containing existing societal biases; consequently, when handling specific tasks, the AI attributed certain characteristics or behaviours to particular genders. Notably, the gender bias is not restricted to AI bots. A report from the China branch of the Mana Foundation found that search results for “women” often related to sex and that advertisements frequently objectified women. When AI picks up these societal biases, it perpetuates a cycle of discrimination, influencing users and potentially triggering a “butterfly effect” of technical and social problems. Chinese social media, search engines, and online employment platforms have also exhibited significant gender bias.
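The mechanism Gao describes can be illustrated with a minimal sketch. The toy corpus, the pronoun/occupation pairs, and the skew below are all invented for illustration; real chatbots learn vastly richer statistics, but a maximum-likelihood model of any kind can only echo the imbalances in its training data.

```python
# Minimal sketch with invented toy data: a co-occurrence "model" trained on
# skewed text reproduces that skew when asked which pronoun fits an occupation.
from collections import Counter, defaultdict

# Toy training corpus: (pronoun, occupation) pairs with a deliberate imbalance,
# mimicking the societal skew the article describes.
corpus = [
    ("she", "nurse"), ("she", "nurse"), ("she", "nurse"), ("he", "nurse"),
    ("he", "professor"), ("he", "professor"), ("he", "professor"), ("she", "professor"),
]

counts = defaultdict(Counter)
for pronoun, occupation in corpus:
    counts[occupation][pronoun] += 1

# The maximum-likelihood prediction simply mirrors the training imbalance.
for occupation, pronouns in counts.items():
    pronoun, n = pronouns.most_common(1)[0]
    print(f"{occupation}: most likely pronoun '{pronoun}' "
          f"(p = {n / sum(pronouns.values()):.2f})")
# nurse -> 'she' (p = 0.75); professor -> 'he' (p = 0.75)
```

No rule about gender is coded anywhere in this sketch; the skew emerges purely from the data, which is why rebalancing the data is the first remedy researchers reach for.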
Most worrying are the real-world implications of gender bias in AI. For example, AI-based job application processes have preferred men over women. In fields like finance, medicine, and law, where AI assists in decision-making, the bias can result in women receiving lower loan scores or higher interest rates. Experts recently met in Ningbo, a sub-provincial city in eastern China’s Zhejiang province, to discuss these issues with AI models. They suggested that eliminating discrimination in AI models requires addressing discrimination in the real world first, and that encouraging and supporting female students, especially in science and technology, is crucial. However, China’s elite fields have notoriously low female participation: the current Politburo has no women, and only six of the 133 academicians selected in 2023 were women.
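To make the loan-scoring disparity mentioned above concrete, here is a minimal audit sketch. The applicant scores and the 0.60 approval threshold are invented; the point is the metric, a demographic-parity gap (the difference in approval rates between groups), which is one standard first check for this kind of disparity.

```python
# Minimal sketch with invented data: auditing loan-model outputs for a gender
# gap in approval rates (the demographic-parity difference).
applicants = [
    # (gender, model_score) -- illustrative numbers only
    ("F", 0.58), ("F", 0.61), ("F", 0.43), ("F", 0.55),
    ("M", 0.72), ("M", 0.66), ("M", 0.59), ("M", 0.48),
]
THRESHOLD = 0.60  # assumed approval cut-off

def approval_rate(group: str) -> float:
    """Share of the group whose model score clears the approval threshold."""
    scores = [s for g, s in applicants if g == group]
    return sum(s >= THRESHOLD for s in scores) / len(scores)

gap = approval_rate("M") - approval_rate("F")
print(f"approval rate (M): {approval_rate('M'):.2f}")  # 0.50
print(f"approval rate (F): {approval_rate('F'):.2f}")  # 0.25
print(f"demographic parity gap: {gap:.2f}")            # 0.25
```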
The Chinese government acknowledged the need to eliminate bias in AI models long ago. An AI planning and management committee under the Ministry of Science and Technology issued a document in 2019 calling for “responsible AI,” urging developers to eliminate bias in data acquisition, algorithm design, and application. Unfortunately, few companies have implemented matching procedures or hired industry experts to redesign models from the ground up. Yao Changjiang, AI technical director at Insound Intelligence, maintained that improving AI models is a gradual process, pointing out that AI reflects existing societal problems because it is fed human data. News reports from China suggest that policymakers have been grappling with AI’s potential while trying to harness its power responsibly.
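One concrete mitigation in the spirit of that guidance is to reweight training examples so under-represented group/label pairs count more. This is an illustrative, commonly used technique, not a procedure taken from the 2019 document itself, and the counts below are invented.

```python
# Minimal sketch with invented counts: inverse-frequency reweighting, a common
# data-side debiasing step applied before (re)training a model.
from collections import Counter

# Raw training labels are heavily skewed: 90 female nurses to 10 male nurses.
labels = [("nurse", "F")] * 90 + [("nurse", "M")] * 10

counts = Counter(labels)
total, n_groups = len(labels), len(counts)

# Up-weight rare (occupation, gender) pairs so each group contributes equally
# to the training objective instead of mirroring the raw skew.
weights = {pair: total / (n_groups * n) for pair, n in counts.items()}
for pair, w in weights.items():
    print(pair, round(w, 2))  # ('nurse', 'F') 0.56 / ('nurse', 'M') 5.0
```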
According to women’s rights activists around the world, gender bias in AI can have profound implications. Biased AI systems can lead to unfair outcomes, such as lower loan scores for women or inaccurate medical diagnoses. The gender digital divide and the lack of diverse perspectives in AI development exacerbate these problems. Natacha Sangwa, a Rwandan student, observed that AI systems often fail to consider women’s distinct health symptoms, leading to inaccurate diagnoses. This gap in data and representation perpetuates gender inequalities and degrades the quality of AI-powered services.
According to an article on Chinese AI bots by the bg tv network, studies have shown that customer-service AI bots respond differently to users based on their perceived gender, and the pattern persists. For instance, female users may receive more polite and empathetic responses, while male users may encounter more direct and less nuanced replies. Such discrepancies not only reflect but also perpetuate existing gender norms and biases.
Seasoned broadcaster Lebo Maseke, writing for the bg tv network, stated that discrimination by AI bots is not restricted to customer service; it is also visible in corporate hiring. She pointed out that biased hiring algorithms might favour male candidates over equally qualified female candidates, perpetuating gender disparities in the workplace. Similarly, biased medical AI systems might provide different recommendations or diagnoses based on a patient’s gender, leading to disparities in healthcare outcomes. In the Chinese context, where AI is rapidly being integrated into various sectors, addressing gender bias is crucial to ensuring that technological advancement benefits all members of society equally. Failing to do so, she insisted, could exacerbate existing gender inequalities and hinder progress toward gender parity.
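A counterfactual audit is one simple way to surface the hiring bias Maseke describes: swap the gendered words in an otherwise identical résumé and compare scores. The toy scorer below is entirely hypothetical: it rewards a gendered pronoun directly, standing in for the subtler correlations a real screening model absorbs from historical hiring data.

```python
# Minimal sketch with a hypothetical scorer: a counterfactual fairness audit.
# Swapping gendered words in an otherwise identical resume should leave a
# fair screening model's score unchanged; a score change flags bias.

def toy_screening_score(text: str) -> int:
    """Stand-in for a screener that absorbed a gendered signal from
    historical hiring data: it rewards the pronoun 'he' directly."""
    words = text.lower().split()
    score = 2 * words.count("python")  # legitimate skill signal
    score += words.count("he")         # spurious, gender-linked signal
    return score

def swap_pronouns(text: str) -> str:
    """Build the counterfactual resume by swapping gendered pronouns."""
    mapping = {"he": "she", "his": "her", "she": "he", "her": "his"}
    return " ".join(mapping.get(w, w) for w in text.lower().split())

resume = "python developer ; he led his robotics team"
a = toy_screening_score(resume)
b = toy_screening_score(swap_pronouns(resume))
print(a, b, "BIASED" if a != b else "ok")  # 3 2 BIASED
```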
Maseke maintained, however, that in China’s context societal norms and gender roles play a significant part in shaping these biases. Traditional gender roles, deeply ingrained in Chinese culture, often portray men as assertive and dominant while casting women as nurturing and submissive. These stereotypes appear throughout media, literature, and everyday interactions, so when AI developers train their models on real-life data, the gendered patterns are inadvertently incorporated into the AI systems.
The presence of gender bias in AI systems has far-reaching implications because it undermines fairness and equality, principles fundamental to ethical AI development. When AI systems treat users differently based on gender, they perpetuate discrimination and reinforce harmful stereotypes. This affects not only individual users but also contributes to broader societal inequalities.