AI favors the rich and the powerful, hurts the rest: Mozilla

The growing power disparity between those who benefit from artificial intelligence (AI) and those who are harmed by the technology is a top challenge facing the Internet, according to the 2022 Internet Health Report. The report states that AI and automation can be a powerful tool for influential players, such as the tech titans who profit from them, while at the same time harming vulnerable groups and societies.

The report, compiled by researchers at Mozilla, the nonprofit that builds the Firefox web browser and advocates for privacy on the web, said, “In real life, the disadvantages of AI disproportionately affect those who do not benefit from global systems of power.”

“Amid the global rush to automate, we see serious threats of discrimination and surveillance. We see a lack of transparency and accountability, and an over-reliance on automation for decisions of great consequence,” said the Mozilla researchers.

The report acknowledged that systems trained on vast swaths of complex real-world data are revolutionizing computing tasks that were previously difficult or impossible, such as recognizing speech, detecting financial fraud, and operating self-driving cars. Even so, it noted, there are more than enough challenges in the AI universe.

For example, machine learning models often reproduce racist and sexist stereotypes because of bias in data obtained from Internet forums, popular culture, and photo archives.

The nonprofit believes that large companies are not transparent about how they use our personal data in algorithms that recommend social media posts, products, and purchases.

In addition, recommendation systems can be manipulated to show promotional or otherwise harmful content. In Mozilla’s own study of YouTube, the platform’s algorithmic recommendations were responsible for 71 percent of the videos people reported regretting having watched.

Companies like Google, Amazon and Facebook have major programs in place to tackle issues such as AI bias, yet bias still creeps into algorithms in subtle ways. For example, The New York Times pointed to a 2015 incident in which Google apologized after Google Photos labeled images of Black people as gorillas. To prevent a repeat of the embarrassment, Google simply removed the labels for gorillas, chimpanzees and monkeys.

Similarly, until the 2020 mass protests over the killing of George Floyd in the US, Amazon profited from its facial recognition software by selling it to police departments, even though research has shown that facial recognition programs misidentify people of color far more often than white people, and that their use by police could lead to unjust arrests that disproportionately affect Black people. Facebook, for its part, apologized after its recommendation system labeled a news clip featuring Black men in altercations with white civilians and police officers as being about primates.

Mozilla’s researchers take a different view, however, saying that while Big Tech funds a great deal of academic research, including papers that focus on the social problems and risks of AI, the companies do not walk the talk.

“The centralization of influence and control over AI doesn’t work to the benefit of most people,” Mozilla’s Internet Health Report editor Solana Larsen said in the report. “We need to strengthen the technology ecosystem beyond the realm of big tech and venture capital startups if we want to unlock the full potential of trustworthy AI,” she said.

Mozilla suggested that “a new set of rules could help set guardrails for innovation that minimize harm and enforce data privacy, user rights and more.”
