Machine behavior may be easier to fix than our own

In a paper titled ‘Fairness through Awareness’ published in 2011, researchers from IBM, Microsoft and the University of Toronto outlined the importance of preventing algorithmic discrimination. At the time, algorithmic bias was an esoteric concern, and deep learning had not yet become mainstream. But in recent years, everything from racist Twitter bots and unfortunate Google search results to software found to be biased against people of color while assessing the risk of recidivism has highlighted the importance of reducing bias in artificial intelligence (AI) algorithms. As algorithmic decision-making becomes an integral part of our social systems, the phenomenon of algorithmic bias will soon become an important policy issue.

AI algorithms do not have the human mind’s ability to differentiate between right and wrong. The data we feed into these algorithms ultimately determines how they behave. Current efforts to rid AI algorithms of bias range from simple strategies, such as masking features that lend themselves to biased results and diversifying or re-sampling training data sets, to more complex ones, such as requiring an algorithm to apply the same measures of performance across different groups of people rather than classifying each group by a different yardstick. Strategic approaches have also been deployed, such as hiring more diverse teams to work on AI projects and making algorithms more transparent and interpretable. But these strategies have not yet achieved a satisfactory level of algorithmic neutrality.
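To make the simpler of these strategies concrete, here is a minimal sketch in Python of feature masking and group-wise re-sampling. The data frame and its column names (“group”, “income”, “repaid”) are hypothetical, invented purely for illustration; real debiasing pipelines are considerably more involved.

```python
# Minimal, hypothetical sketch of two simple debiasing strategies:
# (1) masking a feature that lends itself to biased results, and
# (2) re-sampling so each group is equally represented in training.
# All column names and data here are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A"] * 80 + ["B"] * 20,           # imbalanced groups
    "income": list(range(80)) + list(range(20)),  # toy feature
    "repaid": ([1, 0] * 40) + ([1, 0] * 10),      # toy label
})

# Strategy 1: mask (drop) the sensitive feature before training.
X_masked = df.drop(columns=["group"])

# Strategy 2: re-sample so both groups contribute equally.
n = df["group"].value_counts().min()
df_balanced = df.groupby("group", group_keys=False).sample(n=n, random_state=0)

print(df_balanced["group"].value_counts())  # A: 20, B: 20
```

One known caveat, consistent with the article’s skepticism: masking alone is often insufficient, because other features can act as proxies for the masked one.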

Various efforts have been made over a long period to tackle the problem of bias across society. Jennifer Eberhardt’s book, Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, reminds us that racial zoning laws in many American cities that forbade African-Americans from moving into Caucasian neighborhoods were declared illegal by the US Supreme Court in 1917. But even today, remnants of those discriminatory practices still exist.

Airbnb, a space-rental platform that is one of the stars of the online economy, aims to promote the ideal of a world where anyone can belong anywhere. But the company has found racial discrimination to be the biggest challenge in realizing that mission. Guests belonging to minority groups sometimes feel discriminated against by hosts who refuse their booking requests. To address this, Airbnb made it mandatory for all its users to sign a commitment to uphold racial equality as a core value. It did not have the desired effect. Airbnb then introduced ‘instant booking’, whereby a guest can book an accommodation unit without the host’s prior approval. Only a small fraction of guests, about 3%, chose this provision, and African-Americans were found to be even more reluctant than other groups to use the option, because they wanted to avoid unpleasant surprises when meeting the host in person. Why do such attempts fail to solve the problem of these persistent biases?

As prominent psychologists Mahzarin R. Banaji and Anthony G. Greenwald point out in their book Blindspot: Hidden Biases of Good People, the human brain harbors biases hidden in its non-conscious processes. These implicit biases affect our daily decisions without our being consciously aware of it. Expecting a person to recognize these prejudices and change them may not be realistic. It may take several generations of work on behavior improvement before real change is visible.

It is clear that many of the algorithmic biases the global AI industry is trying to tackle are related to biases that have long been ‘baked into’ societal perspectives. If the world is biased, its historical data will be biased too, and AI algorithms that ‘learn’ from this data are bound to inherit that bias. Therefore, even as we deploy strategies to reduce algorithmic bias at a rational level, AI professionals must look for new ways to address the problem at an implicit level.

The idea is not to fight prejudice, but to try to counterbalance it. Let us assume we are evaluating the creditworthiness of an individual. Behavioral science reminds us that many behavioral factors can help assess a person’s creditworthiness. For example, a person with a growth mindset is more likely to be successful in life, and a person who shows patience in the face of adversity will be better placed to deal with setbacks. Traits such as these can help highlight the positive side of a borrower’s ability and willingness to repay a loan. Since these behavioral factors are mostly implicit in nature, introducing them into credit markets can be an effective strategy to counteract the negative impact of biases in traditional AI training data.
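As a thought experiment, the sketch below shows what blending such behavioral signals with traditional financial features might look like in a simple model. Everything here is an assumption made for illustration: the feature names (growth_mindset, patience), the synthetic data, and the premise that these traits can be quantified at all. It is not a validated scoring method.

```python
# Hypothetical illustration of blending implicit behavioral signals
# with traditional financial features in a creditworthiness model.
# Feature names and the synthetic data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500

# Traditional financial features: income and existing debt.
income = rng.normal(50_000, 15_000, n)
debt   = rng.normal(10_000, 5_000, n)

# Implicit behavioral features, assumed to be measurable somehow
# (e.g. via surveys or interaction data): growth mindset, patience.
growth_mindset = rng.uniform(0, 1, n)
patience       = rng.uniform(0, 1, n)

# Synthetic repayment label that depends on all four signals.
logit = (0.00004 * income - 0.0001 * debt
         + 1.5 * growth_mindset + 1.0 * patience - 2.0)
repaid = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, debt, growth_mindset, patience])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, repaid)

# Score a hypothetical applicant on both kinds of features.
applicant = np.array([[45_000, 8_000, 0.8, 0.7]])
print("Estimated repayment probability:",
      model.predict_proba(applicant)[0, 1])
```

The design point is simply that behavioral columns sit alongside financial ones as inputs; how to measure them reliably is exactly the open challenge the next paragraph describes.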

Much work will have to be done to implement this new strategy of combining implicit behavioral data with traditional data. The biggest challenge will be identifying the non-conscious behavioral drivers of an individual’s creditworthiness. The second will be capturing data that accurately represents these implicit behavioral factors. But the difficult task will be worth the effort. Once AI systems are seen as fairer, their acceptance will increase. This strategy is also a reminder that, to make the world a better place, it may be far easier to train machines to behave more responsibly than to wait endlessly for humans to change their unwanted patterns of behavior.

Biju Dominic is Chief Evangelist, Fractal Analytics and President of FinalMile Consulting
