
Rise of the racist robots? Avoiding bias in AI

Originally published in the Summer 2018 newsletter of the American Mortgage Diversity Council

As algorithms make more and more decisions for us and about us, we must make sure those decisions are actually fair. Does the general public even know that AI is already driving decisions about loan worthiness, emergency response, criminal sentencing, and medical diagnosis, to name just a few applications? It’s writing news stories and weather reports. AI is part of every Amazon purchase and Netflix binge. It’s telling advertisers what you said on Facebook.


Artificial intelligence, or AI, is the science of engineering intelligent machines. “Intelligent,” in this context, means taking in new information and reaching better conclusions based on that data, much as human brains do.

Current thinking holds that artificial intelligence could be the great equalizer, and on its face it has that potential. AI decides based on math, right? In practice, today’s AI reflects the biases of its creators.


AI has had some spectacular failures. Facial recognition systems, among the most commonly used forms of AI, perform well overall (88-93% accuracy) but are much worse for darker-skinned faces (77-87% accuracy) and for women (79-89% accuracy), and worse still for people at the intersection of those two subgroups, darker-skinned women (65-79% accuracy), according to the UC Berkeley School of Information. First-generation virtual assistants reinforce sexist gender roles: female voices for task assistants (Siri and Alexa) and male voices for problem-solving bots (Watson and Einstein). A driverless car recently struck and killed a pedestrian.
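Disparities like these surface only when a model’s accuracy is broken down by subgroup rather than reported as a single overall number. The short Python sketch below is a minimal illustration of that kind of audit; the records, group labels, and results are invented for the example and are not the study’s actual data.

# Illustrative only: a minimal accuracy-by-subgroup audit on made-up data.
# Group labels, records, and results are placeholders, not the study's data.
from collections import defaultdict

# Each record: (subgroup, true gender, gender predicted by the model)
predictions = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # a misclassification
    ("darker-skinned female",  "female", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, guess in predictions:
    total[group] += 1
    correct[group] += int(truth == guess)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: {accuracy:.0%} accuracy over {total[group]} samples")

In a real audit the same breakdown would be run over thousands of labeled images, and the gap between the best- and worst-served groups becomes the number to report and to fix.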

AI is moving into a level of sophistication that makes rooting out bias even more difficult. It is no longer just producing output; AI now develops its own novel skills and correlations, using logic its designers can’t always explain.

How can we capture the full value of AI for the benefit of all?

It starts with transparency standards and open-source frameworks such as deeplearn.js and TensorFlow that make AI more governable. AI Now, a leading nonprofit advocate for “algorithmic fairness,” says that if a designer can’t explain an algorithm’s decisions, the algorithm shouldn’t be used. The EU’s General Data Protection Regulation, which went into effect on May 25, 2018, requires machine-based decisions to be explainable. The U.K.’s House of Lords is calling for a global AI code of ethics. The mandate to develop transparent approaches is growing.
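One thing “explainable” can mean in practice is a model whose individual decisions decompose into per-feature contributions. The sketch below is a hypothetical loan-scoring example in Python with invented weights and applicant values; it illustrates the idea, not any regulator’s prescribed method or a specific lender’s model.

# A hypothetical "explainable" decision: a linear credit score whose output
# decomposes into per-feature contributions. All weights and applicant values
# are invented for illustration.
FEATURE_WEIGHTS = {
    "income_thousands": 0.8,    # higher income raises the score
    "debt_to_income":  -1.5,    # higher debt load lowers it
    "years_of_history": 0.5,    # longer credit history raises it
}
BASELINE = 2.0  # intercept; in this toy model, a score above 0 means "approve"

def explain_decision(applicant):
    contributions = {name: weight * applicant[name]
                     for name, weight in FEATURE_WEIGHTS.items()}
    score = BASELINE + sum(contributions.values())
    print("decision:", "approve" if score > 0 else "deny", f"(score {score:.2f})")
    # List each feature's contribution, largest effect first.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {value:+.2f}")

explain_decision({"income_thousands": 5.5, "debt_to_income": 4.0, "years_of_history": 3.0})

Deep models used in production rarely decompose this cleanly, which is exactly why groups like AI Now push designers to be able to account for each decision before deploying a system.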

Teams creating artificial intelligence should include ethicists, psychologists, and sociologists to help solve problems fairly. Data ethics courses should be part of engineering curricula. Data sources like historical texts should be evaluated before they are fed into models, so they don’t embed racist and sexist attitudes in the results. Better feedback loops keep humans in the middle, so human judgment can be reinserted when needed. Finding, scrubbing, interpreting, and choosing data sets has become an art for engineers building algorithms.

Real estate was considered behind the curve but is catching up quickly when it comes to using machine learning to make decisions. Many time-consuming aspects of property management are already, or soon will be, fully automated: sourcing, qualifying, and signing tenants, employees, and construction vendors, as well as using sensors and software to monitor, inspect, and display structures. AI bots answer questions about lease terms, square footage, and other common concerns during virtual tours. Real estate startup Truss uses Vera, a proprietary AI bot, to help clients source office space. Our company, WeatherCheck, uses AI to tell owners when to use insurance to cover weather damage repairs.

One interesting application of AI in real estate was Inman’s “Broker vs. Bot” challenge, which pitted agents against an algorithm in predicting buyer preferences. The bot picked the buyer’s favorite home all three times, and that was two years ago. Brokerage REX Real Estate Exchange uses artificial intelligence and machine learning to sell homes, crunching tens of thousands of data points to find likely buyers for a home and targeting them with digital ads, while charging only a 2 percent commission.

Juwai.com, a Chinese international real estate website, has deployed a line of Mandarin-speaking robotic personal assistants called “Butler 1” for agents and developers in the U.S., Canada, Australia, the U.K., Malaysia, and Singapore to “help guide Chinese buyers through purchases.” The robots will also capture important customer feedback during transactions.

As Cathy O’Neil, mathematician and author of “Weapons of Math Destruction,” recently said, “It’s an emerging field…within the next two decades we will either have solved the problem of algorithmic accountability or we will have submitted our free will to stupid and flawed machines.”

Clearly, AI is already part of the day-to-day lives of almost all of us. Whether it develops with or without bias depends on our collective involvement.
