One of the chief issues with machine learning and artificial intelligence systems is that they, like the data scientists who create them, often have their own built-in biases.
Whether that’s favoring one portion of the population over another when deciding who deserves a loan, or misidentifying people of different ethnicities via facial recognition algorithms, machine learning programs have generated problematic outcomes and shaken trust in the technology.
Microsoft (MSFT) is attempting to address the issue of bias in machine learning with its new Fairlearn toolkit. The kit, which the tech giant announced will be generally available via its Azure Machine Learning platform in June, will let companies developing machine learning models in Azure test their systems for biases that could dramatically impact people’s lives.
The announcement came during Microsoft’s annual Build developers conference this week. Rather than its usual live event in Seattle, the company hosted a virtual version of the show.

The Fairlearn toolkit debuted at Microsoft’s Ignite event in November and is being made generally available next month.
In explaining the importance of such tools, Microsoft used the example of EY, which tested Fairlearn on a machine learning model designed to automate loan decisions.
When the firm ran Fairlearn on the model, the toolkit revealed that the loan algorithm had a significant gender bias: in the test, men were approved for loans at a rate 15.3 percentage points higher than women.
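The gap EY measured is what Fairlearn calls a demographic parity difference: the spread in approval (selection) rates between groups. As a rough, hypothetical sketch of how such a check looks in practice, using made-up data and a stand-in classifier rather than EY’s actual model, the toolkit’s metrics can be applied to a model’s predictions along these lines:

# Hypothetical example, not EY's model: measure the approval-rate gap
# by sex with Fairlearn's metrics on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                    # stand-in credit features
sex = rng.choice(["female", "male"], size=n)   # sensitive feature
# Synthetic labels that deliberately favor one group, so there is a bias to find
y = (X[:, 0] + 0.5 * (sex == "male") + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Approval (selection) rate per group, and the gap between the groups
rates = MetricFrame(metrics=selection_rate, y_true=y, y_pred=pred, sensitive_features=sex)
print(rates.by_group)
print(demographic_parity_difference(y, pred, sensitive_features=sex))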
The algorithm was built using loan approval data from banks, which included information such as transaction, payment, and credit histories.
But Microsoft says that historical data can carry biases against applicants from certain demographics. And if those biases bleed into loan approvals, they can have a dramatic impact on individuals’ lives.
According to Microsoft, when EY used the Fairlearn toolkit to retrain its machine learning models, it was able to cut the gender gap in loan approvals to 0.43 percentage points.
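Microsoft hasn’t said exactly which mitigation technique EY applied, but Fairlearn also ships reduction-based algorithms that retrain a model under a fairness constraint. A minimal, hypothetical sketch, reusing the made-up data from the example above and assuming a demographic parity constraint:

# Hypothetical continuation of the sketch above (reuses X, y, sex and the imports).
# The demographic parity constraint is an assumption; Microsoft and EY did not
# disclose which Fairlearn mitigation was used.
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

mitigator = ExponentiatedGradient(
    LogisticRegression(),
    constraints=DemographicParity(),  # push approval rates toward parity across groups
)
mitigator.fit(X, y, sensitive_features=sex)
mitigated_pred = mitigator.predict(X)

# The approval-rate gap should shrink relative to the unmitigated model
print(demographic_parity_difference(y, mitigated_pred, sensitive_features=sex))

In practice, the choice of constraint and the trade-off against model accuracy would be tuned to the specific use case.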
“Increasingly we’re seeing regulators looking closely at these models,” Eric Boyd, Microsoft’s corporate vice president of Azure AI, said in a statement. “Being able to document and demonstrate that they followed the leading practices and have worked very hard to improve the fairness of the datasets are essential to being able to continue to operate.”
With machine learning algorithms being used across an increasingly wide range of applications, from facial recognition systems for law enforcement agencies to loan approvals at banks, ensuring bias isn’t part of the equation will only become more important moving forward.