UK watchdogs to clamp down on banks using discriminatory AI in loan applications

The news: UK regulators have signaled that they will clamp down on artificial intelligence (AI) use in banking that might be used to discriminate against people, per the FT.

Banks that use AI to approve loan applications must be able to prove that the technology will not worsen discrimination against minorities.

The bigger picture: AI is a significant growth area in banking. Its market size is projected to soar globally from $3.88 billion in 2020 to $64.03 billion in 2030, with a CAGR of 32.6%, per a Research and Markets report.

AI in banking is maturing, and as data analysis improves, it brings the potential for more accurate decision-making. But concerns about misuse have led to heightened regulatory scrutiny:

  • US: Earlier this month, the Consumer Financial Protection Bureau (CFPB) warned it would get tougher on AI misuse in banking. CFPB Director Rohit Chopra cautioned that AI could be abused to advance “digital redlining” and “robo discrimination.” The chairs of two US congressional committees last year asked regulators to ensure the country’s lenders implemented safeguards ensuring AI improved access to credit for low- and middle-income families and people of color.
  • Europe: EU regulators last week urged lawmakers to consider “further analysing the use of data in AI/machine learning models and potential bias leading to discrimination and exclusion.”
  • UK: The Office for AI will release its white paper on governing and regulating AI in early 2022. This could lead to a shift from the government’s current sector-led approach to blanket AI-specific regulations.

The problem: Banks using AI need to be aware of the risks that come with the technology:

  • The complexity of AI models can create a “black box” problem in which decisions are made with very little transparency regarding how they reached their conclusions, making accountability and error detection a challenge.
  • Baked-in biases that are difficult to root out are another risk. For example, Apple and Goldman Sachs found themselves in hot water in 2019 over claims that technology used to measure creditworthiness might be biased against women.
  • Unintended biases in AI models can arise from flawed training data. AI algorithms that are trained on incomplete, biased, or extraneous data can yield judgments that are biased, causing a range of issues, including inadvertent discrimination.
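One common way to surface the kind of bias described above is to compare a model's approval rates across demographic groups. Below is a minimal, hypothetical sketch of such a check; the group labels, the decision log, and the 0.8 "rule of thumb" threshold are illustrative assumptions, not part of any regulator's prescribed method.

```python
# Hypothetical illustration: measuring approval-rate disparity in loan
# decisions. Groups, data, and thresholds are invented for this sketch.

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values well below 1.0 suggest the model may disadvantage a group."""
    return min(rates.values()) / max(rates.values())

# Toy decision log: the model approves group "A" far more often than "B".
log = ([("A", True)] * 80 + [("A", False)] * 20 +
       [("B", True)] * 50 + [("B", False)] * 50)

rates = approval_rates(log)
print(rates)                          # {'A': 0.8, 'B': 0.5}
print(disparate_impact_ratio(rates))  # 0.625 -- below the common 0.8 threshold
```

A ratio this far below 1.0 would not prove discrimination on its own, but it is the kind of auditable evidence a bank could show a regulator when asked to demonstrate that its model does not worsen outcomes for minorities.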

The solution: Clear and consistent guidance from regulators will give banks a framework to work within, helping them to minimize potential problems arising from AI use. Banks must recognize the inherent flaws in AI, improve transparency, and take responsibility for problems. Both banks and watchdogs must introduce policies to minimize the risk of bias and discrimination:

  • For robo-advice, humans should be involved in signing off outputs from algorithms before they are delivered as advice to customers, a practice known as having a “human-in-the-loop.”
  • Regulators should offer examples of best practices and poor practices when banks deploy AI.
  • Tools that are heavily reliant on training data may require new processes to manage the data quality.
  • Reverse-engineering can sometimes be used to draw conclusions about black-box algorithms, improving transparency and documentation.
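The human-in-the-loop practice from the first bullet can be sketched as a simple sign-off queue: algorithmic advice is held back until a human reviewer approves it. The class names, the review policy, and the example recommendations below are all hypothetical, intended only to show the pattern.

```python
# Hypothetical "human-in-the-loop" sign-off sketch: model output enters a
# review queue and is released to the customer only after human approval.

from dataclasses import dataclass, field

@dataclass
class Advice:
    customer_id: str
    recommendation: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    released: list = field(default_factory=list)

    def submit(self, advice: Advice):
        """Model output enters the queue instead of going straight out."""
        self.pending.append(advice)

    def review(self, reviewer_decision):
        """A human reviewer approves or rejects each pending item."""
        for advice in self.pending:
            if reviewer_decision(advice):
                advice.approved = True
                self.released.append(advice)
        self.pending = [a for a in self.pending if not a.approved]

queue = ReviewQueue()
queue.submit(Advice("c-101", "increase equity allocation to 70%"))
queue.submit(Advice("c-102", "take out a payday loan"))  # should be caught

# The reviewer rejects anything recommending payday loans.
queue.review(lambda a: "payday" not in a.recommendation)
print([a.customer_id for a in queue.released])  # ['c-101']
print([a.customer_id for a in queue.pending])   # ['c-102']
```

The design choice here is that nothing reaches the customer by default: the queue only moves items to `released` on an explicit human decision, which is the accountability property regulators are asking banks to demonstrate.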