Algorithmic bias — time to redress the balance

23 August 2022

Across the financial services sector, artificial intelligence (AI) is playing an increasingly important role in automating processes, making them faster and less resource intensive. However, vigilance is needed to prevent unintended bias: while no one doubts automation’s ability to improve efficiency, some are now questioning its capacity for fairness. Can we be confident that the algorithms being deployed are free of bias against certain groups, bias that leads to their financial exclusion?

Lack of awareness is no excuse

The UK Centre for Data Ethics and Innovation’s AI Barometer has previously identified the potential of such bias as “the biggest risk arising from the use of data-driven technology”, and the organisation is now working with “partners to facilitate responsible data sharing, better public sector AI and data use, and laying the foundations for a strong AI assurance ecosystem”. Meanwhile, the European Banking Authority (EBA), Bank of England and Financial Conduct Authority are sufficiently concerned to have begun looking at the potential social impact and how new technologies could negatively affect lending decisions.

It’s a challenge that’s unlikely to go away. No one is sure of its scale, though a first-of-its-kind NHS pilot study into how health and care services are allocated and delivered may shed some light on the numbers affected by algorithmic bias.

And if there is a very real issue, what can be done to redress the balance?

The answer lies in ‘algorithmovigilance’ – the systematic monitoring by financial institutions of their algorithms to ensure that the many automated checks and insights they rely on, from credit referencing to fraud and anti-money-laundering (AML) processes, do not, unconsciously or otherwise, discriminate against certain individuals or groups.

By being algorithmovigilant, organisations are better equipped to recognise and root out the human biases that can all too readily become part of the AI process and lead to unfair decision making.
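
To make that concrete, one simple check an algorithmovigilant team might run is a disparate-impact ratio: comparing approval rates across groups and flagging any group whose rate falls well below that of the best-served group. The Python sketch below is a minimal, hypothetical illustration; the field names, the data format and the common ‘four-fifths’ rule of thumb are assumptions for the example, not a prescribed test.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions, group_key="group", outcome_key="approved"):
    """Approval rate of each group divided by the best-served group's rate.

    `decisions` is a list of dicts such as {"group": "A", "approved": True};
    the field names are illustrative. A ratio well below 1.0 (a common rule
    of thumb is 0.8) is a signal to investigate, not proof of unlawful bias.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        group = record[group_key]
        totals[group] += 1
        approvals[group] += bool(record[outcome_key])

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values()) or 1.0  # avoid dividing by zero if nothing is approved
    return {g: rate / best for g, rate in rates.items()}


if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
    ]
    print(disparate_impact_ratio(sample))  # {'A': 1.0, 'B': 0.5}
```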

Data validation

How does this bias arise in the first place? It is usually the result of systems being built on imperfect datasets, ones that are incomplete, incorrect or out of date, to which programmers, managers and other stakeholders add their own assumptions and prejudices. Out of this process emerge algorithms that skew assessments.
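
As an illustration of that data-validation point, a first line of defence is simply checking that records feeding a model are complete and current before they are used. The sketch below is minimal and hypothetical; the required fields and the twelve-month staleness cut-off are assumptions, and real pipelines would enforce far richer rules.

```python
from datetime import date, timedelta

REQUIRED_FIELDS = ("income", "postcode", "last_updated")  # illustrative fields
MAX_AGE = timedelta(days=365)                             # illustrative staleness limit

def validate_record(record, today=None):
    """Return a list of data-quality problems found in one customer record."""
    today = today or date.today()
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if record.get(field) in (None, "")]
    last_updated = record.get("last_updated")
    if last_updated and today - last_updated > MAX_AGE:
        problems.append("record out of date")
    return problems


if __name__ == "__main__":
    customer = {"income": 32000, "postcode": "", "last_updated": date(2020, 1, 15)}
    print(validate_record(customer, today=date(2022, 8, 23)))
    # ['missing postcode', 'record out of date']
```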

It should go without saying that AI platforms must be designed to meet all legal, social and ethical standards. That means firms need to make algorithmovigilance a priority, to steer clear of legal and regulatory risks as well as long-term reputational damage.

Removing this kind of bias is paramount, given that trust in so many organisations has been eroded. Consumers are actively searching out companies that do ‘the right thing’ and deliver on their promises. What they want and need is transparency, not to fall prey to decisions that can’t be justified or challenged because they’ve been made by an algorithm that no one understands. The computer may say ‘no’, but that doesn’t mean it’s right.

Levelling the playing field

If they are to play their part in creating a level playing field, senior industry leaders must ensure that algorithmovigilance is embedded in their corporate and governance processes, with staff trained to be alert to unintended bias.

This requires continuous monitoring, with algorithms adjusted as market and social conditions change. Only then can organisations truly say that their application procedures are as unbiased as they can be.
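
Purely as an illustration of what that continuous monitoring can look like in practice, group-level approval rates might be recomputed each period and compared against an agreed baseline, with an alert raised when any group drifts beyond a tolerance. The names, figures and five-percentage-point tolerance below are assumptions for the sketch, not regulatory thresholds.

```python
# Assumes per-group approval rates are recomputed each period, for instance
# with the kind of tally sketched earlier; all figures here are made up.
DRIFT_TOLERANCE = 0.05  # alert if a group's approval rate falls by more than 5 points

def check_for_drift(baseline_rates, current_rates):
    """Flag groups whose approval rate has fallen noticeably since the baseline."""
    flagged = {}
    for group, baseline in baseline_rates.items():
        current = current_rates.get(group, 0.0)
        if baseline - current > DRIFT_TOLERANCE:
            flagged[group] = {"baseline": baseline, "current": current}
    return flagged


if __name__ == "__main__":
    baseline = {"A": 0.62, "B": 0.58}
    this_month = {"A": 0.61, "B": 0.49}
    print(check_for_drift(baseline, this_month))
    # {'B': {'baseline': 0.58, 'current': 0.49}}
```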

A good starting point is to set up a team of subject-matter experts as a centre of excellence, helping to ensure a consistent approach throughout the organisation, alongside regular monitoring of customer data to keep it as complete and accurate as possible.

Organisations should also look to actively work with regulators to keep up with best practice.

Algorithmovigilance touches on our fundamental relationship with technology: a tool that should serve us, provided the processes around it are managed appropriately. This is an industry-wide challenge to which no one has all the answers, and it’s down to individual institutions and their technology partners to build the strategies, platforms and insights fit for tomorrow’s marketplace.

Source: globalbankingandfinance.com