Almost no matter what we do, we leave behind data about who we are. This is also true in banking. In many cases, this data is used in ways that benefit us. For example, it is convenient for a 50-year-old not to receive advertising for housing savings schemes aimed at young people.
“What is dangerous is when banks or other businesses use data to categorize you and limit your choices without your knowledge. Artificial intelligence can define some people as ‘insiders’ and exclude others. That’s when things can go wrong,” says Elisabeth Austad Asser.
She recently defended her doctoral dissertation at the University of Agder (UiA). There, she examined how the use of artificial intelligence can hinder or promote the values of Sparebanken Sør, where she is responsible for sustainability.
In her doctoral dissertation, she has compiled a list of 13 recommendations on how banks should use artificial intelligence. We will return to these later, but first, why is this a concern?
Invisible sorting
Consider, for example, if a male job applicant does not see any adverts for caregiving roles because someone has decided they would not be of interest to him. Or if you search for available school places and some elite schools are omitted from the search results because of your postcode.
“We think we are free when searching the internet, but behind Google, for example, there are algorithms creating a profile of us that determines what we see and what offers we receive,” says Asser.
When such technologies are deployed in banking, the area in which you live may determine whether you qualify for a home loan or credit card.
Previously, such decisions were made by bank employees, who understood that an individual could have a sound payment history even if their neighbors were poor payers.
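To make the concern concrete, here is a minimal, hypothetical sketch of how a credit score that leans on postcode-level statistics can penalize a reliable payer simply because of where they live. It is not taken from the dissertation or from any bank’s actual model; the postcodes, weights, and threshold are invented for illustration.

```python
# Hypothetical illustration: a toy credit score that mixes an applicant's own
# payment history with the average default rate of their postcode.
# All numbers are invented; real bank models are far more complex.

POSTCODE_DEFAULT_RATE = {
    "4601": 0.02,  # low-default neighborhood
    "4635": 0.18,  # high-default neighborhood
}

def toy_credit_score(on_time_payment_share: float, postcode: str) -> float:
    """Return a score in [0, 1]; higher means more creditworthy."""
    neighborhood_default_rate = POSTCODE_DEFAULT_RATE.get(postcode, 0.10)
    # 60% weight on the applicant's own behavior, 40% on where they live.
    return 0.6 * on_time_payment_share + 0.4 * (1.0 - neighborhood_default_rate)

APPROVAL_THRESHOLD = 0.95

# Two applicants with identical, spotless payment histories:
for postcode in ("4601", "4635"):
    score = toy_credit_score(on_time_payment_share=1.0, postcode=postcode)
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    print(f"postcode {postcode}: score {score:.2f} -> {decision}")

# Prints:
#   postcode 4601: score 0.99 -> approved
#   postcode 4635: score 0.93 -> declined
```

Both applicants have identical payment histories, yet only the one from the low-default postcode clears the threshold. That is exactly the kind of invisible sorting Asser warns about.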
May reinforce inequalities
“It is dangerous when everything is turned into data. Once you have been placed in a category, new algorithms can build on that categorization. If banks mishandle this, they may contribute to reinforcing social inequalities and, in the worst case, exacerbating them,” says Asser.
She believes this should be particularly thought-provoking for savings banks, which were initially established to promote economic stability.
“A savings bank carries a historical responsibility towards the local community. It should not just succumb to the pursuit of efficiency created by technology,” says Asser.
However, savings banks also have an obligation to operate efficiently and generate profits. And if savings banks do not want to adopt new technologies, they may be outcompeted by other, more efficient banks that do.
“Artificial intelligence has sparked a commercial game in the banking sector. This technology makes it necessary to take societal responsibility. But few are talking about that,” she says.
13 recommendations
In her doctoral dissertation, Asser has created a list of 13 recommendations on how banks can approach artificial intelligence (AI) in a way that promotes the bank’s values.
- Be mindful: Develop a plan for the use of artificial intelligence that aligns with the bank’s values and goals. Be clear about the purpose of different technologies.
- Understand laws and regulations: Ensure that bank employees have knowledge of current laws and regulations regarding AI.
- Focus on data quality: Ensure the data is fair, balanced, and of high quality. Avoid biases in the data by using insights from different departments in the bank. Test AI systems for biases and work to address them (see the sketch after this list for one simple way such a test can look).
- Handle biases: Use a diverse team to develop algorithms. Collaborate with groups often excluded when developing AI models. Be transparent about the data used.
- Select the right algorithms: Utilize algorithms that are as fair and impartial as possible. Use expertise to select and evaluate algorithms, especially for sensitive tasks such as credit assessment.
- Educate the organization about biases: Train the entire organization on biases in datasets and strategies for evaluating them.
- Harness internal knowledge: Include employees in the development of AI models and utilize their banking expertise to reduce biases and improve the implementation of AI.
- Strengthen training: Invest in thorough training for employees within the bank when adopting AI technology.
- Involve management: Ensure that both technical and non-technical managers work with AI in an inclusive, values-based way.
- Monitor the systems: Be aware of unintended effects after implementing AI. Be open to external audits of AI systems.
- Understand the responsibility: Be clear on the bank’s responsibility in AI processes and consider using the term “machine in the loop” when relevant.
- Establish an AI ethical board: The board can address challenges and ensure that the bank stays updated in the field of AI.
- Be transparent about AI decisions: Be clear and open about which decisions are made using AI.
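To illustrate what the recommendation to test and monitor AI systems for biases can look like in practice, here is a minimal, hypothetical sketch. It compares loan approval rates across two groups and flags the model when the ratio falls below the commonly used “four-fifths” rule of thumb. The group labels, the sample data, and the 0.8 threshold are assumptions made for illustration, not part of Asser’s recommendations or any bank’s procedure.

```python
# Hypothetical sketch of a simple bias check: compare approval rates across
# groups and flag the model if the lowest rate falls below four-fifths of the
# highest. All data and labels below are invented for illustration.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs -> approval rate per group."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group approval rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Invented monitoring sample: (postcode area, loan approved?)
sample = (
    [("east", True)] * 80 + [("east", False)] * 20
    + [("west", True)] * 55 + [("west", False)] * 45
)

rates = approval_rates(sample)
ratio = disparate_impact_ratio(rates)
print(rates)                                   # {'east': 0.8, 'west': 0.55}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.69
if ratio < 0.8:
    print("Flag for review: approval rates differ more than the 4/5 rule allows.")
```

In practice, a bank would run such checks on real decision logs, across several sensitive attributes, and combine them with the external audits the recommendations call for.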
Rapid changes
Asser highlights that technology has always changed how we humans relate to the world. In that sense, there is nothing new about artificial intelligence.
What is new is the speed at which changes are happening. Printing took 200 years to reach Norway. The telephone took 75 years to reach 50 million users. ChatGPT reached 100 million users within two months of its launch.
“The technology that impacts and shapes us is being adopted very rapidly now. We need to understand this technology to ensure that the way we use it is sustainable and contributes to creating a world in which we all can and want to live,” she says.
Provided by University of Agder
Citation: Artificial intelligence in banks can exacerbate social inequalities (2024, March 6), retrieved 7 March 2024 from https://techxplore.com/news/2024-03-artificial-intelligence-banks-exacerbate-social.html