Implementing And Operationalizing High Level Ethical Principles In AI And Data Science Contexts
Joris Krijger, AI & Ethics Specialist, de Volksbank on Building Ethical AI for BFSI
Artificial intelligence has the potential to revolutionize the financial services industry. In fact, it is already being used by financial institutions to enhance fraud protection, enable customer service chatbots, provide personalized spending guidance and much more.
However, this rapid adoption of AI in BFSI has presented a number of thorny ethical issues. To start, who is responsible for the outcome of an artificial agent's decision-making process? How does an AI algorithm make objective decisions when the historical data it runs on reflects pervasive and systemic biases? What about consumer privacy and consent? Should AI technology even be used to influence high-stakes decisions such as credit approvals, pricing and interest rates?
To unpack some of the industry’s most pressing ethical quandaries, we invited Joris Krijger, AI & Ethics Specialist at de Volksbank, to present at the upcoming AI in BFSI virtual event taking place February 1-2, 2022.
Register now to attend his session on “Implementing And Operationalizing High Level Ethical Principles In AI And Data Science Contexts.” To get a glimpse into what you can expect, below we have a short Q&A on the topic.
Seth Adler, Editor-In-Chief, ADA: I understand that monitoring the societal impact of data science is one of your team’s goals. This sounds like a tremendously large undertaking, so what exactly does this mean?
Joris Krijger: You're absolutely right. It's a very large ambition. And one thing to point out is that, as we went about implementing these ethical principles within the bank, we found that this shouldn't just be a data science project. It shouldn't be just about the data scientists or innovation managers.
Decisions pertaining to AI ethics should happen on an organizational level, perhaps even on a board level for certain applications.
Secondly, what are you going to do with those insights once you have some form of guidance and governance in place?
You can’t, of course, monitor every stakeholder or predict exactly how things will turn out over the next 10 years, for example. But what you can do is try to find certain points, given the ethical principles, say in fairness or in explainability, where you can raise awareness and have a discussion within the organization about what we actually think is fair.
Take credit approvals, for example. Who should decide what is actually most fair for certain subgroups? Having this discussion, and being able to justify the value of the decisions you have made as an organization for these applications, is already an important first step in taking accountability and having a responsible AI development process in place.
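Krijger doesn’t prescribe a particular fairness metric here, but a minimal sketch can show how such an organizational discussion might be grounded in numbers. The snippet below computes approval rates per subgroup and the gap between them; the group labels, synthetic decisions and the choice of metric (an approval-rate difference) are illustrative assumptions, not de Volksbank’s actual data or methodology.

```python
# Minimal sketch: surfacing how a credit-approval process treats subgroups so
# the fairness question can be discussed at the organizational level.
# Group labels, data, and the chosen metric are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic decisions: 1 = approved, 0 = denied, plus a subgroup label per applicant.
decisions = rng.integers(0, 2, size=1000)
groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.7, 0.3])

# Approval rate per subgroup, and the demographic-parity gap between them.
approval_rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
gap = max(approval_rates.values()) - min(approval_rates.values())

for group, rate in approval_rates.items():
    print(f"{group}: approval rate {rate:.2%}")
print(f"Approval-rate gap: {gap:.2%}")
# A gap alone doesn't settle what is 'fair' -- that judgment still has to be
# made and justified by the organization, which is Krijger's point above.
```

The number itself is only the starting point: deciding which gap is acceptable, and for which subgroups, is exactly the kind of value decision Krijger argues should be taken, and justified, at the organizational level.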
Seth: AI explainability is a huge part of the puzzle here. Can you tell us more about how your team is approaching that aspect?
Joris: AI systems can become black boxes. We’ve all seen deep learning systems where it's nearly impossible to actually know how a certain outcome came about.
No matter what field you’re in, having meaningful control over what the algorithm is doing and being able to understand as well as explain how it arrived at a specific decision is a fundamental component of responsible data science development processes.
We approach explainability as key stakeholder information. What does the regulator need from us? What do we have to explain to the regulator for this model? What do we have to explain to the customer? Because the customer is not interested in the data set on which we trained a certain algorithm; a customer perhaps just wants to know: can I trust this system, and why was my application denied or approved?
You have this contextual interpretation of explainability, and it is something we encounter very often at the bank in implementing and operationalizing these ethical principles. The ethical evaluation is often really context dependent, so what explainability actually comes to mean depends on the application, the stakeholder and the key issues at stake.
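The interview doesn’t describe the bank’s tooling, but a minimal sketch can illustrate the stakeholder-specific view of explainability Krijger describes: the same interpretable model can back a customer-facing explanation of a denial. The feature names, synthetic data and “reason code” logic below are illustrative assumptions, not de Volksbank’s actual model or process.

```python
# Minimal sketch: turning an interpretable credit model's weights into
# customer-facing "reason codes". Feature names, data, and thresholds are
# illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "missed_payments", "account_age_years"]

# Synthetic, standardized applicant features and approval labels, for illustration.
X = rng.normal(size=(500, 4))
y = ((X[:, 0] - X[:, 1] - X[:, 2] + 0.5 * X[:, 3]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top_k=2):
    """Rank features by how strongly they pushed this applicant toward denial."""
    contributions = model.coef_[0] * applicant   # per-feature contribution to the score
    order = np.argsort(contributions)            # most negative (denial-driving) first
    return [(feature_names[i], float(contributions[i])) for i in order[:top_k]]

applicant = X[0]
decision = "approved" if model.predict(applicant.reshape(1, -1))[0] == 1 else "denied"
print(f"Decision: {decision}")
for name, impact in reason_codes(applicant):
    print(f"  Factor working against approval: {name} (contribution {impact:.2f})")
```

A regulator, by contrast, would want something quite different from the same model, such as documentation of the training data, validation results and monitoring, which is precisely the contextual, stakeholder-dependent reading of explainability Krijger describes.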
Want to learn more about “Implementing And Operationalizing High Level Ethical Principles In AI And Data Science Contexts”? Register to attend AI in BFSI, a FREE virtual event taking place February 1-2, 2022.