
Ensuring a positive future for AI in Banking

Sankaet Pathak
January 4, 2018

Over the last few months, I have had a few conversations about how AI could undermine financial inclusion if we are not careful.

This conversation started a few months ago when I was at a BankInnovation dinner. One of the ideas proposed there was to use something like Synapse's video authentication technology for credit underwriting. The idea has come up more and more since then.

The issues with using video in credit underwriting are fairly straightforward: you are essentially using facial features that act as proxies for age, gender, ethnicity, and so on to assess someone's creditworthiness. Thankfully this is illegal in America, but surprisingly not in all countries.

My general concerns with AI are too broad to articulate fully here. In a nutshell, I am worried that machines will inherit human biases as we automate more tasks with them¹. This can happen in two ways:

  1. We might adopt tools that seem like common sense without thinking through the long-term implications of using them, like using facial recognition for loan underwriting.
  2. Our training datasets can carry biases of their own. In most cases these datasets are labeled manually, making them susceptible to human biases. For instance, a transaction risk model that relies heavily on zip codes could disproportionately target low-and-moderate-income (LMI) neighborhoods (see the sketch after this list).
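
To make the zip-code concern concrete, here is a minimal sketch (with hypothetical data and column names) of how manually labeled data can bake a neighborhood bias into a model before any training happens: if human reviewers historically flagged transactions from LMI zip codes more often, any model trained on those labels will reproduce the pattern.

```python
import pandas as pd

# Hypothetical labeled transactions: 'zip_is_lmi' marks whether the
# transaction originates from a low-and-moderate-income zip code, and
# 'flagged' is the label a human reviewer assigned.
transactions = pd.DataFrame({
    "zip_is_lmi": [True, True, True, True, False, False, False, False],
    "flagged":    [1,    1,    1,    0,    1,     0,     0,     0],
})

# Compare flag rates across the two groups. A large gap here means a
# model trained on 'flagged' will inherit the reviewers' bias, with
# zip code acting as a proxy for income (and often ethnicity).
flag_rates = transactions.groupby("zip_is_lmi")["flagged"].mean()
print(flag_rates)
# zip_is_lmi
# False    0.25
# True     0.75
```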

So with automation, we would be making the same mistakes that humans make today, but at a much larger scale².

Thankfully, these issues are fairly easy to address; it is just a function of will. So here is what we are going to do:

We will build an AI ethics team³. This team will not be responsible for developing core AI products. Instead, they will rigorously test these products with the following goals in mind:

  1. Ensure that the models perform as we expect them to, very much like quality assurance, and run different types of stress tests to verify that quality standards are met.
  2. Test these models for hidden biases: randomly sample our datasets and look for biases around age, gender, ethnicity, location, etc. (see the sketch after this list).
  3. Build standards and approval processes that require us to defend the long-term positive impact of AI products and services before they can be pushed to production.
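
As an illustration of goal 2, here is a minimal sketch of such a bias test, assuming hypothetical field names and using the "four-fifths rule" (a common disparate-impact heuristic from US employment law) as the red-flag threshold; this post does not prescribe a specific metric.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the
    highest group's; values below ~0.8 fail the four-fifths rule."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical random sample of scored loan applications with the
# model's decisions and a protected attribute attached.
sample = pd.DataFrame({
    "age_bracket": ["18-25", "18-25", "18-25", "26-40", "26-40", "41+", "41+", "41+"],
    "approved":    [0,       1,       0,       1,       1,       1,     1,     0],
})

ratio = disparate_impact(sample, "age_bracket", "approved")
status = "FLAG" if ratio < 0.8 else "OK"
print(f"age_bracket: disparate impact ratio = {ratio:.2f} [{status}]")
```

In practice, a check like this would run over each protected attribute (age, gender, ethnicity, location) on a fresh random sample before every release.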

And just as with our work on Cognitive Behavioral Science, we will open-source all of the code and best practices here as well.

--

We hope that over time regulators will start recognizing some of these concerns and that everyone building AI will be required to have a team like this. We will do our very best to push for these ideas on a larger scale as we open-source our work and gain credibility around it.

Since our mission is to build financial equality and democratize best-in-class financial products for all Americans, making sure that AI works for everyone is very important to us. If you are as passionate about this as we are, here is a link to the job opening (https://angel.co/synapsefi/jobs/315101-ai-ethics-engineer).

--

[1] My other concern is that automation will bring significant job loss. As a society, we are not very good at retraining people for new jobs, nor do we have a robust financial safety net to ease the suffering.

[2] With automation, cost per customer goes down and the addressable market increases. So these biases will affect a larger audience.

[3] Yes, an AI Ethics Engineer… you can be a badass Jedi cat if you apply for this position (https://angel.co/synapsefi/jobs/315101-ai-ethics-engineer).

Sankaet Pathak

Founder & CEO @ Synapse
