Over the weekend, a viral Twitter thread exposed several issues in the credit lending decisioning process for Apple’s new payment card, underwritten by Goldman Sachs.

For some context, Apple and Goldman Sachs were accused of gender discrimination in credit card limits, allegedly caused by biased algorithms powering Apple Card's credit lending decisioning process. There were widespread reports on social media corroborating this discrimination, including from Apple co-founder Steve Wozniak and his spouse. The primary issue here is the black-box algorithm that generated Apple Card's credit lending decisions. As laid out in the Twitter thread, Apple Card's customer service reps were rendered powerless by the algorithm's decisions: not only did they have no insight into why certain decisions were made, they were also unable to override them.

The Apple Card & Goldman Sachs

We are entering into an era where our lives are being dictated by algorithms. While we generally feel that machine intelligence should be embraced because of the objective nature of algorithms, the reality is that this machine intelligence can skew our decision making if not properly checked.
 
Last year, Apple and Goldman Sachs announced a rather unique partnership to transform the digital payments landscape. Apple, a pioneer in consumer tech and services, coupled with Goldman Sachs, a behemoth in investment banking, had mutual benefit in driving new consumer offerings in the banking sector. Apple brought an understanding of mobile computing and data, while Goldman Sachs brought expertise in underwriting and risk modeling, along with experience from launching Marcus, its digital bank aimed at giving millennials a new way of banking.
 
In this new card experience focused on “simplicity, transparency and privacy,” users get immediate response to their application, along with their credit limit.

Acquiring Customers Using Intelligent Automation

Balancing growth against credit losses is a difficult proposition for banks. Lenders want to acquire new customers but must manage the risk of operational loss. Banks are searching for an intelligent system that learns from input data, uses best-in-class AI/ML (artificial intelligence and machine learning) to select the right model (or series of models) and data sources, and drives a credit review that manages risk and reward for the bank over time, improving the lifetime value of each creditor.
 
Apple and Goldman Sachs are driving innovation in credit application review and approval: using the right data from the right sources to lower approval costs, drive better upfront approvals, limit the risk of default, and deliver a better lifetime value for each new creditor.

Lending Decisioning

The lending decisioning process most likely uses several well-known approaches to underwriting, with the key limitation being the lack of a significant amount of real-time data to drive decisioning. There are significant efforts across the credit card industry to transform present credit origination systems to incorporate a variety of external data sources. This has created a substantial data store that issuers will grow, leverage to design advanced analytics for insights, and use to drive an annuity of intelligence that fuels growth while reducing losses in the growth portfolio. As a result, partnerships like Apple and Goldman Sachs make sense, as the analytics and stored data are expected to grow significantly.

As they designed a new approach to credit approval, Goldman Sachs knew that present systems rely on a business-rules process for credit checks with the bureaus that is very costly. Apple and Goldman Sachs most likely collaborated on AI/ML models trained on bureau and non-traditional data, with the conventional informed review "changed" toward fully autonomous decisioning. Data comes from the three bureaus (Equifax, TransUnion and Experian), each with different strengths and weaknesses and with much overlap between them. Non-traditional data comes from other commercial sources such as LexisNexis and Dun & Bradstreet, as well as publicly available data sets like court records and voter registration lists that are aggregated by third-party providers.
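Because the three bureaus overlap on many attributes but differ in coverage, a feature pipeline has to reconcile them into one record per applicant. The sketch below shows one simple way that might look; the field names and the median/max reconciliation rules are illustrative assumptions, not the actual Apple or Goldman Sachs pipeline.

```python
# Hypothetical sketch: reconcile overlapping attributes from the three bureaus.
from statistics import median

def consolidate(equifax, transunion, experian):
    """Combine per-bureau records into one feature vector.

    Scores overlap across bureaus, so take the median; counts such as
    delinquencies are taken at their worst (max) to be conservative.
    """
    bureaus = (equifax, transunion, experian)
    return {
        "score_median": median(b["score"] for b in bureaus),
        "delinquencies_max": max(b["delinquencies"] for b in bureaus),
    }

features = consolidate(
    {"score": 700, "delinquencies": 0},
    {"score": 690, "delinquencies": 1},
    {"score": 710, "delinquencies": 0},
)
print(features)  # {'score_median': 700, 'delinquencies_max': 1}
```

A real system would also track which bureau supplied each value, since the FCRA requires issuers to tell declined applicants which data source was used.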

Regulation, Interpretability & Bias

Goldman Sachs is now being investigated by New York State regulators for gender bias. The FDIC's Risk Management Examination Manual for Credit Card Activities describes the following:
 
Automated underwriting and loan approval processes are increasingly popular and vary greatly in complexity. In an automated system, credit is generally granted based on the cut-off score and the desired loss rate. These systems are often based on statistical models and apply automated decision-making where possible. Banks sometimes establish auto-decline or auto-approve ranges where the system either automatically approves or declines the applicant based on established criteria, such as scores. The automated systems may also incorporate criteria other than scores (such as rules or overlays) into the credit decision. For example, the presence of certain credit bureau attributes (such as bankruptcy) outside of the credit score itself could be a contributing factor in the decision-making process. Examiners should gauge management’s practices for validating the scoring system and other set parameters within automated systems as well as for verifying the accurateness of data entry for those systems.
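The auto-approve and auto-decline ranges the FDIC describes can be sketched as a thin rule layer around the score. In the toy example below, the cutoff scores and the bankruptcy overlay are hypothetical illustrations, not Goldman Sachs' actual criteria.

```python
def decide(score, has_bankruptcy,
           auto_decline_below=580, auto_approve_above=720):
    """Toy decisioning rule: score cutoffs plus a bureau-attribute overlay.

    The thresholds and the bankruptcy overlay are assumptions for
    illustration; real systems tune cutoffs to a desired loss rate and
    validate them regularly, as the FDIC manual directs examiners to check.
    """
    if has_bankruptcy:            # overlay: attribute outside the score itself
        return "decline"
    if score < auto_decline_below:
        return "decline"
    if score >= auto_approve_above:
        return "approve"
    return "manual_review"        # grey zone goes to an analyst

print(decide(750, False))  # approve
print(decide(600, False))  # manual_review
print(decide(750, True))   # decline
```

Note that even this trivial rule layer is fully explainable: every outcome can be traced to a named threshold or overlay, which is exactly what the Apple Card's support reps reportedly could not do.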
 
Cost is not the only driver: building trust through a disciplined framework for model risk management that addresses issues such as bias and explainability is critical. Eliminating bias and ensuring explainability are essential to trusting machine intelligence while remaining compliant with regulation such as the FCRA (Fair Credit Reporting Act). Goldman Sachs knows it is critical to get the approval process right; once a bad actor is approved, it is incredibly difficult to remedy that active relationship. At scale, Goldman Sachs needs to be able to develop, train, validate and deploy tens or hundreds of these AI/ML models into production. With data growing exponentially, managing both the model lifecycle and model risk is key.

The primary source of bias in AI/ML is not the algorithms deployed, but rather the data used as input to build the predictive models. This is a huge problem that the banking industry has to address: the different sources of bias need to be identified, along with possible remedies, when deploying machine learning. Trust in an AI/ML model is built on two different aspects of the model:
  • Its ability to generalize to an unseen dataset (predictive power);
  • Our ability to understand why it generalizes (interpretability).
We need to appreciate the importance of transparency when using AI/ML for underwriting predictions that impact credit lending. Quantifying predictive power is well understood, e.g., in terms of metrics derived from the confusion matrix. Interpretability is less well understood and is in general difficult. When we speak of understanding a model, we need to clarify whether we want to understand the model's overall behavior (global interpretability) or why a specific prediction was made (local interpretability).
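Measuring predictive power from the confusion matrix is mechanical. For a binary approve/decline model, a minimal sketch (the labels and predictions below are made-up illustration data):

```python
def confusion_metrics(y_true, y_pred):
    """Precision and recall from a binary confusion matrix (1 = approve)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # approvals that were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # good applicants found
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]   # illustrative ground truth
y_pred = [1, 1, 1, 0, 0, 1]   # illustrative model output
print(confusion_metrics(y_true, y_pred))  # (0.75, 0.75)
```

Nothing comparably mechanical exists for interpretability, which is why it dominates the rest of this discussion.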
 
Here, we focus on local interpretability for a new creditor. We have identified two distinct interpretations of interpretability in the scientific literature that can be posed as answers to two different questions for a particular prediction:
  • Perturbation Interpretation: which bureau or non-traditional features change the prediction the most when changed the least?
  • Holistic Interpretation: which bureau or non-traditional features caused the prediction?
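The perturbation interpretation above can be sketched directly: nudge each feature by a small relative amount and rank features by how much the nudge moves the model's output. The linear scoring model, its weights and the feature names below are hypothetical stand-ins for whatever model actually sits behind the card.

```python
def score(features):
    """Toy credit model: weighted sum of (assumed) bureau features."""
    weights = {"bureau_score": 0.5, "utilization": -0.3, "inquiries": -0.2}
    return sum(weights[k] * v for k, v in features.items())

def perturbation_sensitivity(features, eps=0.01):
    """Bump each feature by a small relative amount; rank by output change."""
    base = score(features)
    impact = {}
    for name, value in features.items():
        bumped = dict(features, **{name: value * (1 + eps)})
        impact[name] = abs(score(bumped) - base)
    return sorted(impact, key=impact.get, reverse=True)

applicant = {"bureau_score": 700.0, "utilization": 80.0, "inquiries": 3.0}
print(perturbation_sensitivity(applicant))
# ['bureau_score', 'utilization', 'inquiries']
```

Methods like LIME and SHAP are more principled versions of this idea, but the core question is the same: which inputs, changed least, change the answer most?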
Ultimately, Apple and Goldman Sachs did not emphasize the model risk management principle of interpretability. Clearly, their support teams defer to the "algorithm" when determining a credit limit, but hopefully they will learn from their mistakes. Most likely, the gender bias was introduced through the data they sourced, either internally or externally, but the reputation of their credit card will suffer either way.
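Bias introduced through data is also the easiest kind to monitor for. A basic check is to compare approval rates across groups, as in this sketch; the sample decisions are fabricated for illustration, and the four-fifths threshold is a regulatory rule of thumb, not a legal bright line.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (1 = approve, 0 = decline)."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of approval rates between two groups.

    The four-fifths rule of thumb flags ratios below 0.8 as potentially
    adverse impact warranting review.
    """
    return approval_rate(decisions_a) / approval_rate(decisions_b)

group_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 20% approved (illustrative)
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 60% approved (illustrative)
print(disparate_impact(group_a, group_b))  # ~0.33, well below 0.8
```

Had a monitor like this run on Apple Card decisions segmented by gender, the pattern described in the Twitter thread would have surfaced long before it went viral.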
Atif Kureishy

Atif is the Global VP, Emerging Practices, Artificial Intelligence & Deep Learning at Teradata.

Based in San Diego, Atif specializes in enabling clients across all major industry verticals, through strategic partnerships, to deliver complex analytical solutions built on machine and deep learning. His teams are trusted advisors to the world’s most innovative companies to develop next-generation capabilities for strategic data-driven outcomes in the areas of artificial intelligence, deep learning & data science.

Atif has more than 18 years in strategic and technology consulting, working with senior executive clients. During this time, he has both written extensively and advised organizations on numerous topics, ranging from improving the digital customer experience to multi-national data analytics programs for smarter cities, cyber network defense for critical infrastructure protection, financial crime analytics for tracking illicit funds flow, and the use of smart data to enable analytic-driven value generation for energy & natural resource operational efficiencies.

