If they did, here's one way to handle it
Under the Biden Administration, the prudential bank regulators and the Department of Justice initiated a program of aggressive enforcement against redlining by lenders of all types. Announced by Attorney General Merrick Garland in October 2021, the “Combatting Redlining Initiative” marked the beginning of a series of cases in which redlining allegations were heavily dependent on statistical analysis, specifically the concept of “statistical significance.” In essence, the statistical significance test used in redlining analysis judges whether a bank’s lending in majority-minority census tracts (“MMCTs”), or a subset of them, is “representative” of other lenders’ MMCT penetration rates in the same market, i.e., whether it is significantly low or high at the so-called 10% level of significance. But for the results to be valid, there can be no “confounding” factor (an unknown factor that can skew the test results). In several cases in which we were asked to provide consulting advice, we observed factors that clearly biased the statistical results but that examiners nonetheless ignored. This article describes the facts of one case in which the regulators appeared to ignore biased test results. The identity of the lender is not disclosed because the redlining charges were eventually dropped upon further analysis, as discussed below.
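To make the mechanics concrete, the following is a minimal sketch of one common way such a comparison can be framed: a two-proportion z-test of the examined bank’s MMCT share of applications against the peer aggregate, where a two-sided 10% level corresponds to flagging the low side at the 5% tail. The counts, the function name mmct_z_test, and the use of a pooled z-test are illustrative assumptions; examiners’ exact methodology may differ.

```python
# Sketch of a two-proportion z-test of the kind described above.
# All counts are hypothetical; the exact regulatory methodology may differ.
from math import sqrt
from scipy.stats import norm

def mmct_z_test(bank_mmct, bank_total, peer_mmct, peer_total):
    """Compare a bank's MMCT share of applications to the peer benchmark."""
    p_bank = bank_mmct / bank_total
    p_peer = peer_mmct / peer_total
    # Pooled proportion under the null hypothesis of equal penetration rates
    p_pool = (bank_mmct + peer_mmct) / (bank_total + peer_total)
    se = sqrt(p_pool * (1 - p_pool) * (1 / bank_total + 1 / peer_total))
    z = (p_bank - p_peer) / se
    # One-sided p-value for "significantly low" (the 5% lower tail of a
    # two-sided test at the 10% level)
    p_low = norm.cdf(z)
    return z, p_low

z, p_low = mmct_z_test(bank_mmct=40, bank_total=1_000,
                       peer_mmct=6_000, peer_total=100_000)
print(f"z = {z:.2f}, one-sided p = {p_low:.4f}, flagged low = {p_low < 0.05}")
```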
In this case there was a group of 45 “peer” lenders whose lending activity in the area’s MMCTs was the basis for the market benchmark. The standard practice for defining a peer group is to include lenders that processed 50% to 200% of the examined institution’s mortgage application volume inside the market (also known as the Reasonably Expected Market Area, or “REMA”). When compared to the peer group’s average MMCT penetration rate, the bank under examination was determined to have a statistically significant low rate of penetration in the MMCTs. However, closer examination of the peer group, applying the same statistical significance concept, revealed that 22 of the 45 lenders in the group themselves had statistically significant low MMCT penetration rates. The statistical significance model applied by examiners claims only a 5% failure rate (at the low end) due to chance, but in this case the failure rate was nearly 50%. Of the 23 peer lenders that were banks, credit unions, or savings and loans, 18 failed the statistical significance test at the low end, meaning roughly 78% of the depository lenders in this particular market could face referral for redlining. This indicates a bias in the peer group data that distorts the results and undermines the confidence one can have in any conclusion drawn from them.
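How improbable is that failure rate if the 5%-due-to-chance assumption actually holds? A simple binomial calculation makes the point. The sketch below uses the case figures quoted above (22 of 45 peers, and 18 of 23 depository peers, flagged on the low side); treating each peer as an independent 5% chance of a low-side flag is a simplifying assumption.

```python
# If the test truly flagged lenders at the low end only 5% of the time by
# chance, how surprising is it to see this many failures among the peers?
from scipy.stats import binom

def chance_of_failures(n_peers, n_failing, chance_rate=0.05):
    """P(at least n_failing peers flagged | each fails by chance at chance_rate)."""
    return binom.sf(n_failing - 1, n_peers, chance_rate)

print(chance_of_failures(45, 22))   # 22 of 45 peers flagged low
print(chance_of_failures(23, 18))   # 18 of 23 depository peers flagged low
```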
The test results were skewed because the peer group included many mortgage companies, and it turned out that those companies had abnormally high penetration rates in the market’s MMCTs. Once those companies were removed from the peer data, the client bank’s results were no longer statistically significant.
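A sketch of that adjustment follows, assuming a simple peer-level summary file with hypothetical column names (lender_type, mmct_apps, total_apps); the point is only to show the benchmark being recomputed over depository lenders alone.

```python
# Hypothetical recomputation of the peer benchmark after excluding
# independent mortgage companies (file and column names are illustrative).
import pandas as pd

peers = pd.read_csv("peer_hmda_summary.csv")   # one row per peer lender
depositories = peers[peers["lender_type"].isin(["bank", "credit_union", "thrift"])]

benchmark_all = peers["mmct_apps"].sum() / peers["total_apps"].sum()
benchmark_dep = depositories["mmct_apps"].sum() / depositories["total_apps"].sum()
print(f"Benchmark with mortgage companies:  {benchmark_all:.1%}")
print(f"Benchmark, depository lenders only: {benchmark_dep:.1%}")
```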
The bias introduced by including mortgage companies in the peer group does not appear to be unique to this situation. The Urban Institute published a study in April 2023, “An Assessment of Lending to LMI and Minority Neighborhoods and Borrowers,” which found that, nationally, independent mortgage companies extended 9.3% of their mortgage lending in minority neighborhoods, while banks reported only 5.7% of their mortgage lending in minority communities. The study covered HMDA-reported lending from 2015 through 2019.
Across the country, mortgage companies extended a much greater share of their mortgage lending in minority neighborhoods than banks did during the period studied. Mortgage companies operate under entirely different business models than banks, and including them in a peer group can introduce bias into the statistical analysis and undermine its validity.
Mortgage company data is not the only thing that can distort the results. In the case discussed here, there was one very large bank with an MMCT penetration rate triple the peer-group average. That large disparity, combined with the bank’s very large market share, pulled the average MMCT penetration rate upward and thereby skewed the comparisons. For the bank under examination this did not by itself precipitate a statistically significant result, but it could have, and in fact it did have that effect on other bank lenders in the market.
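The arithmetic of that distortion is easy to illustrate. The sketch below uses hypothetical figures (ten peers at a 6% MMCT rate with 2,000 applications each, plus one lender at 18% with 30,000 applications) to show how an application-weighted benchmark gets pulled far above the rate of the typical lender in the group.

```python
# Illustration (hypothetical figures): one large lender with triple the
# typical MMCT penetration rate pulls the pooled benchmark upward.
import numpy as np

# 10 ordinary peers: ~6% MMCT penetration, ~2,000 applications each
ordinary_rates = np.full(10, 0.06)
ordinary_vols  = np.full(10, 2_000)

# One very large peer: 18% penetration on 30,000 applications
rates = np.append(ordinary_rates, 0.18)
vols  = np.append(ordinary_vols, 30_000)

pooled     = np.average(rates, weights=vols)   # application-weighted benchmark
unweighted = rates.mean()
print(f"Pooled benchmark:     {pooled:.1%}")      # ~13.2%
print(f"Unweighted mean rate: {unweighted:.1%}")  # ~7.1%
```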
The lesson learned is that a bank’s performance should not be judged using mortgage company data, nor judged against a bank whose lending is clearly unrepresentative of its peers. We also recommend testing every lender in the dataset against that dataset. Applying the analysis recommended here reduced the number of banks failing the test from eighteen to four at the required level of significance, which would spare fourteen banks the cost of failing the test and possible DOJ referral for redlining.
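One way to run that check is a leave-one-out pass over the peer data: test each lender’s MMCT rate against the combined rate of all other lenders. The sketch below reuses the mmct_z_test helper sketched earlier; the tuple layout for peers is an illustrative assumption.

```python
# Sketch of testing every lender against the rest of the peer group
# (leave-one-out), reusing the mmct_z_test helper sketched earlier.
# peers: list of (name, mmct_apps, total_apps) tuples -- illustrative layout.
def flag_low_peers(peers, alpha=0.05):
    total_mmct = sum(m for _, m, _ in peers)
    total_apps = sum(t for _, _, t in peers)
    flagged = []
    for name, m, t in peers:
        z, p_low = mmct_z_test(m, t, total_mmct - m, total_apps - t)
        if p_low < alpha:
            flagged.append(name)
    return flagged
```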
We recommend that, when relying on statistical significance, the peer group data itself be tested to confirm the underlying assumption that only 5% of lenders would fail (on the low end) due to chance. In other words, if materially more than 5% of the peer group institutions show statistically significant low results, the validity of the peer grouping should be questioned. There may be outliers in the peer group that skew the group’s mean. Where such a bias is detected, adjustments can be made to the peer group so that the underlying assumption of failure due to chance holds.
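Putting the recommendation together, the following sketch checks the low-side flag rate against the 5% assumption and, if it is exceeded, trims the highest-penetration peer and retests. It reuses flag_low_peers from the sketch above; the trimming rule is an illustrative assumption, not a regulatory standard.

```python
# Sketch of the adjustment step: if materially more than 5% of peers are
# flagged low, trim the highest-penetration peer (which pulls the benchmark
# upward) and retest. Reuses flag_low_peers from the sketch above.
def validate_peer_group(peers, chance_rate=0.05):
    peers = list(peers)
    while len(peers) > 1:
        flagged = flag_low_peers(peers)
        if len(flagged) / len(peers) <= chance_rate:
            break                 # low-side flag rate is consistent with chance
        peers.remove(max(peers, key=lambda p: p[1] / p[2]))  # highest MMCT rate
    return peers
```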