Latest Posts


By: Wendy Greenawalt

In my last blog post I discussed the value of leveraging optimization within your collections strategy. Next, I would like to discuss in detail the use of optimized decisions in the account management of an existing portfolio. Account management decisions range from determining which consumers to target with cross-sell or up-sell campaigns to line management decisions, where an organization is considering line increases or decreases. Just as it is in the collections work stream, optimization is key here.

Let's first look at lines of credit and decisions related to credit line management. Uncollectible debt, delinquencies and charge-offs continue to rise across all line-of-credit products. In response, credit card and home equity lenders have begun aggressively reducing outstanding lines of credit. One analyst predicts that the credit card industry will reduce credit limits by $2 trillion by 2010. If that materializes, it would represent a 45 percent reduction in the credit currently available to consumers. This estimate illustrates the immediate reaction many lenders have taken to minimize loss exposure. However, lenders should also consider the long-term impacts on customer retention, brand loyalty and portfolio profitability before making any account management decision.

Optimization is a fundamental tool that can help lenders easily identify accounts that are high risk versus those that are profit drivers. In addition, optimization provides the precise action that should be taken at the individual consumer level. For example, optimization (and optimized decisioning) can provide recommendations for:

• when to contact a consumer;
• how to contact a consumer; and
• to what level a credit line could be reduced or increased...

…while considering organizational/business objectives such as:

• profits/revenue/bad debt;
• retention of desirable consumers; and
• product limitations (volume/regional).

In my next few blogs I will discuss each of these variables in detail and the complexities that optimization can consider.
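For readers who want to see the mechanics, below is a minimal sketch of how a line-increase decision can be framed as a constrained optimization problem. The account-level inputs (expected incremental profit, expected incremental bad debt, added exposure) and the constraint values are hypothetical assumptions, and the linear-programming relaxation here stands in for the far richer optimization engines discussed in this post.

```python
# A minimal sketch of line-management optimization as a linear program,
# using hypothetical per-account estimates. Illustrative only; not the
# optimization engine described in the post.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_accounts = 1000
profit_if_increased = rng.uniform(20, 200, n_accounts)   # expected incremental profit ($)
loss_if_increased = rng.uniform(0, 150, n_accounts)      # expected incremental bad debt ($)
exposure_added = rng.uniform(500, 5000, n_accounts)      # dollars of new credit line

total_exposure_budget = 1_000_000    # business constraint: total new exposure
max_bad_debt = 40_000                # business constraint: incremental bad debt cap

# Decision variable x_i in [0, 1]: 1 = increase account i's line, 0 = leave as is.
# Maximize sum(profit * x)  ->  minimize -profit.
c = -profit_if_increased
A_ub = np.vstack([exposure_added, loss_if_increased])
b_ub = np.array([total_exposure_budget, max_bad_debt])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_accounts, method="highs")

chosen = res.x > 0.5
print(f"Accounts selected for a line increase: {chosen.sum()}")
print(f"Expected incremental profit: ${profit_if_increased[chosen].sum():,.0f}")
```

A production implementation would treat the decisions as integer (or multi-level) choices and would layer in the contact-timing and channel decisions listed above; this sketch only shows how objectives and constraints fit together.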

Published: August 23, 2009 by Guest Contributor

By: Kari Michel

This blog completes my discussion on monitoring new account decisions with a final focus: scorecard monitoring and performance. It is imperative to validate acquisition scorecards regularly to measure how well a model is able to distinguish good accounts from bad accounts. With a sufficient number of aged accounts, performance charts can be used to:

• Validate the predictive power of a credit scoring model;
• Determine if the model effectively ranks risk; and
• Identify the delinquency rate of recently booked accounts at various intervals above and below the primary cutoff score.

To summarize, successful lenders maximize their scoring investment by incorporating a number of best practices into their account acquisition processes:

1. They keep a close watch on their scores, policies, and strategies to improve portfolio strength.
2. They create monthly reports to look at population stability, decision management, scoring models and scorecard performance.
3. They update their strategies to meet their organization's profitability goals through sound acquisition strategies, scorecard monitoring and scorecard management.
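As an illustration of the performance chart and ranking check described above, here is a small sketch on synthetic booked-account data. The score bands, cutoff and outcomes are assumptions for demonstration only.

```python
# A minimal sketch of a scorecard performance chart: bad rates by score band
# around a cutoff, plus a simple KS statistic. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
scores = rng.integers(400, 850, 5000)
# Synthetic outcomes: higher scores are less likely to go bad.
bad = rng.random(5000) < np.clip((750 - scores) / 600, 0.01, 0.6)
df = pd.DataFrame({"score": scores, "bad": bad.astype(int)})

cutoff = 620
bands = pd.cut(df["score"], bins=[400, 560, 590, cutoff, 650, 680, 720, 850],
               include_lowest=True)
chart = df.groupby(bands, observed=True)["bad"].agg(accounts="count", bad_rate="mean")
print(chart)  # bad rate should decline as score bands rise if the model ranks risk

# KS: maximum separation between cumulative good and bad distributions by score.
df = df.sort_values("score")
cum_bad = df["bad"].cumsum() / df["bad"].sum()
cum_good = (1 - df["bad"]).cumsum() / (1 - df["bad"]).sum()
print(f"KS = {(cum_bad - cum_good).abs().max():.3f}")
```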

Published: August 18, 2009 by Guest Contributor

By: Wendy Greenawalt

The combined impact of rising unemployment, increasing consumer debt burdens and decreasing home values has caused lenders to shift resources away from prospecting and acquisitions to collection and recovery activities. As delinquencies and charge-off rates continue to increase, the likelihood of collecting on delinquent accounts decreases, because outstanding debts mount for consumers while their ability to pay declines.

Integrating optimized decisions into a collection strategy enables lenders to assign appropriate collection treatments by assessing the level of risk associated with a consumer while considering that customer's responsiveness to particular treatment options. Specifically, collections optimization uses mathematical algorithms to maximize organizational goals while applying constraints such as budget and call center capacity -- providing explicit treatment strategies at the consumer level that produce the highest probability of collecting outstanding dollars.

Optimization can be integrated into a real-time call center environment by targeting the right consumers for outbound calls and assigning resources to the consumers most likely to pay. It can also be integrated into traditional lettering campaigns to determine the number and frequency of letters, and the tone of each correspondence. The options for account treatment are virtually limitless and, unlike other techniques, optimization will determine the most profitable strategy while meeting operational and business constraints without simplification of the problem.

By incorporating optimization into a collection strategy that includes a predictive model or score and advanced segmentation, an organization can maximize collected dollars, minimize the cost of collection efforts, improve collections efficiency, and determine which accounts to sell off -- all while maximizing organizational profits.
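To make the idea of constraint-aware treatment assignment concrete, the sketch below assigns calls and letters to synthetic accounts under a call-center capacity limit and a letter budget. The payment probabilities, constraint values and field names are hypothetical assumptions, and a true collections optimization engine would solve this with mathematical programming rather than the simple greedy rule shown here.

```python
# A minimal sketch of constraint-aware collections treatment assignment on
# synthetic accounts. Greedy for readability; not a full optimization solve.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
accounts = pd.DataFrame({
    "balance": rng.uniform(200, 5000, n),
    "p_pay_if_called": rng.uniform(0.05, 0.5, n),
    "p_pay_if_letter": rng.uniform(0.02, 0.2, n),
})
call_capacity = 400        # outbound calls the center can work today
letter_budget = 800        # letters we can afford to send

accounts["call_value"] = accounts["balance"] * accounts["p_pay_if_called"]
accounts["letter_value"] = accounts["balance"] * accounts["p_pay_if_letter"]

accounts["treatment"] = "none"
# Fill call capacity with the accounts where a call is expected to collect the most.
call_ids = accounts["call_value"].nlargest(call_capacity).index
accounts.loc[call_ids, "treatment"] = "call"
# Then spend the letter budget on the best remaining accounts.
remaining = accounts[accounts["treatment"] == "none"]
letter_ids = remaining["letter_value"].nlargest(letter_budget).index
accounts.loc[letter_ids, "treatment"] = "letter"

expected = np.where(accounts["treatment"] == "call", accounts["call_value"],
                    np.where(accounts["treatment"] == "letter", accounts["letter_value"], 0.0))
print(accounts["treatment"].value_counts())
print(f"Expected collected dollars: ${expected.sum():,.0f}")
```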

Published: August 18, 2009 by Guest Contributor

There are a lot of areas covered in your comment: efficiency; credit quality (the human side, or character, in an impersonal environment); and policy adherence. We define efficiency and effectiveness using these metrics:

• Turnaround time from application submission to decision;
• Resulting delinquencies based upon type of underwriting (centralized vs. decentralized);
• Production levels between centralized and decentralized; and
• Performance of the portfolio based upon type of underwriting.

Due to the nature of Experian's technology, we are able to capture start and stop times of the typical activities related to loan origination. After analyzing the data from 160+ financial institutions of all sizes, Experian publishes an annual small business benchmark report that documents loan origination process efficiencies and inefficiencies, benchmarking these as industry standards.

Turnaround Time

From the benchmark report, we've seen that institutions that are centralized consistently have a turnaround time that is half that of those with decentralized environments. Interestingly, turnaround time is also much faster for the larger institutions than for smaller ones. This is counterintuitive, because smaller community banks tend to promote the close relationships they have with their clients and their communities; yet, when it comes to actually making a loan decision, it tends to take longer. In addition to speed, another aspect of turnaround is consistency. We can all think of situations where we were able to beat the stated turnaround times of the larger or the centralized institutions. Unfortunately, these tend to be isolated instances rather than the consistent performance that is delivered in a centralized environment.

Resulting delinquencies and performance of the portfolio based upon type of underwriting

Again, referring to the annual small business lending benchmark report, delinquencies in a centralized environment are 50% of those in a decentralized environment. I have worked with a number of institutions that allow the loan officer/relationship manager to "reverse the decision" made by a centralized underwriting group. The thinking is that the human aspect is otherwise missing in centralized underwriting. When the data is collected, though, the incremental business/portfolio that is approved by the loan officer (who is close to the client and knows the human side) is not profitable from a credit quality perspective. Specifically, this incremental portfolio typically has a net charge-off rate that exceeds the net interest margin -- and this is before we even consider the non-interest expense incurred. Your choice: is the incremental business critical to your success, or could you more fruitfully direct your relationship officer's attention elsewhere?

Production levels between centralized and decentralized

Not to beat a dead horse, but the multiple of two comes into play here too. As one looks at the throughput of each role (data entry, underwriter, relationship manager/lender), the production levels of a centralized environment are typically double those of a decentralized one.

It's clear that the data point to the efficiency and effectiveness of a centralized environment.
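As a simple illustration of the turnaround-time metric, the sketch below computes the mean and spread of decision turnaround by underwriting model from hypothetical application timestamps. The benchmark results cited above come from Experian's study, not from this calculation.

```python
# A minimal sketch of the turnaround-time metric on a hypothetical
# application log with submission and decision timestamps.
import pandas as pd

apps = pd.DataFrame({
    "submitted": pd.to_datetime(["2009-06-01 09:00", "2009-06-01 10:30",
                                 "2009-06-02 11:00", "2009-06-03 08:15"]),
    "decisioned": pd.to_datetime(["2009-06-02 16:00", "2009-06-05 09:00",
                                  "2009-06-03 10:00", "2009-06-09 13:45"]),
    "underwriting": ["centralized", "decentralized", "centralized", "decentralized"],
})
apps["turnaround_hours"] = (apps["decisioned"] - apps["submitted"]).dt.total_seconds() / 3600

# Mean and spread by underwriting model: consistency matters as much as speed.
print(apps.groupby("underwriting")["turnaround_hours"].agg(["mean", "std"]))
```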

Published: August 7, 2009 by Guest Contributor

By: Kari Michel

This blog is a continuation of my previous discussion about monitoring your new account acquisition decisions, with a focus on decision management. Decision management reports provide the insight to make more targeted decisions that are sound and profitable. These reports are used to identify which lending decisions are consistent with scorecard recommendations, the effectiveness of overrides, and/or whether cutoffs should be adjusted. Decision management reports include:

• Accept versus decline score distributions
• Override rates
• Override reason report
• Overrides by loan officer
• Decisions by loan officer

Successful lending organizations review this type of information regularly to make better lending policy decisions. Proactive monitoring provides feedback on existing strategies and helps evaluate whether you are making the most effective use of your score(s). It also helps to identify areas of opportunity to improve portfolio profitability. In my next blog, I will discuss the last set of monitoring reports: scorecard performance.
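The sketch below shows how a couple of the reports listed above (the override rate and overrides by loan officer) might be computed from a hypothetical decision file; the field names and records are illustrative assumptions.

```python
# A minimal sketch of basic decision-management reports from a hypothetical
# decision file: scorecard recommendation, final decision and loan officer.
import pandas as pd

decisions = pd.DataFrame({
    "score_recommendation": ["approve", "approve", "decline", "decline", "approve", "decline"],
    "final_decision":       ["approve", "decline", "approve", "decline", "approve", "approve"],
    "loan_officer":         ["Smith", "Smith", "Jones", "Jones", "Lee", "Lee"],
})
decisions["override"] = decisions["score_recommendation"] != decisions["final_decision"]

# Overall override rate and the split between high-side and low-side overrides.
print(f"Override rate: {decisions['override'].mean():.0%}")
print(pd.crosstab(decisions["score_recommendation"], decisions["final_decision"]))

# Override rate by loan officer, one of the reports listed above.
print(decisions.groupby("loan_officer")["override"].mean())
```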

Published: August 6, 2009 by Guest Contributor

Put yourself in the shoes of your collections team. The year ahead is challenging. Workloads are increasing as consumer debt escalates, and collectors are working tiring, stressful shifts talking to people who don't want to talk about their debts. What kind of incentives can improve your collections performance and at the same time create a well-motivated and productive team?

Introduction

Financial incentives have long been a popular method to help boost staff performance. These rewards usually relate to the achievement of certain goals -- personal, team, organizational or a combination of all three. A well-constructed incentive plan will increase staff morale and loyalty, as well as making a valuable difference to the bottom line. It can help ensure you are managing a team that is running at full speed and capability during these busy, turbulent times. However, collections managers can also implement alternative, non-monetary incentive programs that can boost staff commitment and effectiveness. This series of postings identifies cash and non-cash alternatives that can help build and maintain a motivated team.

Getting Started

Before introducing a new incentive plan, clearly explain your objectives to the team. If your main goal is to maximize profitability, boost morale by letting your team know they are a major source of profit. Their understanding of how individual performance relates to the business will deepen their commitment to the program once it begins. To help you decide what to include in the incentive plan, you must first understand what drives your team. This should be ascertained by conducting regular performance appraisals, call monitoring, attitude surveys and informal conversations. Your staff will likely tell you that increased status and recognition, higher pay, better working conditions and improved benefits would increase both morale and performance. We can look into incentives that address these requirements individually, but let's begin with the most obvious: money.

Money is a powerful motivator

The current economic climate guarantees that money is more important to your team members than ever; they want to be financially rewarded for their efforts. In this industry, collectors work individually, so it is wise to target them in this way when using financial incentives. Comparing individuals can also achieve higher performance levels, because the cachet of being 'top dog' is a real motivator for some people. Our advice is to begin by targeting staff in three familiar areas and to ensure from the start that your collections system delivers the depth and granularity of management information needed to support your incentive program.

I would like to thank the Experian collections experts who contributed to this four-part series. The rest of the series will be posted soon!

Published: August 6, 2009 by Guest Contributor

By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with the "baking" stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development are whether the model will be binned (divides predictive attributes into intervals) or continuous (the variable is modeled in its entirety), how to account for missing values (or "false zeros"), how to evaluate the validation sample (hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally what statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented will determine the timeliness and complexity of putting it into practice. Models can be developed in an in-house system, a third-party processor, a credit reporting agency, etc. Accurate documentation outlining the specifications of the model will be critical for successful implementation and model audits.

Scorecard monitoring will need to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance, and decision management to ensure that the model is performing as expected over the course of time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies.

With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out "just right!"
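For readers who want to see how the named validation statistics behave, the following sketch computes KS, Gini and divergence on a synthetic hold-out sample. The formulas follow common scorecard conventions and the data are assumptions, not output from any actual model build.

```python
# A minimal sketch of scorecard validation statistics (KS, Gini, divergence)
# on a synthetic hold-out sample of scores and good/bad outcomes.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
bad = rng.random(n) < 0.08
# Synthetic scores: bads score lower on average than goods.
score = np.where(bad, rng.normal(580, 50, n), rng.normal(680, 60, n))

order = np.argsort(score)
bad_sorted = bad[order]
cum_bad = np.cumsum(bad_sorted) / bad_sorted.sum()
cum_good = np.cumsum(~bad_sorted) / (~bad_sorted).sum()

ks = np.max(np.abs(cum_bad - cum_good))
# Gini from the area between the cumulative curves (equivalent to 2*AUC - 1).
gini = np.trapz(cum_bad, cum_good) * 2 - 1
# Divergence: squared mean difference over the average variance of the two groups.
mu_g, mu_b = score[~bad].mean(), score[bad].mean()
var_g, var_b = score[~bad].var(), score[bad].var()
divergence = (mu_g - mu_b) ** 2 / ((var_g + var_b) / 2)

print(f"KS = {ks:.3f}, Gini = {gini:.3f}, Divergence = {divergence:.3f}")
```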

Published: August 4, 2009 by Guest Contributor


There were always questions around the likelihood that the August 1, 2009 deadline would stick. Well, the FTC has pushed the Red Flags Rule compliance deadline out to November 1, 2009 (from the previously extended August 1, 2009 deadline). This extension is in response to pressure from Congress -- and, likely, from "lower risk" businesses questioning whether they should be covered under the Red Flags Rule at all (businesses such as those related to healthcare, retailers, small businesses, etc.). Keep in mind that the FTC's extension of Red Flags enforcement does not apply to address discrepancies on credit profiles, and that those discrepancies are expected to be worked TODAY. Risk management strategies are key to your success. To view the entire press release, visit: http://www.ftc.gov/opa/2009/07/redflag.shtm

Published: July 30, 2009 by Keir Breitenfeld

By: Wendy Greenawalt

When consulting with lenders, we are frequently asked which credit attributes are most predictive and valuable when developing models and scorecards. Because we receive this request often, we recently decided to perform the arduous analysis required to determine whether there are material differences in the attribute makeup of a credit risk model based on the portfolio to which it is applied.

The process we used to identify the most predictive attributes was a combination of art and science, for which our data experts drew upon their extensive credit bureau data experience and the knowledge obtained through engagements with clients from all types of industries. In addition, they applied an empirical process which provided statistical analysis and validation of the credit attributes included. Next, we built credit risk models for a variety of portfolios, including bankcard, mortgage and auto, and compared the credit attributes included in each.

What we found is that some attributes are inherently predictive regardless of the portfolio for which the model is being developed. However, when we took the analysis one step further, we identified that there can be significant differences in the account-level data when comparing different portfolio models. This discovery pointed to differences, not just in the behavior captured by the attributes, but in the mix of account designations included in the model. For example, in an auto risk model we might see a mix of attributes from all trades, auto, installment and personal finance, as compared to a bankcard risk model, which may be comprised mainly of bankcard, mortgage, student loan and all-trade attributes. Additionally, the attribute granularity included in the models may be quite different, from specific derogatory and public record data to high-level account balance or utilization characteristics.

What we concluded is that it is a valuable exercise to carefully analyze available data and consider all the possible credit attribute options in the model-building process, since substantial incremental lift in model performance can be gained from accounts and behaviors that may not have been previously considered when assessing credit risk.
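A rough sketch of the kind of comparison described above is shown below: simple risk models are fit for two hypothetical portfolios and the weight each attribute carries is compared. The attribute names, synthetic data and logistic regressions are illustrative assumptions, not the methodology Experian used in its study.

```python
# A minimal sketch of comparing which attributes carry weight in risk models
# built for different portfolios, using synthetic data and hypothetical names.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
attrs = ["all_trades_util", "auto_trades_open", "bankcard_util",
         "mortgage_delinq", "personal_finance_trades", "public_records"]

def fit_portfolio(weights, n=5000):
    X = rng.normal(size=(n, len(attrs)))
    logit = X @ weights + rng.normal(scale=0.5, size=n)
    y = (logit > 0.8).astype(int)                      # synthetic bad flag
    model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
    return pd.Series(np.abs(model.coef_[0]), index=attrs)

# Different portfolios weight the same attributes differently.
auto_importance = fit_portfolio(np.array([0.8, 1.2, 0.3, 0.2, 0.9, 0.4]))
bankcard_importance = fit_portfolio(np.array([0.7, 0.1, 1.3, 0.8, 0.2, 0.5]))

comparison = pd.DataFrame({"auto": auto_importance, "bankcard": bankcard_importance})
print(comparison.sort_values("auto", ascending=False).round(2))
```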

Published: July 30, 2009 by Guest Contributor

By: Tracy Bremmer

Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour the mixture into the pan and bake for 35 minutes. Cool before serving.

Model development, whether it is a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitor)! This blog will cover the first three steps in creating your model.

Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud?

Data collection and preparation evaluates what data sources are available and how best to incorporate these data elements within the model build process. Dependent variables (what you are trying to predict) and the type of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection.

Segmentation analysis provides the analytical basis to determine the optimal population splits for a suite of models to maximize the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores built on an individual population can provide lift over building just one single score.

Join us for our next blog, where we will cover the next three stages of model development: scorecard development; implementation/documentation; and scorecard monitoring.
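To illustrate the segmentation analysis step, the sketch below compares the rank-ordering power (KS) of a single score built on a full synthetic population against scores built separately on two segments. The segment definition, attributes and data are assumptions chosen so that segmentation shows lift; a real analysis would test candidate splits on actual portfolio data.

```python
# A minimal sketch of segmentation lift: one model on the full population
# versus a small suite of models, one per segment, compared by KS.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def ks(score, bad):
    order = np.argsort(score)
    b = bad[order]
    return np.max(np.abs(np.cumsum(b) / b.sum() - np.cumsum(~b) / (~b).sum()))

n = 20_000
thin_file = rng.random(n) < 0.3                    # hypothetical segment flag
x = rng.normal(size=(n, 3))
# The same attributes predict differently in each segment.
logit = np.where(thin_file, 1.5 * x[:, 0] - 0.2 * x[:, 1], 0.2 * x[:, 0] + 1.5 * x[:, 1])
bad = rng.random(n) < 1 / (1 + np.exp(-(logit - 2)))

single = LogisticRegression().fit(x, bad).predict_proba(x)[:, 1]
segmented = np.empty(n)
for seg in (thin_file, ~thin_file):
    m = LogisticRegression().fit(x[seg], bad[seg])
    segmented[seg] = m.predict_proba(x[seg])[:, 1]

# A higher KS for the segmented suite indicates lift from splitting the population.
print(f"KS single model:    {ks(single, bad):.3f}")
print(f"KS segmented suite: {ks(segmented, bad):.3f}")
```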

Published: July 30, 2009 by Guest Contributor

By: Kari Michel

In my last blog I gave an overview of monitoring reports for new account acquisition decisions, listing the three main categories that reports typically fall into: (1) population stability; (2) decision management; and (3) scorecard performance. Today, I want to focus on population stability.

Applicant pools may change over time as a result of new marketing strategies, changes in product mix, pricing updates, competition, economic changes or a combination of these. Population stability reports identify acquisition trends and the degree to which the applicant pool has shifted over time, including the scorecard components driving the shift in custom credit scoring models. Population stability reports include:

• Actual versus expected score distribution
• Actual versus expected scorecard characteristic distributions (available with custom models)
• Mean applicant scores
• Volumes, approval and booking rates

These types of reports provide information to help monitor trends over time, rather than spikes from month to month. Understanding the trends allows one to be proactive in determining if the shifts warrant changes to lending policies or cut-off scores. Population stability is only one area that needs to be monitored; in my next blog I will discuss decision management reports.
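As an illustration of the actual-versus-expected score distribution report, the sketch below computes a population stability index (PSI) on two synthetic applicant pools. The score bands and populations are assumptions, and the PSI thresholds noted in the comments are common industry rules of thumb rather than fixed standards.

```python
# A minimal sketch of an actual-vs-expected score distribution report,
# summarized with the population stability index (PSI). Rules of thumb:
# PSI < 0.10 is usually read as stable, > 0.25 as a significant shift.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
expected_scores = rng.normal(680, 60, 20_000)   # development (expected) population
actual_scores = rng.normal(660, 70, 5_000)      # recent applicants (actual)

bins = [-np.inf, 560, 600, 640, 680, 720, 760, np.inf]
expected_pct = pd.cut(pd.Series(expected_scores), bins).value_counts(normalize=True, sort=False)
actual_pct = pd.cut(pd.Series(actual_scores), bins).value_counts(normalize=True, sort=False)

psi_by_band = (actual_pct - expected_pct) * np.log(actual_pct / expected_pct)
report = pd.DataFrame({"expected_pct": expected_pct.round(3),
                       "actual_pct": actual_pct.round(3),
                       "psi": psi_by_band.round(4)})
print(report)
print(f"Total PSI: {psi_by_band.sum():.4f}")
```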

Published: July 30, 2009 by Guest Contributor

By: Wendy Greenawalt

On any given day, the US credit bureaus contain consumer trade data on approximately four billion trades. Interpreting that data, and defining how to categorize the accounts and build attributes, models and decisioning tools, can and does change over time, because the data reported to the bureaus by lenders and/or servicers also changes. Over the last few years, new data elements have enabled organizations to create attributes that identify very specific consumer behavior. The challenge for organizations is identifying what reporting changes have occurred and the value that the new consumer data can bring to decisioning.

For example, a new reporting standard introduced nearly a decade ago enabled lenders to report whether a trade was secured by money or by real property. Before the change, lenders would simply report the accounts as secured trades, making it nearly impossible to determine whether the account was a home equity line of credit or a secured credit card. Since then, lender reporting practices have changed and reports now clearly state that home equity lines of credit are secured by property, making it much easier to distinguish the two types of accounts from one another.

By taking advantage of the most current credit bureau account data, lenders can create attributes to capture new account types. They can also capture information (such as past-due amounts, utilization, closed accounts, and derogatory information including foreclosure, charge-off and/or collection data) to make informed decisions across the customer life cycle.
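The sketch below shows, in simplified form, how trade-level bureau data might be rolled up into consumer-level attributes such as utilization, past-due amounts and counts of trades secured by property. The field names and records are hypothetical and do not reflect an actual bureau file layout.

```python
# A minimal sketch of building consumer-level attributes from hypothetical
# trade-level records (field names and values are illustrative only).
import pandas as pd

trades = pd.DataFrame({
    "consumer_id":  [1, 1, 1, 2, 2],
    "account_type": ["bankcard", "heloc", "auto", "bankcard", "secured_card"],
    "secured_by":   ["none", "property", "vehicle", "none", "money"],
    "balance":      [4500, 25000, 12000, 900, 300],
    "credit_limit": [10000, 50000, 0, 3000, 500],
    "amount_past_due": [0, 0, 350, 0, 0],
    "charged_off":  [False, False, False, False, False],
})

# Revolving utilization: balances over limits on bankcard/HELOC-style trades.
revolving = trades[trades["account_type"].isin(["bankcard", "heloc", "secured_card"])]
rev = revolving.groupby("consumer_id").agg(bal=("balance", "sum"), lim=("credit_limit", "sum"))

attributes = pd.DataFrame({
    "total_past_due": trades.groupby("consumer_id")["amount_past_due"].sum(),
    "revolving_utilization": (rev["bal"] / rev["lim"]).round(3),
    "trades_secured_by_property": trades.groupby("consumer_id")["secured_by"]
                                        .apply(lambda s: int((s == "property").sum())),
    "charged_off_trades": trades.groupby("consumer_id")["charged_off"].sum(),
})
print(attributes)
```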

Published: July 14, 2009 by Guest Contributor

Vintage analysis 101

The title of this edition, 'The risk within the risk', is a testament to the amount of information that can be gleaned from an assessment of the performance of vintage analysis pools. Vintage analysis pools offer numerous perspectives on risk. They allow for a deep appreciation of the effects of loan maturation, and can also point toward the impact of external factors -- such as changes in real estate prices, origination standards, and other macroeconomic factors -- by highlighting measurable differences in vintage-to-vintage performance.

What is a vintage pool?

By the Experian definition, vintage pools are created by taking a sample of all consumers who originated loans in a specific period, perhaps a certain quarter, and tracking the performance of the same consumers and loans through the life of each loan. Vintage pools can be analyzed for various characteristics, but three of the most relevant are:

* Vintage delinquency, which allows for an understanding of the repayment trends within each pool;
* Payoff trends, which reflect the pace at which pools are being repaid; and
* Charge-off curves, which provide insights into the charge-off rates of each pool.

The credit grade of each borrower within a vintage pool is extremely important in understanding the vintage's characteristics over time, and credit scores are based on the status of the borrower just before the new loan was originated. This approach ensures that the new loan origination and the performance of that specific loan do not influence the borrower's credit score. By using this method of pooling and scoring, each vintage segment contains the same group of loans over time, allowing for a valid comparison of vintage pools and the characteristics found within them.

Once vintage pools have been defined and created, the possibilities for this data are numerous... Read more about our analysis opportunities for vintage analysis and our recent findings on vintage analysis.
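For readers who want to see the mechanics of a vintage delinquency curve, the sketch below pools synthetic loans by origination quarter and tracks the share 30+ days past due by months on book. The loan data and risk levels are assumptions used only to show the shape of the calculation, not findings from the Experian study referenced above.

```python
# A minimal sketch of vintage delinquency curves: pool loans by origination
# quarter and compare delinquency at the same months on book. Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# One row per loan per month on book (24 months of performance per loan).
loans = pd.DataFrame({
    "vintage": np.repeat(["2006Q1", "2007Q1"], 3000),
    "loan_id": np.arange(6000),
})
perf = loans.loc[loans.index.repeat(24)].copy()
perf["month_on_book"] = np.tile(np.arange(1, 25), 6000)

# Probability of being 30+ DPD rises with loan age; the 2007 vintage is riskier.
base = np.where(perf["vintage"] == "2007Q1", 0.04, 0.02)
p_dq = base * (1 - np.exp(-perf["month_on_book"] / 12))
perf["dq_30plus"] = rng.random(len(perf)) < p_dq

# Vintage delinquency curve: share of each origination pool 30+ DPD by month on book.
curves = (perf.groupby(["vintage", "month_on_book"])["dq_30plus"].mean()
              .unstack("vintage"))
print(curves.round(4).tail())   # compare vintages at equal points of maturation
```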

Published: July 13, 2009 by Kelly Kent

-- by Jeff Bernstein

So, here I am with my first contribution to Experian Decision Analytics' collections blog, and what I am discussing has practically nothing to do with analytics. But it has everything to do with managing the opportunities to positively impact collections results and leveraging your investment in analytics and strategies, beginning with the most important weapon in your arsenal: collectors.

Yes, I know it's a bit unconventional for a solutions and analytics company to talk about something other than models; but the difference between mediocre results and optimization rests with your collectors and your organization's ability to manage customer interactions.

Let's take a trip down memory lane and reminisce about one of the true landscape-changing paradigm shifts in collections in recent memory: the use of skill models to become payment of choice.

AT&T Universal Card was one of the first early adopters of a radical new approach toward managing an emerging Gen X debtor population during the early 1990s. Armed with fresh research into what influenced delinquent debtors to pay certain collectors while dodging others, they adopted what we called a "management systems" approach toward collections. They taught their entire collections team a new set of skill models that stressed bridging skills between the collector and the customer, allowing the collector to interact in a more collaborative, non-aggressive manner. The new approach enabled collectors to more favorably influence customer behavior, creating payment solutions collaboratively and allowing AT&T to become "payment of choice" among the other creditors competing for share of wallet.

A new set of skill metrics, which we now affectionately call our "dashboard," was created to measure the effective use of the newly taught skill models, and collectors were empowered to own their own performance -- and to leverage their team leader for coaching and skills development. Team developers, the new name for front-line collection managers, were tasked with spending 40-50% or more of their time on developmental activities, using leadership skills in their coaching and development work. The game plan was simple:

• Engage collectors with customer-focused skills that influence behavior and get paid sooner.
• Empower collectors to take on the responsibility for their own development.
• Make performance results visible top-to-bottom in the organization to stimulate competitiveness, leveraging our innate desire for recognition.
• Make leaders accountable for continuous performance improvement of individuals and teams.

It worked. AT&T Universal won the Malcolm Baldrige National Quality Award in 1992 for its efforts in "delighting the customer" while driving its delinquencies and charge-offs to superior levels. A new paradigm shift was unleashed and spread like wildfire across the industry, including many of the major credit card issuers, top-tier U.S. banks, and large retailers.

Why do I bring this little slice of history up in my first blog?

I see many banking and financial services companies across the globe struggling with more complex customer situations and harder collections cases -- with their attention naturally focused on tools, models, and technologies. As an industry, we are focused on early-lifecycle treatment strategy: identifying current, non-delinquent customers who may be at risk for future default, and triaging them before they become delinquent. Risk-based collections and segmentation is now a hot topic. Outsourcing and leveraging multiple, non-agent-based contact channels to reduce the pressure on collection resources is more important than ever. Optimization is getting top billing as the next "thing."

What I don't hear enough about is how organizations are engaged in improving the skills of collectors and executing the right management systems approach to extract the best performance possible from existing resources. In some ways, this may be lost in the chaos of our current economic climate. With all the focus on analytics, segmentation, strategy and technology, the opportunity to improve operational performance through skill building and leadership may have taken a back seat.

I've seen plenty of examples of organizations that have spent millions on analytical tools and technologies, improving portfolio risk strategy and the targeting of the right customers for treatment. I've seen the most advanced dialer, IVR, and other contact channel strategies used successfully to obtain the highest right-party contact rates at the lowest possible cost. Yet, with all of that focus and investment, I've seen these right-party contacts mismanaged by collectors who were not provided with the optimal coaching and skills.

With the enriched data available for decisioning, coupled with the amazing capabilities we have for real-time segmentation, strategy scripting, context-sensitive screens, and rules-based workflow management in our next-generation collections systems, we are at a crossroads in the evolution of collections. Let's not forget some of the "nuts and bolts" that drive operational performance and ensure success.

Something old can be something new. Examine your internal processes aimed at producing the best possible skills at all collector levels and ensure that you are not missing the easiest opportunity to improve your results.

Published: July 13, 2009 by Guest Contributor
