Latest Posts


By: Tom Hannagan

Apparently my last post on the role of risk management in the pricing of deposit services hit some nerve ends. That’s good. The industry needs its “nerve ends” tweaked after the dearth of effective risk management that contributed to the financial malaise of the last couple of years. Banks, or any business, cannot prosper by simply following their competitors’ marketing strategies and meeting or slightly undercutting their prices. The actions of competitors are an important piece of intelligence to consider, but not necessarily optimal for your bank to copy. One question regards the “how-to” behind risk-based pricing (RBP) of deposits. The answer has four parts. Let’s see.

First, because of the importance and size of the deposit business (yes, it’s a line of business) as a funding source, one needs to isolate the interest rate risk. This is done by transfer pricing, or in a sense, crediting the deposit balances for their marginal value as an offset to borrowing funds. This transfer price has nothing to do with the earnings credit rate used in account analysis – that is a merchandising issue used to generate fee income. Fees resulting from account analysis, when not waived, affect the profitability of deposit services, but are not a risk element. Two things are critical to the transfer of funding credit: 1) the assumptions regarding the duration, or reliability, of the deposit balances, and 2) the rate curve used to match that duration. Different types of deposits behave differently as rates paid change. Checking account deposits tend to be very loyal or “sticky” – they don’t move around a lot (or easily) because of the rate paid, if any. At the other extreme, time deposits tend to be very rate-sensitive and can move (in or out) for small incremental gains. Savings, money market and NOW accounts fall in between. Since deposits are (ultimately) an offset to marginal borrowing, just as loans might (ultimately) require marginal borrowing, we recommend using the same rate curve for both asset and liability transfer pricing. The money is the same on both sides of the balance sheet, and the rate curve used to fund a loan or credit a deposit should be the same. We believe this helps greatly to isolate interest rate risk. It also seems fairer when explaining the concept to line management.

Second, although there is essentially no credit risk associated with deposits, there is operational risk. Deposits make up most of the liability side of the balance sheet and therefore the lion’s share of institutional funding. Deposits are also a major source of operational expense. The mitigated operational risks, such as physical security, backup processing arrangements, various kinds of insurance and catastrophe plans, are normal expenses of doing business and are included in a bank’s financial statements. These costs need to be broken down by deposit category to get a picture of the risk-adjusted operating expenses.

The third major consideration for analyzing risk-adjusted deposit profitability is its revenue contribution. Deposit-related fee income can be a very significant number and needs to be allocated to the particular deposit category that generates it. This is an important aspect of the return, along with the risk-adjusted funding value of the balances, and it varies substantially across deposit types. Time deposits produce essentially zero fee income, whereas checking accounts can produce significant revenues.

The fourth major consideration is capital. There are unexpected losses associated with deposits that must be covered by risk-based capital, or equity. The unexpected losses include: unmitigated operational risks, any error in transfer pricing the market risk, and business or strategic risk. Although the unexpected losses associated with deposit products are substantially smaller than those found in lending products, they need to be taken into account to arrive at a fully risk-adjusted view.

It is also necessary to be able to compare the risk-adjusted profit and profitability of services as diverse as those found within banking. Enterprise risk management needs to consider all of the lines of business, and all of the products of the organization, on a risk-adjusted performance basis. Otherwise it is impossible to decide on the allocation of resources, including precious capital. Without this risk management view of deposits (just as with loans), it is impossible to price the services in a completely knowledgeable fashion. Good entity governance, asset and liability posturing, and competent line of business management all require more and better risk-based profit considerations to be an important part of the intelligence used to optimally price deposits.
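The four parts above can be combined into a simple risk-adjusted profitability statement for a deposit category. Here is a minimal sketch; the function, rates and balances are hypothetical illustrations, not figures from any actual bank, and a real implementation would pull the transfer rate from the same duration-matched funding curve used for loans:

```python
# Illustrative sketch of a risk-adjusted P&L for a deposit category.
# All rates and balances are invented; a real implementation would draw
# the transfer rate from the same funding curve used to transfer-price
# loans, matched to the assumed duration of the deposit balances.

def risk_adjusted_deposit_profit(
    avg_balance,          # average balance for the deposit category
    transfer_rate,        # duration-matched rate from the funding curve
    interest_paid_rate,   # rate actually paid to depositors
    fee_income,           # annual fee income allocated to this category
    operating_expense,    # annual operating expense, incl. mitigated op-risk costs
    capital_allocated,    # equity held against unexpected losses
    hurdle_rate,          # required return on allocated capital
):
    funding_credit = avg_balance * transfer_rate      # value as a funding source
    interest_expense = avg_balance * interest_paid_rate
    net_revenue = funding_credit - interest_expense + fee_income
    pre_capital_profit = net_revenue - operating_expense
    capital_charge = capital_allocated * hurdle_rate  # cost of risk-based capital
    return pre_capital_profit - capital_charge

# Example: a "sticky" checking portfolio vs. a rate-sensitive time-deposit book.
checking = risk_adjusted_deposit_profit(
    avg_balance=100_000_000, transfer_rate=0.035, interest_paid_rate=0.002,
    fee_income=1_500_000, operating_expense=2_800_000,
    capital_allocated=1_200_000, hurdle_rate=0.12)
time_deposit = risk_adjusted_deposit_profit(
    avg_balance=100_000_000, transfer_rate=0.035, interest_paid_rate=0.030,
    fee_income=0, operating_expense=400_000,
    capital_allocated=600_000, hurdle_rate=0.12)
print(f"Checking: ${checking:,.0f}  Time deposits: ${time_deposit:,.0f}")
```

Even with these made-up numbers, the sketch shows why category-level allocation matters: the checking book earns most of its risk-adjusted profit from the funding credit and fee income, while the time-deposit book gives most of its funding value back as interest expense.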

Published: January 20, 2010 by Guest Contributor

Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance?

If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics like how many records I have in my database with name and address populated. And how many addresses do I typically have for consumers? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do.

With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and social security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me.

Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old; what are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the sexier aspects of fraud detection (such as predictive scoring, out of wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
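To make the coverage, match-rate, and pin-rate ideas concrete, here is a toy sketch. The record layout, field names, and exact-string matching are all invented for illustration; production pinning logic would use fuzzy matching across many more identifiers:

```python
# Toy illustration of data-quality metrics for a fraud database.
# Records, field names, and exact-match logic are hypothetical.

records = [
    {"name": "Ann Lee", "addresses": ["12 Oak St", "9 Elm Ave"], "ssn": "111-11-1111"},
    {"name": "Bob Wu",  "addresses": ["3 Pine Rd"],              "ssn": "222-22-2222"},
    {"name": "Cy Day",  "addresses": [],                         "ssn": None},
]

# Coverage: how many records have both name and address populated?
covered = [r for r in records if r["name"] and r["addresses"]]
print(f"Name+address coverage: {len(covered)/len(records):.0%}")

# How many addresses do we typically have per consumer? Just one, or many?
print(f"Avg addresses: {sum(len(r['addresses']) for r in records)/len(records):.1f}")

def pin(name, address, ssn):
    """Try to find (pin) a consumer from the input identifiers."""
    for r in records:
        if r["name"] == name and (address in r["addresses"]
                                  or (ssn and r["ssn"] == ssn)):
            return r
    return None

# Pin rate over a batch of inquiries (inputs are made up for the example;
# the second inquiry has a stale address but still pins via SSN).
inquiries = [("Ann Lee", "9 Elm Ave", None),
             ("Bob Wu", "8 Gum St", "222-22-2222"),
             ("Cy Day", "1 Fir Ln", None)]
pinned = sum(1 for q in inquiries if pin(*q))
print(f"Pin rate: {pinned/len(inquiries):.0%}")
```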

Published: January 20, 2010 by Guest Contributor

In a continuation of my previous entry, I’d like to take the concept of the first-mover and specifically discuss its relevance to the current bank card market. Here are some statistics to set the stage:

• Q2 2009 bankcard origination levels are now at 54 percent of Q2 2008 levels
• In Q2 2009, bankcard originations for subprime and deep-subprime were down 63 percent from Q2 2008
• New average limits for bank cards are down 19 percent in Q2 2009 from their peak in Q3 2008
• Total unused limits continued to decline in Q3 2009, decreasing by $100 billion in Q3 2009

Clearly, the bank card market is experiencing a decline in credit supply, along with deterioration of credit performance and problematic delinquency trends, and yet, in order to grow, lenders are currently determining the timing and manner in which to increase their presence in this market. In the following points, I’ll review just a few of the opportunities and risks inherent in each approach.

Lender chooses to be a first-mover:

• Mining for gold – lenders currently have an opportunity to identify long-term profitable segments within larger segments of underserved consumers. Credit score trends show a number of lower-risk consumers falling to lower score tiers, and within this segment there will be consumers who represent highly profitable relationships. Early movers have the opportunity to access these consumers with unrealized creditworthiness at their most receptive moment, and thus the ability to achieve extraordinary profits in underserved segments.
• Low acquisition costs – the lack of new credit flowing into the market indicates a lack of competitiveness in the bank card acquisitions space. As such, a first-mover would likely incur lower acquisition costs, as consumers have fewer options and alternatives to consider.
• Adverse selection – given the high utilization rates of many consumers, lenders could face an abnormally high adverse selection issue, where a large number of the riskiest consumers are likely to accept offers to access much-needed credit, creating risk management issues.
• Consumer loyalty – whether through switching costs or loyalty incentives, first-movers have an opportunity to achieve retention benefits from the development of new client relationships in a vacant competitive space.

Lender chooses to be a secondary or late-mover:

• Reduced risk – allowing the first-mover to experience growing pains before entry. The implementation of new acquisition and risk-based pricing management techniques under new bank card legislation will not be perfected immediately. Second-movers will be able to read and react to the responses to first-movers’ strategies (for example, measuring delinquency levels in new subprime segments) and refine their pricing and policy approaches.
• Minimal switching costs – one of the most common first-mover advantages is the presence of switching costs for the customer. With minimal switching costs in place in the bank card industry, second-movers facing an incumbent would be able to steal market share with relative ease.
• Cherry-picked opportunities – as noted above, many previously attractive consumers will already have been engaged by the first-mover, challenging the second-mover to find the remaining attractive segments within the market. For instance, economic deterioration has resulted in short-term joblessness for some consumers who might be strong credit risks once their capacity to repay returns. Once these consumers are mined by the first-mover, the second-mover will likely incur greater costs to acquire them.

Whether lenders choose to be first to market or follow as second-movers, there are profitable opportunities and risk management challenges associated with each strategy. Academics and bloggers continue to debate the merits of each (1), but it is ultimately the lenders of today that will provide the proof.

(1) http://www.fastcompany.com/magazine/38/cdu.html

Published: January 18, 2010 by Kelly Kent

By: Ken Pruett

The use of Knowledge Based Authentication (KBA), or out of wallet questions, continues to grow. For many companies, this solution is used as one of the primary means of fraud prevention. Selecting the proper tool often involves a fairly significant due diligence process to evaluate various offerings before choosing the right partner and solution. Companies just want to make sure they make the right choice.

I am often surprised that a large percentage of customers just turn these tools on and never evaluate or even validate ongoing performance. Performance monitoring is a way to make sure you are getting the most out of the product you are using for fraud prevention. The exercise is really designed to take an analytical look at what you are doing today when it comes to Knowledge Based Authentication.

There are a variety of benefits that most customers experience after undergoing this fraud analytics exercise. The first is simply validating that the tool is working properly. Some questions to ponder include: Are enough frauds being identified? Is the manual review rate in line with what was expected? In almost every one of these engagements I have worked on, there were areas that were not in line with what the customer was hoping to achieve. Many had no idea that they were not getting the expected results.

Taking this one step further, changes can also be made to improve upon what is already in place. For example, you can evaluate how well each question is performing. The analysis can show you which questions are doing the best job of predicting fraud. Using the better-performing questions can allow you to find more fraud while referring fewer applications for manual review. This is a great way to optimize how you use the tool.

In most organizations there is increased pressure to make sure that every dollar spent brings value to the organization. Performance monitoring is a great way to show the value your KBA tool is bringing, and the exercise can also be used to show how you are proactively managing your fraud prevention process: demonstrating how well you are optimizing the tool today while addressing emerging fraud trends.

The key message is to continuously measure the performance of the KBA tool you are using. An exercise like performance monitoring can provide great insight on a quarterly basis. This will allow you to get the most out of your product and help you keep up with a variety of emerging fraud trends. Doing nothing is really not an option in today’s ever-changing environment.
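The question-level analysis described above can be as simple as tabulating, for each out of wallet question, how often known frauds fail it versus how often legitimate customers fail it. A minimal sketch, with the session layout and question names invented for illustration:

```python
# Hypothetical per-question performance tally for a KBA question set.
# Each session records which questions were answered incorrectly and
# whether the application was later confirmed as fraud.

sessions = [
    {"failed": {"prior_address", "car_lender"}, "fraud": True},
    {"failed": {"car_lender"},                  "fraud": False},
    {"failed": set(),                           "fraud": False},
    {"failed": {"prior_address"},               "fraud": True},
    {"failed": {"county"},                      "fraud": False},
]

frauds = [s for s in sessions if s["fraud"]]
goods  = [s for s in sessions if not s["fraud"]]

for q in sorted({"prior_address", "car_lender", "county"}):
    # A strong question is failed often by frauds, rarely by good customers.
    fraud_fail = sum(q in s["failed"] for s in frauds) / len(frauds)
    good_fail  = sum(q in s["failed"] for s in goods) / len(goods)
    print(f"{q:14s} fraud fail {fraud_fail:.0%}  good fail {good_fail:.0%}")
```

Run quarterly over real outcome-tagged sessions, a tally like this points to the questions worth keeping and the ones that mostly generate manual reviews of good customers.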

Published: January 18, 2010 by Guest Contributor

By: Amanda Roth

The reality of risk-based pricing is that there is no one “end-all, be-all” way of determining what pricing should be applied to your applicants. The truth is that statistics will only get you so far. They may get you 80 percent of the final answer, but to whom is 80 percent acceptable? The other 20 percent must also be addressed.

I am specifically referring to those factors that are outside of your control. For example, does your competition’s pricing impact your ability to price loans? Have you thought about how loyal-customer discounts or incentives may contribute to the success or demise of your program? Do you have a sensitive population that may have a significant reaction to any risk-based pricing changes? These questions must be addressed for sound pricing and risk management.

Over the next few weeks, we will look at each of these questions in more detail, along with tips on how to apply them in your organization. As the new year is often a time of reflection and change, I would encourage you to let me know what experiences you may be having in your own programs. I would love to include your thoughts and ideas in this blog.

Published: January 18, 2010 by Guest Contributor

To calculate the expected business benefits of making an improvement to your decisioning strategies, you must first identify and prioritize the key metrics you are trying to positively impact. For example, if one of your key business objectives is improved enterprise risk management, then some of the key metrics you seek to impact, in order to effectively address changes in credit score trends, could include reducing net credit losses through improved credit risk modeling and scorecard monitoring. Assessing credit risk is a key element of enterprise risk management and can be addressed as part of your application risk management processes, as well as other decisioning strategies applied at different points in the customer lifecycle.

In working with our clients, Experian has identified 15 key metrics that can be positively impacted through optimizing decisions. As you review the list below, identify those metrics that are most important to your organization:

• Approval rates
• Booking or activation rates
• Revenue
• Customer net present value
• 30/60/90-day delinquencies
• Average charge-off amount
• Average recovery amount
• Manual review rates
• Annual application volume
• Charge-offs (bad debt & fraud)
• Average cost per dollar collected
• Average amount collected
• Annual recoveries
• Regulatory compliance
• Churn or attrition

Based on Experian’s extensive experience working with clients around the world to achieve positive business results through optimizing decisions, you can expect between a 10 percent and 15 percent improvement in any of these metrics through the improved use of data, analytics and decision management software.

The initial high-level business benefit calculation, therefore, is quite important and straightforward. As an example, assume your current approval rate for vehicle loans is 65 percent, the average value of an approved application is $200, and your volume is 75,000 applications per year. Keeping all else equal, a 10 percent improvement in your approval rate (from 65 percent to roughly 72 percent) would lift total annual value from $9.75 million to about $10.7 million ($200 x 75,000 x .65 x 1.1), or roughly $975,000 in incremental business value each year. To prioritize your business improvement efforts, you’ll want to calculate expected business benefits across a number of key metrics and then focus on those that will deliver the greatest value to your organization.
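The high-level calculation is easy to reproduce and repeat across each metric you shortlist. A minimal sketch using the figures from the example above:

```python
# High-level business-benefit calculation from the vehicle-loan example.
applications = 75_000        # annual application volume
approval_rate = 0.65         # current approval rate
value_per_approval = 200     # average value of an approved application
uplift = 0.10                # expected improvement from better decisioning

current_value = applications * approval_rate * value_per_approval
improved_value = current_value * (1 + uplift)
print(f"Current annual value:  ${current_value:,.0f}")                    # $9,750,000
print(f"Improved annual value: ${improved_value:,.0f}")                   # $10,725,000
print(f"Incremental benefit:   ${improved_value - current_value:,.0f}")   # $975,000
```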

Published: January 14, 2010 by Roger Ahern

I’ve recently been hearing a lot about how bankcard lenders are reacting to changes in legislation, and recent statistics clearly show that lenders have reduced bankcard acquisitions as they retune acquisition and account management strategies for their bankcard portfolios. At this point, there appears to be a wide-scale reset of how lenders approach the market, and one of the main questions that needs to be answered pertains to market-entry timing: Should a lender be the first to re-enter the market in a significant manner, or is it better to wait and see how things develop before executing new credit strategies? I will dedicate my next two blogs to defining these approaches and discussing them with regard to the current bankcard market.

Based on common academic frameworks, today’s lenders have the option of choosing one of two routes: becoming a first-mover, or taking the role of a secondary or late mover. Each of these roles possesses certain advantages, and corresponding risks, that will dictate lenders’ strategic choices.

The first-mover advantage is defined as “A sometimes insurmountable advantage gained by the first significant company to move into a new market.” (1) Although often confused with being first to market, first-mover advantage is more commonly ascribed to firms that first substantially enter the market. The belief is that the first-mover stands to gain competitive advantages through technology, economies of scale and other avenues that result from this entry strategy. In the case of the bankcard market, current trends suggest that segments of subprime and deep-subprime consumers are currently underserved, and thus I would consider the first lender to target these customers with significant resources to have first-mover characteristics.

The second-mover to a market can also have certain advantages: the second-mover can review and assess the decisions of the first-mover and develop a strategy to take advantage of opportunities not seized by the first-mover. As well, it can learn from the mistakes of the first-mover and respond without having to incur the cost of experiential learning, while possessing superior market intelligence.

So, being a first-mover and being a second-mover can each have its advantages and pitfalls. In my next contribution, I’ll address these issues as they pertain to lenders considering their loan origination strategies for the bankcard market.

(1) http://www.marketingterms.com/dictionary/first_mover_advtanage

Published: January 14, 2010 by Kelly Kent

Conducting a validation on historical data is a good way to evaluate fraud models; however, fraud best practices dictate that a proper validation uses properly defined fraud tags. Before you can determine whether a fraud model or fraud analytics tool would have helped minimize fraud losses, you need to know what you are looking for. Many organizations have difficulty differentiating credit losses from fraud losses; usually, fraud losses end up lumped in with credit losses. When this happens, the analysis either has too few “known frauds” to create a business case for change, or it includes such a large target population of credit losses that the results are poor. By planning carefully, you can avoid this pitfall and ensure that your validation gives you the best chance to improve your business and minimize fraud losses.

As a fraud best practice for validations, consider using a target population that errs on the side of including credit losses; however, be sure to include additional variables in your sample that will allow you and your fraud analytics provider to apply various segmentations to the results. Suggested elements to include in your sample are: delinquency status, first delinquency date, date of last valid payment, date of last bad payment, and an indicator of whether the account was reviewed for fraud prior to booking.

Starting with a larger population, and giving yourself the flexibility to narrow the target later, will help you see the full value of the solutions you evaluate and reduce the likelihood of having to redo the analysis.
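A hedged sketch of what such a sample record might look like, carrying the suggested segmentation variables; the field names and the example segmentation rule are illustrative, not a prescribed layout:

```python
# Illustrative record layout for a fraud-model validation sample.
# Field names are hypothetical; the point is to carry enough
# segmentation detail to separate fraud losses from credit losses later.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ValidationRecord:
    account_id: str
    delinquency_status: str                    # e.g. "30", "60", "90", "charged_off"
    first_delinquency_date: Optional[date]
    last_valid_payment_date: Optional[date]
    last_bad_payment_date: Optional[date]
    reviewed_for_fraud_at_booking: bool

def likely_fraud_segment(r: ValidationRecord) -> bool:
    """One possible segmentation: accounts that charged off without ever
    making a valid payment look more like fraud than credit loss."""
    return r.last_valid_payment_date is None and r.delinquency_status == "charged_off"
```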

Published: January 13, 2010 by Chris Ryan

By: Tom Hannagan

This blog has often discussed many aspects of risk-adjusted pricing for loans. Loans, with their inherent credit risk, certainly deserve a lot of attention when it comes to risk management in banking. But that doesn’t mean you should ignore the risk management implications found in the other product lines. Enterprise risk management needs to consider all of the lines of business, and all of the products, of the organization. This includes the deposit services arena.

Deposits make up roughly 65 percent to 75 percent of the liability side of the balance sheet for most financial institutions, representing the lion’s share of their funding. They are a major source of operational expense and also account for most of the bank’s interest expense. Deposit activity carries operational risk, and this large funding source plays a huge role in market risk, including both interest rate risk and liquidity risk.

It stands to reason that such risks would be considered when pricing deposit services. Unfortunately, that is not always the case. Okay, to be honest, it’s too rarely the case. This raises serious entity governance questions. How can such a large operational undertaking, notwithstanding the criticality of the funding implications, not be subjected to risk-based pricing considerations?

We have seen warnings already that the current low interest rate environment will not last forever. When the economy improves and rates head upward, banks need to understand the bottom-line profit implications. Deposit rate sensitivity across the various deposit types is a huge portion of the impact on net interest income. Risk-based pricing of these services should be considered before committing to provide them. Even without the credit risk implications found on the loan side of the balance sheet, there is still plenty of operational and market risk impact that needs to be taken into account on the liability side.

When risk management is not considered and mitigated as part of the day-to-day management of the deposit line of business, the bank is leaving these risks completely to chance. This unmitigated risk increases the portion of overall risk that is then considered “unexpected” in nature and thereby increases the equity capital required to support the bank.

Published: January 12, 2010 by Guest Contributor

In a previous blog, we shared ideas for expanding the “gain” side of the ROI equation used to justify adopting new fraud best practices. In this post, we’ll look more closely at the “cost” side.

The cost of the investment – the costs of fraud analytics and tools that support fraud best practices go beyond the fees charged by the solution provider. While the marketplace is aware of these costs, they often aren’t considered by the solution providers. Achieving consensus on an ROI to move forward with new technology requires both parties to account for these costs. A more robust ROI should include these areas:

• Labor costs – if a tool increases fraud referral rates, the cost of the additional manual reviews must be taken into account.
• Integration costs – many organizations have strict requirements for recovering integration costs. This can place an additional burden on a successful ROI.
• Contractual obligations – as customers look to reduce the cost of other tools, they must be mindful of any obligations to continue using those tools.
• Opportunity costs – organizations do need to account for the potential impact of their fraud best practices on good customers. Barring a true champion/challenger evaluation, a good way to do this is to remain as neutral as possible with respect to the total number of fraud alerts generated using the new fraud tools compared to the legacy process.

As you can see, the challenge of creating a compelling ROI can be much more complicated than the basic equation suggests. It is critical in many industries to begin exploring ways to augment the ROI equation. This will ensure that our industries evolve and thrive without becoming complacent or unable to stay on top of dynamic fraud trends.
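Putting the two posts together, a more robust ROI nets these cost categories against the expanded gains. A minimal sketch of the basic equation, (gain - cost) / cost, with every figure invented for illustration:

```python
# Hypothetical "robust ROI" for a new fraud tool, expanding both sides
# of the basic equation: (gain - cost) / cost. All figures are invented.

gains = {
    "fraud_losses_avoided":  400_000,
    "other_tools_retired":    80_000,  # downstream/duplicate tool costs avoided
    "cross_sell_on_goods":    60_000,  # value from re-routing false positives
}
costs = {
    "vendor_fees":           180_000,
    "labor_for_referrals":    90_000,  # extra manual reviews the tool generates
    "integration":           120_000,
    "contract_obligations":   20_000,  # commitments remaining on legacy tools
}

gain, cost = sum(gains.values()), sum(costs.values())
roi = (gain - cost) / cost
print(f"Gain ${gain:,}  Cost ${cost:,}  ROI {roi:.0%}")  # Gain $540,000  Cost $410,000  ROI 32%
```

The point of the breakdown is that a tool whose fraud savings alone would not justify the vendor fees can still clear the hurdle once the other gains, and the hidden costs, are both on the table.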

Published: January 11, 2010 by Chris Ryan

By: Wendy Greenawalt

Given the current volatile market conditions and rising unemployment rates, no industry is immune from delinquent accounts. However, recent reports have shown a shift in consumer trends and attitudes related to cellular phones. For many consumers, a cell phone is an essential tool for business and personal use, and staying connected is a very high priority. As a result, many consumers pay their cellular bill before other obligations, even when they represent a poor credit risk to the bank. Even with this trend, cellular providers are not immune from delinquent accounts, or from having to determine the right course of action to improve collection rates. By applying optimization technology to account collection decisions, cellular providers can ensure that all variables are considered across the multiple contact options available.

Unlike other types of services, cellular providers have numerous options available when attempting to collect on outstanding accounts. This, however, poses its own challenges, because collectors must determine the ideal method and timing of contact while retaining the consumers who will be profitable in the long term. Optimizing decisions can consider all contact methods, such as text, inbound/outbound calls, disconnect, service limitation, timing and diversion of calls. At the same time, providers must weigh constraints such as likelihood of curing, historical consumer behavior (including credit score trends), and resource costs and limitations. Since the cellular industry is one of the most competitive businesses, it is imperative that providers take advantage of every tool that can improve decisions to drive revenue and retention. An optimized strategy tree can be easily implemented into current collection processes and provide significant improvement over current approaches.
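A toy sketch of the underlying idea: score each available contact action by expected recovery net of cost and retention impact, subject to a capacity constraint. Real optimization engines solve this jointly across the whole portfolio (for example, with linear programming); the greedy assignment, cure probabilities, costs, and attrition penalties below are all invented for illustration:

```python
# Toy contact-strategy optimizer for delinquent cellular accounts.
# Picks the action with the best expected net value per account while
# respecting a simple capacity constraint on outbound calls.
# All cure probabilities, costs, and attrition penalties are invented.

ACTIONS = {
    # action: (contact cost, expected long-term attrition penalty)
    "text":          (0.10,  0.0),
    "outbound_call": (4.00,  5.0),
    "service_limit": (0.50, 25.0),
    "disconnect":    (1.00, 80.0),
}

accounts = [
    {"id": 1, "balance": 120, "cure_prob": {"text": 0.30, "outbound_call": 0.55,
                                            "service_limit": 0.60, "disconnect": 0.20}},
    {"id": 2, "balance": 45,  "cure_prob": {"text": 0.25, "outbound_call": 0.40,
                                            "service_limit": 0.50, "disconnect": 0.15}},
]

call_capacity = 1  # collectors can only make so many outbound calls

def net_value(acct, action):
    cost, attrition = ACTIONS[action]
    return acct["cure_prob"][action] * acct["balance"] - cost - attrition

# Greedy assignment: best action per account, honoring the call budget.
for acct in sorted(accounts, key=lambda a: -a["balance"]):
    ranked = sorted(ACTIONS, key=lambda act: net_value(acct, act), reverse=True)
    for action in ranked:
        if action == "outbound_call" and call_capacity == 0:
            continue  # no call capacity left; fall back to the next-best action
        if action == "outbound_call":
            call_capacity -= 1
        print(f"Account {acct['id']}: {action} "
              f"(expected net ${net_value(acct, action):.2f})")
        break
```

In this made-up example the larger balance earns the one available call, while the smaller account gets a text, which is the trade-off between cure likelihood, cost, and retention that the post describes.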

Published: January 7, 2010 by Guest Contributor

A recent article in the Boston Globe discussed the lack of incentive for banks to perform wide-scale real estate loan modifications, given the lack of profitability for lenders under the current government-led program structure. The article cited a recent study by the Boston Federal Reserve which noted that up to 45 percent of borrowers who receive loan modifications end up in arrears again afterwards. On the other hand, around 30 percent of borrowers cured without any external support from lenders, leading the authors to believe that the cost and effort required to modify delinquent loans makes it an unprofitable, or simply unnecessary, proposition. Adding to this, one of the study’s authors was quoted as saying “a lot of people you give assistance to would default either way or won’t default either way.”

The problem lenders face is that although they know certain borrowers are prone to re-default, or to cure without much assistance, there has been little information available to distinguish these consumers from one another. Segmenting these customers is the key to creating a profitable loan modification process, since identifying each consumer in advance allows lenders to treat each borrower in the most efficient and profitable manner.

In considering possible solutions, the opportunity exists to leverage the power of credit data and credit attributes to create models that profile the behaviors lenders need to isolate. Although rapid changes in the economy have left many lenders without precedent behavior on which to model, the recent trend of consumers who re-default is beginning to provide lenders with correlated credit attributes to include in their models. Credit attributes were used in a recent study on strategic defaulters in the Experian-Oliver Wyman Market Intelligence Reports, and these attributes created defined segments that can assist lenders in implementing profitable loan modification policies and decisioning strategies.

Published: January 6, 2010 by Kelly Kent

By definition, “Return on Investment” is simple:

ROI = (the gain from an investment - the cost of the investment) / the cost of the investment

With such a simple definition, why do companies that develop fraud analytics and their customers have difficulty agreeing to move forward with new fraud models and tools? I believe the answer lies in the definition of the factors that make up the ROI equation.

“The gain from an investment” – when it comes to fraud, most vendors and customers want to focus on minimizing fraud losses. But what happens when fraud losses are not large enough to drive change? To adopt new technology, it’s necessary for the industry to expand its view of the “gain.” One way to do that is to identify other types of savings and opportunities that aren’t currently measured as fraud losses. These include:

• Cost of other tools – data returned by fraud tools can be used to resolve Red Flag compliance discrepancies and help fraud analysts manage high-risk accounts. By making better use of this information, downstream costs can be avoided.
• Other types of “bad” – organizations are beginning to look at the similarities among fraud and credit losses. Rather than identifying a fraud trend and searching for a tool to address it, some industry leaders are taking a different approach: let the fraud tool identify the high-risk accounts, and then see what types of behavior exist in that population. This approach helps organizations create the business case for constant improvement and also helps them validate the way in which they currently categorize losses.
• Increased cross-sell opportunities – focus on the “good” populations. False positives aren’t just filtered out of the fraud review work flow; they are routed into other work flows where relationships can be expanded.

Published: January 4, 2010 by Chris Ryan

By: Heather Grover

In my previous entry, I covered how fraud prevention affects the operational side of new DDA account opening. To give a complete picture, we need to consider fraud best practices and their impact on the customer experience.

As mentioned earlier, the branch continues to be a highly utilized channel and is the place for “customized service.” In addition, for retail banks that continue to be the consumer’s first point of contact, fraud detection is paramount in determining IF we should initiate a relationship with the consumer. Traditional thinking has been that DDA accounts are secured by deposits, so little risk management policy is applied. The reality is that the DDA account can be a fraud portal into the organization’s many products.

Bank consolidations and lower application volumes are driving increased competition at the branch, and increased demand exists to cross-sell consumers at the point of new account opening. As a result, banks are moving many fraud checks to the front end of the process: know your customer and Red Flag guideline checks are done sooner, in a consolidated and streamlined fashion. This minimizes fraud losses and meets compliance in a single step, so that new account holders are processed through the system as quickly as possible.

Another recent trend is the streamlining of a two-day batch fraud check process into an immediate and final decision for account holders. The casualty of a longer process could be a consumer who walks out of your branch with a checkbook in hand, only to be contacted the next day and told that his or her account has been shut down. By addressing this process, not only will the customer experience be improved, with increased retention, but operational costs will also be reduced.

Finally, relying on documentary evidence for ID verification can be viewed by some consumers as onerous and lengthy. Knowledge based authentication can provide more robust authentication while giving assurance of the consumer’s identity. The key is to use a solution that can authenticate “thin file” consumers opening DDA accounts. This means your out of wallet questions need to rely on multiple data sources, not just credit. Interactive questions can give your account holders peace of mind that you are doing everything possible to protect their identity, which builds the customer relationship… and your brand.

Published: January 4, 2010 by Guest Contributor

By: Heather Grover

In past client and industry talks, I’ve discussed the increasing importance of retail branches to the growth strategy of the bank. Branches are the most utilized channel of the bank, and they tend to be the primary tool for relationship expansion. Given the face-to-face nature of the channel, the branch historically has been viewed as relatively low-risk, needing little (if any) identity verification, with fewer uses of robust risk-based authentication or out of wallet questions. However, a now well-established fraud best practice is doing proper identity verification and fraud prevention at the point of DDA account opening.

In the current environment of declining credit application volumes and approvals across the enterprise, there is an increased focus on organic growth through deposits. Proper vetting during DDA account opening helps bring your retail process closer in line with the rest of your organization’s identity theft prevention program. It also provides assurance and confidence that the customer can then be cross-sold and up-sold to other products.

A key industry challenge is that many of the current tools used in DDA are less mature than those in other areas of the organization. We see few clients in retail that are using advanced fraud analytics or fraud models to minimize fraud, and even fewer using them to automate manual processes, even though more than 90 percent of DDA accounts are opened manually.

A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud? Many providers offer solutions that can do both, which can help minimize the number of steps required to process a new account.
2. Is the solution real-time? The more often you can give your new account holders an immediate and final decision, the less time and effort you’ll spend finalizing the decision after they leave the branch.
3. Does the solution provide detailed data for manual review? This can help save valuable analyst time and provider costs by limiting the need to do additional searches.

In my next post, we’ll discuss how fraud prevention in DDA impacts the customer experience.

Published: December 30, 2009 by Guest Contributor
