
This is the last question in our five-part series on the FFIEC guidance and what it means for Internet banking: what you need to know and how to prepare for the January 2012 deadline.

Q: How are organizations responding?

Experian estimates that fewer than half of the institutions affected by this guidance are prepared for the examinations. Many of the fraud tools in the marketplace, particularly those used to authenticate individuals, were deployed as point solutions. Few support the feedback loop needed to identify vulnerabilities, or the ability to employ the risk-based, “layered” approach the guidance is seeking.

_____________

This is the last post in our five-part series, but we're happy to answer more questions as you prepare for the January 2012 deadline.

Published: November 18, 2011 by Chris Ryan

This is the fourth question in our five-part series on the FFIEC guidance and what it means for Internet banking. Check back each day this week for more Q&A on what you need to know and how to prepare for the January 2012 deadline. If you missed parts 1-3, there's no time to waste; check them out here:

Go to question one: What does “layered security” actually mean?
Go to question two: What does “multi-factor” authentication actually mean?
Go to question three: Who does this guidance affect? And does it affect each type of credit grantor/lender differently?

Today's Q&A: What will the regulation do to help mitigate fraud risk in the near term and the long term?

The FFIEC's guidance will encourage financial institutions to re-examine their processes. The guidance is an important reinforcement of several critical ideas:

Fraud losses undermine faith in our financial system by exposing vulnerabilities in the way we exchange goods, services and currencies. It is important that members of the financial services community understand their role in protecting our economy from fraud.

Fraud is not the result of a static set of tactics employed by criminals. Fraud tactics evolve constantly, and the tools that combat them have to evolve as well. Considering the impact that technology is having on commerce, it is more important than ever to review the processes that we once thought made our businesses “safe.”

The architecture and flexibility of fraud prevention capabilities are a weapon unto themselves. The guidance provides a perspective on why it is important to understand the risk and to respond accordingly.

At the end of the day, the guidance is less about the need to take a specific action and more about the capability to recognize when those actions are needed, and how they should be structured so that high-risk actions are met with strong and sophisticated defenses.

_____________

Look for part five, the final post in our series, tomorrow.

Published: November 17, 2011 by Chris Ryan

This is the third question in our five-part series on the FFIEC guidance and what it means for Internet banking. If you missed the first and second questions, you can still view them; our answers aren't going anywhere. Check back each day this week for more Q&A on what you need to know and how to prepare for the January 2012 deadline.

Question: Who does this guidance affect? And does it affect each type of credit grantor/lender differently?

The guidance pertains to all financial institutions in the US that fall under the FFIEC's influence. While the guidance specifically mentions authentication in an on-line environment, it's clear that the overall approach advocated by the FFIEC applies to authentication in any environment. As fraud professionals know, strengthening the defenses in the on-line environment will drive the same fraud tactics to other channels. The best way to apply this guidance is to understand its intent and apply it across call centers and in-person interactions as well.

_____________

Look for part four of our five-part series tomorrow. If you have a related question that needs an answer, submit it in the comments field below and we'll answer those questions too. Chances are if you are questioning something, others are too, so let's cover it here! Or, if you would prefer to speak with one of our Fraud Business Consultants directly, complete a contact form and we'll follow up promptly.

Published: November 16, 2011 by Chris Ryan

This is the second question in our five-part series on the FFIEC guidance and what it means for Internet banking. If you missed the first question, don't worry, you can still go back. Check back each day this week for more Q&A on what you need to know and how to prepare for the January 2012 deadline.

Question: What does “multi-factor” authentication actually mean?

“Multi-factor” authentication refers to the combination of different security requirements that would be unlikely to be compromised at the same time. A simple example of multi-factor authentication is the use of a debit card at an ATM. The plastic debit card is an item that you must physically possess to withdraw cash, but the transaction also requires a PIN to complete. The card is one factor, the PIN is a second. The two combine to deliver multi-factor authentication. Even if the customer loses their card, it (theoretically) can't be used to withdraw cash from the ATM without the PIN. (A short code sketch of this two-factor check follows this post.)

_____________

Look for part three of our five-part series tomorrow.
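To make the card-plus-PIN idea concrete, here is a minimal sketch of a two-factor check. The card number, stored-hash scheme and salt are illustrative assumptions only, not a description of how any real ATM network authenticates.

```python
# A minimal sketch of multi-factor authentication: the withdrawal succeeds only if
# BOTH the physical card (something you have) and the PIN (something you know) check
# out. Card numbers, hashes and the salt scheme below are illustrative only.
import hashlib

REGISTERED_CARDS = {
    # card number read from the chip/stripe -> salted hash of the PIN on file
    "4000123412341234": hashlib.sha256(b"salt-1234" + b"4321").hexdigest(),
}

def authorize_withdrawal(card_number: str, pin: str) -> bool:
    stored = REGISTERED_CARDS.get(card_number)
    if stored is None:                       # factor 1 fails: card not recognized
        return False
    candidate = hashlib.sha256(b"salt-1234" + pin.encode()).hexdigest()
    return candidate == stored               # factor 2: knowledge of the PIN

print(authorize_withdrawal("4000123412341234", "4321"))  # True: both factors present
print(authorize_withdrawal("4000123412341234", "0000"))  # False: the card alone is not enough
```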

Published: November 15, 2011 by Chris Ryan

This is the first question in our five-part series on the FFIEC guidance and what it means for Internet banking. Check back each day this week for more Q&A on what you need to know and how to prepare for the January 2012 deadline.

Question: What does “layered security” actually mean?

“Layered” security refers to the arrangement of fraud tools in a sequential fashion. A layered approach starts with the simplest, most benign and unobtrusive methods of authentication and progresses toward more stringent controls as the activity unfolds and the risk increases. Consider a customer who logs onto an on-line banking session to execute a wire transfer of funds to another account. The layers of security applied to this activity might resemble the following (a rough code sketch follows this post):

1. Layer one: Account log-in. Security = a valid ID and password must be provided.
2. Layer two: Wire transfer request. Security = IP verification/confirmation that this PC has been used to access this account previously.
3. Layer three: A destination account is provided that has not been used to receive wire transfer funds in the past. Security = knowledge based authentication.

Layered security provides an organization with the ability to handle simple customer requests with minimal security, and to strengthen security as risks dictate. A layered approach enables the vast majority of low-risk transactions to be completed without unnecessary interference while the high-risk transactions are sufficiently verified.

_____________

Look for part two of our five-part series tomorrow.
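As a concrete illustration of the three layers described above, here is a minimal sketch of a layered decision flow for the wire-transfer example. The function names, risk signals and thresholds are assumptions for illustration, not a prescribed implementation.

```python
# A minimal sketch of layered security: each layer adds a stronger control only when
# the activity and its risk warrant it. The helper checks are stubbed placeholders;
# the signals and their ordering are illustrative assumptions.
def layered_wire_transfer_check(session, transfer):
    # Layer one: account log-in requires a valid ID and password.
    if not session["credentials_valid"]:
        return "deny"

    # Layer two: a wire request triggers device/IP verification.
    if transfer["requested"] and not session["device_seen_before"]:
        if not run_ip_verification(session):
            return "step_up"            # escalate rather than silently allow

    # Layer three: a never-before-used destination account triggers KBA.
    if transfer["destination_is_new"]:
        if not run_knowledge_based_authentication(session):
            return "deny"

    return "allow"

def run_ip_verification(session):
    # Placeholder: in practice, compare the IP/geolocation against account history.
    return session.get("ip_matches_history", False)

def run_knowledge_based_authentication(session):
    # Placeholder: in practice, present out-of-wallet questions and score the answers.
    return session.get("kba_passed", False)

example = {"credentials_valid": True, "device_seen_before": False,
           "ip_matches_history": True, "kba_passed": True}
print(layered_wire_transfer_check(example, {"requested": True, "destination_is_new": True}))
```

The design point is that low-risk sessions exit at layer one with no friction, while only the riskier combinations ever reach the more intrusive checks.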

Published: November 14, 2011 by Chris Ryan

By: John Straka

For many purposes, national home-price averages, MSA figures, or even zip code data cannot adequately gauge local housing markets. The higher the level of the aggregate, the less it reflects the true variety and constant change in prices and conditions across local neighborhood home markets. Financial institutions, investors, and regulators that seek out and learn how to use local housing market data will generally be much closer to true housing markets.

When houses are not good substitutes from the viewpoint of most market participants, they are not part of the same housing market. Different sizes, types and ages of homes, for example, may be in the same county, zip code, block, or even right next door to each other, but they are generally not in the same housing market when they are not good substitutes. This highlights the importance of starting with detailed, granular information on local-neighborhood home markets and homes.

To be sure, greater granularity in neighborhood home-market evaluation requires analysts and modelers to deal with much more data, on literally hundreds of thousands of neighborhoods in the U.S. It is fair to ask whether zip-code-level data, for example, might not be generally sufficient. Most housing analysts and portfolio modelers, in fact, have traditionally assumed this, believing that reasonable insights can be gleaned from zip code, county-level, or even MSA data. But this is fully adequate, strictly speaking, only if neighborhood home markets and outcomes are homogeneous (at least reasonably so) within the level of aggregation used. Unfortunately, even at the zip-code level, the data suggest otherwise.

Examples

All of the home-price and home-valuation data for this report was supplied by Collateral Analytics. I have focused on zip7s, i.e. zip+2s, which are a more granular neighborhood measure than zip codes. A Hodrick-Prescott (H-P) filter was applied by Collateral Analytics to the raw home-price data in order to attenuate short-term variation and isolate the six-year trends. But as we'll see, this dampening still leaves an unrealistically high range of variation within zip codes, for reasons discussed below. Fortunately there is an easy way to control for this, which we'll apply for final estimates of the range of within-zip variation in home-price outcomes.

The three charts below show the H-P filtered 2005-2011 percent changes in home price per square foot of living area within three different types of zip codes in San Diego County. Within the first type of zip code, 92319 in this case, the home-price changes in recent years have been relatively homogeneous, with a range of -56% to -40% across the zip7s (i.e., zip+2s) in 92319. But the second type of zip code, illustrated by 92078, is more typical. In this type of case the home-price changes across the zip7s have varied much more: the 2005-2011 zip7 percent changes in home prices within 92078 vary by over 40 percentage points, from -51% to -10%. In the third type of zip code, less frequent but surprisingly common, the home-price changes across the zip7s have had a truly remarkable range of variation. This is illustrated here by zip code 92024, in which the home-price outcomes have varied from -51% to +21%, a 71 percentage-point range of difference, and this is not the zip code with the maximum range of variation observed! All of the San Diego County zip codes are summarized in the bar chart below.
Nearly two-thirds of the zip codes, 65%, have more than a 30 percentage-point within-zip difference in the 2005-2011 zip7 percent changes in home prices. 40% have more than a 40 percentage-point range of home-price outcomes, 23% have more than a 50 percentage-point range, and 13% have more than a 70 percentage-point range. The average range of the zip7 within-zip-code differences is a 37 percentage-point median and a 41 percentage-point mean (a brief code sketch of this within-zip range calculation follows this post). These high numbers are surprising, and they are most likely unrealistically high.

[Chart: Summary of within-zip (zip+2 level) ranges of variation in home-price changes in San Diego: percentage of zips by range across zip+2s in home price/living area % change, 2005-2011]

Controlling for Factors Inflating the Range of Variation

Such sizable differences within a typical single zip code clearly suggest materially different neighborhood home markets. While this qualitative conclusion is supported further below, the magnitudes of the within-zip variation in home-price changes shown above are quite likely inflated. A limited number of observations in various zip7s tends to create statistical "noise" outliers, and the inclusion of distressed property sales can create further outliers; cases combining limited observations and distress sales are particularly capable of creating negative outliers that are not representative of the true price changes for most homes or their true range of variation within zip codes. (My earlier blog on June 29th discussed the biases from including distressed property sales while trying to gauge general price trends for most properties.)

Fortunately, I've been able to use a very convenient way to control for these factors: the zip7 averages of Collateral Analytics' AVM (Automated Valuation Model) values rather than simply the home-price data summarized above. These industry-leading AVM home valuations have been designed, in part, to filter out statistical noise problems.

The bar chart below shows the still-significant zip7 ranges within San Diego County zip codes using the AVM values, but the distribution is now shifted considerably, and more realistically, toward a much smaller share of zip codes with remarkably high zip7 variation. Compared with the chart above, now just 1% of the zips have a zip7 range greater than 60 percentage points, 5% greater than 50, and 11% greater than 40, but there are still 36% greater than 30. To be sure, this distribution, and the average range of zip7 differences (now a 25 percentage-point median and a 26 percentage-point mean), still show a considerable range of local home-market variation within zip codes. It seems fair to conclude that the typical zip code does not contain the uniformity in home-price outcomes that most housing analysts and modelers have tended to simply assume. The difference between the effects on consumer wealth and behavior of a 10% home-price decline, for example, versus a 35% to 50% decline would seem to be sizable in most cases. This kind of difference within a zip code is not at all unusual in these data.

How About a Different Type of Urban Area: More Uniform?

It might be thought that the diversity of topography across San Diego County (from the sea to the mountains) makes its variation of home-market outcomes within zip codes unusually high. To take a quick gauge of this hypothesis, let's look at a more topographically uniform urban area: Columbus, Ohio.
When I informally polled some of my colleagues about their prior belief on the within-zip-code variation in home-price outcomes in Columbus versus San Diego County, there was unanimous agreement with my own prior: we all expected greater within-zip uniformity in Columbus. I find it interesting to report here that we were wrong. Both the H-P filtered raw home-price information and the AVM values from Collateral Analytics show relatively greater zip7 variation within Columbus (Franklin County) zip codes than in San Diego County. The bar chart below shows the best-filtered, most attenuated results, the AVM values: 5% of the Columbus zips have a zip7 range greater than 70 percentage points, 8% greater than 60, 23% greater than 50, 35% greater than 40, and 65% greater than 30. The average range of zip7 within-zip-code differences in Columbus is a 35 percentage-point median and a 38 percentage-point mean.

Conclusion

These data seem consistent with what experienced appraisers and real estate agents have been trying to tell economists and other housing analysts, investors, financial institutions, and policymakers for quite a long time. Although they have quite reasonable uses for aggregate time-series and forecasting purposes, more aggregate-data-based models of housing markets miss a lot of the very real and material variation in local neighborhood housing markets. For home valuation and many other purposes, even models that use data down to the zip-code level of aggregation (which most analysts have assumed to be sufficiently disaggregated) are not really good enough. These models are not as good as they can or should be. These facts point to the greater challenge of properly defining local housing markets empirically, in such a way that better data, models, and analytics can be more rapidly developed and deployed for greater profitability, and for sooner and more sustainable housing-market recoveries.

I thank Michael Sklarz for providing the data for this report and for comments, and I thank Stacy Schulman for assistance with this post.
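For readers who want to reproduce the within-zip range calculation described above on their own data, here is a minimal sketch in pandas. The input file and column names are hypothetical placeholders, not Collateral Analytics' actual schema.

```python
# A minimal sketch of the within-zip range calculation: for each 5-digit zip code,
# take the spread (max minus min) of the 2005-2011 price changes across its zip+2s,
# then summarize how many zips exceed each threshold. Columns/file are hypothetical.
import pandas as pd

# Each row: one zip+2 ("zip7") with its 2005-2011 % change in price per sq. ft.
df = pd.read_csv("zip7_price_changes.csv")  # columns: zip5, zip7, pct_chg_2005_2011

# Range of zip7 outcomes within each 5-digit zip code, in percentage points
within_zip_range = df.groupby("zip5")["pct_chg_2005_2011"].agg(lambda s: s.max() - s.min())

# Share of zip codes whose internal zip7 outcomes differ by more than each threshold
for threshold in (30, 40, 50, 60, 70):
    share = (within_zip_range > threshold).mean()
    print(f"> {threshold} pct-point spread: {share:.0%} of zip codes")

print("median spread:", within_zip_range.median())
print("mean spread:", within_zip_range.mean())
```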

Published: October 7, 2011 by Guest Contributor

With the most recent guidance newly issued by the Federal Financial Institutions Examination Council (FFIEC), there is renewed conversation about knowledge based authentication. I think this is a good thing. It brings back to the forefront some of the things we have discussed for a while, like the difference between secret questions and dynamic knowledge based authentication, or the importance of risk based authentication.

What does the new FFIEC guidance say about KBA? Acknowledging that many institutions use challenge questions, the FFIEC guidance highlights that how challenge questions are implemented can greatly impact their effectiveness. Chances are you already know this. Of greater importance, though, is the fact that the FFIEC guidelines caution against less sophisticated systems and information that can be easily guessed or obtained from an Internet search, given the amount of information now available.

As mentioned above, the FFIEC guidelines call for questions that “do not rely on information that is often publicly available,” recommending instead a broad range of data assets on which to base questions. This is an area knowledge based authentication users should review carefully. At this point in time it is perfectly appropriate to ask, “Does my KBA provider rely on data that is publicly sourced?” If you aren't sure, ask for and review data sources. At a minimum, you want to look for the following in your KBA program (a brief sketch of two of these practices follows this post):

- Questions! Diverse questions drawn from broad data categories, including credit and noncredit assets
- Consumer question performance used as one element within an overall risk-based decisioning policy
- Robust performance monitoring: monitor against established key performance indicators, and do it often
- A process to rotate questions and adjust access parameters and velocity limits. Keep fraudsters guessing!
- Use of the resources available to you. Experian has compiled information that you might find helpful: www.experian.com/ffiec

Finally, I think the release of the new FFIEC guidelines may have made some people wonder if this is the end of KBA. I think the answer is a resounding “No.” Not only do the FFIEC guidelines support the continued use of knowledge based authentication, but recent research suggests that KBA is the authentication tool consumers identify as most effective. Where I would draw caution is when research doesn't distinguish between “secret questions” and dynamic knowledge based authentication, which, as we all know, are very different.
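As a rough illustration of two of the checklist items above, question rotation across diverse data categories and velocity limits, here is a minimal sketch. The question categories, the three-attempts-per-day limit and the rotation scheme are assumptions for illustration, not how any particular KBA product works.

```python
# A minimal sketch of two KBA hygiene practices: drawing questions from diverse data
# categories (and rotating them) and enforcing a velocity limit on authentication
# attempts. All categories, questions and limits below are illustrative only.
import random
from collections import defaultdict
from datetime import datetime, timedelta

QUESTION_POOL = {
    "credit":    ["Which lender holds your auto loan?", "What is your monthly mortgage payment range?"],
    "noncredit": ["In which county have you previously lived?", "Which of these phone numbers is associated with you?"],
}

attempts = defaultdict(list)          # consumer id -> timestamps of KBA sessions
MAX_ATTEMPTS_PER_DAY = 3

def start_kba_session(consumer_id: str):
    now = datetime.utcnow()
    recent = [t for t in attempts[consumer_id] if now - t < timedelta(days=1)]
    if len(recent) >= MAX_ATTEMPTS_PER_DAY:
        return None                   # velocity limit hit: route to manual review instead
    attempts[consumer_id] = recent + [now]
    # Rotate: pick one question from each category so no single data source dominates.
    return [random.choice(questions) for questions in QUESTION_POOL.values()]

print(start_kba_session("consumer-123"))
```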

Published: October 4, 2011 by Guest Contributor

By: Mike Horrocks

Have you ever been struck by a turtle, or, even better, burnt by water skis that were on fire? If you are like me, these are not accidents that I think will ever happen to me, and I'm not concerned that my family doctor didn't do a rotation in medical school to specialize in treating them. On October 1, 2013, however, doctors and hospitals across the U.S. will have the ability to identify, log, bill, and track those accidents and thousands of other very specific medical events. In fact, the list will jump from the current 18,000 medical codes to 140,000 medical codes. Some people hail this as a great step toward the management of all types of medical conditions, whereas others view it as the introduction of noise into a medical system that is already overburdened.

What does this have to do with credit risk management, you ask? When I look at the amount of financial and non-financial data that the credit industry has available to understand the risk of our consumer or business clients, I wonder where we are on the range from “take two aspirin and call me in the morning” to “[the accident] occurred inside a chicken coop” (code: Y9272). Are we only identifying a risky consumer after they have defaulted on a loan? Or are we trying to find a pattern in the consumer's purchases at a coffee house that would correlate with some other data point to indicate risk when the moon is full?

The answer is somewhere in between, and it will be different for each institution. Let's start with what is known to be predictive when it comes to monitoring our portfolios (data and analytics, coupled with portfolio risk monitoring to minimize risk exposure) and then expand that over time. Click here for a recent case study that demonstrates this quite successfully with one of our clients. Next steps could include adding analytics and/or triggers to identify certain risks more specifically. When it comes to risk, incorporating attributes or a solid set of triggers that identify risk early and can drill down to specific events, combined with technology that streamlines portfolio management processes (whether you have an existing system in place or are in search of a migration), will give you better insight into the risk profile of your consumers.

Think about where your organization lies on the spectrum. If you are already monitoring your portfolio with some of these solutions, consider the next logical step to improve the process: is it more data, advanced analytics using that data, a combination of both, or perhaps a better system for monitoring the risk more closely? Wherever you are, don't let your institution end up with the financial equivalent of the new medical codes W2202XA, W2202XD, and W2202XS (injuries from walking into a lamppost: initial encounter, subsequent encounter, and sequela).

Published: September 19, 2011 by Guest Contributor

Our guest blogger this week is Tom Bowers, Managing Director, Security Constructs LLC, a security architecture, data leakage prevention and global enterprise information consulting firm.

The rash of large-scale data breaches in the news this year raises many questions, one of which is this: how do hackers select their victims? The answer: research. Hackers do their homework; in fact, an actual hack typically takes place only after many hours spent studying the target. Here's an inside look at a hacker in action:

Using search queries through such resources as Google and job sites, the hacker creates an initial map of the target's vulnerabilities. For example, job sites can offer a wealth of information such as hardware and software platform usage, including specific versions and their use within the enterprise.

The hacker fills out the map with a complete intelligence database on your company, perhaps using public sources such as government databases, financial filings and court records. Attackers want to understand such details as how much you spend on security each year, other breaches you've suffered, and whether you're using LDAP or federated authentication systems.

The hacker tries to identify the person in charge of your security efforts. As they research your Chief Security Officer or Chief Information Security Officer (whom they report to, conferences attended, talks given, media interviews, etc.), hackers can get a sense of whether this person is a political player or a security architect, and can infer the target's philosophical stance on security and where they're spending time and attention within the enterprise.

Next, hackers look for business partners, strategic customers and suppliers used by the target. Sometimes it may be easier to attack a smaller business partner than the target itself. Once again, this information comes from basic search engine queries; attackers use job sites and corporate career sites to build a basic map of the target's network.

Once assembled, all of this information offers a list of potential and likely egress points within the target.

While there is little you can do to prevent hackers from researching your company, you can reduce the threat this poses by conducting the same research yourself. Though the process is a bit tedious to learn, it is free to use; you are simply conducting competitive intelligence on your own enterprise. By reviewing your own information, you can draw similar conclusions to the attackers, allowing you to strengthen those areas of your business that may be at risk.

For example, if you want to understand which of your web portals may be exposed to hackers, use the following search term in Google: "site:yourcompanyname.com -www.yourcompanyname.com". This query specifies that you want to see everything on your site except WWW sites. Web portals do not typically start with WWW, and this query will surface results like eportal.yourcompanyname.com and ecomm.yourcompanyname.com. Portals are a great place to start, as they usually require associated user names and passwords; this means that a database is storing these credentials, which is a potential goldmine for attackers. You can set up a Google Alert to constantly watch for new portals; simply type in your query, select how often you want updates, and Google will send you an alert every time a new portal shows up in its results. (A small complementary script follows this post.)

Knowledge is power. The more you know about your own business, the better you can protect it from becoming prey to hacker-hawks circling in cyberspace.
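Alongside the Google query above, one complementary self-research check (not part of the original post) is to see which common non-www hostnames on your domain resolve publicly. This is a minimal sketch; the domain and the subdomain list are hypothetical examples, not an exhaustive or recommended inventory.

```python
# A minimal sketch of "researching yourself": probe a few common non-www hostnames
# to see which resolve publicly and therefore merit review. Domain and candidate
# subdomains are hypothetical placeholders.
import socket

DOMAIN = "yourcompanyname.com"
CANDIDATES = ["portal", "eportal", "ecomm", "vpn", "mail", "extranet", "careers"]

for sub in CANDIDATES:
    host = f"{sub}.{DOMAIN}"
    try:
        socket.getaddrinfo(host, 443)   # resolves -> host is publicly discoverable
        print(f"publicly resolvable: {host}")
    except socket.gaierror:
        pass                            # does not resolve; nothing to review here
```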
Download our free Data Breach Response Guide

Published: September 6, 2011 by Michael Bruemmer

By: Mike Horrocks

Let's all admit it, who would not want to be Warren Buffett for a day? While soaking in the tub, the “Sage of Omaha” came up with the idea to purchase shares of Bank of America and managed to close the deal in under 24 hours (and also make $357 million in one day thanks to an uptick in the stock). Clearly investor opinions differ when picking investments, so what did Buffett see that was worth taking that large a risk? In interviews Buffett simply states that he saw the fundamentals of a good bank (once they fix a few things) that will return his investment many times over. He has also said that he came to this conclusion based on years of seeing opportunities where others only see risk.

So what does that have to do with risk management? First, ask yourself as you look at your portfolio of customers: which ones are you “short-selling” and risk losing, and which customers are you investing in and expecting Buffett-like returns on in the future? Second, ask yourself how you are making that “investment” decision on your customers. And lastly, ask yourself how confident you are in that decision.

If you're not employing some mode of segmentation on your portfolio today, stop and make that happen as soon as you are done reading this blog. You know what a good customer looks like, or looked like once upon a time. Admit to yourself that not every customer looks as good as they used to before 2008, and while you are not “settling,” be open-minded about who you would want as a customer in the future.

Amazingly, Buffett did not have Bank of America CEO Brian Moynihan's phone number when he wanted to make the deal. This is where you are head and shoulders above Gorat's Steak House's favorite customer. You have deposit information, loan activity and performance history, credit data, and even the phone numbers of your customers. This gives you plenty of data and solutions to build a profile of what a good customer looks like, and thereby know whom to invest in.

The next part is the hardest: how confident are you in your decision, confident enough to put your money on it? For example, my wife invested in Bank of America the day before Warren put in his $5 billion. She saw some of the same signs that he did in the bank. However, the fact that I am writing this blog is an indicator that she clearly did not invest on the scale that Warren did. But what is stopping you from going all in and investing in your customers' future? If the fundamentals of your customer segmentation are sound, any investment today in your customers will come back to you in loyalty and profits in the future.

So at the risk of conjuring up a mental image, take the last lesson from Warren Buffett's tub-soaking investment process: get up and invest in those customers who look risky today but sound tomorrow, or run the risk of future profits going down the drain.

Published: August 30, 2011 by Guest Contributor

By: Kari Michel

The way medical debts are treated in scores may change with the introduction of the Medical Debt Responsibility Act in June 2011. The Medical Debt Responsibility Act would require the three national credit bureaus to expunge medical collection records of $2,500 or less from files within 45 days of their being paid or settled. The bill is co-sponsored by Representatives Heath Shuler (D-N.C.), Don Manzullo (R-Ill.) and Ralph M. Hall (R-Texas).

As a general rule, expunging predictive information is not in the best interest of consumers or credit granters, both of which benefit when credit reports and scores are as accurate and predictive as possible. If any type of debt information proven to be predictive is expunged, consumers risk exposure to improper credit products, as they may appear to be more financially equipped to handle new debt than they truly are.

Medical debts are never taken into consideration by VantageScore® Solutions LLC if the debt is known to be reported by a medical facility. When a medical debt is outsourced to a third-party collection agency, it is treated the same as other debts that are in collection. Collection accounts of less than $250, or ones that have been settled, have less impact on a consumer's VantageScore® credit score. With or without the medical-debt-in-collection information, the VantageScore® credit score model remains highly predictive.

Published: August 29, 2011 by Guest Contributor

By: Mike Horrocks

The realities of the new economy and the credit crisis are driving businesses and financial institutions to better integrate new data and analytical techniques into operational decision systems. Adjusting credit risk processes in the wake of new regulations, while also increasing profits and customer loyalty, will require a new brand of decision management systems to accelerate more precise customer decisions.

A Webinar scheduled for Thursday will show you how blending business rules, data and analytics inside a continuous-loop decisioning process can empower your organization to control marketing, acquisition and account management activities, minimizing risk exposure while ensuring portfolio growth. Topics include:

What the process is and the key building blocks for operating one over time
Why the process can improve customer decisions
How analytical techniques can be embedded in the change control process (including data-driven strategy design or optimization)

If interested, check out more; there is still time to register for the Webinar. And if you just want to see a great video, check out this intro.

Published: August 24, 2011 by Guest Contributor

With the raising of the U.S. debt ceiling and its recent ramifications consuming the headlines over the past month, I began to wonder what would happen if the general credit consumer made a similar argument to their credit lender. Something along the lines of, “Can you please increase my credit line (although I am maxed out)? I promise to reduce my spending in the future!” While novel, probably not possible. In fact, just the opposite typically occurs when an individual begins to borrow up to their personal “debt ceiling.”

When the ratio of the credit an individual is using to the credit available to them rises above a certain percentage, it can adversely affect their credit score, in turn affecting their ability to secure additional credit. This percentage, known as the utilization rate, is one of several factors considered in an individual's credit score calculation. For example, the utilization rate makes up approximately 23% of an individual's calculated VantageScore® credit score. (A quick sketch of the calculation follows this post.)

The good news is that consumers as a whole have been reducing their utilization rate on revolving credit products such as credit cards and home equity lines of credit (HELOCs) to the lowest levels in over two years. Bankcard and HELOC utilization is down to 20.3% and 49.8%, respectively, according to the Q2 2011 Experian – Oliver Wyman Market Intelligence Reports. In addition to lowering their utilization rate, consumers are also doing a better job of managing their current debt, resulting in multi-year lows for delinquency rates, as mentioned in my previous blog post.

By lowering their utilization and delinquency rates, consumers are viewed as less of a credit risk and become more attractive to lenders for new products and increased credit limits. Perhaps the government could learn a lesson or two from today's credit consumer.
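Here is a minimal sketch of the utilization calculation described above: total revolving balances divided by total revolving credit limits. The account figures are made up for illustration, and real scoring models may weigh per-account and overall utilization differently.

```python
# A minimal sketch of revolving utilization: total balances / total credit limits.
# The accounts below are invented examples, not real consumer data.
accounts = [
    {"balance": 2_500, "limit": 5_000},    # bankcard
    {"balance": 1_200, "limit": 10_000},   # second bankcard
    {"balance": 30_000, "limit": 60_000},  # HELOC
]

utilization = sum(a["balance"] for a in accounts) / sum(a["limit"] for a in accounts)
print(f"Overall revolving utilization: {utilization:.1%}")  # 44.9% in this example
```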

Published: August 23, 2011 by Alan Ikemura

Consumer credit card debt has dipped to levels not seen since 2006, and the memory of pre-recession spending habits continues to get hazier with each passing day. In May, revolving credit card balances totaled over $790 billion, down $180 billion from mid-2008 peak levels. Debit and prepaid volume accounted for 44%, or nearly half, of all plastic spending, growing substantially from 35% in 2005 and 23% a decade ago. Although month-to-month tracking suggests some noise in the trends, as illustrated by the slight uptick in credit card debt from April to May, the changes we are seeing are not at all temporary. What we are experiencing is a combination of many factors, including the after-effects of recession tightening, changes in the level of comfort with financing non-essential purchases, the “new boomer” population entering the workforce in greater numbers, and the diligent efforts of Gen Xers to improve the general household wallet composition.

How do card issuers shift existing strategies? Baby boomers are entering that comfortable stage of life where incomes are higher and expenses are beginning to trail off as the last child is put through college and mortgage payments are predominantly applied toward principal. This group worries more about retirement investments and depressed home values, and as such they demand high value for their spending. Rewards-based credit continues to resonate well with this group. Thirty years ago, baby boomers watched their parents use cash, money orders and teller checks to manage finances, but today's population has access to many more options and is highly educated. As such, this group demands value for its business, and a constant review of competitive offerings and development of new, relevant rewards products are needed to sustain market share.

The younger generation is focused on technology. Debit and prepaid products accessible through mobile apps are more widely accepted by this group, unlike ten to fifteen years ago when multiple credit cards with four-figure credit limits each were provided to college students on a large scale. Today's new boomer is educated on the risks of using credit, while at the same time parents are apt to absorb more of their children's monthly expenses. Servicing this segment's needs, while helping them establish a solid credit history, will result in long-term penetration of a growing segment.

The recent CARD Act and subsequent amendments have taken a bite out of revenue previously used to offset the increased risk and related costs that allowed card issuers to serve the near-prime sector. However, we are seeing a trend of new lenders getting into the credit card game while existing issuers slowly start to evaluate the next tier. After six quarters of consistent credit card delinquency declines, we are seeing slow signs of relief. The average VantageScore for new card originations increased by 8 points from the end of 2008 into early 2010, driven by credit tightening actions, and has started to slowly come back down in recent months.

What next? What all of this means is that card issuers have to be more sophisticated with risk management and marketing practices. The ability to define segments through the use of alternate data sources and access channels is critical to the ongoing capture of market share and profitable usage.
First, the segmentation will need to identify the “who” and the “what”: who wants which products, how much credit a consumer is eligible for, and what rate, terms and rewards structure will be required to achieve desired profit and risk levels, particularly as the economy continues to teeter between further downturn and, at best, slow growth. By incorporating new modeling and data intelligence techniques, we are helping sophisticated lenders cherry-pick the non-super-prime prospects and offering guidance on aligning products that best balance risk and reward dynamics for each group. If done right, card issuers will continue to serve a diverse universe of segments and generate profitable growth.

Published: August 22, 2011 by Guest Contributor

As I'm sure you are aware, the Federal Financial Institutions Examination Council (FFIEC) recently released its "Supplement to Authentication in an Internet Banking Environment," guiding financial institutions to mitigate risk using a variety of processes and technologies as part of a multi-layered approach. In light of this updated mandate, businesses need to move beyond simple challenge-and-response questions to more complex out-of-wallet authentication. Additionally, those incorporating device identification should look to more sophisticated technologies well beyond traditional IP address verification alone.

Recently, I contributed to an article on how these new guidelines might affect your institution. Check it out here, in full: http://ffiec.bankinfosecurity.com/articles.php?art_id=3932

For more on what the FFIEC guidelines mean to you, check out these resources, which also give you access to a recent Webinar.

Published: August 19, 2011 by Keir Breitenfeld
