Data & Analytics


I believe it was George Bernard Shaw who once said something along the lines of, “If economists were laid end-to-end, they’d never come to a conclusion, at least not the same conclusion.” It often feels the same way when it comes to big data analytics around customer behavior. As you look at new tools to put your customer insights to work for your enterprise, you likely have questions coming from across your organization. Models always seem to take forever to develop, so how sure are we that the results are still accurate? What data did we use in this analysis, and do we need to worry about compliance or security? To answer these questions and make the best use of customer data, the most forward-thinking financial institutions are turning to analytical environments, or sandboxes, to solve their big data problems. But what functionality is right for your financial institution? In your search for a sandbox solution, keep these top four features in mind.

Efficiency: Building an internal data archive with effective business intelligence tools is expensive, time-consuming and resource-intensive. That’s why investing in a sandbox makes the most sense when it comes to drawing the value out of your customer data. By providing immediate access to the data environment at all times, the best systems can reduce the time from data input to decision by at least 30%. Another way the right sandbox can help you achieve operational efficiencies is through direct integration with your production environment. Pretty charts and graphs are great and can be very insightful, but the best sandbox goes beyond business intelligence and lets you put models into action immediately.

Scalability and Flexibility: In implementing any new software system, scalability and flexibility are key to integration with your native systems and to the system’s capabilities. This is even more imperative when implementing an enterprise-wide tool like an analytical sandbox. Look for systems that offer a hosted, cloud-based environment, such as Amazon Web Services, that ensures operational redundancy, browser-based access and system availability. The right sandbox will leverage a scalable software framework for efficient processing. It should also be programming-language agnostic, allowing for use of all industry-standard programming languages and analytics tools like SAS, R Studio, H2O, Python, Hue and Tableau. Moreover, you shouldn’t have to pay for software suites that your analytics teams aren’t going to use.

Support: Whether you have an entire analytics department at your disposal or a lean, start-up-style team, you’re going to want the highest level of support when it comes to onboarding, implementation and operational success. The best sandbox solution for your company will have a robust support model in place to ensure client success. Look for solutions that offer hands-on instruction, flexible online or in-person training and analytical support. Look for solutions and data partners that also offer the consultative help of industry experts when your company needs it.

Data, Data and More Data: Any analytical environment is only as good as the data you put into it. It should, of course, include your own client data. However, relying exclusively on your own data can lead to incomplete analysis, missed opportunities and reduced impact. When choosing a sandbox solution, pick a system that includes the most local, regional and national credit data, in addition to alternative data and commercial data assets, on top of your own data. The optimum solutions will have years of full-file, archived tradeline data, along with attributes and models for the most robust results. Be sure your data partner has accounted for opt-outs, excludes data precluded by legal or regulatory restrictions and anonymizes data files when linking your customer data. Data accuracy is also imperative here. Choose a big data partner who is constantly monitoring and correcting discrepancies in customer files across all bureaus. The best partners will have data accuracy rates at or above 99.9%.

Solving the business problem around your big data can be a daunting task, but investing in an analytical environment, or sandbox, can offer a solution. Finding the right solution and data partner is critical to your success. As you begin your search for the best sandbox for you, look for solutions that combine operational efficiency, flexibility and support with the most robust national data alongside your own customer data. Are you interested in learning how companies are using sandboxes to make it easier, faster and more cost-effective to drive actionable insights from their data? Join us for this upcoming webinar. Register for the Webinar

Published: October 24, 2018 by Jesse Hoggard

This is an exciting time to work in big data analytics. Here at Experian, we have more than 2 petabytes of data in the United States alone. In the past few years, because of high data volume, more computing power and the availability of open-source code algorithms, my colleagues and I have watched excitedly as more and more companies are getting into machine learning. We’ve observed the growth of competition sites like Kaggle, open-source code-sharing sites like GitHub and various machine learning (ML) data repositories.

We’ve noticed that on Kaggle, two algorithms win over and over at supervised learning competitions: if the data is well-structured, teams that use Gradient Boosting Machines (GBM) seem to win, while for unstructured data, teams that use neural networks win pretty often. Modeling is both an art and a science. Those winning teams tend to be good at what the machine learning people call feature generation and what we credit scoring people call attribute generation.

We have nearly 1,000 expert data scientists in more than 12 countries, many of whom are experts in traditional consumer risk models — techniques such as linear regression, logistic regression, survival analysis, CART (classification and regression trees) and CHAID analysis. So naturally I’ve thought about how GBM could apply in our world. Credit scoring is not quite like a machine learning contest. We have to be sure our decisions are fair and explainable and that any scoring algorithm will generalize to new customer populations and stay stable over time.

Increasingly, clients are sending us their data to see what we could do with newer machine learning techniques. We combine their data with our bureau data and even third-party data, we use our world-class attributes and develop custom attributes, and we see what comes out. It’s fun — like getting paid to enter a Kaggle competition! For one financial institution, GBM armed with our patented attributes found a nearly 5 percent lift in KS when compared with traditional statistics.

At Experian, we use the Extreme Gradient Boosting (XGBoost) implementation of GBM that, out of the box, has regularization features we use to prevent overfitting. But it’s missing some features that we and our clients count on in risk scoring. Our Experian DataLabs team worked with our Decision Analytics team to figure out how to make it work in the real world. We found answers for a couple of important issues:

Monotonicity — Risk managers count on the ability to impose what we call monotonicity. In application scoring, applications with better attribute values should score as lower risk than applications with worse values. For example, if consumer Adrienne has fewer delinquent accounts on her credit report than consumer Bill, all other things being equal, Adrienne’s machine learning score should indicate lower risk than Bill’s score.

Explainability — We were able to adapt a fairly standard “Adverse Action” methodology from logistic regression to work with GBM.

There has been enough enthusiasm around our results that we’ve just turned it into a standard benchmarking service. We help clients appreciate the potential for these new machine learning algorithms by evaluating them on their own data. Over time, the acceptance and use of machine learning techniques will become commonplace among model developers as well as internal validation groups and regulators.
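To make the monotonicity and regularization points concrete, here is a minimal sketch using the open-source XGBoost Python package. The attribute names, toy data and parameter values are illustrative assumptions only, not Experian’s production attributes, settings or methodology.

```python
# Minimal sketch (illustrative only): imposing monotonicity and regularization
# in XGBoost for a toy risk model. Attributes and data are made up.
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.poisson(1.5, n),      # delinquent accounts (more -> higher risk)
    rng.uniform(0, 1, n),     # utilization ratio   (higher -> higher risk)
    rng.integers(0, 360, n),  # months on file      (more -> lower risk)
])
p_bad = 1 / (1 + np.exp(-(0.6 * X[:, 0] + 2.0 * X[:, 1] - 0.01 * X[:, 2] - 1.0)))
y = rng.binomial(1, p_bad)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = xgb.XGBClassifier(
    n_estimators=300,
    max_depth=3,
    learning_rate=0.05,
    reg_lambda=1.0,                    # L2 regularization to limit overfitting
    reg_alpha=0.1,                     # L1 regularization
    monotone_constraints=(1, 1, -1),   # predicted risk may not fall as the first two
                                       # attributes rise, and may not rise as months
                                       # on file grows
)
model.fit(X_dev, y_dev)

auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```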
Whether you’re a data scientist looking for a cool place to work or a risk manager who wants help evaluating the latest techniques, check out our weekly data science video chats and podcasts.

Published: October 24, 2018 by Guest Contributor

Electric vehicles are here to stay – and will likely gain market share as costs decline, travel ranges increase and charging infrastructure grows.

Published: October 24, 2018 by Brad Smith

If your company is like many financial institutions, the discussion around big data and financial analytics has likely been an ongoing conversation. For many financial institutions, data isn’t the problem, but rather what could or should be done with it. Research has shown that only about 30% of financial institutions are successfully leveraging their data to generate actionable insights, and customers are noticing. According to a recent study from Capgemini, only 30% of US customers and 26% of UK customers feel like their financial institutions understand their needs. No matter how much data you have, it’s essentially just ones and zeroes if you’re not using it.

So how do banks, credit unions and other financial institutions that capture and consume vast amounts of data use that data to innovate, improve the customer experience and stay competitive? The answer, you could say, is written in the sand. The most forward-thinking financial institutions are turning to analytical environments, also known as sandboxes, to solve the business problem of big data. As the name suggests, a sandbox is an environment that contains all the materials and tools one might need to create, build and collaborate around their data.

A sandbox gives data-savvy banks, credit unions and FinTechs access to depersonalized credit data from across the country. Using custom dashboards and data visualization tools, they can manipulate the data with predictive models for different micro- and macro-level scenarios. The added value of a sandbox is that it becomes a one-stop-shop data tool for the entire enterprise. This saves the time normally required in the back and forth of acquiring data for a specific project or particular data sets. The best systems utilize the latest open-source technology in artificial intelligence and machine learning to deliver intelligence that can inform regional trends and consumer insights and highlight market opportunities: from industry benchmarking to market entry and expansion research, campaign performance, vintage analysis, reject inferencing and much more.

An analytical sandbox gives you the data to create actionable analytics and insights across the enterprise right when you need it, not months later. The result is the ability to empower your customers to make financial decisions when, where and how they want. Keeping them happy keeps your financial institution relevant and competitive. Isn’t it time to put your data to work for you? Learn more about how Experian can solve your big data problems. >> Interested to see a live demo of the Ascend Sandbox? Register today for our webinar “Big Data Can Lead to Even Bigger ROI with the Ascend Sandbox.”

Published: October 4, 2018 by Jesse Hoggard

Big Data is no longer a new concept. Once thought to be an overhyped buzzword, it now underpins and drives billions of dollars in revenue across nearly every industry. But there are still companies that are not fully leveraging the value of their big data, and that’s a big problem.

In a recent study, Experian and Forrester surveyed nearly 600 business executives in charge of enterprise risk, analytics, customer data and fraud management. The results were surprising: while 78% of organizations said they have made recent investments in advanced analytics, like the proverbial strategic plan sitting in a binder on a shelf, only 29% felt they were successfully using these investments to combine data sources and gather more insights. Moreover, 40% of respondents said they still rely on instinct and subjectivity when making decisions. While gut feeling and industry experience should be a part of your decision-making process, without data and models to verify or challenge your assumptions, you’re taking a big risk with bigger operations budgets and revenue targets.

Meanwhile, customer habits and demands are quickly evolving at a fundamental level. The proliferation of mobile and online environments is driving a paradigm shift to omnichannel banking in the financial sector and, with it, an expectation for a customized but also digitized customer experience. Financial institutions have to be ready to respond to and anticipate these changes to not only gain new customers but also retain current customers. Moreover, you can bet that your competition is already thinking about how to respond to this shift and better leverage their data and analytics for increased customer acquisition and engagement, share of wallet and overall reach. According to a recent Accenture study, 79% of enterprise executives agree that companies that fail to embrace big data will lose their competitive position and could face extinction. What are you doing to solve the business problem around big data and keep your company competitive?

Published: September 27, 2018 by Jesse Hoggard

Machine learning (ML), the newest buzzword, has swept into the lexicon and captured the interest of us all. Its recent, widespread popularity has stemmed mainly from the consumer perspective. Whether it’s virtual assistants, self-driving cars or romantic matchmaking, ML has rapidly positioned itself in the mainstream.

Though ML may appear to be a new technology, its use in commercial applications has been around for some time. In fact, many of the data scientists and statisticians at Experian are considered pioneers in the field of ML, going back decades. Our team has developed numerous products and processes leveraging ML, from our world-class consumer fraud and ID protection to credit data products like our Trended 3D™ attributes. In fact, we were just highlighted in the Wall Street Journal for how we’re using machine learning to improve our internal IT performance.

ML’s ability to consume vast amounts of data to uncover patterns and deliver results that are not otherwise humanly possible is what makes it unique and applicable to so many fields. This predictive power has now sparked interest in the credit risk industry. Unlike fraud detection, where ML is well-established and used extensively, credit risk modeling has until recently taken a cautionary approach to adopting newer ML algorithms. Because of regulatory scrutiny and a perceived lack of transparency, ML hasn’t experienced the same broad acceptance as some of credit risk modeling’s more utilized applications.

When it comes to credit risk models, delivering the most predictive score is not the only consideration for a model’s viability. Modelers must be able to explain and detail the model’s logic, or its “thought process,” for calculating the final score. This means taking steps to ensure the model’s compliance with the Equal Credit Opportunity Act, which forbids discriminatory lending practices. Federal laws also require adverse action responses to be sent by the lender if a consumer’s credit application has been declined, which means the model must be able to highlight the top reasons for a less than optimal score. And so, while ML may be able to deliver the best predictive accuracy, its ability to explain how the results are generated has always been a concern. ML has been stigmatized as a “black box,” where data mysteriously gets transformed into the final predictions without a clear explanation of how.

However, this is changing. Depending on the ML algorithm applied to credit risk modeling, we’ve found risk models can offer the same transparency as more traditional methods such as logistic regression. For example, gradient boosting machines (GBMs) are designed as a predictive model built from a sequence of several decision tree submodels. The very nature of GBMs’ decision tree design allows statisticians to explain the logic behind the model’s predictive behavior. We believe model governance teams and regulators in the United States may become comfortable with this approach more quickly than with deep learning or neural network algorithms, since GBMs are represented as sets of decision trees that can be explained, while neural networks are represented as long sets of cryptic numbers that are much harder to document, manage and understand.

In future blog posts, we’ll discuss the GBM algorithm in more detail and how we’re using its predictability and transparency to maximize credit risk decisioning for our clients.
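As an illustration of the kind of transparency a tree-based GBM can offer, the sketch below uses the open-source XGBoost package to decompose a single applicant’s score into additive per-feature contributions and rank the features that pushed the score toward higher risk. It is a simplified, hypothetical example; the feature names, data and reason-code logic are assumptions for illustration, not Experian’s adverse action methodology.

```python
# Hypothetical sketch: decomposing a GBM score into per-feature contributions
# that could feed adverse-action-style reason codes. Features, data and the
# ranking rule are illustrative assumptions only.
import numpy as np
import xgboost as xgb

feature_names = ["num_delinquencies", "utilization_ratio", "months_on_file"]
rng = np.random.default_rng(7)
X = rng.random((1000, 3))
logit = 3 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

dtrain = xgb.DMatrix(X, label=y, feature_names=feature_names)
booster = xgb.train({"objective": "binary:logistic", "max_depth": 3},
                    dtrain, num_boost_round=100)

# pred_contribs=True returns one additive contribution per feature (plus a
# bias term in the last column) for each scored record.
applicant = xgb.DMatrix(X[:1], feature_names=feature_names)
contribs = booster.predict(applicant, pred_contribs=True)[0]
per_feature = contribs[:-1]  # drop the bias term

# Rank the features that pushed this applicant's predicted risk upward the most.
order = np.argsort(per_feature)[::-1]
top_reasons = [feature_names[i] for i in order if per_feature[i] > 0][:2]
print("Top reasons for elevated risk:", top_reasons)
```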

Published: September 12, 2018 by Alan Ikemura

The August 2018 LinkedIn Workforce Report states some interesting facts about data science and the current workforce in the United States. Demand for data scientists is off the charts, but there is a data science skills shortage in almost every U.S. city — particularly in the New York, San Francisco and Los Angeles areas. Nationally, there is a shortage of more than 150,000 people with data science skills.

One way companies in financial services and other industries have coped with the skills gap in analytics is by using outside vendors. A 2017 Dun & Bradstreet and Forbes survey reported that 27 percent of respondents cited a skills gap as a major obstacle to their data and analytics efforts. Outsourcing data science work makes it easier to scale up and scale down as needs arise. And surprisingly, more than half of respondents said the third-party work was superior to their in-house analytics. At Experian, we have participated in quite a few outsourced analytics projects. Here are a few of the lessons we’ve learned along the way:

Manage expectations: Everyone has their own management style, but to be successful, you must be proactively involved in managing the partnership with your provider. Doing so will keep them aligned with your objectives and prevent quality degradation or cost increases as you become more tied to them.

Communication: Creating open and honest communication between executive management and your resource partner is key. You need to be able to discuss what is working well and what isn’t. This will help ensure your partner has a thorough understanding of your goals and objectives and will properly manage any bumps in the road.

Help external resources feel like part of the team: When you’re working with external resources, either offshore or onshore, they are typically in another location. This can make them feel like they aren’t part of the team and therefore not directly tied to the business goals of the project. To help bridge the gap, regular status meetings via video conference can help everyone feel included. Within these meetings, share the goals and objectives of the project. Hearing the message directly from you will make external resources feel more involved and give them a clear understanding of what they need to do to be successful. Being able to put faces to names, as well as having direct communication with you, also helps.

Drive engagement through recognition programs: Research has shown that employees are more engaged in their work when they receive recognition for their efforts. While you may not be able to provide a monetary award, recognition is still a big driver of engagement. It can be as simple as recognizing a job well done during your video conference meetings, providing certificates of excellence or sending a simple thank-you card to those who are performing well. Taking the extra time to make your external workforce feel appreciated will produce engaged resources that help drive your business goals forward.

Industry training: Your external resources may have the skills needed to perform the job successfully, but they may not have specific industry knowledge geared toward your business. Work with your partner to determine where they have expertise and where you can work together to provide training, and ensure your external workforce has a solid understanding of the business line they will be supporting.
If you’ve decided to augment your staff for your next big project, Experian® can help. Our Analytics on Demand™ service provides senior-level analysts, either onshore or offshore, who can help with analytical data science and modeling work for your organization.

Published: September 5, 2018 by Guest Contributor

As more financial institutions express interest in and leverage alternative credit data sources to decision and assess consumers, lenders want to be assured of how they can best utilize this data source and maintain compliance. Experian recently interviewed Philip Bohi, Vice President for Compliance Education for the American Financial Services Association (AFSA), to learn more about his perspective on this topic, as well as to gain insights on what lenders should consider as they dive into the world of alternative credit data.

Alternative data continues to be a hot topic in the financial services space. How have you seen it evolve over the past few years?

It’s hard to pinpoint where it began, but it has been interesting to observe how technology firms and people have changed our perceptions of the value and use of data in recent years. Earlier, a company’s data was just the information needed to conduct business. It seems like people are waking up to the realization that their business data can be useful internally, as well as to others. And we have come to understand how previously disregarded data can be profoundly valuable. These insights provide a lot of new opportunities, but also new questions. I would also say that the scope of alternative credit data use has changed. A few years ago, alternative credit data was a tool to largely address the thin- and no-file consumer. More recently, we’ve seen it can provide a lift across the credit spectrum.

We recently conducted a survey with lenders, and 23% of respondents cited “complying with laws and regulations” as the top barrier to utilizing alternative data. Why do you think this is the case? What are the top concerns you hear from lenders as it relates to compliance on this topic?

The consumer finance industry is very focused on compliance, because failure to maintain compliance can kill a business, either directly through fines and expenses, or through reputation damage. Concerns about alternative data come from a lack of familiarity. There is uncertainty about acquiring the data, using the data, safeguarding the data, selling the data, etc. Companies want to feel confident that they know where the limits are in creating, acquiring, using, storing and selling data.

Alternative data is a broad term. When it comes to utilizing it for making a credit decision, what types of alternative data can actually be used?

Currently the scope is somewhat limited. I would describe the alternative data elements as being analogous to traditional credit data. Alternative data includes rent payments, utility payments, cell phone payments, bank deposits and similar records. These provide important insights into whether a given consumer is keeping up with financial obligations. And most importantly, we are seeing that the particular types of obligations reflected in alternative data reflect the spending habits of people whose traditional credit files are thin or non-existent. This is a good thing, as alternative data captures consumers who are paying their bills consistently earlier than traditional data does. Serving those customers is a great opportunity.

If a lender wants to begin utilizing alternative credit data, what must they know from a compliance standpoint?

I would begin with considering what the lender’s goal is and letting that guide how it will explore using alternative data. For some companies, accessing credit scores that include some degree of alternative data along with traditional data elements is enough. Just doing that provides a good business benefit without introducing a lot of additional risk as compared to using traditional credit score information. If the company wants to start leveraging its own customer data for its own purposes, or making it available to third parties, that becomes complex very quickly. A company can find itself subject to all the regulatory burdens of a credit reporting agency very quickly. In any case, the entire lifecycle of the data has to be considered, along with how the data will be protected when the data is “at rest,” “in use” or “in transit.” Alternative data used for credit assessment should additionally be FCRA-compliant.

How do you see alternative credit data evolving in the future?

I cannot predict where it will go, but the unfettered potential is dizzying. Think about how DNA-based genealogy has taken off, telling folks they have family members they did not know and providing information to solve old crimes. I think we need to carefully balance personal privacy and prudent uses of customer data. There is also another issue with wide-ranging uses of new data. I contend it takes time to discern whether an element of data is accurately predictive. Consider for a moment a person’s utility bills. If electricity usage in a household goes down when the bills in the neighborhood are going up, what does that tell us? Does it mean the family is under some financial strain and using the air conditioning less? Or does it tell us they had solar panels installed? Or they’ve been on vacation? Figuring out what a particular piece of data means about someone’s circumstances can be difficult.

About Philip Bohi

Philip joined AFSA in 2017 as Vice President, Compliance Education. He is responsible for providing strategic direction and leadership for the Association’s compliance activities, including AFSA University, and is the staff liaison to the Operations and Regulatory Compliance Committee and Technology Task Forces. He brings significant consumer finance legal and compliance experience to AFSA, having served as in-house counsel at Toyota Motor Credit Corporation and Fannie Mae. At those companies, Philip worked closely with compliance staff supporting technology projects, legislative tracking and vendor management. His private practice included work on manufactured housing, residential mortgage compliance and consumer finance matters at McGlinchey Stafford, PLLC and Lotstein Buckman, LLP. He is a member of the Virginia State Bar and the District of Columbia Bar.

Learn more about the array of alternative credit data sources available to financial institutions.

Published: July 18, 2018 by Kerry Rivera

As I mentioned in my previous blog, model validation is an essential step in evaluating a recently developed predictive model’s performance before finalizing and proceeding with implementation. An in-time validation sample is created to set aside a portion of the total model development sample so the predictive accuracy can be measured on a data sample not used to develop the model. However, if few records in the target performance group are available, splitting the total model development sample into the development and in-time validation samples will leave too few records in the target group for use during model development. An alternative approach to generating a validation sample is to use a resampling technique. There are many different types and variations of resampling methods. This blog will address a few common techniques.

Jackknife technique — An iterative process whereby an observation is removed from each subsequent sample generation. So if there are N observations in the data, jackknifing calculates the model estimates on N different samples, each having N - 1 observations. The model then is applied to each sample, and an average of the model predictions across all samples is derived to generate an overall measure of model performance and prediction accuracy. The jackknife technique can be broadened so that a group of observations is removed from each subsequent sample generation, while giving each observation in the data set equal opportunity for inclusion and exclusion.

K-fold cross-validation — Generates multiple validation data sets from the holdout sample created for the model validation exercise, i.e., the holdout data is split into K subsets. The model then is applied to the K validation subsets, with each subset held out during the iterative process as the validation set while the model scores the remaining K - 1 subsets. Again, an average of the predictions across the multiple validation samples is used to create an overall measure of model performance and prediction accuracy.

Bootstrap technique — Generates subsets from the full model development data sample, with replacement, producing multiple samples generally of equal size. Thus, with a total sample size of N, this technique generates multiple random samples, each of size N, such that a single observation can be present in multiple subsets while another observation may not be present in any of the generated subsets. The generated samples are combined into a simulated larger data sample that then can be split into a development and an in-time, or holdout, validation sample.

Before selecting a resampling technique, it’s important to check and verify data assumptions for each technique against the data sample selected for your model development, as some resampling techniques are more sensitive than others to violations of data assumptions. Learn more about how Experian Decision Analytics can help you with your custom model development.
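For readers who want to experiment, here is a minimal sketch of two of the techniques above (k-fold cross-validation and a bootstrap with out-of-bag validation) using open-source scikit-learn. The data and model are illustrative placeholders, not a production credit risk setup.

```python
# Minimal sketch (illustrative only): k-fold cross-validation and a bootstrap
# with out-of-bag validation, using made-up data and a simple model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)
model = LogisticRegression(max_iter=1000)

# K-fold cross-validation: average performance across K held-out folds.
kfold_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("5-fold mean AUC:", round(float(kfold_auc.mean()), 3))

# Bootstrap: sample with replacement, validate on the out-of-bag records.
boot_auc = []
for seed in range(50):
    idx = resample(np.arange(len(y)), replace=True, random_state=seed)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    model.fit(X[idx], y[idx])
    boot_auc.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))
print("Bootstrap mean AUC:", round(float(np.mean(boot_auc)), 3))
```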

Published: July 5, 2018 by Guest Contributor

An introduction to the different types of validation samples

Model validation is an essential step in evaluating and verifying a model’s performance during development before finalizing the design and proceeding with implementation. More specifically, during a predictive model’s development, the objective of a model validation is to measure the model’s accuracy in predicting the expected outcome. For a credit risk model, this may be predicting the likelihood of good or bad payment behavior, depending on the predefined outcome. Two general types of data samples can be used to complete a model validation. The first is known as the in-time, or holdout, validation sample and the second is known as the out-of-time validation sample. So, what’s the difference between an in-time and an out-of-time validation sample?

An in-time validation sample sets aside part of the total sample made available for the model development. Random partitioning of the total sample is completed upfront, generally separating the data into a portion used for development and the remaining portion used for validation. For instance, the data may be randomly split, with 70 percent used for development and the other 30 percent used for validation. Other common data subset schemes include an 80/20, a 60/40 or even a 50/50 partitioning of the data, depending on the quantity of records available within each segment of your performance definition. Before selecting a data subset scheme to be used for model development, you should evaluate the number of records available in your target performance group, such as the number of bad accounts. If you have too few records in your target performance group, a 50/50 split can leave you with insufficient performance data for use during model development. A separate blog post will present a few common options for creating alternative validation samples through a technique known as resampling.

Once the data has been partitioned, the model is created using the development sample. The model is then applied to the holdout validation sample to determine the model’s predictive accuracy on data that wasn’t used to develop the model. The model’s predictive strength and accuracy can be measured in various ways by comparing the known and predefined performance outcome to the model’s predicted performance outcome.

The out-of-time validation sample contains data from an entirely different time period or customer campaign than what was used for model development. Validating model performance on a different time period is beneficial to further evaluate the model’s robustness. Selecting a data sample from a more recent time period having a fully mature set of performance data allows the modeler to evaluate model performance on a data set that may more closely align with the current environment in which the model will be used. In this case, a more recent time period can be used to establish expectations and set baseline parameters for model performance, such as population stability indices and performance monitoring. Learn more about how Experian Decision Analytics can help you with your custom model development needs.
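As a concrete illustration of the in-time (holdout) split described above, here is a minimal sketch using pandas and scikit-learn with a hypothetical 70/30 partition. The column names and the 5% bad rate are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a 70/30 in-time holdout partition.
# Column names and the 5% bad rate are made-up assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "bad": [0] * 9500 + [1] * 500,       # toy performance flag: 5% bad rate
    "utilization_ratio": range(10000),   # stand-in predictor
})

# Random 70/30 partition of the total development sample.
dev, holdout = train_test_split(df, test_size=0.30, random_state=0)
# (stratify=df["bad"] is a common variation that preserves the bad rate in both parts.)

# Check that the target performance group is large enough in each partition
# before committing to a 70/30 (or 50/50) scheme.
print("Development bads:", int(dev["bad"].sum()))
print("Holdout bads:", int(holdout["bad"].sum()))
```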

Published: June 18, 2018 by Guest Contributor

Data is a part of a lot of conversations in both my professional and personal life. Everything around us is creating data – whether it’s usable or not is a business case for opportunity. Think about how many times a day you access the television, your phone, iPad or computer. Have a smart fridge? More data. Drive a car? More data. It’s all around us and can help us make more informed decisions.

What is exciting to me are the new techniques and technologies, like machine learning, artificial intelligence and SaaS-based applications, that are becoming more accessible to lenders for use in managing their relationships with customers. This means lenders – whether a multi-national bank, online lender, regional bank or credit union – can make better use of the data they have about their customers.

Let’s look at two groups – Gen-X and Millennials – who tend to be more transient than past generations. They rent, not buy. They are brand loyal but will flip quickly if the experience or their expectations aren’t met. They live out their lives on social media yet know the value of their information. We’re just now starting to get to know the next generation, Gen Z. Can you imagine making individual customer decisions at a large scale on a population with so many characteristics to consider? With machine learning and new technologies available, alternative data – such as social media, visual and video data – can become an important input to knowing when, where and what financial product you offer. And make the offer quickly! This is a stark change from the days when decisions were based on binary inputs, or rather, simple yes/no answers. And it took 1-3 days (or sometimes weeks) to make an offer.

More and more consumers are considering nontraditional banks because they offer the personalization and speed to which consumers have become accustomed. We can thank the Amazons of the world for setting the bar high. The reality is that lenders must evolve their systems and processes to better utilize big data and the insights that machine learning and artificial intelligence can offer at the speed of cloud-based applications. Digitization threatens to lower profits in the finance industry unless traditional banks undertake innovation initiatives centered on better servicing the customer. In plain speak – banks need to innovate like a FinTech – simplify the products and create superior customer experiences. Machine learning and artificial intelligence can be a way to use data for making more informed decisions faster that deliver better experiences and distinguish your business from the next.

Prior to Experian, I spent some time at a start-up before it was acquired by one of the large multi-national payment processors. Energizing is a word that comes to mind when I think back to those days. And it’s a feeling I have today at Experian. We’re taking innovation to heart – investing a lot in revolutionary technology and visionary people. The energy is buzzing and it’s an exciting place to be. As a former customer of 20 years turned employee, I’ve started to think Experian will transform the way we think about cool tech companies!

Published: June 15, 2018 by Robert Boxberger

According to our recent research for the State of Alternative Credit Data, more lenders are using alternative credit data to determine if a consumer is a good or bad credit risk. In fact, when it comes to making decisions:

More than 50% of lenders verify income, employment and assets, as well as check public records, before making a credit decision.

78% of lenders believe factoring in alternative data allows them to extend credit to consumers who otherwise would be declined.

70% of consumers are willing to provide additional financial information to a lender if it increases their chance for approval or improves their interest rate.

The alternative financial services space continues to grow with products like payday loans, rent-to-own products, short-term loans and more. By including alternative financial data, all types of lenders can explore both universe expansion and risk mitigation. State of Alternative Credit Data

Published: May 25, 2018 by Guest Contributor

Alternative credit data. Enhanced digital credit marketing. Faster, integrated decisioning. Fraud and identity protections. The latest in technology innovation. These were the themes Craig Boundy, Experian’s CEO of North America, imparted to an audience of 800-plus Vision guests on Monday morning. “Technology, innovation and new sources of data are fusing to create an unprecedented number of new ways to solve pressing business challenges,” said Boundy. “We’re leveraging the power of data to help people and businesses thrive in the digital economy.” Main stage product demos took the shape of dark web scans, data visualization and the latest in biometric fraud scanning. Additionally, a diverse group of breakout sessions showcased all-new technology solutions and telling stats about how the economy is faring in 2018, as well as consumer credit trends and preferences. A few interesting storylines of the day:

Regulatory: Under the Trump administration, everyone is talking about deregulation, but how far will the pendulum swing? Experian Sr. Director of Regulatory Affairs Liz Oesterle told audience members that Congress will likely pass a bill within the next few days, offering relief to small and mid-sized banks and credit unions. Under the new regulations, these smaller players will no longer have to hold as much capital to cover losses on their balance sheets, nor will they be required to have plans in place to be safely dismantled if they fail. That trigger, now set at $50 billion in assets, is expected to rise to $250 billion.

Fraud: Alex Lintner, Experian’s President of Consumer Information Services, reported there were 16.7 million identity theft victims in 2017, resulting in $16.8 billion in losses. Need more to fear? There are also a reported 323,000 new malware samples found each day. Multiple sessions touched on evolving best practices in authentication, which are quickly shifting to biometrics-based solutions. Personally identifiable information (PII) must be strengthened. Driver’s licenses, Social Security numbers, date of birth – these formats are no longer enough. Get ready for eye scans, as well as voice and photo recognition.

Emerging Consumers: The quest to understand the up-and-coming Millennials continues. One noteworthy stat: 42% of Millennials said they would conduct more online transactions if there weren’t so many security hurdles to overcome. So, while businesses and lenders are trying to do more to authenticate and strengthen security, it’s a delicate balance for Millennials who still expect an easy and turnkey customer experience. Gen Z, also known as Centennials, is now the largest generation, at 28% of the population. While they are just coming onto the credit scene, these digital natives will shape it for decades to come. More than ever, think mobile-first. And consider this: it’s estimated that 25% of shopping malls will be closed within five years. Gen Z isn’t shopping the mall scene. Retail is changing rapidly!

Economy: Mortgage originations are trending up. Consumer confidence, investor confidence, interest rates and home sales are all positive. Unemployment remains low. Bankcard originations have now surpassed the 2007 peak. Experian’s Vice President of Analytics Michele Raneri had glowing remarks on the U.S. economy, with all signs pointing to a positive 2018 across the board. Small business loan volumes are also up 10% year-to-date versus the same time last year. Keynote presenters speculate there could be three to four rate hikes within the year, but after years of no hikes, it’s time.

Data: There are 2.5 quintillion pieces of data created daily. And 80% of what we know about a consumer today is the result of data generated within the past year. While there is no denying there is a LOT of data, presenters throughout the day talked about the importance of access and speed. Value comes with more APIs to seamlessly connect, as well as data visualization solutions like Tableau to make the data easier to understand.

More Vision news to come. Gain insights and news throughout the day by following #ExperianVision on Twitter.

Published: May 21, 2018 by Kerry Rivera

The traditional credit score has ruled the financial services space for decades, but it’s clear the way in which consumers are managing their money and credit has evolved. Today’s consumers are utilizing different types of credit via various channels. Think fintech. Think short-term loans. Think cash-checking services and payday. So, how do lenders gain more visibility to a consumer’s credit worthiness in 2018? Alternative credit data has surfaced to provide a more holistic view of all consumers – those on the traditional file and those who are credit invisibles and emerging.

In an all-new report, Experian dives into “The State of Alternative Credit Data,” providing in-depth coverage on how alternative credit data is defined, regulatory implications, consumer personas attached to the alternative financial services industry, and how this data complements traditional credit data files. “Alternative credit data can take the shape of alternative finance data, rental, utility and telecom payments, and various other data sources,” said Paul DeSaulniers, Experian’s senior director of Risk Scoring and Trended/Alternative Data and attributes. “What we’ve seen is that when this data becomes visible to a lender, suddenly a much more comprehensive consumer profile is formed. In some instances, this helps them offer consumers new credit opportunities, and in other cases it might illuminate risk.”

In a national Experian survey, 53% of consumers said they believe some of these alternative sources like utility bill payment history, savings and checking account transactions, and mobile phone payments would have a positive effect on their credit score. Of the lenders surveyed, 80% said they rely on a credit report, plus additional information when making a lending decision. They cited assessing a consumer’s ability to pay, underwriting insights and being able to expand their lending universe as the top three benefits to using alternative credit data.

The paper goes on to show how layering in alternative finance data could allow lenders to identify the consumers they would like to target, as well as suppress those that are higher risk. “Additional data fields prove to deliver a more complete view of today’s credit consumer,” said DeSaulniers. “For the credit invisible, the data can show lenders should take a chance on them. They may suddenly see a steady payment behavior that indicates they are worthy of expanded credit opportunities.” An “unscoreable” individual is not necessarily a high credit risk — rather they are an unknown credit risk. Many of these individuals pay rent on time and in full each month and could be great candidates for traditional credit. They just don’t have a credit history yet.

The in-depth report also explores the future of alternative credit data. With more than 90 percent of the data in the world having been generated in just the past five years, there is no doubt more data sources will emerge in the coming years. Not all will make sense in assessing credit decisions, but there will definitely be new ways to capture consumer-permissioned data to benefit both consumer and lender. Read Full Report

Published: May 21, 2018 by Kerry Rivera

Marketers are keenly aware of how important it is to “Know thy customer.” Yet customer knowledge isn’t restricted to the marketing-savvy. It’s also essential to credit risk managers and model developers. Identifying and separating customers into distinct groups based on various types of behavior is foundational to building effective custom models. This integral part of custom model development is known as segmentation analysis.

Segmentation is the process of dividing customers or prospects into groupings based on similar behaviors, such as length of time as a customer or payment patterns like credit card revolvers versus transactors. The more similar, or homogeneous, the customer grouping, the less variation is included in each segment’s custom model development. So how many scorecards are needed to aptly score and mitigate credit risk? There are several general principles we’ve learned over the course of developing hundreds of models that help determine whether multiple scorecards are warranted and, if so, how many.

A robust segmentation analysis contains two components. The first is the generation of potential segments, and the second is the evaluation of such segments. Here I’ll discuss the generation of potential segments within a segmentation scheme. A second blog post will continue with a discussion on the evaluation of such segments.

When generating a customer segmentation scheme, several approaches are worth considering: heuristic, empirical and combined. A heuristic approach considers business learnings obtained through trial and error or experimental design. Portfolio managers will have insight on how segments of their portfolio behave differently that can, and often should, be included within a segmentation analysis. An empirical approach is data-driven and involves the use of quantitative techniques to evaluate potential customer segmentation splits. During this approach, statistical analysis is performed to identify forms of behavior across the customer population. Different interactive behavior for different segments of the overall population will correspond to different predictive patterns for these predictor variables, signifying that separate segment scorecards will be beneficial. Finally, a combination of heuristic and empirical approaches considers both the business needs and data-driven results.

Once the set of potential customer segments has been identified, the next step in a segmentation analysis is the evaluation of those segments. Stay tuned as we look further into this topic. Learn more about how Experian Decision Analytics can help you with your segmentation or custom model development needs.
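To give a flavor of the empirical approach described above, here is a minimal sketch that clusters customers on a few behavioral attributes with k-means from open-source scikit-learn. The attributes, cluster count and data are illustrative assumptions; a real segmentation analysis would also evaluate each candidate split against predictive performance, as discussed here and in the follow-up post.

```python
# Minimal sketch (illustrative only): empirical segment generation via k-means
# on a few made-up behavioral attributes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
behavior = np.column_stack([
    rng.integers(1, 240, 3000),   # months as a customer
    rng.uniform(0, 1, 3000),      # share of balance revolved vs. paid in full
    rng.poisson(2, 3000),         # delinquencies in the past 24 months
])

# Standardize so no single attribute dominates the distance calculation,
# then propose four candidate segments.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(behavior)
)
print("Customers per candidate segment:", np.bincount(segments))
```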

Published: April 26, 2018 by Guest Contributor
