Sunday, August 9, 2015

Big-data analytics for lenders and creditors


Credit today is granted by a variety of organizations: banks, building societies, retailers, mail order companies, utilities and others. Driven by growing demand, stronger competition and advances in computer technology, over the last 30 years traditional methods of making credit decisions that rely mostly on human judgment have largely been replaced by methods that rely on statistical models. Such statistical models are used not only for deciding whether to accept an applicant (application scoring), but also to predict the likely default of customers that have already been accepted (behavioral scoring) and to predict the likely amount of debt that the lender can expect to recover (collection scoring).

The term credit scoring can be defined on several conceptual levels. Most fundamentally, credit scoring means applying a statistical model to assign a risk score to a credit application or to an existing credit account. On a higher level, credit scoring also means the process of developing such a statistical model from historical data. On yet a higher level, the term refers to monitoring the accuracy of one or many such statistical models and monitoring the effect that score-based decisions have on key business performance indicators.

Credit scoring is performed because it provides a number of important business benefits, all of them based on the ability to quickly and efficiently obtain fact-based, accurate predictions of the credit risk of individual applicants or customers. In application scoring, for example, credit scores are used for optimizing the approval rate for credit applications. Application scores enable the organization to choose an optimal cut-off score for acceptance, such that market share can be gained while retaining maximum profitability. The approval process and the marketing of credit products can be streamlined based on credit scores: high-risk applications can, for example, be routed to more experienced staff, or pre-approved credit products can be offered to selected low-risk customers via various channels, including direct marketing and the Web.

Credit scores, both of prospects and existing customers, are essential in the customization of credit products. They are used for determining custom credit limits, down payments or deposits and interest rates. Behavioral credit scores of existing customers are used in the early detection of high risk accounts and enable the organization to perform targeted interventions, for example by pro-actively offering debt restructuring. Behavioral credit scores also form the basis for more accurate calculations of the total consumer credit risk exposure, which can result in a reduction of bad debt provision.

Other benefits of credit scoring include an improved targeting of audits at high-risk accounts, thereby optimizing the workload of the auditing staff. Resources spent on debt collection can be optimized by targeting collection activities at accounts with a high collection score. Collection scores are also used for determining the accurate value of a debt book before it is sold to a collection agency.  Finally, credit scores serve to assess the quality of portfolios intended for acquisition and to compare the quality of business from different channels, regions and suppliers.

Building credit models in-house




While under certain circumstances it is appropriate to buy ‘ready-made’ generic credit models from outside vendors, or to have credit models developed by outside consultants for a specific purpose, maintaining a practice for building credit models in-house offers several advantages. Most directly, it enables the lending organization to profit from economies of scale when many models need to be built, and to afford a greater number of segment-specific models for a greater variety of purposes.

Building up a solid, re-usable and flexible data, knowledge and skill base of its own also makes it easier for the organization to stay consistent in the interpretation of model results and reports and to use a consistent modeling methodology across the whole range of customer-related scores. This results in a reduced turnaround time for the integration of new models, freeing resources to respond more swiftly to new business questions with creative new models and strategies.

Finally, in-house modeling competency is needed to verify the accuracy and analyze the strengths and weaknesses of acquired credit models, to reduce access of outsiders to strategic information and to retain competitive advantage by building up company specific best practices.

 


The larger credit scoring process


Modeling is the process of creating a scoring rule from a set of examples. In order for modeling to be effective, it has to be integrated into a larger process. Let’s look at application scoring. On the input side, before the modeling, the set of example applications has to be prepared. On the output side, after the modeling, the scoring rule has to be executed on a set of new applications, so that credit granting decisions can be made.

The collection of performance data sits at both the beginning and the end of the credit scoring process. Before a set of example applications can be prepared, performance data has to be collected so that applications can be tagged as ‘good’ or ‘bad’. After new applications have been scored and decided upon, the performance of the accepted applications again has to be tracked and reports created, so that the scoring rule can be validated and possibly replaced, the acceptance policy fine-tuned and the current risk exposure calculated.

 


Choosing the right model


With the available analytical technologies it is possible to create a variety of model types, such as scorecards, decision trees or neural networks. When you evaluate which model type is best suited for achieving your goals, you may want to consider criteria such as the ease of applying the model, the ease of understanding it and the ease of justifying it. At the same time, for each particular model of whatever type, it is important to assess its predictive performance: the accuracy of the scores that the model assigns to the applications and the consequences of the accept/reject decisions that it suggests. A variety of business-relevant quality measures, such as concentration, strategy and profit curves, are used for this (see the Model Assessment section below). The best model will therefore be determined both by the purpose for which the model will be used and by the structure of the data set on which it is validated.

 -----------------------------------------------------------------------------------------------------------

Scorecards


The traditional form of a credit scoring model is a scorecard. This is a table that contains a number of questions that an applicant is asked (called characteristics) and, for each such question, a list of possible answers (called attributes). One characteristic may, for example, be the age of the applicant, and the attributes for this characteristic are then a number of age ranges into which an applicant can fall. For each answer, the applicant receives a certain number of points: more if the attribute is one of low risk, fewer if it is one of high risk. If the application’s total score exceeds a specified cut-off number of points, it is recommended for acceptance.

The scorecard, apart from being a long-established method in the industry, still has several advantages when compared with more recent ‘data mining’ types of models, such as decision trees or neural networks. A scorecard is easy to apply: if needed, it can be evaluated on a sheet of paper in the presence of the applicant. It is easy to understand: the number of points for one answer doesn’t depend on any of the other answers, and across the range of possible answers for one question the number of points usually increases in a simple way (often monotonically, or even linearly). It is therefore often also easy to justify a decision made on the basis of a scorecard to the applicant. It is possible to disclose groups of characteristics where the applicant has potential for improving the score, and to do so in broad enough terms not to risk manipulated future applications.
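To make the mechanics concrete, here is a minimal sketch of how such a scorecard could be applied in code. The characteristics, bins, point values and cut-off are invented for illustration and are not taken from the case study.

```python
# Minimal illustration of applying a scorecard: each characteristic maps
# attribute ranges to score points; the total is compared to a cut-off.
# All characteristics, bins, points and the cut-off are invented.

SCORECARD = {
    "age": [(18, 25, 10), (25, 40, 25), (40, 60, 35), (60, 120, 30)],
    "time_on_job_months": [(0, 12, 5), (12, 24, 15), (24, 600, 30)],
}
CUTOFF = 60  # recommend acceptance if the total reaches this many points

def score_application(application: dict) -> int:
    """Sum the points of the attribute that each answer falls into."""
    total = 0
    for characteristic, bins in SCORECARD.items():
        value = application[characteristic]
        for lower, upper, points in bins:
            if lower <= value < upper:
                total += points
                break
    return total

applicant = {"age": 31, "time_on_job_months": 18}
total = score_application(applicant)
print(total, "accept" if total >= CUTOFF else "reject")
```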

 

Scorecard development process



Development sample


The development sample (input data set) is a balanced sample consisting of 1,500 good and 1,500 bad accepted applicants. ‘Bad’ has been defined as having been 90 days past due once. Everyone not ‘bad’ is ‘good’, so there are no ‘indeterminates’. A separate data set contains the data on rejects. The modeling process, especially the validation charts, requires information about the actual good/bad proportion in the accept population. Sampling weights are used here to simulate that proportion: a weight of 30 is assigned to each good application and a weight of 1 to each bad one. Thereafter, all nodes in the process flow diagram treat the sample as if it consisted of 45,000 good applications and 1,500 bad applications. Figure 3 shows the distribution of good/bad after the application of sampling weights; the bad rate is 3.23%. A Data Partition node then splits a 50% validation data set away from the development sample. Models will later be compared based on this validation data set.
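The weighting arithmetic can be reproduced in a few lines; the counts and weights below are exactly those quoted above.

```python
# A balanced development sample of 1,500 goods and 1,500 bads is reweighted
# (30 per good, 1 per bad) to simulate the accept population's good/bad mix.

n_good, n_bad = 1500, 1500
w_good, w_bad = 30, 1

weighted_good = n_good * w_good               # 45,000 goods
weighted_bad = n_bad * w_bad                  # 1,500 bads
bad_rate = weighted_bad / (weighted_good + weighted_bad)
print(f"simulated bad rate: {bad_rate:.2%}")  # 3.23%
```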

 


Classing


Classing is the process of automatically and/or interactively binning and grouping interval, nominal or ordinal input variables in order to

  • manage the number of attributes per characteristic
  • improve the predictive power of the characteristic
  • select predictive characteristics
  • make the Weights of Evidence (and thereby the number of points in the scorecard) vary smoothly, or even linearly, across the attributes

The number of points that an attribute is worth in a scorecard is determined by two factors:

  • the risk of the attribute relative to the other attributes of the same characteristic, and
  • the relative contribution of the characteristic to the overall score

The relative risk of the attribute is determined by its Weight of Evidence. The contribution of the characteristic is determined by its coefficient in a logistic regression (see the Regression section below).

The Weight of Evidence of an attribute is defined as the logarithm of the ratio of the proportion of goods in the attribute over the proportion of bads in the attribute. High negative values therefore correspond to high risk, high positive values to low risk. Since an attribute’s number of points in the scorecard is proportional to its Weight of Evidence (see the Score Points Scaling section below), the classing process determines how many points an attribute is worth relative to the other attributes of the same characteristic.

After classing has defined the attributes of a characteristic, the characteristic’s predictive power, i.e. its ability to separate high risks from low risks, can be assessed with the so-called Information Value measure. This aids the selection of characteristics for inclusion in the scorecard. The Information Value is the weighted sum of the Weights of Evidence of the characteristic’s attributes, where the sum is weighted by the difference between the proportion of goods and the proportion of bads in the respective attribute. The Information Value should be greater than 0.02 for a characteristic to be considered for inclusion in the scorecard. Values between 0.02 and 0.1 can be considered weak, between 0.1 and 0.3 medium, and between 0.3 and 0.5 strong. If the Information Value is greater than 0.5, the characteristic may be over-predicting, meaning that it is in some form trivially related to the good/bad information.

There is no single criterion for when a grouping can be considered satisfactory. A linear, or at least monotone, increase or decrease of the Weights of Evidence is often desired in order for the scorecard to appear plausible. Some analysts will only include characteristics where a sensible re-grouping can achieve this. Others may consider a smooth variation sufficiently plausible and would include a non-monotone characteristic such as income, where risk is high for both high and low incomes but low for medium incomes, provided the Information Value is high enough.
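For concreteness, here is a small sketch that computes the Weights of Evidence and the Information Value of one characteristic from good/bad counts per attribute. The counts are invented; the formulas follow the definitions given above.

```python
import math

# WoE_i = ln( (good_i / total_good) / (bad_i / total_bad) )
# IV    = sum_i (good_i / total_good - bad_i / total_bad) * WoE_i

attributes = {                     # attribute: (goods, bads), invented counts
    "age < 25":       (4000, 300),
    "25 <= age < 40": (18000, 600),
    "age >= 40":      (23000, 600),
}

total_good = sum(g for g, _ in attributes.values())
total_bad = sum(b for _, b in attributes.values())

iv = 0.0
for name, (good, bad) in attributes.items():
    dist_good = good / total_good
    dist_bad = bad / total_bad
    woe = math.log(dist_good / dist_bad)     # negative WoE = high risk
    iv += (dist_good - dist_bad) * woe
    print(f"{name:16s} WoE = {woe:+.3f}")

print(f"Information Value = {iv:.3f}")       # about 0.12 here: medium strength
```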

 


Regression analysis


After the relative risk across attributes of the same characteristic has been quantified, a logistic regression analysis determines how to weigh the characteristics against each other. The Regression node receives one input variable for each characteristic. This variable contains as values the Weights of Evidence of the characteristic’s attributes (see table 1 for an example of Weight of Evidence coding). Note that Weight of Evidence coding is different from dummy variable coding, in that single attributes are not weighted against each other independently; instead, whole characteristics are, thereby preserving the relative risk structure of the attributes as determined in the classing stage.

A variety of selection methods (forward, backward, stepwise) can be used in the Regression node to eliminate redundant characteristics. In our case we use a simple regression. The resulting coefficients are then multiplied with the Weights of Evidence of the attributes to form the basis for the score points in the scorecard.
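As an illustration of this step, the sketch below fits a logistic regression on Weight of Evidence-coded inputs with scikit-learn. This is an assumption made for the example (the workflow described here uses the SAS Enterprise Miner Regression node), and the data is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each column stands in for one characteristic, holding the Weight of Evidence
# of the attribute that the applicant falls into; the data is synthetic.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
true_logit = -3.4 + 1.0 * X[:, 0] + 0.7 * X[:, 1] + 0.4 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)  # 1 = bad

model = LogisticRegression().fit(X, y)
print("coefficient per characteristic:", model.coef_[0])
# These coefficients are what gets multiplied with the attributes' Weights of
# Evidence in the next step to form the basis of the score points.
```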

 

Score points scaling


For each attribute, its Weight of Evidence and the regression coefficient of its characteristic could now be multiplied to give the score points of the attribute. An applicant’s total score would then be proportional to the logarithm of the predicted bad/good odds of that applicant. However, score points are commonly scaled linearly to take more friendly (integer) values and to conform with industry or company standards. We scale the points such that a total score of 600 points corresponds to good/bad odds of 50 to 1 and an increase of 20 points corresponds to a doubling of the good/bad odds. For the derivation of the scaling rule that transforms the score points of each attribute, see equations 3 and 4. The scaling rule is implemented in the Scorecard node (see Figure 1), where it can be easily parameterized. The resulting scorecard is output as an HTML table and is shown in table 2. Note how the score points of the various characteristics cover different ranges. The score points develop smoothly and, with the exception of the ‘Income’ variable, also monotonically across the attributes.
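The two calibration conditions (600 points at good/bad odds of 50:1, and 20 points per doubling of the odds) pin the linear scaling down uniquely. The sketch below derives the resulting factor and offset; it mirrors the standard points-to-double-odds scaling, and the function name is illustrative.

```python
import math

# score = offset + factor * ln(good/bad odds), with
# factor = pdo / ln(2) and offset = target_score - factor * ln(target_odds)
target_score, target_odds, pdo = 600, 50, 20

factor = pdo / math.log(2)                               # about 28.85
offset = target_score - factor * math.log(target_odds)   # about 487.12

def scaled_score(log_odds_good_bad: float) -> float:
    """Map the model's log(good/bad odds) onto the scaled score."""
    return offset + factor * log_odds_good_bad

print(round(scaled_score(math.log(50))))    # 600 points at odds 50:1
print(round(scaled_score(math.log(100))))   # 620: doubled odds, +20 points
```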

 

Reject Inference


The application scoring models we have built so far, even though we have done everything correctly, still suffer from a fundamental bias: they have been built on a population that is structurally different from the population to which they are supposed to be applied. All the example applications in the development sample are applications that were accepted by the old generic scorecard that has been in place for the last two years. This is because only for those accepted applications is it possible to evaluate performance and to define a good/bad variable. However, the through-the-door population that is supposed to be scored is composed of all applicants: those that would have been accepted and those that would have been rejected by the old scorecard. Note that this is only a problem for application scoring, not for behavioral scoring.

As a partial remedy to this fundamental bias, it is common practice to go through a process of reject inference. The idea of this approach is to score the retained data on rejected applications with the model that was built on the accepted applications. Rejects are then classified as inferred goods or inferred bads and are added to the accepts data set that contains the actual goods and bads. This augmented data set then serves as the input data set of a second modeling run. In the case of a scorecard model, this involves re-adjusting the classing and re-calculating the regression coefficients.
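A minimal sketch of the augmentation step, assuming an accepts-trained classifier with a predict_proba method and pandas data sets. The hard cut-off classification of rejects shown here is the simplest variant; practical reject inference often uses more refined assignments (for example, fuzzy augmentation with weighted good and bad copies of each reject). All column names are illustrative.

```python
import numpy as np
import pandas as pd

def augment_with_rejects(accepts: pd.DataFrame, rejects: pd.DataFrame,
                         model, features: list,
                         cutoff_pd: float = 0.5) -> pd.DataFrame:
    """Score rejects with the accepts-only model, tag them as inferred
    good/bad and pool them with the accepts for a second modeling run."""
    p_bad = model.predict_proba(rejects[features])[:, 1]
    rejects = rejects.copy()
    rejects["good_bad"] = np.where(p_bad < cutoff_pd, "good", "bad")  # inferred
    rejects["inferred"] = True
    accepts = accepts.copy()
    accepts["inferred"] = False               # actual, observed performance
    return pd.concat([accepts, rejects], ignore_index=True)
```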

 


------------------------------------------------------------------------------------------------------


Decision Trees


A decision tree, on the other hand, may outperform a scorecard in terms of predictive accuracy, because unlike the scorecard it detects and exploits interactions between characteristics. In a decision tree model, each answer that an applicant gives determines what question he is asked next. If the age of an applicant is, for example, greater than 50, the model may suggest granting credit without any further questions, because the average bad rate of that segment of applications is sufficiently low. If, at the other extreme, the age of the applicant is below 25, the model may suggest asking about time on the job next. Credit might then only be granted to those who have exceeded 24 months of employment, because only in that sub-segment of young applicants is the average bad rate sufficiently low. A decision tree model thus consists of a set of if-then-else rules that are still quite straightforward to apply.

The decision rules are also easy to understand, maybe even more so than a decision rule based on a total score made up of many components. However, a decision rule from a tree model, while easy to apply and easy to understand, may be hard to justify for applications that lie on the border between two segments. There will be cases where an applicant will, for example, say: ‘If I had only been 2 months older I would have received credit without further questions, but now I am asked for additional securities. That is unfair.’ That applicant may also be tempted to make a false statement about his age in his next application.

Even if a decision tree is not used directly for scoring, this model type still adds value in a number of ways: the identification of clearly defined segments of applicants with a particularly high or low risk can give dramatic new insight into the risk structure of the population. Decision trees are also used in scorecard monitoring, where they identify segments of applications where the scorecard underperforms.
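The example segments above translate directly into explicit rules. In the sketch below, the thresholds of the two age segments come from the text; the ‘refer’ branch for the middle age band is an invented addition to complete the rule.

```python
def tree_decision(age: int, months_on_job: int) -> str:
    """The tree from the text, written as nested if-then-else rules."""
    if age > 50:
        return "accept"        # segment with a sufficiently low bad rate
    if age < 25:
        if months_on_job >= 24:
            return "accept"    # the low-risk sub-segment of young applicants
        return "reject"
    return "refer"             # middle band: illustrative manual-review branch

print(tree_decision(52, 3), tree_decision(22, 30), tree_decision(22, 6))
```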

Finally, decision trees can often achieve predictive power similar to that of a scorecard with far fewer characteristics. Models that require only a few characteristics, sometimes called ‘short scores’, are becoming especially popular in the context of campaigning and marketing for credit products. However, there is a fundamental problem associated with short scores: they diminish the richness of information that the organization can collect on applicants and thereby erode the basis for future modeling.

 --------------------------------------------------------------------------------------------------------

Neural Nets


With the decision tree, we saw that there is such a thing as a decision rule that is too easy to understand and thereby invites fraud. Ironically speaking, there is no danger of this happening with a neural network. Neural networks are extremely flexible models that combine characteristics in a variety of ways. Their predictive accuracy can therefore be far superior to scorecards, and they don’t suffer from the sharp ‘splits’ that decision trees do. However, it is virtually impossible to explain or understand in any simple way the score that is produced for a particular application. It can therefore be difficult to justify a decision made on the basis of a neural network model. In some countries it may even be a legal requirement to be able to explain a decision, and such a justification then must be produced with additional methods. A neural network of superior predictive power is therefore best suited for certain behavioral or collection scoring purposes, where the average accuracy of the prediction is more important than insight into the score for each particular case. Neural network models cannot be applied manually like scorecards or simple decision trees, but require software to score the application. Then, however, their use is just as simple as that of the other model types.
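To make the point about flexibility concrete, here is a brief sketch assuming scikit-learn’s MLPClassifier in place of the original tooling. The synthetic target depends on the product of two inputs, an interaction that a purely additive scorecard cannot represent but that a small network can pick up.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # target driven by an interaction

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(f"training accuracy: {net.score(X, y):.2f}")
```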

 

Model Assessment


After building both a scorecard and a decision tree model, we now want to compare the quality of the models on the validation data. One of the standard Enterprise Miner charts in the Assessment node is the concentration curve, shown in Figure 9. It shows how many of all the bads in the population are concentrated in the group of the 2% (4%, 6%, …) worst applicants as predicted by the model. Sorting applicants based on the scorecard scores will result, for example, in around 30% of all the bads being concentrated in the 10% of applicants that are considered the worst by the scorecard model. The decision tree is only able to concentrate about half as many bads in the same number of what it considers the worst applicants (the 10% decile is marked by the vertical black line in Figure 9). In summary, the scorecard is assessed to be superior, because its curve stays above that of the tree.
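The concentration figures can be recomputed by sorting applicants from worst to best predicted score and accumulating the share of bads captured. The scores and outcomes below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
p_bad = rng.random(10_000)                  # predicted bad probabilities
is_bad = rng.random(10_000) < p_bad * 0.06  # outcomes correlated with the score

order = np.argsort(-p_bad)                  # worst applicants first
cum_share = np.cumsum(is_bad[order]) / is_bad.sum()
for pct in (2, 4, 6, 8, 10):
    k = pct * len(p_bad) // 100
    print(f"worst {pct:2d}% of applicants contain {cum_share[k - 1]:.0%} of bads")
```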

  
Defining decision rules for application approval and risk management

 

Application approval and risk management do not rely on scores alone, but scores do form the basis of a decision strategy that groups customers into homogeneous segments, which can then all be treated with the same action. For example, in the case of approval decisions, customers are often classified, using appropriate cut-off scores, as approved, referred for examination or rejected. Other segmentation strategies can determine the credit limit assigned to a segment or the collection actions taken. An important type of segmentation is the division of customers into risk pools for the purpose of calculating certain risk components: probability of default (PD), loss given default (LGD) and exposure at default (EAD). These risk components are required by the risk-weighted assets (RWA) calculation mandated by the Basel II and III capital requirements regulations. Analysts apply the scorecard and the pooling definition to a historical data set; the long-run historical averages of the default rate, losses and exposures can then be calculated by pool and used as input into the RWA calculation.

There are various ways to group customers into segments using a scorecard. Often segmentation involves setting thresholds. Sometimes analysts define these thresholds manually, and sometimes they use an algorithm to automatically find a decision rule that is optimal in a specific way. The way multiple thresholds are combined further characterizes a decision rule. Typical examples of decision rules include policy rules (exclusions), single score bins, multiple score bins and decision trees.
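As a minimal illustration of such a strategy, the sketch below implements a two-threshold approval rule with a policy-rule override; all cut-off values are invented.

```python
def approval_decision(score: int, policy_exclusion: bool = False) -> str:
    """Classify an application as approved, referred or rejected."""
    if policy_exclusion:       # policy rules (exclusions) override the score
        return "reject"
    if score >= 620:
        return "approve"
    if score >= 580:
        return "refer"         # borderline band: manual examination
    return "reject"

for s in (650, 600, 540):
    print(s, approval_decision(s))
```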


 

Deploying scores and decisions

 

Execution of decision rules can be done in batch for all customers, so that the assignment of each customer to a group and an action is available in an operational data store for instant retrieval by front-office software. Alternatively, the front-office software can initiate execution of the decision rule to make a decision on an individual customer, possibly using new or updated information supplied by the customer at that time (online). The decision is then passed back immediately to the front-office software. In either case, the decision rule is not executed by the front-office software itself but through middle-layer software on a central server. For existing credit customers, the batch option is the most commonly used, since behavioral information derived from the customer transaction history and other stored customer characteristics is typically more predictive than information a customer might supply in the front office.
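A schematic sketch of the two execution paths sharing one decision rule on the central server; the toy scoring formula, all names and the in-memory dictionary standing in for the operational data store are illustrative.

```python
decision_store: dict[str, str] = {}   # stand-in for the operational data store

def decide(customer: dict) -> str:
    """Toy decision rule standing in for the deployed score and strategy."""
    score = (600 + 20 * customer.get("years_at_bank", 0)
             - 15 * customer.get("missed_payments", 0))
    return "approve" if score >= 620 else "refer"

def batch_run(customers: dict) -> None:
    """Batch path: precompute all decisions for instant front-office lookup."""
    for customer_id, data in customers.items():
        decision_store[customer_id] = decide(data)

def online_decision(customer: dict) -> str:
    """Online path: the front office triggers a decision with fresh data."""
    return decide(customer)

batch_run({"c1": {"years_at_bank": 2, "missed_payments": 0}})
print(decision_store["c1"], online_decision({"missed_payments": 1}))
```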



