
5 Hot Topics in Credit Scoring from Edinburgh


If you want to seek out the newest ideas in credit scoring — a field that advances more rapidly than many people may suspect — the best place is the annual Credit Scoring and Credit Control conference in Edinburgh (well, next to our own FICO World). At this year’s conference, I started thinking about what has changed in the world of credit scoring since my first visit almost 20 years ago. My phone has certainly gotten smarter in that time – so what has gotten smarter in credit scoring?

With around 70 presentations, the key questions remain the same:

  • What data is available and useful?
  • How do I best gain intelligence from the data?
  • How do I best act on the intelligence?
  • How do I comply with the ever-increasing regulations?

Regarding the data, the answer is more and more! We all know that the digital age is creating vast quantities of data, and that those quantities are growing exponentially. Compared to 20 years ago, there are many new and ‘alternative’ data sources now being used to better assess creditworthiness for certain consumer types, including psychometric data, telecoms data, social network data and transaction data.

In terms of turning this data into intelligence, artificial intelligence (AI) and specifically machine learning algorithms are being investigated and used at increasing scale. With ever greater volume and variety of data, coupled with vastly increased computer processing power, machine learning approaches to extracting value from the data are proving more and more useful.
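To make the “intelligence” step a little more concrete: whatever algorithm produces the underlying good/bad odds, scorecard developers conventionally map those odds onto a points scale where a fixed number of points doubles the odds. Here is a minimal sketch of that scaling (the base score, base odds and points-to-double-odds values are illustrative placeholders, not FICO’s actual scaling):

```python
import math

def scale_score(odds_good, base_score=600, base_odds=30, pdo=20):
    """Map good:bad odds to a points-based credit score.

    Classic scorecard scaling: every `pdo` points added to the score
    doubles the odds of the account being good. All parameter values
    here are illustrative, not any bureau's real calibration.
    """
    factor = pdo / math.log(2)                      # points per unit of log-odds
    offset = base_score - factor * math.log(base_odds)
    return round(offset + factor * math.log(odds_good))

print(scale_score(30))   # 600 (the anchor: base odds map to the base score)
print(scale_score(60))   # 620 (doubling the odds adds pdo = 20 points)
```

The same scaling works whether the odds come from a traditional scorecard or a machine learning model, which is one reason a model upgrade can preserve a familiar score range.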

This same processing power for the development of machine learning models is also helping with the ease of deployment of these types of models, which has been troublesome in the past in terms of both speed of deployment and speed of execution.

The evolution from predictive analytics (models that order by a single outcome of interest) to prescriptive analytics (also referred to as decision optimization, identifying the best action to take considering multiple outcome metrics or dimensions) is vastly improving the business outcomes of decisions across the credit lifecycle, from origination through to collection and recovery.  Prescriptive analytics provides the ability to make better, more informed decisions by taking account of multiple (often conflicting) objectives — for example, increasing accept rates whilst controlling losses.
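A deliberately tiny, invented example shows the difference between the two: a predictive model alone ranks applicants by risk, while a prescriptive step searches over possible accept cutoffs for the one that best trades accept volume against expected losses (all numbers below are made up for illustration):

```python
# Each applicant: (risk score, profit if they repay, loss if they default,
# model-estimated probability of default). All figures are invented.
applicants = [
    (720, 120, 900, 0.02),
    (650, 100, 900, 0.06),
    (590,  90, 900, 0.12),
    (540,  80, 900, 0.25),
]

def portfolio_value(cutoff):
    """Expected profit of accepting everyone at or above the cutoff."""
    total = 0.0
    for score, profit, loss, p_bad in applicants:
        if score >= cutoff:
            total += (1 - p_bad) * profit - p_bad * loss
    return total

# Prescriptive step: pick the cutoff with the best expected outcome.
best_value, best_cutoff = max((portfolio_value(c), c) for c in (500, 550, 600, 700))
print(best_cutoff)   # 600: in this toy data, accepting riskier applicants destroys value
```

Real decision optimization handles many more dimensions and constraints at once, but the principle is the same: evaluate whole decision strategies, not just individual risk ranks.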

Since the economic crisis there has also been greater focus on modelling stressed situations, and on how these stresses impact the likely performance of both individual consumers and total portfolios. Predictive models help lenders comply with regulations such as Basel and IFRS 9. As compliance is gained and maintained, we are seeing these same models being used to drive business value through better insights and understanding of portfolios, acting as key inputs to both what-if scenario analysis and decision optimization capabilities.

FICO data scientists and experts, all of whom blog here, presented no fewer than five sessions at Edinburgh this year on hot topics related to these areas I have described.

Gerald has already written a post on his talk about new risk analytics to stress-test individual consumers. We’ll be sharing more insights on our topics here, and shining a spotlight on new trends in credit scoring.



Video: What Are the Big Card Fraud Trends in Europe?


What card fraud trends are bothering fraud managers in Europe? In this video, Gabriel Hopkins of FICO discusses the rise in card-not-present (CNP) fraud, and what card issuers are doing about it.

As shown in the 2016 European Fraud Map published by FICO and based on Euromonitor International data, CNP fraud accounts for around 70% of card fraud, and the percentage is rising. CNP fraud includes online and phone-based purchases, and as ecommerce grows it becomes a more attractive target for criminals, especially given improved verification in a card-present environment.

The challenge in fighting CNP fraud, Hopkins notes, is that retailers, etailers, issuers and banks want to maintain a smooth online shopping experience for customers. That’s why innovations in CNP fraud detection are so vital. FICO has developed new CNP fraud detection models that are focused on catching CNP fraud at the first occurrence.



Modeling Deposit Price Elasticity: Where’s the Value?


The ability to model deposit price elasticity is becoming a core component of deposit portfolio management, as I discussed in my previous posts on this topic.

This post focuses on benefits once models have been completed and are in use. What should you expect to gain from deposit price elasticity models and what can you do with them to maximize benefit to the business?

The main function of a deposit pricing team is to forecast the future performance of their portfolio, and a substantial amount of time is spent answering questions such as: How much new money will this rate bring in? How will this cannibalize my more profitable back book? What is the impact of the new rate on my portfolio P&L? What if the market changes rates?

Scientifically derived deposit price elasticity models streamline the answering of these and other questions. Moreover, as the core of a deposit forecasting solution, these models improve the accuracy, efficiency and accountability of the entire pricing process.

Accuracy

Price elasticity models are not affected by qualitative bias and provide a level of accuracy not achievable by gut instinct alone. Rather than focusing on individual products, the modeling suite should have the ability to create simulations of the entire portfolio, incorporating balance movements into and out of the bank as well as the effect price changes have on other products and the resulting impact on the bottom line. Simulations of future behavior should have the ability to predict the impact of price changes and allow the user to flex assumptions around competitor pricing, changes to the macroeconomic environment and internal profit assumptions.

Using this approach we achieve a better understanding of customer behaviors and the associated sensitivities of the bank’s liquidity as a whole. The best system of models tracks the impact of price changes so that previous decisions can be reviewed, appraised and the results fed back into the model calibration. This closed-loop process ensures models are continuously learning and adapting to changing market sentiment.

Efficiency

Another significant benefit of pricing analytics is that it accelerates the decision-making process. Pricing managers can rapidly generate a number of different scenarios to study alternative pricing strategies, changes in competitor pricing assumptions or wider market factors. Providing the business with appropriate forecasting levers allows it to focus its expertise on pricing. Generating the evidence to justify pricing decisions becomes an automated process, which makes it possible to quickly iterate towards a pricing strategy that achieves the desired portfolio outcome.
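As a rough sketch of the kind of scenario such a tool generates (the linear elasticity and every parameter value below are invented for illustration; production models are far richer and product-specific):

```python
def project_balance(balance, own_rate, market_rate, elasticity=4.0, outflow_base=0.02):
    """One-period deposit balance projection (illustrative only).

    Net flow responds to the spread between the bank's rate and the
    market rate: a positive spread attracts balances, a negative one
    loses them, on top of a baseline attrition rate.
    """
    spread = own_rate - market_rate
    net_flow_rate = -outflow_base + elasticity * spread
    return balance * (1 + net_flow_rate)

# Scenario comparison: hold our 1.00% rate while the market moves to 1.25%.
hold_market = project_balance(1_000_000, own_rate=0.010, market_rate=0.010)
lag_market  = project_balance(1_000_000, own_rate=0.010, market_rate=0.0125)
print(round(hold_market), round(lag_market))   # 980000 970000
```

Even this toy version shows the forecasting lever at work: flex the competitor-rate assumption and the balance impact is visible immediately.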

Accountability

When recommendations are based on transparent model drivers, conversations with internal stakeholders or senior management become easier as forecasted balances can be directly related back to internal or external modeling factors. The demonstrable action-effect behavior of pricing models also extends beyond the organization as they facilitate conversations with regulators.

In recent times, the regulatory burden on bank executives has grown such that transparency of the underlying pricing models is paramount. Pricing decisions must be explainable and the inputs and assumptions that sit behind them thoroughly documented. An ideal price-elasticity solution provides transparency to the underlying models, automates much of this governance process and provides an auditable structure for the entire pricing process.

Towards Optimization & Beyond

As I have discussed, price elasticity models that predict flows into, out of and between products are key to gaining a full understanding of a deposits portfolio. They serve as part of a broader analytic infrastructure underpinned by a strong data management system and a highly skilled analytic workforce. Ultimately, they empower the pricing team to make better decisions that are more accurate, quickly identified and easily explained. These improved, data-driven processes benefit everyone from the pricing analyst through senior management and partner stakeholders such as treasury, finance and marketing, to external auditors.

However, the full value of a deposits portfolio can only be completely unlocked through the application of optimization, which is the final frontier in the construction of a comprehensive pricing solution. Full price optimization has the ability to discover revenue where a simple forecasting tool cannot. It optimally trades off balances and revenue across every product in the portfolio and finds the most efficient path to achieving the desired business outcome.

In my next post, I’ll discuss price optimization in more detail and how it can directly generate value to the deposits business.



FICO Receives Analytics 50 Award for FICO Score XD


Drexel University’s LeBow College of Business has named analytic software firm FICO a winner of the Analytics 50 Awards for the second year in a row. The awards program honors organizations using analytics to solve business challenges. FICO received the award for FICO® Score XD, which leverages groundbreaking analytic technologies and alternative data to help safely and responsibly expand credit access.

For more information check out the full award article.

Led by Radha Chandra, principal scientist in the Scores business unit at FICO, the FICO analytic development team posed the question: Can alternative data expand credit access? After extensive research and validation, FICO launched FICO® Score XD. Through its development, FICO provides a potential on-ramp to credit access for the majority of the more than 50 million Americans identified as ‘unscorable’.

In addition to traditional credit data, FICO® Score XD consumes alternative data from telco, cable, and other payment history, plus public record and property data.  One specific design goal of  FICO Score XD was to mirror the standards of the traditional FICO Score – same 300-850 score range, similar relationship between a given score and likelihood of repayment, and same characteristic logic and treatment.

FICO® Score XD uses positive and negative data from National Consumer Telecom and Utilities Exchange, Inc. (NCTUE), including new connect requests, payment history, and current and historical account status, and from LexisNexis Risk Solutions, including property ownership records, frequency of residential moves, bankruptcies, evictions and liens. All data sources comply with the Fair Credit Reporting Act (FCRA), the federal law that regulates how consumer reporting agencies use data.

While FICO Scores based on traditional credit data remain the cornerstone of the FICO business, this initiative helps banks and other lenders expand their addressable market with scores built on models that utilize alternative data. Findings demonstrate that consumers with a high FICO® Score XD who go on to obtain credit maintain a high traditional FICO Score in the future – 49% scored 700 just 24 months later.

“FICO’s pioneering analytics are a key driver to our commitment to financial inclusion and provide lenders with a reliable, broad-based risk score that enables precise assessment of creditworthiness for consumers,” said Ethan Dornhelm, vice president, Scores and Analytics, FICO. “It is an honor that FICO – and Radha Chandra on our team – have been recognized by Drexel University for our work to help millions more people gain access to credit.”

FICO Score XD research and solutions have also paved the way for similar initiatives in Mexico, Russia, Turkey and now India. The latest solution – FICO Score X Data for India – was recently announced at Money 20/20.

The Analytics 50 is a Drexel University LeBow College of Business program that recognizes companies using analytics to solve business challenges. Honorees were selected by a panel of researchers and practitioners based on the complexity of the business challenges they faced, the analytics solutions implemented, and the solutions’ business impact on the organization.



The Future of AI — A Society of Inclusion


To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Shreyas Gopinath, a senior analytic scientist at FICO, working in Bangalore.

Since 2010, we have seen a rejuvenation of artificial intelligence and a wealth of practical AI applications in areas such as machine translation (think Google Translate), human-like machine chat agents (think Apple Siri or Microsoft Cortana) and advanced players for games such as chess and Go. In a business context, the possibilities expand even further: faster, more accurate creditworthiness checks that enable banks to lend better, fraud detection that stops fraud before it happens, robotic process automation that can drastically reduce costs, and targeted marketing programs that drive revenues.

I see AI’s promise going further — much further. We have barely tapped the power of AI to improve the lives of people who are excluded from many of society’s benefits. These include people with disabilities, people without access to basic finance, people without access to basic healthcare and people without access to redressal mechanisms.

In the case of financial inclusion, banks in developing economies have struggled to evaluate creditworthiness effectively in the absence of credit bureaus. Without this, they are reluctant to underwrite loans, depriving thousands of access to credit. Using AI techniques on data mined through devices such as smartphones (e.g., social network data), it is possible to build a reasonably accurate creditworthiness profile of potential borrowers. This isn’t just an idea — FICO is already doing it.

People with hearing disabilities are socially excluded because of the challenges they face with communication. Current hearing-aid devices do not work well at isolating human speech from other sounds. By using the techniques that have advanced image identification (convolutional neural networks), it is possible to embed models in next-generation hearing aids that isolate human speech and provide a better hearing experience for people with hearing disabilities.

In developing countries, governments struggle to effectively staff and equip healthcare centers in remote areas. The costs are prohibitive, and even then they are not able to monitor these centers properly. Using smart IoT devices such as blood-glucose monitors and heart-rate sensors, AI techniques can predict potential adverse health events, so that patients can receive emergency care or be advised of preventive measures. By connecting these devices with smartphones, they can also deliver targeted nutritional and health information.

The future of AI will benefit all of us in exciting ways. But for me, its use to improve the lives of people currently excluded from society’s benefits will be the most exciting.

See other FICO posts on artificial intelligence.



Foreign Tax Compliance Reporting — FATCA vs. CRS


This article is from Michael Blicker, a senior consultant in FICO’s compliance solutions group.

You would think one system would be enough to manage foreign tax compliance. And when all we had in place was FATCA,* that was true. Now, with CRS** (Common Reporting Standard), some financial institutions are starting to think they need two systems.

Here’s where the trouble started.

In 2010 the IRS (Internal Revenue Service) started to define the details of FATCA (Foreign Account Tax Compliance Act). This process was completed in February 2014 with an agreement on a common reporting format between IRS and OECD (Organization for Economic Co-operation and Development), FATCA reporting Form 8966. At that time, OECD was already working on a global approach to exchange financial information in order to combat tax evasion – AEoI (Automatic Exchange of Information).

Unfortunately, OECD used the FATCA approach as an “inspiration” instead of as a blueprint. The more AEoI changed to simplify the process, the greater the difference grew between both initiatives.

As a result, financial institutions had to set up new processes, and most software vendors did not enhance their FATCA compliance solutions to cover the CRS reporting requirements. Life did not get easier. In some cases, financial institutions had to search for a second solution because the existing FATCA solution was not extensible, even though the two regimes are closely linked.

Same Goal, Different Rules for FATCA/CRS Compliance

The differences between FATCA and CRS are numerous, and start with identification:

  • For FATCA compliance you’re looking for a hint that the customer is a US person: one country, one search criterion “US” to be checked.
  • For CRS compliance you need to match a customer with, at current count, one or more of 94 countries, and you may get multiple country hits for one customer.

The next difference is classification. FATCA requires many more classification types than CRS, and the definitions vary for some categories (e.g., financial institutions). The FATCA indicators don’t deliver a clear result, only a hint; the results have to be verified for a final decision, and this process has to be audit-proof. The intergovernmental agreements (IGAs) for FATCA provide for compliance checks by the IRS, so the audit trail is important to avoid recalcitrant status. AEoI will be national law, supervised by each country’s competent authority, so there an audit trail is important to avoid direct fines.

Finally, reporting is an issue. In most countries, FATCA reporting is a straight-through process because of the low number of affected customers and its limited importance to the financial institutions and competent authorities. CRS reporting is more of a regional topic (e.g., a customer living and working cross-border). The competent authorities are much more interested in the CRS results than in a reciprocal report from the IRS (which is not a Form 8966 report and contains much less information).

Two aspects are of interest: possible additional tax revenue from foreign accounts and additional information on the customers. In several countries, the competent authority requires additional information within the report. Financial institutions also face technical issues due to the different reporting approaches (protocol, procedures and format), even though it is the same Form 8966. Adding to the challenge, FATCA and CRS differ in the required reporting content.

This is just the tip of the iceberg, if you will. There are many more aspects to be considered.

Managing FATCA/CRS Regulations with One Tax Compliance & Reporting System

If you think you’ll need two systems to deal with this, think again. That approach will only lead to more cost and complication. The best practice is to cover both FATCA and CRS compliance requirements with one system.

Here’s how that works:

  • Identification: The different criteria (recommended indicia, or indicators, as well as additional indicia and information to support the identification process) should be defined in a rule engine independent of any country. This means that you first define the search criteria and then you link the countries to this search (e.g., all CRS countries). Every customer only has to be checked once for both initiatives (to avoid different results from two different solutions).
  • Verification: The big differences here are the classifications. FATCA has various classifications (partly because of the tax withholding), while CRS has a limited set of differentiators. This can easily be handled by different workflows for possible CRS-relevant customers and customers with FATCA indicator hits. It’s also important to get a customer-centric view of all final results for both CRS and FATCA: a single customer can be a US person with multiple tax residencies.
  • Reporting: Form 8966 is a schema to be filled by the requested data – either for FATCA or for CRS. The reporting should therefore be kept flexible in three components: schema, data and template definition (e.g., for content) to bring all aspects together. This enables various reporting alternatives without inflexible programming. This approach can also consider different formats and protocols (e.g., report for every country or one report for all countries and split into packages; SFTP or individual communication procedures).
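The identification principle in the first bullet can be sketched in a few lines. The field names, country lists and matching logic below are assumptions for illustration only, not the actual Siron data model:

```python
# Define the search criteria once; link different country lists to it.
FATCA_COUNTRIES = {"US"}
CRS_COUNTRIES = {"DE", "FR", "GB", "IN"}   # illustrative subset of the ~94

def indicia_hits(customer, countries):
    """Return the countries flagged by the shared indicia search."""
    hits = set()
    for field in ("nationality", "residence", "phone_country", "mailing_country"):
        if customer.get(field) in countries:
            hits.add(customer[field])
    return hits

# One check of the customer serves both initiatives, avoiding
# conflicting results from two separate solutions.
customer = {"nationality": "US", "residence": "DE", "phone_country": "FR"}
print(sorted(indicia_hits(customer, FATCA_COUNTRIES)))   # ['US']
print(sorted(indicia_hits(customer, CRS_COUNTRIES)))     # ['DE', 'FR']
```

Note how the same customer produces both a FATCA hit and multiple CRS residencies, which is exactly the customer-centric view the verification bullet calls for.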

What you need to do as you implement for CRS compliance is maintain flexibility, because change is inevitable. More than 100 countries have already announced their participation in CRS. Implementation will involve changes and challenges, and you need to be able to adapt along with those. Make sure that making changes doesn’t impact your TCO — you want to be able to do this yourself, while taking advice from experts.

Interested in learning more about FATCA/CRS compliance? Download our eBook now.

*The Foreign Account Tax Compliance Act (FATCA) from 2010 enforces the requirements for all US Persons to report non-US financial accounts. The bi-lateral agreements between the IRS (Internal Revenue Service) and different countries (Intergovernmental Agreements/IGA Model 1 and Model 2) delegate the research of such accounts to the national financial institutions in the participating countries. Recalcitrant institutions will be punished with a 30% penalty on all US investments of their customers and the institution itself. Reporting started in 2015.

**The Common Reporting Standard (CRS) defines how financial institutions identify the tax residency of their customers, based on the AEoI (Automatic Exchange of Information) initiative of the OECD (the legal aspects are defined in “Competent Authority Agreements”/CAAs). The approach is based on bi-lateral agreements between two countries. To launch the initiative in 2014, a multi-national agreement was signed by 51 early adopters; another 43 countries have since signed.



Collections Managers — Do You Have “Willful Blindness”?

In the book Willful Blindness, Margaret Heffernan explains what happens when we choose to ignore the obvious. It’s a fascinating look at why it’s so easy to ignore what’s right in front of our eyes — and I’m afraid that I see many examples right here in the UK credit industry.

Now, I’m not suggesting that the majority of credit grantors are negligent and bury their heads in the sand when it comes to challenges that seem blatantly obvious to others. But for any collections and recoveries, credit control or debt management operation across the UK, let’s consider the following:

[Infographic: UK economics and collections statistics]

We know all this – so what’s so obvious? Well, let’s add this to the mix:

  1. An unstable political landscape, coupled with difficult Brexit negotiations, leading to market uncertainties
  2. Regulatory changes for 2018 including IFRS 9, GDPR and PSD2
  3. The Bank of England continuing to hint at interest rate rises in the near future
  4. UK GDP forecast to fall to 1% in 2018
  5. A recent study by the Financial Conduct Authority found that an estimated 4.1 million people are in financial difficulty owing to missed domestic or credit bills

This would appear to add up to what some might describe as an impending “perfect storm.”

If so, you would expect that every business running a collections and recoveries, credit control or debt management operation would have a sense of urgency around investing wisely to ensure that they are able to adequately protect their assets, their cash flow and their customers against the impacts of all these challenges.

Unfortunately, this is not always the case. While there are some great examples of businesses across the UK and EMEA that are on the front foot and delivering transformation in these areas, there are also too many examples of businesses where decisions on whether to invest in collections and recoveries, credit control or debt management operations are still being “discussed”, “prioritised” and even “delayed”. These businesses are likely to be in trouble when the storm hits, and to fall behind in their ability to:

  • Provide a stable collections operating environment when things get really busy
  • Upscale their collections operations due to dependencies on non-automated technology & manual processes
  • Provide “Best in Class” support for customers who will be experiencing financial vulnerability, maybe for the first time

With so much “obvious” information available to us, so many “obvious” statistics, and so many “obvious” challenges on the horizon, investment in your collections and credit operations should be equally obvious. If it’s not, you may be displaying all the symptoms of willful blindness. The challenge is to recognize it now and do something about it before it’s too late.

FICO has significant experience of working with collections and recoveries, credit control and debt management operations across the globe, helping to solve the challenges described in this blog. If you would like to discuss how we might be able to help you act now, rather than leaving things too late, please contact us.



Regtech: We Help Nonprofits Do Good by Stopping Bad


The last thing an organization with the tagline “We Power Good” wants to do is inadvertently power bad things — things like money laundering, tax evasion or terrorist financing. That’s why GVNG is now working with FICO.

GVNG has a fascinating value proposition — it provides a technology platform that helps people set up and operate nonprofit organizations. GVNG will use our FICO® TONBELLER® Siron® suite of anti-money laundering (AML) and know your customer (KYC) regtech solutions to perform risk-based checks on people setting up charities, making donations, receiving payments and volunteering services, and to provide immediate alerts when any suspicious transactions take place.

“When it comes to preventing money laundering and knowing our customers, we wanted to go beyond the letter of the law to truly ensure that we weed out organizations or individuals that could have criminal intent,” said Dominic Kalms, president and CEO of GVNG.

“For example, we wanted to put safeguards in place to prevent sexual predators from registering as volunteers, and make sure money launderers or terrorist financers are not using our clients’ services or receiving funds. With our ambitious goal to sign up 2,000 programs in the next year, we needed a scalable solution we could use in the cloud, without increasing our team’s workload. FICO offered the most powerful cloud-based solution, they have a stellar reputation in the marketplace and they could meet our requirement to have it running in just a few months.”

Flexibility, a fast start, cloud deployment — these are critical ingredients for many businesses as they take on compliance solutions. To meet these requirements, we offer the Siron suite of regtech solutions on AWS; it was recently granted AWS Financial Services Competency status.

We’re proud to help GVNG in its mission to “power good.” We power good in our own way — by stopping bad.



PSD2 Preparation – What Should Fraud Managers do First?



In an earlier blog I wrote that NOW is the right time for payment service providers to prepare their fraud operations for PSD2. But when the exact nature of the law is still being decided and the consequences of PSD2 are still unclear, where should you start?

Here are three areas that can be considered now to build the foundations for the future development of your fraud operations under PSD2.

  1. Understand the role Transaction Risk Analysis and Transaction Risk Monitoring will play in your fraud operations. All payment service providers will need to use Transaction Risk Monitoring to deliver the reporting required by the regulation. PSD2 sets out specific parameters for the fraud rates that must be reported against. PSPs must make sure these are understood and that they have the technology in place to report against them, for each payment mechanism. Any PSP looking to secure payments using Transaction Risk Analysis instead of Strong Customer Authentication also needs to understand the applicable parts of the PSD2 Regulatory Technical Standards and how they apply to its fraud operations. This will be different for each PSP, and it’s likely you will want to optimise your fraud rates (fraud basis points) for each payment mechanism to make the most of the relevant exemptions.
  2. Prepare to communicate with your customers. On the face of it, you may think that communication with customers is not part of the fraud function, and to a certain extent that’s correct. Going by the recent letter I got from my bank, communications teams are already updating bank account Ts and Cs and sending letters to customers about the changes to payments that will soon be law. For the fraud manager, it’s important to consider what happens within a payment process, and how two-way communication can help get a stalled transaction back on track, whether it stalled because more authentication was needed or because of fraud concerns. When PSD2 is implemented, the need for Strong Customer Authentication is likely to mean more disruption for customers when they make a payment or manage their account. The better a PSP manages the interaction, the less disruption it will cause and the more it will help with customer retention.
  3. Consider the technologies you’ll need to support you. PSD2 means an unprecedented level of change in payments processes and disruption to your fraud operations. The advent of Account Information Service Providers (AISPs) and Payment Initiation Service Providers (PISPs) will change the behavioural data you have access to. As this is the data that underlies your fraud models, now is the time to consider introducing artificial intelligence and adaptive analytics capabilities that can keep your fraud models responsive through what is likely to be a prolonged period of upheaval.
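To make the first point concrete: fraud rates under PSD2 are expressed in basis points of the value transacted, and availability of the Transaction Risk Analysis exemption depends on staying under a reference rate for the transaction’s value band. Here is a sketch of that check; the band thresholds below are placeholders loosely based on the draft RTS for remote card payments, so check the final standard for the binding figures:

```python
def fraud_basis_points(fraud_value, total_value):
    """Fraud rate in basis points (1 bp = 0.01% of value transacted)."""
    return 10_000 * fraud_value / total_value

# Illustrative reference fraud rates per exemption threshold value (EUR).
# Placeholder figures only; the final RTS is authoritative.
TRA_THRESHOLD_BP = {100: 13.0, 250: 6.0, 500: 1.0}

def tra_exemption_available(fraud_value, total_value, txn_amount):
    """Can a transaction of txn_amount use the TRA exemption?"""
    bp = fraud_basis_points(fraud_value, total_value)
    for band_limit, max_bp in sorted(TRA_THRESHOLD_BP.items()):
        if txn_amount <= band_limit:
            return bp <= max_bp
    return False   # above the highest band: Strong Customer Authentication applies

print(fraud_basis_points(50_000, 100_000_000))            # 5.0 bp
print(tra_exemption_available(50_000, 100_000_000, 200))  # True  (5.0 <= 6.0)
print(tra_exemption_available(50_000, 100_000_000, 400))  # False (5.0 > 1.0)
```

The sketch shows why fraud managers will want to actively manage basis points per payment mechanism: a lower measured fraud rate unlocks the exemption for higher-value bands.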

We have asked three of FICO’s PSD2 experts – Matt Cox, Russell Robinson and Gabriel Hopkins – to present on these aspects of PSD2 preparation at a webinar on 2nd November at 12 pm GMT. Join us!

Here’s one of our fraud experts, Brian Kinch, talking about first steps for PSD2.




The Future of AI: Less “A” and More “I”

AI 25 Years FICO Machine Learning The Future of AI: Less “A” and More “I”

To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Andrew Fernandex, a lead scientist in FICO’s Product and Technology Organization, working in San Diego.

Why do we call artificial intelligence “artificial”? Why don’t we just call it intelligence? Because even though AI often does exactly what we want extremely well, it sometimes gets it horribly wrong.

In the future, AI won’t get it wrong. Even when we think it’s wrong, we’ll end up discovering that we are the ones who are wrong, not it. It will be that Intelligent. In other words, the future of AI is less “A” and more “I”.

For a time, Google harnessed the awesome power of its deep learning AI to conclude that I was a teenage girl because once, while preparing for a talk on “Best Practices for Software Development”, I spent time looking up background on the book (and movie) 50 Shades of Grey to use as a humorous metaphor.

In reality, I’m a middle-aged man.

It was odd, too, because Google knows everything about me. Google’s services have run both the personal and professional lives of everyone in my household for over a decade.

Similarly, after fighting Apple for years about who owns my photo library and where that library might live, I finally gave up, and shot everything up in the cloud. The first thing Apple did was produce a stunning slideshow of the wildflowers of Anza-Borrego from a recent family trip, far better than I could have done myself. Clearly Apple’s AI knows infinitely more about photos and slideshows than I do.

So in my experience, today’s AI either gets it really right… or horribly wrong.

I think we’ve all had these kinds of experiences in our digital lives:

“Why, yes, YouTube, I do want to see that cat video!”

“No, Siri, I did not want you to auto-correct that to ‘truck’.”

“Wow, Google, I am looking for a new car! But not one of those…”

I posit that current AI is wallowing in the uncanny valley, the place where human-like replicas appear almost, but not exactly, like real human beings, thereby eliciting feelings of eeriness and revulsion.

Current AI is so good that it’s tremendously jarring when it misses the mark completely.

Do We Listen to Crowds or Outliers?

Part of the problem is that there really isn’t agreement on a good definition of intelligence.

Take, for example, the well-known wisdom of crowds. The unfortunate fact is that, while great minds often do think alike, fools seldom differ. So which does the crowd represent, the great minds or the fools?

In statistical parlance, we are asking which is the better thing to do: follow the crowd and infer that most people will do or think the same, or look at the outliers, the misfits, the rarities to see where true wisdom lies? What’s more important, the mean or the variance?
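The mean-versus-variance question can be made concrete with a small, purely illustrative simulation (all numbers invented): when individual errors are independent, averaging washes them out and the crowd mean beats the typical individual; but when everyone shares the same bias — the "fools seldom differ" case — the crowd mean inherits that bias and no amount of averaging removes it.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0
N_GUESSERS = 1000
NOISE_SD = 20.0

# Case 1: independent errors -- the crowd mean averages the noise away.
guesses = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_GUESSERS)]
crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

# Case 2: everyone shares the same bias -- averaging cannot remove it.
SHARED_BIAS = 30.0
biased = [random.gauss(TRUE_VALUE + SHARED_BIAS, NOISE_SD) for _ in range(N_GUESSERS)]
biased_crowd_error = abs(statistics.mean(biased) - TRUE_VALUE)
```

In case 1 the crowd error is far smaller than the typical individual error; in case 2 the crowd error sits near the shared bias, however large the crowd.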

A fascinating story from APM’s Marketplace shows the difficulty of the task. In “False information on the internet is hiding the truth about onions”, author Tom Scocca tried to publish one factual, evidence-based article on the correct amount of time it took to caramelize onions. He was on a personal mission to correct the public misapprehension, repeated time after time on innumerable websites, that onions could be caramelized quickly.

What actually happened is that the Google AI became a little bit confused. It knew page after page after page on the Internet claimed that onions could be caramelized quickly (the “wisdom of the crowd”). It also knew that Tom’s single web page discussing the caramelization of onions was inordinately popular (the “outlier”). Lastly, the Googlebot knew that Tom mentioned somewhere in his article something about “caramelize” and “quickly”, missing completely that Tom was saying what not to do.

The result? A top-ranked “search results” page claiming that “Tom said you can caramelize onions quickly!” — the completely wrong conclusion for all the right reasons, falsely attributed to the one person trying desperately to change everybody’s mind.

Even if AI just follows the crowd, it can be easily stymied by what is termed the Flaw of Averages. Take 4,063 fighter pilots. Design a cockpit that fits the body measurements of the “average” pilot. Discover that you’ve designed what amounts to a near-death-trap, because not a single pilot is “average” in all their body measurements. Short arms? Long torso. Long legs? Short neck. Skinny waist? Broad shoulders. Averages, or really any measure of “central tendency”, are probably not the best way to make predictions!
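The cockpit story can be sketched statistically — a loose illustration, not a reconstruction of the original study; the measurement model (independent standard-normal dimensions) and the tolerance for "average" are invented for the purpose of the demo. The punchline survives any reasonable choice of numbers: with enough independent dimensions, essentially nobody is near average on all of them at once.

```python
import random

random.seed(42)

N_PILOTS = 4063   # the number of pilots in the anecdote above
N_DIMS = 10       # e.g. arm length, torso length, leg length, ...
TOLERANCE = 0.3   # "average" = within 0.3 standard deviations of the mean

def average_on_all_dims(dims=N_DIMS, tol=TOLERANCE):
    # Each measurement drawn independently from a standard normal;
    # the pilot counts as "average" only if every dimension is near the mean.
    return all(abs(random.gauss(0, 1)) <= tol for _ in range(dims))

average_pilots = sum(average_on_all_dims() for _ in range(N_PILOTS))
fraction = average_pilots / N_PILOTS
```

With each dimension individually landing near the mean about a quarter of the time, the chance of all ten doing so is roughly a quarter to the tenth power — so out of 4,063 simulated pilots, the expected count of all-around-average pilots is essentially zero.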

So does intelligence come from sifting out the hidden gems of rare events? Claude Shannon thought so, and founded modern communication theory on that premise: the rarer the event, the more informative it is. Of course, rare events can also be rather meaningless, such as winning the lottery. “But you can’t win if you don’t play!” And that doesn’t even bring survivorship bias into it…
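Shannon's "rarer is more informative" premise has a precise form: the self-information of an event with probability p is −log₂ p bits. A quick sketch:

```python
import math

def self_information_bits(p):
    """Shannon self-information I(p) = -log2(p), in bits; requires 0 < p <= 1."""
    return -math.log2(p)

# A fair coin flip carries exactly 1 bit of information.
coin = self_information_bits(0.5)
# A one-in-a-million lottery win carries about 20 bits -- far more
# informative, but (as the text notes) not necessarily more meaningful.
lottery = self_information_bits(1e-6)
```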

It’s a good thing AI will be less “artificial” and more about “intelligence” in the future, because it will know the right answer to some of these challenges — even if we ourselves do not.

See other FICO posts on artificial intelligence.
