Tag Archives: analytics

Fraud Detection: Applying Behavioral Analytics

This is the second in my series on five keys to using AI and machine learning in fraud detection. Key 2 is behavioral analytics.

Behavioral analytics uses machine learning to understand and anticipate behaviors at a granular level across every aspect of a transaction. The information is tracked in profiles that represent the behavior of each individual, merchant, account, and device. These profiles are updated with each transaction, in real time, to compute analytic characteristics that provide informed predictions of future behavior.

Profiles contain details of monetary and non-monetary transactions. Non-monetary transactions may include a change of address, a request for a duplicate card, or a recent password reset. Monetary transaction details support the development of patterns that may represent an individual’s typical spend velocity, the hours and days when someone tends to transact, and the time period between geographically dispersed payment locations, to name a few examples. Profiles are powerful because they supply an up-to-date view of activity, which helps avoid the transaction abandonment caused by frustrating false positives.
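To make the idea concrete, here is a minimal sketch of a real-time profile update. The field names, the decay constant, and the recency-weighted velocity formula are illustrative assumptions for the example, not FICO's actual profile design:

```python
from collections import defaultdict

# Illustrative only: field names and the decay constant are assumptions,
# not FICO's actual profile design.
DECAY = 0.9  # exponential-decay weight favoring recent activity

def update_profile(profile, txn):
    """Update a cardholder's behavioral profile with one transaction, in place."""
    # Recency-weighted spend velocity: recent transactions dominate.
    profile["spend_velocity"] = DECAY * profile.get("spend_velocity", 0.0) \
        + (1 - DECAY) * txn["amount"]
    # Track which hours of the day this person tends to transact.
    profile["hour_counts"][txn["hour"]] += 1
    profile["txn_count"] = profile.get("txn_count", 0) + 1
    return profile

profile = {"hour_counts": defaultdict(int)}
for txn in [{"amount": 25.0, "hour": 9}, {"amount": 40.0, "hour": 9}]:
    update_profile(profile, txn)

print(round(profile["spend_velocity"], 2))  # recency-weighted average spend
print(profile["hour_counts"][9])            # 2 transactions in the 9 o'clock hour
```

Because the update touches only a handful of fields, it can run inline with each transaction, which is what keeps the profile current in real time.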

A robust enterprise fraud solution combines a range of analytic models and profiles, which contain the details necessary to understand evolving transaction patterns in real time. A good example of this occurs in our FICO Falcon Fraud Manager, with its Cognitive Fraud Analytics.

Given the sophistication and speed of organized fraud rings, behavioral profiles must be updated with each transaction. This is a key component of helping financial institutions anticipate individual behaviors and execute fraud detection strategies, at scale, that distinguish between legitimate and illicit changes in behavior. A sample of specific profile categories that are critical for effective fraud detection includes:

(Chart: behavioral analytics profile categories for fraud detection)

Key 3 is distinguishing specialized from generic behavior analytics. Watch for that post, and follow me on Twitter @FraudBird.




Teradata Delivers Cloud-based Analytics at Scale to Office Depot


Leading provider of business services and supplies migrates to Teradata IntelliCloud; benefits have already included better performance, faster insights and cost savings

Teradata (NYSE: TDC), the leading cloud-based data and analytics company, announced today that Office Depot, Inc., a leading omni-channel provider of business services, products and technology solutions, is now using Teradata IntelliCloud to run more than 800,000 queries a day on a secure and scalable analytics platform in the cloud. By delivering a balance of highly competitive pricing and advanced analytic performance, Teradata is ensuring Office Depot has the insight to achieve its goal of becoming a business services platform with end-to-end solutions for organizations of all sizes.
The migration began with a desire by Office Depot to begin leveraging the public cloud as a cost-effective and scalable solution for managing the company’s complex data and analytics workloads. With years of reliable performance from a Teradata on-premises system, and the promise of Teradata Everywhere to deploy its analytics ecosystem anywhere, Office Depot had complete freedom to determine which of many cloud deployment options to use. They also had the flexibility to choose when the move to the cloud should begin, how fast that migration should proceed, and how long to keep any on-premises infrastructure operating in tandem. These are options only Teradata could provide. Further de-risking the buying decision was the software consistency offered by Teradata Everywhere, which allowed Office Depot to migrate to Teradata IntelliCloud with zero code changes to existing applications.
Office Depot is a leading provider of business services and supplies, products and technology solutions through its fully integrated omni-channel platform of approximately 1,400 stores, online presence, and dedicated sales professionals and technicians to small, medium and enterprise businesses.
“We chose to pursue the use of a cloud-based data and analytics platform to drive scalability, enable speed of delivery and to deliver cost savings,” said Todd Hale, Executive Vice President and Chief Information Officer at Office Depot. “That said, because we consider our Teradata platform to be central to our operations, we wanted to proceed with caution. Now that the implementation is complete, we can share that we’ve been extremely pleased with the outcome: it’s cost-effective, the performance is significantly improved over our previous on-premises system, and the cutover was non-disruptive to our business analysts and data scientists. This has been a significant win for our company and a very important step in our transformational strategy.”
Today, Office Depot is running its mainstream production and test/development environments in Teradata IntelliCloud. Office Depot is also leveraging Teradata Managed Services – consultants who are highly trained in advanced tools and process improvements – to ensure its entire analytics environment is managed, optimized and productive. Together, the move to Teradata IntelliCloud and the use of Teradata Managed Services allow Office Depot to secure full value from its analytics investments while focusing internal resources on business value and transformational insights.
Teradata IntelliCloud – Analyze Anything, Anywhere, at Scale
With an innovative product portfolio and licensing structures designed to de-risk buying decisions, Teradata gives its customers the most scalable, secure and proven path to the cloud at the speed and in the deployment environment that is right for them. A key component of this portfolio is Teradata IntelliCloud, a comprehensive as-a-service offering that provides the best performing analytics platform in the cloud.
Teradata is in the advanced tier of AWS partners for both technology and consulting. The company holds more than 5,100 AWS accreditations and certifications, and was recently awarded the AWS Big Data Competency distinction. 
Teradata IntelliCloud and Teradata Managed Services are available today.


Teradata United States

How To Issue More Credit Cards With Predictive Analytics

Shanghai Pudong Development Bank Issued More Than 9 Million New Credit Cards Last Year Using FICO Originations

Shanghai Pudong Development Bank (SPDB) Credit Card Center, a credit card lending pioneer in China, has increased its customer base using originations powered by technology from analytics software firm FICO.

Since January 2017, SPDB Credit Card Center has processed more than 9 million credit card applications through originations driven by a big data AI analytics strategy. During this incredible growth, SPDB has maintained a controllable risk level while more than doubling its origination rate, to 88 percent of applications. FICO’s big data AI analytics has reduced risk by 50 percent while supporting an approval rate four times higher.

For its achievements, SPDB has won the 2017 FICO Decisions Award for Customer Onboarding and Management. The winners of the FICO Decisions Awards were spotlighted at FICO World 2018, the Decisions Conference, April 16-19 in Miami Beach, Florida.


SPDB receives its FICO Decisions Award

FICO’s origination and big data AI analytics solution was introduced with the aim of overcoming the challenges brought by the rapid development of SPDB’s credit card business. One of the key limitations holding back speedy originations was the limited credit data available on customers.

“Using custom scorecards and models from FICO built using Big Data AI analytics, SPDB Credit Card Centre has managed to significantly improve its risk assessment of consumers with either thin files or no files at the People’s Bank of China credit bureau,” said Sandy Wang, managing director in China for FICO. “The coverage rate or scorable population of the FICO models built using big data AI analytics, covers more than 75 percent of applicants.”

Joy Macknight, deputy editor of The Banker, one of this year’s judges for the FICO Decisions Awards, said, “I gave SPDB top marks because of their customer-centric success. They are achieving great results by taking a holistic approach to risk management across originations and collections.”

“SPDB has harnessed analytic technologies to reduce their overall risk, greatly increase the proportion of automatic originations, increase approval rates and scale their collections,” said Sandy Wang. “They have skillfully created a data-driven and comprehensive origination optimization strategy, using advanced decision science technologies and cutting-edge modelling experience from FICO. It has been a fruitful partnership and a project that has yielded significant results.”

Read the full release



New eBook! TDWI Checklist Report: Strategies for Improving Big Data Quality for BI and Analytics

In a big data environment, the notion of data quality that is “fit for purpose” is important. For some types of data science and analytics, raw, messy data is exactly what users want. Yet, even in this case, users need to know the data’s flaws and inconsistencies so that the unexpected insights they seek are based on knowledge, not ignorance. Syncsort’s new eBook, Strategies for Improving Big Data Quality for BI and Analytics, takes a look at applying data quality methods and technologies to big data challenges that fit an organization’s objectives.


As organizations grow dependent on the data they have stored in their big data repositories, or in the cloud, for a wider range of business decisions, they need data quality management to improve the data so that it is fit for each desired purpose.

Our TDWI checklist report offers six strategies for improving big data quality:

  1. Design big data quality strategies that are fit for each purpose
  2. Focus on the most important data quality objectives for your requirements
  3. Perform data quality processes natively on cloud and big data platforms
  4. Reduce analytics and BI delays by applying flexible data quality processes
  5. Provide data lineage tracking as part of data quality processes
  6. Use data quality to improve governance and regulatory compliance
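As a quick illustration of the first two strategies, "fit for purpose" quality can be expressed as explicit, per-field rules run over records before they reach BI tools. The field names and rules below are invented for the example and are not part of the TDWI report:

```python
# Hypothetical rule-based quality checks; field names are invented
# for the example, not drawn from the TDWI report.
RULES = {
    "customer_id": lambda v: v is not None and str(v).strip() != "",
    "email":       lambda v: isinstance(v, str) and "@" in v,
    "amount":      lambda v: isinstance(v, (int, float)) and v >= 0,
}

def quality_report(records):
    """Count rule violations per field across a batch of records."""
    violations = {field: 0 for field in RULES}
    for rec in records:
        for field, ok in RULES.items():
            if not ok(rec.get(field)):
                violations[field] += 1
    return violations

batch = [
    {"customer_id": "C1", "email": "a@example.com", "amount": 10.0},
    {"customer_id": "",   "email": "not-an-email",  "amount": -5},
]
print(quality_report(batch))  # {'customer_id': 1, 'email': 1, 'amount': 1}
```

A report like this tells analysts which flaws exist in the raw data, so that even "messy by design" datasets are used with knowledge of their defects rather than in ignorance of them.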
Download the eBook today!


Syncsort Blog

Data Management Rules for Analytics

With analytics taking a central role in most companies’ daily operations, managing the massive data streams organizations create is more important than ever. Effective business intelligence is the product of data that is scrubbed, properly stored, and easy to find. When your organization uses raw data without proper management procedures, your results suffer.

The first step towards creating better data for analytics starts with managing data the right way. Establishing clear protocols and following them can help streamline the analytics process, offer better insights, and simplify the process of handling data. You can start by implementing these five rules to manage your data more efficiently.

1. Establish Clear Analytics Goals Before Getting Started

As the amount of data produced by organizations daily grows exponentially, sorting through terabytes of information can become problematic and reduce the efficiency of analytics. Such large data sets require significantly longer times to scrub and properly organize. For companies that deal with multiple high-bandwidth streams, having a clear line of sight to business and analytics goals can help reduce inflows and prioritize relevant data.

It’s important to establish clear objectives for data and create parameters that filter out data points that are irrelevant or unclear. This facilitates pre-screening datasets and makes scrubbing and sorting easier by reducing white noise. Additionally, you can focus even more on measuring specific KPIs to further filter out the right data from the stream.
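A minimal sketch of this pre-screening step, assuming a stream of event dictionaries (the field names, required fields, and KPI-relevant event types are all invented for illustration):

```python
# Sketch of pre-screening a data stream against explicit filters;
# field names and event types are illustrative assumptions.
REQUIRED_FIELDS = {"event_type", "timestamp", "value"}
RELEVANT_EVENTS = {"purchase", "signup"}  # tied to the KPIs being measured

def prescreen(stream):
    """Yield only complete, KPI-relevant records; drop noise early."""
    for record in stream:
        if not REQUIRED_FIELDS.issubset(record):
            continue  # incomplete record: unclear data point
        if record["event_type"] not in RELEVANT_EVENTS:
            continue  # irrelevant to current analytics goals
        yield record

stream = [
    {"event_type": "purchase", "timestamp": 1, "value": 9.99},
    {"event_type": "page_ping", "timestamp": 2, "value": 0},   # irrelevant
    {"event_type": "signup", "timestamp": 3},                  # incomplete
]
print(list(prescreen(stream)))  # keeps only the purchase record
```

Filtering this early, before scrubbing and sorting, is what reduces the white noise the surrounding text describes.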


2. Simplify and Centralize Your Data Streams

Another problem analytics suites face is reconciling disparate data from multiple streams. Organizations have internal, third-party, customer, and other data that must be considered as part of a larger whole instead of viewed in isolation. Leaving data as-is can be damaging to insights, as different sources may use unique formats or different styles.

Before allowing multiple streams to connect to your data analytics software, your first step should be establishing a process to collect data more centrally and unify it. This centralization not only makes it easier to input data seamlessly into analytics tools, but also simplifies the methodology for users to find and manipulate data. Consider how best to set up your data streams to reduce the number of sources and eventually produce more unified sets.
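One common way to unify disparate streams is to map each source into a shared schema before analytics ever sees the data. The two source formats and field names below are assumptions chosen for the example:

```python
# Sketch of centralizing two streams with different formats into one
# shared schema; the source formats and field names are assumptions.
def from_crm(rec):
    return {"customer": rec["CustomerName"], "spend": float(rec["Total"])}

def from_web(rec):
    return {"customer": rec["user"], "spend": rec["cart_value"]}

def unify(crm_records, web_records):
    """Map each source into the shared schema before analytics sees it."""
    return [from_crm(r) for r in crm_records] + [from_web(r) for r in web_records]

unified = unify(
    [{"CustomerName": "Acme", "Total": "120.50"}],  # CRM export: strings
    [{"user": "acme-web", "cart_value": 30.0}],     # web events: floats
)
print(unified)  # every record now has the same keys and types
```

Once every record shares one schema, downstream tools can treat the streams as a single whole instead of viewing each in isolation.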

3. Scrub Your Data Before Warehousing

The endless stream of data raises questions about quality and quantity. While having more information is preferable, data loses its usefulness when it’s surrounded by noise and irrelevant points. Unscrubbed data sets make it harder to uncover insights, properly manage databases, and access information later.

Before worrying about data warehousing and access, consider the processes in place to scrub data to produce clean sets. Create phases that ensure data relevance is considered while effectively filtering out data that is not pertinent. Additionally, make sure the process is as automated as possible to reduce wasted resources. Implementing functions such as data classification and pre-sorting can help expedite the cleaning process.
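A toy scrubbing pass might normalize values, drop records that are not pertinent, and de-duplicate, all automatically. The field names and rules here are invented for illustration:

```python
# Illustrative scrubbing pass run before warehousing: normalize, drop
# irrelevant rows, and de-duplicate. Field names are assumptions.
def scrub(records):
    seen = set()
    clean = []
    for rec in records:
        if rec.get("amount") is None:
            continue                        # filter: not pertinent without a value
        rec = {**rec, "name": rec["name"].strip().lower()}  # normalize
        key = (rec["name"], rec["amount"])  # classification key for de-duplication
        if key in seen:
            continue
        seen.add(key)
        clean.append(rec)
    return clean

raw = [
    {"name": " Alice ", "amount": 10},
    {"name": "alice", "amount": 10},   # duplicate after normalization
    {"name": "bob", "amount": None},   # noise: no usable value
]
print(scrub(raw))  # a single clean record survives
```

Because each step is deterministic, the whole pass can run unattended in the pipeline, which is what keeps scrubbing from consuming analyst time.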

4. Establish Clear Data Governance Protocols

One of the biggest emerging issues facing data management is data governance. Because of the sensitive nature of many sources—consumer information, sensitive financial details, and so on—concerns about who has access to information are becoming a central topic in data management. Moreover, allowing free access to datasets and storage can lead to manipulation, mistakes, and deletions that could prove damaging.

It’s vital to establish clear and explicit rules about who can access data, when, and how. Creating tiered permission systems (read, read/write, admin) can help limit the exposure to mistakes and danger. Additionally, sorting data in ways that facilitate access to different groups can help manage data access better without the need to give free rein to all team members.
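The tiered permission model mentioned above can be sketched in a few lines; the role names and dataset actions are invented for the example:

```python
# Minimal sketch of a tiered permission model (read, read/write, admin);
# role names and the action mapping are invented for the example.
TIERS = {"read": 1, "read/write": 2, "admin": 3}
GRANTS = {"analyst": "read", "engineer": "read/write", "dba": "admin"}

def can(user_role, action):
    """Allow an action only if the user's tier is at least as high as required."""
    needed = {"read": 1, "write": 2, "delete": 3}[action]
    return TIERS[GRANTS[user_role]] >= needed

print(can("analyst", "read"))    # True
print(can("analyst", "write"))   # False
print(can("engineer", "delete")) # False
print(can("dba", "delete"))      # True
```

Even a simple tier check like this prevents the accidental manipulations and deletions the paragraph above warns about, without blocking legitimate read access.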

5. Create Dynamic Data Structures

Many times, storing data is reduced to a single database that limits how you can manipulate it. Static data structures are effective for holding data, but they are restrictive when it comes to analyzing and processing it. Instead, data managers should place greater emphasis on creating structures that encourage deeper analysis.

Dynamic data structures present a way to store real-time data that allows users to connect points better. Using three-dimensional databases, finding methods to reshape data rapidly, and creating more inter-connected data silos can help contribute to more agile business intelligence. Generate databases and structures that simplify accessing and interacting with data rather than isolating it.

The fields of data management and analytics are constantly evolving. For analytics teams, it’s vital to create infrastructures that are future-proofed and offer the best possible insights for users. By establishing best practices and following them as closely as possible, organizations can significantly enhance the quality of the insights their data produces.



Blog – Sisense

A Culture of Analytics: Why Amazon & Netflix Succeed While Others Fail


When it comes to advanced analytics projects, it often seems like success stories are the exception rather than the rule. Many organizations would like to emulate the data-driven culture displayed by leaders like Amazon, Google, and Netflix, yet have trouble with operationalizing analytics. And there’s broad agreement that analytics and ‘big data’ projects fail at very high rates — more than half the time.

If I had to pick one fundamental reason why, I’d say that it’s because analytics projects are irreducibly complex and multi-faceted: they typically have many moving parts, more than a software project for example. So there are many opportunities to stumble along the way. Projects often fail right at the start, when data is off limits or hard to find. And project goals may shift rapidly: often it’s not known whether the data will confirm hypotheses at all, or yield actionable insights, or support predictive models. There may be problems of scalability, of integrating with operational systems, of model accuracy, and so on. And, in the end, there is the stubborn problem of how to deploy complex workflows and models, often requiring a tedious manual conversion from data scientist code into real-time scoring.

As a result, analytics projects can quickly become overwhelmed with technical details. I believe that an agile, business-driven approach can help ensure success.

Analytics initiatives are often viewed solely through a technical lens, resulting in situations where organizations get lost in the weeds agonizing over statistical models, database structures and platform infrastructure. Amazon and Netflix are able to use data in innovative ways not just because they are technically advanced, but also because they’ve created a “culture of analytics” that pervades every aspect of their business.

Here are four key considerations for executing an analytic strategy that puts the needs of the business first:

  1. Have a purpose. While this might seem obvious, the reality is that many companies lose their way with analytics because they focus first on technical specifications and not enough on a tangible business objective. In effect, they put the technical cart ahead of the business horse.

Let’s step away from analytics for the moment and imagine another type of project — constructing a building. Would you purchase the materials and hire general contractors, plumbers, and electricians before you had a clear understanding of the building’s purpose? Of course not.

Yet all too often, this is exactly how organizations proceed with analytics deployments. They start building infrastructure and hiring data scientists and evaluating technologies (this or that database? Hive or Spark SQL?) long before they’ve specifically defined the business problems and opportunities that can be addressed using analytics. A technology deployment, no matter how seamless or advanced, is not going to magically transform your organization into a data-driven powerhouse like Amazon, especially if its purpose is vague and poorly defined.

To avoid costly mistakes, business end users must be part of the analytics strategy from day one. These are the stakeholders who can weigh in on the use cases that have the most to gain from big data and provide a realistic picture of how analytics would/could fit into the applications and processes that they rely on every day to do their jobs.

And start with just one analytics project, with high business potential, and readily available data. Then figure out what technology you need.

  2. Link “insight” to action. What does it really mean to solve a business problem? In the analytics world, it’s become common to assume that the best way to solve a problem is to provide more information, or “insight,” about it. But that’s the equivalent of having a meeting to address a specific issue and “resolving” it by scheduling another meeting. Insight, to be of any value, must be predictive in nature and – this is the important part – drive action quickly, seamlessly and automatically.

It’s this last step that trips up many organizations. Imagine a company that wants to optimize its Q2 sales. The organization might start by attempting to use analytics to better predict sales volume. But telling the sales team that volume is expected to drop in Q2 and even providing insight into why it might do so is not enough. Predictions should launch tangible actions that sales and marketing can take to turn things around.

Amazon, Google, and Netflix are masters at transforming insight into data-driven action. For example, Amazon uses big data to automatically customize the browsing experience for its customers based on their past purchases, and to optimize sales. Netflix seeks to directly impact customer behavior with data-fueled recommendations and, more recently, has used data to spawn the successful creation of original content that it is confident audiences will like.

Connecting analytics to actual results demands “high resolution” data and predictive analysis that prompts actions within purchasing, sales, lead generation – whatever the business objective may be. But one final piece is needed to make this all simple, and seamless: integration.

  3. Push analytics to business end-points. Most business and customer-facing stakeholders within an organization don’t know Hadoop from NoSQL from Apache Hive. Nor should they. Analytics infrastructure is highly technical and not easy for non-engineering types to interact with and use.

Successful analytic cultures find ways to push outputs to, and integrate seamlessly with, the go-to applications that real-world sales, marketing, procurement and other business-level decision-makers use on a regular basis. These include tools like Salesforce.com, Marketo, and Zendesk. A “touchpoint” approach to analytics concentrates on how and where analytics will realistically be utilized and ensures that lessons learned are integrated into these applications — a critical part of connecting insight to action.

  4. Create feedback loops. Finally, an analytics strategy is never “done.” This is another reason to avoid getting stuck in the weeds by laboriously perfecting statistical models over many months. The analytic climate is always changing for most organizations as new data sources become available, and as business opportunities, challenges and priorities evolve.

Organizations should, therefore, focus on a flexible analytics infrastructure that is not only able to meet a number of diverse requirements across the enterprise, but also one nimble enough to adapt quickly to the needs of the business. This is where the “agile” movement in software development can serve as a useful example: instead of aiming for a “perfect” solution, focus on rapid development of fresh models that can be pushed quickly into production. Then, monitor performance with A/B testing and use “feedback loops” to continually test, refine and improve analytic applications and begin the cycle again. In this way, organizations can rapidly and continuously get actionable data out to the users who need it, even as needs evolve.
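The A/B-testing feedback loop described above can be sketched as a comparison of two model variants on logged outcomes; the conversion numbers below are synthetic assumptions, not real data:

```python
# Toy feedback loop comparing two model variants on logged outcomes;
# the numbers are synthetic assumptions, not real data.
observed = {
    "model_a": {"shown": 1000, "converted": 104},
    "model_b": {"shown": 1000, "converted": 123},
}

def conversion_rate(stats):
    return stats["converted"] / stats["shown"]

rates = {m: conversion_rate(s) for m, s in observed.items()}
winner = max(rates, key=rates.get)
print(rates)             # {'model_a': 0.104, 'model_b': 0.123}
print("promote", winner) # push the winner to production, then begin the cycle again
```

In practice a real test would also check statistical significance before promoting a variant, but the loop structure, measure, compare, promote, repeat, is the same.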

The most advanced analytics-driven companies in the world give their employees remarkably free access to their codebases. They encourage safe experimentation with open access to source-code, rigorous code reviews, clearly-defined metrics of success, and instantaneous feedback loops.

Many companies want to be able to emulate data darlings like Amazon, Google, and Netflix. But those organizations have succeeded because they have wisely integrated analytics into the very fabric of their business. Before jumping into the deep end with highly complex technologies and advanced algorithms, companies just getting started should first address low-hanging fruit and build analytic applications that are valuable to actual business users. Don’t let the perfect database or latest and greatest statistical model get in the way of achievable results.

(This blog entry is based on an article that originally ran in insideBigData.)


The TIBCO Blog

MapR platform update brings major AI and Analytics innovation…through the file system

From the very beginning of its entry to the market, MapR focused on the file system as an axis of innovation. Recognizing that the native Hadoop Distributed File System’s (HDFS’s) inability to accommodate updates to files was a major blocker for lots of Enterprise customers, MapR put an HDFS interface over the standard Network File System (NFS) to make that constraint go away.

While seemingly simple, the ability for Hadoop to see a standard file system as its own meant that lots of data already present in the Enterprise could be processed by Hadoop. It also meant that non-Hadoop systems could share that data and work cooperatively with Hadoop. That made Hadoop more economical, more actionable and more relevant. For many customers, it transformed Hadoop from a marginal technology to a critical one.

Back to the file system
While MapR has subsequently innovated on the database, streaming and edge computing layers, and has embraced container technology, it is today announcing a major platform update that goes back to file system innovation. But this time it’s not just about making files update-able; it’s about integrating multiple file system technologies, on-premises and in the cloud, and making them work together.

Also read: Kafka 0.9 and MapR Streams put streaming data in the spotlight
Also read: MapR gets container religion with Platform for Docker

The core of the innovation is around integration between the MapR file system (MapR-FS) and Amazon’s Simple Storage Service (S3) file system protocols. This integration manifests in more than one form, and there’s some subtlety here, so bear with me.

S3, for two
The first integration point is support for an S3 interface over MapR-FS, via the new MapR Object Data Service. This allows applications that are S3-compatible to read and write data stored in MapR-FS. Since the S3 protocol is supported not just by S3 itself, but also by on-premises file systems, the ecosystem support for the protocol is robust. Now MapR-FS is part of that ecosystem.


MapR’s Object Data Services

Credit: MapR

But the integration doesn’t end there; it works in the other direction too. That is to say that S3-compatible storage volumes, including actual S3 buckets in the Amazon Web Services (AWS) cloud, can be federated into MapR-FS, providing a more economical storage option to accommodate data to which applications need only infrequent access.

Premium tiers
MapR-FS now also incorporates erasure coding for fast ingest, ideally on solid state disk (SSD) media. Together with standard S3-compatible storage and native MapR-FS, this allows for full-on storage tiering, enabling what MapR calls a “multi-temperature” data platform. Customers can put hot (frequently-accessed) data on the performance-optimized SSDs; warm (infrequently accessed) data on conventional spinning disks, and cold (rarely accessed) data on S3-compatible storage, including Amazon S3 itself.

Tiered storage is the enabler for keeping all data accessible, in an economically-efficient fashion. That in turn allows for analytics and AI to be far more effective and powerful. You never know when that old data will be important in a particular analysis exercise. And sometimes the best machine learning models are the ones that have been built on deep, historical data, in addition to the more recently-collected variety.
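A "multi-temperature" placement policy like the one described can be sketched as a mapping from access recency to tier; the age thresholds here are illustrative assumptions, not MapR's actual defaults:

```python
import time

# Toy "multi-temperature" placement policy; the thresholds are
# illustrative assumptions, not MapR's actual defaults.
DAY = 86400  # seconds

def tier_for(last_access_ts, now=None):
    """Map a file's last-access time to a storage tier."""
    now = time.time() if now is None else now
    age_days = (now - last_access_ts) / DAY
    if age_days <= 7:
        return "hot (SSD)"
    if age_days <= 90:
        return "warm (HDD)"
    return "cold (S3-compatible object store)"

now = 1_000 * DAY
print(tier_for(now - 2 * DAY, now))    # hot (SSD)
print(tier_for(now - 30 * DAY, now))   # warm (HDD)
print(tier_for(now - 400 * DAY, now))  # cold (S3-compatible object store)
```

In the actual platform this decision is driven by declarative policy rather than application code, but the classification logic is the same in spirit.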

Don’t just make it possible; make it easy
But tiered storage can’t enable all that if it’s just a manual storage strategy. Luckily, this new MapR platform release makes the placement of different data on different media automated, through declarative policy, and all the data tiers are federated in a single namespace so that they feel like a single file system.

There’s much more:

  • Important performance optimizations, including the location of metadata and file stubs in the native MapR-FS layer for S3 data
  • Security features like automatic encryption of all data by default and Secure File-based services with NFSv4
  • Simple GET and PUT operations to move data physically between tiers
  • Strong features like the scheduled or automatic file recall to move data from higher-latency tiers to lower-latency tiers when it becomes newly-relevant
  • Support for fault tolerance features like disaster recovery clusters in the cloud through mirroring from the MapR cluster to MapR-XD cloud storage in AWS, Google Cloud Platform and Microsoft Azure

Also read: MapR diversifies to cloud storage market
Also read: MapR File System selected by SAP for cloud storage layer

In addition to the above, MapR’s integration of Apache Spark 2.3 and Drill 1.14; support for Kafka KSQL; and MapR-DB language bindings for Python and Node.js make analytics and AI more accessible to a variety of developers and business users. This accessibility is an excellent complement to the extra enablement provided by the tiered storage.

Parting thoughts
The heart of big data analytics and indeed AI involves high volumes of raw data stored as flat (delimited, JSON, XML, etc.) files. That makes the file system itself critical in operationalizing and optimizing analytics and AI. Adding abstraction layers across the many different storage technologies and locations available today, both on-prem and cloud-based, is key to breaking data silos and making the necessary data easily accessible. And that, in turn, is what makes superior analytics and machine learning possible.

This latest MapR platform release will be available in the third quarter of this year, i.e. within the next three calendar months.


ZDNet | crm RSS

Data Science and Visual Analytics for Operations in the Energy Sector


In recent years, Oil and Gas Companies have been challenged to adapt to lower crude prices. With the recent crude price increase, there has never been a better time for energy companies to transform their operations.

From upstream exploration and production to logistics, downstream refining, energy trading, and portfolio investments, there are opportunities for optimization. All of these areas benefit from today’s advances in data science and visual analytics. Over the past few years, many companies were forced to reduce costs or consolidate; it was a period of survival. Now, the successful companies of the future are digitizing smarter.

Driving business operations from analytic insights applies to many facets of the digital energy business including:

Modernized Grids and Smarter Oilfields

With TIBCO Systems of Insight:

  • Analysts can create self-service analytic apps to deliver insights into all aspects of a process, quality, and costs.
  • Data scientists can develop machine learning intelligence into sensors, processes, and equipment to reduce data bottlenecks and take action at the point of impact.  
  • Operations and IT developers can empower more users and scale complex, computationally intensive workloads in the cloud.

Asset Portfolio Value Optimization

Using Spotfire, analysts can invoke smart data wrangling, data science, and advanced geoanalytics to develop accurate valuations of assets and resource plays for optimal capital allocation. Spotfire community templates for decline curve analysis and geoanalytics enable these sophisticated calculations to run with point-and-click configuration, invoking Spotfire’s powerful built-in TIBCO Runtime R engine.
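Decline curve analysis itself is standard reservoir engineering: the Spotfire templates run it through the built-in R engine, but the underlying Arps model can be sketched in a few lines. The parameter values below (initial rate, nominal decline, b-factor) are invented for illustration:

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline curve: production rate at time t (t in the same units as di)."""
    if b == 0:                                     # exponential decline
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)    # hyperbolic decline

# Illustrative parameters (invented): 1,000 bbl/day initial rate,
# 0.1/month nominal decline, hyperbolic exponent b = 0.5.
rates = [round(arps_rate(1000.0, 0.1, 0.5, t), 1) for t in range(0, 25, 6)]
print(rates)  # a monotonically declining production profile over two years
```

Fitting qi, di, and b to observed production history, and then integrating the fitted curve, is what turns this into an estimated ultimate recovery for asset valuation.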

Predictive Maintenance, Process Control, and Process Optimization

Spotfire and TIBCO Statistica can readily analyze large amounts of data from internal and external IoT data sources. The combination of your industry expertise with TIBCO’s latest visual, predictive, and prescriptive analytics techniques enables you to address all of your process and equipment surveillance challenges.
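As a concrete illustration of equipment surveillance on streaming sensor data, a trailing-window z-score check is a common baseline technique. This is a generic Python sketch, not TIBCO API code; the window size and threshold are assumptions:

```python
from collections import deque
from statistics import mean, stdev

def rolling_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    from the trailing-window mean of recent sensor values."""
    recent = deque(maxlen=window)
    flags = []
    for x in readings:
        if len(recent) >= 2 and stdev(recent) > 0:
            z = (x - mean(recent)) / stdev(recent)
            flags.append(abs(z) > threshold)
        else:
            flags.append(False)  # not enough history yet
        recent.append(x)
    return flags
```

A production system would layer model-based scoring on top, but even this baseline shows the shape of the problem: compare each new reading against recent behavior and act on the outliers.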

Business Operations and Supply Chain Management

Provide managers, engineers, and business users self-service access to data, visualizations, ​and analytics for visibility across the entire value chain. Respond to evolving needs and deliver actionable insights that enable people and systems to make smarter decisions. Reduce time spent on compliance reporting and auditing.

Energy Trading

Develop insights faster and bring clarity to business issues in a way that gets all the traders, managers, and financial decision-makers on the same page quickly. For companies trading in multiple commodities, TIBCO Connected Intelligence can be deployed as a single analytics platform that brings a consolidated view of risks and positions, compliance, and results. Read more about it.

Learn More Firsthand

Listen to TIBCO’s Chief Analytics Officer Michael O’Connell explain how companies are leveraging the latest Spotfire innovations, optimizing exploration and production efforts and investments, and gaining a decisive advantage. And hear Stephen Boyd from Chevron present a real-world case study on TIBCO Connected Intelligence. Register now for the quarterly Houston area TIBCO Spotfire® User Group Meeting taking place on Thursday, June 14th, at the Hilton Garden Inn. Or find a Spotfire Meetup near you.


The TIBCO Blog

Salesforce Adds Google Analytics Capabilities to Marketing Cloud

Salesforce on Wednesday announced integrations between its Marketing Cloud and Google Analytics 360 that equip marketers with a broad range of new capabilities, including the following:

  • View consumer insights from both Marketing Cloud and Analytics 360 in a single journey analytics dashboard to better analyze cross-channel engagement. For example, if marketers see a high rate of browse abandons — customers who click through an email offering but don’t buy — they can create a tailored journey to re-engage them.
  • Create audiences in Google Analytics 360 based on their interactions with Marketing Cloud campaigns and deliver Web pages with customized content to those audiences when they click on a link in an email offer.
  • Gain a better understanding of how their content influences transactions, through the availability of Marketing Cloud data in Analytics 360. This lets them unearth macro trends in customer journeys and make strategy tweaks to maximize conversions.
  • Quickly authenticate to Analytics 360 from the Marketing Cloud, and tag each email with the Analytics 360 campaign parameters without having to seek help from IT.
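Campaign tagging of email links generally means appending the standard UTM query parameters that Analytics 360 recognizes. A sketch of what such tagging involves, using plain Python standard-library URL handling (this is not the Marketing Cloud feature itself, and the parameter values are made up):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_with_utm(url, source, medium, campaign):
    """Append standard Google Analytics UTM campaign parameters to a link,
    preserving any query parameters already on the URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

tagged = tag_with_utm("https://example.com/offer?ref=abc",
                      "marketing_cloud", "email", "spring_sale")
```

Analytics 360 then attributes any session that starts from the tagged link back to the named campaign, which is what enables the cross-channel reporting described above.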


Visualize Google Analytics 360 Web interaction data directly in your Salesforce Marketing Cloud dashboard.

In Q3, a beta rollout will let marketers create audiences in Analytics 360 — such as category buyers, loyalty members and abandoned browsers — and activate them for engagement within Marketing Cloud by, for example, adding a specific audience to a re-engagement journey.

“This is about deeper integrations,” noted Ray Wang, principal analyst at Constellation Research.

“As Adobe has gotten closer to Microsoft, Salesforce has done the same with Google,” he told CRM Buyer. “Adobe does a great job with their analytics and machine learning portfolio.”

What Worked and Why

The integrations are part of a strategic alliance between Salesforce and Google. Salesforce has named G Suite as its preferred email and productivity provider, and plans to use the Google Cloud Platform for its core services in connection with its international infrastructure expansion.

Viewing consumer insights from Marketing Cloud and Analytics 360 in one dashboard is important because “marketers want to know what campaigns worked and why,” Wang said. “They want to know what their customers need and how to predict demand or interest.”

Being able to leverage information about responses in a campaign to deliver tailored Web content “helps in reducing useless interactions and understanding if you have enough context to be relevant,” he pointed out. “The goal is to leverage as many channels as you can at the right time in the right manner.”

All About Personalization

The new integrations between Google and Salesforce Marketing Cloud “will streamline the process for marketers and make it easier to provide real personalization at scale in customer engagement,” said Rebecca Wettemann, VP of research at Nucleus Research.

“Marketers have been talking about personalization for a long time, but this is really about bringing the data at scale so they can truly execute on it,” she told CRM Buyer.

Adobe earlier this year enhanced its Adobe Target personalization tools. Last month, it announced plans to open up its data science and algorithmic optimization capabilities in Adobe Target, along with other new capabilities.

Pegasystems last week rolled out Pega Infinity, its next-generation digital transformation platform.

Pega Infinity builds on “the continued innovation of our AI-driven marketing capabilities,” remarked Jeff Nicholson, VP of CRM product marketing at Pegasystems, giving clients “the clearest path … to make the long-discussed vision of one-to-one engagement a reality.”

Marketing tech has gone far beyond “simple measurement of campaign effectiveness,” Nicholson told CRM Buyer. “The key to the new generation of tools is not to provide just information to the marketer, but actual AI-powered real-time insight and guidance.”

The aim is for companies to convert “nuggets of insights into actual action and improve their customers’ experience, along with their bottom line,” he observed.

“By improving contextual relevancy, customers and clients will more than likely engage,” said Constellation’s Wang. “If they engage more, you’ll more likely convert. That’s the whole point here.”

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology.


CRM Buyer

Analytics Reveal What New Credit UK Consumers Can Afford


In the UK, affordability risk has come under increased scrutiny from the Financial Conduct Authority, which, in consultation with lenders, has begun a process of stricter controls over the treatment of consumers and the assessment of their financial vulnerabilities. The goal is to stem the rising number of British borrowers in a state of persistent debt.

At Money20/20 today in Amsterdam, we took an important step toward helping UK lenders crack this puzzle. Together with our partner Equifax, we launched a product to address the combined issue of credit and affordability risk, initially in the UK market.

This product stems from extensive research into affordability risk, summarized in a recent white paper written by my colleague, Dr. Andrew Jennings. It’s a hot issue because consumers have a huge appetite for credit, and more “credit-hungry” consumers will normally present a greater risk to lenders. Yet it has been very difficult for a lender to gauge whether a given consumer can absorb more credit and pay off the required instalments without placing unbearable stress on their finances.

During origination, a lender can ask for evidence of income and expenditure if required, and also use data available from sources such as credit bureaux to try to understand the financial circumstances of the applicant. But what happens after a loan is approved, when, say, a lender wants to increase a customer’s credit line, or cross-sell them a new credit product?

It is infeasible to continually request paperwork from a consumer in order to keep assessing affordability risk, and in many respects the pressure a consumer faces in meeting their financial obligations cannot be measured directly.

The FICO® Risk and Affordability Decision Suite, powered by Equifax, is the result of significant research and development by FICO and Equifax. It combines a suite of more than 46 curated decision keys, including five analytics from both FICO and Equifax.

These include:

  • A new consumer-level FICO® Customer Management Score for risk assessment, developed using a combination of traditional methodologies and machine learning techniques
  • A Balance Change Sensitivity Index that identifies which customers would have a significant change in Probability of Default (PD) if they had a sizeable change in their credit card balance
  • An Indebtedness Score, developed on a consumer-level outcome definition to identify customers that are more likely to fall into arrears due to unsustainable credit commitments and higher debt-to-income levels
  • An Affordability Index, which uses trended information on the level and consistency of the funding of the customer’s principal current account and overdraft utilisation to provide new insight into a customer’s cash-flow and affordability position.
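To make the balance-change idea concrete: under any PD model, the sensitivity is simply the difference between the predicted PD before and after a hypothetical balance shock. A minimal sketch using a made-up logistic model (the intercept and coefficient below are illustrative assumptions, not FICO’s actual analytics):

```python
import math

def pd_logistic(balance, intercept=-4.0, coef=0.002):
    """Toy logistic model mapping a card balance to a probability
    of default. Coefficients are illustrative only."""
    return 1.0 / (1.0 + math.exp(-(intercept + coef * balance)))

def balance_change_sensitivity(balance, shock):
    """Change in predicted PD if the balance moves by `shock`:
    the core idea behind a balance-change sensitivity measure."""
    return pd_logistic(balance + shock) - pd_logistic(balance)
```

Customers whose sensitivity is large for a plausible shock are the ones whose risk profile would shift materially if their balance grew, which is what an index of this kind is designed to surface.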

When considering the affordability stress on a consumer, it is critical to use the most up-to-date data. This fully managed, end-to-end solution streamlines the delivery of data from Equifax and presents it to the decision processing system of a lender or processor on a daily basis, just in time for account cycling and decisioning.

For more information on the research behind this solution, read our white paper, A New Challenge for Risk Management: Understanding Consumer Affordability Risk.
