Category Archives: BI News and Info

Enhance Your Call Center Operations with RapidMiner Server

Many past webinars have focused on the technical details of RapidMiner products, but last week’s webinar put into context what it takes to put models into production and deliver business outcomes using RapidMiner Studio and RapidMiner Server.

In the webinar, Derek Wilson, president and CEO of CDO Advisors LLC, walks through three use cases for call center optimization (agent churn, customer churn, and cross-selling) and shows how to put the resulting predictive models into production to lower costs and increase customer satisfaction.

Agent & Customer Churn—RapidMiner Studio to RapidMiner Server

Building an accurate predictive model is important, but the real goal is making the model outcome actionable. The outcome of this model will help you discover which agent and customer attributes lead to high attrition. That information can then be translated into hiring and training practices that screen out candidates with high-attrition attributes and retain the best agents, or into process changes that address the drivers of customer attrition and move customers to services that fit them better.

First, feed in agent performance metrics, call center statistics, and HR data (or, for customer churn, CRM and operations data). Then present the model’s output to your call center management team as a decision tree, so that the results are comprehensible to people who are not data scientists.

[Figures: basic agent churn workflow and advanced agent churn workflow]

Next, bring in RapidMiner Server so that you can operationalize the model and track key attributes over time without manually re-running it in Studio. Simply deploy your RapidMiner Studio model to RapidMiner Server, run a query directly against the database, and, once the process is automated, export the results and build reports on top of them.

You can schedule the model, save the results, and provide reports to your management team. Incorporating Server takes your models to the next level, where your business questions get answered with action. The figures above show the difference between the basic agent churn workflow and the advanced workflow once Server comes into play.

Cross-Selling Opportunities

Can you predict which customers are most likely to respond to a marketing campaign? All you need is information on how past customers responded, applied to an active campaign list. The input is the same as for your customer churn model. If the model accurately predicts which customers are most likely to accept a new product, you can target those customers and spend your marketing budget only where it is likely to pay off.

[Figure: home loan cross-sell association rules in RapidMiner Server]

Let’s build a financial cross-sell model and output the files to the marketing and sales teams. RapidMiner Server saves the day again: after you publish your RapidMiner Studio models to it, you can create automations and build association rules. Above, we are looking at the lift association rule built in RapidMiner Server for a model that concludes that if a customer has an auto loan, they should have a home loan. Information like this helps you create campaigns that target people within different subsets or parameters, so the marketing team can make its campaigns as specific and strategic as possible.

You can publish the output of your model to SQL, wrap it with any reporting tool you’d like, and share the premises, conclusions, confidence, and lift values with various teams within your company by feeding your SQL Server table back into your CRM engine.


Using RapidMiner Server and RapidMiner Studio together helps you get more targeted results that benefit your whole company and builds a foundation of trust between teams.

Watch the on-demand webinar to see the full demonstration, master these business strategies, and find out how RapidMiner can help you achieve your goals.


RapidMiner

Cumulative Update #5 for SQL Server 2016 SP1

The fifth cumulative update release for SQL Server 2016 SP1 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.

To learn more about the release or servicing model, please visit:


SQL Server Release Services

Bug in 2-D integration?

Integrate[
 1/(1 + z^2) Exp[-r^2/(1 + z^2)] r
 , {r, 0, Infinity}
 , {z, -a, a}
 ]

0

Integrate[
 1/(1 + z^2) Exp[-r^2/(1 + z^2)] r
 , {z, -a, a}
 , {r, 0, Infinity}
 ]

a

Just changing the order of the integration limits changes the result. The correct answer is a (the parameter a is otherwise undefined). This is in Mathematica version 10.4.0.0.


Recent Questions – Mathematica Stack Exchange

Omnichannel Vs. Multichannel: What’s The Difference And Who Is Doing It?


The Japanese culture has always shown a special reverence for its elderly. That’s why, in 1963, the government began a tradition of giving a silver dish, called a sakazuki, to each citizen who reached the age of 100 by Keiro no Hi (Respect for the Elders Day), which is celebrated on the third Monday of each September.

That first year, there were 153 recipients, according to The Japan Times. By 2016, the number had swelled to more than 65,000, and the dishes cost the already cash-strapped government more than US$ 2 million, Business Insider reports. Despite the country’s continued devotion to its seniors, the article continues, the government felt obliged to downgrade the finish of the dishes to silver plating to save money.

What tends to get lost in discussions about automation taking over jobs and Millennials taking over the workplace is the impact of increased longevity. In the future, people will need to be in the workforce much longer than they are today. Half of the people born in Japan today, for example, are predicted to live to 107, making their ancestors seem fragile, according to Lynda Gratton and Andrew Scott, professors at the London Business School and authors of The 100-Year Life: Living and Working in an Age of Longevity.

The End of the Three-Stage Career

Assuming that advances in healthcare continue, future generations in wealthier societies could be looking at careers lasting 65 or more years, rather than at the roughly 40 years for today’s 70-year-olds, write Gratton and Scott. The three-stage model of employment that dominates the global economy today—education, work, and retirement—will be blown out of the water.

It will be replaced by a new model in which people continually learn new skills and shed old ones. Consider that today’s most in-demand occupations and specialties did not exist 10 years ago, according to The Future of Jobs, a report from the World Economic Forum.

And the pace of change is only going to accelerate. Sixty-five percent of children entering primary school today will ultimately end up working in jobs that don’t yet exist, the report notes.

Our current educational systems are not equipped to cope with this degree of change. For example, roughly half of the subject knowledge acquired during the first year of a four-year technical degree, such as computer science, is outdated by the time students graduate, the report continues.

Skills That Transcend the Job Market

Instead of treating post-secondary education as a jumping-off point for a specific career path, we may see a switch to a shorter school career that focuses more on skills that transcend a constantly shifting job market. Today, some of these skills, such as complex problem solving and critical thinking, are taught mostly in the context of broader disciplines, such as math or the humanities.

Other competencies that will become critically important in the future are currently treated as if they come naturally or over time with maturity or experience. We receive little, if any, formal training, for example, in creativity and innovation, empathy, emotional intelligence, cross-cultural awareness, persuasion, active listening, and acceptance of change. (No wonder the self-help marketplace continues to thrive!)

These skills, which today are heaped together under the dismissive “soft” rubric, are going to harden up to become indispensable. They will become more important, thanks to artificial intelligence and machine learning, which will usher in an era of infinite information, rendering the concept of an expert in most of today’s job disciplines a quaint relic. As our ability to know more than those around us decreases, our need to be able to collaborate well (with both humans and machines) will help define our success in the future.

Individuals and organizations alike will have to learn how to become more flexible and ready to give up set-in-stone ideas about how businesses and careers are supposed to operate. Given the rapid advances in knowledge and attendant skills that the future will bring, we must be willing to say, repeatedly, that whatever we’ve learned to that point doesn’t apply anymore.

Careers will become more like life itself: a series of unpredictable, fluid experiences rather than a tightly scripted narrative. We need to think about the way forward and be more willing to accept change at the individual and organizational levels.

Rethink Employee Training

One way that organizations can help employees manage this shift is by rethinking training. Today, overworked and overwhelmed employees devote just 1% of their workweek to learning, according to a study by consultancy Bersin by Deloitte. Meanwhile, top business leaders such as Bill Gates and Nike founder Phil Knight spend about five hours a week reading, thinking, and experimenting, according to an article in Inc. magazine.

If organizations are to avoid high turnover costs in a world where the need for new skills is shifting constantly, they must give employees more time for learning and make training courses more relevant to the future needs of organizations and individuals, not just to their current needs.

The amount of learning required will vary by role. That’s why at SAP we’re creating learning personas for specific roles in the company and determining how many hours will be required for each. We’re also dividing up training hours into distinct topics:

  • Law: 10%. This is training required by law, such as training to prevent sexual harassment in the workplace.

  • Company: 20%. Company training includes internal policies and systems.

  • Business: 30%. Employees learn skills required for their current roles in their business units.

  • Future: 40%. This is internal, external, and employee-driven training to close critical skill gaps for jobs of the future.

In the future, we will always need to learn, grow, read, seek out knowledge and truth, and better ourselves with new skills. With the support of employers and educators, we will transform our hardwired fear of change into excitement for change.

We must be able to say to ourselves, “I’m excited to learn something new that I never thought I could do or that never seemed possible before.” D!



Digitalist Magazine


How can I find lists of front-end option values?


Things like Slider have special option values like Appearance -> "LeftArrow". How can I find all of the possible values it takes, since I can’t even look at the DownValues for it?



Recent Questions – Mathematica Stack Exchange

New Uber Leadership SVP Talks Trust, Diversity, And Building High-Performing Teams

When outspoken venture capitalist and Netscape co-founder Marc Andreessen wrote in The Wall Street Journal in 2011 that software is eating the world, he was only partly correct. In fact, business services based on software platforms are what’s eating the world.

Companies like Apple, which remade the mobile phone industry by offering app developers easy access to millions of iPhone owners through its iTunes App Store platform, are changing the economy. However, these world-eating companies are not just in the tech world. They are also emerging in industries that you might not expect: retailers, finance companies, transportation firms, and others outside of Silicon Valley are all at the forefront of the platform revolution.

These outsiders are taking platforms to the next level by building them around business services and data, not just apps. Companies are making business services such as logistics, 3D printing, and even roadside assistance for drivers available through a software connection that other companies can plug in to and consume or offer to their own customers.

There are two kinds of players in this business platform revolution: providers and participants. Providers create the platform and create incentives for developers to write apps for it. Developers, meanwhile, are participants; they can extend the reach of their apps by offering them through the platform’s virtual shelves.

Business platforms let companies outside of the technology world become powerful tech players, unleashing a torrent of innovation that they could never produce on their own. Good business platforms create millions in extra revenue for companies by enlisting external developers to innovate for them. It’s as if strangers are handing you entirely new revenue streams and business models on the street.

Powering this movement are application programming interfaces (APIs) and software development kits (SDKs), which enable developers to easily plug their apps into a platform without having to know much about the complex software code that drives it. Developers get more time to focus on what they do best: writing great apps. Platform providers benefit because they can offer many innovative business services to end customers without having to create them themselves.

Any company can leverage APIs and SDKs to create new business models and products that might not, in fact, be its primary method of monetization. However, these platforms give companies new opportunities and let them outflank smaller, more nimble competitors.

Indeed, the platform economy can generate unbelievable revenue streams for companies. According to Platform Revolution authors Geoffrey G. Parker, Marshall W. Van Alstyne, and Sangeet Paul Choudary, travel site Expedia makes approximately 90% of its revenue by making business services available to other travel companies through its API.

In TechCrunch in May 2016, Matt Murphy and Steve Sloane wrote that “the number of SaaS applications has exploded and there is a rising wave of software innovation in APIs that provide critical connective tissue and increasingly important functionality.” ProgrammableWeb.com, an API resource and directory, offers searchable access to more than 15,000 different APIs.

According to Accenture Technology Vision 2016, 82% of executives believe that platforms will be the “glue that brings organizations together in the digital economy.” The top 15 platforms (which include companies built entirely on this software architecture, such as eBay and Priceline.com) have a combined market capitalization of US$ 2.6 trillion.

It’s time for all companies to join the revolution. Whether working in alliance with partners or launching entirely in-house, companies need to think about platforms now, because they will have a disruptive impact on every major industry.


To the Barricades

Several factors converged to make monetizing a company’s business services easier. Many of the factors come from the rise of smartphones, specifically the rise of Bluetooth and 3G (and then 4G and LTE) connections. These connections turned smartphones into consumption hubs that weren’t feasible when high-speed mobile access was spottier.

One good example of this is PayPal’s rise. In the early 2000s, it functioned primarily as a standalone web site, but as mobile purchasing became more widespread, third-party merchants clamored to integrate PayPal’s payment processing service into their own sites and apps.

In Platform Revolution, Parker, Van Alstyne, and Choudary claim that “platforms are eating pipelines,” with pipelines being the old, direct-to-consumer business methods of the past. The first stage of this takeover involved much more efficient digital pipelines (think of Amazon in the retail space and Grubhub for food delivery) challenging their offline counterparts.

What Makes Great Business Platforms Run?


The quality of the ecosystem that powers your platform is as important as the quality of experience you offer to customers. Here’s how to do it right.

Although the platform economy depends on them, application programming interfaces (APIs) and software development kits (SDKs) aren’t magic buttons. They’re tools that organizations can leverage to attract users and developers.

To succeed, organizations must ensure that APIs include extensive documentation and are easy for developers to add into their own products. Another part of platform success is building a general digital enterprise platform that includes both APIs and SDKs.

A good platform balances ease of use, developer support, security, data architecture (that is, will it play nice with a company’s existing systems?), edge processing (whether analytics are processed locally or in the cloud), and infrastructure (whether a platform provider operates its own data centers and cloud infrastructure or uses public cloud services). The exact formula for which elements to embrace, however, will vary according to the use case, the industry, the organization, and its customers.

In all cases, the platform should offer a value proposition that’s a cut above its competitors. That means a platform should offer a compelling business service that is difficult to duplicate.

By creating open standards and easy-to-work-with tools, organizations can greatly improve the platforms they offer. APIs and SDKs may sound complicated, but they’re just tools for talented people to do their jobs with. Enable these talented people, and your platform will take off.

In the second stage, platforms replace pipelines. Platform Revolution’s authors write: “The Internet no longer acts merely as a distribution channel (a pipeline). It also acts as a creation infrastructure and a coordination mechanism. Platforms are leveraging this new capability to create entirely new business models.” Good examples of second-stage companies include Airbnb, DoubleClick, Spotify, and Uber.

Allstate Takes Advantage of Its Hidden Jewels

Many companies taking advantage of platforms were around long before APIs, or even the internet, existed. Allstate, one of the largest insurers in the United States, has traditionally focused on insurance services. But recently, the company expanded into new markets—including the platform economy.

Allstate companies Allstate Roadside Services (ARS) and Arity, a technology company founded by Allstate in late 2016, have provided their parent company with new sources of revenue, thanks to new offerings. ARS launched Good Hands Rescue APIs, which allow third parties to leverage Allstate’s roadside assistance network in their own apps. Meanwhile, Arity offers a portfolio of APIs that let third parties leverage Allstate’s aggregate data on driver behavior and intellectual property related to risk prediction for uses spanning mobility, consumer, and insurance solutions.

For example, Verizon licenses an Allstate Good Hands Rescue API for its own roadside assistance app. And automakers GM and BMW also offer roadside assistance service through Allstate.

Potential customers for Arity’s API include insurance providers, shared mobility companies, automotive parts makers, telecoms, and others.

“Arity is an acknowledgement that we have to be digital first and think about the services we provide to customers and businesses,” says Chetan Phadnis, Arity’s head of product development. “Thinking about our intellectual property system and software products is a key part of our transformation. We think it will create new ways to make money in the vertical transportation ecosystem.”

One of Allstate’s major challenges is a change in auto ownership that threatens the traditional auto insurance model. No-car and one-car households are on the rise, ridesharing services such as Uber and Lyft work on very different insurance models than passenger cars or traditional taxi companies, and autonomous vehicles could disrupt the traditional auto insurance model entirely.

This means that companies like Allstate are smart to look for revenue streams beyond traditional insurance offerings. The intangible assets that Allstate has accumulated over the years—a massive aggregate collection of driver data, an extensive set of risk models and predictive algorithms, and a network of garages and mechanics to help stranded motorists—can also serve as a new revenue stream for the future.

By offering two distinct API services for the platform economy, Allstate is also able to see what customers might want in the future. While the Good Hands Rescue APIs let third-party users integrate a specific service (such as roadside assistance) into their software tools, Arity instead lets third-party developers leverage huge data sets as a piece of other, less narrowly defined projects, such as auto maintenance. As Arity gains insights into how customers use and respond to those offerings, it gets a preview into potential future directions for its own products and services.


Farmers Harvest Cash from a Platform

Another example of innovation fueling the platform economy doesn’t come from a boldfaced tech name. Instead, it comes from a relatively small startup that has nimbly built its business model around data with an interesting twist: it turns its customers into entrepreneurs.

Farmobile is a Kansas City–based agriculture tech company whose smart device, the Passive Uplink Connection (PUC), can be plugged into tractors, combines, sprayers, and other farm equipment.

Farmobile uses the PUC to enable farmers to monetize data from their fields, which is one of the savviest routes to success with platforms—making your platform so irresistible to end consumers that they foment the revolution for you.

Once installed, says CEO Jason Tatge, the PUC streams second-by-second data to farmers’ Farmobile accounts. This gives them finely detailed reports, called Electronic Field Records (EFRs), that they can use to improve their own business, share with trusted advisors, and sell to third parties.

The PUC gives farmers detailed records for tracking analytics on their crops, farms, and equipment and creates a marketplace where farmers can sell their data to third parties. Farmers benefit because they generate extra income; Farmobile benefits because it makes a commission on each purchase and builds a giant store of aggregated farming data.

This last bit is important if Farmobile is to successfully compete with traditional agricultural equipment manufacturers, which also gather data from farmers. Farmobile’s advantage (at least for now) is that the equipment makers limit their data gathering to their existing customer bases and sell it back to them in the form of services designed to improve crop yields and optimize equipment performance.

Farmobile, meanwhile, is trying to appeal to all farmers by sharing the wealth, which could help it leapfrog the giants that already have large customer bases. “The ability to bring data together easily is good for farmers, so we built API integrations to put data in one place,” says Tatge.

Farmers can resell their data on Farmobile’s Data Store to buyers such as reinsurance firm Guy Carpenter. To encourage farmers to opt in, says Tatge, “we told farmers that if they run our device over planting and harvest season, we can guarantee them $ 2 per acre for their EFRs.”

So far, Farmobile’s customers have sent the Data Store approximately 4,200 completed EFRs for both planting and harvest, which will serve as the backbone of the company’s data monetization efforts. Eventually, Farmobile hopes to expand the offerings on the Data Store to include records from at least 10 times as many different farm fields.


Under Armour Binges on APIs

Another model for the emerging business platform world comes from Under Armour, the sports apparel giant. Alongside its very successful clothing and shoe lines, Under Armour has put its platform at the heart of its business model.

But rather than build a platform itself, Under Armour has used its growing revenues to create an industry-leading ecosystem. Over the past decade, it has purchased companies that already offer APIs, including MapMyFitness, Endomondo, and MyFitnessPal, and then linked them all together into a massive platform that serves 30 million consumers.

This strategy has made Under Armour an indispensable part of the sprawling mobile fitness economy. According to the company’s 2016 annual results, its business platform ecosystem, known as the Connected Fitness division, generated $ 80 million in revenue that year—a 51% increase over 2015.

By combining existing APIs from its different apps with original tools built in-house, extensive developer support, and a robust SDK, Under Armour gives third-party developers everything they need to build their own fitness app or web site.

Depending on their needs, third-party developers can sign up for several different payment plans with varying access to Under Armour’s APIs and SDKs. Indeed, the company’s tiered developer pricing plan for Connected Fitness, which is separated into Starter, Pro, and Premium levels, makes Under Armour seem more like a tech company than a sports apparel firm.

As a result, Under Armour’s APIs and SDKs are the underpinnings of a vast platform cooperative. Under Armour’s apps seamlessly integrate with popular services like Fitbit and Garmin (even though Under Armour has a fitness tracker of its own) and are licensed by corporations ranging from Microsoft to Coca-Cola to Purina. They’re even used by fitness app competitors like AthletePath and Lose It.

A large part of Under Armour’s success is the sheer amount of data its fitness apps collect and then make available to developers. MyFitnessPal, for instance, is an industry-leading calorie and food tracker used for weight loss, and Endomondo is an extremely popular running and biking record keeper and route-sharing platform.

One way of looking at the Connected Fitness platform is as a combination of traditional consumer purchasing data with insights gleaned from Under Armour’s suite of apps, as well as from the third-party apps that Under Armour’s products use.

Indeed, Under Armour gets a bonus from the platform economy: it helps the company understand its customers better, creating a virtuous cycle. As end users use different apps fueled by Under Armour’s services and data-sharing capabilities, Under Armour can then use that data to fuel customer engagement and attract additional third-party app developers to add new services to the ecosystem.

What Successful Platforms Have in Common

The most successful business platforms have three things in common: They’re easy to work with, they fulfill a market need, and they offer data that’s useful to customers.

For instance, Farmobile’s marketplace fulfills a valuable need in the market: it lets farmers monetize data and develop a new revenue stream that otherwise would not exist. Similarly, Allstate’s Arity experiment turns large volumes of data collected by Allstate over the years into a revenue stream that drives down costs for Arity’s clients by giving them more accurate data to integrate into their apps and software tools.

Meanwhile, Under Armour’s Connected Fitness platform and API suite encourage users to sign up for more apps in the company’s ecosystem. If you track your meals in MyFitnessPal, you’ll want to track your runs in Endomondo or MapMyRun. Similarly, if you’re an app developer in the health and fitness space, Under Armour has a readily available collection of tools that will make it easy for users to switch over to your app and cheaper for you to develop your app.

As the platform economy grows, all three of these approaches—Allstate’s leveraging of its legacy business data, Farmobile’s marketplace for users to become data entrepreneurs, and Under Armour’s one-stop fitness app ecosystem—are extremely useful examples of what happens next.

In the coming months and years, the platform economy will see other big changes. In 2016 for example, Apple, Microsoft, Facebook, and Google all released APIs for their AI-powered voice assistant platforms, the most famous of which is Apple’s Siri.

The introduction of APIs confirms that the AI technology behind these bots has matured significantly and that a new wave of AI-based platform innovation is nigh. (In fact, Digitalist predicted last year that the emergence of an API for these AIs would open them up beyond conventional uses.) New voice-operated technologies such as Google Home and Amazon Alexa offer exciting opportunities for developers to create full-featured, immersive applications on top of existing platforms.

We will also see AI- and machine learning–based APIs emerge that will allow developers to quickly leverage unstructured data (such as social media posts or texts) for new applications and services. For instance, sentiment analysis APIs can help explore and better understand customers’ interests, emotions, and preferences in social media.

As large providers offer APIs and associated services for smaller organizations to leverage AI and machine learning, these companies can in turn create their own platforms for clients to use unstructured data—everything from insights from uploaded photographs to recognizing a user’s emotion based on facial expression or tone of voice—in their own apps and products. Meanwhile, the ever-increasing power of cloud platforms like Amazon Web Services and Microsoft Azure will give these computing-intensive app platforms the juice they need to become deeper and richer.

These business services will depend on easy ways to exchange and implement data for success. The good news is that finding easy ways to share data isn’t hard and the API and SDK offerings that fuel the platform economy will become increasingly robust. Thanks to the opportunities generated by these new platforms and the new opportunities offered to end users, developers, and platform businesses themselves, everyone stands to win—if they act soon. D!


About the Authors

Bernd Leukert is a member of the Executive Board, Products and Innovation, for SAP.

Björn Goerke is Chief Technology Officer and President, SAP Cloud Platform, for SAP.

Volker Hildebrand is Global Vice President for SAP Hybris solutions.

Sethu M is President, Mobile Services, for SAP.

Neal Ungerleider is a Los Angeles-based technology journalist and consultant.


Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.



Digitalist Magazine

The Quick and the Dead Slow: Importing CSV Files into Azure Data Warehouse

The series so far:

  1. Creating a Custom .NET Activity Pipeline for Azure Data Factory
  2. Using the Copy Wizard for the Azure Data Factory
  3. The Quick and the Dead Slow: Importing CSV Files into Azure Data Warehouse

In my previous article, I described a way to get data from an endpoint into an Azure Data Warehouse (called ADW from now on in this article). On a conceptual level, that worked well; however, there are a few things to consider for the sake of performance and cost, which become especially important when you are regularly importing large amounts of data.

The final architecture of the data load in my previous article looked like this:

[Figure: previous architecture, copying files from FTP to blob storage and then into the Azure Data Warehouse]

As we can see, the files are taken from an FTP server, copied to blob storage, and then imported into the Azure Data Warehouse from there. This means that we will not achieve great levels of performance, especially when loading larger amounts of data, because of the intermediate step of copying data through blob storage.

The fastest way to import data into an Azure Data Warehouse is to use Polybase, but there are some requirements to be met before Polybase can step in.

Just to give an example of what happens when Polybase can be used: I was recently working on an import of a dataset of CSV files with an approximate size of 1 TB. I started by choosing an ‘intermediate storage’ approach (as in the picture above), and it was on track to take 9 days to complete, even with the Azure Data Warehouse scaled to 600 DWUs. (For more information about ADW scalability and DWUs, have a look at https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.) Given that an ADW with 600 DWUs costs 7.02 EUR/hour, I was pretty confident that my project accountant would have been unhappy with the cost, which would have been about 1,500 EUR for this load! Instead, by making sure the criteria for Polybase were met, I managed to import the entire 1 TB of data into my Azure Data Warehouse in about 3 hours, i.e. at a cost of about 21 EUR.

In this article we will base our work on the idea of my previous article, however we will change the architecture in order to save time and resources. In this case we will do the following:

  1. Download the files from the FTP (ftp://neoftp.sci.gsfc.nasa.gov/csv/) to our Azure storage
  2. Decompress the files
  3. Look into Polybase requirements and import the data reasonably fast into our ADW

Download the files from FTP

In this section we will not spend too much time describing how to get the files from the FTP site, because the method is very similar to the one described in my previous article. For downloading the files from the FTP, we will be using the Copy Data wizard. The only difference is that, because the files on the FTP server are compressed, we will need to land them in storage first and instruct the ADF to decompress the files as they are being downloaded.

The reason for this is that Polybase does not yet support direct import from compressed files.

To get the files, we need to start the Copy Data wizard from our Data Factory:

[Screenshot: starting the Copy Data wizard from the Data Factory]

then configure the FTP server properties:

[Screenshot: FTP server connection properties]

Then we need to select the folder we want to process recursively:

[Screenshot: selecting the FTP folder to process recursively]

Now choose ‘Azure Data Lake Store’ (ADL) as a destination. Of course, there are many ways to go about it, but in this case I chose ADL because I want to demonstrate the power of U-SQL scripting for preparing the data for the Polybase import.

For the ADL destination, I will be using Service-to-Service authentication. This is also a requirement for Polybase loads, so now is a great time to create an Active Directory App which will carry out the task of authenticating our data operations.

Creating a Service-to-Service authentication

In the Azure portal we need to go to the Azure Active Directory blade, from there to ‘App registrations’, and click on ‘New App Registration’.

[Screenshot: the App registrations blade in Azure Active Directory]

In the next screen we give a name to our app and we create it:

[Screenshot: naming and creating the new app registration]

Now that we have created our application, we need to gather its properties for later use, and also we need to create a key for it.

After creating the application, we search for it in the ‘App registrations’ tab and click on the application we just created:

[Screenshot: finding the newly created app registration]

We need to note the Application ID in the next screen:

[Screenshot: the Application ID on the app’s properties screen]

Next, we need to create a key for the app by clicking on the Keys link to the right:

[Screenshot: the Keys blade for the app registration]

Make sure to write down the key, since it is impossible to retrieve it at a later time.

Back to the ADL destination in the ADF pipeline

Now that we have created the Azure Active Directory App, we are ready to use Service-to-Service authentication for the FTP files to be downloaded and extracted to our data lake.

[Screenshot: ADL destination settings using Service-to-Service authentication]

In the above case, we need to specify the Subscription, the Data Lake account and the Tenant ID.

The term ‘Service principal id’ is a bit inconsistent: in this field we need to paste the Application ID we gathered from the Properties tab of our Azure AD App. The ‘Service principal key’ is then the key we created for the app.

After we click ‘Next’ in the screen above, we will be asked where to store the files in our ADL. For this purpose I have created a folder called Aura. For the copy behaviour, I have chosen ‘Flatten hierarchy’. This means that I will get as many files as there are on the FTP, but in a single folder.

[Screenshot: choosing the output folder and copy behaviour]

In the next screen we are asked to specify the properties of the destination flat file. This is a very important step, since Polybase has a very specific set of expectations for the format of the file, and if these requirements are not met, then we will need to use an intermediary storage to process the files and prepare them for import (and this, as we discussed above, is extremely slow and costly).

Here are the requirements for using Polybase:

The input dataset is of type AzureBlob or AzureDataLakeStore, and the format type under type properties is OrcFormat, or TextFormat with the following configurations:

  • rowDelimiter must be \n.
  • nullValue is set to empty string (“”), or treatEmptyAsNull is set to true.
  • encodingName is set to utf-8, which is default value.
  • escapeChar, quoteChar, firstRowAsHeader, and skipLineCount are not specified.
  • There is no skipHeaderLineCount setting under BlobSource or AzureDataLakeStore for the Copy activity in the pipeline.
  • There is no sliceIdentifierColumnName setting under SqlDWSink for the Copy activity in the pipeline.
  • There is no columnMapping being used in the associated Copy activity.

The following screen looks like this by default:

[Screenshot: the default file format settings in the Copy Data wizard]

Usually, we would set up the above screen properly so that we can get the files ready for Polybase directly. For this article, however, I will leave the settings as they are because I would like to demonstrate a data preparation step by using U-SQL.

U-SQL is a language used together with Data Lake; it is a hybrid between T-SQL (for the SELECT statements) and C# (used for expressions, such as those in the WHERE clause). The U-SQL language is extremely flexible and scalable. For more information on U-SQL, check the online documentation here.

Another reason to use U-SQL in this case is that Polybase does not support column mapping, and my data has over 3,000 variables. This poses a few challenges: in SQL Server and in ADW there is a limit of 1,024 columns per table, which means that in this particular case I need to resort to U-SQL to make sure the data is managed correctly.

So, I click ‘Next’ and end up at the final screen, ready to run the pipeline.

[Screenshot: the summary screen, ready to run the pipeline]

Creating an ADW login and user to load the data

When the Azure Data Warehouse was created, we had to specify a user with a password to connect to it. The permissions on that login are not very restricted, and because of this we will now create a separate login and database user to do our data import.

The following T-SQL code will create this:
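
A minimal sketch of what such a script might look like, assuming a login named DataImporter (the credential referenced later when setting up the ADF pipeline) and placeholder values for the password and database name:

-- Run this against the master database of the logical SQL server
CREATE LOGIN DataImporter WITH PASSWORD = '<Str0ngP@sswordHere>';

-- Run the rest against the Azure Data Warehouse database itself
CREATE USER DataImporter FOR LOGIN DataImporter;

-- Give the user enough rights to create and load tables
-- (CONTROL is broad; narrow it down if your security policy requires it)
GRANT CONTROL ON DATABASE::[YourADWDatabase] TO DataImporter;

-- Optionally assign a bigger resource class so loads get more memory
EXEC sp_addrolemember 'largerc', 'DataImporter';

Loading under a dedicated login like this also makes it easy to give imports a larger resource class without touching the administrative account.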


Creating the ADW table

For this article, we will create a small table called NEOData with only a few columns. Here is the T-SQL:
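
A sketch of what the table might look like; the column names here are hypothetical, and the parts that matter for load speed are the heap and the round-robin distribution:

CREATE TABLE dbo.NEOData
(
    Column1 FLOAT, Column2 FLOAT, Column3  FLOAT, Column4  FLOAT,
    Column5 FLOAT, Column6 FLOAT, Column7  FLOAT, Column8  FLOAT,
    Column9 FLOAT, Column10 FLOAT, Column11 FLOAT
)
WITH
(
    HEAP,                       -- no clustered index: fastest structure to load into
    DISTRIBUTION = ROUND_ROBIN  -- spread the incoming rows evenly across distributions
);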

Note: even in Azure Data Warehouse, it is still true that a heap is the fastest structure to load data into.

Selecting columns to work with

So far we have a Data Lake containing the files downloaded from the FTP server and extracted from their GZip archives. In other words, we have our CSV files in the Data Lake.

There is a challenge in this case, because the CSV files we have downloaded have 3,600 columns. As mentioned, ADW has a limit of 1,024 columns per table, and in our case the data science team is only interested in the first 11 columns anyway.

In a case like this, we can use the flexibility of U-SQL, combined with Azure Data Lake Analytics views (you can read more about U-SQL views at https://msdn.microsoft.com/en-us/library/azure/mt771901.aspx).

To do this, we need to construct a view which uses a U-SQL extractor that lists all 3,600 columns and specifies their data types. In our case, all columns are of the float data type.

Then we need to create a second view, which uses the first view to select only the first 11 columns from it.

And finally, we can output the file from the result of the second view.

Conceptually, the code consists of these two views plus an OUTPUT statement; a sketch of the full script is shown below.

There are several ways to prepare the actual U-SQL script we will run, and usually it is a great help to use Visual Studio with the Azure Data Lake Explorer add-in. The add-in allows us to browse the files in our Data Lake, right-click on one of them, and choose “Create EXTRACT Script” from the context menu. In this case, however, that would take a very long time, since the file is so wide.

Another way is simply to use Excel to write out the columns, from column 1 to column 3600, and append the data type to each.

Either way, our final U-SQL script will look similar to this:
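
A sketch of the script’s shape, using inline rowset variables rather than the catalog views described above so that it fits in a single script; the paths and column names are placeholders, and the real script lists all 3,600 columns in the first step:

// Step 1 ("view1"): extract every column of the wide CSV as float.
@wide =
    EXTRACT Column1  float, Column2  float, Column3  float, Column4 float,
            Column5  float, Column6  float, Column7  float, Column8 float,
            Column9  float, Column10 float, Column11 float
            // ... Column12 float through Column3600 float are listed here
            //     in the real script ...
    FROM "/Aura/data.csv"              // placeholder path to an extracted CSV
    USING Extractors.Csv();

// Step 2 ("view2"): keep only the first 11 columns.
@narrow =
    SELECT Column1, Column2, Column3, Column4, Column5, Column6,
           Column7, Column8, Column9, Column10, Column11
    FROM @wide;

// Step 3: write the result back to the Data Lake in a Polybase-friendly shape:
// UTF-8 (the default), \n row delimiter, no header row, no quoting.
OUTPUT @narrow
TO "/Aura/NEOData_11columns.csv"       // placeholder output path
USING Outputters.Csv(outputHeader: false, quoting: false, rowDelimiter: "\n");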

As mentioned above, View1 is used to extract the data from the CSV files and View2 is used to subset the data from View1; finally, the result of View2 is written back to our Data Lake. The parameters passed to the outputter are very important, since they meet the requirements for using Polybase to push the data into the Data Warehouse in the fastest way in the next step.

Finally, it is important to boost the parallelism of the U-SQL processing before submitting the job, since it might take a while with the default setting. In my case I am using a parallelism of 120.

[Screenshot: setting the parallelism before submitting the U-SQL job]

U-SQL scales very well. In my case of about 500 MB of CSV files, it took about 2 minutes for the above script to produce a 22 MB CSV file, reducing the width from 3,600 columns to 11.

Importing the file to the Data Warehouse with Polybase

When the U-SQL script above is ready, we can finally import the file that we produced to our Data Warehouse.

To import the data, we are going to use the Copy Data wizard, with which we are already familiar, to create an ADF pipeline. It is just a matter of setting up the ADL as a source and ADW as a destination, using the Service-to-Service authentication for ADL and the DataImporter credential for the ADW. After setting up all of this, it is very important to verify on the last screen that NO staging storage account is used and that Polybase is allowed:

[Screenshot: confirming that no staging account is used and that Polybase is allowed]

Finally, the architecture looks like this, with a direct import from ADL to ADW:

[Screenshot: final architecture, importing directly from ADL into ADW]

Monitoring and performance of the pipeline

After a couple of minutes, the pipeline is finished, and we get the following information:

[Screenshot: monitoring output for the finished pipeline]

Notice that it took about a minute to import 21 MB of data and 277K rows. This is with 100 DWUs for the Data Warehouse, which costs 1.17 EUR per hour.

If we wanted the import to be faster, we could scale the Data Warehouse up to 600 DWUs, for example.

Having a scalable Data Warehouse is great because the user gets to scale up when the resource is in heavy use (for imports and for busy read times). The downside, however, is that connections get terminated while scaling is in progress, and this means downtime.

On a final note, all the good old rules from data warehousing are still valid when it comes to speedy data imports. For example, it is still faster to insert into a heap than into anything else. And let’s not forget to create and rebuild those statistics after the import, as sketched below.
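
A minimal sketch against the NEOData table above, using the same hypothetical column names; single-column statistics are created after the first load and refreshed after later ones:

-- Create statistics on the columns that will be filtered or joined on
CREATE STATISTICS stat_NEOData_Column1 ON dbo.NEOData (Column1);
CREATE STATISTICS stat_NEOData_Column2 ON dbo.NEOData (Column2);

-- After subsequent incremental loads, refresh all statistics on the table
UPDATE STATISTICS dbo.NEOData;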

Conclusion

When you are paying for a resource by the hour, you soon become increasingly interested in the time a data import takes. In this article we explored the options and considerations involved in importing data into an Azure Data Warehouse in a fast and economical way. We saw that the old data warehousing rules are still valid, and that Polybase is a great tool for speedy imports of large volumes of data.


SQL – Simple Talk

Biff, Vlad, Irma, and AI

In the movie “Back to the Future Part II,” bad guy Biff Tannen uses a time machine to visit his younger self and deliver a book from the future containing sports statistics. Using the scores of future games, younger Biff is then able to place sure-fire bets and become an extremely rich tyrant.

Vladimir Putin recently warned about the emergence of an individual like Biff who might be able to predict the future, but using Artificial Intelligence instead of a time machine. Putin said, “Whoever becomes the leader in this sphere will become the ruler of the world.”

Just imagine a powerful, Biff-like global ruler, building gaudy, gold-plated Tannen Towers in every country. Scary.

But could Biff have predicted the future using AI? Not really: even the best AI algorithms and big data would not have been able to give him a list of future games and scores with which to cheat the bookies.

Instead, AI is more like Hurricane Irma’s “cone of uncertainty.” Weather experts can study an existing storm and provide a tentative timeline for it getting bigger or smaller. They can guess its direction, estimating the probability of it traveling straight or veering to one side or the other. Experts with the best weather tools and data still can’t actually predict the specific details; they can only provide probabilities of what might happen.

So a Biff-AI “know-the-future” scenario is very unlikely. That’s not to say certain people will not get rich with AI; there is a very high probability of that. You only have to look at the positioning of the most powerful companies in the world (Apple, Alphabet, Microsoft, Amazon, and Facebook) to see the race for AI dominance. Mark Cuban said the world’s first trillionaire would be somebody “who masters AI and all its derivatives and applies it in ways we never thought of.”

If you are interested in AI, consider going to the upcoming Predictive Analytics World conference in New York City at the end of October. I have attended in the past and found the event to be a great opportunity for learning and networking.

What about you? Are you concerned about powerful people controlling AI?

Changing line style from solid to dashed after an intersection

I used this link: Plot that draws a dashed/solid curve depending on the y-value of the curve to help me get started. I have two lines, and I want to change from solid to dashed after their intersection. The line with the higher slope should be dashed AFTER the intersection, and the line with the lower slope should be dashed BEFORE the intersection. These two lines should be red and blue.

Here is what I have done so far…

In[289]:= y1 = 1.44; y2 = 27.9 - 16000 x;
intercept = x /. Solve[y1 == y2, x][[1]]

Out[290]= 0.00165375

The plot…

Plot[{y1, y2}, {x, .0014, .0019}, PlotRange -> All, 
 MeshFunctions -> {#1 &}, 
 Mesh -> {{0.0014, intercept}, {intercept, 0.0019}}, 
 MeshShading -> {Blue, Directive[Blue, Dashed]}, MeshStyle -> None]

Here was the result…

[Image: the resulting plot]

Not sure how to fix this. Please help me.



Recent Questions – Mathematica Stack Exchange