Monthly Archives: January 2018

4 Common Misconceptions About iPaaS


To reach their digital transformation goals, businesses need to integrate more systems, people, and things than ever before. More importantly, they also need to make those connections at ever-increasing speeds. With its ease of use, its ability to automate processes, and the speed with which it sets up connections between systems and applications, integration platform as a service, commonly referred to as iPaaS, is becoming the tool of choice to help organizations connect everything in their growing networks and accelerate their digital transformation.

An iPaaS enables businesses to connect data, applications, and processes faster than a traditional integration platform. In addition, an iPaaS is designed to connect any database and any application—whether it lives in a cloud environment, on-premises, or a combination of the two. Connected apps allow businesses to improve operations, drive more business through self-service channels, reduce costs, and engage and empower employees. But since iPaaS is a relatively new approach to integration, there are some common misconceptions surrounding it. Let’s clear up some of these misunderstandings so you can get the most out of this versatile platform.

Misconception #1: iPaaS is only for integration specialists

A good iPaaS caters to the needs of developers with different levels of coding experience. It should have an easy-to-use, intuitive, visual interface that requires no coding, making it the ideal solution for business users without coding experience. And it should also allow integration specialists to design and deploy flows using the more advanced tools and workflows they are familiar with.

Misconception #2: An iPaaS only integrates cloud-to-cloud systems or apps

Since iPaaS started as a cloud-to-cloud integration tool, many still think it only works with cloud-based components. But an iPaaS does more than just connect cloud-based systems. It is extremely versatile and can connect on-premises systems to the cloud and cloud systems back to on-premises. Its ability to connect anything, anywhere is one of the main reasons for its growing popularity.

Misconception #3: iPaaS and ETL are the same thing

ETL does three things that an iPaaS also does: it extracts data, transforms that data into the right format, and loads the transformed data into some other system. An iPaaS is capable of all of this, but it’s more than just ETL. With an iPaaS, you can connect different systems together, pull data from multiple sources, apply multiple transformations, and deliver the data to the right person or system in a wide variety of ways. Basically, ETL is a subset of what an iPaaS can do.
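
To make the distinction concrete, here is a minimal, illustrative sketch of the extract-transform-load pattern in Python; all names and data below are hypothetical. An iPaaS generalizes this same pattern with ready-made connectors, multiple sources and targets, and flexible routing:

    # Minimal ETL sketch (illustrative names and data only).

    def extract(source):
        # Pull raw records from a source system (here just a list of dicts).
        return list(source)

    def transform(records):
        # Normalize the raw records into the target format.
        return [{"customer_id": r["id"], "total": round(r["amount"], 2)}
                for r in records]

    def load(records, target):
        # Write the transformed records into the target store.
        target.extend(records)

    orders = [{"id": 1, "amount": 19.999}, {"id": 2, "amount": 5.5}]
    warehouse = []
    load(transform(extract(orders)), warehouse)
    print(warehouse)  # [{'customer_id': 1, 'total': 20.0}, {'customer_id': 2, 'total': 5.5}]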

Misconception #4: iPaaS and IaaS are the same thing

Infrastructure as a service (IaaS) gives users control over computing infrastructure; well-known IaaS offerings include Amazon Web Services and Microsoft Azure. An iPaaS, by contrast, connects to the various cloud services you use (such as Salesforce, Workday, Concur, etc.), pulls data from each of those systems, transforms it, and delivers all the disparate data into one interface so it can be more easily analyzed.

To hear about other common misconceptions about iPaaS, please tune into the O’Reilly webinar: When is iPaaS the Right Level of Abstraction?


The TIBCO Blog

Multi-objective Optimization for Feature Selection: Part 3

In my previous posts (Part 1 and Part 2), we discussed why feature selection is a great technique for improving your models. By focusing on the important signals, we can optimize over the right set of attributes. As a side effect, fewer attributes also mean that you can train your models faster and that they are less complex and easier to understand. Finally, less complex models tend to have a lower risk of overfitting, which means they are more robust when it comes to creating predictions for new data points.

We also discussed why a brute force approach to feature selection is not feasible for most data sets and tried multiple heuristics for overcoming the computation problem. We looked at evolutionary algorithms, which turned out to be fast enough for most data sets and have a higher likelihood of finding the optimal attribute subset.

So, we’ve found the solution then, right? Well, not quite yet.

Regularization for Feature Selection

So far, we have been optimizing for model accuracy alone. We know from regular machine learning methods that this is a bad idea. If we only look for the most accurate model on the training data it will lead to overfitting. Our models will perform worse on new data points as a result. Most people only think about overfitting when it comes to the model itself. We should be equally concerned about overfitting when we make any decision in data analysis.

If we decide to merge two data sets, take a sample, or filter down the data, we are making a modeling decision. In fact, a more sophisticated machine learning model could have made this decision on its own. We need to be careful about overfitting and validating all decisions, not just the model itself. This is the reason it does not make sense to separate data preparation from modeling. We need to do both in an integrated fashion, and validate them together.

It doesn’t matter if we do feature selection automatically or manually. Any selection becomes a part of the model. And as such it needs to be validated and controlled for overfitting. Learn more about this in our recent blog series on correct validation.
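
As an illustration, here is a minimal scikit-learn sketch in Python (assuming scikit-learn is installed): by putting the feature selector inside a pipeline, it is re-fitted on the training split of every cross-validation fold, so the selection decision is validated together with the model instead of leaking information from the test data.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    # Synthetic data stands in for a real data set here.
    X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                               random_state=42)

    pipe = Pipeline([
        ("select", SelectKBest(f_classif, k=10)),   # selection is part of the model
        ("model", LogisticRegression(max_iter=1000)),
    ])

    # Each fold refits both the selector and the model on its training split only.
    print(cross_val_score(pipe, X, y, cv=5).mean())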

We need to perform regularization to control overfitting in feature selection, just like we do for any other machine learning method. The idea behind regularization is to penalize complexity when building models; this concept is truly at the core of statistical learning. The formula below defines the regularized risk based on the empirical risk and the structural risk. The empirical risk is the error we make on the training data, which is simply the data we use for doing our feature selection. The structural risk is a measurement of complexity: in the case of an SVM, the structural risk grows as the width of the margin shrinks; in the case of feature selection, it is simply the number of features. The more features, the higher the structural risk. We of course want to minimize both risks, error and complexity, at the same time.

    regularized risk = empirical risk + C × structural risk

Minimizing the number of features and maximizing the prediction accuracy are conflicting goals. Fewer features mean reduced model complexity; more features allow more accurate models. If we can’t minimize both risks at the same time, we need to define which one is more important, so that we can decide in cases where we have to sacrifice one for the other.

When we make decisions involving conflicting objectives, we can introduce a trade-off factor. This is the factor “C” in the formula above. The problem is that we cannot determine C without running multiple experiments. What accuracy ranges can we reach with our models on the given data set? Do we need 10 or 100 features for this? We cannot define C without knowing those answers. We could use a hold-out data set for testing and then try to optimize for a good value of C, but that takes time.

Wouldn’t it be great if we didn’t have to deal with this additional parameter “C” at all and could instead find the complete range of potential solutions? Some models would deliver the highest accuracy, others would use as few attributes as possible, and some would cover the trade-offs in between. And of course, we still want an automated algorithm that finds all of them in a fast and feasible way.

The good news: this is exactly the key idea behind multi-objective optimization. We will see that we can adapt the evolutionary feature selection from our previous post so that the optimization delivers all the good results rather than a single solution.

Or… You just optimize for both simultaneously

We want to maximize the accuracy and minimize the number of features at the same time. Let’s begin by drawing this solution space so that we can compare different solutions. The result is an image like the one below, with the number of features on the x-axis and the achieved accuracy on the y-axis. Each point in this space represents a model using a specific feature set. The orange point on the left, for example, represents a model which uses only one feature, and the accuracy of this model is 60% (or “0.6”, as in the image).

Is the point on the left now better or worse than the other ones? We do not know. Sometimes we prefer fewer features, which makes this a good solution. Sometimes we prefer more accurate models, where more features work better. One thing is for sure: we want to find solutions towards the top left corner of this chart. Those are the models that use as few features as possible while also being the most accurate. This means that we should prefer solutions in this corner over those towards the bottom right.

Let’s have a look at some examples to make this clearer. In the image below, we have added three blue points to the orange ones we already had. Are any of those points better than the orange ones? The blue point on the left has only one attribute, which is good. But we have a better model with one attribute: the orange point on top of it. Hence, we prefer the left orange solution over the left blue one. Something similar is true for the blue point on the right. It achieves 85% accuracy, but this is already possible with the solution using only 5 instead of 6 features. We would therefore prefer the less complex model over the right blue point. The blue point in the middle is even worse: it has lower accuracy and more features than necessary. We certainly would prefer any of the orange points over this blue one.

In short, the blue points are clearly inferior to the orange ones. We can say that the orange points dominate the blue ones. The image below shows a bigger set of points (blue) which are all dominated by the orange ones:

We will now transform those concepts into a new feature selection algorithm. Evolutionary algorithms are among the best when you optimize for multiple conflicting criteria. All we need is a new selection approach for our evolutionary feature selection. This new selection technique will simultaneously optimize for more accuracy and less features.

We call this approach non-dominated sorting selection. We can simply replace the single-objective tournament selection with the new one. The image below shows the idea of non-dominated sorting. We start with the first rank of points which are dominating the other points. Those solutions (feature sets in our case) make it into the population of the next generation. They are shown as transparent below. After removing those feature sets, we look for the next rank of dominating points (in solid orange below). We again add those points to the next population as well. We continue this until we reach the desired population size.
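
For readers who prefer code to pictures, here is a minimal Python sketch of the idea (illustrative only, not RapidMiner’s implementation). Each candidate feature set is scored by a pair (accuracy, number of features), and we repeatedly peel off the rank of points that no other point dominates; rank 0 of the result is the Pareto front discussed next, and the next population is filled rank by rank.

    # Non-dominated sorting sketch; the candidate scores below are hypothetical.

    def dominates(a, b):
        """Return True if a dominates b: a is no worse in both objectives
        (higher accuracy, fewer features) and strictly better in at least one."""
        acc_a, n_a = a
        acc_b, n_b = b
        return (acc_a >= acc_b and n_a <= n_b) and (acc_a > acc_b or n_a < n_b)

    def non_dominated_sort(points):
        """Group points into ranks; rank 0 is the Pareto front."""
        remaining = list(points)
        ranks = []
        while remaining:
            # The current rank: points not dominated by any other remaining point.
            front = [p for p in remaining
                     if not any(dominates(q, p) for q in remaining if q != p)]
            ranks.append(front)
            remaining = [p for p in remaining if p not in front]
        return ranks

    # Hypothetical (accuracy, n_features) scores for six candidate feature sets:
    candidates = [(0.60, 1), (0.73, 1), (0.78, 3), (0.82, 9), (0.70, 4), (0.78, 6)]
    print(non_dominated_sort(candidates)[0])  # -> [(0.73, 1), (0.78, 3), (0.82, 9)]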

The result of such a non-dominated sorting selection is what we call a Pareto front; see the image below for an example. Those are the feature sets that dominate all others. Any solution taken from this Pareto front is equally good: some are more accurate, some are less complex. Together they describe the trade-off between the two conflicting objectives, and we find them without needing to define a trade-off factor beforehand.

[Image: an example Pareto front]

Multi-Objective Feature Selection in Practice

This is one of the things that make multi-objective optimization so great for feature selection. We can find all potentially good solutions without defining a trade-off factor. Even better, we can find all those solutions in a single optimization run, so it is also a very fast approach. We can then inspect the solutions on the Pareto front and learn from the interactions of features. Some features might be very important in smaller feature sets but become less important in larger ones, where interactions of other features become stronger predictors instead. Those are additional insights we can get from the Pareto front. We can see at a glance what the achievable accuracy range is and which range of feature set sizes we should focus on. Do we need to consider 10 to 20 features or between 100 and 200? This is valuable information for building models.

Let’s now run such a multi-objective optimization for feature selection. Luckily, we do not need to code all those algorithms ourselves. In RapidMiner, we just need to make two small adaptations in the visual workflow. First, we change the selection scheme from tournament selection to non-dominated sorting; this is a parameter of the regular evolutionary feature selection operator. Second, we add a second performance criterion besides the accuracy: the number of features. Although not necessary, I have also defined some parameters to add visual output, which shows the movement of the Pareto front during the optimization and displays the details of all feature sets on the Pareto front at the end.

Results

We are going to use the Sonar data set. As a reminder, the attributes represent bands in a frequency spectrum, and the goal is to classify whether an object is a rock or a mine. The basic setup is the same as before; we have simply applied the changes discussed above to turn this into a multi-objective feature selection. The image below shows the resulting Pareto front:

[Image: Pareto front for the Sonar data set]

The resulting Pareto front has 9 different feature sets. They span a range between 1 and 9 attributes, and the accuracy range we can achieve is between 73% and 82%. We show the attribute count as a negative number simply because RapidMiner always tries to maximize all objectives; minimizing the number of attributes and maximizing the negative count are the same thing. This also means that this Pareto front moves towards the top right corner, not the top left as we discussed before.

Here is the full trade-off between complexity and accuracy. We can see that it does not make sense to use more than 9 features. And we should not accept less than 73% accuracy, since that result can already be achieved with one single feature.

It is also interesting to look further into the details of the resulting attribute sets. If we configure the workflow, we end up with a table showing the details of the solutions:

We again see the range of attributes (between 1 and 9) and accuracies (between 73% and 82%). We can gain some additional insights if we look into the actual attribute sets. If we use only one single feature, it should be attribute_12. But if we use two features, then attribute_12 is inferior to the combination of attribute_11 and attribute_36. This is another indication of why hill-climbing heuristics like forward selection have such a hard time.

The next attribute we should add is attribute_6. But we should drop it again for the attribute sets with 4 and 5 features in favor of other combinations. This attribute becomes interesting only for the larger sets again.

Finally, we can see that the largest attribute sets consisting of 8 or 9 attributes cover all relevant areas of the frequency spectrum. See the previous post for more details on this.

It is these insights that make multi-objective feature selection the go-to method for this problem. Seeing good ranges for attribute set sizes and the interactions between features allows us to build better models. But there is more! We can also use multi-objective feature selection for unsupervised learning methods like clustering. Stay tuned: we will discuss this in the next blog post.

RapidMiner Processes

You can download RapidMiner here. Download the processes below to build this machine learning model yourself in RapidMiner.

Download the zip-file and extract its contents. The result will be an .rmp file which can be loaded into RapidMiner via “File” -> “Import Process”.


RapidMiner

InterpolatingFunction::dmval error in a Piecewise function with an interpolation function included


I have a set of data. First, I process the data with an Interpolation function:

excitef1 = Interpolation[ps1]

Then I use a Piecewise function to expand the range of the function

excite1 = Piecewise[{{excitef1[r], 4.97 <= r <= 22}, {0, r > 22}}]

The Piecewise function is then used to solve a differential equation

    exciteshiftfunction =
      ParallelTable[
        NDSolveValue[{w1'[r] + (20/10*mass*excite1 + i*(i + 1)/r^2)/wave*
             (Sin[wave*r + w1[r]])^2 == 0, w1[auo] == -wave*auo},
          w1, {r, 5, 10000000}, MaxSteps -> Infinity], {i, 0, 120}];

where mass, wave, and auo are constants.

But the result returns an error warning:

InterpolatingFunction::dmval: Input value {1035.17} lies outside the range of data in the interpolating function. Extrapolation will be used.

I have tried many methods to resolve this error, but I only get similar warnings, or the software crashes due to insufficient system memory.

I would appreciate any help. Best regards.

PS: the interpolated data is attached below.

    {{4.96735, 0.00729014}, {5.02197, 0.00714075}, {5.09024, 
  0.00701852}, {5.19949, 0.00686913}, {5.32236, 0.00673333}, {5.47255,
   0.0066111}, {5.62275, 0.00650245}, {5.77295, 0.00639381}, {5.9095, 
  0.00627158}, {6.05969, 0.00614936}, {6.18258, 0.00601355}, {6.33278,
   0.00586416}, {6.51028, 0.00566045}, {6.63317, 0.00547032}, {6.7697,
   0.00528019}, {6.87893, 0.00511722}, {7.01548, 
  0.00489993}, {7.15201, 0.0047098}, {7.26126, 0.00453325}, {7.35682, 
  0.0043567}, {7.46607, 0.00418015}, {7.54798, 0.00404434}, {7.67087, 
  0.00386779}, {7.79376, 0.00367766}, {7.90299, 0.00347395}, {8.01222,
   0.00331098}, {8.1078, 0.00316159}, {8.24435, 0.0029443}, {8.39454, 
  0.00275417}, {8.50377, 0.0025912}, {8.59935, 0.00246898}, {8.70858, 
  0.00230601}, {8.7905, 0.00221094}, {8.89974, 0.00208872}, {9.02263, 
  0.00193933}, {9.17283, 0.00178994}, {9.30936, 0.00164055}, {9.44589,
   0.00150474}, {9.62342, 0.00135536}, {9.77361, 
  0.00126029}, {9.93745, 0.0011109}, {10.1286, 0.00100226}, {10.2788, 
  0.000880031}, {10.4836, 0.000771386}, {10.6475, 
  0.000703482}, {10.8386, 0.000621997}, {10.9888, 
  0.000554094}, {11.1663, 0.00048619}, {11.3575, 
  0.000431867}, {11.5486, 0.000363964}, {11.7398, 
  0.000336802}, {11.9446, 0.00029606}, {12.1221, 
  0.000255318}, {12.286, 0.000214576}, {12.4498, 0.000200995}, {12.6, 
  0.000160253}, {12.7639, 0.000146672}, {12.955, 
  0.000133091}, {13.0506, 0.000133091}, {13.2281, 
  0.00010593}, {13.3919, 0.00010593}, {13.5694, 
  0.000092349}, {13.7879, 0.0000651875}, {13.9791, 
  0.0000651875}, {14.1975, 0.000038026}, {14.4297, 
  0.000038026}, {14.7027, 0.000038026}, {15.0304, 
  0.0000244453}, {15.2625, 0.0000244453}, {15.5356, 
  0.0000244453}, {15.8224, 0.0000108646}, {15.9726, 
  0.0000108646}, {16.2047, 0.0000108646}, {16.4505, 
  0.0000244453}, {16.6689, 0.0000108646}, {16.8191, 0.}, {17.2424, 
  0.}, {17.5428, 0.}, {17.9661, 0.}, {18.3484, 0.}, {18.8809, 
  0.}, {19.2905, 0.}, {19.6728, 0.}, {20.0415, 0.}, {20.4375, 
  0.}, {20.8061, 0.}, {21.1748, 0.}, {21.3659, 0.}, {21.7619, 
  0.}, {22.1306, 0.}, {22.3217, 0.}}


Recent Questions – Mathematica Stack Exchange

The Financial Industry’s Digital Transformation


The financial industry is undergoing a major digital transformation. While it’s not unusual for any industry to reinvent itself periodically, this latest evolution doesn’t resemble most of the scenarios of the recent past.

Typically, competition, innovation, and/or regulatory changes drive transformation, resulting in new products, providers, and pricing plans for consumers. A textbook example of this would be Amazon and eBay disrupting the retail sector. Through their own innovations and an overall advancement in technology, online retailers attracted customers away from legacy retailers with a wide variety of choices, low prices, and high convenience. This new model forced many traditional brick-and-mortar retailers who failed to adapt to go out of business, leaving very different and much more digitally-oriented sector leaders.

While the financial industry’s transformation features some of the same elements as retail’s, its origins and competitive dynamics are unique, and the results are still excitingly in flux.

Consumers Demand Personalized Digital Engagement from Banks, In Real Time, On Any Channel

The financial and retail sectors both have emerging competitors who are more technologically sophisticated than the incumbents. However, while financial technology (FinTech) startups have created challenges for their legacy competitors, traditional banks and credit unions have been much more resilient than their equivalents in retail. As of now, many of the banks you and I grew up with are still here (though they likely have gone through a merger or four). That’s because consumers still mostly rely on them for traditional banking, while FinTechs have largely confined themselves to more niche, specialized services (think of Venmo and PayPal for online shopping and peer-to-peer money transfers).

For the average customer, the banking disruption has taken the form of more digital fulfillment options rather than requiring in-branch visits. And the origins of that shift are pretty easy to identify: having had a taste of the convenience of the digital revolution in other industries, consumers demanded it of their banks as well. But unlike their shopping center counterparts, established banks were able to accommodate consumer demand more ably. According to a survey by bankrate.com, nearly 40% of Americans have not stepped into the branch of a bank or credit union in the last six months. That’s not because old banks went extinct, but because they met consumer demand by shifting many services online or to ATMs. Thus, we haven’t seen digital disruptors take over the industry in the same way that Amazon captured retail (at least, not yet).

By no means, though, is the banking digital transformation complete. And one of the clouds obscuring the financial industry’s future ought to be in the shape of a smartphone. With mobile phones, consumers continue to gain unprecedented access to more information and services in the palm of their hand. As a result, smartphone-enabled consumers continue demanding more convenience from their financial institutions; the upshot is that the more basic services banks and FinTechs can migrate to smartphones, the firmer their staying power.

However, it’s not just convenience that consumers want. As a consequence of trends driven by Big Data and abetted by devices such as smartphones, today’s digital consumer has a new behavioral pattern compared to previous generations.

For instance, digital consumers are increasingly demanding “always-on” interactions in real-time from their financial institutions that are also personalized, relevant, and multichannel. Satisfying those demands is extremely challenging, and that’s even before considering that individual customers exhibit multiple personas across multiple devices. In a survey conducted by Forrester, 58% of consumers report using “cross-channel journeys” to shop for financial products, meaning that they’re browsing with one touchpoint—web, phone, app, or in-branch—and eventually enrolling or purchasing with another. That percentage will likely grow larger in the coming years, necessitating that financial institutions track and guide these different personas throughout their journeys. That can be a tall order, and data from Accenture confirms as much: 70% of consumers feel that their relationship with banks today is “transactional” in nature, rather than “relationship-based.”

If that percentage seems high to you, perhaps it’s because you thought “only millennials” have those kinds of needs. And while you may be right about that, banks can no longer afford to dismiss this demographic. According to Accenture, millennials now number 1.8 billion globally and are expected to have a lifetime value of $10 trillion. Having grown up in the digital age, this group is technologically advanced, focused on securing their financial futures, and expecting personalized experiences and convenience from all industries, including financial institutions. And yet, according to The Financial Brand, nearly half (46%) of millennials don’t think their bank markets products that are relevant to their future financial needs.

3 Imperatives for Financial Institutions Making a Digital Transformation

According to the Digital Banking Report, 84% of financial service professionals consider it important to know their customers. Financial institutions understand the need to tailor experiences to individual needs and personalize their interactions. In fact, more than half (55%) of bankers plan to increase spending on customer experience initiatives [CSI], and nearly 80% consider it important to deliver guidance to customers in real-time [The Financial Brand]. Currently, though, only about 20% of financial institutions are delivering more than basic personalization [Digital Banking Report/Everage]; clearly, there is still a significant gap to fill.

We’ve identified three key imperatives financial institutions need to address to deliver personalized, real-time experiences.

#1 Focus on Data Gathering and Consolidation

All of these multichannel, “always on” interactions that we’ve referenced generate large amounts of data, which is typically captured in various sources such as CRM applications, transactional data stores, disparate account management data stores, etc. Each data source provides a fragmented image of a consumer—a glimpse into one of the personas. Financial institutions need to bring these data sources together to create a comprehensive profile of a consumer, rather than a series of disconnected account holders across platforms and lines of business. By connecting the digital clues and gaining a single customer view, it’s then possible to interpret and anticipate future needs. Currently, according to Forrester, only 0.5% of all generated data is analyzed. Richard Joyce, a senior analyst at Forrester, says, “Just a 10% increase in data accessibility will result in more than $65 million additional net income for a typical Fortune 1000 company.” These stats are simply overwhelming and identify a clear-cut target at which the industry must aim.

#2 Build Powerful Analytic Engines that Predict and Prescribe

Personalization is not a matter of simply gathering data, but also acting on that data. And the hard truth is that anticipating customer needs requires powerful analytics engines that were once thought of as “nice-to-have.” However, only about 55% of organizations were expected to increase budgets for data analytics in 2017 [The Financial Brand]. Machine learning and predictive analytics are necessary to optimize how financial institutions market to digital consumers and should no longer be considered “optional.”

#3 Deliver Data-Driven, Highly Personalized Customer Experiences         

Having covered the need to gather and analyze data, our third imperative is about putting it all together into a cohesive system. Because digital consumers demand tailored, contextualized, interactive dialogues, marketing workflows need to synthesize and coordinate inbound requests for information with the appropriate outbound messaging. While certain modular tools can provide short-term fixes to these challenges, financial institutions will ultimately need to undergo organizational changes that break down silos and connect systems, thereby improving data flow and knowledge sharing.

The digital transformations that are shaking the financial industry—unlike comparable evolutions in the retail sector—sprang from consumer demands that remain largely unmet. Both the incumbents and the upstarts are still scrambling to harness the tools and advancements necessary to drive loyalty and engagement. The obstacles involve consolidating and acting on data, and agility in engaging with customers across channels.

If you are interested in hearing more, especially about how FICO and AWS have partnered to help financial institutions address these imperatives, you can join me for a free webinar on Tuesday, January 30th at 1pm EST / 10am PST (simply register by clicking here). As you can probably tell, we at FICO are passionate about this topic, and I hope to see you there!


FICO

We Need To Talk About Customer Experience

Businesses share something important with lions. When a lion captures and consumes its prey, only about 10% to 20% of the prey’s energy is directly transferred into the lion’s metabolism. The rest evaporates away, mostly as heat loss, according to research done in the 1940s by ecologist Raymond Lindeman.

Today, businesses do only about as well as the big cats. When you consider the energy required to manage, power, and move products and services, less than 20% goes directly into the typical product or service—what economists call aggregate efficiency (the ratio of the useful work that actually gets embedded into a product or service to the total energy expended in moving it through all of the steps of its value chain). Aggregate efficiency is a key factor in determining productivity.

After making steady gains during much of the 20th century, businesses’ aggregate energy efficiency peaked in the 1980s and then stalled. Japan, home of the world’s most energy-efficient economy, has been skating along at or near 20% ever since. The U.S. economy, meanwhile, topped out at about 13% aggregate efficiency in the 1990s, according to research.

Why does this matter? Jeremy Rifkin says he knows why. Rifkin is an economic and social theorist, author, consultant, and lecturer at the Wharton School’s Executive Education program who believes that economies experience major increases in growth and productivity only when big shifts occur in three integrated infrastructure segments around the same time: communications, energy, and transportation.

But it’s only a matter of time before information technology blows all three wide open, says Rifkin. He envisions a new economic infrastructure based on digital integration of communications, energy, and transportation, riding atop an Internet of Things (IoT) platform that incorporates Big Data, analytics, and artificial intelligence. This platform will disrupt the world economy and bring dramatic levels of efficiency and productivity to businesses that take advantage of it, he says.

Some economists consider Rifkin’s ideas controversial. And his vision of a new economic platform may be problematic—at least globally. It will require massive investments and unusually high levels of government, community, and private sector cooperation, all of which seem to be at depressingly low levels these days.

However, Rifkin has some influential adherents to his philosophy. He has advised three presidents of the European Commission—Romano Prodi, José Manuel Barroso, and the current president, Jean-Claude Juncker—as well as the European Parliament and numerous European Union (EU) heads of state, including Angela Merkel, on the ushering in of what he calls “a smart, green Third Industrial Revolution.” Rifkin is also advising the leadership of the People’s Republic of China on the build out and scale up of the “Internet Plus” Third Industrial Revolution infrastructure to usher in a sustainable low-carbon economy.

The internet has already shaken up one of the three major economic sectors: communications. Today it takes little more than a cell phone, an internet connection, and social media to publish a book or music video for free—what Rifkin calls zero marginal cost. The result has been a hollowing out of once-mighty media empires in just over 10 years. Much of what remains of their business models and revenues has been converted from physical (remember CDs and video stores?) to digital.

But we haven’t hit the trifecta yet. Transportation and energy have changed little since the middle of the last century, says Rifkin. That’s when superhighways reached their saturation point across the developed world and the internal-combustion engine came close to the limits of its potential on the roads, in the air, and at sea. “We have all these killer new technology products, but they’re being plugged into the same old infrastructure, and it’s not creating enough new business opportunities,” he says.

All that may be about to undergo a big shake-up, however. The digitalization of information on the IoT at near-zero marginal cost generates Big Data that can be mined with analytics to create algorithms and apps enabling ubiquitous networking. This digital transformation is beginning to have a big impact on the energy and transportation sectors. If that trend continues, we could see a metamorphosis in the economy and society not unlike previous industrial revolutions in history. And given the pace of technology change today, the shift could happen much faster than ever before.

The speed of change is dictated by the increase in digitalization of these three main sectors; expensive physical assets and processes are partially replaced by low-cost virtual ones. The cost efficiencies brought on by digitalization drive disruption in existing business models toward zero marginal cost, as we’ve already seen in entertainment and publishing. According to research company Gartner, when an industry gets to the point where digital drives at least 20% of revenues, you reach the tipping point.

“A clear pattern has emerged,” says Peter Sondergaard, executive vice president and head of research and advisory for Gartner. “Once digital revenues for a sector hit 20% of total revenue, the digital bloodbath begins,” he told the audience at Gartner’s annual 2017 IT Symposium/ITxpo, according to The Wall Street Journal. “No matter what industry you are in, 20% will be the point of no return.”

Communications is already there, and energy and transportation are heading down that path. If they hit the magic 20% mark, the impact will be felt not just within those industries but across all industries. After all, who doesn’t rely on energy and transportation to power their value chains?

That’s why businesses need to factor potentially massive business model disruptions into their plans for digital transformation today if they want to remain competitive with organizations in early adopter countries like China and Germany. China, for example, is already halfway through an US$88 billion upgrade to its state electricity grid that will enable renewable energy transmission around the country—all managed and moved digitally, according to an article in The Economist magazine. And it is competing with the United States for leadership in self-driving vehicles, which will shift the transportation process and revenue streams heavily to digital, according to an article in Wired magazine.

Once China’s and Germany’s renewables and driverless infrastructures are in place, the only additional costs are management and maintenance. That could bring businesses in these countries dramatic cost savings over those that still rely on fossil fuels and nuclear energy to power their supply chains and logistics. “Once you pay the fixed costs of renewables, the marginal costs are near zero,” says Rifkin. “The sun and wind haven’t sent us invoices yet.”

In other words, zero marginal cost has become a zero-sum game.

To understand why that is, consider the major industrial revolutions in history, writes Rifkin in his books, The Zero Marginal Cost Society and The Third Industrial Revolution. The first major shift occurred in the 19th century when cheap, abundant coal provided an efficient new source of power (steam) for manufacturing and enabled the creation of a vast railway transportation network. Meanwhile, the telegraph gave the world near-instant communication over a globally connected network.

The second big change occurred at the beginning of the 20th century, when inexpensive oil began to displace coal and gave rise to a much more flexible new transportation network of cars and trucks. Telephones, radios, and televisions had a similar impact on communications.

Breaking Down the Walls Between Sectors

Now, according to Rifkin, we’re poised for the third big shift. The eye of the technology disruption hurricane has moved beyond communications and is heading toward—or as publishing and entertainment executives might warn, coming for—the rest of the economy. With its assemblage of global internet and cellular network connectivity and ever-smaller and more powerful sensors, the IoT, along with Big Data analytics and artificial intelligence, is breaking down the economic walls that have protected the energy and transportation sectors for the past 50 years.

Daimler is now among the first movers in transitioning into a digitalized mobility internet. The company has equipped nearly 400,000 of its trucks with external sensors, transforming the vehicles into mobile Big Data centers. The sensors are picking up real-time Big Data on weather conditions, traffic flows, and warehouse availability. Daimler plans to establish collaborations with thousands of companies, providing them with Big Data and analytics that can help dramatically increase their aggregate efficiency and productivity in shipping goods across their value chains. The Daimler trucks are autonomous and capable of establishing platoons of multiple trucks driving across highways.

It won’t be long before vehicles that navigate the more complex transportation infrastructures around the world begin to think for themselves. Autonomous vehicles will bring massive economic disruption to transportation and logistics thanks to new aggregate efficiencies. Without the cost of having a human at the wheel, autonomous cars could achieve a shared cost per mile below that of owned vehicles by as early as 2030, according to research from financial services company Morgan Stanley.

The transition is getting a push from governments pledging to give up their addiction to cars powered by combustion engines. Great Britain, France, India, and Norway are seeking to go all electric as early as 2025 and by 2040 at the latest.

The Final Piece of the Transition

Considering that automobiles account for 47% of petroleum consumption in the United States alone—more than twice the amount used for generators and heating for homes and businesses, according to the U.S. Energy Information Administration—Rifkin argues that the shift to autonomous electric vehicles could provide the momentum needed to upend the final pillar of the economic platform: energy. Though energy has gone through three major disruptions over the past 150 years, from coal to oil to natural gas—each causing massive teardowns and rebuilds of infrastructure—the underlying economic model has remained constant: highly concentrated and easily accessible fossil fuels and highly centralized, vertically integrated, and enormous (and enormously powerful) energy and utility companies.

Now, according to Rifkin, the “Third Industrial Revolution Internet of Things infrastructure” is on course to disrupt all of it. It’s neither centralized nor vertically integrated; instead, it’s distributed and networked. And that fits perfectly with the commercial evolution of two energy sources that, until the efficiencies of the IoT came along, made no sense for large-scale energy production: the sun and the wind.

But the IoT gives power utilities the means to harness these distributed sources together and to account for variable energy flows. Sensors on solar panels and wind turbines, along with intelligent meters and a smart grid based on the internet, manage a new, two-way flow of energy to and from the grid.

Today, fossil fuel–based power plants need to kick in extra energy if insufficient energy is collected from the sun and wind. But industrial-strength batteries and hydrogen fuel cells are beginning to take their place by storing large reservoirs of reserve power for rainy or windless days. In addition, electric vehicles will be able to send some of their stored energy to the digitalized energy internet during peak use. Demand for ever-more efficient cell phone and vehicle batteries is helping push the evolution of batteries along, but batteries will need to get a lot better if renewables are to completely replace fossil fuel energy generation.

Meanwhile, silicon-based solar cells have not yet approached their limits of efficiency. They have their own version of computing’s Moore’s Law called Swanson’s Law. According to data from research company Bloomberg New Energy Finance (BNEF), Swanson’s Law means that for each doubling of global solar panel manufacturing capacity, the price falls by 28%, from $76 per watt in 1977 to $0.41 in 2016. (Wind power is on a similar plunging exponential cost curve, according to data from the U.S. Department of Energy.)
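
As a rough sanity check of those numbers (my arithmetic, not BNEF’s): if each doubling of capacity cuts the price by 28%, the price after n doublings is 76 × 0.72^n dollars per watt, so

    76 × 0.72^n = 0.41  =>  n = ln(0.41 / 76) / ln(0.72) ≈ 15.9

In other words, the quoted prices imply roughly 16 doublings of cumulative manufacturing capacity between 1977 and 2016, a growth factor of about 2^16 ≈ 65,000.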

Thanks to the plummeting solar price, by 2028, the cost of building and operating new sun-based generation capacity will drop below the cost of running existing fossil power plants, according to BNEF. “One of the surprising things in this year’s forecast,” says Seb Henbest, lead author of BNEF’s annual long-term forecast, the New Energy Outlook, “is that the crossover points in the economics of new and old technologies are happening much sooner than we thought last year … and those were all happening a bit sooner than we thought the year before. There’s this sense that it’s not some distant risk or distant opportunity. A lot of these realities are rushing toward us.”

The conclusion, he says, is irrefutable. “We can see the data and when we map that forward with conservative assumptions, these technologies just get cheaper than everything else.”

The smart money, then—72% of total new power generation capacity investment worldwide by 2040—will go to renewable energy, according to BNEF. The firm’s research also suggests that there’s more room in Swanson’s Law along the way, with solar prices expected to drop another 66% by 2040.

Another factor could push the economic shift to renewables even faster. Just as computers transitioned from being strictly corporate infrastructure to becoming consumer products with the invention of the PC in the 1980s, ultimately causing a dramatic increase in corporate IT investments, energy generation has also made the transition to the consumer side.

Thanks to future tech media star Elon Musk, consumers can go to his Tesla Energy company website and order tempered glass solar panels that look like chic, designer versions of old-fashioned roof shingles. Models that look like slate, or like curved, terracotta-colored ceramic-style glass that will make roofs look like those of Tuscan country villas, are promised soon. Consumers can also buy a sleek-looking battery called a Powerwall to store energy from the roof.

The combination of solar panels, batteries, and smart meters transforms homeowners from passive consumers of energy into active producers and traders who can choose to take energy from the grid during off-peak hours, when some utilities offer discounts, and sell energy back to the grid during periods when prices are higher. And new blockchain applications promise to accelerate the shift to an energy market that is laterally integrated rather than vertically integrated as it is now. Consumers like their newfound sense of control, according to Henbest. “Energy’s never been an interesting consumer decision before and suddenly it is,” he says.

As the price of solar equipment continues to drop, homes, offices, and factories will become like nodes on a computer network. And if promising new solar cell technologies, such as organic polymers, small molecules, and inorganic compounds, supplant silicon, which is not nearly as efficient with sunlight as it is with ones and zeroes, solar receivers could become embedded into windows and building compounds. Solar production could move off the roof and become integrated into the external facades of homes and office buildings, making nearly every edifice in town a node.

The big question, of course, is how quickly those nodes will become linked together—if, say doubters, they become linked at all. As we learned from Metcalfe’s Law, the value of a network is proportional to the square of its number of connected users.

The Will Determines the Way

Right now, the network is limited. Wind and solar account for just 5% of global energy production today, according to Bloomberg.

But, says Rifkin, technology exists that could enable the network to grow exponentially. We are seeing the beginnings of a digital energy network, which uses a combination of the IoT, Big Data, analytics, and artificial intelligence to manage distributed energy sources, such as solar and wind power from homes and businesses.

As nodes on this network, consumers and businesses could take a more active role in energy production, management, and efficiency, according to Rifkin. Utilities, in turn, could transition from simply transmitting power and maintaining power plants and lines to managing the flow to and from many different energy nodes; selling and maintaining smart home energy management products; and monitoring and maintaining solar panels and wind turbines. By analyzing energy use in the network, utilities could create algorithms that automatically smooth the flow of renewables. Consumers and businesses, meanwhile, would not have to worry about connecting their wind and solar assets to the grid and keeping them up and running; utilities could take on those tasks more efficiently.

Already in Germany, two utility companies, E.ON and RWE, have each split their businesses into legacy fossil and nuclear fuel companies and new services companies based on distributed generation from renewables, new technologies, and digitalization.

The reason is simple: it’s about survival. As fossil fuel generation winds down, the utilities need a new business model to make up for lost revenue. Due to Germany’s population density, “the utilities realize that they won’t ever have access to enough land to scale renewables themselves,” says Rifkin. “So they are starting service companies to link together all the different communities that are building solar and wind and are managing energy flows for them and for their customers, doing their analytics, and managing their Big Data. That’s how they will make more money while selling less energy in the future.”


The digital energy internet is already starting out in pockets and at different levels of intensity around the world, depending on a combination of citizen support, utility company investments, governmental power, and economic incentives.

China and some countries within the EU, such as Germany and France, are the most likely leaders in the transition toward a renewable, energy-based infrastructure because they have been able to align the government and private sectors in long-term energy planning. In the EU for example, wind has already overtaken coal as the second largest form of power capacity behind natural gas, according to an article in The Guardian newspaper. Indeed, Rifkin has been working with China, the EU, and governments, communities, and utilities in Northern France, the Netherlands, and Luxembourg to begin building these new internets.

Hauts-de-France, a region that borders the English Channel and Belgium and has one of the highest poverty rates in France, enlisted Rifkin to develop a plan to lift it out of its downward spiral of shuttered factories and abandoned coal mines. In collaboration with a diverse group of CEOs, politicians, teachers, scientists, and others, it developed Rev3, a plan to put people to work building a renewable energy network, according to an article in Vice.

Today, more than 1,000 Rev3 projects are underway, encompassing everything from residential windmills made from local linen to a fully electric car–sharing system. Rev3 has received financial support from the European Investment Bank and a handful of private investment funds, and startups have benefited from crowdfunding mechanisms sponsored by Rev3. Today, 90% of new energy in the region is renewable and 1,500 new jobs have been created in the wind energy sector alone.

Meanwhile, thanks in part to generous government financial support, Germany is already producing 35% of its energy from renewables, according to an article in The Independent, and there is near unanimous citizen support (95%, according to a recent government poll) for its expansion.

If renewable energy is to move forward in other areas of the world that don’t enjoy such strong economic and political support, however, it must come from the ability to make green, not act green.

Not everyone agrees that renewables will produce cost savings sufficient to cause widespread cost disruption anytime soon. A recent forecast by the U.S. Energy Information Administration predicts that in 2040, oil, natural gas, and coal will still be the planet’s major electricity producers, powering 77% of worldwide production, while renewables such as wind, solar, and biofuels will account for just 15%.

Skeptics also say that renewables’ complex management needs, combined with the need to store reserve power, will make them less economical than fossil fuels through at least 2035. “All advanced economies demand full-time electricity,” Benjamin Sporton, chief executive officer of the World Coal Association told Bloomberg. “Wind and solar can only generate part-time, intermittent electricity. While some renewable technologies have achieved significant cost reductions in recent years, it’s important to look at total system costs.”

On the other hand, there are many areas of the world where distributed, decentralized, renewable power generation already makes more sense than a centralized fossil fuel–powered grid. More than 20% of Indians in far-flung areas of the country have no access to power today, according to an article in The Guardian. Locally owned and managed solar and wind farms are the most economical way forward. The same is true in other developing countries, such as Afghanistan, where rugged terrain, war, and tribal territorialism make a centralized grid an easy target, and mountainous Costa Rica, where strong winds and rivers have pushed the country to near 100% renewable energy, according to The Guardian.

The Light and the Darknet

Even if all the different IoT-enabled economic platforms become financially advantageous, there is another concern that could disrupt progress and potentially cause widespread disaster once the new platforms are up and running: hacking. Poorly secured IoT sensors have allowed hackers to take over everything from Wi-Fi enabled Barbie dolls to Jeep Cherokees, according to an article in Wired magazine.

Humans may be lousy drivers, but at least we can’t be hacked (yet). And while the grid may be prone to outages, it is tightly controlled, has few access points for hackers, and is physically separated from the Wild West of the internet.

If our transportation and energy networks join the fray, however, every sensor, from those in the steering system on vehicles to grid-connected toasters, becomes as vulnerable as a credit card number. Fake news and election hacking are bad enough, but what about fake drivers or fake energy? Now we’re talking dangerous disruptions and putting millions of people in harm’s way.

The only answer, according to Rifkin, is for businesses and governments to start taking the hacking threat much more seriously than they do today and to begin pouring money into research and technologies for making the internet less vulnerable. That means establishing “a fully distributed, redundant, and resilient digital infrastructure less vulnerable to the kind of disruptions experienced by Second Industrial Revolution–centralized communication systems and power grids that are increasingly subject to climate change, disasters, cybercrime, and cyberterrorism,” he says. “The ability of neighborhoods and communities to go off centralized grids during crises and re-aggregate in locally decentralized networks is the key to advancing societal security in the digital era,” he adds.

Start Looking Ahead

Until today, digital transformation has come mainly through the networking and communications efficiencies made possible by the internet. Airbnb thrives because web communications make it possible to create virtual trust markets that allow people to feel safe about swapping their most private spaces with one another.

But now these same efficiencies are coming to two other areas that have never been considered core to business strategy. That’s why businesses need to begin managing energy and transportation as key elements of their digital transformation portfolios.

Microsoft, for example, formed a senior energy team to develop an energy strategy to mitigate risk from fluctuating energy prices and increasing demands from customers to reduce carbon emissions, according to an article in Harvard Business Review. “Energy has become a C-suite issue,” Rob Bernard, Microsoft’s top environmental and sustainability executive told the magazine. “The CFO and president are now actively involved in our energy road map.”

As Daimler’s experience shows, driverless vehicles will push autonomous transportation and automated logistics up the strategic agenda within the next few years. Boston Consulting Group predicts that the driverless vehicle market will hit $42 billion by 2025. If that happens, it could have a lateral impact across many industries, from insurance to healthcare to the military.

Businesses must start planning now. “There’s always a period when businesses have to live in the new and the old worlds at the same time,” says Rifkin. “So businesses need to be considering new business models and structures now while continuing to operate their existing models.”

He worries that many businesses will be left behind if their communications, energy, and transportation infrastructures don’t evolve. Companies that still rely on fossil fuels for powering traditional transportation and logistics could be at a major competitive disadvantage to those that have moved to the new, IoT-based energy and transportation infrastructures.

Germany, for example, has set a target of 80% renewables for gross power consumption by 2050, according to The Independent. If the cost advantages of renewables bear out, German businesses (Germany is already the world’s third-largest exporter, behind China and the United States) could have a major competitive advantage.

“How would a second industrial revolution society or country compete with one that has energy at zero marginal cost and driverless vehicles?” asks Rifkin. “It can’t be done.” D!


About the Authors

Maurizio Cattaneo is Director, Delivery Execution, Energy and Natural Resources, at SAP.

Joerg Ferchow is Senior Utilities Expert and Design Thinking Coach, Digital Transformation, at SAP.

Daniel Wellers is Digital Futures Lead, Global Marketing, at SAP.

Christopher Koch is Editorial Director, SAP Center for Business Insight, at SAP.


Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.



Digitalist Magazine

Alf the Pope's nose is “poisoning our democracy” with 45*'s preen oil


In 2010, the Marine Corps Gazette began publishing a series of articles entitled “The Attritionist Letters” styled in the manner of The Screwtape Letters. In the letters, General Screwtape chastises Captain Wormwood for his inexperience and naivete while denouncing the concepts of maneuver warfare in favor of attrition warfare.

Even St Ronnie Reagan quoted from The Screwtape Letters, so thinking that the WH has VP Screwtape and President Wormwood makes perfect sense, considering Pence left the Catholic church because it wasn’t conservative enough and 45* worships at the church of the mashie niblick.

Literally nobody in Congress wants unchecked illegal immigration. These insane, bad faith statements should shock us — particularly a day after Trump’s campaign claimed Democrats are complicit in murder — but they’re now routine. Trump is poisoning our democracy. https://t.co/TNlOzr524X

— Brian Klaas (@brianklaas) January 21, 2018

The Screwtape Letters comprises 31 letters written by a senior demon named Screwtape to his nephew, Wormwood (named after a star in Revelation), a younger and less experienced demon, charged with guiding a man (called “the patient”) toward “Our Father Below” (Devil / Satan) from “the Enemy” (God).

In The Screwtape Letters, C. S. Lewis provides a series of lessons in the importance of taking a deliberate role in Christian faith by portraying a typical human life, with all its temptations and failings, seen from devils’ viewpoints. Screwtape holds an administrative post in the bureaucracy (“Lowerarchy”) of Hell, and acts as a mentor to his nephew Wormwood, an inexperienced (and incompetent) tempter.

In the thirty-one letters which constitute the book, Screwtape gives Wormwood detailed advice on various methods of undermining faith and of promoting sin in “the Patient”, interspersed with observations on human nature and on Christian doctrine. In Screwtape’s advice, selfish gain and power are seen as the only good, and neither demon can comprehend God’s love for man or acknowledge human virtue.

After the second letter… A striking contrast is formed between Wormwood and Screwtape during the rest of the book, wherein Wormwood is depicted through Screwtape’s letters as anxious to tempt his patient into extravagantly wicked and deplorable sins, often recklessly, while Screwtape takes a more subtle stance, as in Letter XII wherein he remarks: “… the safest road to hell is the gradual one – the gentle slope, soft underfoot, without sudden turnings, without milestones, without signposts”.

In the last letter... Screwtape responds to Wormwood’s final letter that he may expect as little assistance as Screwtape would expect from Wormwood were their situations reversed (“My love for you and your love for me are as alike as two peas … The only difference is that I am the stronger.”), mimicking the situation where Wormwood himself informed on his uncle to the Infernal Police for Infernal Heresy (making a religiously positive remark that would offend Satan).

en.wikipedia.org/…


A California man’s daily sushi habit ended in a trip to hospital with a stomach-churning item to show doctors: a 5ft tapeworm that “wiggled” out of his body.

Fresno emergency department doctor Kenny Banh told the Guardian he was skeptical when the man walked into his hospital, asking for treatment for a worm.

But when the patient opened a plastic bag, the “giant” parasite was inside, wrapped around a toilet roll.

“Apparently it was still wriggling when he put it in the bag but it had died in transit,” Banh said.

www.theguardian.com/…


There is no valuable information which could be obtained from inserting a thermometer in the urethra.


Jeremy Piven appeared in the first Broadway revival of David Mamet’s Speed-the-Plow, co-starring Mad Men star Elisabeth Moss and three-time Tony nominee Raul Esparza. The production began preview performances on October 3, 2008, and opened on October 23, 2008; the play was due to run through February 22, 2009. After Piven missed several performances, on December 17, 2008, Piven’s rep announced that due to an undisclosed illness, Piven would be ending his run in the play effective immediately.[15]

The illness was revealed to be hydrargyria, a disease caused by exposure to mercury or its compounds, though the source is unknown. Rumours have indicated that the high level of mercury could potentially have been caused by Piven’s habit of consuming fish twice a day for the past 20 years.[16] An alternative explanation is that the herbal remedies Piven was taking were responsible for his high levels of mercury.[17] Mamet joked that Piven was leaving the play “to pursue a career as a thermometer.”[18]

On September 1, 2009, Piven, in a guest appearance on the Late Show with David Letterman, explained that he had given up red meat and poultry, and had been getting all of his protein from fish for the past 20 years. William H. Macy and Norbert Leo Butz replaced Piven in the Broadway show.[19]


moranbetterDemocrats

Show me your data — the new precondition to M&As


In 2017, the total value of mergers and acquisitions (M&As) exceeded three trillion dollars. Some of the more notable M&As in the past year include Amazon’s acquisition of Whole Foods, Intel’s purchase of autonomous vehicle tech firm Mobileye, and Verizon’s acquisition of Yahoo, which became a high-profile example of the cost undisclosed data breaches can impose on valuations — in this case a $350 million drop in the final price tag.

To better prepare for the growing threat against corporate, customer, and employee data, companies are enforcing new data management and protection practices. One such change is the practice of requiring that each party in an M&A transaction demonstrate compliance with industry privacy and security standards before finalizing a deal. Under the new precondition, buyers and sellers are making more granular requests for visibility into the other side’s entire information repository and lifecycle to safeguard their own business assets and brands.

While the extent of required compliance varies with each buyer, seller, and deal, it is nonetheless now a key component of deal-making. From pre- to post-M&A, all parties should consider how their privacy and data security posture could have a material effect on the proposed deal. To that end, here are a few key points to consider when you’re entering a deal:

  • Visibility into the entire information life cycle – How does a company that is contemplating an M&A collect, store, encrypt, and destroy personal data? What information is stored on what systems, and for how long? How is the information inventoried, mapped, and categorized? How, and with whom, is data shared? These are threshold questions for which any acquirer or target company should have answers (a minimal sketch of such an inventory record follows this list).
  • What types of data – What types of personal data would potentially be involved in the transaction? For example, does the deal involve direct marketing contact information, personal data originating from new markets, or sensitive data that could subject the company to new, industry-specific laws?
  • Merging corporate data – Is it a transactional goal that one company will become fully incorporated into another, thereby merging the two distinct data sets, or will the target remain a standalone unit that continues to operate as a discrete division with segregated personal data?
  • International data transfers – What are the parties’ legal transfer mechanisms for cross-border personal data transfers? Would the merger itself lead to a cross-border transfer of personal data and, if so, would any country-specific laws then come into play, such as from China or Russia?
  • History of data breaches and the risk of compromised data – Are all transactional parties prepared to provide a history of any known or suspected data incidents or cyberattack attempts, and the responses to all? Beyond just corporate reputation, compromised data could mean any underlying intellectual property’s value has been diminished or that other vectors of attack may exist.
  • Breach response plans and encryption – Are data security plans such as breach response, disaster recovery, and business continuity in place and tested? What levels of encryption are used throughout the organization and how is this determined and monitored?
  • C-suite buy-in – Do the directors, officers, and executives have access to appropriate internal and external resources to help them evaluate data privacy and security issues and make informed business decisions? Have they allocated budgetary resources for personnel and technology solutions needed to automate privacy-compliant best practices well in advance of a transaction?
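
As a companion to the inventory questions above, here is a minimal, purely illustrative sketch of what one record in a machine-readable data inventory might look like, along with simple red-flag checks a reviewer could run during diligence. Every field name and check below is an assumption for illustration, not an industry standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataInventoryRecord:
    """One entry in a pre-M&A data inventory (illustrative fields only)."""
    dataset: str                  # e.g., "EU customer contact list"
    system: str                   # where the data is stored
    categories: List[str] = field(default_factory=list)   # e.g., ["contact", "marketing"]
    encrypted_at_rest: bool = False
    retention_days: int = 0       # 0 = no defined retention policy
    shared_with: List[str] = field(default_factory=list)  # third parties with access
    cross_border: bool = False    # transferred outside its origin jurisdiction

def diligence_flags(record: DataInventoryRecord) -> List[str]:
    """Return simple red flags a reviewer might raise for one record."""
    flags = []
    if not record.encrypted_at_rest:
        flags.append(f"{record.dataset}: not encrypted at rest")
    if record.retention_days == 0:
        flags.append(f"{record.dataset}: no retention policy defined")
    if record.cross_border:
        flags.append(f"{record.dataset}: confirm a legal cross-border transfer mechanism")
    return flags

record = DataInventoryRecord(dataset="EU customer contact list", system="CRM", cross_border=True)
print(diligence_flags(record))
```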

In a world of growing cyber threats and attacks, these privacy and data security considerations actually go far beyond just M&As. They can help businesses understand the ramifications of worst-case scenarios and evaluate the impact of data security and privacy solutions and policies on company value. Regulators are also monitoring companies’ privacy practices and statements more closely. For instance, the EU General Data Protection Regulation (GDPR), the most sweeping change to data protection in the past 20 years, will impact any U.S. company that handles EU resident data. Failure to comply with GDPR by the mandated May 25, 2018 deadline may lead to fines of up to €20 million or 4 percent of global annual turnover, whichever is higher.
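
To make that fine ceiling concrete, here is the arithmetic as a quick sketch (the turnover figure is hypothetical):

```python
def max_gdpr_fine_eur(global_annual_turnover_eur: float) -> float:
    """GDPR's upper tier: EUR 20 million or 4% of global annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a hypothetical company with EUR 2 billion in annual turnover,
# the ceiling is 4% of turnover (EUR 80 million), not EUR 20 million.
print(max_gdpr_fine_eur(2_000_000_000))  # 80000000.0
```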

In today’s business climate, not adhering to privacy and data protection practices risks leaving money on the table in M&A deals, incurring regulatory fines, and losing brand assets.

Chris Babel is CEO of TrustArc.


Big Data – VentureBeat

Salesfusion Launches Spiffy New Marketing Automation Tool

Salesfusion on Tuesday launched version 12 of its marketing automation solution.


The company rebuilt Salesfusion 12 from the ground up to provide an architecture robust enough to support “the easiest-to-use, most modern campaign creation tools available to marketers today,” said Greg Vilines, Salesfusion’s VP of product and engineering.

It provides enterprise-grade power and features without the typical price tag, he told CRM Buyer.

Dated architecture was one of the challenges the company faced, said Cindy Zhou, principal analyst at Constellation Research.

“Rearchitecting enables Salesfusion to modernize their solution,” she told CRM Buyer.

The new Olympus modular automation framework, a key part of Salesfusion 12, processes campaigns and adds horsepower to handle increased workloads from traffic spikes.

Salesfusion 12 “was purpose-built for data-driven marketers who wish to build sophisticated marketing programs in a fraction of the time it previously took,” Vilines said.

Customers can try out Salesfusion 12 for two weeks for free, he noted.

Olympus’ Capabilities

Olympus runs on a scalable platform that dynamically adapts to shifts in workload volume, drawing on Amazon Web Services’ native virtualization capabilities, Vilines said.

Other Olympus components:

  • new data processing and indexing tools for inbound activity tracking, leveraging Amazon Kinesis Data Firehose and Elasticsearch (a rough sketch of this pattern follows this list); and
  • high-speed Redshift data warehouses to power advanced analytics and core dashboards and reporting.
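
The article names the AWS building blocks in the first bullet above but not how they fit together. As an illustration of the general pattern only (streaming tracking events through Kinesis Data Firehose, which can then deliver them to Elasticsearch for indexing), here is a minimal sketch using boto3; the stream name, event fields, and region are assumptions, not Salesfusion’s actual implementation.

```python
import json

import boto3

# Region and stream name are hypothetical placeholders.
firehose = boto3.client("firehose", region_name="us-east-1")

def track_inbound_activity(visitor_id: str, event: str, url: str) -> None:
    """Push one inbound-activity event into a Kinesis Data Firehose stream,
    from which it can be delivered to Elasticsearch for indexing."""
    payload = {"visitor_id": visitor_id, "event": event, "url": url}
    firehose.put_record(
        DeliveryStreamName="inbound-activity-stream",  # assumed name
        Record={"Data": (json.dumps(payload) + "\n").encode("utf-8")},
    )

track_inbound_activity("v-123", "page_view", "https://example.com/pricing")
```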

“With the increased collaboration between marketers and sales teams, the complexity of multiple campaigns running concurrently and the associated data collection increases,” noted Constellation’s Zhou. “This new automation framework should help customers with performance speed and processing.”

Salesfusion’s New Features

Salesfusion 12 has three new features:

  • Page Builder, which provides drag-and-drop builders that let marketers create landing pages and emails rapidly without needing specialized coding skills or third-party tools;
  • Advanced Analytics, which offers custom dashboards as well as reports that allow data drilling, visualization and data storytelling; and
  • Deeper CRM integration with Salesforce, Sugar, Sage, NetSuite, Microsoft Dynamics, Infor and Bullhorn.

Page Builder has a WYSIWYG interface that lets users add images, videos and buttons. Users can build completely mobile-responsive pages that work across all devices.

Page Builder leverages Salesfusion’s Form Builder and automated actions to streamline interactions with prospects and customers.

Salesfusion 12 includes the Advanced Analytics platform, a new business intelligence module that lets users leverage marketing data to monitor campaign performance and connect marketing activities to revenue.


Advanced Analytics was released in August as an add-on module for a fee, but is now included in Salesfusion 12, Vilines said. Further, it has new features – opportunity, ROI and funnel reporting.

For CRM integration, Salesfusion 12 includes integration with custom objects in both Microsoft Dynamics CRM and Salesforce. It has deepened connectivity with Bullhorn, and it integrates with campaigns in Salesforce, Infor and Microsoft Dynamics.

“Additional integrations with CRM will be important in making Salesfusion attractive,” said Rebecca Wettemann, VP of research at Nucleus Research, “particularly with CRM vendors like Microsoft that don’t have marketing automation capabilities in their core product.”

“Salesfusion has always rated high on usability,” she told CRM Buyer. “These advancements will likely make it more attractive, particularly for organizations where marketers wear several hats — in SMBs — or aren’t everyday users of the application.”

Salesfusion has customers in various industries, including technology, healthcare, finance and recruiting, Vilines said.

Future Plans for Salesfusion 12

Salesfusion will release new features continually over the next few months as part of the Salesfusion 12 launch, including the following:

  • Account View — reshaping ABM efforts to facilitate better insights and account actions;
  • Response Prospector, which will leverage machine learning and AI to find new leads and keep databases up to date; and
  • An updated REST API, which will offer new ways to integrate directly with Salesfusion’s platform.

“Many of the marketing automation leaders are building AI capabilities into their solution,” Zhou noted.

In this respect, Salesfusion may have to work hard to catch up — companies such as Boomtrain, Albert and Blueshift already offer AI-powered marketing automation products.

Still, Salesfusion continuously improves its core infrastructure, Vilines emphasized, “to meet the ever-changing needs of marketers.”


Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including InformationWeek and Computerworld. He is the author of two books on client/server technology.
Email Richard.


CRM Buyer

Our Republican President Reportedly Asked a Porn Star To Spank Him With A Copy Of Forbes Magazine.


Stormy Daniels

OK, I am not one to cast judgment on other people’s consensual sexual behavior. But for the 83% of Republican voters who continue to support this guy, please, please spare me any further talk about your so-called values, your so-called religious beliefs, or your so-called morals. We found out how meaningless those were when you cast your lot with that child molester in the Alabama Senate race. You don’t have any moral authority. At the risk of being colloquial, you got nothin’.

So spare me the self-righteous crap you spew about abortion. Spare me the tirades about “godless Liberals.” Spare me the whole panoply of garbage you’ve spun about Democrats and “moral relativism” and whatever since you hitched your wagon to the likes of William Bennett and Jerry Falwell. You’ve had a long history of feeling superior about yourselves. It’s over now.

There are new details about the president’s alleged relationship with adult-film star Stormy Daniels. According to a report by Mother Jones, Daniels once claimed that the president made her spank him with an issue of Forbes magazine.

Believe it or not, Ms. Daniels once had some serious political aspirations. They were prompted by—you guessed it—the crass hypocrisy and infidelity of Republican Senator “Diaper” David Vitter, whose shenanigans became headline news after his name surfaced in the DC Madam’s little black book in 2007. Daniels, like most people, was incensed not by the guy’s behavior, but by his morally superior double-talk.

Daniels, who grew up in Baton Rouge, Louisiana, told reporters she wanted to highlight his hypocrisy. She offered up a potential campaign slogan: “Stormy Daniels: Screwing people honestly.”

At that time, Daniels considered Donald Trump a potential donor to her campaign, having been advised by a political operative whom she’d retained to explore the prospect of waging a political campaign against Vitter. The operative mentioned Trump in an email to Daniels, and then forwarded the same email to a Democratic consultant, Andrea Dubé, who confirmed the following:

This email was sent to Andrea Dubé, a Democratic political consultant based in New Orleans. In response, Dubé expressed surprise that Daniels was friendly with Trump. “Donald Trump?” she wrote. “In her cell phone?”

“Yep,” the other consultant replied. “She says one time he made her sit with him for three hours watching ‘shark week.’ Another time he had her spank him with a Forbes magazine.”

***

The campaign consultant who wrote the email to Dubé tells Mother Jones that Daniels said the spanking came during a series of sexual and romantic encounters with Trump and that it involved a copy of Forbes with Trump on the cover.

It’s beyond the purview of this particular Diary to explore the nuances of what Trump was trying to work out in his head by asking to be spanked with a copy of Forbes magazine with him on the cover. I’m not going to touch that, and in truth it’s none of my business. And, as pointed out in the comments, it’s possible that Daniels made up the whole episode.

But if this had been Barack Obama, what do you think the reaction of the right would be?


moranbetterDemocrats

3 Things About Machine Learning Every Marketer Needs to Know


TL;DR: Machine Learning 101: 3 Things Marketers Need to Know

Got data?

I bet you do.

Mountains of data, in fact. Terabytes of data. Libraries worth of data. With more streaming in every hour of every day.

We marketers love our data, but, let’s face it … we probably only use a fraction of the data we collect.

It’s not that we don’t want to use more of it. We do.

It would be fantastic, for example, to follow each and every customer around, to see everything they read, how long they read it for, and where they clicked next. You might even want to drop a cookie on their computer and see all the other websites they went to. You could survey them, too, and send them personal messages on social media. You could test when the best time is to send them messages, and which channel they respond to best.

Then, with all that wonderful knowledge, you could hole up in your office and design a complete soup-to-nuts marketing strategy just for them.

I’m not talking about something like account-based marketing, where your work is for one big target company. I’m talking about a totally personalized, hand-crafted marketing strategy and execution for every single possible prospect your company could have.

Just think of it: thousands of completely personalized marketing plans. Tens of thousands of personalized messages. Hundreds of thousands of hours poring over the data, studying exactly how each and every single prospect behaves.

That’d be great, right?

Well, if you had unlimited time and unlimited resources, maybe. If you never had to sleep, and had no family and no life … and the assurance that you’d live to be at least 312.

Otherwise … forget it.

The idea that we could focus that closely and process every little bit of data we have about our prospects and customers is laughable. Delusional.

We are not machines.

At the most, we only have enough resources to segment our audiences. We have to create personas and buyer journeys based on our best guesses (informed by the data, of course).

But what if machines could do all that?

What if a well-trained algorithm could follow each one of your prospects around and could recommend the perfect piece of content and send it to them at the perfect time, in the channel they’d be most likely to respond to it in? And what if the algorithm could even predict the perfect time for your ace salesperson to finally give them a call?

That’s what machine learning can do.

Here’s what you need to know about it (at least for starters).

Machine learning is a subset of artificial intelligence.

At its simplest definition, machine learning is nothing more than “using data to answer questions.” Hat tip to Google’s superb video series on machine learning for that definition.

It’s a specific type ‒ or discipline, if you will ‒ of artificial intelligence. One of its strengths is that a machine learning algorithm’s accuracy can improve over time. It can “learn.” So, while a program that can play chess might be considered artificial intelligence, a program that can learn to play chess, and ping pong, and any other game, would be an example of machine learning.
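
In marketing terms, “using data to answer questions” can be as small as fitting a classifier to past engagement data and then asking it about a new prospect. Here is a minimal sketch with scikit-learn; the features and labels below are entirely made up for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Toy, made-up engagement data: [emails_opened, pages_viewed, days_since_last_visit]
X = [
    [5, 12, 1],
    [0, 1, 60],
    [8, 20, 2],
    [1, 2, 45],
    [6, 9, 3],
    [0, 0, 90],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = prospect responded to an offer, 0 = did not

model = LogisticRegression()
model.fit(X, y)  # the "learning": parameters are estimated from the data

# Score a new prospect; the output answers "how likely is a response?"
print(model.predict_proba([[4, 7, 5]])[0][1])
```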

More complicated machine learning systems are often called “deep learning.” These systems process data through multiple stacked layers of simple units, structures known as “neural networks.”
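
To make the stacked-layers idea concrete, here is a minimal, untrained two-layer forward pass in plain numpy. Real deep learning frameworks add training and many more layers, but the basic structure, each layer feeding the next, is the same; all weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: each layer transforms the previous layer's output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # layer 2: 4 hidden -> 1 output

def forward(x: np.ndarray) -> np.ndarray:
    """One pass through the stacked layers; 'deep' just means more of them."""
    hidden = np.maximum(0, x @ W1 + b1)            # ReLU non-linearity
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output in (0, 1)

print(forward(np.array([5.0, 12.0, 1.0])))         # untrained, so the output is arbitrary
```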

Picture a Venn diagram: artificial intelligence is the outermost circle, machine learning is a circle inside it, and deep learning is a smaller circle inside machine learning.


Act-On Blog