Category Archives: BI News and Info

Digital Transformation And The Successful Management Of Innovation


Achieving quantum leaps through disruption and using data in new contexts, in ways designed for more than just Generation Y — indeed, the digital transformation affects us all. It’s time for a detailed look at its key aspects.

Data finding its way into new settings

Archiving all of a company’s internal information until the end of time is generally a good idea, as it gives the boss the security that nothing will be lost. Meanwhile, enabling him or her to create bar graphs and pie charts based on sales trends – preferably in real time, of course – is even better.

But the best scenario of all is when the boss can incorporate data from external sources. All of a sudden, information on factors as seemingly mundane as the weather starts helping to improve interpretations of fluctuations in sales and to make precise modifications to the company’s offerings. When the gusts of autumn begin to blow, for example, energy providers scale back solar production and crank up their windmills. Here, external data provides a foundation for processes and decisions that were previously unattainable.

Quantum leaps possible through disruption

While these advancements involve changes in existing workflows, there are also much more radical approaches that eschew conventional structures entirely.

“The aggressive use of data is transforming business models, facilitating new products and services, creating new processes, generating greater utility, and ushering in a new culture of management,” states Professor Walter Brenner of the University of St. Gallen in Switzerland, regarding the effects of digitalization.

Harnessing these benefits requires the application of innovative information and communication technology, especially the kind termed “disruptive.” A complete departure from existing structures may not necessarily be the actual goal, but it can occur as a consequence of this process.

Having had to contend with “only” one new technology at a time in the past, be it PCs, SAP software, SQL databases, or the Internet itself, companies are now facing an array of concurrent topics, such as the Internet of Things, social media, third-generation e-business, and tablets and smartphones. Professor Brenner thus believes that every good — and perhaps disruptive — idea can result in a “quantum leap in terms of data.”

Products and services shaped by customers

It has already been nearly seven years since the release of an app that enables customers to order and pay for taxis. Initially introduced in Berlin, Germany, mytaxi makes it possible to avoid waiting on hold for the next phone representative and pay by credit card while giving drivers greater independence from taxi dispatch centers. In addition, analyses of user data can lead to the creation of new services, such as for people who consistently order taxis at around the same time of day.

“Successful models focus on providing utility to the customer,” Professor Brenner explains. “In the beginning, at least, everything else is secondary.”

In this regard, the private taxi agency Uber is a fair bit more radical. It bypasses the entire taxi industry and hires private individuals interested in making themselves and their vehicles available for rides on the Uber platform. Similarly, Airbnb runs a platform travelers can use to book private accommodations instead of hotel rooms.

Long-established companies are also undergoing profound changes. The German publishing house Axel Springer SE, for instance, has acquired a number of startups, launched an online dating platform, and released an app with which users can collect points at retail. Chairman and CEO Matthias Döpfner is, of course, also interested in using payment models to get the company’s newspapers and other periodicals back into the black, but these endeavors are somewhat at odds with the traditional notion of publishing houses being involved solely in publishing.

The impact of digitalization transcends Generation Y

Digitalization is effecting changes in nearly every industry. Retailers will likely have no choice but to integrate their sales channels into an omnichannel approach. Seeking to make their data services as attractive as possible, BMW, Mercedes, and Audi have joined forces to purchase the digital map service HERE. Mechanical engineering companies are outfitting their equipment with sensors to reduce downtime and achieve further product improvements.

“The specific potential and risks at hand determine how and by what means each individual company approaches the subject of digitalization,” Professor Brenner explains. The resulting services will ultimately benefit every customer – not just those belonging to Generation Y, who have a certain basic affinity for digital methods.

“Think of cars that notify the service center when their brakes or drive belts need to be replaced, offer parking assistance, or even handle parking for you,” Brenner offers. “This can be a big help to elderly people in particular.”

Chief digital officers: team members, not miracle workers

Making the transition to the digital future is something that involves not only a CEO or a head of marketing or IT, but the entire company. Though these individuals do play an important role as proponents of digital models, it takes more than a chief digital officer alone.

For Professor Brenner, appointing a single person to the board of a DAX company to oversee digitalization is basically absurd. “Unless you’re talking about Da Vinci or Leibniz born again, nobody could handle such a task,” he states.

In Brenner’s view, this is a topic for each and every department, and responsibilities should be assigned much like on a soccer field: “You’ve got a coach and the players – and the fans, as well, who are more or less what it’s all about.”

Here, the CIO neither competes with the CDO nor assumes an elevated position in the process of digital transformation. Implementing new databases like SAP HANA or Hadoop, and leveraging sensor data in ways that are both technically and commercially viable: these are the tasks CIOs will face going forward.

“There are some fantastic jobs out there,” Brenner affirms.

Want more insight on managing digital transformation? See Three Keys To Winning In A World Of Disruption.




Digitalist Magazine

Generating HTML from SQL Server Queries

You can produce HTML from SQL because SQL Server has built-in support for outputting XML, and HTML is best understood as a slightly odd dialect of XML that imparts meaning to predefined tags. There are plenty of edge cases where an HTML structure is the most obvious way of communicating tables, lists and directories. Where data is hierarchical, it can make even more sense. William Brewer gives a simple introduction to a few HTML-output techniques.

Can you produce HTML from SQL? Yes, very easily. Would you ever want to? I certainly have had to. The principle is very simple: HTML is really just a slightly odd dialect of XML that imparts meaning to predefined tags, and SQL Server has built-in ways of outputting a wide variety of XML. Although I have, in the past, had to output entire websites from SQL, the most natural use is to produce HTML structures such as tables, lists and directories.

HTML5 can generally be worked on in SQL as if it were an XML fragment. XML, of course, has no predefined tags and is extensible, whereas HTML is designed to facilitate the rendering and display of data. By custom, it has become more forgiving than XML, but in general, HTML5 is based on XML.

Generating Tables from SQL expressions.

In HTML5, tables are best kept simple, but should use the child elements and structures so that the web designer has full control over the appearance of the table. CSS3 allows you to specify sets of cells within a list of child elements. Individual TD tags, for example, within a table row (TR) can delimit table cells that can have individual styling, but the rendering of the table structure is kept quite separate from the data itself.

The table starts with an optional caption element, followed by zero or more colgroup elements, followed optionally by a thead element. This header is then followed optionally by a tfoot element, followed by either zero or more tbody elements or one or more tr elements, followed optionally by a tfoot element, but there can be only one tfoot element.

The HTML5 ‘template’ for tables

In SQL Server, one can create the XML for such a table with this type of query, which takes the form of a template with dummy data.
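The original listing did not survive in this copy; the following is a minimal sketch of the kind of template query the article describes, using nested FOR XML PATH subqueries and dummy data (the caption, names, and values here are illustrative assumptions, not the article's originals):

```sql
-- A sketch of an HTML-table template with dummy data. The empty ''
-- columns keep adjacent same-named elements (th, td) from being
-- concatenated into a single element by FOR XML PATH.
SELECT
  'Sales by city' AS caption,
  (SELECT 'City' AS th, '', 'Sales' AS th
     FOR XML PATH('tr'), TYPE) AS thead,
  (SELECT City AS td, '', Sales AS td
     FROM (VALUES ('Berlin', 120), ('London', 95)) AS v(City, Sales)
     FOR XML PATH('tr'), TYPE) AS tbody
FOR XML PATH('table');
```

The result is a single table element with caption, thead, and tbody children, whose appearance can then be controlled entirely through CSS.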

Which produces (after formatting it nicely) this

So, going to AdventureWorks, we can now produce a table that reports on the number of sales for each city, for the top thirty cities.

I’ve left out the tfoot row because I didn’t need it; likewise the colgroup. I use tfoot mostly for aggregate lines, but since you are limited to only one, at the end of the table, it is not ideal for anything other than a simple ‘bottom line’.

When this is placed within an HTML file, with suitable CSS, it can look something like this:


Generating directory lists from SQL expressions.

The HTML dl element is for rendering name-value groups such as dictionaries, indexes, definitions, questions and answers, and lexicons. The name-value group consists of one or more names (dt elements) followed by one or more values (dd elements). Within a single dl element, there should not be more than one dt element for each name.

We’ll take as an example an excerpt from the excellent SQL Server glossary
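The glossary query itself is missing from this copy; here is a minimal sketch of the technique, with a couple of invented entries standing in for the glossary excerpt:

```sql
-- Each term becomes a dt and its definition a dd; ROOT('dl') wraps
-- the whole name-value list in a single dl element. The entries here
-- are placeholders, not the original glossary excerpt.
SELECT Term AS dt, '', Definition AS dd
FROM (VALUES
  ('ACID', 'Atomicity, Consistency, Isolation, Durability.'),
  ('DDL',  'Data Definition Language.')
) AS g(Term, Definition)
FOR XML PATH(''), ROOT('dl');
```

As in the table example, the empty-string column prevents adjacent same-named elements from being merged.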

This produces a directory list which can be rendered as you wish


Generating hierarchical lists from SQL expressions.

HTML lists represent probably the most useful way of passing simple hierarchical data to an application. You can actually use directories (dl elements) to do this for lists of name-value pairs, and even tables for more complex data. Here is a simple example of a hierarchical list, generated from AdventureWorks. You’d want to use a recursive CTE for anything more complicated.
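The listing was lost in this copy; the following is a sketch of the sort of query that produces a two-level ul, assuming the standard AdventureWorks category tables:

```sql
-- Each product category becomes an li whose text node is the category
-- name; the unnamed, typed subquery nests a ul of its subcategories
-- inside that li.
SELECT c.Name AS [text()],
       (SELECT s.Name AS li
          FROM Production.ProductSubcategory AS s
         WHERE s.ProductCategoryID = c.ProductCategoryID
           FOR XML PATH(''), ROOT('ul'), TYPE)
  FROM Production.ProductCategory AS c
   FOR XML PATH('li'), ROOT('ul');
```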

…giving…


Conclusions

There are quite a few structures now in HTML5. Even the body tag has subordinate header, nav, section, article, aside, footer, details and summary tags. If you read the W3C Recommendation, it bristles with good suggestions for using markup to create structures. The pre tag can use child code, samp and kbd tags to create intelligent formatting. Data in SQL Server can easily generate this sort of structured HTML5 markup. This has uses in indexes, chapters, and glossaries, as well as the obvious generation of table-based reports. There is quite a lot of mileage in creating HTML from SQL Server queries.


SQL – Simple Talk

Plotting a general ZigZag curve with possible threshold value

I want to use a zigzag curve to describe the trend of some simple data. Here is the list:

lstPrices={4.36,4.32,4.2,4.2,4.22,4.12,4.28,4.29,4.29,4.31,4.25,4.35,4.59,4.68,4.61,4.59,5.05,4.95,5.09,5.11,4.99,4.96,5.11,5.37,5.6,5.38,5.42,5.36,4.9,4.92,4.98,4.89,4.99,4.8,4.79,4.62,4.65,4.7,4.68,4.7,4.81,4.84,4.77,4.85,4.78,4.69,4.71,4.66,4.69,4.78,4.78,4.81,4.85,4.78,5.1,5.29,5.19,5.28,5.22,5.18,5.07,5.08,5.09,5.07,5.1,5.05,5.05,5.13,5.1,5.09,5.21,5.24,5.26,5.35,5.19,5.24,5.09,5.18,5.19,5.18,5.13,5.15,5.06,5.09,5.08,5.01,4.99,4.99,4.94,4.98,4.92,4.87,4.91,4.91,4.92,4.95,4.9,4.93,4.99,5.04,4.98,5.17,5.07,5.08,5.14,5.17,5.08,5.53,5.57,5.49,5.47,5.64,5.48,5.47,5.31,5.36,5.35,5.31,5.37,5.35};

and I give new definitions of FindPeaks and the related functions:

JFindPeaks[list_?ListQ] := MapAt[Round, FindPeaks[list] // N, {All, 1}]
JFindValleys[list_?ListQ] := Module[{x, y}, Map[({x, y} = #; {x, -y}) &, JFindPeaks[-list]]]
JFindExtremes[list_?ListQ] := Sort[JFindPeaks[list]~Join~JFindValleys[list]]

then some lists are computed as

peaks = JFindPeaks[lstPrices];
valls = JFindValleys[lstPrices];
extrs = JFindExtremes[lstPrices];

and two plots too,

p1 = ListLinePlot[lstPrices,
   Epilog -> {
     {Red, PointSize[0.015], Point[peaks]},
     {Blue, PointSize[0.015], Point[valls]}},
   PlotStyle -> Directive[Black, Dotted]
   ];
p2 = Graphics@Line@extrs;

Finally, the target plot comes out.

Show[p1, p2,
 AspectRatio -> 1/GoldenRatio,
 Frame -> True,
 GridLines -> Automatic,
 GridLinesStyle -> Directive[Gray, Dotted],
 ImageSize -> Large
 ]

It’s like this,


but what I most want to get is something like the following, where the minor sub-peaks and sub-valleys are eliminated from the plot.


So how can I realize this? Maybe a threshold value is necessary. Thanks!
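(The posted answer is not reproduced in this copy.) One possible approach, sketched below, is a greedy filter that walks the extremes and keeps a point only when its vertical distance from the last kept point exceeds a threshold; JFilterExtremes and the threshold 0.15 are my own invention, not from the question:

```mathematica
(* Greedy threshold filter: keep an extreme only if it differs from the
   last kept extreme by at least th in the y direction. *)
JFilterExtremes[extrs_?ListQ, th_?NumericQ] :=
 Module[{kept = {First[extrs]}},
  Do[
   If[Abs[pt[[2]] - Last[kept][[2]]] >= th,
    AppendTo[kept, pt]],
   {pt, Rest[extrs]}];
  kept]

filtered = JFilterExtremes[extrs, 0.15];
Show[p1, Graphics@Line@filtered, Frame -> True, ImageSize -> Large]
```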

1 Answer


Recent Questions – Mathematica Stack Exchange

Unable to restore a backup – Msg 3241

I worked on an interesting issue today where a user couldn’t restore a backup.   Here is what this customer did:

  1. backed up a database from an on-premises server (2008 R2)
  2. copied the file to an Azure VM
  3. tried to restore the backup on the Azure VM (2008 R2 with exact same build#)

But he got the following error:

Msg 3241, Level 16, State 0, Line 4
The media family on device ‘c:\temp\test.bak’ is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 4
RESTORE HEADERONLY is terminating abnormally.

We verified that he could restore the same backup on the local machine (on-premises). Initially I thought the file must have been corrupted during transfer. We used a different method to transfer the file and also zipped it, but the behavior was the same. When we backed up a database from the same Azure VM and tried to restore it, the restore was successful.

We were at the point where I thought there might be a bug, and I was planning to get the backup in house to reproduce the problem, until the customer told me they were using a tool called “Microsoft SQL Server Backup to Microsoft Azure Tool”. This tool is only necessary for SQL Server 2008 R2 and below, because SQL Server 2012 and above have built-in functionality to back up to and restore from Azure blob storage. I suggested we stop that service to see what would happen. After stopping the service, the restore worked perfectly fine; after restarting it, the same error returned.


After a little research, I found out more about this tool. The backup tool is basically a filter driver that watches files with the extensions you have configured and intercepts them when SQL Server tries to access them.

  1. when SQL Server performs a backup, the tool redirects the file to Azure blob storage and leaves a small stub file on the local computer
  2. when SQL Server performs a restore, the tool tries to retrieve the same file from Azure blob storage and hand it to SQL Server

Now you can see the problem. In this customer’s scenario, the *.bak file was transferred from the on-premises server directly to the Azure VM, so there was no corresponding file in Azure blob storage. Since the tool couldn’t find the file to provide its content to SQL Server, the restore failed with the error above.

Here are some ways to fix the issue.

The cleanest way is to configure the rule to watch files only in specific paths instead of watching the whole computer. By default, every path on the local machine is checked, but you can add multiple specific paths to watch. If you do this, you can simply put the backup you copied from the remote machine (which is not part of any backup watched by the tool on this server) into an unwatched folder and perform the restore.


Alternatively, you can stop this tool when you restore a backup that you copied from a different machine.

You can also use a different file extension for the file you copied from the remote machine and try to restore from that.
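For instance, the third workaround might look like this (the database name and file paths here are illustrative, and xp_cmdshell must be enabled for the copy step):

```sql
-- Copy the backup under an extension the filter driver is not watching,
-- then restore from the renamed copy (names are hypothetical).
EXEC master..xp_cmdshell 'copy c:\temp\test.bak c:\temp\test.bakcopy';

RESTORE DATABASE TestDb
  FROM DISK = N'c:\temp\test.bakcopy'
  WITH REPLACE;
```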

Jack Li | Senior Escalation Engineer | Microsoft SQL Server



CSS SQL Server Engineers

Using Zeppelin Notebooks with your Oracle Data Warehouse – Part 1


Over the past couple of weeks I have been looking at one of the Apache open source projects, called Zeppelin. It’s a new style of application called a “notebook”, which typically runs within your browser. The idea behind notebook-style applications like Zeppelin is to deliver an ad hoc data-discovery tool – at least that is how I see it being used. Like most notebook-style applications, Zeppelin provides a number of useful data-discovery features such as:

  • a simple way to ingest data
  • access to languages that help with data discovery and data analytics
  • some basic data visualization tools
  • a set of collaboration services for sharing notebooks (collections of reports)

Zeppelin is essentially a scripting environment for running ordinary SQL statements along with many other languages such as Spark, Python, Hive, and R. These are controlled by a feature called “interpreters”, and there is a list of the latest interpreters available here.

A good example of a notebook-type of application is R Studio which many of you will be familiar with because we typically use it when demonstrating the R capabilities within Oracle Advanced Analytics. However, R Studio is primarily aimed at data scientists whilst Apache Zeppelin is aimed at other types of report developers and business users although it does have a lot of features that data scientists will find useful.

Use Cases

What’s a good use case for Zeppelin? Well, what I like about Zeppelin is that you can quickly and easily create a notebook, or workflow, that downloads a log file from a URL, reformats the data in the file, and then displays the resulting data set as a graph or table.

Nothing really earth-shattering in that type of workflow, except that Zeppelin is easy to install, easy to set up (once you understand its architecture), and seems to make it easy to share your results. Here’s the really simple workflow described above, which I built to load data from a file, create an external table over the data file, and then run a report:


This little example shows how notebooks differ from traditional BI tools. Each of the headings in the notebook shown above (Download data from web url, Create directory to data file location, Drop existing staging table, and so on) is a separate paragraph within the “Data Access Tutorial” notebook.

The real power is that each paragraph can use a different language, such as SQL, Java, shell scripting, or Python. In the workbook shown above I start by running a shell script that pulls a data file from a remote server. Then, using a SQL paragraph, I create a directory object to access the data file. The next SQL paragraph drops my existing staging table, and the subsequent SQL paragraph creates the external table over the data file. The final SQL paragraph looks like this:

%osql
select * from ext_bank_data 

where %osql tells me the language, or interpreter, I am using; in this case it is SQL, connecting to a specific schema in my database.
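To make the earlier paragraphs concrete, a hypothetical reconstruction of the notebook might look like the following (the URL, directory path, and column list are invented for illustration; %sh is Zeppelin's shell interpreter and %osql the Oracle SQL interpreter used above):

```sql
-- Paragraph 1 (%sh): pull the data file onto the database server.
--   wget -q http://myserver.example.com/bank_data.csv -O /u01/stage/bank_data.csv

-- Paragraph 2 (%osql): create a directory object over the staging location.
CREATE OR REPLACE DIRECTORY stage_dir AS '/u01/stage';

-- Paragraph 3 (%osql): drop the old staging table, then create the
-- external table over the data file (columns are hypothetical).
DROP TABLE ext_bank_data;

CREATE TABLE ext_bank_data (
  cust_id  NUMBER,
  balance  NUMBER,
  region   VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('bank_data.csv')
);
```

The final %osql paragraph then simply selects from ext_bank_data, as shown above.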

Building Dashboards 

You can even build relatively simple briefing books containing data from different data sets and even different data sources (Zeppelin supports an ever growing number of data sources) – in this case I connected Zeppelin to two different schemas in two different PDBs:


What’s really nice is that I can even view these notebooks on my smartphone (iPhone) as you can see below. The same notebook shown above appears on my iPhone screen in a vertical layout style to make best use of the screen real estate:


I am really liking Apache Zeppelin because it’s so simple to set up (I have various versions running on Mac OS X and Oracle Linux) and start using. It has just enough features to be very useful without being overwhelming. I like the fact that I can create notebooks, or reports, using a range of different languages and show data from a range of different schemas/PDBs/databases alongside each other. It is also relatively easy to share the results. And I can open my notebooks (reports) on my iPhone.

Visualizations

There is a limited set of visualizations available within the notebook (report) editor when you are using a SQL-based interpreter (connector). Essentially you have a basic, scrollable table and five types of graph to choose from for viewing your data. You can interactively change the layout of a graph by clicking on the “settings” link, but there are no formatting controls to alter the x or y labels – if you look carefully at the right-hand area graph in the first screenshot you will probably spot that the time value labels on the x-axis overlap each other.

Quick Summary

Now, this may be obvious but I would not call Zeppelin a data integration tool nor a BI tool for reasons that will become clear during the next set of blog posts.

Having said that, overall, Zeppelin is a very exciting and clever product. It is relatively easy to set up connections to your Oracle Database, the scripting framework is very powerful, and the visualization features are good enough. It’s a new type of application that is just about flexible enough for data scientists, power users, and report writers.

What’s next?

In my next series of blog posts, which I am aiming to write over the next couple of weeks, I will explain how to download and install Apache Zeppelin, how to set up connections to an Oracle Database, and how to use some of the scripting features to build reports similar to the ones above. If you are comfortable with writing your own shell scripts, SQL scripts, and markup for formatting text, then Zeppelin is a very flexible tool.

If you are already using Zeppelin against your Oracle Database and would like to share your experiences that would be great – please use the comments feature below or feel free to send me an email: keith.laker@oracle.com.


The Data Warehouse Insider

Machine Learning: What’s In It For Finance?


The world has changed. We’ve seen massive disruption on multiple fronts – business model disruption, cybercrime, new devices, and an app-centric world. Powerful networks are crucial to success in a mobile-first, cloud-first world that’s putting an ever-increasing amount of data at our fingertips. With the Internet of Things (IoT) we can connect instrumented devices worldwide and use new data to transform business models and products.

Disruption

Disruption comes in many forms. It’s not big or scary, it’s just another way of describing change and evolution. In the ’80s it manifested as call centers. Then, as the digital landscape began to take shape, it was the Internet, cloud computing … now it’s artificial intelligence (AI).

Digital transformation

Digital transformation means different things to different companies, but in the end I believe it will be a simple salvation that will carry us forward. If you Bing “digital transformation” (note: I worked for Microsoft for 15 years before experiencing digital transformation through the lens of the outside world), it says it is “the profound and accelerating transformation of business activities, processes, competencies, and models to fully leverage the changes and opportunities of digital technologies and their impact across society in a strategic and prioritized way.” (I’ll simplify that; keep reading.)

A lot of today’s digital transformation ideas are ripped straight from the scripts of sci-fi entertainment, whether you’re talking about the robotic assistants of 2001: A Space Odyssey or artificial intelligence in the Star Trek series. We’re forecasting our future with our imagination. So, let’s move on to why digital transformation is needed in our current world.

Business challenges

The basic challenges facing businesses today are the same as they’ve always been: engaging customers, empowering employees, optimizing operations, and reinventing the value offered to customers. However, what has changed is the unique convergence of three things:

  1. Increasing volumes of data, particularly driven by the digitization of “things” and heightened individual mobility and collaboration
  2. Advancements in data analytics and intelligence to draw actionable insight from the data
  3. Ubiquity of cloud computing, which puts this disruptive power in the hands of organizations of all sizes, increasing the pace of innovation and competition

Digital transformation in plain English

Hernan Marino, senior vice president, marketing, & global chief operating officer at SAP, explains digital transformation by giving specific industry examples to make it simpler.

Automobile manufacturing used to be the work of assembly lines, people working side-by-side literally piecing together, painting, and churning out vehicles. It transitioned to automation, reducing costs and marginalizing human error. That was a business transformation. Now, we are seeing companies like Tesla and BMW incorporate technology into their vehicles that essentially make them computers on wheels. Cameras. Sensors. GPS. Self-driving vehicles. Syncing your smartphone with your car.

The point here is that companies need to make the upfront investments in infrastructure to take advantage of digital transformation, and that upfront investment will pay dividends in the long run as technological innovations abound. It is our job to collaboratively work with our customers to understand what infrastructure changes need to be made to achieve and take advantage of digital transformation.

Marino gives electric companies as another example. Remember a few years ago, when you used to go outside your house and see the little power meter spinning as it recorded the kilowatt-hours you used? Every month, the meter reader would show up in your yard, record your usage, and report back to the electric company.

Most electric companies then made a business transformation and installed smart meters – eliminating the cost of the meter reader and integrating most homes into a smart grid that gave customers access to their real-time information. Now, as renewable energy evolves and integrates more fully into our lives, these same electric companies that switched over to smart meters are going to make additional investments to be able to analyze the data and make more informed decisions that will benefit both the company and its customers.

That is digital transformation. Obviously, banks, healthcare, entertainment, trucking, and e-commerce all have different needs than auto manufacturers and electric companies. It is up to us – marketers and account managers promoting digital transformation – to identify those needs and help our clients make the digital transformation as seamlessly as possible.

Digital transformation is more than just a fancy buzzword; it is our present and our future. It is re-envisioning existing business models and embracing a different way of bringing together people, data, and processes to create more for customers through systems of intelligence.

Learn more about what it means to be a digital business.



Digitalist Magazine

Teradata Delivers Industry-First License Portability Designed for the Hybrid Cloud

Simplified license tiers with bundled features, subscription-based licenses, and license portability help hybrid cloud quickly adapt to evolving business needs

Teradata (NYSE: TDC), the leading data and analytics company, today introduced innovative database license flexibility across hybrid cloud deployments, enabled through a consistent and simplified licensing model. Teradata’s new licensing model delivers:

  • Portability for deployment flexibility 
  • Subscription-based licenses
  • Simplified tiers with bundled features

With portable database licenses, Teradata customers now have the flexibility to choose, shift, expand, and restructure their hybrid cloud environment by moving licenses between deployment options as their business needs change. This new software licensing model is the first in the hybrid cloud market to feature portability — a shift away from cloud lock-in or siloed on-premises deployments.

Until now, hybrid cloud vendors have offered complex, inconsistent licensing models across deployment options that make it difficult for customers to select a solution that fits all needs. Teradata is changing the game with licensing flexibility and portability to ensure simplicity and consistency in support of agile, fast-growing businesses.

“The Teradata Database continues to be recognized as the leading data management solution for analytics in every performance parameter, and today we can also say it comes with the very best value proposition,” said John Dinning, Executive Vice President and Chief Business Officer, Teradata. “Not only is the database license portable across the hybrid cloud options, but so are workloads, enabled by a common code base in all deployments. This flexibility is a first in our industry and means that data models, applications, and development efforts can be migrated or transferred unchanged across any ecosystem.

“For example, a company may first develop their analytic solution on a Teradata IntelliFlex™ on-premises system and then seamlessly port the solution over to Teradata IntelliCloud™. This will allow companies more options to develop and deploy the Teradata Database without having to worry about scaling their solution as their business and analytic needs grow.”



Teradata IntelliCloud™ is the next-generation secure managed cloud offering that provides data and analytic software as a service. It is available with new deployment choices including Teradata IntelliFlex™, the company’s flagship enterprise data warehouse platform that Teradata will deploy and manage in its own data centers, and global public cloud infrastructure from Amazon Web Services (AWS) and later, from Microsoft Azure.

In order to provide simplicity and portability across deployment options, Teradata licensing is based on a consistent metric. This metric is unique in that it takes into consideration not only the number of CPU cores available, but also how much data is fed to the CPU. This benefits customers by adjusting the licensing cost according to the performance potential of the system on which they are running. By using the metric calculation, Teradata can offer equivalent license portability across on-premises, public, private, and Teradata IntelliCloud configurations.

“By pairing the highest quality hybrid cloud solutions available with the most convenient usage model, Teradata is creating a blend of options for its customers that address the realities of rapidly evolving analytics capabilities and the emergence of new business requirements and models,” said Jim Curtis, Senior Analyst, Data Platforms and Analytics, 451 Research. “This is truly an example of fast-forward thinking; a win-win for Teradata as well as for aggressive companies that thrive and compete on business agility.”

Subscription-based licenses deliver on customer requests for lower up-front costs as well as smooth and consistent OPEX spending, making it easier for customers to budget and predict spending patterns.

These new subscription-based licenses come in four simplified tiers, with newly bundled features, designed to meet customer requirements ranging from free database development to high-concurrency, mixed-workload analytical systems.

The four license tiers are:

  • Developer — This free tier is designed specifically for customers that are developing new applications in a non-production environment. The Developer tier is available in software-only versions on public cloud or as VMware to run on non-Teradata hardware.
  • Base — This plan is designed for low concurrency, entry-level data warehouses. It is available in the cloud and on-premises. 
  • Advanced — The Advanced tier supports high-concurrency, mixed-workload production environments. It includes powerful Teradata Integrated Workload Management and Teradata Intelligent Memory features. This option is available in the cloud and on-premises. 
  • Enterprise — This top-tier plan includes a more robust set of workload management features with Teradata Active System Management and Teradata Intelligent Memory. The Enterprise tier is available in the cloud and on-premises.


All tiers come with the same version of the Teradata Database software, which enables the easy movement of workloads across tiers. All tiers also bundle high-value database features into the license, making it easy for customers to incorporate cutting-edge technologies and build sophisticated analytical environments. These features include Columnar, Temporal, Secure Zones, and Row-Level Security capabilities, which customers can leverage for improved performance, enhanced time-based analytics, and more robust security and auditability, at no additional cost.

Teradata’s new license model is available now.



Teradata United States

From Foe To Friend: How AI Can Boost Purpose

“Purpose” is the new star in the economic cosmos. But what impact will this new orientation have on workers and on software development?

Just imagine it: the fifth day of your work week entirely at your disposal. You could start working on the project that you’ve always dreamed of. You could get involved in social initiatives while drawing on support and resources from your employer. Or perhaps you’d like to spend time with friends and family, or simply be all by yourself.

It sounds too good to be true, right? Not at IXDS. The Berlin-based design and innovation agency believes in a 32-hour week and envisions total employee flexibility, as well as a company organization free from hierarchies. And it’s proven to be quite a success. For more than 10 years, IXDS has been working with its customers – including startups and DAX companies – on future scenarios, innovative products, and novel services.

Nancy Birkhölzer, CEO at IXDS, recently participated in the first SAP Research Round Table hosted in the Data Space in Berlin, alongside other representatives from industry, academia, politics, and associations. The panel discussed how digitalization is shaping the social, economic, and ecological framework, and how this could impact people, companies, and the world of work. The panel also discussed what software providers like SAP need to do to not only meet the challenges of this dynamic environment, but to actively shape it.

With these challenges in mind, the organizational team under Norbert Koppenhagen, head of research at the SAP Innovation Center Network, selected the new venue, where SAP collaborates with startups and maintains its Berlin network, and paired it with a fresh event and design concept. The array of topics was as vibrant as the participants present. Discussions covered the new leadership culture, alternative organizational forms, people-centered service systems, solopreneurship, and purpose activation in companies. Why purpose activation? At the end of the day, companies all have one purpose: to generate revenue, right?

[Purpose infographic]

No place for empty words without actions

For Markus Heinen, chief innovation officer at EY and keynote speaker at the round table, it’s certainly not all about revenue. In the future, companies that are aware of their social impact will make all the difference: “Companies need to follow a purpose. Purpose is an aspirational reason for being that is grounded in humanity and inspires action,” he explains.

At first, it may sound rather philosophical, having little to do with economic success. Yet social responsibility and having a credible brand promise have become key differentiating factors for companies. Thanks to Big Data, real-time reporting, and digital discussion platforms, people are constantly connected and up-to-date.

“There is no place for empty words without actions. Companies without a purpose will fail to keep pace. Only when all performance factors are rigorously targeted towards that purpose is the concept able to bring genuine added value,” affirms Markus.

But what does this all mean for employees? At IXDS, for instance, this is how the company conceives the future of work: Success is measured by how the results contribute to the company’s purpose. For each project, the team decides on a project purpose, which is both derived from the overall purpose and specified for the project-related deliverables. How the employees wish to achieve this is left up to them, as is how long they wish to work on it.

“We don’t want to assess our employees based on the number of hours they work anymore. We also don’t want to pay them based on their position. Everyone should assume a role in the project based on where they think they can make the best contribution. In the future, we’d like to assess colleagues based on their impact, their value contribution,” Birkhölzer explains.

Dedication to the future of work

This example shows how it is increasingly important for companies to identify and manage non-monetary assets, such as knowledge, innovativeness, teamwork, and value-oriented conduct. Today, these factors account for 80% of a company’s value. Enterprise software must therefore be able to map not just financial metrics, but also intangible assets. It is precisely these issues that Günter Pecht-Seibert and his team from SAP Innovation Center Network are looking to focus on – in the new “Future of Work” focus area.

Günter explains his team’s ambitions as follows: “We want to develop cloud-based solutions that improve employee engagement and well-being, increase companies’ brand value, and accelerate genuine knowledge work. As a first step, we plan to help companies activate their purpose. Our long-term goal is to support companies who have not yet ventured this far, and help them transform into a purpose-led organization.”

The first two solutions, Knowledge Workspace and People Insights, are planned for launch in 2017. A complete software suite will follow later.

The event was well received by all participants, triggered many constructive discussions, and will be followed by another research round table session. It also yielded a set of promising research topics that the Research & Innovation team from the SAP Innovation Center Network would like to tackle and refine in additional workshops. The overarching goal: to help the world run better and improve people’s lives – yes, SAP has also defined its purpose.

Five things to know about the future of work

The experts who participated at the round table discussed many interesting topics. Here are the five key takeaways:

  1. Companies should dedicate themselves to a purpose that can be globally integrated in the company, and pursued consistently.
  2. The health and well-being of employees is the way forward to a company’s success. Recognizing, measuring, and managing these factors is a key challenge for companies.
  3. Employees must have the opportunity for lifelong learning that corresponds to their interests, is useful, is logically orchestrated, and is independent of their current employer.
  4. In the future, organizational structures will be based less on hierarchies and more on decentralized networks that push beyond company boundaries and create added value.
  5. This means that companies will need software tools that can be adapted as required. The applications of the future will enable flexible problem solving, make collective knowledge accessible in organizations, and allow companies to combine data from various internal and external systems.

For more insight on fostering a culture of purpose at your business, see 5 Ways To Become A Better Purpose-Driven Leader.



Digitalist Magazine

Unexpected behavior of Variables


I apologize in advance if this is a duplicate. Variables behaves strangely when it encounters symbolic powers:

  w = s1^(n + 2) s2;
  Variables[w]
  (*{s1, s1^n, s2}*)

I’d have expected {s1, s2, n}.

On the other hand,

  w = s1^2 s2;
  Variables[w]
  (*{s1, s2}*)

yields what one expects. I wonder if there is a way to get the expected result in the first example.
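As a point of comparison (a Python/SymPy sketch, not a Mathematica answer): SymPy’s `free_symbols` attribute returns exactly the set the asker expected, because it collects the atomic symbols of an expression rather than treating a non-polynomial power like `s1^n` as a single variable.

```python
# SymPy collects atomic symbols, so n appears in the result
# instead of the compound term s1**n.
import sympy as sp

s1, s2, n = sp.symbols("s1 s2 n")
w = s1**(n + 2) * s2
print(sorted(str(v) for v in w.free_symbols))  # ['n', 's1', 's2']
```

The difference is that Mathematica’s `Variables` is documented for polynomial expressions, where `s1^n` (with symbolic `n`) is not a polynomial term and so gets treated as an opaque variable.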


Recent Questions – Mathematica Stack Exchange

Cumulative Update #5 for SQL Server 2014 SP2

The 5th cumulative update release for SQL Server 2014 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.

To learn more about the release or servicing model, please visit:


SQL Server Release Services