Tag Archives: Learning

You don’t code? Do machine learning straight from Microsoft Excel

December 31, 2020   Big Data


Machine learning and deep learning have become an important part of many applications we use every day. There are few domains that the fast expansion of machine learning hasn’t touched. Many businesses have thrived by developing the right strategy to integrate machine learning algorithms into their operations and processes. Others have lost ground to competitors after ignoring the undeniable advances in artificial intelligence.

But mastering machine learning is a difficult process. You need to start with a solid knowledge of linear algebra and calculus, master a programming language such as Python, and become proficient with data science and machine learning libraries such as Numpy, Scikit-learn, TensorFlow, and PyTorch.

And if you want to create machine learning systems that integrate and scale, you’ll have to learn cloud platforms such as Amazon AWS, Microsoft Azure, and Google Cloud.

Naturally, not everyone needs to become a machine learning engineer. But almost everyone who runs a business or organization that systematically collects and processes data can benefit from some knowledge of data science and machine learning. Fortunately, there are several courses that provide a high-level overview of machine learning and deep learning without going too deep into math and coding.

But in my experience, a good understanding of data science and machine learning requires some hands-on experience with algorithms. In this regard, a very valuable and often-overlooked tool is Microsoft Excel.

To most people, MS Excel is a spreadsheet application that stores data in tabular format and performs very basic mathematical operations. But in reality, Excel is a powerful computation tool that can solve complicated problems. Excel also has many features that allow you to create machine learning models directly in your workbooks.

While I’ve been using Excel’s mathematical tools for years, I didn’t come to appreciate its use for learning and applying data science and machine learning until I picked up Learn Data Mining Through Excel: A Step-by-Step Approach for Understanding Machine Learning Methods by Hong Zhou.

Learn Data Mining Through Excel takes you through the basics of machine learning step by step and shows how you can implement many algorithms using basic Excel functions and a few of the application’s advanced tools.

While Excel will in no way replace Python machine learning, it is a great window to learn the basics of AI and solve many basic problems without writing a line of code.

Linear regression machine learning with Excel

Linear regression is a simple machine learning algorithm that has many uses for analyzing data and predicting outcomes. Linear regression is especially useful when your data is neatly arranged in tabular format. Excel has several features that enable you to create regression models from tabular data in your spreadsheets.

One of the most intuitive is the data chart tool, which is a powerful data visualization feature. For instance, the scatter plot chart displays the values of your data on a cartesian plane. But in addition to showing the distribution of your data, Excel’s chart tool can create a machine learning model that can predict the changes in the values of your data. The feature, called Trendline, creates a regression model from your data. You can set the trendline to one of several regression algorithms, including linear, polynomial, logarithmic, and exponential. You can also configure the chart to display the parameters of your machine learning model, which you can use to predict the outcome of new observations.

You can add several trendlines to the same chart. This makes it easy to quickly test and compare the performance of different machine learning models on your data.

Above: Excel’s Trendline feature can create regression models from your data.

In addition to exploring the chart tool, Learn Data Mining Through Excel takes you through several other procedures that can help develop more advanced regression models. These include formulas such as LINEST and LINREG, which calculate the parameters of your machine learning models based on your training data.

The author also takes you through the step-by-step creation of linear regression models using Excel’s basic formulas such as SUM and SUMPRODUCT. This is a recurring theme in the book: You’ll see the mathematical formula of a machine learning model, learn the basic reasoning behind it, and create it step by step by combining values and formulas in several cells and cell arrays.
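
If you want to see what those worksheet formulas are actually computing, here is a minimal Python sketch (my own illustration, not code from the book) of simple linear regression built from the same raw sums that SUM and SUMPRODUCT provide; the data are made up.

```python
# Simple linear regression from sums, mirroring what SUM/SUMPRODUCT
# (and ultimately LINEST) compute in a worksheet. Example data is made up.
xs = [1, 2, 3, 4, 5]             # explanatory variable (e.g., ad spend)
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # observed outcome (e.g., sales)

n = len(xs)
sum_x  = sum(xs)                               # =SUM(x_range)
sum_y  = sum(ys)                               # =SUM(y_range)
sum_xy = sum(x * y for x, y in zip(xs, ys))    # =SUMPRODUCT(x_range, y_range)
sum_xx = sum(x * x for x in xs)                # =SUMPRODUCT(x_range, x_range)

# Closed-form least-squares estimates of slope and intercept
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

print(f"y ~ {slope:.3f} * x + {intercept:.3f}")
# Predict the outcome of a new observation, as a chart trendline would
print("prediction for x=6:", slope * 6 + intercept)
```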

While this might not be the most efficient way to do production-level data science work, it is certainly a very good way to learn the workings of machine learning algorithms.

Other machine learning algorithms with Excel

Beyond regression models, you can use Excel for other machine learning algorithms. Learn Data Mining Through Excel provides a rich roster of supervised and unsupervised machine learning algorithms, including k-means clustering, k-nearest neighbor, naive Bayes classification, and decision trees.

The process can get a bit convoluted at times, but if you stay on track, the logic will easily fall into place. For instance, in the k-means clustering chapter, you’ll get to use a vast array of Excel formulas and features (INDEX, IF, AVERAGEIF, ADDRESS, and many others) across several worksheets to calculate cluster centers and refine them. This is not a very efficient way to do clustering, but you’ll be able to track and study your clusters as they are refined in each consecutive sheet. From an educational standpoint, the experience is very different from programming books, where you hand your data points to a machine learning library function and it outputs the clusters and their properties.
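
To make that spreadsheet logic concrete, here is one refinement loop of k-means as a short Python sketch (my own illustration, not the book's worksheets); each pass of the loop corresponds to what a consecutive sheet would show. The points and starting centers are made up.

```python
import numpy as np

# Toy 2D data points and two starting cluster centers (made-up values)
points  = np.array([[1.0, 1.2], [0.8, 0.9], [5.1, 4.8], [4.9, 5.2], [5.0, 4.6]])
centers = np.array([[0.0, 0.0], [6.0, 6.0]])

for step in range(5):  # each iteration = one new worksheet in the book's approach
    # distance from every point to every center, then nearest-center assignment
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # recompute each center as the mean of the points assigned to it
    centers = np.array([points[labels == k].mean(axis=0) for k in range(len(centers))])
    print(f"step {step}: centers =\n{centers}")
```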

Above: When doing k-means clustering on Excel, you can follow the refinement of your clusters on consecutive sheets.

In the decision tree chapter, you will go through the process of calculating entropy and selecting features for each branch of your machine learning model. Again, the process is slow and manual, but seeing under the hood of the machine learning algorithm is a rewarding experience.
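
The entropy bookkeeping behind that feature selection fits in a few lines of Python. This is a hedged sketch of the standard calculation, not the book's worksheet; the tiny training table is invented.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

# Made-up training rows: (weather, plays_tennis)
rows = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
        ("rain", "yes"), ("rain", "yes"), ("rain", "no")]

labels = [y for _, y in rows]
base = entropy(labels)

# Information gain of splitting on the weather feature
values = set(x for x, _ in rows)
remainder = sum(
    len([y for x, y in rows if x == v]) / len(rows)
    * entropy([y for x, y in rows if x == v])
    for v in values
)
print("entropy before split:", round(base, 3))
print("information gain for 'weather':", round(base - remainder, 3))
```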

In many of the book’s chapters, you’ll use the Solver tool to minimize your loss function. This is where you’ll see the limits of Excel, because even a simple model with a dozen parameters can slow your computer down to a crawl, especially if your data sample is several hundred rows in size. But the Solver is an especially powerful tool when you want to fine-tune the parameters of your machine learning model.
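
Outside Excel, the Solver's role is usually played by a numerical optimizer. As a rough analogue (my own sketch, with invented data), SciPy's minimize can tune model parameters to drive down a loss function in the same spirit.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data: hours studied vs. exam score
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([52, 57, 61, 68, 71], dtype=float)

def loss(params):
    """Mean squared error of a simple linear model; this is the cell Solver would minimize."""
    slope, intercept = params
    pred = slope * x + intercept
    return np.mean((pred - y) ** 2)

result = minimize(loss, x0=[0.0, 0.0])  # start from arbitrary parameters
print("fitted slope, intercept:", result.x)
print("final loss:", result.fun)
```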

Above: Excel’s Solver tool fine-tunes the parameters of your model and minimizes loss functions.

Deep learning and natural language processing with Excel

Learn Data Mining Through Excel shows that Excel can even express advanced machine learning algorithms. There’s a chapter that delves into the meticulous creation of deep learning models. First, you’ll create a single layer artificial neural network with less than a dozen parameters. Then you’ll expand on the concept to create a deep learning model with hidden layers. The computation is very slow and inefficient, but it works, and the components are the same: cell values, formulas, and the powerful Solver tool.
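
For comparison, the same kind of model the chapter assembles in cells, a one-hidden-layer network whose weights the Solver tunes, can be expressed in a short Python sketch with SciPy's optimizer standing in for the Solver. The XOR data, network size, and random restarts are my own choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# XOR: a small problem a single-layer model can't solve, but one hidden layer can
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    """2-2-1 network: unpack the 9 parameters, run the forward pass."""
    W1 = params[:4].reshape(2, 2)  # hidden-layer weights
    b1 = params[4:6]               # hidden-layer biases
    W2 = params[6:8]               # output weights
    b2 = params[8]                 # output bias
    hidden = sigmoid(X @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

def loss(params):
    return np.mean((forward(params, X) - y) ** 2)  # the cell Solver would minimize

# A few random restarts, since a tiny network can get stuck in local minima
best = None
for seed in range(10):
    rng = np.random.default_rng(seed)
    res = minimize(loss, x0=rng.normal(scale=2.0, size=9), method="BFGS")
    if best is None or res.fun < best.fun:
        best = res

print("loss:", round(best.fun, 4))
print("predictions:", np.round(forward(best.x, X), 2))
```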

Above: Deep learning with Microsoft Excel gives you a view under the hood of how deep neural networks operate.

In the last chapter, you’ll create a rudimentary natural language processing (NLP) application, using Excel to create a sentiment analysis machine learning model. You’ll use formulas to create a “bag of words” model, preprocess and tokenize hotel reviews, and classify them based on the density of positive and negative keywords. In the process you’ll learn quite a bit about how contemporary AI deals with language and how different it is from the way we humans process written and spoken language.
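
The keyword-density idea behind that chapter boils down to a few lines of logic. Here is a minimal Python sketch of the same approach (my own wording, with invented word lists and reviews), not the book's exact implementation.

```python
import re

POSITIVE = {"clean", "friendly", "great", "comfortable", "helpful"}
NEGATIVE = {"dirty", "rude", "noisy", "broken", "terrible"}

def classify(review):
    """Tokenize the review and score it by the density of positive vs. negative words."""
    tokens = re.findall(r"[a-z']+", review.lower())  # crude preprocessing/tokenization
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos == neg:
        return "neutral"
    return "positive" if pos > neg else "negative"

print(classify("Great location, friendly staff and a very clean room."))
print(classify("The room was dirty and the air conditioning was broken."))
```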

Excel as a machine learning tool

Whether you’re making C-level decisions at your company, working in human resources, or managing supply chains and manufacturing facilities, a basic knowledge of machine learning will be important if you will be working with data scientists and AI people. Likewise, if you’re a reporter covering AI news or a PR agency working on behalf of a company that uses machine learning, writing about the technology without knowing how it works is a bad idea (I will write a separate post about the many awful AI pitches I receive every day). In my opinion, Learn Data Mining Through Excel is a smooth and quick read that will help you gain that important knowledge.

Beyond learning the basics, Excel can be a powerful addition to your repertoire of machine learning tools. While it’s not good for dealing with big data sets and complicated algorithms, it can help with the visualization and analysis of smaller batches of data. The results you obtain from a quick round of data mining in Excel can provide pertinent insights for choosing the right direction and machine learning algorithm to tackle the problem at hand.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2020

OcéanIA treats climate change like a machine learning grand challenge

December 9, 2020   Big Data

Self-driving cars. Artificial general intelligence. Beating a human in a game of chess. Grand challenges are tasks that can seem like moonshots that, if achieved, will move the entire machine learning discipline forward. Now a team of researchers with the recently established OcéanIA is treating the study of the ocean and climate change as a machine learning grand challenge. The four-year project that brings together more than a dozen AI researchers and scientists shared some initial plans this week.

The OcéanIA project begins with a focus on the automated recognition of plankton species, many of which have not been documented. Next to trees and forests, plankton and the ocean processes they are part of are among the largest carbon-capturing mechanisms on Earth. Last year, the Intergovernmental Panel on Climate Change identified a correlation between climate change and the ocean’s ability to sequester carbon, produce oxygen, and support biodiversity. A study released in May found that plankton absorb twice as much carbon as scientists previously thought. A team of about 15 researchers is working on OcéanIA across machine learning and fields like biology, said Inria Chile Research Center director Nayat Sánchez-Pi.

“These crucial ecological services provided by plankton need to be better measured, monitored, and protected in order to maintain the ocean’s stability, to mitigate the various effects of climate change, and ensure the food security of population,” Sánchez-Pi said. “Oceans today we can say are the last unknown, and understanding the role of oceans in climate change is not only important but also a challenge for modern AI and applied ML.”

Sánchez-Pi was one of four keynote speakers at the Latinx in AI workshop Monday as part of the Neural Information Processing Systems (NeurIPS) conference. Affinity workshops at the conference include Black in AI, Jews in AI, Queer in AI, and Women in Machine Learning. For the first time this year, NeurIPS will host Indigenous in AI and Muslims in AI workshops.

Luis Martí and Sánchez-Pi are also lead authors of a paper detailing OcéanIA that was accepted for publication at the Tackling Climate Change workshop being held Friday, the first published work associated with the project. More than 90 research and proposal papers were accepted for publication at the climate change workshop.

Machine learning challenges presented by the need to study plankton and oceans range from working with small datasets and few-shot learning methods to transfer learning, the process of repurposing a model for new tasks.

Unsupervised and semi-supervised methods will be used to identify particular plankton species. There are an estimated 70,000 unknown plankton species in the ocean today. Explainability will be used to tell the difference between different species.

Specific challenges listed in the proposal paper include the creation of models that incorporate complex knowledge about plankton into ocean-climate models and the development of “a metabolic model including the main microbial oceanic compartments and couple it with physics,” as well as computer vision for identifying plankton from satellite images. Satellite imagery is a traditional method researchers use to understand plankton populations.

At the previous Tackling Climate Change workshop at NeurIPS, researchers like Google Brain cofounder Andrew Ng argued that making scientific progress toward solving climate change and progress toward machine learning grand challenges is a two-way street.

“I do think [for] the future of AI and ML, a great challenge is scientific discovery. Indeed, how to embed prior knowledge, scientific reasoning, and how to be able to deal with small data,” Institute for Computational Sustainability director Carla Gomes said during a panel discussion one year ago.

Last year at NeurIPS, Facebook chief AI scientist Yann LeCun talked about energy efficiency as another worthy challenge for AI researchers.

Above: Tara sampling method

Data to study plankton species will come courtesy of Tara Océans Foundation, which has undertaken 11 expeditions since 2005. The 12th Tara Océans expedition will focus on the study of the ocean ecosystem. It begins this month and continues through July 2022. The expedition will travel along the coast of Africa, Europe, and South America. Along the way, participants will collect samples at depths ranging from the surface of the sea to 1,000 meters deep.

More than 35 scientific institutions from the University of Sao Paolo in Brazil to the University of Cape Town in South Africa will participate in the study of samples and data collected by Tara Océans. An upcoming leg of the expedition will go through the Patagonia region of Chile.

AI Weekly: The state of machine learning in 2020

November 27, 2020   Big Data

It’s hard to believe, but a year in which the unprecedented seemed to happen every day is just weeks from being over. In AI circles, the end of the calendar year means the rollout of annual reports aimed at defining progress, impact, and areas for improvement.

The AI Index is due out in the coming weeks, as is CB Insights’ assessment of global AI startup activity, but two reports — both called The State of AI — have already been released.

Last week, McKinsey released its global survey on the state of AI, a report now in its third year. Interviews with executives and a survey of business respondents found a potential widening of the gap between businesses that apply AI and those that do not.

The survey reports that AI adoption is more common in tech and telecommunications than in other industries, followed by automotive and manufacturing. More than two-thirds of respondents with such use cases say adoption increased revenue, but fewer than 25% saw significant bottom-line impact.

Along with questions about AI adoption and implementation, the McKinsey State of AI report examines companies whose AI applications led to EBIT growth of 20% or more in 2019. Among the report’s findings: Respondents from those companies were more likely to rate C-suite executives as very effective, and the companies were more likely to employ data scientists than other businesses were.

At rates of difference of 20% to 30% or more compared to others, high-performing companies were also more likely to have a strategic vision and AI initiative road map, use frameworks for AI model deployment, or use synthetic data when they encountered an insufficient amount of real-world data. These results seem consistent with a Microsoft-funded Altimeter Group survey conducted in early 2019 that found half of high-growth businesses planned to implement AI in the year ahead.

If there was anything surprising in the report, it’s that only 16% of respondents said their companies have moved deep learning projects beyond a pilot stage. (This is the first year McKinsey asked about deep learning deployments.)

Also surprising: The report showed that businesses made little progress toward mounting a response to risks associated with AI deployment. Compared with responses submitted last year, companies taking steps to mitigate such risks saw an average 3% increase in response to 10 different kinds of risk — from national security and physical safety to regulatory compliance and fairness. Cybersecurity was the only risk that a majority of respondents said their companies are working to address. The percentage of those surveyed who consider AI risks relevant to their company actually dropped in a number of categories, including in the area of equity and fairness, which declined from 26% in 2019 to 24% in 2020.

McKinsey partner Roger Burkhardt called the survey’s risk results concerning.

“While some risks, such as physical safety, apply to only particular industries, it’s difficult to understand why universal risks aren’t recognized by a much higher proportion of respondents,” he said in the report. “It’s particularly surprising to see little improvement in the recognition and mitigation of this risk, given the attention to racial bias and other examples of discriminatory treatment, such as age-based targeting in job advertisements on social media.”

Less surprising, the survey found an uptick in automation in some industries during the pandemic. VentureBeat reporters have found this to be true across industries like agriculture, construction, meatpacking, and shipping.

“Most respondents at high performers say their organizations have increased investment in AI in each major business function in response to the pandemic, while less than 30% of other respondents say the same,” the report reads.

The McKinsey State of AI in 2020 global survey was conducted online from June 9 to June 19 and garnered nearly 2,400 responses, with 48% reporting that their companies use some form of AI. A 2019 McKinsey survey of roughly the same number of business leaders found that while nearly two-thirds of companies reported revenue increases due to the use of AI, many still struggled to scale its use.

The other State of AI

A month before McKinsey published its business survey, Air Street Capital released its State of AI report, which is now in its third year. The London-based venture capital firm found the AI industry to be strong when it comes to company funding rounds, but its report calls centralization of AI talent and compute “a huge problem.” Other serious problems Air Street Capital identified include ongoing brain drain from academia to industry and issues with reproducibility of models created by private companies.

A number of the report’s conclusions are in line with a recent analysis of AI research papers that found the concentration of deep learning activity among Big Tech companies, industry leaders, and elite universities is increasing inequality. The team behind this analysis says a growing “compute divide” could be addressed in part by the implementation of a national research cloud.

As we inch toward the end of the year, we can expect more reports on the state of machine learning. The state of AI reports released in the past two months demonstrate a variety of challenges but suggest AI can help businesses save money, generate revenue, and follow proven best practices for success. At the same time, researchers are identifying big opportunities to address the various risks associated with deploying AI.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

AI research finds a ‘compute divide’ concentrates power and accelerates inequality in the era of deep learning

November 11, 2020   Big Data

AI researchers from Virginia Tech and Western University have concluded that an unequal distribution of compute power in academia is furthering inequality in the era of deep learning. They also point to the impact on academia of people leaving prestigious universities for high-paying industry jobs.

The concentration of compute power at elite universities crowds out mid- to low-tier research organizations, according to analysis that draws on 171,394 papers from nearly 60 prestigious computer science conferences. The team reviewed papers accepted for publication at large AI conferences such as ACL, ICML, and NeurIPS in categories like computer vision, data mining, machine learning, and NLP.

“Exploiting the sudden rise of deep learning due to an unanticipated usage of GPUs since 2012, we find that AI is increasingly being shaped by a few actors, and these actors are mostly affiliated with either large technology firms or elite universities,” their paper reads. “To truly ‘democratize’ AI, a concerted effort by policymakers, academic institutions, and firm-level actors is needed to tackle the compute divide.”

Nur Ahmed and Muntasir Wahed summarized their findings and recommendations in a paper titled “The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research.” The paper was published recently on arXiv and presented in late October at Strategic Management Society, a business research conference.

The fact that wealthier universities and companies have an advantage in deep learning is not surprising. Large modern networks like AlphaGo Zero and GPT-3 can require millions of dollars in compute for training. And a December 2019 analysis labeled Google, Stanford University, MIT, Carnegie Mellon University, UC Berkeley, and Microsoft as the top six contributors at leading AI research conferences.

At the same time, smaller schools often lack the financial resources to consider deep learning applications. This limitation can define the kinds of AI that researchers in academia explore or accelerate brain drain to Big Tech companies with plenty of money to compete for top AI talent.

Confirming this opportunity gap, the paper found that universities ranked 301-500 by U.S. News and World Report have published on average six fewer papers at AI research conferences — or 25% fewer than a counterfactual estimator — since the rise of deep learning. Fortune 500 companies, Big Tech leaders, and elite universities saw dramatically different trends.

“To the best of our knowledge, this is the first study that finds evidence that an increased need for specialized equipment can result in ‘haves and have-nots’ in a scientific field,” the paper reads. “We contend that the rise of deep learning increases the importance of compute and data drastically, which, in turn, heightens the barriers of entry by increasing the costs of knowledge production.”

The coauthors say their work demonstrates what they call the “compute divide” along a series of social fault lines. Elite universities tend to have more wealthy students and are typically less diverse than other schools. Big Tech firms also lack diversity, particularly among engineers, people who design products, and AI researchers. Since AI has become a general purpose technology impacting aspects of business, public services, and private lives, this demographic imbalance has widespread consequences.

In analyzing this trend, Ahmed and Wahed break the history of artificial intelligence into two eras. They define the first as stretching from the 1960s to about 2012, when general purpose hardware was used to train AI. In the second era, deep learning and specialized hardware like GPUs have defined the industry, since the two were found to be effective together in the ImageNet image classification competition to advance computer vision.

When it comes to solutions, the coauthors say their findings present “concrete evidence” of the need for a national AI research cloud. In June, major universities, tech companies, and members of the U.S. Senate backed the idea of a national AI research cloud. Shared public datasets that can help train and test AI models will be particularly beneficial for resource-constrained organizations.

The paper asserts that the U.S. government should help universities by extending shared public datasets and other resources. Groups like the Defense Innovation Board and National Security Commission on AI (NSCAI) have advised the Pentagon and Congress to increase public-private partnerships, government funding, and outreach to developers working remotely as a way to attract talent from nontraditional backgrounds.

We could see movement on these fronts in the months ahead. President-elect Joe Biden’s platform has committed to investing $300 billion in research and development in areas like 5G and artificial intelligence.

Ahmed and Wahed’s findings are backed up by other recent papers evaluating the AI ecosystem and the technology’s role in bringing academia and industry closer together. For example, a paper called “Artificial Intelligence, Human Capital, and Innovation” found that AI created an unprecedented brain drain from academia between 2004 and 2018, leading to more than 200 people leaving for industry positions. Published in fall 2019 and updated last month, the paper finds that top universities, Ph.D. students, and startups in deep learning are among those that benefit most from current AI talent shortages. The analysis also found that Carnegie Mellon University, MIT, and Stanford University rank highest among colleges whose alumni go on to launch AI startups.

Ahmed and Wahed’s paper also follows a survey of more than 200 computer science department chairs on the impact of industry on academia. Commissioned by the Computing Research Association (CRA) and released a few months ago, the study identifies both positive and negative results of close cooperation between academia and industry. These changes include a shift of computing research faculty to industry jobs.

“This shift has the potential for negative impacts on the kinds of research done, the quality of the research, the culture of computer science departments, and the training of undergraduates and graduate students. Particular attention needs to be focused on issues related to department culture, potential conflict of interest, intellectual property, and ensuring that students continue to have sufficient faculty mentoring and contact to prepare them for their career,” a white paper about the survey reads.


LinkedIn open-sources Dagli, a machine learning library for Java

November 11, 2020   Big Data

LinkedIn today open-sourced Dagli, a machine learning library for Java (and other JVM languages) that ostensibly makes it easier to write bug-resistant, readable, modifiable, maintainable, and deployable model pipelines without incurring technical debt.

While machine learning maturity in the enterprise is generally increasing, half of companies spend between 8 and 90 days deploying a single machine learning model (with 18% taking longer than 90 days), a 2019 survey from Algorithmia found. Most peg the blame on failure to scale, followed by model reproducibility challenges, a lack of executive buy-in, and poor tooling.

With Dagli, the model pipeline is defined as a directed acyclic graph (a graph of vertices and edges, with each edge directed from one vertex to another) that covers both training and inference. The Dagli environment provides pipeline definitions, static typing, near-ubiquitous immutability, and other features that prevent the large majority of potential logic errors.

“Models are typically part of an integrated pipeline … and constructing, training, and deploying these pipelines to production remains more cumbersome than it should be,” LinkedIn natural language processing research scientist Jeff Pasternack wrote in a blog post. “Duplicated or extraneous work is often required to accommodate both training and inference, engendering brittle ‘glue’ code that complicates future evolution and maintenance of the model.”

Dagli works on servers, Hadoop, command-line interfaces, IDEs, and other typical JVM contexts. Plenty of pipeline components are ready to use right out of the box, including neural networks, logistic regression, gradient boosted decision trees, FastText, cross-validation, cross-training, feature selection, data readers, evaluation, and feature transformations.

For experienced data scientists, Dagli offers a path to performant, production-ready AI models that are maintainable and extensible in the long term and can leverage an existing JVM technology stack. For software engineers with less experience, Dagli provides an API that can be used with a JVM language and tooling that’s designed to avoid typical logic bugs.

“With Dagli, we hope to make efficient, production-ready models easier to write, revise, and deploy, avoiding the technical debt and long-term maintenance challenges that so often accompany them,” Pasternack continued. “Dagli takes full advantage of modern, highly multicore processors and … powerful graphics cards for effective single-machine training of real-world models.”

The release of Dagli comes after LinkedIn made available the LinkedIn Fairness Toolkit (LiFT), an open source software library designed to enable the measurement of fairness in AI and machine learning workflows. Prior to LiFT, LinkedIn debuted DeText, an open source framework for natural language process-related ranking, classification, and language generation tasks that leverages semantic matching, using deep neural networks to understand member intents in search and recommender systems.


Google details how it’s using AI and machine learning to improve search

October 16, 2020   Big Data

During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.

Soon, Google says, users will be able to see how busy places are in Google Maps without having to search for specific beaches, parks, grocery stores, gas stations, laundromats, pharmacies, or other businesses, an expansion of Google’s existing busyness metrics. The company also says it’s adding COVID-19 safety information to business profiles across Search and Maps, revealing whether they’re using safety precautions like temperature checks, plexiglass, and more.

An algorithmic improvement to “Did you mean,” Google’s spell-checking feature for Search, will enable more accurate and precise spelling suggestions. Google says the new underlying language model contains 680 million parameters — the variables that determine each prediction — and runs in less than three milliseconds. “This single change makes a greater improvement to spelling than all of our improvements over the last five years,” Prabhakar Raghavan, head of Search at Google, said in a blog post.

Beyond this, Google says it can now index individual passages from webpages as opposed to whole pages. When this rolls out fully, it will improve roughly 7% of search queries across all languages, the company claims. A complementary AI component will help Search capture the nuances of what webpages are about, ostensibly leading to a wider range of results for search queries.

“We’ve applied neural nets to understand subtopics around an interest, which helps deliver a greater diversity of content when you search for something broad,” Raghavan continued. “As an example, if you search for ‘home exercise equipment,’ we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page.”

Google is also bringing Data Commons, its open knowledge repository that combines data from public datasets (e.g., COVID-19 stats from the U.S. Centers for Disease Control and Prevention) using mapped common entities, to search results on the web and mobile. In the near future, users will be able to search for topics like “employment in Chicago” on Search to see information in context.

On the ecommerce and shopping front, Google says it has built cloud streaming technology that enables users to see products in augmented reality (AR). With cars from Volvo, Porsche, and other “top” auto brands, for example, they can zoom in to view the steering wheel and other details in a driveway, to scale, on their smartphones. Separately, Google Lens on the Google app or Chrome on Android (and soon iOS) will let shoppers discover similar products by tapping on elements like vintage denim, ruffle sleeves, and more.

Above: Augmented reality previews in Google Search.

Image Credit: Google

In another addition to Search, Google says it will deploy a feature that highlights notable points in videos — for example, a screenshot comparing different products or a key step in a recipe. (Google expects 10% of searches will use this technology by the end of 2020.) And Live View in Maps, a tool that taps AR to provide turn-by-turn walking directions, will enable users to quickly see information about restaurants including how busy they tend to get and their star ratings.

Lastly, Google says it will let users search for songs by simply humming or whistling melodies, initially in English on iOS and in more than 20 languages on Android. You will be able to launch the feature by opening the latest version of the Google app or Search widget, tapping the mic icon, and saying “What’s this song?” or selecting the “Search a song” button, followed by at least 10 to 15 seconds of humming or whistling.

“After you’re finished humming, our machine learning algorithm helps identify potential song matches,” Google wrote in a blog post. “We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available.”

Google says that melodies hummed into Search are transformed by machine learning algorithms into a number-based sequence representing the song’s melody. The models are trained to identify songs based on a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. They also strip away all the other details, like accompanying instruments and the voice’s timbre and tone. This leaves a fingerprint that Google compares with thousands of songs from around the world to identify potential matches in real time, much like the Pixel’s Now Playing feature.

“From new technologies to new opportunities, I’m really excited about the future of search and all of the ways that it can help us make sense of the world,” Raghavan said.

Last month, Google announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google also recently revealed it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google said it launched an update using language models to improve the matching between news stories and available fact checks.

In 2019, Google peeled back the curtains on its efforts to solve query ambiguities with a technique called Bidirectional Encoder Representations from Transformers, or BERT for short. BERT, which emerged from the tech giant’s research on Transformers, forces models to consider the context of a word by looking at the words that come before and after it. According to Google, BERT helped Google Search better understand 10% of queries in the U.S. in English — particularly longer, more conversational searches where prepositions like “for” and “to” matter a lot to the meaning.

BERT is now used in every English search, Google says, and it’s deployed across languages including Spanish, Portuguese, Hindi, Arabic, and German.

The secrets of small data: How machine learning finally reached the enterprise

October 9, 2020   Big Data

Over the past decade, “big data” has become Silicon Valley’s biggest buzzword. When they’re trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there’s a catch: Most companies are limited to “small” data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you’re trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular — transfer learning and collective learning — have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved only for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.

Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees’ IT support requests with over 90% precision.

Image Credit: Moveworks

From DIY to open source

Of course, data isn’t the only prerequisite for a world-class machine learning model — there’s also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models — including the famous BERT model for understanding language, which Google released in 2018 — have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named “large” version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it’s open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn’t possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock them. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

Massive progress with small data

So here’s the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply that model to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT — designed to understand generic language — and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we’ve only begun to realize the enormous potential of small data.
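
Structurally, that pattern looks something like the following Python sketch (not Moveworks' actual pipeline): a pretrained encoder supplies the general language knowledge, and only a small task-specific classifier is trained on a handful of labeled examples. The pretrained_embed function is a placeholder for a real encoder such as BERT, and the tickets and labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pretrained_embed(texts):
    """Placeholder for a pretrained language encoder (e.g., BERT).
    A real implementation would return meaningful sentence vectors;
    here we fake fixed-size vectors just to show the shape of the approach."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 768))

# The company's "small data": a handful of labeled IT-support requests (made up)
tickets = ["vpn keeps disconnecting", "need access to the sales dashboard",
           "laptop screen is cracked", "please reset my password"]
labels = ["network", "access", "hardware", "access"]

# Transfer learning in miniature: reuse the pretrained representation,
# train only a small task-specific "head" on the few labeled examples
head = LogisticRegression(max_iter=1000).fit(pretrained_embed(tickets), labels)
print(head.predict(pretrained_embed(["cannot reach the corporate wifi"])))
```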

Above: Transfer learning leverages knowledge from a related domain — typically one with a greater supply of training data — to augment the small data of a given ML use case.

Image Credit: Moveworks

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called “few-shot learning,” a term that’s quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created — such as the landmark GPT-3 model and its 175 billion parameters, which is orders of magnitude more than BERT — have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its “tangential domain,” GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn’t need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise — ones for which training data is almost nonexistent.

The power of the collective

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:

Above: Collective learning involves abstracting data — in this case, sentences — with ML to uncover universal patterns and structures.

Image Credit: Moveworks

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers’ data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we’re witnessing the emergence of a new kind of workplace — one powered by machine learning on small data.

Measuring & Optimizing Overall Equipment Effectiveness with Machine Learning

October 7, 2020   BI News and Info

Overall equipment effectiveness (OEE) is a metric used in manufacturing operations to see and understand how efficiently processes and equipment are being used. It looks at facilities, time, materials, and the productivity of each of those discrete aspects of the manufacturing process. Given the budget constraints that all manufacturers face, it’s obvious why equipment effectiveness measures are so popular and necessary.

To dive into the concept of overall equipment effectiveness further, there are three major aspects that you must consider when it comes to setting key performance indicators (KPIs) for OEE.

  • Availability: The amount of time that the equipment is ready and useful for operation
  • Performance: How well the equipment works at maximum operating speed
  • Quality: The percentage of ‘good’ parts that are produced (as opposed to parts that need to be scrapped)

Key Performance Indicators for Overall Equipment Effectiveness

Because each of these KPIs is quantifiable, there are calculations managers can use to benchmark their performance, thus allowing them to find places for improvement and then dial up the optimal outputs for their processes. The KPIs are fairly straightforward to calculate.

Availability is measured as the portion of time that a given machine is ready to operate. It is sometimes referred to as ‘uptime’ and is expressed as a simple ratio of operating time over scheduled time, where operating time is the actual measured time that the machine is used, and scheduled time is, for example, the time the machine is scheduled to run during a shift.

Performance is measured as a ratio of the actual pace of the machine over the designed speed of the machine, and as such doesn’t consider either quality or availability. As an uncomplicated example, if a machine can travel at 100 MPH, and it’s only driven at 50 MPH, the performance would be shown as 50%.

Quality is measured as the ratio of good parts to the overall total of the parts output of the machine. This is also considered the yield.
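
Putting the three KPIs together, the standard OEE score is simply their product. Here is a short Python sketch with made-up shift numbers:

```python
def oee(operating_time, scheduled_time, actual_rate, design_rate, good_parts, total_parts):
    """Overall equipment effectiveness = availability x performance x quality."""
    availability = operating_time / scheduled_time  # uptime ratio
    performance = actual_rate / design_rate         # actual vs. designed speed
    quality = good_parts / total_parts              # yield of good parts
    return availability * performance * quality

# Example shift (made-up numbers): 7 of 8 scheduled hours running,
# 45 parts/hour against a design speed of 60, and 410 good parts out of 430 produced
score = oee(7, 8, 45, 60, 410, 430)
print(f"OEE: {score:.1%}")  # roughly 62.6%
```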

Key Benefits of Using Overall Equipment Effectiveness

The old saying ‘you can’t manage what you can’t measure’ applies quite well to OEE. Having the ability to measure each of these KPIs means manufacturers that adopt OEE can better manage their processes.

Some of the benefits of using OEE as a tool for manufacturing include:

  • Optimizing machine usage: Developing an understanding of a machine’s performance allows manufacturers to optimize that performance with subtle adjustments
  • Improving process quality: Producing fewer defective products means less waste and better ROI
  • Reducing repair costs: Knowing the expected machine efficiency means that proactive measures can be taken to repair prior to major breakdowns

Optimizing Overall Equipment Effectiveness with ML

From an OEE perspective, it’s a best practice to measure at the step in the process where there’s a bottleneck, or potential constraint. No matter what’s being manufactured, there’s always a point in the process that can become an obstacle. It’s at that point that OEE is critical to understanding what’s happening, because that point determines the overall performance.

In the past, measuring OEE and making adjustments based on those measures was something that happened largely manually, and was based on historical knowledge. But it shouldn’t come as any surprise that each of these is an ideal use case for machine learning. As we describe in our blog post on predictive maintenance, the major driver for the Industry 4.0 Revolution is the rapid development of the Internet of Things (IoT).

As sensors become more embedded in machines, as well as integral to manufacturing, it becomes easier for manufacturers to automatically measure the necessary components to optimize their intricate operations. The data that is generated by the sensors can be used to ensure that the appropriate machine learning algorithms have enough data to be useful.

For the above-mentioned three major aspects of OEE (availability; performance; quality), here’s how machine learning can be leveraged:

  • Availability: Machine learning algorithms can help lower the amount of time needed to set up or retool manufacturing lines, based on previous similar occurrences, helping to increase OEE
  • Performance: The data gathered can help an ML algorithm identify roadblocks or slowdowns in production, and then leverage predictive maintenance to lessen or eliminate them
  • Quality: ML algorithms can be applied to increase the usable manufacturing yields of a process

Final Thoughts

OEE is a valuable tool in almost every manufacturing operation and, by using the proper machine learning techniques, manufacturers can truly optimize their operational efficiencies, easily and automatically.

If you’re curious to learn more about the effects that AI and ML are having on your manufacturing processes, sign up for a free, no obligation AI assessment. We’ll help you explore the most impactful ways you can use AI in your operations.

10 machine learning algorithms you need to know

September 7, 2020   BI News and Info

If you’ve just started to explore the ways that machine learning can impact your business, the first questions you’re likely to come across are what are all of the different types of machine learning algorithms, what are they good for, and which one should I choose for my project? This post will help you answer those questions.

There are a few different ways to categorize machine learning algorithms. One way is based on what the training data looks like. There are three different categories used by data scientists with respect to training data:

  • Supervised, where the algorithms are trained based on labeled historical data—which has often been annotated by humans—to try and predict future results.
  • Unsupervised, by contrast, uses unlabeled data that the algorithms try to make sense of by extracting rules or patterns on their own.
  • Semi-supervised, which is a mix of the two above methods, usually with the preponderance of data being unlabeled, and a small amount of supervised (labeled) data.

Another way to classify algorithms—and one that’s more practical from a business perspective—is to categorize them based on how they work and what kinds of problems they can solve, which is what we’ll do here.

There are three basic categories here as well: regression, clustering, and classification algorithms. Let’s jump into each.

Regression algorithms

There are basically two kinds of regression algorithms that we commonly see in business environments. These are based on the same regression that might be familiar to you from statistics.

1. Linear regression

Described very simply, linear regression fits a line to a set of data points, each pairing a dependent variable (plotted on the y-axis) with an explanatory variable (plotted on the x-axis).

Linear regression is a commonly used statistical model that can be thought of as a kind of Swiss Army knife for understanding numerical data. For example, linear regression can be used to understand the impact of price changes on goods and services by mapping various prices against the sales achieved at each price, in order to help guide pricing decisions. Depending on the specific use case, some of the variants of linear regression, including ridge regression, lasso regression, and polynomial regression, might be suitable as well.
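
As a rough illustration of that pricing example (with invented numbers, not data from the post), a linear regression in scikit-learn looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up observations: unit price vs. units sold
prices = np.array([[9.99], [12.49], [14.99], [17.49], [19.99]])
units_sold = np.array([520, 470, 410, 350, 300])

model = LinearRegression().fit(prices, units_sold)
print("estimated change in units per $1 of price:", round(model.coef_[0], 1))
print("predicted demand at $16:", int(model.predict([[16.0]])[0]))
```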

2. ARIMA

ARIMA (“autoregressive integrated moving average”) models can be considered a special type of regression model.

It allows you to explore time-dependent data points because it understands data points as a sequence, rather than as independent from one another. For this reason, ARIMA models are especially useful for conducting time-series analyses, for example, demand and price forecasting.
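
Here is a hedged sketch of a time-series forecast using the statsmodels implementation of ARIMA; the demand series and the (p, d, q) order are invented for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Made-up monthly demand with a gentle upward trend
demand = np.array([112, 118, 121, 127, 135, 139, 146, 152, 158, 165, 171, 178], dtype=float)

# (p, d, q) = (1, 1, 1): one autoregressive term, first differencing, one moving-average term
model = ARIMA(demand, order=(1, 1, 1))
fitted = model.fit()
print(fitted.forecast(steps=3))  # demand forecast for the next three periods
```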

Clustering algorithms

Clustering algorithms are typically used to find groups in a dataset, and there are a few different types of algorithms that can do this.

3. k-means clustering

k-means clustering is generally used to identify data points with related characteristics and group them together.

Businesses looking to develop customer segmentation strategies might use k-means clustering to better target marketing campaigns to the groups of customers most likely to respond. Another use case for k-means clustering is detecting insurance fraud, using historical data on claims that turned out to be fraudulent to examine current cases.
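
In practice, a segmentation like the one described above is only a few lines of scikit-learn; the customer features below are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up customer features: [annual spend ($), orders per year]
customers = np.array([[1200, 24], [150, 2], [980, 18], [200, 3], [1500, 30], [90, 1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("segment assignments:", kmeans.labels_)
print("segment centers:\n", kmeans.cluster_centers_)
```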

4. Agglomerative & divisive clustering

Agglomerative clustering is a method used for finding hierarchical relationships among data clusters.

It uses a bottom-up approach, putting each individual data point into its own cluster, and then merging similar clusters together. By contrast, divisive clustering takes the opposite approach, and assumes all the data points are in the same cluster and then divides similar clusters from there.

A timely use case for these clustering algorithms is tracking viruses. By using DNA analysis, scientists are able to better understand mutation rates and transmission patterns.
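
Here is a small SciPy sketch of agglomerative (bottom-up) clustering on invented points; the linkage matrix encodes the hierarchy a dendrogram would draw.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up 2D observations
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])

Z = linkage(points, method="ward")                # bottom-up merges, closest clusters first
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the hierarchy into two clusters
print("cluster labels:", labels)
```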

Classification algorithms

Classification algorithms are similar to clustering algorithms, but while clustering algorithms are used to both find the categories in data and sort data points into those categories, classification algorithms sort data into predefined categories.

5. k-nearest neighbors

Not to be confused with k-means clustering, k-nearest neighbors is a pattern classification method that looks at the data presented, scans through all past experiences, and identifies the one that is the most similar.

k-nearest neighbors is often used for activity analysis in credit card transactions, comparing new transactions to previous ones. Abnormal behavior, like using a credit card to make a purchase in another country, might trigger a call from the card issuer’s fraud detection unit. The algorithm can also be used for visual pattern recognition, and it’s now frequently used as part of retailers’ loss prevention tactics.
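
As a toy illustration of that fraud-screening idea (features and labels invented), a k-nearest neighbors classifier in scikit-learn looks like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Made-up transactions: [amount ($), foreign country? (0/1), hour of day]
X = np.array([[25, 0, 12], [40, 0, 18], [32, 0, 9], [900, 1, 3], [15, 0, 20], [1100, 1, 4]])
y = ["ok", "ok", "ok", "fraud", "ok", "fraud"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[850, 1, 2]]))  # judged against the most similar past transactions
```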

6. Tree-based algorithms

Tree-based algorithms, including decision trees, random forests, and gradient-boosted trees, are used to solve classification problems. Decision trees excel at understanding data sets that have many categorical variables and can be effective even when some data is missing.

They’re primarily used for predictive modeling, and are helpful in marketing, answering questions like “which tactics should we be doing more of?” A decision tree might help an email marketer decide which customers would be more likely to order based on specific offers.

A random forest algorithm uses multiple trees to come up with a more complete analysis. In a random forest algorithm, multiple trees are created, and the forest uses the average decisions of its trees to make a prediction.

Gradient-boosted trees (GBTs) also use decision trees but rely on an iterative approach to correct for any mistakes in the individual decision tree models. GBTs are widely considered to be one of the most powerful predictive methods available to data scientists and can be used by manufacturers to optimize the pricing of a product or service for maximum profit, among other use cases.
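
To compare the three approaches side by side, here is a minimal scikit-learn sketch on a synthetic dataset; the dataset and the model settings are defaults chosen for illustration, not tuned values.

# Minimal sketch comparing a single decision tree, a random forest, and
# gradient-boosted trees in scikit-learn on a toy classification dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (
    DecisionTreeClassifier(random_state=0),
    RandomForestClassifier(n_estimators=100, random_state=0),
    GradientBoostingClassifier(random_state=0),
):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))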

7. Support vector machine

A support vector machine (SVM) is, according to some practitioners, the most popular machine learning algorithm. It's a classification (or sometimes a regression) algorithm used to separate a dataset into classes; for example, two classes might be separated by a line that marks the boundary between them.

There could be an infinite number of lines that do the job, but SVM helps find the optimal one. Data scientists use SVMs in a wide variety of business applications, including image classification, face detection, handwriting recognition, and bioinformatics.
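
A minimal sketch of the idea, assuming a small set of made-up 2-D points, might look like this in scikit-learn:

# Minimal support vector machine sketch with scikit-learn.
# Two invented classes of 2-D points; the SVM finds the separating
# boundary with the widest margin between them.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

svm = SVC(kernel="linear").fit(X, y)
print("predicted class for [3, 2]:", svm.predict([[3, 2]])[0])
print("predicted class for [7, 6]:", svm.predict([[7, 6]])[0])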

8. Neural networks

Neural networks are a set of algorithms designed to recognize patterns and mimic, as much as possible, the human brain. Neural nets, like the brain, are able to adapt to changing conditions, even ones they weren't originally designed for.

A neural net can be taught to recognize, say, an image of a dog by providing a training set of images of dogs. Once the algorithm processes the training set, it can then classify novel images into ‘dogs’ or ‘not dogs’. Neural networks work on more than just images, though, and can be used for text, audio, time-series data, and more. There are many different types of neural networks, all optimized for the specific tasks they’re intended to work on.

Some of the business applications for neural networks are weather prediction, face detection and recognition, transcribing speech into text, and stock market forecasting. Marketers are using neural networks to target specific content and offers to customers who would be most ready to act on the content.

Deep learning is really a subset of neural networks, where algorithms 'learn' by analyzing large datasets. Deep learning has a myriad of business uses, and in many cases it can outperform more general machine learning algorithms. Deep learning generally doesn't require human input for feature creation, for example, so it is well suited to tasks such as text understanding, voice and image recognition, and autonomous driving, among many others.
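
As a rough illustration, here is a small feed-forward network built with scikit-learn's MLPClassifier on the bundled handwritten-digits dataset; the layer sizes are arbitrary, and a production image or speech model would more likely use a dedicated deep learning framework such as TensorFlow or PyTorch.

# Minimal neural-network sketch using scikit-learn's MLPClassifier
# (a small feed-forward network) on the bundled digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))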

Other algorithm types

In addition to the above categories, there are other types of algorithms that can be used during model creation and training to help the process, like fuzzy matching and feature selection algorithms.

9. Fuzzy matching

Fuzzy matching is a type of clustering algorithm that can make matches even when items aren’t exactly the same, due to data issues like typos. For some natural language processing tasks, preprocessing with fuzzy matching can improve results by three to five percent.

A typical use case is customer profile management. Fuzzy matching lets you recognize two very similar addresses as the same, so a single record ID and source file can be used instead of duplicate entries.
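
A minimal sketch of the idea using Python's standard difflib module follows; the addresses and the 0.85 similarity threshold are illustrative assumptions, and a real pipeline might use a dedicated fuzzy-matching library instead.

# Minimal fuzzy-matching sketch using Python's standard difflib module.
# The addresses are invented and the threshold is an illustrative choice.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two strings, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

addr1 = "123 Main Street, Springfield"
addr2 = "123 Main St., Springfield"

score = similarity(addr1, addr2)
print(f"similarity: {score:.2f}")
if score > 0.85:
    print("Treat both records as the same customer address.")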

10. Feature selection algorithms

Feature selection algorithms are used to whittle down the number of input variables fed to a model. Fewer input variables can lower the computational cost of running the model and may also improve its performance.

Commonly used techniques such as PCA and mRMR aim to retain as much information as possible in a reduced set of features. Working with a smaller feature set can be beneficial because the model is less likely to be confused by noise and the algorithm's computation time goes down. Feature selection has been used to map business competitor relationships, for example.
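
For illustration, here is a minimal PCA sketch in scikit-learn that reduces a synthetic 10-feature dataset to three components; mRMR is not part of scikit-learn and would require a separate package.

# Minimal dimensionality-reduction sketch with PCA in scikit-learn,
# shrinking a 10-feature toy dataset to 3 components while keeping
# most of the variance.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, _ = make_classification(n_samples=300, n_features=10, random_state=0)

pca = PCA(n_components=3).fit(X)
X_reduced = pca.transform(X)
print("reduced shape:", X_reduced.shape)
print("variance explained:", pca.explained_variance_ratio_.sum())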

If you want to dive deeper into machine learning, including how to get your first project off the ground, check out RapidMiner’s Human’s Guide to Machine Learning Projects.


RapidMiner


Teradata is Selected by Brinker International to Enhance Advanced Analytics, Machine Learning and Data Science Capabilities

August 21, 2020   BI News and Info

Leading Casual Dining Restaurant Company Reinvests in Teradata as it Moves from On-Premises to the Cloud

Teradata (NYSE: TDC), the cloud data and analytics company, today announced that after an evaluation of other cloud analytics offerings on the market, Brinker International, Inc. (NYSE: EAT) has reinvested with Teradata, leveraging the Teradata Vantage platform – delivered as-a-service, on Amazon Web Services (AWS) – as the core of its data foundation to facilitate advanced analytics, machine learning and data science across the organization.
 
Brinker is one of the world’s leading casual dining restaurant companies and has been a Teradata customer for more than two decades. Founded in 1975 and based in Dallas, Texas, Brinker owns, operates, or franchises more than 1,600 restaurants under the names Chili’s® Grill & Bar and Maggiano’s Little Italy®. Over the past year, Brinker has been working to further increase its capabilities in advanced analytics and data science.
 
“Being a data-driven organization allows us to make informed decisions to create a better Guest and Team Member experience,” said Pankaj Patra, senior vice president and chief information officer at Brinker International. “As we looked for more flexible and cost-effective ways to manage and access our data, we evaluated quite a few cloud-native providers. After careful consideration, we decided the best course of action would be to migrate to Teradata Vantage in the cloud and take advantage of its as-a-service offerings to support our analytic goals.”
 
With Teradata Vantage delivered as-a-service, in the cloud, enterprises such as Brinker can focus on mining their data for insights that drive business decisions, rather than on managing infrastructure. By integrating Vantage’s machine learning capabilities, Brinker can now apply advanced analytics and predictive modeling to its business processes, enabling more accurate sales forecasting, demand and traffic forecasting, team member management, recommendation engines for customers and more.
 
“We’re proud of our ongoing relationship with Brinker and its long-standing position as a leader in the restaurant industry – a position due in large part to its culture of innovation in using data and analytics to streamline business processes, facilitate rapid decision-making and turn insights into answers,” said Ashish Yajnik, vice president of Vantage Cloud at Teradata. “Our collaboration with AWS and participation in the AWS Independent Software Vendor (ISV) Workload Migration Program has helped Brinker successfully move their mission-critical data infrastructure to the cloud. We look forward to expanding our relationship by powering their advanced analytics and data science capabilities through the scalable, clean and trusted data foundation that the Vantage platform provides.”
 
Teradata is an Advanced Technology and Consulting Partner in the AWS Partner Network (APN). The company brings proven processes and tools to make migrations to Vantage on AWS low risk and the fastest path to customer value through the AWS ISV Workload Migration – an APN Partner program that helps customers migrate ISV workloads to AWS to achieve their business goals and accelerate their cloud journey.
 
“Through the AWS ISV Workload Migration Program, Teradata was able to help Brinker migrate to Vantage on AWS securely and cost effectively. We are pleased to collaborate with Teradata and its long-standing customer Brinker to enhance their cloud practices,” said Sabina Joseph, director, Americas ISVs, Amazon Web Services, Inc.
 
Teradata Vantage is the leading hybrid cloud data analytics software platform that enables ecosystem simplification by unifying analytics, data lakes and data warehouses. With Vantage delivered as-a-service, enterprise-scale companies can eliminate silos and cost-effectively query all their data, all the time, regardless of where the data resides – in the cloud using low cost object stores, on multiple clouds, on-premises or anywhere in-between – to get a complete view of their business. And by combining Vantage with first party cloud services, Teradata enables customers to expand their cloud ecosystem with deep integration of cloud-specific, cloud-native services.
 
Webinar
Join Teradata for a live webinar on July 29th, 8:00 – 9:00 a.m. PT featuring Mark Abramson, lead architect, BI and analytics at Brinker International, and William McKnight, president of McKnight Consulting Group. The session will be moderated by Ed White, vice president, portfolio marketing and competitive intelligence at Teradata. Details below:
 
Webinar: Brinker’s Journey Back to Teradata
 
Wednesday, July 29th
8:00 a.m. – 9:00 a.m. PT /
11:00 a.m. – 12:00 p.m. ET
 
Registration is required and is open to Teradata prospects, customers, analysts, partners and Teradata employees.
 
This interactive webinar will highlight:

  • Brinker’s future analytic strategies and how Teradata will be part of its ongoing journey to lower overall costs and improve performance.
  • How Brinker embraces and drives benefits using Teradata Vantage on AWS, particularly to meet their advanced analytics and computing needs.
  • McKnight’s latest research into price-performance on modern cloud database management systems, including best practices. 

About Brinker International, Inc. 
Hi, welcome to Brinker International, Inc. (NYSE: EAT)! We’re one of the world’s leading casual dining restaurant companies. Founded in 1975 in Dallas, Texas, we stay true to our roots, but also enjoy exploring outside of our hometown. As of March 25, 2020, we owned, operated or franchised 1,675 restaurants in 29 countries and two territories under the names Chili’s® Grill & Bar (1,622 restaurants) and Maggiano’s Little Italy® (53 restaurants). Our passion is making people feel special and we hope you feel that passion each time you visit one of our restaurants or our home office. Find more information about us at www.brinker.com, follow us on LinkedIn or review us on Glassdoor.


Teradata United States
