Tag Archives: Performance

Free Training – How to Leverage Sales Intelligence to Improve Sales Performance

March 10, 2021   CRM News and Info

Join us for this free training session and learn how to leverage sales intelligence technology to improve sales performance.

Date: March 18th 
Time: 12-1pm Eastern

Research shows that incorporating sales intelligence in your sales process significantly improves sales performance for B2B sellers. Harvard Business Review found that top-performing sales teams cite intelligence as a key driver fueling sales growth:

  • 41% improvement in targeting
  • 40% improvement in forecasting
  • 34% improvement in lead quality
  • 27% reduction in time spent looking for data
  • 20% improvement in won opportunities

Be our guest for an informative session and learn how to incorporate sales intelligence into your sales cycle.

During this session you will learn: 

  • What is sales intelligence?
  • The cost of missing & bad data to sales
  • How to uncover deep information about your prospects & customers
    • Annual Revenue, ownership, employee count, and more
    • Key contacts with verified email and telephone
    • Technologies in use
    • Industry info. and similar companies
  • Automatically update prospect & customer info
  • Day-in-the-life of sales using intelligence

All attendees will receive 30 days of access to InsideView Insights, a leading B2B sales intelligence platform.

CLICK HERE to register

About the Author: David Buggy is a veteran of the CRM industry with 18 years of experience helping businesses transform by leveraging Customer Relationship Management technology. He has over 17 years of experience with Microsoft Dynamics CRM/365 and has helped hundreds of businesses plan, implement and support CRM initiatives. To reach David, call 844.8.STRAVA (844.878.7282). To learn more about Strava Technology Group, visit www.stravatechgroup.com.


CRM Software Blog | Dynamics 365


Facebook’s new computer vision model achieves state-of-the-art performance by learning from random images

March 4, 2021   Big Data



Facebook today announced an AI model trained on a billion images that ostensibly achieves state-of-the-art results on a range of computer vision benchmarks. Unlike most computer vision models, which learn from labeled datasets, Facebook’s generates labels from data by exposing the relationships between the data’s parts — a step believed to be critical to someday achieving human-level intelligence.

The future of AI lies in crafting systems that can make inferences from whatever information they’re given without relying on annotated datasets. Provided text, images, or another type of data, an AI system would ideally be able to recognize objects in a photo, interpret text, or perform any of the countless other tasks asked of it.

Facebook claims to have made a step toward this with a computer vision model called SEER, which stands for SElf-supERvised. SEER contains a billion parameters and can learn from any random group of images on the internet without the need for curation or annotation. Parameters, a fundamental part of machine learning systems, are the part of the model derived from historical training data.

New techniques

Self-supervision for vision is a challenging task. With text, semantic concepts can be broken up into discrete words, but with images, a model must decide for itself which pixel belongs to which concept. Making matters more challenging, the same concept will often vary between images. Grasping the variation around a single concept, then, requires looking at a lot of different images.

Facebook researchers found that scaling AI systems to work with complex image data required at least two core components. The first was an algorithm that could learn from a vast number of random images without any metadata or annotations, while the second was a convolutional network — ConvNet — large enough to capture and learn every visual concept from this data. Convolutional networks, which were first proposed in the 1980s, are inspired by biological processes, in that the connectivity pattern between components of the model resembles the visual cortex.

In developing SEER, Facebook took advantage of an algorithm called SwAV, which was borne out of the company’s investigations into self-supervised learning. SwAV uses a technique called clustering to rapidly group images from similar visual concepts and leverage their similarities, improving over the previous state-of-the-art in self-supervised learning while requiring up to 6 times less training time.
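To make the clustering idea concrete, here is a minimal sketch of clustering-based pseudo-labeling, the general mechanism SwAV builds on. It is not Facebook's SwAV implementation; the embedding size, cluster count, and the random `features` array are illustrative stand-ins for real image embeddings.

```python
# Minimal sketch of clustering-based pseudo-labeling, the general idea behind
# SwAV-style self-supervision; this is NOT Facebook's SwAV implementation.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels(features: np.ndarray, n_concepts: int = 100) -> np.ndarray:
    """Group visually similar image embeddings and return a cluster id per image."""
    kmeans = KMeans(n_clusters=n_concepts, n_init=10, random_state=0)
    return kmeans.fit_predict(features)

# Stand-in data: 10,000 images embedded into 256-dim features by some encoder.
features = np.random.rand(10_000, 256).astype(np.float32)
labels = pseudo_labels(features)  # the cluster ids can then supervise the encoder itself
```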

[Figure: A simplified schematic showing SEER’s model architecture. Image credit: Facebook]

Training models at SEER’s size also required an architecture that was efficient in terms of runtime and memory without compromising on accuracy, according to Facebook. The researchers behind SEER opted to use RegNets, a type of ConvNet capable of scaling to billions or potentially trillions of parameters while fitting within runtime and memory constraints.

Facebook software engineer Priya Goyal said SEER was trained on 512 NVIDIA V100 GPUs with 32GB of RAM for 30 days.

The last piece that made SEER possible was a general-purpose library called VISSL, short for VIsion library for state-of-the-art Self Supervised Learning. VISSL, which Facebook is open-sourcing today, allows for self-supervised training with a variety of modern machine learning methods. The library facilitates self-supervised learning at scale by integrating algorithms that reduce the per-GPU memory requirement and increase the training speed of any given model.

Performance and future work

After pretraining on a billion public Instagram images, SEER outperformed the most advanced existing self-supervised systems, Facebook says. SEER also outperformed models on downstream tasks including object detection, segmentation, and image classification. When trained with just 10% of the examples in the popular ImageNet dataset, SEER still managed to hit 77.9% accuracy. And when trained with just 1%, SEER was 60.5% accurate.

When asked whether the Instagram users whose images were used to train SEER were notified or given an opportunity to opt out of the research, Goyal noted that Facebook informs Instagram account holders in its data policy that it uses information like pictures to support research, including the kind underpinning SEER. That said, Facebook doesn’t plan to share the images or the SEER model itself, in part because the model might contain unintended biases.

“Self-supervised learning has long been a focus for Facebook AI because it enables machines to learn directly from the vast amount of information available in the world, rather than just from training data created specifically for AI research,” Facebook wrote in a blog post. “Self-supervised learning has incredible ramifications for the future of computer vision, just as it does in other research fields. Eliminating the need for human annotations and metadata enables the computer vision community to work with larger and more diverse datasets, learn from random public images, and potentially mitigate some of the biases that come into play with data curation. Self-supervised learning can also help specialize models in domains where we have limited images or metadata, like medical imaging. And with no labor required up front for labeling, models can be created and deployed quicker, enabling faster and more accurate responses to rapidly evolving situations.”


Big Data – VentureBeat


Facebook researchers propose ‘pre-finetuning’ to improve language model performance

February 2, 2021   Big Data



Machine learning researchers have achieved remarkable success with language model pretraining, which uses self-supervision, a training technique that doesn’t require labeled data. Pretraining refers to training a model with one task to help it recognize patterns that can be applied to a range of other tasks. In this way, pretraining imitates the way human beings process new knowledge. That is, using parameters of tasks that have been learned before, models learn to adapt to new and unfamiliar tasks.

For many natural language tasks, however, training examples for related problems exist. In an attempt to leverage these, researchers at Facebook propose “pre-finetuning,” a methodology of training language models that involves a learning step with over 4.8 million training examples performed on around 50 classification, summarization, question-answering, and commonsense reasoning datasets. They claim that pre-finetuning consistently improves performance for pretrained models while also significantly improving sample efficiency during fine-tuning.
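Structurally, pre-finetuning is multi-task learning over a shared encoder with one classification head per task. The sketch below illustrates that skeleton in PyTorch; it is not Facebook's MUPPET code, and the tiny MLP encoder, task names, and loss-summing scheme are placeholders for a real pretrained encoder and real datasets.

```python
# Minimal PyTorch sketch of the multi-task structure behind pre-finetuning:
# one shared encoder, one head per task, losses summed across task batches.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, task_classes: dict):
        super().__init__()
        self.encoder = encoder  # in practice, a pretrained RoBERTa/BART-style encoder
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, n_labels) for task, n_labels in task_classes.items()}
        )

    def forward(self, task: str, features: torch.Tensor) -> torch.Tensor:
        return self.heads[task](self.encoder(features))

task_classes = {"nli": 3, "boolq": 2, "sentiment": 2}      # illustrative task set
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU())    # toy stand-in encoder
model = MultiTaskModel(encoder, hidden_dim=256, task_classes=task_classes)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One pre-finetuning step: sum the losses from one batch of each task, then update.
batches = {t: (torch.randn(16, 128), torch.randint(0, n, (16,))) for t, n in task_classes.items()}
total_loss = sum(loss_fn(model(t, x), y) for t, (x, y) in batches.items())
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
```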

It’s an approach that has been attempted before, often with success. In a 2019 study, researchers at the Allen Institute noticed that pre-finetuning a BERT model on a multiple-choice question dataset appeared to teach the model something about multiple-choice questions in general. A subsequent study found that pre-finetuning increased a model’s robustness to name swaps, where the names of different people were swapped in a sentence the model had to answer questions about.

In order to ensure that their pre-finetuning stage incorporated general language representations, the researchers included tasks in four different domains: classification, commonsense reasoning, machine reading comprehension, and summarization. They call their pre-finetuned models MUPPET, which roughly stands for “Massive Multi-task Representation with Pre-finetuning.”

After pre-finetuning RoBERTa and BART, two popular pretrained models for natural language understanding, the researchers tested their performance on widely used benchmarks including RTE, BoolQ, RACE, SQuAD, and MNLI. Interestingly, the results show that pre-finetuning can hurt performance when only a few tasks are used, up to a critical point of roughly 15 tasks; beyond that point, performance improves in proportion to the number of language tasks. MUPPET models outperform their vanilla pretrained counterparts, and leveraging representations learned across 34-40 tasks enables the models to reach even higher accuracies with less data than a baseline RoBERTa model.

“These [performance] gains are particularly strong in the low resource regime, where there is relatively little labeled data for fine-tuning,” the researchers wrote in a paper describing their work. “We show that we can effectively learn more robust representations through multitask learning at scale. … Our work shows how even seemingly very different datasets, for example, summarization and extractive QA, can help each other by improving the model’s representations.”


Big Data – VentureBeat


CRM’s Pandemic Performance

February 1, 2021   CRM News and Info

One of the many disappointments we’ve seen in handling the COVID-19 crisis has been how cloud software companies have been almost shut out of participating in providing solutions. In some ways it’s to be expected because databases and wet chemistry aren’t a natural fit, but still…

Early in the crisis vendors like Salesforce and Zoho began offering their products for free, or at low cost, to companies and end users suddenly faced with working at home. Oracle, Salesforce and others also developed tracking apps that could be easily deployed on the Web to help health authorities track exposure and quarantines.

Some states were more aggressive in trying to apply technology. Take Rhode Island, whose governor, Gina Raimondo, is a former venture capitalist currently being vetted by the Senate to be the next Secretary of Commerce. Raimondo engaged with Salesforce early on to get an exposure-tracking app built that has since been used in numerous other states.

But the apparent disinterest and even outright hostility aimed at public health initiatives in some states relegated tracking to the back of the bus. The only issue driving interest in many quarters has been finding a vaccine. That’s too bad, because silver bullets don’t often solve problems, though they can help a lot. By their nature, complex problems are best handled by a portfolio of solutions that lead to the desired result.

It’s not a question of vaccine or, but of vaccine and: vaccine and masks and hand hygiene and public health measures such as distancing and therapeutics. We know this, but we’ve been eagerly awaiting a vaccine while dismissing the other parts of the portfolio, with predictable results. Yet the portfolio is exactly where software solutions for tracking and tracing to support public health can do the most good. The public just seems to not have the interest.

Problem Solvers

Nevertheless, if you step back and look at what the tech sector has demonstrated to us through COVID, you can feel impressed. There are multiple huge networks capable of bringing technical solutions to any corner of a reasonably mature society (i.e., one with good internet support), as Oracle and others have shown.

There are also multiple application development environments capable of delivering management solutions with very little notice. They don’t deliver vaccines, but we’ve seen how ineffective vaccines alone are.

Look at Salesforce’s efforts over the pandemic: The company upended its development calendar about a year ago to develop and bring to market solutions that could make the pandemic a little less awful.

Salesforce Anywhere supports cubicle workers trying to do their jobs at home, and Work.com offers apps that help organizations do things we never thought of before, like staggering employee arrival times to minimize traffic on elevators where viral spread could happen.

Oracle recently announced a partnership with the Malaysian drug company Pharmaniaga Bhd to use the cloud-based logistics platform — Oracle Fusion Cloud Supply Chain and Manufacturing (SCM) — to improve logistics for vaccine distribution.

Salesforce also recently announced Vaccine Cloud, which is designed to support vaccine management, the stuff that happens once the vaccine is in vials and ready to ship. You can find out more here.

The most important thing from my perspective is that the product provides the structure for small organizations to effectively manage their time, labor and supplies to optimize getting shots into arms.

As a CRM issue, once again we have an example of how the platform is driving business performance. Flexible software drives business agility, I like to say, and nothing drives software flexibility like a development environment that generates code to support multiple systems. Integration, analytics, workflow and other aspects of running applications all converge into a development environment with a code generator.

Answering the Call

Vendors have always looked for proofs of concept that tell a bigger story than a simple use case, because they help sell product. In the releases and announcements coming out of the CRM world, we’re seeing more of them.

The cloud and cloud-based platforms are not perfect, but we’re light years ahead of where we were just a few years ago when, too often, if our businesses were presented with even routine challenges our almost default answer was “Our system won’t let us do that.”

In stark contrast, according to a video of Gina Raimondo that I saw at Dreamforce a few months ago, when the governor called Marc Benioff early last year looking for help with tracking and tracing, Benioff’s response was, “Okay, I’ll have somebody there on Monday.”

This approach to responsive technology isn’t a result of the pandemic; we’ve been on this path for twenty years. But the crisis has served to demonstrate the solution, and for the broader civilization I think it marks an important turning point, maybe like movable type.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.


Denis Pombriant is a well-known CRM industry analyst, strategist, writer and speaker. His new book, You Can’t Buy Customer Loyalty, But You Can Earn It, is now available on Amazon. His 2015 book, Solve for the Customer, is also available there.
Email Denis.


CRM Buyer


AI models from Microsoft and Google already surpass human performance on the SuperGLUE language benchmark

January 6, 2021   Big Data



In late 2019, researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind proposed SuperGLUE, a new benchmark for AI designed to summarize research progress on a diverse set of language tasks. Building on the GLUE benchmark, which had been introduced one year prior, SuperGLUE includes a set of more difficult language understanding challenges, improved resources, and a publicly available leaderboard.

When SuperGLUE was introduced, there was a nearly 20-point gap between the best-performing model and human performance on the leaderboard. But as of early January, two models — one from Microsoft called DeBERTa and a second from Google called T5 + Meena — have surpassed the human baselines, becoming the first to do so.

Sam Bowman, assistant professor at NYU’s center for data science, said the achievement reflected innovations in machine learning including self-supervised learning, where models learn from unlabeled datasets with recipes for adapting the insights to target tasks. “These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago,” he said. “There’s no reason to believe that SuperGLUE will be able to detect further progress in natural language processing, at least beyond a small remaining margin.”

But SuperGLUE isn’t a perfect, or even a complete, test of human language ability. In a blog post, the Microsoft team behind DeBERTa themselves noted that their model is “by no means” reaching the human-level intelligence of natural language understanding. They say this will require research breakthroughs, along with new benchmarks to measure them and their effects.

SuperGLUE

As the researchers wrote in the paper introducing SuperGLUE, their benchmark is intended to be a simple, hard-to-game measure of advances toward general-purpose language understanding technologies for English. It comprises eight language understanding tasks drawn from existing data and accompanied by a performance metric as well as an analysis toolkit.

The tasks are:

  • Boolean Questions (BoolQ) requires models to respond to a question about a short passage from a Wikipedia article that contains the answer. The questions come from Google users, who submit them via Google Search.
  • CommitmentBank (CB) tasks models with identifying a hypothesis contained within a text excerpt from sources including the Wall Street Journal and determining whether this hypothesis holds true.
  • Choice of Plausible Alternatives (COPA) provides a premise sentence about topics from blogs and a photography-related encyclopedia, from which models must determine either the cause or the effect from two possible choices.
  • Multi-Sentence Reading Comprehension (MultiRC) is a question-answer task where each example consists of a context paragraph, a question about that paragraph, and a list of possible answers. A model must predict which answers are true and false.
  • Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) has models predict masked-out words and phrases from a list of choices in passages from CNN and the Daily Mail, where the same words or phrases might be expressed using multiple different forms, all of which are considered correct.
  • Recognizing Textual Entailment (RTE) challenges natural language models to identify whether the truth of one text excerpt follows from another text excerpt.
  • Word-in-Context (WiC) provides models two text snippets and a polysemous word (i.e., word with multiple meanings) and requires them to determine whether the word is used with the same sense in both sentences.
  • Winograd Schema Challenge (WSC) is a task where models, given passages from fiction books, must answer multiple-choice questions about the antecedent of ambiguous pronouns. It’s designed to be an improvement on the Turing Test.

SuperGLUE also attempts to measure gender bias in models with Winogender Schemas, pairs of sentences that differ only by the gender of one pronoun in the sentence. However, the researchers note that Winogender has limitations in that it offers only positive predictive value: While a poor bias score is clear evidence that a model exhibits gender bias, a good score doesn’t mean the model is unbiased. Moreover, it doesn’t include all forms of gender or social bias, making it a coarse measure of prejudice.

To establish human performance baselines, the researchers drew on existing literature for WiC, MultiRC, RTE, and ReCoRD and hired crowdworker annotators through Amazon’s Mechanical Turk platform. Each worker, who was paid an average of $23.75 an hour, completed a short training phase before annotating up to 30 samples of selected test sets using instructions and an FAQ page.

Architectural improvements

The Google team hasn’t yet detailed the improvements that led to its model’s record-setting performance on SuperGLUE, but the Microsoft researchers behind DeBERTa detailed their work in a blog post published earlier this morning. DeBERTa isn’t new — it was open-sourced last year — but the researchers say they trained a larger version with 1.5 billion parameters (i.e., the internal variables that the model uses to make predictions). It’ll be released in open source and integrated into the next version of Microsoft’s Turing natural language representation model, which supports products like Bing, Office, Dynamics, and Azure Cognitive Services.

DeBERTa is pretrained through masked language modeling (MLM), a fill-in-the-blank task where a model is taught to use the words surrounding a masked “token” to predict what the masked word should be. DeBERTa uses both the content and position information of context words for MLM, such that it’s able to recognize that “store” and “mall” in the sentence “a new store opened beside the new mall” play different syntactic roles, for example.
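The fill-in-the-blank setup itself is easy to sketch: randomly hide a fraction of the tokens and ask the model to recover them from context. Below is a generic masking step; the 15% rate and the mask token id are illustrative defaults, not DeBERTa's exact recipe.

```python
# Generic masking step for masked language modeling (MLM); the 15% rate and the
# mask token id are illustrative defaults, not DeBERTa's exact recipe.
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, mask_prob: float = 0.15):
    labels = input_ids.clone()
    chosen = torch.rand(input_ids.shape) < mask_prob   # positions to hide
    masked = input_ids.clone()
    masked[chosen] = mask_token_id                     # replace chosen tokens with [MASK]
    labels[~chosen] = -100                             # ignore unmasked positions in the loss
    return masked, labels

token_ids = torch.randint(5, 30_000, (2, 12))          # toy batch of token ids
masked_ids, labels = mask_tokens(token_ids, mask_token_id=4)
```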

Unlike some other models, DeBERTa accounts for words’ absolute positions in the language modeling process. Moreover, it computes the parameters within the model that transform input data and measure the strength of word-word dependencies based on words’ relative positions. For example, DeBERTa would understand the dependency between the words “deep” and “learning” is much stronger when they occur next to each other than when they occur in different sentences.

DeBERTa also benefits from adversarial training, a technique that leverages adversarial examples derived from small variations made to training data. These adversarial examples are fed to the model during the training process, improving its generalizability.
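A generic way to picture adversarial training for text models is to perturb the input embeddings in the direction that most increases the loss and train on both the clean and perturbed versions. The sketch below shows that pattern; it is not the DeBERTa team's exact method, and `model` and `loss_fn` are placeholders for a model that accepts embeddings directly.

```python
# Generic sketch of adversarial training on input embeddings (FGSM-style);
# not the DeBERTa team's exact method. `model` must accept embeddings directly.
import torch

def adversarial_loss(model, embeds, labels, loss_fn, epsilon=1e-3):
    embeds = embeds.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeds), labels)
    # The gradient of the loss w.r.t. the embeddings points toward "harder" inputs.
    grad, = torch.autograd.grad(clean_loss, embeds, retain_graph=True)
    perturbed = (embeds + epsilon * grad.sign()).detach()
    adv_loss = loss_fn(model(perturbed), labels)       # train on the perturbed copy too
    return clean_loss + adv_loss
```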

The Microsoft researchers hope to next explore how to enable DeBERTa to generalize to novel tasks by composing subtasks or basic problem-solving skills, a concept known as compositional generalization. One path forward might be incorporating so-called compositional structures more explicitly, which could entail combining AI with symbolic reasoning — in other words, manipulating symbols and expressions according to mathematical and logical rules.

“DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI,” the Microsoft researchers wrote. “[But unlike DeBERTa,] humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration.”

New benchmarks

According to Bowman, no successor to SuperGLUE is forthcoming, at least not in the near term. But there’s growing consensus within the AI research community that future benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they’re to be useful.

For example, a number of studies show that popular benchmarks do a poor job of estimating real-world AI performance. One recent report found that 60%-70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.

Part of the problem stems from the fact that language models like OpenAI’s GPT-3, Google’s T5 + Meena, and Microsoft’s DeBERTa learn to write humanlike text by internalizing examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs.

As a result, language models often amplify the biases encoded in this public data; a portion of the training data is frequently sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias in some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

Most existing language benchmarks fail to capture this. Motivated by the findings in the two years since SuperGLUE’s introduction, perhaps future ones might.


Big Data – VentureBeat


Set Targets and Track Dynamics 365 CRM User Performance with Ease on Daily, Weekly or Monthly Basis

December 30, 2020   CRM News and Info


Looking for a complete Dynamics 365 CRM monitoring solution? Get User Adoption Monitor for seamless tracking of Dynamics 365 CRM user actions!

Recently, our user actions monitoring app – User Adoption Monitor, a Preferred App on Microsoft AppSource – released three new features, making it a formidable app for tracking user actions across Dynamics 365 CRM/Power Apps. In our previous blog, you were given in-depth information about one of the newly released User Adoption Monitor features – Data Completeness – which helps you ensure that all Dynamics 365 CRM records have the necessary information required to conduct smooth business transactions. And now, in this blog, we will shed more light on its remaining two new features – Aggregate Tracking & Target Tracking.

So, let’s see how these two new features will help you in tracking Dynamics 365 CRM/Power Apps user actions.

Aggregate Tracking

With this feature, you will be able to track the aggregations of the numeric fields of the entity on which a specific user action has been defined. For example, as a Sales Manager, you may want to track how much your team has sold over a specific period. To do this, all you have to do is configure aggregate tracking for the ‘Opportunity-win’ action. Once the tracking is done, you will get the SUM of the Actual Revenue of all the Opportunities won by each of your team members for a defined period of time. Similarly, you can get the aggregate value (SUM or AVG) of Budget Amount, Est. Revenue, Freight Amount, etc.
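Outside the app, the aggregation itself is straightforward to picture; the hypothetical pandas sketch below sums Actual Revenue of won opportunities per owner per month. The column names are made up for illustration and are not the actual Dynamics 365 schema or the User Adoption Monitor implementation.

```python
# Hypothetical illustration of aggregate tracking: SUM of Actual Revenue of won
# opportunities per owner per month. Column names are made up for the example.
import pandas as pd

opportunities = pd.DataFrame({
    "owner": ["Ann", "Ann", "Ben", "Ben"],
    "closed_on": pd.to_datetime(["2020-12-02", "2020-12-20", "2020-12-11", "2020-11-30"]),
    "actual_revenue": [12000.0, 8000.0, 15000.0, 9000.0],
    "status": ["Won", "Won", "Won", "Lost"],
})

won = opportunities[opportunities["status"] == "Won"]
monthly_sum = won.groupby(["owner", won["closed_on"].dt.to_period("M")])["actual_revenue"].sum()
print(monthly_sum)
```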


Target Tracking

Using this feature, you will be able to allot sales targets to your team members and keep regular track of them in Dynamics 365 CRM/Power Apps. You can set targets in both count and value. Consider a scenario where you want to appraise the performance of your sales team. With this feature, you can set a target for each of your team members and, based on the tracking results, analyze their performance easily. You can track the performance of your team members using this feature in the following ways:

Target based on Aggregation Value

If you want to track the total sales value generated by your team members then you can use this feature and set the target in aggregate value. Once set, you can now monitor the performance of your team members by comparing the aggregate value of the target set and the total aggregate value of the target achieved by them on a daily, weekly, or monthly basis.


Target based on Count

Similarly, if you want to keep tabs on the number of sales made by your team members, you can set the target as a count. Once set, you can monitor and compare the target set (in count) and the total target achieved (in count) by your team members on a daily, weekly, or monthly basis.


Target defined for Fixed Interval

In the above two scenarios, the Targets were defined with the Interval set as ‘Recurring’. In such cases, tracking will be done on a daily, weekly, or monthly basis. Other than that, you can also define the Target for a fixed period of time by setting the Interval as Fixed.

After you set the interval of Target Configuration as fixed, you can define the Start Date and End Date for which you want to set the target tracking.

With this, you can easily monitor and compare the target set and the total target achieved (both in count or aggregate value) by your team members for a given fixed period of time.
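In spreadsheet terms, the comparison boils down to achieved versus target per user for the chosen window. Here is a hypothetical sketch; the field names and figures are made up and do not reflect the app's data model.

```python
# Hypothetical sketch of target tracking: compare what each user achieved in a
# period against the target set for them. Names and figures are made up.
import pandas as pd

targets = pd.DataFrame({"owner": ["Ann", "Ben"], "target_revenue": [25000.0, 20000.0]})
achieved = pd.DataFrame({"owner": ["Ann", "Ben"], "achieved_revenue": [20000.0, 21500.0]})

report = targets.merge(achieved, on="owner", how="left").fillna(0.0)
report["attainment_pct"] = (report["achieved_revenue"] / report["target_revenue"] * 100).round(1)
print(report)   # Ann: 80.0%, Ben: 107.5%
```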


A handy app to have at appraisal time, isn’t it?

So, wait no more! Satisfy your curiosity by downloading User Adoption Monitor from our website or Microsoft AppSource and exploring these amazing features for a free trial of 15 days.

For a personal demo or any user actions monitoring queries, feel free to contact us at crm@inogic.com

Until next time – wishing you a safe and glorious New Year!

Keep Tracking Us!


CRM Software Blog | Dynamics 365


Performance of pure function – Best way to define a function?

October 28, 2020   BI News and Info



Recent Questions – Mathematica Stack Exchange


Improving Company Performance

July 13, 2020   CRM News and Info

Company performance relies on how well a company is led, how engaged its employees are, and the quality of the technology at hand, which effectively means the tools employees can leverage. Certainly, there are metrics for all of this. But the trouble with metrics is that they are collected and administered by management, which has a vested interest in the outcome of any survey.

Beagle Research Group took a different approach in a recent research project sponsored by Zoho Corporation. During the winter of 2020 Beagle asked rank and file employees what they thought. Beagle’s specific interest was to better understand employee engagement, and to be complete, Beagle also asked about management, from the employee perspective, as well as technology.

What Beagle found gratifying was that the professionalism of employees and the executives running their companies was quite good. Somewhat surprising was the opinion of employees about the technology systems they work with day to day.

Beagle’s conclusion is that to improve corporate performance today, one should first look at the quality of the technology that is used to run companies and face customers.

How Beagle came to those conclusions is an interesting story.

During the COVID winter Beagle surveyed 509 individuals from across America and Canada. They were specifically selected because they were line of business people and not managers. These people largely had customer-facing jobs.

Questions were grouped into three buckets:

  • Employee Engagement
  • Alignment and Competence
  • Technology

Engagement and technology are self-evident. Alignment and competence reflect how employees gauge their ability to align with company goals and their impression of how well their bosses convey what the company needs from their jobs.

Scoring the Results

Most of the survey questions were asked as ratings on a scale of one to five, with an answer of three being neutral. In most cases, the ratings corresponded to these sentiments:

  1. Completely Disagree
  2. Disagree
  3. Neutral
  4. Agree
  5. Completely Agree

In all cases, Beagle grouped the percentages (not the numeric values of the options) for 1 and 2 and for 4 and 5, giving scores for Disagree and Agree, and then divided Agree by Disagree to produce a ratio that they view as an important metric.

For example, this statement: “I have high satisfaction in the work I do.”

The results were 63 percent agreement, 13 percent disagreement and 21 percent neutral. Importantly, the ratio of 5.2 (agree/disagree) represents a relatively high consensus for the group, since most of the survey takers took a stand, with only 21 percent reporting as neutral.

This was the basis of scoring.

It’s important to note, however, that not all answers were as cut and dried. In another example, when asked to rate company leadership, 42 percent rated theirs exceptional or good; only 19 percent said theirs was poor or not good, and 39 percent remained neutral.

The high number of noncommittal answers gave no real majority but still provided a usable ratio. In this case, the rating ratio is just over 2:1, which falls into the needs-improvement category shown in the table below.

[Table: rating-ratio grading categories]
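The arithmetic behind these ratios is simple enough to show in code. The sketch below reproduces the leadership example (42 percent positive, 19 percent negative, 39 percent neutral); the split of positives and negatives across the individual one-to-five options is hypothetical, and only the grouped totals come from the article.

```python
# Sketch of the Agree/Disagree ratio arithmetic using the leadership example
# (42% positive, 19% negative, 39% neutral). The per-option split is hypothetical;
# only the grouped totals come from the survey write-up.
def rating_ratio(pct_by_option: dict) -> float:
    agree = pct_by_option.get(4, 0) + pct_by_option.get(5, 0)
    disagree = pct_by_option.get(1, 0) + pct_by_option.get(2, 0)
    return agree / disagree

leadership = {1: 9, 2: 10, 3: 39, 4: 30, 5: 12}   # hypothetical split summing to 42/19/39
print(round(rating_ratio(leadership), 1))          # -> 2.2, i.e. "just over 2:1"
```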

This scoring system presented Beagle with the big question of what to do with the answers in the middle, neutrals who refused to fish or cut bait. Neutrals were a big issue precisely because as the questions got more difficult, more people decided not to decide.

For example, for questions about rating their own employee engagement, the average neutral share was 23.2 percent and the ratio was 5.8:1 (outstanding); for questions about alignment and competency, the average share of neutrals rose to 28 percent with an average ratio of 3:1 (acceptable, but needs improvement); and for technology, the average share of neutrals was 29 percent and the average ratio was 1.97, technically a failing grade, though it rounds up to 2.0, or needs improvement.

Despite the survey being completely confidential, it seems that people were reluctant to say what they thought (and how could people have no opinion of such a vital part of their jobs?).

Beagle decided the neutrals provided vital insight because there was an inverse relationship between neutrals and positive scores. Simply put, high ratios had low numbers of neutrals, and they didn’t view that as an accident but as mutual reinforcement.

The scoring system is somewhat reminiscent of the Net Promoter Score (NPS) in which the scores of 7 and 8 on a scale of 0 to 10 are discarded, the zero through 6 scores are tallied and subtracted from the total of the nines and tens, leaving users with the net.
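For reference, the NPS arithmetic mentioned here looks like this; it is a standard sketch of the Net Promoter Score formula with toy responses, not data from the study.

```python
# Standard Net Promoter Score arithmetic for comparison: % promoters (9-10) minus
# % detractors (0-6); 7s and 8s are passives and drop out of the net.
def nps(scores: list) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 10, 9, 9, 2]))   # toy responses -> 20.0
```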

Conclusions

It all comes down to this. Professionalism is high in the ranks. Most employees like their jobs and are happy to do them. Moreover, they think their bosses are good. Bosses communicate effectively and are fair in their dealings.

Things break down over technology, though. Only half of the panel think the technologies they work with are good, which Beagle suggests is low. Also, only single-digit percentages report access to systems that recommend next best actions or offer voice recognition, two things indicative of advanced customer-facing tools.

Almost one third (29 percent) of the panel had no opinion of their technology which Beagle found high given the amount of time employees spend interfacing with technology each day. As mentioned above, technology gets a barely passing grade using this scoring method.

The inescapable conclusion drawn is that businesses across North America are led well and staffed with people who are engaged in what they’re doing. But technology is barely adequate — and because of this, managers should devote their attention to improving systems whenever they think about how to improve overall company performance.

If you’d like to play “what if” with the data go here.

About the Survey

Results come from 509 individual contributors culled equally from many industries and company sizes. Zoho Corporation paid for the study. Beagle Research Group, LLC is solely responsible for the analysis and results.

Publisher’s Note: ECT News Network received no financial compensation to publish the results of this sponsored research.


Denis Pombriant is a well-known CRM industry analyst, strategist, writer and speaker. His new book, You Can’t Buy Customer Loyalty, But You Can Earn It, is now available on Amazon. His 2015 book, Solve for the Customer, is also available there.
Email Denis.


CRM Buyer


Facebook claims wav2vec 2.0 tops speech recognition performance with 10 minutes of labeled data

June 23, 2020   Big Data

In a paper published on the preprint server Arxiv.org, researchers at Facebook describe wav2vec 2.0, an improved framework for self-supervised speech recognition. They claim it demonstrates for the first time that learning representations from speech, followed by fine-tuning on transcribed speech, can outperform the best semi-supervised methods while being conceptually simpler, achieving state-of-the-art results using just 10 minutes of labeled data and pretraining on 53,000 hours of unlabeled data.

AI models benefit from large amounts of labeled data — it’s how they learn to infer patterns and make predictions. However, as the coauthors of the paper note, labeled data is generally harder to come by than unlabeled data. Current speech recognition systems require thousands of hours of transcribed speech to reach acceptable performance, which isn’t available for the majority of the nearly 7,000 languages spoken worldwide. Facebook’s original wav2vec and other systems attempt to sidestep this with self-supervision, which generates labels automatically from data. But they’ve fallen short in terms of performance compared with semi-supervised methods that combine a small amount of labeled data with a large amount of unlabeled data during training.

Wav2vec 2.0 ostensibly closes the gap with an encoder module that takes raw audio and outputs speech representations, which are fed to a Transformer that ensures the representations capture whole-audio-sequence information. Created by Google researchers in 2017, the Transformer network architecture was initially intended as a way to improve machine translation. To this end, it uses attention functions instead of a recurrent neural network to predict what comes next in a sequence. This characteristic enables wav2vec 2.0 to build context representations over continuous speech representations and record statistical dependencies over audio sequences end-to-end.

[Figure: A diagram illustrating wav2vec 2.0’s architecture]

To pretrain wav2vec 2.0, the researchers masked portions of the speech representations (approximately 49% of all time steps with a mean span length of 299 milliseconds) and tasked the system with predicting them correctly. Then, to fine-tune it for speech recognition, they added a projection on top of wav2vec 2.0 representing vocabulary in the form of tokens for characters and word boundaries (e.g., word spaces of written English) before performing additional masking during training.
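The span-masking step can be sketched generically: sample start positions and blank out a fixed-length span from each. With the illustrative settings below, roughly half of the time steps end up masked, in line with the ~49% figure mentioned above; the code is an illustration, not Facebook's implementation.

```python
# Generic sketch of span masking over latent time steps, the kind of masking used
# in wav2vec 2.0's pretraining task; settings here are illustrative, not Facebook's code.
import numpy as np

def mask_spans(num_steps: int, span_len: int = 10, start_prob: float = 0.065) -> np.ndarray:
    """Sample span starts, then mask `span_len` consecutive steps from each start."""
    mask = np.zeros(num_steps, dtype=bool)
    starts = np.flatnonzero(np.random.rand(num_steps) < start_prob)
    for s in starts:
        mask[s:s + span_len] = True        # spans may overlap
    return mask

m = mask_spans(1_000)
print(f"{m.mean():.0%} of time steps masked")   # roughly half with these settings
```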


The coauthors trained wav2vec 2.0 on several unlabeled and labeled data sources for up to 5.2 days at a time on 128 Nvidia V100 graphics cards to evaluate the system’s performance. Fine-tuning took place on between eight and 24 graphics cards.

According to the team, the largest trained wav2vec 2.0 model — which was fine-tuned on only 10 minutes of labeled data (48 recordings with an average length of 12.5 seconds) — achieved a word error rate of 5.7 on the open source Librispeech corpus. (Here, “word error rate” refers to the number of errors divided by total words.) On a 100-hour subset of Librispeech, the same model managed a word error rate of 2.3 — 45% lower than the previous state of the art trained with 100 times less labeled data — and 1.9 when fine-tuned on even more data, a result competitive with top semi-supervised methods that rely on more sophisticated architectures.
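Since the results above are reported as word error rates, here is the standard way that metric is computed: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch:

```python
# Minimal word error rate (WER): word-level Levenshtein distance divided by the
# number of words in the reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                   # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                   # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + sub)      # substitution (or match)
    return d[len(ref)][len(hyp)] / len(ref)

print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))   # 1 error / 6 words = 0.167
```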

“[This] demonstrates that ultra-low resource speech recognition is possible with self-supervised learning on unlabeled data,” the researchers wrote. “We have shown that speech recognition models can be built with very small amounts of annotated data at very good accuracy. We hope our work will make speech recognition technology more broadly available to many more languages and dialects.”

Facebook used the original wav2vec to provide better audio data representations for keyword spotting and acoustic event detection and to improve its systems that proactively identify posts in violation of its community guidelines. It’s likely wav2vec 2.0 will be applied to the same tasks; beyond this, the company says it plans to make the models and code available as an extension to its fairseq modeling toolkit.


Big Data – VentureBeat


The Future Plant: Current Challenges In Asset Performance

May 12, 2020   SAP

Part 1 of a three-part series

When Horst started his work as a machine technician at a manufacturing plant 20 years ago, asset management looked very different than it looks today. Having climbed up the career ladder to become an asset manager, Horst has created a modern maintenance environment that tackles many of the major problems German manufacturing companies are concerned with.

Horst no longer has to do a daily tour through the plant to note downed-machine issues or check on maintenance-due dates. Instead, Horst uses asset management software that provides him a constant overview of all assets, right from his desk. Every asset is digitally represented by its digital twin and can permanently be monitored via a visual display.

By continuously collecting relevant data, designated devices automatically enrich an asset’s digital twin with information about its current performance and condition. Data analytics algorithms can use this information to generate a set of relevant KPIs throughout each asset’s entire lifecycle.

For Horst, it is crucial to always be prepared for any possible machine breakdown. Therefore, he is especially interested in knowing an asset’s mean time to failure (MTTF) or mean time between failures (MTBF), as well as the frequency of these incidents.

Knowing particular failures, and how often they typically occur with certain assets, helps Horst map machine problems to common failure modes and understand when a failure is likely to happen. It also supports him in grouping his assets into risk categories depending on how often failures occur with an asset, how severe they are, and how easy they are to detect. This entire process is called Failure Mode Analytics – an important analysis for strategic asset management that is strongly enabled by the ability to monitor each asset’s performance.

Two other important KPIs become relevant once a predicted failure occurs: mean time to repair (MTTR) and mean downtime. As the main measures of machine availability, these KPIs should be kept relatively low to enable a maximum level of production continuity.

Following the principles of lean management, Horst is constantly engaged in putting appropriate measures in place to reduce the time a machine is down for repair. In this context, the associated breakdown costs also play a meaningful role in managing asset performance.

Last, but not least, an asset’s comprehensive performance can be evaluated in the Overall Equipment Effectiveness KPI. This KPI indicates the percentage of time in which an asset is producing only good parts (quality) as fast as possible (performance) with no stop time (availability). Combining the aspects of quality, performance, and availability makes this measure a very powerful tool for Horst in assessing his assets and in gaining data-based knowledge about his overall plant productivity.
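The KPIs Horst relies on reduce to simple formulas; the small sketch below shows the arithmetic with made-up numbers rather than data from any real asset management system.

```python
# Illustrative arithmetic for the maintenance KPIs discussed above; numbers are made up.
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean time between failures."""
    return operating_hours / failures

def mttr(total_repair_hours: float, failures: int) -> float:
    """Mean time to repair."""
    return total_repair_hours / failures

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = availability x performance x quality."""
    return availability * performance * quality

print(mtbf(operating_hours=1450, failures=5))    # 290.0 hours between failures
print(mttr(total_repair_hours=10, failures=5))   # 2.0 hours average repair time
print(f"{oee(0.90, 0.95, 0.99):.1%}")            # 84.6% overall effectiveness
```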

The variety of different KPIs makes it possible to have continual, real-time insight into all assets and their performance. For Horst, who always needs to have a profound overview of his assets’ current state, this really makes life easier. More importantly, the asset performance software equips him with a reliable base for decision-making.

While in the past, most decisions were made based on gut feeling, today the digital twin and its KPIs serve as the source for making machine diagnoses and determining asset maintenance routines. Also, standardized KPIs allow comparisons between several groups of assets or across different plants. This makes processes more transparent and more reliable, therefore helping Horst achieve the best possible asset operation.

By enabling technologies for the smart factory, companies are achieving Mission Unstoppable: making facilities management a transparent, manageable process.


Digitalist Magazine
