
Switch from Old Record View to Kanban Board View to Maximize Business Productivity within Dynamics 365 CRM / PowerApps

January 24, 2021   CRM News and Info


Information lies at the heart of every decision we make in business. Well-informed decisions directly affect a business's current outcomes and shape its future course toward its desired goals and vision. So if information is this important, why not simplify how you work with it?

As Dynamics 365 CRM / PowerApps users, you are well aware of the expanse and complexity of the information available to you through this software. So the next order of business is to make the most of it. Here's where Kanban Board comes into the picture. Kanban Board (a Preferred App on Microsoft AppSource) is a card-based system that can change the way you categorize and visualize your Dynamics 365 CRM and PowerApps data.

Let's walk through the feature highlights of Kanban Board and its Kanban View to better understand their utility:

Kanban Visualization of Views

[Screenshot: Kanban visualization of a CRM View]

We are visual thinkers by nature; put simply, we understand information faster when we see it visualized. Banking on this insight, Kanban Board introduces a card-based visualization system that lets users display any CRM View as distinct lanes, called a Kanban View. This view allows for quick access and easy understanding and classification of the data, as you can see in the screenshot.

Business Process Flow

[Screenshot: Business Process Flow stages displayed as Kanban lanes]

A business process is a series of steps that pass through various transition phases. When a business process flow is defined for an entity, users can organize and categorize its records on the Kanban Board. For instance, in the 'Opportunities' entity, users can select the Business Process Flow defined for the entity and get a Kanban view of records at the various stages of the process.

Row Grouping

[Screenshot: records grouped into rows by field value]

Another powerful feature of Kanban View is that it lets users categorize and group records into rows based on any field value. For instance, suppose a user wants to group records by priority: High, Normal, Low, or Other. They can simply drag and drop a card from one row to another, and the corresponding field on the record is updated automatically in both lane and row!
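The mechanics described above can be sketched in a few lines: grouping records into lanes and rows is just bucketing on two field values, and a drag-and-drop simply rewrites those fields. The field names (`stage`, `priority`) and sample records below are illustrative stand-ins, not the actual Dynamics 365 schema or API.

```python
# Toy sketch of Kanban-style grouping: records are bucketed into lanes by one
# field (e.g. stage) and rows by another (e.g. priority); moving a card just
# rewrites those field values. Field names are hypothetical placeholders.
from collections import defaultdict

def build_board(records, lane_field, row_field):
    """Group records into board[row][lane] lists."""
    board = defaultdict(lambda: defaultdict(list))
    for rec in records:
        board[rec[row_field]][rec[lane_field]].append(rec)
    return board

def move_card(record, lane_field, row_field, new_lane, new_row):
    """Dragging a card across lanes/rows updates the underlying fields."""
    record[lane_field] = new_lane
    record[row_field] = new_row
    return record

records = [
    {"name": "Opp A", "stage": "Qualify", "priority": "High"},
    {"name": "Opp B", "stage": "Develop", "priority": "Low"},
]
board = build_board(records, "stage", "priority")

# "Drag" Opp B into the Qualify lane of the High-priority row, then rebuild.
move_card(records[1], "stage", "priority", "Qualify", "High")
board = build_board(records, "stage", "priority")
print(len(board["High"]["Qualify"]))  # both cards now share the lane/row
```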

Drag and Drop

[Screenshot: dragging a card across columns to update the category field]

Modern app and feature design places the user at the center, and drag-and-drop is one of the most popular user-focused functionalities. Kanban Board allows users to drag and drop cards across columns to quickly update the values of the underlying category field, as shown in the screenshot above.

Quick Activity Actions

[Screenshot: Quick Activity actions on a Kanban card]

Kanban Board gives users the option to perform some actions instantly by defining Quick Activity actions for records. At present, up to three quick activity actions can be defined. Selecting one displays a quick create form that comes pre-filled with references to the record in question.

So that's a look at this productivity app and how it transforms the user experience and maximizes output. But don't just take our word for it: explore more and try it yourself from our website, or find it on Microsoft AppSource. Wondering whether this is the right productivity app for you? Get a free demo by emailing us at crm@inogic.com and we'll walk you through the solution and how it can apply to your organization's needs.

Until next time – Stay safe and Kanbanize to optimize your working style!


CRM Software Blog | Dynamics 365


The Dynamics 365 Sales Mobile App Helps Salespeople Stay Productive From Anywhere

January 20, 2021   CRM News and Info


The new Dynamics 365 Sales mobile app, now available for preview, is optimized to help your Sales team stay productive from wherever they’re working. The key capabilities of the mobile app enable sellers to prepare more thoroughly for customer engagements, log and share information quickly, and easily find information they need. You will not only be able to view data from Dynamics 365, but you’ll also be able to view data from Exchange in the app.

Benefits of using the Dynamics 365 Sales Mobile App

  • Utilize time more effectively – Field sellers spend a lot of time on the road, traveling to meet clients. With the mobile app, “on-the-go” time becomes productive time.
  • Easy – The Dynamics 365 Sales mobile app is easy to use. Simply sign in with the same work email address you use for Dynamics 365. Salespeople can quickly find the information they need, including contacts and recent records. The app is exceedingly simple to navigate and is available on both iOS and Android.
  • Stay more organized – Salespeople can take post-meeting actions such as adding notes, creating contacts, or updating important data in relevant records. It becomes a cinch to stay up to date with important information.
  • Plan – The mobile app can be used to plan and map out your day – upcoming meetings, appointments, and so on. Upon opening the app, salespeople immediately see reminders about customer meetings and insights for the day.
  • Build customer relationships and loyalty – Salespeople have quick access to customer information on the go, making it easy to keep information up to date and to respond to customers more quickly. This not only simplifies the customer relationship, but also helps sellers focus on selling. Salespeople go into meetings better prepared, as they can review important customer information before customer engagements.

Home page

Upon opening the Dynamics 365 Sales mobile app on your mobile device, you'll see the home page. It provides high-level information on meetings and insight cards specific to you.

[Screenshot: Dynamics 365 Sales mobile app home page]

The home page displays five different types of information: meetings, recent contacts, recent records, reminders, and insights.

Meetings

The meetings section shows important information to salespeople about the last meeting they were in, as well as the next meeting coming up. They will also have the ability to see information on all meetings in this section.

[Screenshot: the meetings section]

To learn more, visit our blog.


How Profectus Delivers Value from Data

January 9, 2021   Sisense

Every company is becoming a data company. In Data-Powered Businesses, we dive into the ways that companies of all kinds are digitally transforming to make smarter data-driven decisions, monetize their data, and create companies that will thrive in our current era of Big Data.

Streamlining data management across high-volume transactions 

Profectus is an international technology and services company that provides leading technologies for rebate and deal management, contract compliance, and accounts payable audits. Founded 20 years ago, with offices in Australia, New Zealand, the USA, and the UK, its solutions are leveraged by 100 ASX-listed companies, including Westpac, HSBC, Coca-Cola Amatil, Vodafone, Coles, Kmart, JP Morgan, and Rio Tinto, to name a few.

For Profectus, data is absolutely everything: accounts payable data coming in as direct feeds from ERP finance systems, the hundreds of thousands of invoices Profectus' solutions ingest on behalf of its customers, and the agreement data those customers hold with their suppliers.

“We crunch enormous Accounts Payable data files, and thousands of rebate agreements, and invoices,” Profectus’ Chief Technology Officer, Mark Webster told attendees. “In the retail sector, for one of our biggest customers, we have 4TB of data that we crunch through every few months. That’s billions and billions of rows of data that we go through to find the different variances in order to find the best value for our customers.”


Part of Profectus’ suite of services is ensuring that every transaction is aligned with a particular deal. But Mark revealed that despite the data-rich services the company provides, a lot of teams still use Excel spreadsheets.

“These have their limitations due to their size and data sets,” he explained. “And when you become a large organization, spreadsheets just aren’t going to cut it for you anymore.”

According to Mark, Profectus found that, on average, between 3.5 and 4 transactions per 10,000 contained an error. This number may seem relatively insignificant, but repeated across millions or even billions of transactions, these errors add up.
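To see why such a small rate matters, a quick back-of-the-envelope calculation: at roughly 4 errors per 10,000 transactions, large volumes produce large absolute error counts. The volumes below are arbitrary examples for illustration, not Profectus figures.

```python
# Back-of-the-envelope: an error rate of ~4 per 10,000 transactions,
# scaled to large transaction volumes. Volumes are made-up examples.
error_rate = 4 / 10_000

for volume in (1_000_000, 1_000_000_000):
    expected_errors = volume * error_rate
    print(f"{volume:>13,} transactions -> ~{expected_errors:,.0f} expected errors")
```

At a million transactions that is already hundreds of errors; at a billion, hundreds of thousands, which is where the "millions of dollars" of recovered value comes from.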

“With our solutions, we’re able to save millions of dollars for our clients, simply because we are able to find the details of these transactions buried deep in the data,” Mark explained. “And the reason why we’re able to do that is because we really pride ourselves on focusing on the detail and accuracy of our data analysis. We don’t use aggregate data, we don’t use rollups. We use full detail — and that’s where we find the full value.”

Leveraging smarter data tools to unlock deeper insights

Profectus does a lot of processing, with around 90 people in their office busy “crunching” through row by row of data. But with the company growing fast, the challenge is finding a better way to boost the productivity, efficiency, and accuracy of processing these vast volumes of data at scale.

“Our COO was wondering, how could we possibly bring on more customers and then try to grow the team?” Mark said. “If a customer signs up, well sales are doing their jobs properly. But as they bring all these extra customers on, who can service them? Our business is growing, but our cost base is growing with it, because we just have to hire more and more people to trawl through more and more spreadsheets — that can’t be the sustainable way to do it.”

Profectus began looking for technology to take over and find a solution to automate the process of extracting extremely large volumes of data.

“We wanted to have algorithms, ‘visualization stations’, that actually tease out the differences in the data in a lot more automated way, so that we’re not just throwing more and more human capital at it, but actually leveraging smarter technology,” he added. “Spreadsheets just die at a certain size, and communicating the results becomes extremely difficult.”

“Think about the resources taken for teams to carefully handcraft and curate large spreadsheets, then attach them into an email. Then the customer comes back with various edits and more attachments. Trying to merge all the edits and figure out which version is the right one just gets out of control. And this whole process just breaks down at scale.” 

Discovering the “single source of truth” with Sisense

For Profectus, having a streamlined, automated online system, where there’s a single source of truth was their “holy grail” solution.

“We did a very thorough and rigorous examination of the BI space and we put all of the different platforms through the wringer, but Sisense came out as the leading BI solution on the market,” Mark said. “With Sisense, not only is the data stored safely and securely, but we can extract the full value from our data and get the consistent, repeatable, and scalable answers our business needs.

“We also are using embedded analytics, with a portal that our customers can log in to and see easily for a more unified customer experience — and Sisense allows us to do this far more easily.”

Importantly, it was the sheer scalable power of Sisense’s solution that Profectus found was unmatched in the market.

Unlocking data in Snowflake to deliver insights through Sisense

With a high-powered data warehouse in place, Profectus needed a tool to unlock data that answered critical business questions. Through a combined pairing of Sisense and Snowflake, the Profectus team is now able to unlock the data in Snowflake with datasets they provide, including CSVs, spreadsheets, and third-party API integrations. Snowflake’s speed supports the live connections, ensuring Profectus sees the freshest data in its warehouse whenever up-to-date metrics are needed.

“My team now relies on Sisense and Snowflake to simplify a variety of recurring data aggregation workflows, from reports to spend analysis. Anything that used to require manually aggregating and merging spreadsheets can be pulled out of Sisense.”

“As an example, we ran a representative data set that we had in our Snowflake data warehouse through a competing solution, but we killed the process at 20 minutes because that was already unacceptable both from a customer experience and cost perspective,” Mark explained. “With Sisense, we ran the same data set, and it processed the query within 20 seconds! That was our aha moment.”

“This sort of data efficiency gain is a big deal for us, because it helps us to achieve the scale we need to serve our customers and grow as an organization.” 

The data-driven vision for the future

Moving forward, Profectus is excited to reap the benefits of its new “project Delta,” which involves leveraging Sisense’s solution as part of a revolutionary shift towards smarter data-driven decision making.

“Project Delta for us is all about leveraging the right technology solutions to instigate new and exciting change,” Mark explained. “We want to enable behavior change in our customers, and for our customers to be able to optimize their business decisions, transform the way they do business with their suppliers, and help them enjoy much greater value. We’re confidently shifting towards automating a lot of our processing, taking the problem away from all the 90 people who have to manually check line after line of data, and actually getting the computer to do the job.”

“Importantly, we’re putting the right visualizations online to solve our communications problems, so our customers, their suppliers, and our own analysts can all log into the same solution and look at the same source of data treatment. They can all actually see the same story at the same time consistently, with full version control and no errors.”

Ultimately, Profectus wasn’t just looking for a “software vendor,” but a technology and business partner to work together, to help bring these great solutions to market.

“This is where Sisense really shines for us, because they have very much the same vision that we have around how to unlock insights from data and then take powerful actions based on those insights,” Mark added. “Sisense has a very compelling vision, which fits perfectly with what we’re trying to achieve.”


David Huynh is a Customer Success Manager with Sisense. He holds a degree in Business Information Systems and has spent the last 9 years in a variety of fields including sales and project management. David is passionate about helping businesses leverage data and technology to succeed. When not in the office, he enjoys cooking, travelling, and working on cars.


Blog – Sisense


Researchers design AI that can infer whole floor plans from short video clips

January 7, 2021   Big Data



Floor plans are useful for visualizing spaces, planning routes, and communicating architectural designs. A robot entering a new building, for instance, can use a floor plan to quickly sense the overall layout. Creating floor plans typically requires a full walkthrough so 3D sensors and cameras can capture the entirety of a space. But researchers at Facebook, the University of Texas at Austin, and Carnegie Mellon University are exploring an AI technique that leverages visuals and audio to reconstruct a floor plan from a short video clip.

The researchers assert that audio provides spatial and semantic signals complementing the mapping capabilities of images. They say this is because sound is inherently driven by the geometry of objects. Audio reflections bounce off surfaces and reveal the shape of a room, far beyond a camera’s field of view. Sounds heard from afar — even multiple rooms away — can reveal the existence of “free spaces” where sounding objects might exist (e.g., a dog barking in another room). Moreover, hearing sounds from different directions exposes layouts based on the activities or things those sounds represent. A shower running might suggest the direction of the bathroom, for example, while microwave beeps suggest a kitchen.

The researchers’ approach, which they call AV-Map, aims to convert short videos with multichannel audio into 2D floor plans. A machine learning model leverages sequences of audio and visual data to reason about the structure and semantics of the floor plan, finally fusing information from audio and video using a decoder component. The floor plans AV-Map generates, which extend significantly beyond the area directly observable in the video, show free space and occupied regions divided into a discrete set of semantic room labels (e.g., family room and kitchen).
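As a rough illustration of the encode-fuse-decode idea described above (and not the actual AV-Map architecture), the sketch below runs random stand-in audio and visual feature sequences through a concatenation "fusion" step and a single random-projection "decoder" to produce a per-cell semantic map. All dimensions and weights are placeholders; the real model is a learned network described in the paper.

```python
# Toy numpy sketch of the AV-Map pipeline: encode audio and visual frames,
# fuse the two feature sequences, decode a 2D semantic floor-plan grid.
import numpy as np

rng = np.random.default_rng(0)
T, D, H, W, C = 8, 32, 16, 16, 5  # timesteps, feature dim, map size, room classes

audio_feats = rng.standard_normal((T, D))   # stand-in for an audio encoder
visual_feats = rng.standard_normal((T, D))  # stand-in for a visual encoder

# "Fusion": concatenate per-timestep features, then pool over time.
fused = np.concatenate([audio_feats, visual_feats], axis=1).mean(axis=0)  # (2D,)

# "Decoder": a random projection from fused features to per-cell class scores.
W_dec = rng.standard_normal((2 * D, H * W * C))
scores = (fused @ W_dec).reshape(H, W, C)
floor_plan = scores.argmax(axis=-1)  # predicted room label per grid cell
print(floor_plan.shape)
```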


The team experimented with two settings, active and passive, in digital environments from the popular Matterport3D and SoundSpaces datasets loaded into Facebook's AI Habitat. In the first, a virtual camera emitted a known sound as it moved through the rooms of a model home. In the second, they relied only on naturally occurring sounds made by objects and people inside the home.

Across videos recorded in 85 large, real-world, multiroom environments within AI Habitat, the researchers say AV-Map not only consistently outperformed traditional vision-based mapping but improved the state-of-the-art technique for extrapolating occupancy maps beyond visible regions. With just a few glimpses spanning 26% of an area, AV-Map could estimate the whole area with 66% accuracy.

“A short video walk through a house can reconstruct the visible portions of the floorplan but is blind to many areas. We introduce audio-visual floor plan reconstruction, where sounds in the environment help infer both the geometric properties of the hidden areas as well as the semantic labels of the unobserved rooms (e.g., sounds of a person cooking behind a wall to the camera’s left suggest the kitchen),” the researchers wrote in a paper detailing AV-Map. “In future work, we plan to consider extensions to multi-level floor plans and connect our mapping idea to a robotic agent actively controlling the camera … To our knowledge, ours is the first attempt to infer floor plans from audio-visual data.”

VentureBeat

VentureBeat’s mission is to be a digital townsquare for technical decision makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you,
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more.

Become a member


Big Data – VentureBeat


AI models from Microsoft and Google already surpass human performance on the SuperGLUE language benchmark

January 6, 2021   Big Data



In late 2019, researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind proposed SuperGLUE, a new benchmark for AI designed to summarize research progress on a diverse set of language tasks. Building on the GLUE benchmark, which had been introduced one year prior, SuperGLUE includes a set of more difficult language understanding challenges, improved resources, and a publicly available leaderboard.

When SuperGLUE was introduced, there was a nearly 20-point gap between the best-performing model and human performance on the leaderboard. But as of early January, two models — one from Microsoft called DeBERTa and a second from Google called T5 + Meena — have surpassed the human baselines, becoming the first to do so.

Sam Bowman, assistant professor at NYU’s center for data science, said the achievement reflected innovations in machine learning including self-supervised learning, where models learn from unlabeled datasets with recipes for adapting the insights to target tasks. “These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago,” he said. “There’s no reason to believe that SuperGLUE will be able to detect further progress in natural language processing, at least beyond a small remaining margin.”

But SuperGLUE isn't perfect, nor is it a complete test of human language ability. In a blog post, the Microsoft team behind DeBERTa noted that their model is “by no means” reaching the human-level intelligence of natural language understanding. They say this will require research breakthroughs, along with new benchmarks to measure them and their effects.

SuperGLUE

As the researchers wrote in the paper introducing SuperGLUE, their benchmark is intended to be a simple, hard-to-game measure of advances toward general-purpose language understanding technologies for English. It comprises eight language understanding tasks drawn from existing data and accompanied by a performance metric as well as an analysis toolkit.

The tasks are:

  • Boolean Questions (BoolQ) requires models to respond to a question about a short passage from a Wikipedia article that contains the answer. The questions come from Google users, who submit them via Google Search.
  • CommitmentBank (CB) tasks models with identifying a hypothesis contained within a text excerpt from sources including the Wall Street Journal and determining whether that hypothesis holds true.
  • Choice of plausible alternatives (COPA) provides a premise sentence about topics from blogs and a photography-related encyclopedia from which models must determine either the cause or effect from two possible choices.
  • Multi-Sentence Reading Comprehension (MultiRC) is a question-answer task where each example consists of a context paragraph, a question about that paragraph, and a list of possible answers. A model must predict which answers are true and false.
  • Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) has models predict masked-out words and phrases from a list of choices in passages from CNN and the Daily Mail, where the same words or phrases might be expressed using multiple different forms, all of which are considered correct.
  • Recognizing Textual Entailment (RTE) challenges natural language models to identify whenever the truth of one text excerpt follows from another text excerpt.
  • Word-in-Context (WiC) provides models two text snippets and a polysemous word (i.e., word with multiple meanings) and requires them to determine whether the word is used with the same sense in both sentences.
  • Winograd Schema Challenge (WSC) is a task where models, given passages from fiction books, must answer multiple-choice questions about the antecedent of ambiguous pronouns. It’s designed to be an improvement on the Turing Test.

SuperGLUE also attempts to measure gender bias in models with Winogender Schemas, pairs of sentences that differ only by the gender of one pronoun in the sentence. However, the researchers note that Winogender has limitations in that it offers only positive predictive value: While a poor bias score is clear evidence that a model exhibits gender bias, a good score doesn’t mean the model is unbiased. Moreover, it doesn’t include all forms of gender or social bias, making it a coarse measure of prejudice.

To establish human performance baselines, the researchers drew on existing literature for WiC, MultiRC, RTE, and ReCoRD and hired crowdworker annotators through Amazon's Mechanical Turk platform. Each worker, who was paid an average of $23.75 an hour, completed a short training phase before annotating up to 30 samples of selected test sets using instructions and an FAQ page.

Architectural improvements

The Google team hasn’t yet detailed the improvements that led to its model’s record-setting performance on SuperGLUE, but the Microsoft researchers behind DeBERTa detailed their work in a blog post published earlier this morning. DeBERTa isn’t new — it was open-sourced last year — but the researchers say they trained a larger version with 1.5 billion parameters (i.e., the internal variables that the model uses to make predictions). It’ll be released in open source and integrated into the next version of Microsoft’s Turing natural language representation model, which supports products like Bing, Office, Dynamics, and Azure Cognitive Services.

DeBERTa is pretrained through masked language modeling (MLM), a fill-in-the-blank task where a model is taught to use the words surrounding a masked “token” to predict what the masked word should be. DeBERTa uses both the content and the position information of context words for MLM, such that it can recognize that “store” and “mall” in the sentence “a new store opened beside the new mall” play different syntactic roles, for example.
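The fill-in-the-blank setup can be illustrated with a toy example. The "predictor" below is just a bigram count over the article's own example sentence; a real MLM predictor is a large neural network, and everything here is purely illustrative.

```python
# Minimal sketch of the masked-language-modeling setup: hide a token,
# predict it from context. The "model" is a toy bigram count, not DeBERTa.
from collections import Counter

corpus = "a new store opened beside the new mall".split()

def mask(tokens, i):
    out = tokens.copy()
    out[i] = "[MASK]"
    return out

def predict_masked(tokens, i):
    """Toy predictor: most frequent corpus token following the left neighbour."""
    left = tokens[i - 1]
    followers = Counter(b for a, b in zip(corpus, corpus[1:]) if a == left)
    return followers.most_common(1)[0][0]

masked = mask(corpus, 2)  # "a new [MASK] opened beside the new mall"
print(predict_masked(masked, 2))
```

Since "new" precedes both "store" and "mall" in the corpus, even this toy predictor must break a tie; a real model resolves such ambiguity with the full surrounding context.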

Unlike some other models, DeBERTa accounts for words’ absolute positions in the language modeling process. Moreover, it computes the parameters within the model that transform input data and measure the strength of word-word dependencies based on words’ relative positions. For example, DeBERTa would understand the dependency between the words “deep” and “learning” is much stronger when they occur next to each other than when they occur in different sentences.
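A toy numpy sketch of that idea: the attention score between two words mixes a content-to-content term with terms that depend on their relative position, so the same word pair scores differently at different distances. The vectors, dimensions, and relative-position buckets below are random placeholders, not DeBERTa's actual parameterization.

```python
# Toy illustration of position-aware attention scoring: the score between
# two words combines content and relative-position terms. All vectors are
# random stand-ins for learned embeddings.
import numpy as np

rng = np.random.default_rng(1)
d = 8
content = {"deep": rng.standard_normal(d), "learning": rng.standard_normal(d)}
rel_pos_embed = {1: rng.standard_normal(d), 50: rng.standard_normal(d)}

def score(q_word, k_word, rel_pos):
    cc = content[q_word] @ content[k_word]         # content-to-content
    cp = content[q_word] @ rel_pos_embed[rel_pos]  # content-to-position
    pc = rel_pos_embed[rel_pos] @ content[k_word]  # position-to-content
    return cc + cp + pc

adjacent = score("deep", "learning", 1)    # words next to each other
far_apart = score("deep", "learning", 50)  # words far apart
print(adjacent != far_apart)  # relative position changes the score
```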

DeBERTa also benefits from adversarial training, a technique that leverages adversarial examples derived from small variations made to training data. These adversarial examples are fed to the model during the training process, improving its generalizability.
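The core move of adversarial training can be shown on a toy loss: perturb the input a small step in the direction that increases the loss, then train on the perturbed example as well. The "embedding" vector and quadratic loss below are invented for illustration and are not DeBERTa's actual training objective.

```python
# FGSM-style sketch of adversarial example generation on a toy quadratic
# loss. The vector x is a stand-in for a word embedding.
import numpy as np

x = np.array([1.0, -2.0, 0.5])  # stand-in embedding
target = np.zeros_like(x)

def loss(v):
    return 0.5 * np.sum((v - target) ** 2)

grad = x - target                    # analytic gradient of the toy loss
epsilon = 0.01
x_adv = x + epsilon * np.sign(grad)  # small step that increases the loss

print(loss(x_adv) > loss(x))  # the perturbed example is "harder"
```

Training on both `x` and `x_adv` is what pushes the model toward predictions that are stable under such small input variations.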

The Microsoft researchers hope to next explore how to enable DeBERTa to generalize to novel tasks by composing subtasks or basic problem-solving skills, a concept known as compositional generalization. One path forward might be incorporating so-called compositional structures more explicitly, which could entail combining AI with symbolic reasoning — in other words, manipulating symbols and expressions according to mathematical and logical rules.

“DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI,” the Microsoft researchers wrote. “[But unlike DeBERTa,] humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration.”

New benchmarks

According to Bowman, no successor to SuperGLUE is forthcoming, at least not in the near term. But there’s growing consensus within the AI research community that future benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they’re to be useful.

For example, a number of studies show that popular benchmarks do a poor job of estimating real-world AI performance. One recent report found that 60%-70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
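The contamination finding can be made concrete with a sketch of the simplest possible check: count how many benchmark answers appear verbatim in the training text. The two-line corpora below are invented stand-ins, and real studies use more careful matching than plain substring search.

```python
# Toy benchmark-contamination check: fraction of test answers that appear
# verbatim in the training text. Both corpora are invented examples.
train_text = "the eiffel tower is in paris . mount everest is the tallest mountain"

test_answers = ["paris", "mount everest", "k2", "london"]

def contamination_rate(answers, corpus):
    hits = sum(1 for a in answers if a in corpus)
    return hits / len(answers)

print(contamination_rate(test_answers, train_text))  # 0.5 for this toy data
```

A rate in the 60-70% range, as the cited report found, suggests a model can score well largely by recall rather than understanding.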

Part of the problem stems from the fact that language models like OpenAI’s GPT-3, Google’s T5 + Meena, and Microsoft’s DeBERTa learn to write humanlike text by internalizing examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs.

As a result, language models often amplify the biases encoded in this public data; a portion of the training data is not uncommonly sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

Most existing language benchmarks fail to capture this. Motivated by the findings of the two years since SuperGLUE's introduction, perhaps future benchmarks will.

VentureBeat

VentureBeat’s mission is to be a digital townsquare for technical decision makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you,
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more.

Become a member

Big Data – VentureBeat

OpenAI debuts DALL-E for generating images from text

January 6, 2021   Big Data

Transform 2021

Join us for the world’s leading event about accelerating enterprise transformation with AI and Data, for enterprise technology decision-makers, presented by the #1 publisher in AI and Data

Learn More


OpenAI today debuted two multimodal AI systems that combine computer vision and NLP. One of them, DALL-E, generates images from text. For example, the photo above for this story was generated from the text prompt “an illustration of a baby daikon radish in a tutu walking a dog.” DALL-E uses a 12-billion-parameter version of GPT-3 and, like GPT-3, is a Transformer language model. The name is a nod to the artist Salvador Dalí and the robot WALL-E.

Above: Examples of images generated from the text prompt “A stained glass window with an image of a blue strawberry”

Image Credit: OpenAI

Tests shared by OpenAI today appear to demonstrate that DALL-E can manipulate and rearrange objects in generated imagery, and can also create things that simply don’t exist, like a cube with the texture of a porcupine or a cube of clouds. Depending on the text prompt, some images generated by DALL-E appear as if they were taken in the real world, while others depict works of art. Visit the OpenAI website to try a controlled demo of DALL-E.

Above: cloud cube

“We recognize that work involving generative models has the potential for significant, broad societal impacts. In the future, we plan to analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology,” OpenAI said in a blog post about DALL-E today.

OpenAI also introduced CLIP today, a multimodal model trained on 400 million pairs of images and text collected from the internet. CLIP exhibits zero-shot learning capabilities akin to those of the GPT-2 and GPT-3 language models.

“We find that CLIP, similar to the GPT family, learns to perform a wide set of tasks during pre-training including optical character recognition (OCR), geo-localization, action recognition, and many others. We measure this by benchmarking the zero-shot transfer performance of CLIP on over 30 existing datasets and find it can be competitive with prior task-specific supervised models,” a paper about the model by 12 OpenAI coauthors reads.
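Zero-shot transfer in CLIP’s style works by comparing an image embedding against embeddings of candidate text prompts and picking the closest. A toy sketch of that idea, with made-up vectors standing in for real encoder outputs (this is illustrative only, not OpenAI’s code):

```python
import math

# CLIP-style zero-shot classification: embed the image and each candidate
# text prompt, then choose the prompt with the highest cosine similarity.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

image_embedding = [0.9, 0.1, 0.2]  # toy stand-in for an image encoder output
prompts = {
    "a photo of a dog": [0.8, 0.2, 0.1],  # toy text-encoder outputs
    "a photo of a cat": [0.1, 0.9, 0.3],
}
best_label = max(prompts, key=lambda p: cosine(image_embedding, prompts[p]))
```

Because the class set is expressed as text at inference time, no task-specific training data is needed, which is what makes the transfer “zero-shot.”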

Although testing found CLIP proficient at a number of tasks, it also showed that CLIP falls short on specialized tasks like satellite imagery classification or lymph node tumor detection.

“This preliminary analysis is intended to illustrate some of the challenges that general purpose computer vision models pose and to give a glimpse into their biases and impacts. We hope that this work motivates future research on the characterization of the capabilities, shortcomings, and biases of such models, and we are excited to engage with the research community on such questions,” the paper reads.

OpenAI chief scientist Ilya Sutskever was coauthor of a paper detailing CLIP, and seems to have alluded to the coming release of CLIP when he told deeplearning.ai recently that multimodal models would be a major machine learning trend in 2021. Google AI chief Jeff Dean made a similar prediction for 2020 in an interview with VentureBeat.

The release of DALL-E follows the release of a number of generative models with the power to mimic or distort reality or predict how people paint landscape and still life art. Some, like StyleGAN, have demonstrated a propensity to racial bias.

OpenAI researchers working on CLIP and DALL-E called for additional research into the potential societal impact of both systems. GPT-3 displayed significant anti-Muslim bias and negative sentiment scores for Black people, so the same shortcomings could be embedded in DALL-E. A bias test included in the CLIP paper found that the model was most likely to miscategorize people under 20 as criminals or non-human, that people classified as men were more likely to be labeled as criminals than people classified as women, and that some label data contained in the dataset is heavily gendered.

How OpenAI made DALL-E and additional details will be shared in an upcoming paper. Large language models that use data scraped from the internet have been criticized by researchers who say the AI industry needs to undergo a culture change.


You don’t code? Do machine learning straight from Microsoft Excel

December 31, 2020   Big Data


Machine learning and deep learning have become an important part of many applications we use every day. There are few domains that the fast expansion of machine learning hasn’t touched. Many businesses have thrived by developing the right strategy to integrate machine learning algorithms into their operations and processes. Others have lost ground to competitors after ignoring the undeniable advances in artificial intelligence.

But mastering machine learning is a difficult process. You need to start with a solid knowledge of linear algebra and calculus, master a programming language such as Python, and become proficient with data science and machine learning libraries such as Numpy, Scikit-learn, TensorFlow, and PyTorch.

And if you want to create machine learning systems that integrate and scale, you’ll have to learn cloud platforms such as Amazon AWS, Microsoft Azure, and Google Cloud.

Naturally, not everyone needs to become a machine learning engineer. But almost everyone who is running a business or organization that systematically collects and processes data can benefit from some knowledge of data science and machine learning. Fortunately, there are several courses that provide a high-level overview of machine learning and deep learning without going too deep into math and coding.

But in my experience, a good understanding of data science and machine learning requires some hands-on experience with algorithms. In this regard, a very valuable and often-overlooked tool is Microsoft Excel.

To most people, MS Excel is a spreadsheet application that stores data in tabular format and performs very basic mathematical operations. But in reality, Excel is a powerful computation tool that can solve complicated problems. Excel also has many features that allow you to create machine learning models directly in your workbooks.

While I’ve been using Excel’s mathematical tools for years, I didn’t come to appreciate its use for learning and applying data science and machine learning until I picked up Learn Data Mining Through Excel: A Step-by-Step Approach for Understanding Machine Learning Methods by Hong Zhou.

Learn Data Mining Through Excel takes you through the basics of machine learning step by step and shows how you can implement many algorithms using basic Excel functions and a few of the application’s advanced tools.

While Excel will in no way replace Python machine learning, it is a great window to learn the basics of AI and solve many basic problems without writing a line of code.

Linear regression machine learning with Excel

Linear regression is a simple machine learning algorithm that has many uses for analyzing data and predicting outcomes. Linear regression is especially useful when your data is neatly arranged in tabular format. Excel has several features that enable you to create regression models from tabular data in your spreadsheets.

One of the most intuitive is the data chart tool, which is a powerful data visualization feature. For instance, the scatter plot chart displays the values of your data on a Cartesian plane. But in addition to showing the distribution of your data, Excel’s chart tool can create a machine learning model that can predict the changes in the values of your data. The feature, called Trendline, creates a regression model from your data. You can set the trendline to one of several regression algorithms, including linear, polynomial, logarithmic, and exponential. You can also configure the chart to display the parameters of your machine learning model, which you can use to predict the outcome of new observations.

You can add several trendlines to the same chart. This makes it easy to quickly test and compare the performance of different machine learning models on your data.
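Under the hood, a linear trendline is just an ordinary least-squares fit. A rough Python analogue of what Trendline computes (assumed for illustration; the book itself works in Excel, not Python):

```python
# Ordinary least squares for y = slope * x + intercept, the model behind
# Excel's linear Trendline. The data points are invented for the example.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict the outcome of a new observation, like reading the trendline."""
    return slope * x + intercept
```

These two parameters, slope and intercept, are exactly what Excel displays when you configure the chart to show the trendline equation.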


Above: Excel’s Trendline feature can create regression models from your data.

In addition to exploring the chart tool, Learn Data Mining Through Excel takes you through several other procedures that can help develop more advanced regression models. These include formulas such as LINEST and LINREG, which calculate the parameters of your machine learning models based on your training data.

The author also takes you through the step-by-step creation of linear regression models using Excel’s basic formulas such as SUM and SUMPRODUCT. This is a recurring theme in the book: You’ll see the mathematical formula of a machine learning model, learn the basic reasoning behind it, and create it step by step by combining values and formulas in several cells and cell arrays.

While this might not be the most efficient way to do production-level data science work, it is certainly a very good way to learn the workings of machine learning algorithms.
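The cell-by-cell approach maps directly onto the least-squares normal equations: SUM and SUMPRODUCT accumulate exactly the sums the closed-form solution needs. A sketch of that mapping (illustrative; the comments name the Excel counterparts, and the data is invented):

```python
# The same least-squares coefficients, computed from the raw sums that
# Excel's SUM and SUMPRODUCT formulas would accumulate in cells.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1, so the fit should recover that

n = len(xs)
sum_x = sum(xs)                              # =SUM(x_range)
sum_y = sum(ys)                              # =SUM(y_range)
sum_xy = sum(x * y for x, y in zip(xs, ys))  # =SUMPRODUCT(x_range, y_range)
sum_x2 = sum(x * x for x in xs)              # =SUMPRODUCT(x_range, x_range)

# Normal equations for simple linear regression
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n
```

Laying the four sums out in separate cells and combining them with this formula is, in spirit, the step-by-step construction the book walks you through.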

Other machine learning algorithms with Excel

Beyond regression models, you can use Excel for other machine learning algorithms. Learn Data Mining Through Excel provides a rich roster of supervised and unsupervised machine learning algorithms, including k-means clustering, k-nearest neighbor, naive Bayes classification, and decision trees.

The process can get a bit convoluted at times, but if you stay on track, the logic will easily fall into place. For instance, in the k-means clustering chapter, you’ll get to use a vast array of Excel formulas and features (INDEX, IF, AVERAGEIF, ADDRESS, and many others) across several worksheets to calculate cluster centers and refine them. This is not a very efficient way to do clustering, but you’ll be able to track and study your clusters as they become refined in every consecutive sheet. From an educational standpoint, the experience is very different from programming books where you provide your data points to a machine learning library function and it outputs the clusters and their properties.
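The worksheet procedure boils down to the standard two-step k-means loop: assign each point to its nearest center, then recompute each center as the mean of its cluster. A minimal one-dimensional sketch (in Python for brevity; the book does this with worksheet formulas, and the data here is invented):

```python
# Minimal 1-D k-means: each loop iteration corresponds to one refinement
# worksheet in the book's Excel version.
points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
centers = [1.0, 9.0]  # initial guesses

for _ in range(10):
    # Assignment step: put each point in the cluster of its nearest center
    clusters = [[] for _ in centers]
    for p in points:
        idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        clusters[idx].append(p)
    # Update step: move each center to its cluster mean (like AVERAGEIF)
    centers = [sum(c) / len(c) for c in clusters]
```

With these starting centers the clusters stabilize after the first pass; in a spreadsheet you would see the same convergence by comparing consecutive sheets.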


Above: When doing k-means clustering on Excel, you can follow the refinement of your clusters on consecutive sheets.

In the decision tree chapter, you will go through the process of calculating entropy and selecting features for each branch of your machine learning model. Again, the process is slow and manual, but seeing under the hood of the machine learning algorithm is a rewarding experience.
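The quantity computed at each branch is Shannon entropy, and features are selected by information gain. A short sketch of both (these are the standard formulas, not code from the book, and the labels are invented):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

parent = ["yes"] * 4 + ["no"] * 4  # maximally mixed node: entropy = 1.0 bit
left = ["yes"] * 3 + ["no"]        # one child after a candidate split
right = ["yes"] + ["no"] * 3       # the other child
# Information gain = parent entropy minus the weighted child entropies;
# the feature with the highest gain becomes the branch.
gain = entropy(parent) - (len(left) / len(parent) * entropy(left)
                          + len(right) / len(parent) * entropy(right))
```

In the Excel version, each of these terms lives in its own cell, which is slow but makes every step of the calculation visible.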

In many of the book’s chapters, you’ll use the Solver tool to minimize your loss function. This is where you’ll see the limits of Excel, because even a simple model with a dozen parameters can slow your computer down to a crawl, especially if your data sample is several hundred rows in size. But the Solver is an especially powerful tool when you want to fine-tune the parameters of your machine learning model.
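Conceptually, what Solver does is iterative numerical minimization of the loss with respect to the parameters. A hand-rolled gradient-descent sketch of the same idea (illustrative only; Solver’s actual algorithms such as GRG differ, and the data is invented):

```python
# Fit y = a*x + b by minimizing mean squared error with gradient descent,
# the role Solver plays for the book's worksheet models.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # true line: y = 2x + 1

a, b = 0.0, 0.0  # the "changing cells" Solver would vary
lr = 0.02        # step size

for _ in range(5000):
    # Gradients of mean squared error with respect to a and b
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad_a
    b -= lr * grad_b
```

Each parameter update is cheap here, which is why a dozen parameters that crawl in a spreadsheet are instant in code.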


Above: Excel’s Solver tool fine-tunes the parameters of your model and minimizes loss functions.

Deep learning and natural language processing with Excel

Learn Data Mining Through Excel shows that Excel can even express advanced machine learning algorithms. There’s a chapter that delves into the meticulous creation of deep learning models. First, you’ll create a single-layer artificial neural network with fewer than a dozen parameters. Then you’ll expand on the concept to create a deep learning model with hidden layers. The computation is very slow and inefficient, but it works, and the components are the same: cell values, formulas, and the powerful Solver tool.
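A network that small can be written out directly. Here is the forward pass of a fixed 2-2-1 network that computes XOR, the classic problem a hidden layer solves (the weights are hand-picked for illustration, not trained, and this is not the book’s exact example):

```python
import math

# Forward pass of a tiny 2-input, 2-hidden, 1-output network. In the book's
# Excel version, each weight lives in a cell and each layer is a formula.
sig = lambda z: 1 / (1 + math.exp(-z))  # sigmoid activation

def xor_net(x1, x2):
    h1 = sig(10 * x1 + 10 * x2 - 5)    # hidden unit ~ logical OR
    h2 = sig(-10 * x1 - 10 * x2 + 15)  # hidden unit ~ logical NAND
    return sig(10 * h1 + 10 * h2 - 15) # output ~ AND(h1, h2) = XOR

outputs = {(a, b): round(xor_net(a, b)) for a in (0, 1) for b in (0, 1)}
```

Training replaces the hand-picked weights with values found by minimizing a loss, which is where Solver comes in on the spreadsheet side.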


Above: Deep learning with Microsoft Excel gives you a view under the hood of how deep neural networks operate.

In the last chapter, you’ll create a rudimentary natural language processing (NLP) application, using Excel to create a sentiment analysis machine learning model. You’ll use formulas to create a “bag of words” model, preprocess and tokenize hotel reviews, and classify them based on the density of positive and negative keywords. In the process, you’ll learn quite a bit about how contemporary AI deals with language and how different it is from the way we humans process written and spoken language.
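The keyword-density classifier described above can be sketched in a few lines. The word lists and review text below are invented for illustration; a real model would learn its vocabulary from labeled reviews:

```python
# Bag-of-words sentiment by keyword density: count positive vs. negative
# words in a tokenized review and compare. Word lists are made up.
positive = {"great", "clean", "friendly", "comfortable"}
negative = {"dirty", "rude", "noisy", "broken"}

def classify(review):
    # Crude preprocessing/tokenization, like the worksheet version
    tokens = review.lower().replace(".", " ").replace(",", " ").split()
    score = sum(t in positive for t in tokens) \
            - sum(t in negative for t in tokens)
    return "positive" if score > 0 else "negative"

label = classify("Great location, friendly staff, but the room was noisy.")
```

Note that the classifier has no notion of grammar or context; it only counts word occurrences, which is the point the chapter makes about how differently such models handle language.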

Excel as a machine learning tool

Whether you’re making C-level decisions at your company, working in human resources, or managing supply chains and manufacturing facilities, a basic knowledge of machine learning will be important if you will be working with data scientists and AI people. Likewise, if you’re a reporter covering AI news or a PR agency working on behalf of a company that uses machine learning, writing about the technology without knowing how it works is a bad idea (I will write a separate post about the many awful AI pitches I receive every day). In my opinion, Learn Data Mining Through Excel is a smooth and quick read that will help you gain that important knowledge.

Beyond learning the basics, Excel can be a powerful addition to your repertoire of machine learning tools. While it’s not good for dealing with big data sets and complicated algorithms, it can help with the visualization and analysis of smaller batches of data. The results you obtain from a quick Excel mining can provide pertinent insights in choosing the right direction and machine learning algorithm to tackle the problem at hand.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2020


Nuro becomes first company to receive commercial autonomous vehicle permit from California DMV

December 24, 2020   Big Data

Hours after announcing that it acquired self-driving truck startup Ike, Nuro revealed it’s the first company to receive permission from the California Department of Motor Vehicles (DMV) to charge a fee and receive compensation for its driverless delivery service. Unlike the autonomous testing licenses the California DMV previously granted to Nuro and others, which limited the compensation self-driving vehicle companies could receive, the deployment permit enables Nuro to make its technology commercially available.

Some experts predict the pandemic will hasten adoption of autonomous vehicles for delivery. Self-driving cars, vans, and trucks promise to minimize the risk of spreading disease by limiting driver contact. This is particularly true with regard to short-haul freight, which is experiencing a spike in volume during the outbreak. The producer price index for local truckload carriage jumped 20.4% from July to August, according to the U.S. Bureau of Labor Statistics, most likely propelled by demand for short-haul distribution from warehouses and distribution centers to ecommerce fulfillment centers and stores.

The California DMV permit allows Nuro to use a fleet of light-duty driverless vehicles for a delivery service on surface streets within designated parts of Santa Clara and San Mateo counties, including the cities of Atherton, East Palo Alto, Los Altos Hills, Los Altos, Menlo Park, Mountain View, Palo Alto, Sunnyvale, and Woodside. The vehicles have a maximum speed of 25 miles per hour and are only approved to operate in fair weather conditions on streets with a speed limit of no more than 35 miles per hour.

“This permit will allow our vehicles to operate commercially on California roads in two counties near our [Mountain View, California] headquarters in the Bay Area. Soon we will announce our first deployment in California with an established partner. The service will start with our fleet of Prius vehicles in fully autonomous mode, followed by our custom-designed electric R2 vehicles,” Nuro chief legal and policy officer David Estrada wrote in a blog post. “We have extensively tested our self-driving technology and built a track record of safe operations over the past four years, including two successful commercial deployments in other states and driverless testing with R2 in the Bay Area communities where we plan to deploy.”

In April, Nuro, which has over 600 employees, secured a permit from the California DMV to test driverless delivery vehicles on public roads within a portion of the San Francisco Bay Area. That followed the issuance of a DMV permit in 2017 requiring that the company employ safety drivers in its autonomous test vehicles on public roads. More recently, in February, the U.S. National Highway Traffic Safety Administration (NHTSA) granted Nuro an autonomous vehicle exemption that allowed the company to pilot its custom-designed R2 delivery vehicles on roads without certain equipment required for passenger vehicles.

For the better part of a year, Nuro’s fleet of Toyota Prius vehicles in Houston, Texas has been making deliveries to consumers from various partners, including Kroger, Domino’s, and Walmart. The company has deployed over 75 delivery vehicles to date, a mix of self-driving Priuses and R2s.


New York City Council votes to prohibit businesses from using facial recognition without public notice

December 11, 2020   Big Data

New York City Council today passed a privacy law for commercial establishments that prohibits retailers and other businesses from using facial recognition or other biometric tracking without public notice. If signed into law by NYC Mayor Bill de Blasio, the bill would also prohibit businesses from selling biometric data to third parties.

In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco, California and Somerville, Massachusetts are among the metros where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including face images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts have advanced a suspension of government use of any biometric surveillance system within the commonwealth. More recently, Portland, Maine approved a ballot initiative banning the use of facial recognition by police and city agencies.

The New York City Council bill, which was sponsored by Bronx Councilman Ritchie Torres, doesn’t outright ban the use of facial recognition technologies by businesses. However, it does impose restrictions on the ways brick-and-mortar locations like retailers, which might use facial recognition to prevent theft or personalize certain services, can deploy it. Businesses that fail to post a warning about collecting biometric data must pay $500. Businesses found selling data will face fines of $5,000.

In this aspect, the bill falls short of Portland, Oregon’s recently-passed ordinance regarding biometric data collection, which bans all private use of biometric data in places of “public accommodation,” including stores, banks, restaurants, public transit stations, homeless shelters, doctors’ offices, rental properties, retirement homes, and a variety of other types of businesses (excepting workplaces). It’s scheduled to take effect starting January 1, 2021.

“I commend the City Council for protecting New Yorkers from facial recognition and other biometric tracking. No one should have to risk being profiled by a racist algorithm just for buying milk at the neighborhood store,” Fox Cahn, executive director of the Surveillance Technology Oversight Project, said. “While this is just a first step towards comprehensively banning biometric surveillance, it’s a crucial one. We shouldn’t allow giant companies to sell our biometric data simply because we want to buy necessities. Far too many companies use biometric surveillance systems to profile customers of color, even though they are biased. If companies don’t comply with the new law, we have a simple message: ‘we’ll see you in court.’”

Numerous studies and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to bias. One issue is that the data sets used to train the algorithms skew white and male. IBM found that 81% of people in the three face-image collections most widely cited in academic studies have lighter-colored skin. Academics have found that photographic technology and techniques can also favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras.

“Given the current lack of regulation and oversight of biometric identifier information, we must do all we can as a city to protect New Yorkers’ privacy and information,” said Councilman Andrew Cohen, who chairs the Committee on Consumer Affairs. Crain’s New York reports that the committee voted unanimously in favor of advancing Torres’ bill to the full council hearing earlier this afternoon.

The algorithms are often misused in the field, as well, which tends to amplify their underlying biases. A report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The New York Police Department and others reportedly edit photos with blur effects and 3D modelers to make them more conducive to algorithmic face searches. And police in Minnesota have been using biometric technology from vendors including Cognitec since 2018, despite a denial issued that year, according to the Star Tribune.

Amazon, IBM, and Microsoft have self-imposed moratoriums on the sale of facial recognition systems. But some vendors, like Rank One Computing and Los Angeles-based TrueFace, are aiming to fill the gap with customers, including the City of Detroit and the U.S. Air Force.


Use hidden measures and members from #PowerBI dataset in an Excel Pivot table

December 3, 2020   Self-Service BI

When you connect to a Power BI dataset from Power BI Desktop, you might have noticed that you can see and use hidden measures and columns in the dataset.


But the hidden fields cannot be seen if you browse the dataset in Excel.


But that does not mean that you cannot use the fields in Excel – and here is how you can do it.

Using VBA

You can use VBA by creating a macro


The code will add the field AddressLine1 from the DimReseller dimension as a row field if the active cell contains a PivotTable.

Sub AddField()
    ' Grab the PivotTable under the active cell and add the hidden
    ' AddressLine1 field from the DimReseller dimension as a row field
    Dim pv As PivotTable
    Set pv = ActiveCell.PivotTable
    pv.CubeFields("[DimReseller].[AddressLine1]").Orientation = xlRowField
End Sub

If you want to add a measure/value to the PivotTable, you need to change the Orientation property to xlDataFields


This means that we now have added two hidden fields from the dataset


Add hidden measures using OLAP Tools

You can also add hidden measures using the OLAP Tools and MDX Calculated Measure


Simply create a new calculated measure by referencing the hidden measure in the MDX


This will add a calculated Measure to the measure group you selected


And you can add that to your PivotTable


Referencing hidden items using CUBE functions

Notice that you can also reference the hidden measures using CUBE functions


Simply specify the name of the measure as the member expression, in this case “[Measures].[Sales Profit]”

You can also refer to members from hidden fields using the CUBEMEMBER functions


Hope this can help you too.

Power On!


Erik Svensen – Blog about Power BI, Power Apps, Power Query

© 2021 Business Intelligence Info
Power BI Training | G Com Solutions Limited