
Can’t Add Members to a Marketing List From a Saved View in Dynamics 365

April 7, 2021   CRM News and Info

We recently found an issue in Dynamics 365 where a customer could not add members to a marketing list from a saved view. Read on to find out why.

THE PROBLEM

First, my customer confirmed she had a saved view of Contacts that brought up the correct records in Advanced Find. She then opened a new marketing list and went to Add Members using that saved view. Dynamics 365 displayed the criteria and then... boom! Zero contacts found. What?! We knew the criteria were good in Advanced Find, so what happened when we tried to use them to populate a marketing list?

THE EXPLANATION

After testing in different ways, it remained a mystery. Yes, the criteria looked a little different in the “Add Members” window, but all the logic appeared to still be there. Since this was a Dynamics 365 Online system, I opened a ticket with Microsoft support. Five minutes into the call, the Microsoft rep told me that the problem we were seeing was a known bug: sometimes the criteria from Advanced Find don’t translate correctly when a saved view is used to add members to a marketing list. It just happens, and not always with the same field. In our case, it was the “Email Messages (Regarding)” field. There was good news, though: the workaround is an easy fix.

THE WORKAROUND/SOLUTION (FOR NOW)

During our short call, the Microsoft rep told me that this issue is slated to be fixed in a future release, though which release, or when, is not yet known. The good news is that there is a way around the problem. Simply put, compare the saved view to the Add Members view, line by line. If any line in the “Add Members” view doesn’t match the saved view, change it so it does. In the example below, look at the Email Messages (Regarding) line and notice that it differs between the saved view and the Add Members view.

Advanced Find:

[Screenshot: the saved view criteria in Advanced Find]

Saved view criteria when used to add members to a marketing list:

[Screenshot: the same criteria as rendered in the Add Members window]

The “add members” criteria above found zero Contacts.

Changing the Email Messages (Regarding) filter in the “add members” screen to match the saved view retrieved the expected Contacts.

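If you would rather not eyeball the criteria in the UI, the same comparison can be scripted. The sketch below is a minimal Python illustration, not an official tool: it pulls the FetchXML behind a system view from the standard Dynamics 365 Web API (the savedqueries entity set) and diffs it against criteria you have copied into a local file. The org URL, token, view name, and file name are all placeholders.

```python
import difflib
import requests

ORG = "https://yourorg.crm.dynamics.com"  # placeholder org URL
TOKEN = "..."                             # placeholder OAuth bearer token
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

# Pull the FetchXML behind the saved view (system views live in the
# savedqueries entity set; personal views live in userqueries).
resp = requests.get(
    f"{ORG}/api/data/v9.2/savedqueries",
    headers=HEADERS,
    params={"$select": "name,fetchxml",
            "$filter": "name eq 'My Contact View'"},
)
resp.raise_for_status()
saved_view_xml = resp.json()["value"][0]["fetchxml"]

# Criteria captured from the Add Members window, saved to a local file.
with open("add_members_criteria.xml") as f:
    add_members_xml = f.read()

# Any -/+ pair in the diff is a condition that did not translate,
# such as the Email Messages (Regarding) filter in our case.
for line in difflib.unified_diff(
    saved_view_xml.splitlines(), add_members_xml.splitlines(), lineterm=""
):
    print(line)
```

A diff makes the one mismatched condition jump out immediately, which is much faster than comparing filter rows by hand when a view has many lines of criteria.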

Need help with Microsoft Dynamics 365 Marketing or other areas of the system?  We can help!

Get in touch with Beringer Technology Group today!

Beringer Technology Group, a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365 and CRM for Distribution, also provides expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing, Email Security Implementation and Training,  Unified Communication Solutions, and Cybersecurity Risk Assessment.


CRM Software Blog | Dynamics 365


“politics is downstream from culture”

April 3, 2021   Humor

“The idea that politics is downstream from culture was a term popularized by (the late) Andrew Breitbart. The basic premise is that politics respond to the culture in which it inhabits.”

Who knew, Big Tech is corporate communism, which may explain why there should be a Magnitsky law for the US. A Ghislaine Maxwell pardon could occur if she gave up those tapes of the previous guy.


Comic gold @mcsweeneys

‘The Boat Cops open a shipping container. It is full to the brim with every time a waiter said, “Enjoy your meal,” and I replied, “You too.”’ https://t.co/eUbGJbADHi

— COBadger (@jordanlgraham) March 29, 2021

BOAT COPS (grabbing bullhorn from the slavering media horde): MA’AM, IN THE PAST 24 HOURS, YOU HAVE HALTED $9.6 BILLION WORTH OF SHIPPING TRAFFIC. HOW DO YOU PLAN TO PAY FOR THIS?

ME: Oh god…

I open my online banking app. Somehow, it contains only a CVS receipt. Everyone groans.

BOAT COPS: We thought so.

Like rats fleeing a sinking vessel, all my teeth fall out at once. I scramble to catch them and stuff them in my pockets. Once again, my phone buzzes. It’s another text from my father.

DAD: So how’s the whole boat thing going?

a dead squirrel is right twice a day because one third of the trinity lays an egg

And Robin laid a Faberge egg.


.@navalny now “a personal prisoner of Putin,” the Kremlin killing him slowly, writes @vkaramurza: Navalny’s lawyers say he’s experiencing sleep deprivation, constant pain, denial of medical care–which hundreds of Russian doctors called deliberate torture. https://t.co/g01sdwFUC5

— Sasha Ingber (@SashaIngber) March 30, 2021


Rep. Marjorie Taylor Greene: “I call it corporate communism. These are private corporations who thrive on capitalism… But yet they are adapting these communist policies, just like the Democrats are.” pic.twitter.com/Bw2A0fow2C

— The Hill (@thehill) March 30, 2021

Trumpian Republicans often rail against “elites” — especially “coastal elites” — and big tech is one of their favorite targets. But liberal economist and New York Times columnist Paul Krugman is not swayed by their rhetoric. This week in his column, Krugman argues that Republicans in 2021 are still committed to anti-working class policies and make that painfully clear with their actions.

Krugman notes that the American Rescue Plan Act — a $1.9 trillion COVID-19 relief and stimulus bill that President Joe Biden recently signed into law — didn’t receive “a single Republican vote in Congress.” And he doubts that other legislation that helps average Americans will either.

Krugman writes, “Why are elected Republicans still so committed to right-wing economic policies that help the rich while shortchanging the working class?…. I ask why Republicans are ‘still’ committed to right-wing economics because in the past, there wasn’t any puzzle about their position.”

The economist notes that although Republicans have “managed to win elections by playing to the cultural grievances and racial hostility of working-class Whites,” the GOP never abandons its “pro-rich priorities.”

[…]

Krugman wraps up his column by stressing that when it comes to economic policy, the most important thing is not what Republicans say, but what they do.

“I suspect that the absence of true populism on the right has a lot to do with the closing of the right-wing mind,” Krugman writes. “The conservative establishment may have lost power, but its apparatchiks are still the only people in the GOP who know anything about policy. And big money may still buy influence even in a party whose energy comes mainly from intolerance and hate.”


Nike has filed a trademark infringement lawsuit against the creator of the viral and controversial “Satan Shoes” created as part of a promotion for rapper Lil Nas X’s new song. https://t.co/a9YM7XxxNE

— NowThis (@nowthisnews) March 31, 2021


moranbetterDemocrats


I’m a Tech Recruiter — These Stories from Women in Analytics Inspire Me

April 2, 2021   Sisense

In honor of Women’s History Month, I sat down (virtually) with a diverse panel of women from across Sisense in an open interview format. We dug into their unique challenges and successes to learn what inspires them as they blaze their trails in the tech world. From juggling work and life responsibilities and raising children to lifting up the voices of other marginalized individuals, these intrepid Sisensers do it all and are helping build a tech powerhouse at the same time!

It will take a lot of work to continue building a more equitable world and to forge the way for the next generation of women in tech; hopefully after hearing from these Sisensers, you’ll understand a bit about how far we’ve come and reaffirm your commitment to helping build a better future. 

Women inspiring women

CNBC reported that in 2021, women’s workforce participation (the share of women working or actively looking for work) hit a 33-year low — just 57%. After decades of positive trends for women in the workforce, the global pandemic has forced women out of the workplace like never before. Women — far more often than men — find themselves having to stay home with their families due to school closures and a lack of childcare options.

But at Sisense and many other high-tech companies, women are still a powerful part of the team.

“I’m in constant awe of working mothers who balance their home responsibilities with their day jobs,” said Melanie Tantingco, VP, Talent Acquisition.

As the number of women in the workforce is being forcibly reduced by the pandemic, it’s more important than ever to support the women on our teams. Being able to balance life and work is critical to keeping women in the workforce. But it does require some give and take from our teammates and the companies themselves. It can be done, but even under the best conditions, it’s not easy.

“Something that I love about working in tech is that I have always had the ability to work from home or even work in a hybrid model,” said Elise Woodard, Social Media Manager. “Since COVID hit in 2020, I’ve been home and working remotely with my four kids. Even though it’s incredibly difficult some days, I feel good setting the example for my kids that women can have powerful careers. This is especially important for me to show my daughter.”

Helping young girls find pathways into tech is so important. While some girls have mothers or other family members as examples, others are inspired by women who run organizations that encourage girls to explore STEM activities.

“I am inspired by Reshma Saujani, Founder of Girls who Code,” said Susanna Tharakan, Diversity & Inclusion Program Manager. “She’s a woman of color, like me, who paved her own path to success while bringing up other women alongside her. Her TEDx talk called ‘Teach Girls Bravery not Perfection’ was both inspiring and validating.”

Mentors matter

No one gets anywhere in life alone; we all need help. Since women understand the unique challenges that we go through in the workforce, it’s especially important that we support each other, whenever and however we can! In this spirit, one question we asked our panel was, “Who are some women who have helped you in your career?”

“My honest answer is my mom,” said Cody Young, Talent Acquisition Partner. “Although she doesn’t work in tech, she has had a successful career and has always been there to provide me with authentic guidance and has embodied what it means to balance career, home, and passion in the workplace. Growing up with a strong example of what it means to be a woman of color in the workplace without compromising her own values has helped me to lead with confidence without having to compromise my own values.”

The tech world can be challenging, but startups and even bigger companies like Sisense can offer women unique opportunities to support each other and help women grow and succeed.

“While working for my first startup, [a leader named] Osha Kondori advised me to negotiate a promotion that I clearly wanted but didn’t feel qualified to ask for,” said Mirijam Stewart, Customer Success Manager. “She knew my worth before I did. Then, she referred me to Periscope Data [which merged with Sisense]. In the past year I’ve been able to join her team during COVID-19 and tackle difficult conversations.”

Paving the way for the next generation of women in tech

When women succeed, we all succeed. Our society and our planet have a long way to go to create equality for women and young girls. Women who have already made inroads in the tech world can help those just starting their careers or who are still learning the skills they’ll need to succeed. Many people ask themselves, “How can I help the next generation of young women succeed in the tech world and beyond?” Our panel offers this guidance:

“Get involved with nonprofits or resource groups in underrepresented communities,” said Shannon Woodward, Account Development Representative. “Take an intersectional approach to involving yourself.”

“Do well yourself and talk about it,” said Shruthi Panicker, Senior Technical Product Marketing Manager. “Become friends with others and be part of the community.”

All children need support and encouragement, but especially girls who are interested in male-dominated worlds like tech, the sciences, and math. 

“Celebrate when girls explore their interests in STEM programs,” Melanie offered. “Gift them with non-gendered or science-based toys. Encourage them to take tech internships or seek out women mentors early in their career.”

“It’s important to talk to girls about technology at a young age,” Elise said. “I think we’ve made strides in the last generation, and I already see so many opportunities for my daughter in this field. We need to continue to foster an environment that encourages girls to thrive in tech.”

Celebrating Women’s History Month

Although I think we can all agree that women’s history should be celebrated beyond just one month of the year, it’s amazing to have a month dedicated to increasing awareness of women’s issues in society and celebrating the trailblazers who paved the way for women to get where we are today.

“It’s a time to be thankful for the women who have broken barriers that have allowed us to be in the rooms we are now,” said Cody.

“For me, Women’s History Month means celebrating how far we’ve come while recognizing the women in our lives who have helped us get to where we are today,” said Susanna. “Women supporting women. Women recognizing women. Shining a spotlight on those who have been overshadowed, talked over, and just not given the same attention as our male counterparts.”

Envisioning a better future for women in tech

There’s no telling what tomorrow will bring or what the future of tech will look like; however, we can all play a role in creating more opportunities for diverse communities and people who have historically been barred from progress or leadership. 

“Elevate more women who do not look like you,” said Mirijam. “Seek them out. Ask them about their perspective. Don’t ask for similar experience; ask for their story and look at what they’ve achieved.”

The past is not one story; nor is the future. It will be woven by the countless individuals who go out, every day, and try to make the world a better place, helping support each other — people with different backgrounds, outlooks, dreams, and goals. 

“I believe organizations need to do more in order to recruit and retain women in the tech industry,” said Mimi Mbaye, People Partner. “We need to support diversity, equity, and inclusion so more women can enter the space.”

Julia Casey is a Talent Acquisition Partner at Sisense. She has almost 3 years of experience in the tech industry recruiting top talent, creating a quality candidate experience, refining Sisense’s employer branding, and co-leading the in-house women’s Employee Resource Group, Zenith.

Tags: future of work | women’s history month


Blog – Sisense


AI could help advertisers recover from loss of third-party cookies

March 28, 2021   Big Data



Options for targeting digital advertising in a way that doesn’t rely on cookies are increasing, thanks to advances in predictive analytics and AI that will ultimately lessen the current dominance of Google, Facebook, and other large-scale content aggregators.

Google announced earlier this month that it will no longer allow third-party cookies to collect data via its Chrome browser. Many companies have historically relied on those cookies to better target their digital advertising, as the cookies enable digital ad networks and social media sites to create a profile of an end user without knowing specifically who that individual is. While that approach doesn’t necessarily breach anyone’s privacy, it does give many users the feeling that some entity is tracking the sites they visit in a way that makes them uncomfortable.

Providers of other browsers, such as Safari from Apple and the open source Firefox browser, have already abandoned third-party cookies. To be clear, Google isn’t walking away from tracking user behavior. Instead, the company has created a Federated Learning of Cohorts (FLoC) mechanism that tracks user behavior without depending on cookies to collect data. Rather than targeting an ad to a specific anonymous user, advertisers are presented with an opportunity to target groups of end users organized into cohorts based on data Google still collects.
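Google’s FLoC proposal described grouping browsers into cohorts with locality-sensitive hashing (SimHash) over browsing history. The toy Python sketch below illustrates only the core idea — similar histories hash to nearby (often identical) cohort IDs — and is not Chrome’s implementation; the domains are invented.

```python
import hashlib

def simhash(features, bits=16):
    """Toy SimHash: similar feature sets yield nearby fingerprints."""
    v = [0] * bits
    for f in features:
        h = int.from_bytes(hashlib.md5(f.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

# Two users with overlapping browsing histories land in nearby
# (often identical) cohorts; a third, different history does not.
alice = {"news.example", "recipes.example", "gardening.example"}
bob = {"news.example", "recipes.example", "travel.example"}
carol = {"crypto.example", "gaming.example", "vpn.example"}

for name, history in [("alice", alice), ("bob", bob), ("carol", carol)]:
    print(name, f"cohort={simhash(history):04x}")
```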

It remains to be seen how these initiatives might substantially change the user experience. However, some advertisers are now looking to employ machine learning algorithms and other forms of advanced analytics being made available via digital advertising networks to reduce their dependency on Google, Facebook, Twitter, Microsoft, and other entities that control massive online communities.

For example, Equifax, a credit reporting bureau, is working with Quantcast to place advertising closer to where relevant content is originally created and consumed, said Joella Duncan, director of media strategy for North America at Quantcast.

“We want our marketing teams to be able to pull more levers,” Duncan said. “Third-party cookies are stale.”

That approach provides the added benefit of lessening an advertiser’s dependency on walled online gardens dominated by a handful of companies, Quantcast CEO Konrad Feldman said.

At the core of the Quantcast platform is its Ara engine, which applies machine learning algorithms to data collected from 100 million online destinations in real time. That data is then analyzed using a set of predictive models that surface the behavioral patterns that make it possible to target ad campaigns. Those predictive models are scored a million times per second and continuously updated to reflect recent events across the internet. “We’re not dependent on only one technique,” Feldman said.
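Quantcast hasn’t published Ara’s internals, but the pattern described here — models that score incoming events and are continuously refreshed — is the classic online-learning loop. A minimal sketch using scikit-learn’s partial_fit, with synthetic data standing in for behavioral events:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")  # logistic regression, updated online

def event_stream(n_batches=100, batch=512, dim=20):
    """Stand-in for a real-time feed of behavioral feature vectors."""
    w = rng.normal(size=dim)
    for _ in range(n_batches):
        X = rng.normal(size=(batch, dim))
        y = (X @ w + rng.normal(scale=0.5, size=batch) > 0).astype(int)
        yield X, y

for X, y in event_stream():
    # Score first (e.g., decide whether to bid on an impression) ...
    if hasattr(model, "coef_"):
        scores = model.predict_proba(X)[:, 1]
    # ... then fold the new observations into the model.
    model.partial_fit(X, y, classes=[0, 1])
```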

That capability not only benefits clients such as Equifax, it also enables publishers of original content to retain a larger share of the advertising revenue generated. Google, Facebook, and Microsoft are all now moving toward compensating publishers for content that appears on their sites, but the bulk of the advertising revenue will still wind up in their coffers.

Quantcast is making a case for an alternative approach to digital advertising that would make it more evenly distributed. Advertisers are not likely to walk away from walled online gardens that make it cost-efficient for them to target millions of users. However, many of those same advertisers are looking for a way to more efficiently target narrower audience segments that might have a greater affinity for their products and services based on the content they regularly consume.

The AI and advanced analytics capabilities being embedded within digital advertising platforms may not upend the business models of Google, Facebook, and others, built on walled gardens that were themselves constructed using algorithms. But it’s becoming apparent that fissures in the walls of those gardens are starting to appear as other entities in the world of advertising apply their own AI countermeasures.


Big Data – VentureBeat


Not Getting the Most From Your Model Ops? Why Businesses Struggle With Data Science and Machine Learning

March 18, 2021   TIBCO Spotfire


Companies have begun to recognize the value of integrating data science (DS) and machine learning (ML) across their organization to reap the benefits of the advanced analytics they can provide. As such, DS/ML has seen a surge in popularity and usage as businesses have invested heavily in this technology. 

However, there’s a distinct difference between investing in DS/ML and managing to successfully gain tangible business value from that investment, and that’s where organizations are running into problems. 

The Results Are in: Businesses Struggle With DS/ML Deployment Across the Board

We recently performed a global survey across 18 countries and 22 industries, reaching over a hundred business leaders and executives, more than half of whom were in the C-suite.

Of those respondents, just 14 percent reported that they are currently operationalizing DS/ML. Within that 14 percent, 24 percent can only use it in one functional area, far below the potential innovative capability of the technology.  

Why are so few organizations able to follow through with model ops adoption? What are the barriers keeping businesses from operationalizing data science and machine learning?       

The Devil’s in the Data

According to the survey results, a lack of talented data scientists to build models made the top ten obstacles to DS/ML adoption, but it was cited by only about 16 percent of respondents. On the other hand, seven of the top ten obstacles, including the top four, were data-related. Issues with data security, data privacy, data prep, and data access, in particular, were each cited by 27 to 38 percent of respondents.

While there are many other issues to contend with, including a lack of management and financial support and of a clear integration strategy, security compliance and data privacy concerns are clearly a significant barrier to operationalizing DS/ML.

Why Overcoming These Problems Is Critical for Innovation

Data scientists can develop as many models as they want for a business, but if they don’t get deployed, then they aren’t providing any value. For the modern digital business to have any hope of keeping up with the competition, model ops is a vital tool that can allow them to effectively operationalize DS/ML models, putting them into production and applying them to streaming, real-time data, edge applications, and more. 

For a more in-depth breakdown of our survey results, you can check out our full ebook now. And if you’re ready to move past insights and into action, you can download our four-step guide to finding out what it takes to operationalize data science within your organization and get a leg-up on the competition.


The TIBCO Blog


Introducing the First Solution from Project Cortex: SharePoint Syntex

March 14, 2021   CRM News and Info

Think about how many documents are stored in your CRM system. There are thousands, if not millions, of forms, papers, policies, and reports in there. And chances are, you rely on that content to get work done, so it has to be organized.

Before software, we relied on the good old-fashioned filing cabinet to manage the chaos. That system came with a number of limitations though. If a specific document belonged in multiple folders, you had to copy it, then put it in a separate folder. To be even more organized, you had to put multiple folders inside bigger folders with their own separate label. While this does help to keep things organized, finding a specific document took forever. 

Well, believe it or not, we’ve actually carried over some of those limitations into our digital systems. How many folders are you using to keep track of all the information in your CRM? Is it easy to find what you’re looking for? Probably not. With the available technology we have, it’s time to implement a more effective way to manage content. And Microsoft has just announced this better way: SharePoint Syntex. 

This new solution is the first one released under Microsoft’s Project Cortex. Syntex was created to help businesses better manage the staggering amount of content stored in their systems and drive faster, smarter business outcomes.


SharePoint Syntex 

Using machine learning and advanced AI technology, Syntex is able to accumulate data from content, turn it into metadata, and store it within a SharePoint library. When asked about the purpose of Syntex, Microsoft leaders stated that it’s “designed to amplify human expertise, automate content processing, and transform content into knowledge. It delivers intelligent content services that work the way you do.” 

Here’s how it works: Syntex relies on models created by users to classify and organize documents, and then extract certain data. Users can also apply sensitivity or retention labels to safeguard private information.  

Ultimately, the purpose of the solution is to help users avoid the time and hassle of tagging each individual piece of content while helping users to find information faster. 

Syntex’s Three Buckets 

Let’s take a closer look at SharePoint Syntex. Essentially, it can be broken down into three buckets: content understanding, content processing, and content compliance. However, before we can utilize these buckets, a content center needs to be created. 


Content Center 

When you first set up Syntex, a default content center will be created. This is where all content models will be created and stored, and where you’ll apply models to certain documents. Admins can easily create additional centers as needed. 

Content Understanding 

After you’ve created a content center, you can start training Syntex models with sample files, such as Office documents, images, and PDFs. While a model only requires five positive file examples and one negative, the more files you upload, the more accurate Syntex becomes.

Next, content explanations are created. These usually include phrase lists (specific words or characters), pattern lists (patterns of numbers or characters), or proximity (how close explanations are to each other).

Once you’ve finished setting up training parameters, you can see how accurate each one is by testing them in your content center. 
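Syntex’s model builder is point-and-click, but its explanations map onto familiar pattern-matching ideas. Purely as an illustration of the concept — this is not the Syntex API — here is how a phrase list, a pattern list, and a proximity rule might combine to pull an invoice number out of a document; the phrases and pattern are invented for the example.

```python
import re

# Phrase list: words that signal the field we want is nearby.
PHRASES = ["invoice number", "invoice no", "invoice #"]
# Pattern list: the shape of the value itself (e.g., INV- plus digits).
PATTERN = re.compile(r"INV-\d{4,}")

def extract_invoice_number(text, window=40):
    """Find the pattern only within `window` chars of a phrase (proximity)."""
    lowered = text.lower()
    for phrase in PHRASES:
        idx = lowered.find(phrase)
        if idx != -1:
            nearby = text[idx : idx + len(phrase) + window]
            match = PATTERN.search(nearby)
            if match:
                return match.group()
    return None

doc = "Thank you for your order. Invoice No: INV-004217, due on receipt."
print(extract_invoice_number(doc))  # INV-004217
```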

Content Processing 

Content processing is where content is captured, ingested, and categorized, which helps streamline content-centric processes that rely on Power Automate. For example, if your team regularly relies on structured forms (surveys, purchase orders, invoices, etc.), you can train Syntex to gather certain values to create metadata that will help with search parameters and discovery.


Content Compliance 

After you’ve fully trained your models, it’s time to secure them. This bucket allows you to comply with your organization’s security and compliance policies by setting up sensitivity and retention labels that are automatically applied when content is uploaded. This way, everything stays secure. 

The Pros of Syntex 

How much content does your CRM have? Your intranet? Your website? It’s astounding to think about how many pieces of content your business relies on. But what’s even more astounding is how poor most organizations’ metadata strategies are.

SharePoint Syntex is designed to revamp old, outdated, and ineffective content management systems with something modern, cost-effective, and automated. 

We get it: no one enjoys content management. It’s time-consuming and monotonous, but without it, content discoverability is virtually non-existent. With Syntex, your organization will have access to knowledge that was previously unavailable without going through the daunting process of content tagging. 

“Syntex…provides a foundation for this concept of knowledge management and knowledge sharing in an organization,” says Dan Holem, a member of the Project Cortex team. “And Project Cortex is now applying AI to address the weak spots, which have been the actual gathering and unlocking of knowledge, the curation of it, and then the delivery into the context of your work.”

See full article here.

Get Started with Syntex by Contacting JourneyTEAM 

We’re just beginning to understand what we can do with Syntex. Microsoft leaders have already announced that additional features and tools are already in the works. But don’t wait to take advantage of Syntex’s benefits. Contact JourneyTEAM today to learn more about the solution and how it can benefit you. 


Article by: Jenn Alba – Marketing Manager – 801.938.7816

JourneyTEAM is an award-winning consulting firm with proven technology and measurable results. They take Microsoft products (Dynamics 365, SharePoint intranet, Office 365, Azure, CRM, GP, NAV, SL, AX) and modify them to work for you. The team has expert-level, Microsoft Gold-certified consultants who dive deep into the dynamics of your organization and solve complex issues. They have solutions for sales, marketing, productivity, collaboration, analytics, accounting, security and more. www.journeyteam.com


CRM Software Blog | Dynamics 365


Analyzing Data from Multiple Sources: The Key to More Powerful Insights

March 13, 2021   Sisense

Simple datasets just won’t cut it anymore. To get truly powerful insights, you need to pull in data from multiple sources. The more complex and diverse your datasets, the more surprising and potent the insights they’ll produce. Additional data sources increase your chances to inform actions, fueling top-line and bottom-line growth.

How does it work in real life? Read on to find out how Measuremen optimizes workspace utilization, Skullcandy minimizes product returns, and Air Canada improves airline safety.


Measuremen: Optimizing facilities use with data from numerous sources

When Measuremen CEO Vincent le Noble began the company in 2005, he wanted to help his clients make the best use of their workspaces. He leaned into his facilities-management experience, taking careful note of how organizations used desks, chairs, meeting rooms, and amenities. At the start, Measuremen observed and recorded utilization data and began to log the kinds of activities each space enabled (such as collaboration, individual work, and high-concentration work). 

Since those early days, Measuremen has broadened its data sources. A mobile app is used to collect granular information about how each component or meeting space is used. The app not only documents utilization data, but allows users to add subjective inputs such as personal preferences. According to Vincent, it allows Measuremen to ask customers questions such as: what made users come to the office, how they foresee the future for their departments, whether they will grow, whether they will shrink, and what kind of activities they will be doing.

In the last two years, Measuremen has added location-based sensors, which record data on a more permanent and real-time basis. The result of the various inputs — self-reporting, app-based logs, static sensors, and other data sources — allows Measuremen to assess much more than unused desk spaces or underutilized meeting rooms. It can give businesses a better understanding of how employees experience the workplace and let companies tailor their resources to better suit employee needs. As a result, companies can proactively address more challenging problems like employee productivity, retention, and work-life quality. And Measuremen is hardly finished.

“We’ve been deepening our analysis for the last four years now with Sisense,” said Vincent. “All the data streams combined … give us the insights and decision-making power to help users to improve workplaces and work life for employees. And that journey is still going on.” 

Skullcandy: Listening to the market

The datasets you collect, the way you combine them, and the insights that can come from them depend on your industry, the realities of your business, and your imagination. Personal audio brand Skullcandy had a huge dataset and a huge challenge: using analytics to explore returns and reviews data to inform future product decisions.

Machine learning and predictive modeling allowed the company to use complex historical warranty claim and cost information, previous and new product attributes, and forecasting data to create a predictive data model for future warranty costs. The information not only helps Skullcandy with resource allocation for future warranty fulfillment, but can also drive design improvements. 

Skullcandy’s methodology included delving into sentiment analysis gleaned from online reviews and other customer feedback, which gave rise to exciting revelations. For example, when customer comments focus on a particular defect, Skullcandy can pinpoint the problem, examine the future warranty claims effects, and engage its engineers to make design modifications to help head off these returns. Skullcandy is also exploring ways to use disparate data streams to inform decision-making that improves customer relationships, customer education, and e-commerce ecosystems.
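Skullcandy’s actual pipeline isn’t public, but the shape of the approach — joining review-derived signals with claim history to predict warranty cost — can be sketched in a few lines. All column names and figures below are invented for illustration.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative inputs: per-product claim history and review sentiment.
claims = pd.DataFrame({
    "product_id": [1, 2, 3, 4],
    "claim_rate": [0.04, 0.11, 0.02, 0.08],  # historical warranty claims
    "unit_cost": [29.0, 79.0, 19.0, 99.0],
})
reviews = pd.DataFrame({
    "product_id": [1, 2, 3, 4],
    "avg_sentiment": [0.6, -0.2, 0.8, 0.1],  # e.g., from a sentiment model
    "defect_mentions": [3, 41, 1, 17],       # "broken hinge", "no audio", ...
})

df = claims.merge(reviews, on="product_id")
X = df[["unit_cost", "avg_sentiment", "defect_mentions"]]
y = df["claim_rate"] * df["unit_cost"]       # expected warranty cost per unit

model = GradientBoostingRegressor().fit(X, y)
# Score a hypothetical new product's attributes before launch.
print(model.predict(pd.DataFrame(
    {"unit_cost": [59.0], "avg_sentiment": [0.3], "defect_mentions": [5]}
)))
```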

Air Canada: Taking data to new heights

The value of large, varied data sources is becoming obvious in air travel, too. Air Canada uses Sisense to collect and translate a wide variety of safety, quality, environmental, and security data. Safety Analytics & Innovation Manager Shaul Shalev said, “We collect hundreds of gigs of data … but unless you have a clear method of slicing and dicing that data and presenting it to users, it’s not really useful. With a tool like Sisense, it changes the game altogether.” 

The ability to collect data and make it useful allows Air Canada to identify important insights and extract actionable intel so frontline employees can respond to it in real time. In a more forward-looking posture, AI can employ the data to predict component failure, so Air Canada can replace parts before they cease functioning.

Seek out varied datasets to transform your business

The power of vast, varied data sources isn’t constrained to any segment of the economy. Here are a few more stories of companies finding success with large, complex datasets:

The takeaway is that you should cast the widest net you can and leverage whatever data you can get your hands on. The benefits you’ll reap from bringing that data together can help drive revolutionary change at your business and help it evolve in a tumultuous business world.

Adam Luba is an Analytics Engineer at Sisense who boasts almost five years in the data and analytics space. He’s passionate about empowering data-driven business decisions and loves working with data across its full life cycle.


Blog – Sisense


Qlik makes pulling data from SAP applications simpler

March 12, 2021   Big Data



Qlik, a provider of data integration and analytics software, this week announced it has made it easier to pull data from SAP applications using a set of accelerators optimized for specific business processes.

The first in what will become a series of accelerators is focused on SAP Order to Cash analytics. The accelerators combine data integration and analytics software from Qlik to reduce the time and effort required to surface insights from specific processes within an SAP enterprise resource planning (ERP) application.

The overall goal is to make it simpler to pull data from any SAP application into the data warehouse of their choice, said Matt Hayes, vice president of SAP Business at Qlik.

While SAP provides its own connectors, its primary focus is on moving data from its applications to a data warehouse based on the SAP HANA database, Hayes said. In contrast, the more agnostic connectors Qlik provides make it easier to pull data from an SAP application into data lakes provided by, for example, Amazon Web Services (AWS), Microsoft, Google, or Snowflake. He asserted that Qlik’s data integration software enables real-time delivery of SAP data from any source to any target.

The focus on data integration and analytics within enterprise IT environments has never been greater. Due to the economic downturn brought on by the COVID-19 pandemic, business leaders are trying to optimize a wide range of processes in real time. Achieving that goal requires increased reliance on analytics applications, which are only as useful as the most recent data collected.

As organizations launch digital business transformation initiatives, many are discovering they need to pull data from the SAP applications that function as systems of record in the enterprise. The challenge has always been that data from an SAP application then has to be normalized alongside other data so end users can surface insights from data created by multiple applications. Many organizations are now investing in cloud data warehouses to collect massive amounts of data that can be accessed more easily by a range of analytics applications.

Qlik is making a case for a Qlik Sense analytics application that runs in memory to make it easier to surface insights in near real time as data is continuously pulled from various sources. “You never have to refresh the data,” Hayes said.

However, IT teams can employ the data integration software Qlik provides without having to adopt Qlik Sense. This week Qlik revealed it has developed a unified connector based on SAP BEx/InfoProvider joint connectivity and SAP SQL connector software to streamline the process of pulling data from any SAP application into the Qlik Sense Enterprise edition of its analytics software.

SAP has never been especially focused on making it simpler to pull data from its applications and databases. The company has its own portfolio of analytics applications that are tightly integrated with its data warehouse and associated data virtualization tools. However, many organizations have standardized on a wide range of applications that are employed to analyze data aggregated from multiple data sources. Many of the users of those applications want to be able to access data without IT intervention, Hayes noted. Most IT teams are inclined to enable that access so long as the integration and analytics software employed doesn’t have a material impact on the performance of the SAP applications they are running, added Hayes.

Qlik minimizes that impact by identifying which subset of data in an SAP application is actually new, rather than constantly pulling all of the data into an analytics application, Hayes said.
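The article doesn’t detail how Qlik’s change detection works internally, but the delta-only idea it describes is commonly implemented with a watermark: remember the newest change timestamp you have loaded and ask the source only for rows after it. A minimal sketch, with a hypothetical sap_orders feed and SQLite standing in for the target:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("warehouse.db")  # stand-in for the target warehouse
conn.execute(
    "CREATE TABLE IF NOT EXISTS etl_state (tbl TEXT PRIMARY KEY, watermark TEXT)"
)

def load_delta(source_rows, table="sap_orders"):
    """Pull only rows changed since the stored watermark."""
    row = conn.execute(
        "SELECT watermark FROM etl_state WHERE tbl = ?", (table,)
    ).fetchone()
    watermark = row[0] if row else "1970-01-01T00:00:00+00:00"

    fresh = [r for r in source_rows if r["changed_at"] > watermark]
    if fresh:
        new_mark = max(r["changed_at"] for r in fresh)
        conn.execute(
            "INSERT INTO etl_state (tbl, watermark) VALUES (?, ?) "
            "ON CONFLICT(tbl) DO UPDATE SET watermark = excluded.watermark",
            (table, new_mark),
        )
        conn.commit()
    return fresh

now = datetime.now(timezone.utc).isoformat()
print(load_delta([{"order_id": 1, "changed_at": now}]))  # first run: 1 row
print(load_delta([{"order_id": 1, "changed_at": now}]))  # second run: []
```

A production pipeline would typically read the source’s change log rather than comparing timestamps, but the watermark bookkeeping is the same.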


Big Data – VentureBeat


Facebook’s new computer vision model achieves state-of-the-art performance by learning from random images

March 4, 2021   Big Data



Facebook today announced an AI model trained on a billion images that ostensibly achieves state-of-the-art results on a range of computer vision benchmarks. Unlike most computer vision models, which learn from labeled datasets, Facebook’s generates labels from data by exposing the relationships between the data’s parts — a step believed to be critical to someday achieving human-level intelligence.

The future of AI lies in crafting systems that can make inferences from whatever information they’re given without relying on annotated datasets. Provided text, images, or another type of data, an AI system would ideally be able to recognize objects in a photo, interpret text, or perform any of the countless other tasks asked of it.

Facebook claims to have taken a step toward this with a computer vision model called SEER, which stands for SElf-supERvised. SEER contains a billion parameters and can learn from any random group of images on the internet without the need for curation or annotation. Parameters, a fundamental part of machine learning systems, are the part of the model derived from historical training data.

New techniques

Self-supervision for vision is a challenging task. With text, semantic concepts can be broken up into discrete words, but with images, a model must decide for itself which pixel belongs to which concept. Making matters more challenging, the same concept will often vary between images. Grasping the variation around a single concept, then, requires looking at a lot of different images.

Facebook researchers found that scaling AI systems to work with complex image data required at least two core components. The first was an algorithm that could learn from a vast number of random images without any metadata or annotations, while the second was a convolutional network — ConvNet — large enough to capture and learn every visual concept from this data. Convolutional networks, which were first proposed in the 1980s, are inspired by biological processes, in that the connectivity pattern between components of the model resembles the visual cortex.

In developing SEER, Facebook took advantage of an algorithm called SwAV, which was borne out of the company’s investigations into self-supervised learning. SwAV uses a technique called clustering to rapidly group images from similar visual concepts and leverage their similarities, improving over the previous state-of-the-art in self-supervised learning while requiring up to 6 times less training time.
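SwAV’s real objective is an online clustering loss with swapped prediction between augmented views of the same image, trained end to end. As a loose conceptual sketch only — random vectors standing in for image embeddings, and the agreement merely measured rather than optimized — the cluster-codes-as-pseudo-labels idea looks like this:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for embeddings of two augmented views of the same images.
base = rng.normal(size=(256, 64))
view_a = base + rng.normal(scale=0.1, size=base.shape)
view_b = base + rng.normal(scale=0.1, size=base.shape)

# Cluster one view to get "codes" (pseudo-labels), then check that the
# other view of the same image maps to the same code -- the swapped
# prediction idea SwAV trains on.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(view_a)
codes_a = kmeans.labels_
codes_b = kmeans.predict(view_b)
agreement = (codes_a == codes_b).mean()
print(f"swapped-view code agreement: {agreement:.2%}")
```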

[Figure: A simplified schematic of SEER’s model architecture. Image credit: Facebook]

Training models at SEER’s size also required an architecture that was efficient in terms of runtime and memory without compromising on accuracy, according to Facebook. The researchers behind SEER opted to use RegNets, or a type of ConvNet model capable of scaling to billions or potentially trillions of parameters while fitting within runtime and memory constraints.

Facebook software engineer Priya Goyal said SEER was trained on 512 NVIDIA V100 GPUs with 32GB of RAM for 30 days.

The last piece that made SEER possible was a general-purpose library called VISSL, short for VIsion library for state-of-the-art Self Supervised Learning. VISSL, which Facebook is open-sourcing today, allows for self-supervised training with a variety of modern machine learning methods. The library facilitates self-supervised learning at scale by integrating algorithms that reduce the per-GPU memory requirement and increase the training speed of any given model.

Performance and future work

After pretraining on a billion public Instagram images, SEER outperformed the most advanced state-of-the-art self-supervised systems, Facebook says. SEER also outperformed models on tasks including object detection, segmentation, and image classification. When trained with just 10% of the examples in the popular ImageNet dataset, SEER still managed to hit 77.9% accuracy. And when trained with just 1%, SEER was 60.5% accurate.

When asked whether the Instagram users whose images were used to train SEER were notified or given an opportunity to opt out of the research, Goyal noted that Facebook informs Instagram account holders in its data policy that it uses information like pictures to support research, including the kind underpinning SEER. That said, Facebook doesn’t plan to share the images or the SEER model itself, in part because the model might contain unintended biases.

“Self-supervised learning has long been a focus for Facebook AI because it enables machines to learn directly from the vast amount of information available in the world, rather than just from training data created specifically for AI research,” Facebook wrote in a blog post. “Self-supervised learning has incredible ramifications for the future of computer vision, just as it does in other research fields. Eliminating the need for human annotations and metadata enables the computer vision community to work with larger and more diverse datasets, learn from random public images, and potentially mitigate some of the biases that come into play with data curation. Self-supervised learning can also help specialize models in domains where we have limited images or metadata, like medical imaging. And with no labor required up front for labeling, models can be created and deployed quicker, enabling faster and more accurate responses to rapidly evolving situations.”


Big Data – VentureBeat


Researchers find that debiasing doesn’t eliminate racism from hate speech detection models

February 6, 2021   Big Data

Current AI hate speech and toxic language detection systems exhibit problematic and discriminatory behavior, research has shown. At the core of the issue are training data biases, which often arise during the dataset creation process. When trained on biased datasets, models acquire and exacerbate biases, for example flagging text by Black authors as more toxic than text by white authors.

Toxicity detection systems are employed by a range of online platforms, including Facebook, Twitter, YouTube, and various publications. While one of the premier providers of these systems, Alphabet-owned Jigsaw, claims it has taken pains to remove bias from its models after a study showed they fared poorly on Black-authored speech, it’s unclear to what extent this is true of other AI-powered solutions.

To see whether current model debiasing approaches can mitigate biases in toxic language detection, researchers at the Allen Institute investigated techniques to address lexical and dialectal imbalances in datasets. Lexical biases associate toxicity with the presence of certain words, like profanities, while dialectal biases correlate toxicity with “markers” of language variants like African-American English (AAE).


In the course of their work, the researchers looked at one debiasing method designed to tackle “predefined biases” (e.g., lexical and dialectal). They also explored a process that filters “easy” training examples with correlations that might mislead a hate speech detection model.
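The summary doesn’t spell out the filtering procedure, so the sketch below illustrates the general idea with a deliberately shallow probe: drop training examples whose label a bag-of-words classifier already predicts with high confidence, since those are the ones most likely to ride on surface correlations. The texts, labels, and threshold are toy values.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "you are all wonderful people", "what a lovely community",
    "this is hot garbage and you know it", "utter trash take, honestly",
    "have a great day everyone", "absolute garbage opinion",
]
labels = np.array([0, 0, 1, 1, 0, 1])  # 1 = flagged toxic (toy labels)

X = CountVectorizer().fit_transform(texts)
probe = LogisticRegression().fit(X, labels)
# Probability the shallow probe assigns to each example's true label.
confidence = probe.predict_proba(X)[np.arange(len(labels)), labels]

# Keep only the "hard" examples the probe isn't sure about; the easy
# ones likely ride on surface cues (e.g., the word "garbage").
keep = confidence < 0.8
filtered = [t for t, k in zip(texts, keep) if k]
print(filtered)
```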

According to the researchers, both approaches struggle to mitigate biases in a model trained on a biased dataset for toxic language detection. In their experiments, while filtering reduced bias in the data, models trained on the filtered datasets still picked up lexical and dialectal biases. Even “debiased” models disproportionately flagged certain snippets of text as toxic. Perhaps more discouragingly, mitigating dialectal bias didn’t appear to change a model’s propensity to label text by Black authors as more toxic than text by white authors.

In the interest of thoroughness, the researchers embarked on a proof-of-concept study involving relabeling examples of supposedly toxic text whose translations from AAE to “white-aligned English” were deemed nontoxic. They used OpenAI’s GPT-3 to perform the translations and create a synthetic dataset — a dataset, they say, that resulted in a model less prone to dialectal and racial biases.


“Overall, our findings indicate that debiasing a model already trained on biased toxic language data can be challenging,” wrote the researchers, who caution against deploying their proof-of-concept approach because of its limitations and ethical implications. “Translating” the language a Black person might use into the language a white person might use both robs the original language of its richness and makes potentially racist assumptions about both parties. Moreover, the researchers note that GPT-3 likely wasn’t exposed to many African American English varieties during training, making it ill-suited for this purpose.

“Our findings suggest that instead of solely relying on development of automatic debiasing for existing, imperfect datasets, future work focus primarily on the quality of the underlying data for hate speech detection, such as accounting for speaker identity and dialect,” the researchers wrote. “Indeed, such efforts could act as an important step towards making systems less discriminatory, and hence safe and usable.”


Big Data – VentureBeat
