
How to Solve Lookup Filtering Issue with Dynamics 365 version 8.2.2.128

The October 2017 Dynamics 365 Service Update 1 for 8.2.2 delivers a number of updates and hotfixes for Dynamics 365 Online instances. Recently, one of our clients noticed that after this update, some of their lookup filtering on related records was not working correctly. Upon investigation, the issue appears to occur only when a field is filtered by a lookup that is located in the header of the form. If you are unsure whether this affects your system, you can check your version number by clicking the cog icon in the top right and then clicking ‘About’.


As mentioned earlier, the issue occurs when a lookup field is being filtered by another lookup field located in the header. The screenshot below shows the customization, where the lookup values of the Contact field should be filtered by the lookup value of the Account field when an Account is selected on an opportunity. Unfortunately, this filtering does not occur and the User is presented with a full list of Contacts in Dynamics 365.


A simple way to fix this is to also place the related lookup field on the main form as a hidden field. In the screenshot below, I have added the lookup field from the header to the form body and set it to hidden. This lets you keep the lookup in the header while preserving the filtering.



Magnetism Solutions Dynamics CRM Blog

How to Solve Site Map images not showing On Premise Dynamics CRM Issue

Recently, I encountered an error that prevented custom Site Map icons from being shown in Microsoft Dynamics CRM, displaying a broken link instead. Looking into the issue, I found that the image URL in the CRM Site Map included the Organisation name, so the link looked something like “/OrgName/%7B%7D/img_name.png”. This broke the links, so the images could not be displayed in the CRM Site Map.


Initially, in the Site Map XML, these links were set using the dynamic web resource directive $webresource: as per the Microsoft documentation (found here: https://msdn.microsoft.com/en-us/library/gg309473.aspx), since creating the references in this fashion establishes dependencies. Unfortunately, creating the references this way also causes the error described above. To work around it, the image locations had to be hardcoded with an absolute path, similar to how Microsoft’s out-of-the-box Area and SubArea custom icons are referenced.

After changing the path, it now looks like “/WebResources/.png”, and the images are correctly displayed as expected.
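For reference, a sketch of the two referencing styles in the Site Map XML; the SubArea Id and web resource name below are placeholders, not the actual ones from this solution:

```xml
<!-- Dependency-tracked reference using the $webresource: directive
     (the documented approach, but it produced the broken URLs here) -->
<SubArea Id="nav_example" Icon="$webresource:new_/icons/example.png" />

<!-- Workaround: hardcoded absolute path (renders correctly, but creates no dependency) -->
<SubArea Id="nav_example" Icon="/WebResources/new_/icons/example.png" />
```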


While this is not an ideal solution, since no dependencies are created and images used in the Site Map can therefore still be deleted, it was the most workable fix for the issue encountered.


Magnetism Solutions Dynamics CRM Blog

Automated Data Entry, Data Quality Problems and How to Solve Them

Automated data entry such as OCR and text-to-speech methods of digitizing analog data save a lot of time – but they almost never deliver perfect results. In fact, they can be a data quality nightmare.


This is why data quality control is especially important if you rely on tools for automated data entry or conversion.

This article explains why automated data entry creates special data quality challenges and discusses strategies for addressing them.

Digital Data Sources and their Discontents

Some data is “born digital.” That means that it was first created in an electronic format.


For example, application logs and information that customers input into Web forms are born digital. These types of data are digital and live on a computer from the start.

Yet most organizations still work with some data sources that are not born digital. For example, a company may require employees to submit paper receipts when requesting reimbursement for travel expenses. Or an organization may receive snail-mail letters that it archives.

Organizations also face the challenge of data that is born in one format but needs to be converted to another. For instance, your customer support team might record phone calls with customers. But even if the raw audio data that you collect in this way is stored in a digital format (like MP3 files), it can’t be analyzed using a text-based analytics platform like Hadoop. You need a way to convert audio files to text.


Automated Data Entry

Situations like the ones described above are why automated data entry tools are useful. Automated data entry tools take data in analog form and digitize it, or convert data from one form (like speech in an audio file) to another (like words in text form).

The most common form of automated data entry involves Optical Character Recognition, or OCR, tools. You can scan a paper document, then use an OCR tool to copy the text from the document into a digital text file.

Another common type of automated data entry involves taking recordings of human speech and converting the speech to text. This can be useful for transcribing phone calls or recordings of meetings.

There are even tools for converting smells to digital data – though, admittedly, your organization probably doesn’t have a reason to do that.

The Perils of Automated Data Entry

Automated data entry tools save loads of time. An OCR program can convert hundreds of thousands of words written on paper to digital text in minutes. A human being would require many tens of hours to input that data manually into a computer.


The downside of automated data entry, however, is that even the best tools make mistakes. Recognizing words within scanned images or audio files is just hard. Consider the following challenges:

  • When converting text on paper to digital text, your OCR tools may get confused if a stain obscures part of the text. A human reading a piece of paper might be able to sort out the text even if it has coffee spilled on it, but an OCR tool might not because it is not designed to handle situations like that. As a result, some text is not properly digitized.
  • In small print, characters like 0 and 8 can look similar. This confuses OCR tools.
  • OCR and text-to-speech tools generally rely on dictionaries to help them determine what is a word. This works well when all the data they scan consists of common nouns in the language that the tools support. But when you have a proper name that is not in the dictionary file, text in a foreign language, a line of computer code or something else that is unexpected, the tools stand a much poorer chance of converting the information accurately to digital form.
  • OCR works well with text written in plain fonts that tools were designed to support. Good luck, however, in scanning a document written with German Gothic characters. And as far as handwritten text goes, even the very best, most advanced OCR tools stand very little chance of recognizing that correctly.
  • Speech-to-text tools tend to do a poor job of figuring out which words are being spoken by which people, especially in cases where people are talking over each other. As a result, speech-to-text conversions may produce a jumble of words inside a text file but give you little idea of who said what.
  • With automated data entry, you lack the types of metadata that you typically get with born-digital sources. For example, when you are working with data from a computer log file, you can look at file system metadata to determine when the log file was created and when it was last modified. This information adds context that can be useful when performing analytics. In contrast, analog data sources don’t usually have metadata associated with them. You generally can’t tell from looking at a piece of paper whether the words written on it were recorded yesterday or a decade ago, unless there is a date on it (and even if there is a date, you have no way of knowing for certain that it is correct).

OCR errors like these are the reason why databases like Google Books, which relies heavily on OCR for making the words inside older books digitally searchable, contain so many misspelled words.

These types of errors lead to poor data quality when you are working with data sources that have been converted using automated data entry tools. Inaccuracies, missing data and other types of problems undercut the reliability of the data and cause analytics difficulties.
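The dictionary-and-confusable-glyph problem described above can be sketched as a toy post-correction pass. The confusion table and word list below are invented for illustration and are not taken from any real OCR engine:

```python
from itertools import product

# Glyph pairs that OCR engines commonly confuse in small print.
CONFUSIONS = {"0": "0O", "O": "O0", "8": "8B", "B": "B8", "5": "5S", "S": "S5"}

def candidates(token):
    """Every spelling reachable by swapping confusable glyphs."""
    pools = [CONFUSIONS.get(ch, ch) for ch in token]
    return {"".join(combo) for combo in product(*pools)}

def correct(token, dictionary):
    """Return a dictionary word the token could plausibly be, else the token."""
    matches = candidates(token) & dictionary
    return min(matches) if matches else token

words = {"BOSS", "SOS"}
print(correct("B0SS", words))  # BOSS: the zero was most likely a letter O
print(correct("CAT", words))   # CAT: no dictionary match, left unchanged
```

Real OCR correctors constrain this search (for instance with language models), since the candidate set grows exponentially with token length.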


Improving Data Quality

How do you solve these data quality challenges?

One way, of course, is to have someone review all your automatically converted data by hand. But that takes almost as long as entering the data manually in the first place.

A variant of this approach is to rely on crowdsourced data correction. Crowdsourcing means that you ask a large number of people (usually volunteers) to each review small pieces of your data and correct errors manually. This is what Google does through the reCAPTCHA program, for example.

This crowdsourcing strategy works if you’re an organization as large as Google, and if your data sources can be displayed to the public. Unfortunately, it’s less practical for everyone else.

If you can’t crowd source your data quality improvement, you can always use a data quality tool to check and fix the work of your automated data entry tools. Data quality tools scan databases and look for misspellings, missing data, and inconsistencies, then automatically attempt to fix them.

They also cross-check databases against each other to help identify information that may be wrong. This is a good way of catching, for instance, misspelled names within OCR’d data sources. If one database that consists of OCR’d address information contains an entry for a Jon Smith living at 123 Main Street, but ten other databases based on the same source say that there is a John Smith at that address, a data quality tool will recognize the inconsistency and surmise that an OCR error caused the name to be misspelled in the first database.
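A minimal sketch of that cross-checking idea, using Python’s standard difflib to flag near-miss variants of the majority spelling (the 0.8 similarity threshold is an arbitrary illustrative choice):

```python
import difflib
from collections import Counter

def reconcile(values):
    """Pick the majority spelling across databases and flag near-miss
    variants as likely OCR errors."""
    counts = Counter(values)
    best, _ = counts.most_common(1)[0]
    suspects = [v for v in counts
                if v != best and difflib.SequenceMatcher(None, v, best).ratio() > 0.8]
    return best, suspects

# Ten databases agree on "John Smith"; one OCR'd source reads "Jon Smith".
best, suspects = reconcile(["John Smith"] * 10 + ["Jon Smith"])
print(best)      # John Smith
print(suspects)  # ['Jon Smith']
```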

Syncsort’s suite of Big Data solutions now includes data quality tools as well as data analytics and integration solutions. To learn more about how to take advantage of resources like these to streamline and optimize your data operations, check out the TDWI Report: Building a Data Lake Checklist.



Syncsort + Trillium Software Blog

How machine learning can solve wireless network issues


Wi-Fi is crucial to the way we work today. Fast, reliable, and consistent wireless coverage in an enterprise is business-critical. Many day-to-day operations in the enterprise depend on it. And yet, most of the time, IT teams are flying blind when it comes to individual experience. This springs from two main challenges.

The first challenge is data collection. We want to know the state of every user at every given time. But these states change constantly as network conditions and user locations change. With tens of thousands of devices being tracked, there is a huge amount of information to be collected. This volume of data simply cannot be handled in an access point or a controller running on an appliance with fixed memory and CPU.

The second challenge is data analysis. It takes considerable time and effort to sort through event logs and data dumps to get meaningful insights. And significant Wi-Fi intelligence is required to actually make heads or tails out of the data.

Someday soon, I believe, big data and machine learning will solve the above hurdles. I will be able to ask my network how it is feeling; it will tell me where it hurts and provide detailed prescriptions for fixing the problem (or fix it automatically for me). While this seems like a futuristic vision, the foundation to achieve it is already being laid through big data tools and machine learning techniques like unsupervised training algorithms.

Using these technologies, we can now continuously update models that measure and enforce the experience for our wireless users. For example, we can ensure specific internet speeds (i.e., throughput) in real time with a high level of accuracy. This allows the IT staff to know a wireless user is suffering before the user even realizes it, and thus before they have to log a call with the help desk.

Once a user problem is detected, machine learning classification algorithms can isolate the root cause of the problem. For example, is the throughput issue due to interference, capacity, or LAN/WAN issues? After isolating the problem, machine learning can then automatically reconfigure resources to remediate the issue. This minimizes the time and effort IT teams spend on troubleshooting, while delivering the best possible wireless experience.
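As a loose illustration of such a classifier (not any vendor’s actual algorithm), a nearest-centroid rule over hand-labeled feature vectors can map a throughput complaint to a root cause. The features and numbers below are made up; a real system would learn them from telemetry:

```python
# Centroids for three root causes, over three normalized features:
# (co-channel interference, channel utilization, WAN latency).
CENTROIDS = {
    "interference": (0.9, 0.3, 0.1),
    "capacity":     (0.2, 0.9, 0.1),
    "wan_issue":    (0.1, 0.2, 0.9),
}

def classify(sample):
    """Return the root-cause label with the nearest centroid (squared Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(sample, centroid))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify((0.8, 0.4, 0.2)))    # interference
print(classify((0.1, 0.15, 0.95)))  # wan_issue
```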

I’ve written before about how artificial intelligence will revolutionize Wi-Fi. I would love to be able to just unleash IT teams on sifting through mountains of data so they can glean meaningful information, but it is like finding a needle in a haystack. Machine learning is key to automating mundane operational tasks like packet captures, event correlation, and root cause analysis. In addition, it can provide predictive recommendations to keep our wireless network out of trouble.

Also key to this vision is the elastic scale and programmability that modern cloud elements bring to the table. The cloud is the only medium suitable for treating Wi-Fi like a big data problem. It has the capacity to store tremendous amounts of data, with a distributed architecture that can analyze this data at tremendous speed.

Wi-Fi isn’t new. But how we use Wi-Fi has evolved. And now more than ever, Wi-Fi needs to perform flawlessly. We are in an era where wireless needs to be managed like a service, with all the flexibility, agility, and reliability of other business-critical platforms. With machine learning, big data, and the cloud, this new paradigm is quickly becoming a reality.

Ajay Malik is a wireless technology expert at Google.


Big Data – VentureBeat

How lean data helps Mozilla solve the growth equation (VB Live)


The more data you have, the better? Mozilla’s CMO says unchecked data collection is only making marketers lazy. In our latest VB Live event, you’ll find out how Mozilla uses lean data practices, and why they’re the real key to customer trust and business growth.

Register here for free.


“As marketers we’ve been talking about not just data, but big data, for almost a decade now,” says Jascha Kaykas-Wolff. “The promise of big data was absolutely enormous. And it was, very simply, if you use big data, you will help your company grow.”

Just two years ago, there were over 2,200 companies selling software aimed directly at helping marketers collect, organize, and take action on data. Now there are close to 3,300, and that number isn’t going to get any smaller.

And because the discussion has been going on for so long, because marketing tech companies just keep building and selling marketers more and more data tools, every marketer thinks that there really is a magic growth equation — and maybe the next kind of data they collect with the next tool will be the one to unlock it.

“It’s almost like thinking about becoming an employee at a company in Silicon Valley,” Kaykas-Wolff explains. “You do it because you get equity, and the expectation is that the company’s going to go public, and you’re going to be able to get rich and retire. The challenge is, and this is the big trip up, is that’s actually not the norm,” he says. “It’s the exception.”

Not every company goes IPO — in fact, the majority don’t. And the majority of people who work for startups are still going in every day, because they’ve got a mortgage to pay.

And so it is with data. Marketers have replaced much of the hard work that they’ve traditionally been responsible for, including audience research, persona development, customer lifecycle breakdowns, and understanding key value and pain points, with this endless quest for more data. “And really,” Kaykas-Wolff says, “this big data is doing nothing more than tripping up a lot of marketers by making us lazy, thinking that’s the path for success.”

There’s no magic bullet, he adds, but there is lean data.

Lean data is a fundamental set of principles and practices for marketers to better take care of their customers’ data. It involves three basic tenets:

  • First and foremost, you ask customers only for data that you can actually use and deliver value from.
  • Secondly, marketers commit to being collectively responsible for protecting customers’ data, from collection through storage.
  • And third, any time data is collected, you’re transparent about how, why, and what it will be used for.

“At the end of the day,” Kaykas-Wolff says, “lean data as a concept — and these fundamental practices associated with it — are really just the set of tools to help you as a marketer and your company develop better trust with your customers, and in turn perform better as a business.”

For insight into how to go from big data headaches to the lean data practices that transform customer relationships, don’t miss this upcoming interactive VB Live event.


Don’t miss out!

Register here for free.


In this VB Live event, you’ll:

  • Discuss how big data and marketing data collection tools have made marketers lazy
  • Learn about conscious choosers and how trust can help grow your business performance
  • Hear about lean data practices and practical tips

Speakers:

  • Jascha Kaykas-Wolff, CMO, Mozilla
  • Wendy Schuchart, Moderator, VentureBeat

More speakers to be announced soon


Big Data – VentureBeat

How To Solve IoT’s Big Data Challenge With Machine Learning

The data science field is booming as Big Data and advanced analytics become primary players in the boardroom. Dealing with new paradigms, constant evolution, and greater complexity, the entire C-suite now realizes that gut feel, instinct, and experience are no match for the volumes of data that are essential to solving today’s business challenges. As a result, the science behind sifting through data and making the right connections is at a tipping point, where rigor, discipline, intellectual curiosity, and empathy are critical to identifying unique breakthroughs and innovative thinking.

For people with an aptitude and passion for math, logic, and investigative sleuthing, the role of data scientist appears to be a good fit. The work is meaningful with high visibility. Salary compensation is good. Career prospects are promising. The role even ranks high in work-life balance. Yet, despite all of these compelling advantages, women are still largely underrepresented.

Although women make up half of the world’s population, it is well-known that they don’t even come close to parity in the STEM (science, technology, engineering, and math) fields. Considering that data scientists enjoy many of the benefits that women have been striving to achieve in the workplace for decades, this reality is frustrating for many companies and women’s organizations.

A call for greater gender diversity in data science

In recent years, global technology leaders have been making significant investments in initiatives to support a rich mix of gender perspectives that can help drive innovation and better serve customers. Such efforts include commitments to place women in 25% of all leadership roles, leadership excellence acceleration, and certification for economic dividends for gender equality. Even community initiatives – such as Girls Who Code, Girl Smarts, TechGirlz, and the European Center for Women and Technology – provide an opportunity to empower millions of young girls to explore their talent in STEM fields.

By offering opportunities to further grow skill sets through education and professional development, businesses are wisely investing in the power of diversity to drive innovation and revenue growth through data science. However, for women, data science is more than just teaching women to code or opening new doors to career growth. It’s about standing up and shaping the next digital revolution.

Over the years, a growing population of women has acquired the economic means, education, and social acceptance needed to take on this challenge. Hopefully, this trend will mean that the conversation around STEM-related fields and data science will shift away from gender equality within the next five to 10 years and focus more on creating more innovative technology and making better decisions. But, unfortunately, we are not there yet. Unless changes are made in the current social system, the conversation will not shift organically.

The importance of building and engaging the data science community

One of the key steps to motivate women of all ages to actively contribute to data science is to engage in meaningful and supportive discussion about best practices, exchange personal successes and failures, and connect with potential mentors and collaborators. In essence, women learn best through a community.

This is one of the many reasons why Ann Rosenberg, Vice President and Head of Global SAP University Alliances and SAP Next-Gen, has stepped up to become the Global Ambassador for the Women in Data Science Conference (WiDS) for SAP. This partnership with Stanford University is leading a movement across the company and its global ecosystem to encourage young women to pursue education and careers in data science.

In collaboration with Stanford’s Institute for Computational & Mathematical Engineering (ICME), SAP Next-Gen, Google, Microsoft, and Walmart Labs, the WiDS main conference will take place at Stanford University, and over 50 locations worldwide will host supportive satellite events featuring live streams, recordings, and interactive Skype corners.

With an aim to inspire and educate data scientists of all genders and to support women in the field, this community will present the latest data science research across a variety of industries and scenarios, discuss how leading-edge companies are using data science for success, and provide opportunities to connect with peers. The speaking roster is full of prominent female data science professionals and leaders including Fei-Fei Li, Chief Scientist of Artificial Intelligence and Machine Learning for Google Cloud; Janet George, Fellow and Chief Data Officer of Western Digital; Deborah Frincke, Director of Research for the U.S. National Security Agency; and Sinead Kaiya, Chief Operating Officer of Products and Innovation at SAP.

By sharing stories and participating in community-driven experiences, women of all ages and career levels can get a better sense of how to tackle barriers to personal growth and success. They can find creative ways to leverage their unique skills, mentality, and natural abilities ranging from a collaborative style to nurturing sensibilities such as humility, insight, intellectual curiosity, and empathy.

Over time, these interactions will build on each other – eventually giving women the courage to rise up, take risks, and perform at a level that meets and exceeds corporate expectations. And one by one, every woman who takes up this challenge plays a part in enabling future generations to collaborate, innovate, and compete without bias and with full equality.

We invite women worldwide to join this important movement.

Don’t miss the Women in Data Science Conference keynote, featuring Sinead Kaiya, Chief Operating Officer of Products and Innovation at SAP. Live streamed across over 50 satellite locations worldwide, Sinead will discuss why now is the time for women to consider roles related to data science and how they can play a critical role in the future success of their business, no matter the industry.

See how SAP is supporting the 17 United Nations Sustainable Development Goals and helping to end poverty, protect the planet, fight diseases, and ensure prosperity for all by 2030.



Digitalist Magazine

Radius CEO Darian Shirazi: Solve for Data Decay

Darian Shirazi is the CEO of Radius.

In this exclusive interview, CRM Buyer discusses with Shirazi how companies can turn complex data into actionable insights.


Radius CEO Darian Shirazi

CRM Buyer: What is data decay, and why is it important to prevent?

Darian Shirazi: One of the interesting things about CRM and marketing automation is that most of these systems have poor data quality. Typically, customers buy data lists or get inbound leads, and the information associated with those accounts, opportunities and leads ends up decaying quickly and being inaccurate.

The problem is keeping that information up-to-date and making sure that the contact information and the attributes of the accounts and opportunities that you’re going after stay fresh. That’s a big problem for almost every enterprise out there that has a CRM or marketing automation system. It makes it challenging to know which accounts to target and how to target them.

CRM Buyer: How can data be kept fresh and up-to-date?

Shirazi: One of the things you can do is to have a data stewardship program that allows users to refresh their CRM information and keep it up-to-date. Before they can get started with predictive analytics, and before they can get started with multichannel approaches, they have to solve that underlying data problem. They have to solve for data decay.
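A stewardship program like that can start with something as simple as flagging records past a freshness window. The field names, record values, and 180-day cutoff below are hypothetical, for illustration only:

```python
from datetime import date, timedelta

def stale_records(records, today, max_age_days=180):
    """Names of records not verified within the freshness window."""
    cutoff = today - timedelta(days=max_age_days)
    return [r["name"] for r in records if r["last_verified"] < cutoff]

contacts = [
    {"name": "Acme Corp",  "last_verified": date(2017, 9, 1)},
    {"name": "Globex Ltd", "last_verified": date(2016, 1, 15)},
]
print(stale_records(contacts, today=date(2017, 10, 1)))  # ['Globex Ltd']
```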

CRM Buyer: What is co-marketing, and why is it important?

Shirazi: Co-marketing is the ability of companies to leverage each other’s brands to go after the same set of customers. In the consumer space, many companies co-market with each other. But in the B2B space, it can be difficult to do because you have to focus on key accounts, and you have to select those specific accounts.

You have to see where there’s prospect overlap and customer overlap. By leveraging each other’s brands, as well as going after companies that are already a customer of one but not the other, they can drive higher conversion at a lower cost. Co-marketing is the ability to target the right accounts and leverage each other’s brands.

CRM Buyer: How can companies translate complex data into actionable insights? What is the key to succeeding in this translation?

Shirazi: You need to take the raw data in your CRM and marketing automation systems, predict where you’re going to have success in the future, and target those accounts in the right place.

One of the challenges goes back to the data decay problem. It’s garbage-in and garbage-out if you don’t solve that foundational data problem. The foundational data problem is the first step toward high predictability. It turns out if you have high-quality data, your predictive models don’t actually need to be that good to drive good results.

CRM Buyer: How would you define “predictive acquisition,” and why is it important?

Shirazi: Predictive acquisition is either getting net-new prospects that are recommended to you or inbound leads that are prioritized by which ones you should call first. That’s something that every company needs, especially companies with high volumes of inbound and outbound sales.

CRM Buyer: What are some of the current trends you’re seeing in account-based marketing?

Shirazi: One of the significant trends we’re seeing is that people are moving away from IP-based targeting and more toward cookie-based targeting. One of the challenges is that it’s pretty difficult to match cookie targeting to an actual ad.

The way some companies have solved it is by using IP tracking, but now people are looking for more highly targeted methods of placing ads, so they can target directly to a business contact at a company on Facebook, Twitter, display and mobile. The technology has gotten much better in the last couple of years, so cookie-based targeting is replacing IP-based targeting.

The second trend is that customers are starting to see that in order to convert accounts into customers, they’re going to have to take a multichannel approach. They’re going to have to make sure that the decision makers and the stakeholders see their ads in all the right places, whether it’s online or in a direct mail piece, and then combine all of these channels together before a sales rep makes a phone call.

CRM Buyer: What do you think is in the future for account-based marketing?

Shirazi: In the future, companies will be able to target people with custom-created ads directly. We’re developing an ability to map decision makers directly to a cookie so that we can target them in the right place at the right time with direct targeting across many different channels.

What you’ll see is that the way people market in the B2B space will begin to mirror what happens in the B2C space.


Freelance writer Vivian Wagner has wide-ranging interests, from technology and business to music and motorcycles. She writes features regularly for ECT News Network, and her work has also appeared in American Profile, Bluegrass Unlimited, and many other publications. For more about her, visit her website. You can also connect with Vivian on Google+.


CRM Buyer

CAN YOU SOLVE THIS?

Looks simple. Don’t forget your rules.

I got “21.”

Following the slew of brainteasers that have been sweeping the web, internet users are now being baffled by a new mind-boggling riddle.

The puzzle involves working out the values of three symbols – a horse, a horseshoe and a cowboy boot.

It sounds simple enough, but the infuriatingly hard-to-grasp solution has foxed plenty of those trying to complete it.

Can you get it right? This puzzle, which appeared on Facebook, has thousands of people stumped

It’s already garnered over 500,000 comments and 13,500 shares since it was posted on Facebook.

The puzzle consists of four questions, the first three of which have already been answered.

The final question requires you to add the cowboy boot and the horse and multiply by the horseshoe to come up with the corresponding numerical value.

Answers left in the comments section have varied wildly, ranging from 12 to 48.

But, despite initial appearances, the true answer is 21.

The first question, involving three horses adding up to 30, tells us that the value of a horse is 10.

The second equation, featuring a horse and two pairs of horseshoes, allows us to deduce that a pair of horseshoes is worth 4 – so a single horseshoe equals 2.

The third equation, in which a pair of cowboy boots is subtracted from a pair of horseshoes, tells us that a pair of boots is worth 2 – so a single boot equals 1.

Now, the final question – which shows a single boot, a single horse and a single horseshoe – is asking for the solution to 1 + 10 x 2.

This has understandably led many people, working from left to right, to guess the answer as 22.

However, due to the BODMAS rule – which dictates the order of operations in working out maths solutions – you should actually multiply the horse by the horseshoe first, then add the cowboy boot. So that means the answer is 1 + (10 x 2) = 21.
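The reasoning above can be checked with a few lines of code. Note that the middle two equations are taken from the standard version of this puzzle (the original image is not reproduced here), so their exact form is an assumption:

```python
# Values for a single horse, horseshoe and cowboy boot, as deduced above.
horse, horseshoe, boot = 10, 2, 1

pair_of_horseshoes = 2 * horseshoe  # the puzzle draws the horseshoes in pairs
pair_of_boots = 2 * boot            # likewise for the cowboy boots

# The three already-solved equations:
assert horse + horse + horse == 30
assert horse + pair_of_horseshoes + pair_of_horseshoes == 18
assert pair_of_horseshoes - pair_of_boots == 2

# Final line: boot + horse x horseshoe. BODMAS multiplies before adding.
answer = boot + horse * horseshoe           # 1 + (10 * 2)
left_to_right = (boot + horse) * horseshoe  # the common wrong reading

print(answer, left_to_right)  # 21 22
```

Python applies the same precedence rules as BODMAS, which is why the two expressions disagree.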

Despite this, many people with opposing views are adamant they have solved the puzzle – while others continue to insist it can ‘never’ be solved.

Skyhook launches its new Personas to help solve a $100B industry problem

Location-based marketing remains stuck in the past.

Whether you’re sending a coupon to a customer because GPS says they’re near your store, switching up your app’s in-store creative thanks to smart beacons, or serving an in-app advertisement based on a Wi-Fi SSID, the most often-quoted examples all work the same way.

They use a single point of location data to make a decision.

Today, Skyhook Wireless has announced the launch of three new location-based marketing solutions — Retailer Personas, Power Personas, and On-Demand Personas — as part of its independent location platform. What makes these new persona products — and Skyhook’s approach — different from the way marketers use location now?

They focus on the journey your customer takes and the understanding that comes from that data.

Skyhook’s mobile location technology processes trillions of location signals annually. Rather than focus on the current location and what intent signals it may or may not offer the marketer, advertiser, or publisher, Skyhook looks at location signals over time and applies its venue data to these broader movements. This information allows for the creation of personas, and the three new offerings are designed to identify these different types of consumer.

This, in turn, means more relevant mobile advertising for the consumer.

Importantly, it also means more accurate location, venue visit, and customer persona data for advertisers and publishers. This is good news in an industry that is expected to spend $100 billion on mobile ads worldwide in 2016, one that wastes money every day on poorly targeted messaging — a problem we’ll discuss during Mobile Summit next month.

My research shows that consumers are relatively happy allowing businesses to use location data to personalize advertising. So how do we move the needle to “completely happy” and make this type of advertising part of using a smartphone without destroying the experience?




“Sending better ads to the user makes the user experience better, and makes advertisers happy to see higher engagement,” Matt Kojalo, VP of Adtech Solutions at Skyhook Wireless told me. “Publishers gain insights into who their users are, and advertisers can target and see if a user after seeing, say, an Audi ad went to an Audi dealer. Location data provides both personalization and context — two critical ingredients to great UX. Removing seemingly ‘random advertisements’ and replacing them with relevant promotions and communications also reinforces a sense of ‘this app/brand gets me’ for the user.”

The new persona products each have their particular strengths.


Retailer Personas are designed to measure attribution for campaigns associated with the 200 top retailers and brands in North America. Power Personas help identify consumers with strong brand affinity based on a high frequency of venue visits and other behaviors. On-Demand Personas allow advertisers and publishers to create and customize their target segments across more than 3,000 possible combinations.

On-Demand Personas can be created from data that includes venue visits, venue type, visit frequency, and demographic information. For example, you could build a persona that includes all consumers over the age of 17 who have visited Walmart in the past 90 days.
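Skyhook hasn’t published its persona-definition interface, but conceptually a custom segment like the Walmart example is just a filter over visit records. A minimal sketch, in which all type and function names are hypothetical rather than Skyhook’s actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Visit:
    venue: str
    when: datetime

@dataclass
class Consumer:
    device_id: str
    age: int
    visits: List[Visit] = field(default_factory=list)

def in_persona(consumer: Consumer, venue: str, min_age: int,
               window_days: int, now: datetime) -> bool:
    """Return True if the consumer fits a simple venue-visit persona rule."""
    cutoff = now - timedelta(days=window_days)
    visited = any(v.venue == venue and v.when >= cutoff for v in consumer.visits)
    return consumer.age > min_age and visited

# The article's example: everyone over 17 who visited Walmart in the past 90 days.
now = datetime(2016, 5, 1)
shopper = Consumer("dev-1", 34, [Visit("Walmart", now - timedelta(days=10))])
teen = Consumer("dev-2", 15, [Visit("Walmart", now - timedelta(days=10))])
print(in_persona(shopper, "Walmart", 17, 90, now))  # True
print(in_persona(teen, "Walmart", 17, 90, now))     # False
```

The 3,000-plus combinations Skyhook mentions would come from crossing such rules over venue, venue type, frequency, and demographics.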

So a consumer who owns a car could be categorized as a Budget or Luxury Auto consumer, thanks to Skyhook’s location data. One who frequents the same types of clothing stores might be classed as a Power Women’s Apparel Shopper. Skyhook provides persona categories across auto, retail, travel, sports/entertainment, food, and general demographics.

The focus for Skyhook is not on buying or selling media, and the company says that it never aggregates, combines, shares, sells, or transfers its partners’ data to a third party. That stance means that, as a pure technology provider, it never competes with the advertisers, publishers, or adtech partners with which it works. To borrow a geographical metaphor, Skyhook is the Switzerland of location-based marketing.

“We charge for Persona use only when the Personas are used within one of our preferred networks on a CPM basis,” Kojalo said. “They are currently available through our partnerships, with more coming soon. In terms of generating Personas using your data, it’s very easy to integrate. We have an API, batch/bulk processing as well as a very easily integrated SDK, so companies can send us data to be ‘Personified’.”

To build a behavioral profile for a user, Skyhook requires user sample locations containing a unique device ID, a time stamp, and a location. The company’s three new Persona products are available today.
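Those three required fields map naturally onto a simple record. An illustrative sketch only — the field names and validation rules are assumptions, not Skyhook’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocationSample:
    device_id: str    # unique device identifier
    timestamp: float  # Unix epoch seconds
    lat: float        # latitude in degrees
    lon: float        # longitude in degrees

    def is_valid(self) -> bool:
        # Basic sanity checks before submitting a batch for persona building.
        return (bool(self.device_id)
                and self.timestamp > 0
                and -90.0 <= self.lat <= 90.0
                and -180.0 <= self.lon <= 180.0)

sample = LocationSample("dev-1", 1461934800.0, 42.35, -71.05)
print(sample.is_valid())  # True
```

Validating samples client-side like this keeps obviously malformed points out of the behavioral profile.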

Skyhook Wireless is a worldwide leader in location. Skyhook created and operates the most advanced global first-party mobile location network, providing fast, accurate, and battery-friendly location results.




VentureBeat » Big Data News | VentureBeat


Could Big Data And Cognitive Computing Solve Africa’s Greatest Challenges?
IBM Smarter Planet Contributor, forbes.com

I come from a family of educators. So when it came to choosing a career, it was natural for me to go into education. My vocation, though, is research. I study educational systems so that I can help re-imagine what they can be.

Few places can…


A Smarter Planet