
Tag Archives: Details

Google details how it’s using AI and machine learning to improve search

October 16, 2020   Big Data


During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.

Soon, Google says, users will be able to see how busy places are in Google Maps without having to search for specific beaches, parks, grocery stores, gas stations, laundromats, pharmacies, or other businesses, an expansion of Google’s existing busyness metrics. The company also says it’s adding COVID-19 safety information to business profiles across Search and Maps, revealing whether they’re using safety precautions like temperature checks, plexiglass, and more.

An algorithmic improvement to “Did you mean,” Google’s spell-checking feature for Search, will enable more accurate and precise spelling suggestions. Google says the new underlying language model contains 680 million parameters — the variables that determine each prediction — and runs in less than three milliseconds. “This single change makes a greater improvement to spelling than all of our improvements over the last five years,” Prabhakar Raghavan, head of Search at Google, said in a blog post.

Beyond this, Google says it can now index individual passages from webpages as opposed to whole pages. When this rolls out fully, it will improve roughly 7% of search queries across all languages, the company claims. A complementary AI component will help Search capture the nuances of what webpages are about, ostensibly leading to a wider range of results for search queries.

“We’ve applied neural nets to understand subtopics around an interest, which helps deliver a greater diversity of content when you search for something broad,” Raghavan continued. “As an example, if you search for ‘home exercise equipment,’ we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page.”

Google is also bringing Data Commons, its open knowledge repository that combines data from public datasets (e.g., COVID-19 stats from the U.S. Centers for Disease Control and Prevention) using mapped common entities, to search results on the web and mobile. In the near future, users will be able to search for topics like “employment in Chicago” on Search to see information in context.

On the ecommerce and shopping front, Google says it has built cloud streaming technology that enables users to see products in augmented reality (AR). With cars from Volvo, Porsche, and other “top” auto brands, for example, they can zoom in to view the steering wheel and other details in a driveway, to scale, on their smartphones. Separately, Google Lens on the Google app or Chrome on Android (and soon iOS) will let shoppers discover similar products by tapping on elements like vintage denim, ruffle sleeves, and more.


Above: Augmented reality previews in Google Search.

Image Credit: Google

In another addition to Search, Google says it will deploy a feature that highlights notable points in videos — for example, a screenshot comparing different products or a key step in a recipe. (Google expects 10% of searches will use this technology by the end of 2020.) And Live View in Maps, a tool that taps AR to provide turn-by-turn walking directions, will enable users to quickly see information about restaurants including how busy they tend to get and their star ratings.

Lastly, Google says it will let users search for songs by simply humming or whistling melodies, initially in English on iOS and in more than 20 languages on Android. You will be able to launch the feature by opening the latest version of the Google app or Search widget, tapping the mic icon, and saying “What’s this song?” or selecting the “Search a song” button, followed by at least 10 to 15 seconds of humming or whistling.

“After you’re finished humming, our machine learning algorithm helps identify potential song matches,” Google wrote in a blog post. “We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available.”

Google says that melodies hummed into Search are transformed by machine learning algorithms into a number-based sequence representing the song’s melody. The models are trained to identify songs based on a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. They also strip away all the other details, like accompanying instruments and the voice’s timbre and tone. This leaves a fingerprint that Google compares with thousands of songs from around the world, identifying potential matches in real time, much like the Pixel’s Now Playing feature.
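
The core idea — reduce a melody to a key-invariant contour, then score it against precomputed fingerprints — can be sketched in a few lines of Python. Everything below (the interval-based fingerprint, the cosine scoring, the toy song library) is an illustrative assumption, not Google’s implementation:

import numpy as np

def melody_fingerprint(pitches_hz):
    # Map frequencies to semitones, then keep only the intervals between notes:
    # intervals are invariant to the key (and octave) the user hums in.
    semitones = 12 * np.log2(np.asarray(pitches_hz, dtype=float) / 440.0)
    return np.diff(semitones)

def match_score(query, reference):
    # Cosine similarity over the overlapping portion of the two contours.
    n = min(len(query), len(reference))
    q, r = query[:n], reference[:n]
    return float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-9))

# A hummed query is scored against fingerprints precomputed from many songs.
query = melody_fingerprint([440, 494, 523, 587, 659])
library = {"song_a": melody_fingerprint([442, 496, 526, 589, 662]),
           "song_b": melody_fingerprint([330, 349, 392, 440, 494])}
print(max(library, key=lambda s: match_score(query, library[s])))  # -> song_a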

“From new technologies to new opportunities, I’m really excited about the future of search and all of the ways that it can help us make sense of the world,” Raghavan said.

Last month, Google announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google also recently revealed it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google said it launched an update using language models to improve the matching between news stories and available fact checks.

In 2019, Google peeled back the curtains on its efforts to solve query ambiguities with a technique called Bidirectional Encoder Representations from Transformers, or BERT for short. BERT, which emerged from the tech giant’s research on Transformers, forces models to consider the context of a word by looking at the words that come before and after it. According to Google, BERT helped Google Search better understand 10% of queries in the U.S. in English — particularly longer, more conversational searches where prepositions like “for” and “to” matter a lot to the meaning.
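
Google Search’s BERT integration is internal, but the bidirectional idea is easy to demo with the public bert-base-uncased checkpoint and Hugging Face’s transformers library; this small sketch (query text included) is illustrative only:

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The words on BOTH sides of the mask shape the prediction -- exactly the
# preposition-heavy situations ("for" vs. "to") where context matters.
for result in fill("can you get medicine [MASK] someone at the pharmacy")[:3]:
    print(result["token_str"], round(result["score"], 3))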

BERT is now used in every English search, Google says, and it’s deployed across languages including Spanish, Portuguese, Hindi, Arabic, and German.

Big Data – VentureBeat

Intel details chips designed for IoT and edge workloads

September 23, 2020   Big Data


Intel today announced the launch of new products tailored to edge computing scenarios like digital signage, interactive kiosks, medical devices, and health care service robots. The 11th Gen Intel Core processors, Atom x6000E Series, and Pentium, Celeron N, and J Series bring new AI, security, functional safety, and real-time capabilities to edge customers, the chipmaker says, laying the groundwork for innovative future applications.

Intel expects the edge market to be a $65 billion silicon opportunity by 2024. The company’s own revenue in the space grew more than 20% to $9.5 billion in 2018. And according to a 2020 IDC report, up to 70% of all enterprises will process data at the edge within three years. To date, Intel claims to have cultivated an ecosystem of more than 1,200 partners, including Accenture, Bosch, ExxonMobil, Philips, Verizon, and Viewsonic, with over 15,000 end customer deployments across “nearly every industry.”

The 11th Gen Core processors — which Intel previewed in early September — are enhanced for internet of things (IoT) use cases requiring high-speed processing, computer vision, and low-latency deterministic processing, the company says. They bring an up to 23% performance gain in single-threaded workloads, a 19% performance gain in multithreaded workloads, and an up to 2.95 times performance gain in graphics workloads versus the previous generation. New dual video decode boxes allow the processors to ingest up to 40 simultaneous video streams at 1080p up to 30 frames per second and output four channels of 4K or two channels of 8K video.

According to Intel, the combination of the 11th Gen’s SuperFin process improvements, miscellaneous architectural enhancements, and Intel’s OpenVINO software optimizations translates to 50% faster inferences per second compared with the previous 8th Gen processor using CPU mode or up to 90% faster inferences using the processors’ GPU-accelerated mode. (Intel says the 11th Gen Core i5 is up to twice as fast in terms of inferences per second as an 8th Gen Core i5-8500 when running on just the CPU in each product.) AI inferencing algorithms can run on up to 96 graphic execution units (INT8) or run on the CPU with VNNI built in, an x86 extension that’s part of Intel’s AVX-512 processor instruction set for accelerating convolutional neural network-based algorithms.
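
As a concrete illustration of CPU- versus GPU-mode inference, here is a minimal sketch against the OpenVINO 2020-era Python API. The ResNet-50 IR files are hypothetical and would come from converting a trained model with OpenVINO’s Model Optimizer:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet50.xml", weights="resnet50.bin")  # hypothetical IR files
input_name = next(iter(net.input_info))

for device in ("CPU", "GPU"):  # "GPU" targets the processor's integrated graphics plugin
    exec_net = ie.load_network(network=net, device_name=device)
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy NCHW image batch
    outputs = exec_net.infer(inputs={input_name: frame})
    print(device, "->", list(outputs))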

As for the Atom x6000E Series, Pentium, Celeron N, and J Series, Intel says they represent its first processor platform specifically enhanced for IoT. All four deliver up to 2 times better graphics performance, a dedicated real-time offload engine, enhanced I/O and storage, and the Intel Programmable Services Engine, which brings out-of-band and in-band remote device management. They also support 2.5GbE time-sensitive networking components and resolutions up to 4K at 60 frames per second on upwards of three displays, and they meet baseline safety requirements with built-in hardware-based security.

Intel says it already has 90 partners committed to delivering 11th Gen Core solutions and up to 100 partners locked in for the Intel Atom x6000E Series, Intel Pentium, Celeron N, and J Series.

Big Data – VentureBeat

Qualcomm details Cloud AI 100 chipset, announces developer kit

September 16, 2020   Big Data


During its AI Day conference last April, Qualcomm unveiled the Cloud AI 100, a chipset purpose-built for machine learning inferencing and edge computing workloads. Details were scarce at press time, evidently owing to a lengthy production schedule. But today Qualcomm announced a release date for the Cloud AI 100 — the first half of 2021, after sampling this fall — and shared details about the chipset’s technical specs.

Qualcomm expects the Cloud AI 100 to give it a leg up in an AI chipset market expected to reach $66.3 billion by 2025, according to a 2018 Tractica report. Last year, SVP of product management Keith Kressin said he anticipates that inference — the process during which an AI model infers results from data — will become a “significant-sized” market for silicon, growing 10 times from 2018 to 2025. With the Cloud AI 100, Qualcomm hopes to tackle specific markets, such as datacenters, 5G infrastructure, and advanced driver-assistance systems.

The Cloud AI 100 comes in three flavors — DM.2e, DM.2, and PCIe (Gen 3/4) — corresponding to performance range. At the low end, the Cloud AI 100 Dual M.2e and Dual M.2 models can hit between 50 TOPS (50 trillion operations per second) and 200 TOPS, while the PCIe model achieves up to 400 TOPS, according to Qualcomm. All three ship with up to 16 AI accelerator cores paired with up to 144MB RAM (9MB per core) and 32GB LPDDR4x on-card DRAM, which the company claims outperforms the competition by 106 times when measured by inferences per second per watt on the ResNet-50 algorithm. The Cloud AI 100 Dual M.2e and Dual M.2 attain between 10,000 and 15,000 inferences per second at under 50 watts, and the PCIe card hovers around 25,000 inferences per second at 50 to 100 watts.


Qualcomm says the Cloud AI 100, which is manufactured on a 7-nanometer process, shouldn’t exceed a power draw of 75 watts. Here’s the breakdown for each card:

  • Dual M.2e: 15 watts, 70 TOPS
  • Dual M.2: 25 watts, 200 TOPS
  • PCIe: 75 watts, 400 TOPS
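
Plugging the quoted throughput and power figures into the inferences-per-second-per-watt metric gives a rough feel for the efficiency claim. Note the article doesn’t pair exact throughput numbers with individual cards, so the ranges in this back-of-the-envelope Python sketch are assumptions:

# Quoted above: ~10,000-15,000 inferences/sec under 50 W for the M.2 cards,
# ~25,000 inferences/sec at 50-100 W for the PCIe card (ResNet-50).
quoted = {
    "Dual M.2e / Dual M.2": (10_000, 15_000, 50),   # (ips low, ips high, watts)
    "PCIe":                 (25_000, 25_000, 100),
}
for card, (lo, hi, watts) in quoted.items():
    print(f"{card}: {lo / watts:.0f}-{hi / watts:.0f} inferences/sec/watt at {watts} W")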

The first Cloud AI 100-powered device — the Cloud Edge AI Development Kit — is scheduled to arrive in October. It looks similar to a wireless router, with a black shell and an antenna held up by a plastic stand. But it runs CentOS 8.0 and packs a Dual M.2 Cloud AI 100, a Qualcomm Snapdragon 865 system-on-chip, a Snapdragon X55 5G modem, and an NVMe SSD.


The Cloud AI 100 and products it powers integrate a full range of developer tools, including compilers, debuggers, profilers, monitors, servicing, chip debuggers, and quantizers. They also support runtimes like ONNX, Glow, and XLA, as well as machine learning frameworks such as TensorFlow, PyTorch, Keras, MXNet, Baidu’s PaddlePaddle, and Microsoft’s Cognitive Toolkit for applications like computer vision, speech recognition, and language translation.

Big Data – VentureBeat

Baidu details its adversarial toolbox for testing robustness of AI models

January 18, 2020   Big Data

No matter the claimed robustness of AI and machine learning systems in production, none are immune to adversarial attacks, or techniques that attempt to fool algorithms through malicious input. It’s been shown that generating even small perturbations on images can fool the best of classifiers with high probability. And that’s problematic considering the wide proliferation of the “AI as a service” business model, where companies like Amazon, Google, Microsoft, Clarifai, and others have made systems that might be vulnerable to attack available to end users.

Researchers at tech giant Baidu propose a partial solution in a recent paper published on Arxiv.org: Advbox. They describe it as an open source toolbox for generating adversarial examples, and they say it’s able to fool models in frameworks like Facebook’s PyTorch and Caffe2, MxNet, Keras, Google’s TensorFlow, and Baidu’s own PaddlePaddle.

While AdvBox itself isn’t new — the initial release came over a year ago — the paper dives into the technical details.

AdvBox is based on Python, and it implements several common attacks that perform searches for adversarial samples. Each attack method uses a distance measure to quantify the size of adversarial perturbation, while a sub-model — Perceptron, which supports image classification and object detection models as well as cloud APIs — evaluates the robustness of a model to noise, blurring, brightness adjustments, rotations, and more.
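
The paper’s own API isn’t reproduced here, but the fast gradient sign method (FGSM) is representative of the attack searches such toolboxes implement, with epsilon playing the role of the distance measure that bounds the perturbation. A minimal PyTorch sketch, not AdvBox’s actual code:

import torch
import torch.nn.functional as F

def fgsm(model, x, labels, epsilon=0.03):
    # One-step attack: nudge every input value along the sign of the loss
    # gradient; epsilon bounds the L-infinity distance from the clean input.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), labels).backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid range

# Usage with any differentiable image classifier:
# adv = fgsm(classifier, images, labels)
# (adv - images).abs().max() will not exceed epsilon.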

AdvBox ships with tools for testing detection models susceptible to so-called adversarial t-shirts or facial recognition attacks. Plus, it offers access to Baidu’s cloud-hosted deepfakes detection service via an included Python script.

“Small and often imperceptible perturbations to [input] are sufficient to fool the most powerful [AI],” wrote the coauthors. “Compared to previous work, our platform supports black box attacks … as well as more attack scenarios.”

Baidu isn’t the only company publishing resources designed to help data scientists defend against adversarial attacks. Last year, IBM and MIT released a metric for estimating the robustness of machine learning and AI algorithms called Cross Lipschitz Extreme Value for Network Robustness, or CLEVER for short. And in April, IBM announced a developer kit called the Adversarial Robustness Toolbox, which includes code for measuring model vulnerability and suggests methods for protecting against runtime manipulation. Separately, researchers at the University of Tübingen in Germany created Foolbox, a Python library for generating over 20 different attacks against TensorFlow, Keras, and other frameworks.

But much work remains to be done. According to Jamal Atif, a professor at the Université Paris-Dauphine, the most effective defense strategy in the image classification domain — augmenting a group of photos with examples of adversarial images — at best has gotten accuracy back up to only 45%. “This is state of the art,” he said during an address in Paris at the annual France is AI conference hosted by France Digitale. “We just do not have a powerful defense strategy.”

Big Data – VentureBeat

Tencent details how its MOBA-playing AI system beats 99.81% of human opponents

December 25, 2019   Big Data

In August, Tencent announced it had developed an AI system capable of defeating teams of pros in a five-on-five match in Honor of Kings (or Arena of Valor, depending on the region). This was a noteworthy achievement — Honor of Kings occupies the video game subgenre known as multiplayer online battle arena games (MOBAs), which are incomplete information games in the sense that players are unaware of the actions other players choose. The endgame, then, isn’t merely AI that achieves superhuman Honor of Kings performance, but insights that might be used to develop systems capable of solving some of society’s toughest challenges.

A paper published this week peels back the layers of Tencent’s technique, which the coauthors describe as “highly scalable.” They claim its novel strategies enable it to explore the game map “efficiently,” with an actor-critic architecture that self-improves over time.

As the researchers point out, real-time strategy games like Honor of Kings require highly complex action control compared with traditional board games and Atari games. Their environments also tend to be more complicated (Honor of Kings has 10^600 possible states and 10^18,000 possible actions) and the objectives more complex on the whole. Agents must not only learn to plan, attack, and defend but also to control skill combos and to induce and deceive opponents, all while contending with hazards like creeps and fully automated turrets.

Tencent’s architecture consists of four modules: Reinforcement Learning (RL) Learner, Artificial Intelligence (AI) Server, Dispatch Module, and Memory Pool.

The AI Server — which runs on a single processor core, thanks to some clever compression — dictates how the AI model interacts with objects in the game environment. It generates episodes via self-play, and, based on the features it extracts from the game state, it predicts players’ actions and forwards them to the game core for execution. The game core then returns the next state and the corresponding reward value, or the value that spurs the model toward certain Honor of Kings goals.


As for the Dispatch Module, it’s bundled with several AI Servers on the same machine, and it collects data samples consisting of rewards, features, action probabilities, and more before compressing and sending them to Memory Pools. The Memory Pool — which is also a server — supports samples of various lengths and data sampling based on the generated time, and it implements a circular queue structure that performs storage operations in a data-efficient fashion.

Lastly, the Reinforcement Learner, a distributed training environment, accelerates policy updates with the aforementioned actor-critic approach. Multiple Reinforcement Learners fetch data in parallel from Memory Pools, with which they communicate using shared memory. One mechanism (target attention) helps with enemy target selection, while another —  long short-term memory (LSTM), an algorithm capable of learning long-term dependencies — teaches hero players skill combos critical to inflicting “severe” damage.

The Tencent researchers’ system encodes image features and game state information such that each unit and enemy target is represented numerically. An action mask cleverly incorporates prior knowledge of experienced human players, preventing the AI from attempting to traverse physically “forbidden” areas of game maps (like challenging terrain).
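
A minimal sketch of that shape — shared encoder, LSTM core, separate policy and value heads, and an action mask applied to the logits — might look as follows in PyTorch. The observation size, action count, and layer widths here are invented; Tencent’s production networks are far larger:

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim=512, hidden=256, num_actions=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)  # long-horizon skill combos
        self.policy = nn.Linear(hidden, num_actions)  # actor head: action logits
        self.value = nn.Linear(hidden, 1)             # critic head: state value

    def forward(self, obs_seq, state=None):
        h, state = self.lstm(torch.relu(self.encoder(obs_seq)), state)
        return self.policy(h), self.value(h), state

def mask_logits(logits, legal):
    # Action mask: rule out "forbidden" moves before sampling an action.
    return logits.masked_fill(~legal, float("-inf"))

net = ActorCritic()
logits, value, _ = net(torch.randn(4, 16, 512))  # 4 episodes, 16 frames each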

In experiments, the paper’s coauthors ran the framework across a total of 600,000 cores and 1,064 graphics cards (a mixture of Nvidia Tesla P40s and Nvidia V100s), which crunched 16,000 features containing unconcealed unit attributes and game information. Training one hero required 48 graphics cards and 18,000 processor cores at a speed of about 80,000 samples per second per card. And collectively for every day of training, the system accumulated the equivalent of 500 years of human experience.


The AI’s Elo score, derived from a system for calculating the relative skill levels of players in zero-sum games, unsurprisingly increased steadily with training, the coauthors note. It became relatively stable within 80 hours, according to the researchers, and within just 30 hours it began to defeat the top 1% of human Honor of Kings players.

The system executes actions via the AI model every 133 milliseconds, or about the response time of a top amateur player. Five professional players — “QGhappy.Hurt,” “WE.762,” “TS.NuanYang,” “QGhappy.Fly,” and “eStarPro.Cat” — were invited to play against it, as well as a “diversity” of players attending the ChinaJoy 2019 conference in Shanghai between August 2 and August 5.

The researchers note that despite eStarPro.Cat’s prowess with mage-type heroes, the AI achieved five kills per game and was killed only 1.33 times per game on average. In public matches, its win rate was 99.81% over 2,100 matches, and five of the eight AI-controlled heroes managed a 100% win rate.

They’re far from the only ones whose AI beat human players — DeepMind’s AlphaStar beat 99.8% of human StarCraft 2 players, while OpenAI’s OpenAI Five framework twice defeated a professional Dota 2 team in public matches.

The Tencent researchers say that they plan to make both their framework and algorithms open source in the near future, toward the goal of fostering research on complex games like Honor of Kings.

Big Data – VentureBeat

Facebook details the AI technology behind Instagram Explore

November 25, 2019   Big Data

According to Facebook, over half of Instagram’s roughly 1 billion users visit Instagram Explore to discover videos, photos, livestreams, and Stories each month. Predictably, building the underlying recommendation engine — which curates the billions of pieces of content uploaded to Instagram — posed an engineering challenge, not least because it works in real time.

In a blog post published this morning, Facebook for the first time peeled back the curtains on Explore’s inner workings. Its three-part ranking funnel, which the company says was architected with a custom query language and modeling techniques, extracts 65 billion features and makes 90 million model predictions every second. And that’s just the tip of the iceberg.

Tools

Before the team behind Explore embarked on building a content recommendation system, they developed tools to conduct large-scale experiments and obtain strong signals on the breadth of users’ interests. The first of these was IGQL, a meta language that provided the level of abstraction needed to assemble candidate algorithms in one place.

IGQL is optimized in C++, which helps minimize latency and compute resources without sacrificing extensibility, Facebook says. It’s both statically validated and high-level, enabling engineers to write recommendation algorithms in a “Python-like” fashion. And it complements an account embeddings component that helps identify topically similar profiles as part of a retrieval pipeline that focuses on account-level information.


Above: Demonstration of ig2vec predicting account similarity.

Image Credit: Facebook

A framework — Ig2vec — treats Instagram accounts a user interacts with as word sequences in a sentence, which informs the predictions of a model with respect to which accounts the user might interact with. (Facebook notes that a sequence of accounts interacted with in a session is more likely to be topically coherent compared with random accounts.) Concurrently, Facebook’s AI Similarity Search nearest neighbor retrieval library (FAISS) queries millions of accounts based on a metric used in embedding training.
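
The shape of this technique maps onto off-the-shelf tools, so here is a hedged sketch: toy “sessions” of account IDs fed to gensim’s Word2Vec, with FAISS handling the nearest-neighbor lookup. It illustrates the idea, not ig2vec itself:

import numpy as np
import faiss
from gensim.models import Word2Vec  # gensim >= 4

# Each "sentence" is one user session's sequence of interacted accounts.
sessions = [["acct_chef", "acct_baker", "acct_foodblog"],
            ["acct_nba", "acct_sneakers", "acct_chef"],
            ["acct_baker", "acct_foodblog", "acct_pastry"]]
model = Word2Vec(sessions, vector_size=32, window=5, min_count=1, epochs=50)

accounts = model.wv.index_to_key
vecs = np.stack([model.wv[a] for a in accounts]).astype("float32")
faiss.normalize_L2(vecs)                  # so inner product == cosine similarity
index = faiss.IndexFlatIP(vecs.shape[1])
index.add(vecs)

query = vecs[accounts.index("acct_chef")][None, :]
_, ids = index.search(query, 3)           # top-3 most similar accounts
print([accounts[i] for i in ids[0]])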

A classifier system is trained to predict a topic for a set of accounts based solely on the embedding, which when compared with human-labeled topics makes evident how well the embeddings capture topical similarity. It’s an important step, because retrieving accounts similar to those a user has expressed interest in helps narrow down a per-profile ranking inventory.

Ranking accounts in Explore based on interests necessitated predicting the most relevant content for each person, according to Facebook, and gave rise to a lightweight ranking distillation model that preselects candidates before passing them to complex ranking models. Using knowledge in the form of input candidates with features and outputs from the more complicated models, the simpler model tries to approximate the main ranking models as much as possible via direct (and indirect) learning.
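
As a rough illustration of that distillation setup, the PyTorch sketch below trains a cheap “student” with a reduced feature set to reproduce a heavier “teacher” ranker’s scores; the layer sizes, feature split, and loss are invented stand-ins:

import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 1))  # heavy ranker
student = nn.Linear(16, 1)  # minimal features, cheap enough to score every candidate

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
full_feats = torch.randn(256, 128)   # candidates with the full feature set
lite_feats = full_feats[:, :16]      # the student sees only a cheap subset

for _ in range(200):
    with torch.no_grad():
        target = teacher(full_feats)             # teacher scores = training signal
    loss = nn.functional.mse_loss(student(lite_feats), target)
    opt.zero_grad()
    loss.backward()
    opt.step()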

Building Explore

Explore consists of two main stages, according to the team that designed it: the candidate generation stage (also known as the sourcing stage) and the ranking stage.

During the candidate generation stage, Explore taps accounts that users have interacted with previously to identify “seed accounts” of interest. They’re only a fraction of the accounts about the same interest, but they help identify topically similar accounts when combined with the above-mentioned embeddings.

Knowing the accounts that might appeal to a user is the first step toward sussing out which content might float their boat. IGQL allows different candidate sources to be represented as distinct subqueries, and this enables Explore to find tens of thousands of eligible candidates for the average person across many types of sources.


Above: This graphic shows a typical source for Instagram Explore recommendations.

Image Credit: Facebook

To ensure the recommended content remains safe and appropriate for users of all ages, signals are used to filter out anything that might not be eligible. Algorithms detect and filter spam and other content, typically before an inventory is built for each user.

Those filtering systems are quite effective, if Facebook’s latest Community Standards Enforcement Report is any indication. The network says that 845,000 pieces of content relating to self-injury and self-harm were removed in Q3 2019, of which 79.1% were detected proactively, and that over 99% of child nudity and exploitation posts were deleted over the past four quarters.

For every Explore ranking request, 500 candidates are selected from the thousands sampled and are passed along to the ranking stage. It’s there that they encounter a three-part infrastructure intended to balance relevance with computation efficiency.

In the first pass of the ranking stage, a distillation model mimics the combination of the other stages with a minimal number of features. It picks the 150 highest-quality and most relevant candidates out of the 500, after which a model with a full dense set of features (in the second phase) selects the top 50 candidates. Lastly, another model with a full set of features chooses the best 25 candidates, which populate the Explore grid.


Above: An illustration of the current final-pass model architecture.

Image Credit: Facebook

To keep the first-pass distillation model faithful to the later stages’ ranking order, the final pass relies on a multi-task, multi-layer algorithm that captures signals to predict actions people might take on content, from positive actions such as tapping Like or Favorite to negative actions like tapping the See Fewer Posts Like This button. The predictions are combined using a value model formula to capture prominence, after which a weighted sum determines whether the importance of a person saving a post, say, is higher than their liking a post.

In the interest of maintaining a “rich balance” between new content and existing content, the Explore team incorporated a rule into the aforementioned value model that boosts content diversity. It downranks posts from the same author or seed account by adding a penalty factor so users don’t see multiple posts from the same person or the same seed account in Explore.
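
The mechanics are simple to sketch. In this hypothetical Python illustration, a weighted value model scores each candidate and a multiplicative penalty downranks repeats from the same author or seed account; the action weights and penalty value are invented:

WEIGHTS = {"like": 1.0, "save": 2.5, "see_fewer": -5.0}  # saving outweighs liking

def value_score(action_probs):
    # Weighted sum of predicted action probabilities (the "value model formula").
    return sum(WEIGHTS[action] * p for action, p in action_probs.items())

def rank_with_diversity(candidates, penalty=0.9):
    seen = {}   # posts already ranked per author/seed account
    ranked = []
    for c in sorted(candidates, key=lambda c: value_score(c["p"]), reverse=True):
        score = value_score(c["p"]) * (penalty ** seen.get(c["author"], 0))
        seen[c["author"]] = seen.get(c["author"], 0) + 1
        ranked.append((c["id"], score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)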

“We rank the most relevant content based on the final value model score of each ranking candidate in a descendant way,” wrote the blog authors. “One of the most exciting parts of building Explore is the ongoing challenge of finding new and interesting ways to help our community discover the most interesting and relevant content on Instagram. We’re continuously evolving Instagram Explore, whether by adding media formats like Stories [or] entry points to new types of content, such as shopping posts and IGTV videos.”

Big Data – VentureBeat

It’s All In The Details: Some Thoughts For Executives In The Entertainment Industry

July 17, 2019   BI News and Info

In the first blog of this series, I discussed the entertainment industry’s mandate to hold their customers’ attention. In my second blog, I talked about the importance of creating experiences that extend beyond the entertainment venue. In particular, I mentioned Walt Disney’s genius for extending familiar stories into films and then into theme park experiences. However, perfecting the idea of an extended experience is an ongoing process, not just for Disney, but for everyone in the entertainment industry. Now I’ll reveal the secret for creating a very personal and memorable experience: “specificity.”

For example, when Disneyland opened in 1955, the thematic areas of the park were given rather generic names (Main Street USA, Adventureland, Frontierland, Fantasyland, and Tomorrowland). They did have some dark rides that were tied to specific films, like Snow White’s Adventure and Peter Pan’s Flight, but most of the attractions were also generic in nature, like Autopia, Storybook Land Canal Boats, and Frontierland Shootin’ Exposition. Ultimately, some of Disney’s nonspecific attractions — like Haunted Mansion, Pirates of the Caribbean, and Jungle Cruise — gained specificity when they went the other direction and became films.

Needless to say, Disney has learned a lot about creating experiences in the last 65 years. The original theory was that generic concepts would appeal to a wider audience, but that turns out not to be the case. In practice, people are drawn to richly detailed experiences because it gives them more potential ways to relate. For example, look at the new Star Wars: Galaxy’s Edge, which opened recently at Disneyland in California and is opening soon at Walt Disney World in Florida. Disney has learned that the more detailed and immersive an experience, the more personal it will be for patrons and the more likely it is to attract repeat business.

Of course, other parks have learned this lesson, too. For example, Bricksburg at Legoland in Florida; Angry Birds Land at Särkänniemi in Tampere, Finland and Thorpe Park in Surrey, England; Sesame Place at SeaWorld in Florida; and of course, The Wizarding World of Harry Potter at Universal Studios in Florida and California. Theme park executives know if they can leverage intellectual property that customers have been exposed to outside the park, it will create a more meaningful experience inside the park because customers will be immersed in a very specific and familiar world.

Casino resorts also offer these rich, immersive adventures. Las Vegas has experiences that range from the ultra-kitsch to the truly opulent. You’ll find Paris, Venice, New York, and ancient Greece within five miles of each other, and casinos use intellectual property to draw players to slot machines. Today, the slots are more interactive than ever and often have dynamic slot symbols moving about the screens. But the biggest experiential distinction among slot machines is branding. Would you rather play Game of Thrones, Kiss, Pac-Man, Wizard of Oz, or the all-time favorite Wheel of Fortune machine? One machine, TMZ, immerses patrons in a personalized experience by taking their picture and using it as a slot symbol.

Just like cinemas and entertainment parks, casino resorts need to hold a patron’s attention, but casinos hold their attention with fantasy. People come to resorts with the dream of not only beating the odds but living like the truly wealthy in luxurious rooms surrounded by sumptuous pools, fountains, and gardens. They’re thrilled by close encounters with beautiful showgirls or their favorite musical performer, and they delight when magicians or acrobats are able to do the seemingly impossible.

Cirque du Soleil is an excellent example of this. It has more than 25 visually mesmerizing shows running around the globe. Seven of them are running in Las Vegas alone and two leverage familiar intellectual property (The Beatles Love and Michael Jackson One). And all the shows are created with extraordinary detail and specificity that delivers a personal, memorable experience. In fact, ever on the cutting edge, Cirque du Soleil has encouraged the use of cellphones and a cloud-based app during its performances of Toruk to improve audience participation.


The future of technology-infused entertainment is limited only by the imagination.

Currently, Cirque du Soleil has more than 4,000 employees, but it wasn’t always that way. It began in 1984 with only 20 performers, a leaky tent, and considerable debt. The success of Cirque du Soleil is not due only to its attention to detail on stage but its exhaustive, detailed efforts behind the scenes, as well. For a great look at how that happens, check out its truly amazing, behind-the-scenes videos about its production of Ka.

In fact, it’s quite ironic that Las Vegas and other bastions of entertainment have a reckless “anything goes” reputation. The reality is that these places are subject to extraordinary regulation and must be able to assure the health and safety of their guests and employees. Only an intelligent enterprise that has complete control of its licensing and regulations, as well as an adequate, certifiably qualified workforce, and the ability to procure the best possible health and safety equipment, can be certain of that.

Further, if casino resorts want to provide their customers with unique experiences, they obviously need the ability to treat each customer uniquely. People come to these resorts for many different reasons. It might be to gamble, but it could be for a convention, a reunion, a wedding, or just for the shows. Patrons might be regular high-rollers, casual visitors, or travelers looking for a once-in-a-lifetime experience, and they should all be treated distinctively. While this probably means gathering significant data about these customers, it’s also critical to assure them that their information will be safely guarded, as casinos take confidentiality and customer data security very seriously.

Experiences don’t just happen, they’re created. It’s called “experience management” (XM), and there is no industry that requires zealous experience management more than the entertainment industry. Whether you’re a cinema, museum, concert venue, casino, amusement park, Renaissance fair, or circus, if you want to (figuratively) go from a leaky tent and handful of performers to a billion-dollar business, you need more than a C-suite; you need an X-suite – a group of executives who are focused on the experience of their customers.

Can I honestly say that every startup entertainment enterprise will become wildly successful strictly by focusing on experience management? No. But I can assure you that all highly successful entertainment enterprises are putting considerable focus on their experience management or they’re on the path to obsolescence.

For an in-depth look at how the X-suite is changing the way companies do business, read “Meet the ‘X-Suite’ – the job roles shaping Experience Management” by Qualtrics, an SAP company.

Digitalist Magazine

MIT CSAIL details technique for shrinking neural networks without compromising accuracy

May 7, 2019   Big Data

Deep neural networks — layers of mathematical functions modeled after biological neurons — are a versatile type of AI architecture capable of performing tasks from natural language processing to computer vision. That doesn’t mean that they’re without limitations, however. Deep neural nets are often quite large and require correspondingly large corpora, and training them can take days on even the priciest of purpose-built hardware.

But it might not have to be that way. In a new study (“The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks“) published by scientists at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), deep neural networks are shown to contain subnets that are up to 10 times smaller than the entire network, but which are capable of being trained to make equally precise predictions, in some cases more quickly than the originals.

The work is scheduled to be presented at the International Conference on Learning Representations (ICLR) in New Orleans, where it was named one of the conference’s top two papers out of roughly 1,600 submissions.

“If the initial network didn’t have to be that big in the first place, why can’t you just create one that’s the right size at the beginning?” said PhD student and coauthor Jonathan Frankle in a statement. “With a neural network you randomly initialize this large structure, and after training it on a huge amount of data it magically works. This large structure is like buying a big bag of tickets, even though there’s only a small number of tickets that will actually make you rich. But we still need a technique to find the winners without seeing the winning numbers first.”


Above: Finding subnetworks within neural networks.

Image Credit: MIT CSAIL

The researchers’ approach involved eliminating unnecessary connections among the functions — or neurons — in order to adapt them to low-powered devices, a process that’s commonly known as pruning. (They specifically chose connections that had the lowest “weights,” which indicated that they were the least important.) Next, they trained the network without the pruned connections and reset the weights, and after pruning additional connections over time, they determined how much could be removed without affecting the model’s predictive ability.
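
For flavor, here is a simplified sketch of iterative magnitude pruning with weight rewinding using PyTorch’s built-in pruning utilities — a loose illustration of the procedure, not the authors’ code; the architecture is arbitrary and the training loop is elided:

import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init = copy.deepcopy(model.state_dict())  # the original initialization ("the ticket")

for _ in range(5):  # several prune-and-rewind rounds
    # train(model)  # train to convergence here (elided)
    for layer in (m for m in model if isinstance(m, nn.Linear)):
        prune.l1_unstructured(layer, name="weight", amount=0.2)  # cut lowest-|w| 20%
    # Rewind surviving weights to their initial values; the pruning masks persist.
    rewind = {k + "_orig" if k.endswith("weight") else k: v for k, v in init.items()}
    model.load_state_dict(rewind, strict=False)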

After repeating the process tens of thousands of times on different networks in a range of conditions, they report that the AI models they identified were consistently less than 10% to 20% of the size of their fully connected parent networks.

“It was surprising to see that re-setting a well-performing network would often result in something better,” says coauthor and assistant professor Michael Carbin. “This suggests that whatever we were doing the first time around wasn’t exactly optimal and that there’s room for improving how these models learn to improve themselves.”

Carbin and Frankle note that they only considered vision-centric classification tasks on smaller data sets, and they leave to future work exploring why certain subnetworks are particularly adept at learning and ways to quickly spot these subnetworks. However, they believe that the results may have implications for transfer learning, a technique where networks trained for one task are adapted to another task.

Big Data – VentureBeat

How to read the Dynamics 365 Mailbox details file for troubleshooting

February 15, 2019   Microsoft Dynamics CRM

In this post, we are going to look at the Mailbox Details log file that can be used when troubleshooting Server Side Synchronization and App for Outlook scenarios. Thanks again to Cody Dinwiddie on our Dynamics 365 Support team for helping put this information together.

Now, if you don’t know, Server-Side Sync is processed by the Async Service, which means it can sometimes be difficult to find enough detail to troubleshoot ACT (Appointment, Contact, and Task) and Email synchronization issues. Mailbox alerts are always a great place to start, but they can really only give you a few pieces of information, such as whether a Test and Enable was successful or what errors the mailbox is running into. However, they will not show other key information, such as when the next sync cycle is for the mailbox. In most cases, this is the information I am looking for, and it is one of the most common questions I receive.

To find this additional information, open the appropriate mailbox record and select Download Mailbox Details on the ribbon.

Once you click to download this .log file, open it up with a text editor like Notepad. Note: For a Dynamics 365 Online instance, all times will be in UTC.

Let’s break down the sections:

Mailbox Async Processing State

mailboxid : 9fb6600b-0965-e811-a97f-000d4a161089

(GUID of the mailbox record)

hostid: Null         

(This will contain the name of the async server processing the request if mid-process)

processingstatecode : 0         

(This will be 1 if async is currently processing this mailbox and you see a hostid populated)

processinglastattemptedon : 12/3/2018 3:23:36 PM       

(This is the last time the mailbox attempted to process)

Mailbox Synchronization Methods

(EmailRouter here means Server-Side Sync)

incomingemaildeliverymethod : EmailRouter

outgoingemaildeliverymethod : EmailRouter

actdeliverymethod : EmailRouter

Mailbox Enabled State

(These states are tied to the test/enable being successful from a server-side sync perspective)

enabledforincomingemail : True

enabledforoutgoingemail : True

enabledforact : True

Mailbox Idle State

(Shows how many times the mailbox was processed with no Email items or Appointments, Contacts and Tasks to synchronize)

noemailcount : 3        

(This means 3 sync cycles ago, an email was promoted from the Exchange mailbox into Dynamics)

noactcount : 2      

(This means 2 sync cycles ago, an Appointment, Contact or Task was promoted from the Exchange mailbox into Dynamics)

Mailbox Backoff Parameters

(Shows when the mailbox is scheduled to process next. These are the fields I use the most)

postponemailboxprocessinguntil : 12/3/2018 3:28:37 PM

(This value controls when the Asynchronous Processing Service will run on this mailbox, which actually performs a synchronization)

postponesendinguntil : 11/29/2018 6:07:17 PM    

(If this value is the same or before the postponemailboxprocessinguntil value, asynchronous emails will be sent from Dynamics on the next sync)

receivingpostponeduntil : 12/3/2018 3:28:37 PM 

(If this value is the same or before the postponemailboxprocessinguntil value, emails in Exchange will attempt to sync with Dynamics)

receivingpostponeduntilforact : 12/3/2018 3:25:34 PM         

(If this value is the same or before the postponemailboxprocessinguntil value, Appointments, Contacts and Tasks will attempt to sync with Dynamics)
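
Since these four timestamps answer the “when will this mailbox sync next?” question, it can be handy to pull them out programmatically. Here is a small illustrative Python helper (a sketch, not a Microsoft tool) that reads them from a downloaded Mailbox Details .log file:

from datetime import datetime

BACKOFF_FIELDS = ("postponemailboxprocessinguntil", "postponesendinguntil",
                  "receivingpostponeduntil", "receivingpostponeduntilforact")

def read_backoff(path):
    values = {}
    with open(path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            key, _, raw = line.partition(":")
            key = key.strip().lower()
            if key in BACKOFF_FIELDS and raw.strip():
                # Times in an Online instance's log are UTC, e.g. "12/3/2018 3:28:37 PM"
                values[key] = datetime.strptime(raw.strip(), "%m/%d/%Y %I:%M:%S %p")
    return values

# next_sync = read_backoff("mailbox-details.log")["postponemailboxprocessinguntil"]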

Mailbox Test and Enable Parameters

testemailconfigurationscheduled : False         

(This will be true if test/enable is pending)

testemailconfigurationretrycount : 0       

(This enumerates how many times the test/enable process has attempted to retry in the event of a failure)

testmailboxaccesscompletedon : 11/29/2018 6:07:22 PM         

(This provides the time the last test/enable was run on this mailbox)

postponetestemailconfigurationuntil : 11/29/2018 6:12:22 PM         

(If this value is the same or before the postponemailboxprocessinguntil value, test/enable will be run on the mailbox)

Mailbox Last Sync Cycle Information

lastsuccessfulsynccompletedon : 12/3/2018 3:23:37 PM         

(This provides the time that the mailbox last performed a sync without running into an exception/ error)

lastsyncerror: Null     

(This will provide limited exception details of what error occurred on the last mailbox sync attempt, if one occurred)

lastsyncerrorcode: Null         

(This will provide the exception code for the last error, if available)

lastsyncerrorcount : 0   

(This will enumerate how many consecutive times the same error has occurred for the mailbox synchronization)

lastsyncerroroccurredon : 10/17/2018 5:41:58 PM         

(This provides the time the last error occurred for the mailbox. If this time is before the lastsuccessfulsynccompletedon and processinglastattemptedon times, no error happened on the last sync)

itemsprocessedforlastsync : 2       

(This enumerates how many Exchange items were successfully promoted on the last sync cycle)

itemsfailedforlastsync : 0  

(This enumerates how many items succeeded the promotion criteria, but failed to promote to Dynamics)

Email Server Profile Details

(This section provides details on the Email Server Profile configured to this mailbox)

Email Server Profile General Settings

servertype : 0      

(0 is Exchange, 1 is others, such as POP3)

useautodiscover : True       

(False indicates that the EWS URL is explicitly defined in the Email Server Profile)

maxconcurrentconnections : 108       

(This value defines how many simultaneous connections to Exchange that this Email Server Profile can handle)

minpollingintervalinminutes : 0      

(This value determines how often Asynchronous processing is attempted for a mailbox in minutes. The minimum value is 5, so “0” in this context means that mailboxes sync every 5 minutes)

Email Server Profile Incoming Email Settings

incomingauthenticationprotocol : AutoDetect     

(This value defines what authentication method is being used, such as through an impersonation account or via mailbox credentials, etc.)

incomingcredentialretrieval : S2S

(This defines the authentication credentials being used, such as S2S (server-to-server), OAuth, etc.)

incominguseimpersonation : False      

(This defines if an account with Application Impersonation is being used for incoming email synchronization)

incomingusessl : True   

(This defines if synchronization of incoming items uses certificates for encryption)

incomingportnumber : 443     

(This defines the port being used for incoming synchronization)

Email Server Profile Outgoing Email Settings

outgoingauthenticationprotocol : AutoDetect      

(This value defines what authentication method is being used, such as through an impersonation account, via mailbox credentials, etc.)

outgoingcredentialretrieval : S2S         

(This defines the authentication credentials being used, such as S2S (server-to-server), OAuth, etc.)

outgoinguseimpersonation : False     

(This defines if an account with Application Impersonation is being used for outgoing email synchronization)

outgoingusessl : True      

(This defines if synchronization of outgoing items uses certificates for encryption)

outgoingportnumber : 443    

(This defines the port being used for outgoing synchronization)

Recent Trace Log details

(This section provides the last 10 mailbox specific logs for the associated mailbox. We will not be covering troubleshooting of the different errors here. Some of these will be more straightforward and some will require more telemetry review from a Microsoft resource. That is where the MailboxId and ActivityId can be used to correlate to additional logging.)

tracecode : 126

errortypedisplay : ExchangeSyncServerServiceError

errordetails : T:391

ActivityId: a2559263-94de-4886-bc28-5fc7511d4f5f

>Exception : Unhandled exception:

Exception type: System.Net.WebException

Message: The remote server returned an error: (503) Server Unavailable.   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)   at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)   at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.EndGetEwsHttpWebResponse(IEwsHttpWebRequest request, IAsyncResult asyncResult) — End stack trace — Exception type: Microsoft.Exchange.WebServices.Data.ServiceRequestException

Message: The request failed. The remote server returned an error: (503) Server Unavailable.   at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.EndGetEwsHttpWebResponse(IEwsHttpWebRequest request, IAsyncResult asyncResult)   at Microsoft.Exchange.WebServices.Data.SimpleServiceRequestBase.EndInternalExecute(IAsyncResult asyncResult)   at…

tracecode : 2

errortypedisplay : UnknownIncomingEmailIntegrationError

errordetails : ActivityId: 08657d6f-7e7b-424c-a974-6de3d4le2ae4a

>Error : <ResponseMessageType xmlns:q1="http://schemas.microsoft.com/exchange/services/2006/messages" p2:type="q1:FindItemResponseMessageType" ResponseClass="Error" xmlns:p2="http://www.w3.org/2001/XMLSchema-instance"><q1:MessageText>Mailbox move in progress. Try again later., Cannot open mailbox.</q1:MessageText><q1:ResponseCode>ErrorMailboxMoveInProgress</q1:ResponseCode><q1:DescriptiveLinkKey>0</q1:DescriptiveLinkKey></ResponseMessageType>

tracecode : 52

errortypedisplay : IncomingEmailServerServiceError

errordetails : ActivityId: b66413dd-c51e-43d1-9404-adb044abf655

>Error : System.Net.WebException: The request failed with HTTP status 503: Service Unavailable.

at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)

at System.Web.Services.Protocols.SoapHttpClientProtocol.EndInvoke(IAsyncResult asyncResult)

at Microsoft.Crm.Asynchronous.EmailConnector.ExchangeServiceBinding.EndFindItem(IAsyncResult asyncResult)

at Microsoft.Crm.Asynchronous.EmailConnector.FindItemsStep.EndCall()

at Microsoft.Crm.Asynchronous.EmailConnector.ExchangeIncomingEmailProviderStep.EndOperation()

Thanks for reading!

Aaron Richards

Dynamics 365 Customer Engagement in the Field

Apple’s new Transparency Report page details government data requests

December 21, 2018   Big Data

In past years, Apple released bi-annual “transparency reports” to detail the personal data requests it has received from governments around the world. Today, the company has upped the ante with an upgraded Transparency Report section of its Privacy mini-site (via TechCrunch), clearly separating key request statistics for dozens of countries while offering individual report links for each of them.

At a glance, the new Transparency Report enables users to see how many requests a government has made across four categories: “Device,” “Account,” “Financial Identifier,” and “Emergency.” Device is generally used by law enforcement officials to help locate lost or stolen devices; Account seeks a user’s basic account information or content; Financial Identifier is for suspected fraudulent credit, debit, or gift card transactions; and Emergency is for situations involving imminent danger to a person.

The individual country reports go into considerable detail regarding requests, with the United States of America entry discussing everything from specific types of legal process to general notes on the nature of government inquiries. Yearly and biannual historical details are also included.

Not surprisingly, the trends over time show an increased number of requests, as well as a generally increased number of cases in which Apple has provided the data requested. As of the last reporting period, Apple provided data for 3,185 cases — 80 percent of the requested number — up from 2,182 in the comparable year-ago period. You can see the details for your own country here.

Big Data – VentureBeat