Tag Archives: Details

IBM’s Rob Thomas details key AI trends in shift to hybrid cloud

March 19, 2021   Big Data

The last year has seen a major spike in the adoption of AI models in production environments, in part driven by the need to drive digital business transformation initiatives. While it’s still early days as far as AI is concerned, it’s also clear AI in the enterprise is entering a new phase.

Rob Thomas, senior vice president for software, cloud, and data platform at IBM, explains to VentureBeat how this next era of AI will evolve as hybrid cloud computing becomes the new norm in the enterprise.

As part of that effort, Thomas reveals IBM has formed a software-defined networking group to extend AI all the way out to edge computing platforms.

This interview has been edited for brevity and clarity.

VentureBeat: Before the COVID-19 pandemic hit, there was a concern AI adoption was occurring slowly. How much has that changed in the past year?

Rob Thomas: We’ve certainly seen massive acceleration for things like Watson Assistant for customer service. That absolutely exploded. We had nearly 100 customers that started and then went live in the first 90 days after COVID hit. When you broaden it out, there are five big use cases that have come up over the last year. One is customer service. Second is financial planning and budgeting. Third is data science. There’s such a shortage of data science skills, but that is slowly changing. Fourth is compliance. Regulatory compliance is only increasing, not decreasing. And fifth is AI Ops. We launched our first AI Ops product last June, and that’s exploded as well, which is related to COVID in that everybody was forced remote. How do we better manage our IT systems? It can’t be all through humans because we’re not on site. We’ve got to use software to do that. If you’d asked me 18 months ago, I wouldn’t have given you those five. I would have said, “There’s a bunch of experimentation.” Now we see pretty clearly there are five things people are doing that represent 80% of the activity.

VentureBeat: Should organizations be in the business of building AI or should they buy it in one form or another?

Thomas: I hate to be too dramatic, but we’re probably in a permanent and secular change where people want to build. Trying to fight that is a tough discussion because people really want to build. When we first started with Watson, the idea was that it’s one big platform that does everything you need. What we’ve discovered along the way is that if you componentize and focus on where we’re really good, people will pick up those pieces and use them. We focused on three areas for AI. One is natural language processing (NLP). If you look at external benchmarks, we had the best NLP in a business context; in terms of document understanding and semantic parsing of text, we do that really well. The second is automation. We’ve got really good models for how you automate business processes. Third is trust. I don’t really think anybody is going to invest to build a data lineage model, an explainability model, or bias detection. Why would a company build that? That’s a component we can provide. If companies want to be regulatory compliant and want explainability, we provide a good answer for that.

VentureBeat: Do you think people understand the importance of explainability and the provenance of AI models yet? Are they just kind of blowing by that issue in the wake of the pandemic?

Thomas: We launched the first version of what we built to address that about two years ago. I would say that for the first year we got a lot of social credit. This changed dramatically in the second half of last year. We won some significant deals that were specifically for model management, explainability, and lifecycle management of AI because companies have grown to the point where they have thousands of AI models. It’s pretty clear, once you get to that scale, you have no choice but to do this, so I actually think this is about to explode. I think the tipping point is once you get north of a thousandish models in production. At that point, it’s kind of like nobody’s minding the store. Somebody has to be in charge when you have that much machine learning making decisions. I think the second half of last year will prove to be a tipping point.

Above: IBM senior VP of software, cloud, and data Rob Thomas

VentureBeat: Historically, AI models have been trained mainly in the cloud, and then inference engines are employed to push AI out to where it’d be consumed. As edge computing evolves, there will be a need to push the training of AI models out to the edge where data is being analyzed at the point of creation and consumption. Is that the next AI frontier?

Thomas: I think it’s inevitable AI is gonna happen where the data is because it’s not economical to do the opposite, which is to start everything with a Big Data movement. Now, we haven’t really launched this formally, but two months ago I started a unit in IBM software focused on software-defined networking (SDN) and the edge. I think it’s going to be a long-term trend where we need to be able to do analytics, AI, and machine learning (ML) at the edge. We’ve actually created a unit to go after that specifically.

VentureBeat: Didn’t IBM sell an SDN group to Cisco a long time ago now?

Thomas: Everything that we sold in the ’90s was hardware-based networking. My view is everything that’s done in hardware from a networking at the edge perspective is going to be done in software in the next five to seven years. That’s what’s different now.

VentureBeat: What differentiates IBM when it comes to AI most these days?

Thomas: There are three major trends that we see happening in the market. One is around decentralization of IT. We went from centralized mainframes to client/server and mobile. The initial chapter of public cloud was very much a return to a centralized architecture that brings everything to one place. We are now riding the trend that says we will decentralize again, in a world that will become much more about multicloud and hybrid cloud.

The second is around automation. How do you automate feature engineering and data science? We’ve done a lot in the realm of automation. The third is just around getting more value out of data. There was an IDC study last year that found 90% of the data in businesses is still unutilized or underutilized. Let’s be honest: we haven’t really cracked that problem yet. I’d say those are the three megatrends that we’re investing against. How does that manifest in the IBM strategy? In three ways. One is we are building all of our software on open source. That was not the case two years ago. Now, in conjunction with the Red Hat acquisition, we think there’s room in the market for innovation in and around open source. You see the cloud providers trying to effectively pirate open source rather than contribute. Everything we’re doing from a software perspective is now either open source itself or built on open source.

The second is around ecosystem. For many years we thought we could do it ourselves. One of the biggest changes we’ve made in conjunction with the move to open source is that we’re going to do half of our business by making partners successful. That’s a big change. That’s why you see things like the announcement with Palantir. I think most people were surprised. That’s probably not something we would have done two years ago. It’s an acknowledgment that all the best innovation doesn’t have to come from IBM. If we can work with partners that have a similar philosophy in terms of open source, that’s what we’re doing.

The third is a little bit more tactical. We announced earlier this year that we’ve completely changed our go-to-market strategy, which is to be much more technical. That’s what we’ve heard customers want. They don’t want a salesperson to come in and read them the website. They want somebody to roll up their sleeves and actually build something and co-create.

VentureBeat: How do you size up the competitive landscape?

Thomas: Watson components can run anywhere. The real question is why is nobody else enabling their AI to run anywhere? IBM is the only company doing that. My thesis is that most of the other big AI players have a strategy tax. If your whole strategy is to bring everything to our cloud, the last thing you want to do is enable your AI to run other places because then you’re acknowledging that other places exist. That’s a strategy advantage for us. We’re the only ones that can truly say you can bring the AI to where the data is. I think that’s going to give us a lot of momentum. We don’t have to be the biggest compute provider, but we do have to make it incredibly easy for companies to work across cloud environments. I think that’s a pretty good bet.

VentureBeat: Today there is a lot of talk about MLOps, and we already have DevOps and traditional IT operations. Will all that converge one day or will we continue to need a small army of specialists?

Thomas: That’s a little tough to predict. I think the reason we’ve gotten a lot of momentum with AI Ops is because we took the stuff that was really hard in terms of data virtualization, model management, and model creation, and automated 60-70% of that. That’s hard. I think it’s going to be hard to ever automate 100%. I do think people will get a lot more efficient as they get more models in production. You need to manage those in an automated fashion versus a manual fashion, but I think it’s a little tough to predict that at this stage.

VentureBeat: There are a lot of different AI engines. IBM has partnered with Salesforce. Will we see more of that type of collaboration? Will the AI experience become more federated?

Thomas: I think that’s right. Let’s look at what we did with Palantir. Most people thought of Palantir as an AI company. Obviously, they associate Watson with AI. Palantir does something really good, which is a low-code, no-code environment so that the data science team doesn’t have to be an expert. What they don’t have is an environment for the data scientist that does want to go build models. They don’t have a data catalog. If you put those two together, suddenly you’ve got an AI system that’s really designed for a business. It’s got low code, no code, it’s got Python, it’s got data virtualization, a data catalog. Customers can use that joint stack from us and will be better off than had they chosen one or the other and then tried to fix the things themselves. I think you’ll probably see more partnerships over time. We’re really looking for partnerships that are complementary to what we’re doing.

VentureBeat: If organizations are each building AI models to optimize specific processes in their favor, will this devolve into competing AI models simply warring with one another?

Thomas: I don’t know if it’ll be that straightforward. Two companies are typically using very different datasets. Now maybe they’re both joining with an external dataset that’s common, but whatever they have is first-party data or third-party data that is probably unique to them. I think you get different flavors, as opposed to two things that are conflicting or head to head. I think there’s a little bit more nuance there.

VentureBeat: Do you think we’ll keep calling it AI? Or will we get to a point where we just kind of realize that it’s a combination of algorithms and statistics and math [but we] don’t have to necessarily call it AI?

Thomas: I think the term will continue for a while because there is a difference between a rules-based system and a true learning machine that gets better over time as you feed it more data. There is a real distinction.

Big Data – VentureBeat

Google details how it’s using AI and machine learning to improve search

October 16, 2020   Big Data

During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.

Soon, Google says, users will be able to see how busy places are in Google Maps without having to search for specific beaches, parks, grocery stores, gas stations, laundromats, pharmacies, or other businesses, an expansion of Google’s existing busyness metrics. The company also says it’s adding COVID-19 safety information to business profiles across Search and Maps, revealing whether they’re using safety precautions like temperature checks, plexiglass, and more.

An algorithmic improvement to “Did you mean,” Google’s spell-checking feature for Search, will enable more accurate and precise spelling suggestions. Google says the new underlying language model contains 680 million parameters — the variables that determine each prediction — and runs in less than three milliseconds. “This single change makes a greater improvement to spelling than all of our improvements over the last five years,” Prabhakar Raghavan, head of Search at Google, said in a blog post.

Beyond this, Google says it can now index individual passages from webpages as opposed to whole pages. When this rolls out fully, it will improve roughly 7% of search queries across all languages, the company claims. A complementary AI component will help Search capture the nuances of what webpages are about, ostensibly leading to a wider range of results for search queries.

“We’ve applied neural nets to understand subtopics around an interest, which helps deliver a greater diversity of content when you search for something broad,” Raghavan continued. “As an example, if you search for ‘home exercise equipment,’ we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page.”

Google is also bringing Data Commons, its open knowledge repository that combines data from public datasets (e.g., COVID-19 stats from the U.S. Centers for Disease Control and Prevention) using mapped common entities, to search results on the web and mobile. In the near future, users will be able to search for topics like “employment in Chicago” on Search to see information in context.

On the ecommerce and shopping front, Google says it has built cloud streaming technology that enables users to see products in augmented reality (AR). With cars from Volvo, Porsche, and other “top” auto brands, for example, they can zoom in to view the steering wheel and other details in a driveway, to scale, on their smartphones. Separately, Google Lens on the Google app or Chrome on Android (and soon iOS) will let shoppers discover similar products by tapping on elements like vintage denim, ruffle sleeves, and more.

Above: Augmented reality previews in Google Search.

Image Credit: Google

In another addition to Search, Google says it will deploy a feature that highlights notable points in videos — for example, a screenshot comparing different products or a key step in a recipe. (Google expects 10% of searches will use this technology by the end of 2020.) And Live View in Maps, a tool that taps AR to provide turn-by-turn walking directions, will enable users to quickly see information about restaurants including how busy they tend to get and their star ratings.

Lastly, Google says it will let users search for songs by simply humming or whistling melodies, initially in English on iOS and in more than 20 languages on Android. You will be able to launch the feature by opening the latest version of the Google app or Search widget, tapping the mic icon, and saying “What’s this song?” or selecting the “Search a song” button, followed by at least 10 to 15 seconds of humming or whistling.

“After you’re finished humming, our machine learning algorithm helps identify potential song matches,” Google wrote in a blog post. “We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available.”

Google says that melodies hummed into Search are transformed by machine learning algorithms into a number-based sequence representing the song’s melody. The models are trained to identify songs based on a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. They also strip away all the other details, like accompanying instruments and the voice’s timbre and tone. This leaves a fingerprint that Google compares with thousands of songs from around the world to identify potential matches in real time, much like the Pixel’s Now Playing feature.
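
To make the idea concrete, here is a minimal, hypothetical sketch of contour-based melody matching in Python. It is not Google’s pipeline (the production system uses learned embeddings at far larger scale); it only shows how converting pitches to interval sequences yields a key-invariant fingerprint that can be compared across recordings:

```python
# Hypothetical sketch of contour-based melody matching; not Google's actual code.
import numpy as np

def melody_contour(pitches_hz):
    """Convert a pitch track to a key-invariant sequence of semitone intervals."""
    semitones = 12 * np.log2(np.asarray(pitches_hz) / 440.0)  # pitch relative to A4
    return np.diff(semitones)  # intervals discard the singer's absolute key

def match_score(query_hz, reference_hz):
    """Score two melodies by (negative) distance between their contours."""
    q, r = melody_contour(query_hz), melody_contour(reference_hz)
    n = min(len(q), len(r))  # crude alignment; real systems use DTW or embeddings
    return -float(np.linalg.norm(q[:n] - r[:n]))

reference = [440.0, 494.0, 523.0, 587.0]  # melody fragment sung around A
hummed = [392.0, 440.0, 466.0, 523.0]     # same contour, hummed roughly in G
print(match_score(hummed, reference))     # close to 0, i.e., a strong match
```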

“From new technologies to new opportunities, I’m really excited about the future of search and all of the ways that it can help us make sense of the world,” Raghavan said.

Last month, Google announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google also recently revealed it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google said it launched an update using language models to improve the matching between news stories and available fact checks.

In 2019, Google peeled back the curtains on its efforts to solve query ambiguities with a technique called Bidirectional Encoder Representations from Transformers, or BERT for short. BERT, which emerged from the tech giant’s research on Transformers, forces models to consider the context of a word by looking at the words that come before and after it. According to Google, BERT helped Google Search better understand 10% of queries in the U.S. in English — particularly longer, more conversational searches where prepositions like “for” and “to” matter a lot to the meaning.

BERT is now used in every English search, Google says, and it’s deployed across languages including Spanish, Portuguese, Hindi, Arabic, and German.

Big Data – VentureBeat

Intel details chips designed for IoT and edge workloads

September 23, 2020   Big Data

Intel today announced the launch of new products tailored to edge computing scenarios like digital signage, interactive kiosks, medical devices, and health care service robots. The 11th Gen Intel Core Processors, Atom x6000E Series, Pentium, Celeron N, and J Series bring new AI, security, functional safety, and real-time capabilities to edge customers, the chipmaker says, laying the groundwork for innovative future applications.

Intel expects the edge market to be a $65 billion silicon opportunity by 2024. The company’s own revenue in the space grew more than 20% to $9.5 billion in 2018. And according to a 2020 IDC report, up to 70% of all enterprises will process data at the edge within three years. To date, Intel claims to have cultivated an ecosystem of more than 1,200 partners, including Accenture, Bosch, ExxonMobil, Philips, Verizon, and Viewsonic, with over 15,000 end customer deployments across “nearly every industry.”

The 11th Gen Core processors — which Intel previewed in early September — are enhanced for internet of things (IoT) use cases requiring high-speed processing, computer vision, and low-latency deterministic processing, the company says. They bring up to a 23% performance gain in single-threaded workloads, a 19% gain in multithreaded workloads, and up to a 2.95 times gain in graphics workloads versus the previous generation. New dual video decode boxes allow the processors to ingest up to 40 simultaneous video streams at 1080p and up to 30 frames per second and output four channels of 4K or two channels of 8K video.

According to Intel, the combination of the 11th Gen’s SuperFin process improvements, miscellaneous architectural enhancements, and Intel’s OpenVINO software optimizations translates to 50% faster inferences per second compared with the previous 8th Gen processor using CPU mode, or up to 90% faster inferences using the processors’ GPU-accelerated mode. (Intel says the 11th Gen Core i5 is up to twice as fast in terms of inferences per second as an 8th Gen Core i5-8500 when running on just the CPU in each product.) AI inferencing algorithms can run on up to 96 graphics execution units (INT8) or on the CPU with VNNI built in, an x86 extension that’s part of Intel’s AVX-512 processor instruction set for accelerating convolutional neural network-based algorithms.
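
For a sense of how developers target those two paths, the sketch below uses the 2020-era OpenVINO Inference Engine Python API. The model files are placeholders, and exact API details vary by OpenVINO release:

```python
# Hypothetical sketch with OpenVINO's Inference Engine API (circa 2020/2021);
# model.xml/model.bin are placeholder IR files from the Model Optimizer.
from openvino.inference_engine import IECore
import numpy as np

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))

# The same network can run on the CPU (where INT8 kernels can use VNNI/AVX-512)
# or on the integrated GPU's execution units by changing only the device name.
for device in ("CPU", "GPU"):
    exec_net = ie.load_network(network=net, device_name=device)
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy NCHW input
    result = exec_net.infer(inputs={input_name: frame})
    print(device, {name: out.shape for name, out in result.items()})
```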

As for the Atom x6000E Series, Pentium, Celeron N, and J Series, Intel says they represent its first processor platform specifically enhanced for IoT. All four deliver up to 2 times better graphics performance, a dedicated real-time offload engine, enhanced I/O and storage, and the Intel Programmable Services Engine, which brings out-of-band and in-band remote device management. They also support 2.5GbE time-sensitive networking components and resolutions up to 4K at 60 frames per second on upwards of three displays, and they meet baseline safety requirements with built-in hardware-based security.

Intel says it already has 90 partners committed to delivering 11th Gen Core solutions and up to 100 partners locked in for the Intel Atom x6000E Series, Intel Pentium, Celeron N, and J Series.

Big Data – VentureBeat

Qualcomm details Cloud AI 100 chipset, announces developer kit

September 16, 2020   Big Data

During its AI Day conference last April, Qualcomm unveiled the Cloud AI 100, a chipset purpose-built for machine learning inferencing and edge computing workloads. Details were scarce at press time, evidently owing to a lengthy production schedule. But today Qualcomm announced a release date for the Cloud AI 100 — the first half of 2021, after sampling this fall — and shared details about the chipset’s technical specs.

Qualcomm expects the Cloud AI 100 to give it a leg up in an AI chipset market expected to reach $66.3 billion by 2025, according to a 2018 Tractica report. Last year, SVP of product management Keith Kressin said he anticipates that inference — the process during which an AI model infers results from data — will become a “significant-sized” market for silicon, growing 10 times from 2018 to 2025. With the Cloud AI 100, Qualcomm hopes to tackle specific markets, such as datacenters, 5G infrastructure, and advanced driver-assistance systems.

The Cloud AI 100 comes in three flavors — DM.2e, DM.2, and PCIe (Gen 3/4) — corresponding to performance range. At the low end, the Cloud AI 100 Dual M.2e and Dual M.2 models can hit between 50 TOPS (50 trillion operations per second) and 200 TOPS, while the PCIe model achieves up to 400 TOPS, according to Qualcomm. All three ship with up to 16 AI accelerator cores paired with up to 144MB RAM (9MB per core) and 32GB LPDDR4x on-card DRAM, which the company claims outperforms the competition by 106 times when measured by inferences per second per watt using the ResNet-50 algorithm. The Cloud AI 100 Dual M.2e and Dual M.2 attain 10,000 to 15,000 inferences per second at under 50 watts, and the PCIe card hovers around 25,000 inferences at 50 to 100 watts.

Qualcomm says the Cloud AI 100, which is manufactured on a 7-nanometer process, shouldn’t exceed a power draw of 75 watts. Here’s the breakdown for each card:

  • Dual M.2e: 15 watts, 70 TOPS
  • Dual M.2: 25 watts, 200 TOPS
  • PCIe: 75 watts, 400 TOPS
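
A quick back-of-the-envelope calculation from that breakdown shows why Qualcomm quotes efficiency in inferences per second per watt rather than raw throughput; this sketch simply divides the listed peak TOPS by the rated wattage:

```python
# Peak efficiency implied by the card breakdown above (TOPS per watt).
cards = {
    "Dual M.2e": (70, 15),   # (peak TOPS, rated watts)
    "Dual M.2":  (200, 25),
    "PCIe":      (400, 75),
}
for name, (tops, watts) in cards.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
# Dual M.2e: 4.7, Dual M.2: 8.0, PCIe: 5.3 -- the mid-tier card is the most
# efficient per watt, even though the PCIe card has the highest throughput.
```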

The first Cloud AI 100-powered device — the Cloud Edge AI Development Kit — is scheduled to arrive in October. It looks similar to a wireless router, with a black shell and an antenna held up by a plastic stand. But it runs CentOS 8.0 and packs a Dual M.2 Cloud AI 100, a Qualcomm Snapdragon 865 system-on-chip, a Snapdragon X55 5G modem, and an NVMe SSD.

The Cloud AI 100 and products it powers integrate a full range of developer tools, including compilers, debuggers, profilers, monitors, servicing, chip debuggers, and quantizers. They also support runtimes like ONNX, Glow, and XLA, as well as machine learning frameworks such as TensorFlow, PyTorch, Keras, MXNet, Baidu’s PaddlePaddle, and Microsoft’s Cognitive Toolkit for applications like computer vision, speech recognition, and language translation.

Big Data – VentureBeat

Baidu details its adversarial toolbox for testing robustness of AI models

January 18, 2020   Big Data

No matter the claimed robustness of AI and machine learning systems in production, none are immune to adversarial attacks, or techniques that attempt to fool algorithms through malicious input. It’s been shown that generating even small perturbations on images can fool the best of classifiers with high probability. And that’s problematic considering the wide proliferation of the “AI as a service” business model, where companies like Amazon, Google, Microsoft, Clarifai, and others have made systems that might be vulnerable to attack available to end users.

Researchers at tech giant Baidu propose a partial solution in a recent paper published on Arxiv.org: AdvBox. They describe it as an open source toolbox for generating adversarial examples, and they say it’s able to fool models in frameworks like Facebook’s PyTorch and Caffe2, MXNet, Keras, Google’s TensorFlow, and Baidu’s own PaddlePaddle.

While AdvBox itself isn’t new — the initial release was over a year ago — the paper dives into the technical details.

AdvBox is based on Python, and it implements several common attacks that perform searches for adversarial samples. Each attack method uses a distance measure to quantify the size of adversarial perturbation, while a sub-model — Perceptron, which supports image classification and object detection models as well as cloud APIs — evaluates the robustness of a model to noise, blurring, brightness adjustments, rotations, and more.
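
For context, here is a generic fast gradient sign method (FGSM) attack sketched in PyTorch. This is not AdvBox’s own API; it only illustrates the kind of gradient-based, distance-bounded search such toolboxes implement, in this case a single step bounded in the L-infinity norm:

```python
# Generic FGSM sketch in PyTorch; illustrative, not AdvBox's actual interface.
import torch
import torch.nn.functional as F

def fgsm(model, x, labels, eps=0.03):
    """One signed-gradient step, bounded by eps in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = x + eps * x.grad.sign()     # nudge inputs up the loss gradient
    return x_adv.clamp(0, 1).detach()   # keep pixels in the valid [0, 1] range

# Usage: x_adv = fgsm(classifier, images, labels); the attack succeeds when
# classifier(x_adv).argmax(1) no longer matches labels.
```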

AdvBox ships with tools for testing detection models susceptible to so-called adversarial t-shirts or facial recognition attacks. Plus, it offers access to Baidu’s cloud-hosted deepfakes detection service via an included Python script.

“Small and often imperceptible perturbations to [input] are sufficient to fool the most powerful [AI],” wrote the coauthors. “Compared to previous work, our platform supports black box attacks … as well as more attack scenarios.”

Baidu isn’t the only company publishing resources designed to help data scientists defend against adversarial attacks. Last year, IBM and MIT released a metric for estimating the robustness of machine learning and AI algorithms called Cross Lipschitz Extreme Value for Network Robustness, or CLEVER for short. And in April, IBM announced a developer kit called the Adversarial Robustness Toolbox, which includes code for measuring model vulnerability and suggests methods for protecting against runtime manipulation. Separately, researchers at the University of Tübingen in Germany created Foolbox, a Python library for generating over 20 different attacks against TensorFlow, Keras, and other frameworks.

But much work remains to be done. According to Jamal Atif, a professor at the Université Paris-Dauphine, the most effective defense strategy in the image classification domain — augmenting a group of photos with examples of adversarial images — at best has gotten accuracy back up to only 45%. “This is state of the art,” he said during an address in Paris at the annual France is AI conference hosted by France Digitale. “We just do not have a powerful defense strategy.”

Big Data – VentureBeat

Tencent details how its MOBA-playing AI system beats 99.81% of human opponents

December 25, 2019   Big Data

In August, Tencent announced it had developed an AI system capable of defeating teams of pros in a five-on-five match in Honor of Kings (or Arena of Valor, depending on the region). This was a noteworthy achievement — Honor of Kings occupies the video game subgenre known as multiplayer online battle arena games (MOBAs), which are incomplete information games in the sense that players are unaware of the actions other players choose. The endgame, then, isn’t merely AI that achieves superhuman Honor of Kings performance, but insights that might be used to develop systems capable of solving some of society’s toughest challenges.

A paper published this week peels back the layers of Tencent’s technique, which the coauthors describe as “highly scalable.” They claim its novel strategies enable it to explore the game map “efficiently,” with an actor-critic architecture that self-improves over time.

As the researchers point out, real-time strategy games like Honor of Kings require highly complex action control compared with traditional board games and Atari games. Their environments also tend to be more complicated (Honor of Kings has 10^600 possible states and 10^18,000 possible actions) and the objectives more complex on the whole. Agents must not only learn to plan, attack, and defend but also to control skill combos and to induce and deceive opponents, all while contending with hazards like creeps and fully automated turrets.

Tencent’s architecture consists of four modules: Reinforcement Learning (RL) Learner, Artificial Intelligence (AI) Server, Dispatch Module, and Memory Pool.

The AI Server — which runs on a single processor core, thanks to some clever compression — dictates how the AI model interacts with objects in the game environment. It generates episodes via self-play, and, based on the features it extracts from the game state, it predicts players’ actions and forwards them to the game core for execution. The game core then returns the next state and the corresponding reward value, or the value that spurs the model toward certain Honor of Kings goals.

As for the Dispatch Module, it’s bundled with several AI Servers on the same machine, and it collects data samples consisting of rewards, features, action probabilities, and more before compressing and sending them to Memory Pools. The Memory Pool — which is also a server — supports samples of various lengths and data sampling based on the generated time, and it implements a circular queue structure that performs storage operations in a data-efficient fashion.

Lastly, the Reinforcement Learner, a distributed training environment, accelerates policy updates with the aforementioned actor-critic approach. Multiple Reinforcement Learners fetch data in parallel from Memory Pools, with which they communicate using shared memory. One mechanism (target attention) helps with enemy target selection, while another — long short-term memory (LSTM), an algorithm capable of learning long-term dependencies — teaches hero players skill combos critical to inflicting “severe” damage.

The Tencent researchers’ system encodes image features and game state information such that each unit and enemy target is represented numerically. An action mask cleverly incorporates prior knowledge of experienced human players, preventing the AI from attempting to traverse physically “forbidden” areas of game maps (like challenging terrain).
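
A hedged sketch of that masking idea follows: illegal actions have their logits forced to negative infinity before the softmax, so the policy can never sample them. This is an illustrative PyTorch fragment, not Tencent’s implementation:

```python
# Action mask sketch: forbidden actions get zero probability after the softmax.
import torch

def masked_policy(logits, legal_mask):
    """logits: (batch, num_actions); legal_mask: 1 where allowed, 0 where forbidden."""
    masked = logits.masked_fill(legal_mask == 0, float("-inf"))
    return torch.softmax(masked, dim=-1)

logits = torch.randn(1, 5)
legal = torch.tensor([[1, 1, 0, 1, 0]])  # e.g., actions 2 and 4 lead into blocked terrain
print(masked_policy(logits, legal))      # masked actions come out with probability 0
```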

In experiments, the paper’s coauthors ran the framework across a total of 600,000 cores and 1,064 graphics cards (a mixture of Nvidia Tesla P40s and Nvidia V100s), which crunched 16,000 features containing unconcealed unit attributes and game information. Training one hero required 48 graphics cards and 18,000 processor cores at a speed of about 80,000 samples per second per card. And collectively for every day of training, the system accumulated the equivalent of 500 years of human experience.

The AI’s Elo score, derived from a system for calculating the relative skill levels of players in zero-sum games, unsurprisingly increased steadily with training, the coauthors note. It became relatively stable within 80 hours, according to the researchers, and within just 30 hours it began to defeat the top 1% of human Honor of Kings players.

The system executes actions via the AI model every 133 milliseconds, or about the response time of a top amateur player. Five professional players — “QGhappy.Hurt,” “WE.762,” “TS.NuanYang,” “QGhappy.Fly,” and “eStarPro.Cat” — were invited to play against it, as well as a “diversity” of players attending the ChinaJoy 2019 conference in Shanghai between August 2 and August 5.

The researchers note that despite eStarPro.Cat’s prowess with mage-type heroes, the AI achieved five kills per game and was killed only 1.33 times per game on average. In public matches, its win rate was 99.81% over 2,100 matches, and five of the eight AI-controlled heroes managed a 100% win rate.

They’re far from the only ones whose AI beat human players — DeepMind’s AlphaStar beat 99.8% of human StarCraft 2 players, while OpenAI’s OpenAI Five framework defeated a professional team twice in public matches.

The Tencent researchers say that they plan to make both their framework and algorithms open source in the near future, toward the goal of fostering research on complex games like Honor of Kings.

Big Data – VentureBeat

Facebook details the AI technology behind Instagram Explore

November 25, 2019   Big Data

According to Facebook, over half of Instagram’s roughly 1 billion users visit Instagram Explore to discover videos, photos, livestreams, and Stories each month. Predictably, building the underlying recommendation engine — which curates the billions of pieces of content uploaded to Instagram — posed an engineering challenge, not least because it works in real time.

In a blog post published this morning, Facebook for the first time peeled back the curtains on Explore’s inner workings. Its three-part ranking funnel, which the company says was architected with a custom query language and modeling techniques, extracts 65 billion features and makes 90 million model predictions every second. And that’s just the tip of the iceberg.

Tools

Before the team behind Explore embarked on building a content recommendation system, they developed tools to conduct large-scale experiments and obtain strong signals on the breadth of users’ interests. The first of these was IGQL, a meta language that provided the level of abstraction needed to assemble candidate algorithms in one place.

IGQL is optimized in C++, which helps minimize latency and compute resources without sacrificing extensibility, Facebook says. It’s both statically validated and high-level, enabling engineers to write recommendation algorithms in a “Python-like” fashion. And it complements an account embeddings component that helps identify topically similar profiles as part of a retrieval pipeline that focuses on account-level information.

Above: Demonstration of ig2vec predicting account similarity.

Image Credit: Facebook

A framework — ig2vec — treats Instagram accounts a user interacts with as word sequences in a sentence, which informs the predictions of a model with respect to which accounts the user might interact with. (Facebook notes that a sequence of accounts interacted with in a session is more likely to be topically coherent than a set of random accounts.) Concurrently, Facebook’s AI Similarity Search nearest-neighbor retrieval library (FAISS) queries millions of accounts based on a metric used in embedding training.
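
A minimal sketch of that idea, assuming gensim for the word2vec-style training and FAISS for retrieval, might look like the following. The session data and account names are invented, and Facebook’s production system differs in scale and detail:

```python
# Hypothetical ig2vec-style sketch using gensim word2vec plus FAISS retrieval.
import numpy as np
import faiss
from gensim.models import Word2Vec

# Each "sentence" is one user's session: the sequence of accounts interacted with.
sessions = [
    ["cooking_daily", "chef_anna", "bbq_king"],
    ["chef_anna", "pasta_lab", "bbq_king"],
    ["sneaker_news", "kicks_hub", "sole_supply"],
]
model = Word2Vec(sessions, vector_size=32, window=3, min_count=1, epochs=50)

# Index L2-normalized embeddings so inner product equals cosine similarity.
accounts = list(model.wv.index_to_key)
vecs = np.stack([model.wv[a] for a in accounts]).astype("float32")
faiss.normalize_L2(vecs)
index = faiss.IndexFlatIP(vecs.shape[1])
index.add(vecs)

# Retrieve the nearest accounts to a seed account in embedding space.
query = vecs[accounts.index("chef_anna")].reshape(1, -1)
scores, ids = index.search(query, 3)
print([accounts[i] for i in ids[0]])  # should surface the other cooking accounts
```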

A classifier system is trained to predict a topic for a set of accounts based solely on the embedding, which when compared with human-labeled topics makes evident how well the embeddings capture topical similarity. It’s an important step, because retrieving accounts similar to those a user has expressed interest in helps narrow down a per-profile ranking inventory.

Ranking accounts in Explore based on interests necessitated predicting the most relevant content for each person, according to Facebook, and gave rise to a lightweight ranking distillation model that preselects candidates before passing them to complex ranking models. Using knowledge in the form of input candidates with features and outputs from the more complicated models, the simpler model tries to approximate the main ranking models as much as possible via direct (and indirect) learning.

Building Explore

Explore consists of two main stages, according to the team that designed it: the candidate generation stage (also known as the sourcing stage) and the ranking stage.

During the candidate generation stage, Explore taps accounts that users have interacted with previously to identify “seed accounts” of interest. They’re only a fraction of the accounts about the same interest, but they help identify topically similar accounts when combined with the above-mentioned embeddings.

Knowing the accounts that might appeal to a user is the first step toward sussing out which content might float their boat. IGQL allows different candidate sources to be represented as distinct subqueries, and this enables Explore to find tens of thousands of eligible candidates for the average person across many types of sources.

Above: This graphic shows a typical source for Instagram Explore recommendations.

Image Credit: Facebook

To ensure the recommended content remains safe and appropriate for users of all ages, signals are used to filter out anything that might not be eligible. Algorithms detect and filter spam and other content, typically before an inventory is built for each user.

Those filtering systems are quite effective, if Facebook’s latest Community Standards Enforcement Report is any indication. The network says that 845,000 pieces of content relating to self-injury and self-harm were removed in Q3 2019, of which 79.1% were detected proactively, and that over 99% of child nudity and exploitation posts were deleted over the past four quarters.

For every Explore ranking request, 500 candidates are selected from the thousands sampled and are passed along to the ranking stage. It’s there that they encounter a three-part infrastructure intended to balance relevance with computation efficiency.

In the first pass of the ranking stage, a distillation model mimics the combination of the other stages with a minimal number of features. It picks the 150 highest-quality and most relevant candidates out of the 500, after which a model with a full dense set of features (in the second phase) selects the top 50 candidates. Lastly, another model with a full set of features chooses the best 25 candidates, which populate the Explore grid.
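
Stripped of the models themselves, the funnel is a cascade of progressively more expensive scoring passes. A hedged sketch, with score_light, score_mid, and score_full standing in for the distillation, dense, and final-pass models:

```python
# Sketch of the 500 -> 150 -> 50 -> 25 ranking funnel described above.
def rank_funnel(candidates, score_light, score_mid, score_full):
    def top(items, score, k):
        return sorted(items, key=score, reverse=True)[:k]

    pass1 = top(candidates, score_light, 150)  # cheap distillation model, few features
    pass2 = top(pass1, score_mid, 50)          # model with the full dense feature set
    return top(pass2, score_full, 25)          # heaviest model fills the Explore grid
```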

Above: An illustration of the current final-pass model architecture.

Image Credit: Facebook

The first-pass distillation model doesn’t always mimic the other two stages’ ranking order. The fix is a multi-task, multi-layer algorithm that captures signals to predict the actions people might take on content, from positive actions such as tapping Like or Favorite to negative actions like tapping the See Fewer Posts Like This button. The predictions are combined using a value model formula to capture prominence, after which a weighted sum determines whether the importance of a person saving a post, say, is higher than their liking a post.

In the interest of maintaining a “rich balance” between new content and existing content, the Explore team incorporated a rule into the aforementioned value model that boosts content diversity. It downranks posts from the same author or seed account by adding a penalty factor so users don’t see multiple posts from the same person or the same seed account in Explore.
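
A hedged sketch of such a value model, a weighted sum over predicted action probabilities with a per-author diversity penalty, might look like this; the action weights and penalty factor are invented for illustration:

```python
# Illustrative value-model score with a diversity penalty; weights are made up.
WEIGHTS = {"like": 1.0, "save": 1.5, "see_fewer": -5.0}

def value_score(predictions, author, author_counts, penalty=0.9):
    base = sum(WEIGHTS[action] * p for action, p in predictions.items())
    # Each earlier post from the same author shrinks the score, boosting diversity.
    return base * (penalty ** author_counts.get(author, 0))

preds = {"like": 0.30, "save": 0.10, "see_fewer": 0.02}
print(value_score(preds, "chef_anna", {"chef_anna": 2}))  # downranked repeat author
```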

“We rank the most relevant content based on the final value model score of each ranking candidate in a descendant way,” wrote the blog authors. “One of the most exciting parts of building Explore is the ongoing challenge of finding new and interesting ways to help our community discover the most interesting and relevant content on Instagram. We’re continuously evolving Instagram Explore, whether by adding media formats like Stories [or] entry points to new types of content, such as shopping posts and IGTV videos.”

Big Data – VentureBeat

It’s All In The Details: Some Thoughts For Executives In The Entertainment Industry

July 17, 2019   BI News and Info

In the first blog of this series, I discussed the entertainment industry’s mandate to hold their customers’ attention. In my second blog, I talked about the importance of creating experiences that extend beyond the entertainment venue. In particular, I mentioned Walt Disney’s genius for extending familiar stories into films and then into theme park experiences. However, perfecting the idea of an extended experience is an ongoing process, not just for Disney, but for everyone in the entertainment industry. Now I’ll reveal the secret for creating a very personal and memorable experience: “specificity.”

For example, when Disneyland opened in 1955, the thematic areas of their park were given rather generic names (Main Street USA, Adventureland, Frontierland, Fantasyland, and Tomorrowland). They did have some dark rides that were tied to specific films, like Snow White’s Adventure and Peter Pan’s Flight, but most of the attractions were also generic in nature, like Autopia, Storybook Land Canal Boats, and Frontierland Shootin’ Exposition. Ultimately, Disney added some specificity to some of its nonspecific attractions, like Haunted Mansion, Pirates of the Caribbean, and Jungle Cruise, when they went the other direction and became films.

Needless to say, Disney has learned a lot about creating experiences in the last 65 years. The original theory was that generic concepts would appeal to a wider audience, but that turns out not to be the case. In practice, people are drawn to richly detailed experiences because it gives them more potential ways to relate. For example, look at the new Star Wars: Galaxy’s Edge, which opened recently at Disneyland in California and is opening soon at Walt Disney World in Florida. Disney has learned that the more detailed and immersive an experience, the more personal it will be for patrons and the more likely it is to attract repeat business.

Of course, other parks have learned this lesson, too. For example, Bricksburg at Legoland in Florida; Angry Birds Land at Särkänniemi in Tampere, Finland and Thorpe Park in Surrey, England; Sesame Place at SeaWorld in Florida; and of course, The Wizarding World of Harry Potter at Universal Studios in Florida and California. Theme park executives know if they can leverage intellectual property that customers have been exposed to outside the park, it will create a more meaningful experience inside the park because customers will be immersed in a very specific and familiar world.

Casino resorts also offer these rich, immersive adventures. Las Vegas has experiences that range from the ultra-kitsch to the truly opulent. You’ll find Paris, Venice, New York, and ancient Greece within five miles of each other, and casinos use intellectual property to draw players to slot machines. Today, the slots are more interactive than ever and often have dynamic slot symbols moving about the screens. But the biggest experiential distinction among slot machines is branding. Would you rather play Game of Thrones, Kiss, Pac-Man, Wizard of Oz, or the all-time favorite Wheel of Fortune machine? One machine, TMZ, immerses patrons in a personalized experience by taking their picture and using it as a slot symbol.

Just like cinemas and entertainment parks, casino resorts need to hold a patron’s attention, but casinos hold their attention with fantasy. People come to resorts with the dream of not only beating the odds but living like the truly wealthy in luxurious rooms surrounded by sumptuous pools, fountains, and gardens. They’re thrilled by close encounters with beautiful showgirls or their favorite musical performer, and they delight when magicians or acrobats are able to do the seemingly impossible.

Cirque du Soleil is an excellent example of this. It has more than 25 visually mesmerizing shows running around the globe. Seven of them are running in Las Vegas alone and two leverage familiar intellectual property (The Beatles Love and Michael Jackson One). And all the shows are created with extraordinary detail and specificity that delivers a personal, memorable experience. In fact, ever on the cutting edge, Cirque du Soleil has encouraged the use of cellphones and a cloud-based app during its performances of Toruk to improve audience participation.

The future of technology-infused entertainment is limited only by the imagination.

Currently, Cirque du Soleil has more than 4,000 employees, but it wasn’t always that way. It began in 1984 with only 20 performers, a leaky tent, and considerable debt. The success of Cirque du Soleil is not due only to its attention to detail on stage but its exhaustive, detailed efforts behind the scenes, as well. For a great look at how that happens, check out its truly amazing, behind-the-scenes videos about its production of Ka.

In fact, it’s quite ironic that Las Vegas and other bastions of entertainment have a reckless “anything goes” reputation. The reality is that these places are subject to extraordinary regulation and must be able to assure the health and safety of their guests and employees. Only an intelligent enterprise that has complete control of its licensing and regulations, as well as an adequate, certifiably qualified workforce and the ability to procure the best possible health and safety equipment, can be certain of that.

Further, if casino resorts want to provide their customers with unique experiences, they obviously need the ability to treat each customer uniquely. People come to these resorts for many different reasons. It might be to gamble, but it could be for a convention, a reunion, a wedding, or just for the shows. Patrons might be regular high-rollers, casual visitors, or travelers looking for a once-in-a-lifetime experience, and they should all be treated distinctively. While this probably means gathering significant data about these customers, it’s also critical to assure them that their information will be safely guarded, as casinos take confidentiality and customer data security very seriously.

Experiences don’t just happen, they’re created. It’s called “experience management” (XM), and there is no industry that requires zealous experience management more than the entertainment industry. Whether you’re a cinema, museum, concert venue, casino, amusement park, Renaissance fair, or circus, if you want to (figuratively) go from a leaky tent and handful of performers to a billion-dollar business, you need more than a C-suite; you need an X-suite – a group of executives who are focused on the experience of their customers.

Can I honestly say that every startup entertainment enterprise will become wildly successful strictly by focusing on experience management? No. But I can assure you that all highly successful entertainment enterprises are putting considerable focus on their experience management or they’re on the path to obsolescence.

For an in-depth look at how the x-suite is changing the way companies do business, read “Meet the ‘X-Suite’ – the job roles shaping Experience Management” by Qualtrics, an SAP company.

Digitalist Magazine

MIT CSAIL details technique for shrinking neural networks without compromising accuracy

May 7, 2019   Big Data

Deep neural networks — layers of mathematical functions modeled after biological neurons — are a versatile type of AI architecture capable of performing tasks from natural language processing to computer vision. That doesn’t mean that they’re without limitations, however. Deep neural nets are often quite large and require correspondingly large corpora, and training them can take days on even the priciest of purpose-built hardware.

But it might not have to be that way. In a new study (“The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks”) published by scientists at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), deep neural networks are shown to contain subnetworks that are up to 10 times smaller than the entire network, but which are capable of being trained to make equally precise predictions, in some cases more quickly than the originals.

The work is scheduled to be presented at the International Conference on Learning Representations (ICLR) in New Orleans, where it was named one of the conference’s top two papers out of roughly 1,600 submissions.

“If the initial network didn’t have to be that big in the first place, why can’t you just create one that’s the right size at the beginning?” said PhD student and coauthor Jonathan Frankle in a statement. “With a neural network you randomly initialize this large structure, and after training it on a huge amount of data it magically works. This large structure is like buying a big bag of tickets, even though there’s only a small number of tickets that will actually make you rich. But we still need a technique to find the winners without seeing the winning numbers first.”

Above: Finding subnetworks within neural networks.

Image Credit: MIT CSAIL

The researchers’ approach involved eliminating unnecessary connections among the functions — or neurons — in order to adapt them to low-powered devices, a process that’s commonly known as pruning. (They specifically chose connections that had the lowest “weights,” which indicated that they were the least important.) Next, they trained the network without the pruned connections and reset the weights, and after pruning additional connections over time, they determined how much could be removed without affecting the model’s predictive ability.

After repeating the process tens of thousands of times on different networks in a range of conditions, they report that the AI models they identified were consistently less than 10% to 20% of the size of their fully connected parent networks.
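
A hedged PyTorch sketch of that loop (train, prune the lowest-magnitude surviving weights, rewind the survivors to their original initialization, repeat) is below. It is a simplified, single-layer illustration of iterative magnitude pruning, not the paper’s exact code; train_fn is a placeholder for the caller’s training routine:

```python
# Simplified sketch of iterative magnitude pruning with weight rewinding.
import torch

def lottery_ticket(layer, train_fn, rounds=5, prune_frac=0.2):
    init_weights = layer.weight.detach().clone()   # remember the original init
    mask = torch.ones_like(layer.weight)
    for _ in range(rounds):
        train_fn(layer, mask)                      # train with the mask applied
        magnitudes = (layer.weight * mask).abs()
        k = max(1, int(prune_frac * int(mask.sum())))
        # Threshold is the k-th smallest magnitude among surviving weights.
        threshold = magnitudes[mask.bool()].kthvalue(k).values
        mask[magnitudes <= threshold] = 0.0        # drop the weakest connections
        with torch.no_grad():                      # rewind survivors to init values
            layer.weight.copy_(init_weights * mask)
    return mask                                    # the "winning ticket" structure
```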

“It was surprising to see that re-setting a well-performing network would often result in something better,” says coauthor and assistant professor Michael Carbin. “This suggests that whatever we were doing the first time around wasn’t exactly optimal and that there’s room for improving how these models learn to improve themselves.”

Carbin and Frankle note that they only considered vision-centric classification tasks on smaller data sets, and they leave to future work exploring why certain subnetworks are particularly adept at learning and ways to quickly spot these subnetworks. However, they believe that the results may have implications for transfer learning, a technique where networks trained for one task are adapted to another task.

Big Data – VentureBeat

How to read the Dynamics 365 Mailbox details file for troubleshooting

February 15, 2019   Microsoft Dynamics CRM

In this post, we are going to look at the Mailbox Details log file that can be used when troubleshooting Server Side Synchronization and App for Outlook scenarios. Thanks again to Cody Dinwiddie on our Dynamics 365 Support team for helping put this information together.

Now, if you don’t know, Server-Side Sync is processed by the Async Service, which means it can sometimes be difficult to find enough detail to troubleshoot ACT (Appointments, Contacts, and Tasks) and email synchronization issues. Mailbox alerts are always a great place to start, but they can really only give you a few pieces of information, such as whether a Test and Enable was successful or what errors the mailbox is running into. However, they will not show other key information, such as when the next sync cycle is for the mailbox. In most cases, this is the information I am looking for, and it is one of the most common questions I receive.

To find this additional information, select Download Mailbox Details on the ribbon of the appropriate mailbox record.

Once you click to download this .log file, open it up with a text editor like Notepad. Note: For a Dynamics 365 Online instance, all times will be in UTC.

Let’s break down the sections:

Mailbox Async Processing State

mailboxid : 9fb6600b-0965-e811-a97f-000d4a161089

(GUID of the mailbox record)

hostid: Null         

(This will contain the name of the async server processing the request if mid-process)

processingstatecode : 0         

(This will be 1 if async is currently processing this mailbox and you see a hostid populated)

processinglastattemptedon : 12/3/2018 3:23:36 PM       

(This is the last time the mailbox attempted to process)

Mailbox Synchronization Methods

(EmailRouter here means Server-Side Sync)

incomingemaildeliverymethod : EmailRouter

outgoingemaildeliverymethod : EmailRouter

actdeliverymethod : EmailRouter

Mailbox Enabled State

(These states are tied to the test/enable being successful from a server-side sync perspective)

enabledforincomingemail : True

enabledforoutgoingemail : True

enabledforact : True

Mailbox Idle State

(Shows how many times the mailbox was processed with no Email items or Appointments, Contacts and Tasks to synchronize)

noemailcount : 3        

(This means 3 sync cycles ago, an email was promoted from the Exchange mailbox into Dynamics)

noactcount : 2      

(This means 2 sync cycles ago, an Appointment, Contact or Task was promoted from the Exchange mailbox into Dynamics)

Mailbox Backoff Parameters

(Shows when the mailbox is scheduled to process next. These are the fields I use the most)

postponemailboxprocessinguntil : 12/3/2018 3:28:37 PM

(This value controls when the Asynchronous Processing Service will run on this mailbox, which actually performs a synchronization)

postponesendinguntil : 11/29/2018 6:07:17 PM    

(If this value is the same or before the postponemailboxprocessinguntil value, asynchronous emails will be sent from Dynamics on the next sync)

receivingpostponeduntil : 12/3/2018 3:28:37 PM 

(If this value is the same or before the postponemailboxprocessinguntil value, emails in Exchange will attempt to sync with Dynamics)

receivingpostponeduntilforact : 12/3/2018 3:25:34 PM         

(If this value is the same or before the postponemailboxprocessinguntil value, Appointments, Contacts and Tasks will attempt to sync with Dynamics)
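In other words, each of these operations runs on the next cycle only if its own postpone timestamp has caught up with postponemailboxprocessinguntil. A small sketch of that comparison, assuming the timestamp format shown above and the details dictionary from the earlier snippet:

    from datetime import datetime

    # Parse the "12/3/2018 3:28:37 PM" style timestamps used in the log file.
    def ts(value):
        return datetime.strptime(value, "%m/%d/%Y %I:%M:%S %p")

    # An operation runs on the next cycle when its postpone time is at or
    # before the mailbox's next scheduled processing time.
    def runs_next_cycle(fields, postpone_field):
        next_run = ts(fields["postponemailboxprocessinguntil"])
        return ts(fields[postpone_field]) <= next_run

    # e.g. will pending asynchronous emails be sent on the next sync?
    runs_next_cycle(details, "postponesendinguntil")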

Mailbox Test and Enable Parameters

testemailconfigurationscheduled : False         

(This will be true if test/enable is pending)

testemailconfigurationretrycount : 0       

(This enumerates how many times the test/enable process has attempted to retry in the event of a failure)

testmailboxaccesscompletedon : 11/29/2018 6:07:22 PM         

(This provides the time the last test/enable was run on this mailbox)

postponetestemailconfigurationuntil : 11/29/2018 6:12:22 PM         

(If this value is the same or before the postponemailboxprocessinguntil value, test/enable will be run on the mailbox)

Mailbox Last Sync Cycle Information

lastsuccessfulsynccompletedon : 12/3/2018 3:23:37 PM         

(This provides the time that the mailbox last performed a sync without running into an exception/error)

lastsyncerror: Null     

(This will provide limited exception details of what error occurred on the last mailbox sync attempt, if one occurred)

lastsyncerrorcode: Null         

(This will provide the exception code for the last error, if available)

lastsyncerrorcount : 0   

(This will enumerate how many consecutive times the same error has occurred for the mailbox synchronization)

lastsyncerroroccurredon : 10/17/2018 5:41:58 PM         

(This provides the time the last error occurred for the mailbox. If this time is before the lastsuccessfulsynccompletedon and processinglastattemptedon times, no error happened on the last sync)

itemsprocessedforlastsync : 2       

(This enumerates how many Exchange items were successfully promoted on the last sync cycle)

itemsfailedforlastsync : 0  

(This enumerates how many items met the promotion criteria but failed to promote to Dynamics)
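Putting the fields in this section together, you can tell at a glance whether the most recent cycle was clean. A small sketch, reusing the ts() helper from the previous snippet; it assumes an error timestamp is actually present rather than Null:

    # The last cycle was clean if the last recorded error predates both
    # the last successful sync and the last processing attempt.
    def last_sync_was_clean(fields):
        error_on = ts(fields["lastsyncerroroccurredon"])
        return (error_on < ts(fields["lastsuccessfulsynccompletedon"])
                and error_on < ts(fields["processinglastattemptedon"]))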

Email Server Profile Details

(This section provides details on the Email Server Profile configured to this mailbox)

Email Server Profile General Settings

servertype : 0      

(0 is Exchange, 1 is others, such as POP3)

useautodiscover : True       

(False indicates that the EWS URL is explicitly defined in the Email Server Profile)

maxconcurrentconnections : 108       

(This value defines how many simultaneous connections to Exchange that this Email Server Profile can handle)

minpollingintervalinminutes : 0      

(This value determines how often Asynchronous processing is attempted for a mailbox in minutes. The minimum value is 5, so “0” in this context means that mailboxes sync every 5 minutes)
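Since 5 minutes is the floor, the effective interval is simply the larger of the configured value and 5. As a one-line sketch, using the same details dictionary as before:

    # "0" (or anything below 5) means the mailbox syncs every 5 minutes.
    def effective_polling_minutes(fields):
        return max(int(fields["minpollingintervalinminutes"]), 5)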

Email Server Profile Incoming Email Settings

incomingauthenticationprotocol : AutoDetect     

(This value defines what authentication method is being used, such as through an impersonation account or via mailbox credentials, etc.)

incomingcredentialretrieval : S2S

(This defines the authentication credentials being used, such as S2S (server-to-server), OAuth, etc.)

incominguseimpersonation : False      

(This defines if an account with Application Impersonation is being used for incoming email synchronization)

incomingusessl : True   

(This defines whether synchronization of incoming items uses SSL/TLS encryption)

incomingportnumber : 443     

(This defines the port being used for incoming synchronization)

Email Server Profile Outgoing Email Settings

outgoingauthenticationprotocol : AutoDetect      

(This value defines what authentication method is being used, such as through an impersonation account or via mailbox credentials, etc.)

outgoingcredentialretrieval : S2S         

(This defines the authentication credentials being used, such as S2S (server-to-server), OAuth, etc.)

outgoinguseimpersonation : False     

(This defines if an account with Application Impersonation is being used for outgoing email synchronization)

outgoingusessl : True      

(This defines whether synchronization of outgoing items uses SSL/TLS encryption)

outgoingportnumber : 443    

(This defines the port being used for outgoing synchronization)

Recent Trace Log details

(This section provides the last 10 mailbox-specific log entries for the associated mailbox. We will not be covering troubleshooting of the different errors here; some are straightforward, while others require deeper telemetry review by a Microsoft resource. That is where the MailboxId and ActivityId values can be used to correlate with additional logging.)

tracecode : 126

errortypedisplay : ExchangeSyncServerServiceError

errordetails : T:391

ActivityId: a2559263-94de-4886-bc28-5fc7511d4f5f

>Exception : Unhandled exception:

Exception type: System.Net.WebException

Message: The remote server returned an error: (503) Server Unavailable.

at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)

at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)

at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.EndGetEwsHttpWebResponse(IEwsHttpWebRequest request, IAsyncResult asyncResult)

— End stack trace —

Exception type: Microsoft.Exchange.WebServices.Data.ServiceRequestException

Message: The request failed. The remote server returned an error: (503) Server Unavailable.

at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.EndGetEwsHttpWebResponse(IEwsHttpWebRequest request, IAsyncResult asyncResult)

at Microsoft.Exchange.WebServices.Data.SimpleServiceRequestBase.EndInternalExecute(IAsyncResult asyncResult)

at…

tracecode : 2

errortypedisplay : UnknownIncomingEmailIntegrationError

errordetails : ActivityId: 08657d6f-7e7b-424c-a974-6de3d4le2ae4a

>Error : ?<ResponseMessageType xmlns:q1="http://schemas.microsoft.com/exchange/services/2006/messages" p2:type="q1:FindItemResponseMessageType" ResponseClass="Error" xmlns:p2="http://www.w3.org/2001/XMLSchema-instance"><q1:MessageText>Mailbox move in progress. Try again later., Cannot open mailbox.</q1:MessageText><q1:ResponseCode>ErrorMailboxMoveInProgress</q1:ResponseCode><q1:DescriptiveLinkKey>0</q1:DescriptiveLinkKey></ResponseMessageType>

tracecode : 52

errortypedisplay : IncomingEmailServerServiceError

errordetails : ActivityId: b66413dd-c51e-43d1-9404-adb044abf655

>Error : System.Net.WebException: The request failed with HTTP status 503: Service Unavailable.

at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)

at System.Web.Services.Protocols.SoapHttpClientProtocol.EndInvoke(IAsyncResult asyncResult)

at Microsoft.Crm.Asynchronous.EmailConnector.ExchangeServiceBinding.EndFindItem(IAsyncResult asyncResult)

at Microsoft.Crm.Asynchronous.EmailConnector.FindItemsStep.EndCall()

at Microsoft.Crm.Asynchronous.EmailConnector.ExchangeIncomingEmailProviderStep.EndOperation()
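When you need to hand one of these entries to Microsoft Support, the ActivityId is the value to pull out. A minimal sketch that collects every ActivityId in the file; the regular expression and the file name are my own assumptions, not part of the product:

    import re

    # Gather ActivityIds from the trace section so they can be quoted when
    # asking Microsoft to correlate with server-side telemetry.
    with open("MailboxDetails.log") as f:
        activity_ids = re.findall(r"ActivityId:\s*([0-9a-fA-F-]+)", f.read())
    print(activity_ids)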

Thanks for reading!

Aaron Richards

Dynamics 365 Customer Engagement in the Field
