Facebook creates shopping data sets to make more humanlike chatbots

June 4, 2020   Big Data

Facebook researchers this week introduced Situated Interactive MultiModal Conversations (SIMMC), a novel research direction aimed at training AI chatbots that take actions like showing an object and explaining what it’s made of in response to images, memories of previous interactions, and individual requests. In a technical paper, they detail new data sets designed for this purpose containing around 13,000 human-to-human dialogs across two domains — furniture and fashion — along with several tasks framed as objective evaluation protocols.

Facebook appears to be working toward an assistant that can process data the user and the assistant observe together, and then reply with more than plain text based on that shared context. The hope is that this assistant emulates human chat partners by responding to images, messages, and messages about images as naturally as a person might. For example, given the prompt “I want to buy some chairs — show me brown ones and tell me about the materials,” the assistant might reply with an image of brown chairs and the text “How do you like these? They have a solid brown color with a foam fitting.”

SIMMC supports the development of such an assistant with the aforementioned data sets and new technical tasks, which address task-oriented dialogs encompassing multimodal user contexts in the form of a co-observed image or a virtual reality environment. The tasks get updated dynamically based on the dialog flow and the assistant’s actions.

In SIMMC-Furniture, the furniture-focused data set, a user interacts with a conversational assistant to get recommendations for items like couches and side tables. To create it, the Facebook researchers built a virtual environment in Unity where volunteers were randomly paired with humans posing as a full-featured virtual assistant. Users could ask to see a particular type of furniture, and the assistant could filter a catalog of 3D Wayfair assets by price, color, material, and more, navigating through the filtered results and sharing its view in focused (i.e., zoomed-in) or carousel (three slots containing three different items) presentations.
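The filtering and presentation behavior described above can be sketched in a few lines. This is a toy illustration, not the actual SIMMC tooling; the item fields and function names are assumptions for the sake of the example.

```python
# Hypothetical sketch of the catalog filtering the human assistants performed:
# filter items by attributes, then present results in a "focused" (single item)
# or "carousel" (three-slot) view. All field names here are assumptions.
from dataclasses import dataclass

@dataclass
class FurnitureItem:
    name: str
    category: str   # e.g. "couch", "side table"
    price: float
    color: str
    material: str

def filter_catalog(catalog, **criteria):
    """Return items matching every given attribute (e.g. color='brown')."""
    return [item for item in catalog
            if all(getattr(item, key) == value for key, value in criteria.items())]

def carousel_view(items, start=0):
    """Three-slot carousel: a window of up to three items from the results."""
    return items[start:start + 3]

catalog = [
    FurnitureItem("Oak Couch", "couch", 450.0, "brown", "wood"),
    FurnitureItem("Velvet Sofa", "couch", 799.0, "blue", "velvet"),
    FurnitureItem("Foam Chair", "chair", 120.0, "brown", "foam"),
]
brown_items = filter_catalog(catalog, color="brown")
print([item.name for item in brown_items])  # ['Oak Couch', 'Foam Chair']
```

The same filtering call would take `price` or `material` as keyword criteria, mirroring the filters the paper says assistants could apply.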

Meanwhile, in the SIMMC-Fashion data set, users asked humans posing as virtual assistants for jacket, dress, and other clothing and accessory suggestions. Within the same Unity environment, assistants could sort by price, brand, color, and more as the users browsed and explored options informed by preferences and visual scenes, memories, and assistant-recommended items.

For both corpora, the researchers noted which items appeared in each view. They also developed an ontology to capture the multimodal interactions within dialog flows and provide semantics for assistant and user utterances, consisting of four primary components: objects, activities (e.g., “add to cart”), attributes (“brands”), and dialog acts (“ask”). To complement this, they derived a labeling language whose annotations represent dialog exchanges, so that the SIMMC annotations capture the relations between objects and their corresponding dialog annotations.
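To make the four-part ontology concrete, here is a minimal sketch of what one annotated turn might look like. The field names and the helper function are illustrative assumptions, not the paper's actual schema.

```python
# A hypothetical annotation for a single user utterance, following the
# ontology's four components: objects, activities, attributes, dialog acts.
# The key names and object ID format are assumptions for illustration.
annotation = {
    "utterance": "Can you add the brown couch to my cart?",
    "dialog_act": "ask",
    "activity": "add_to_cart",
    "objects": ["couch_042"],             # IDs of items visible in the view
    "attributes": {"color": "brown"},
}

def referenced_objects(turns):
    """Collect every object ID mentioned across a dialog's annotated turns,
    preserving first-mention order."""
    seen = []
    for turn in turns:
        for obj in turn.get("objects", []):
            if obj not in seen:
                seen.append(obj)
    return seen

print(referenced_objects([annotation]))  # ['couch_042']
```

Linking object IDs to dialog acts this way is what lets the annotations relate what was said to what was on screen when it was said.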

Building on these data sets, the Facebook researchers built a basic assistant consisting of four components: an utterance and history encoder, multimodal fusion, an action predictor, and a response generator.

  • The utterance and history encoder creates encodings (numerical representations) from user replies and the dialog history.
  • The multimodal fusion step combines information from the text and multimodal context into a mathematical object called a tensor.
  • The action predictor predicts actions to be taken by the assistant by transforming the tensor into another object called a vector, and then by predicting an API the assistant might need to call.
  • The response generator generates an assistant response that’s semantically relevant to users’ requests. For example, given the request “Show me black couches less than $500,” the generator might reply “Here are some” or “Sorry, we do not have any black couches cheaper than $500” based on available inventory.
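The four steps above can be sketched as a pipeline. This is a schematic toy, not the paper's models: the real components are neural networks, and every name and rule below is an illustrative stand-in.

```python
# Schematic sketch of the four-component assistant pipeline described above.
# All logic here is a toy placeholder for the paper's learned models.
def encode(utterance, history):
    """Utterance/history encoder: map text to a numerical representation
    (here, a toy bag-of-words count dictionary)."""
    tokens = (utterance + " " + " ".join(history)).lower().split()
    return {tok: tokens.count(tok) for tok in set(tokens)}

def fuse(text_encoding, multimodal_context):
    """Multimodal fusion: combine text features with the visual context."""
    return {"text": text_encoding, "context": multimodal_context}

def predict_action(fused):
    """Action predictor: choose the API the assistant should call
    (here, a hard-coded rule standing in for a learned classifier)."""
    if "show" in fused["text"]:
        return ("search_catalog", {"color": "black", "max_price": 500})
    return ("none", {})

def generate_response(action, inventory):
    """Response generator: produce a reply conditioned on the action result."""
    api, _args = action
    if api == "search_catalog" and inventory:
        return "Here are some."
    return "Sorry, we do not have any matching items."

encoding = encode("Show me black couches less than $500", [])
action = predict_action(fuse(encoding, {}))
print(generate_response(action, inventory=["couch_007"]))  # Here are some.
```

The key design point the paper describes is that the action (which API to call, with which arguments) is predicted before the text reply is generated, so the reply can be grounded in the actual catalog result rather than hallucinated.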

After training the models on both SIMMC-Fashion and SIMMC-Furniture, the researchers found that they outperformed two baseline AI systems across a number of metrics. Despite not leveraging the fine-grained annotations, the best-performing action predictor chose the right API 79.6% of the time for the SIMMC-Furniture corpus and 85.1% of the time for SIMMC-Fashion. Facebook says that it will publicly release the data, annotations, code, and models in the future.

The research follows Facebook’s detailing of the AI systems behind its shopping experiences, which continue to evolve across Instagram, WhatsApp, and Facebook proper. The company says its goal is to one day combine its approaches into a system that can serve up product recommendations on the fly, matched to individual tastes and styles — a sort of hardware-free take on the recently discontinued Echo Look, Amazon’s AI-powered camera that told customers how their outfits looked and kept track of their wardrobe.

Big Data – VentureBeat
