Tag Archives: researchers

AI Weekly: These researchers are improving AI’s ability to understand different accents

March 6, 2021   Big Data

The pandemic appears to have supercharged voice app usage, which was already on an upswing. According to a study by NPR and Edison Research, the percentage of voice-enabled device owners who use commands at least once a day rose between the beginning of 2020 and the start of April. Just over a third of smart speaker owners say they listen to more music, entertainment, and news from their devices than they did before, and owners report requesting an average of 10.8 tasks per week from their assistant this year compared with 9.4 different tasks in 2019. According to a new report from Juniper Research, consumers will interact with voice assistants on 8.4 billion devices by 2024.

But despite their growing popularity, assistants like Alexa, Google Assistant, and Siri still struggle to understand diverse regional accents. According to a study by the Life Science Centre, 79% of people with accents alter their voice to make sure that they’re understood by their digital assistants. And in a recent survey commissioned by the Washington Post, popular smart speakers made by Google and Amazon were 30% less likely to understand non-American accents than those of native-born users.

Traditional approaches to narrowing the accent gap would require collecting and labeling large datasets of different languages, a time- and resource-intensive process. That’s why researchers at MLCommons, a nonprofit related to MLPerf, an industry-standard set of benchmarks for machine learning performance, are embarking on a project called 1000 Words in 1000 Languages. It’ll involve creating a freely available pipeline that can take any recorded speech and automatically generate clips to train compact speech recognition models.

“In the context of consumer electronic devices, for instance, you don’t want to have to go out and build new language datasets because that’s costly, tedious, and error-prone,” Vijay Janapa Reddi, an associate professor at Harvard and a contributor on the project, told VentureBeat in a phone interview. “What we’re developing is a modular pipeline where you’ll be able to plug in different sources of speech and then specify the [words] for training that you want.”

While the pipeline will be limited in scope in that it’ll only create training datasets for small, low-power models that continually listen for specific keywords (e.g. “OK Google” or “Alexa”), it could represent a significant step toward truly accent-agnostic speech recognition systems. By convention, training a new keyword-spotting model would require manually collecting thousands of examples of labeled audio clips for each keyword. When the pipeline is released, developers will be able to simply provide a list of keywords they wish to detect along with a speech recording and the pipeline will automate the extraction, training, and validation of models without requiring any labeling.
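
To make the idea concrete, here is a minimal sketch of such a pipeline, assuming word-level timestamps already exist from a forced aligner or speech-to-text system; the data structures and function names are hypothetical illustrations, not MLCommons code.

```python
# Hypothetical sketch of a keyword-spotting dataset pipeline, in the spirit of the
# 1000 Words in 1000 Languages project. Word timestamps are assumed to come from
# an existing forced aligner or ASR system; nothing here is MLCommons code.
from dataclasses import dataclass

@dataclass
class AlignedWord:
    word: str       # lower-cased token from the transcript
    start_s: float  # start time in the recording, seconds
    end_s: float    # end time in the recording, seconds

def extract_keyword_clips(alignment, keywords, pad_s=0.1):
    """Return (keyword, start, end) spans to cut from the source recording."""
    wanted = {k.lower() for k in keywords}
    clips = []
    for w in alignment:
        if w.word in wanted:
            clips.append((w.word, max(0.0, w.start_s - pad_s), w.end_s + pad_s))
    return clips

# Usage: developers supply only the keywords and a transcribed recording;
# clip extraction, feature computation, and model training are then automated.
alignment = [AlignedWord("hey", 0.4, 0.6), AlignedWord("firefox", 0.6, 1.1),
             AlignedWord("open", 2.3, 2.5)]
print(extract_keyword_clips(alignment, ["hey", "firefox"]))
```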

“It’s not even really creating a dataset, it’s just training a dataset that comes about as a result of searching the larger corpus,” Reddi explained. “It’s like doing a Google search. What you’re trying to do is find a needle in a haystack — you end up with a subset of results with different accents and whatever else you have in there.”

The 1000 Words in 1000 Languages project builds on existing efforts to make speech recognition models more accessible — and equitable. Mozilla’s Common Voice, an open source and annotated speech dataset, consists of voice snippets along with voluntarily contributed metadata useful for training speech engines, such as speakers’ ages, sex, and accents. As a part of Common Voice, Mozilla maintains a dataset target segment that aims to collect voice data for specific purposes and use cases, including the digits “zero” through “nine” as well as the words “yes,” “no,” “hey,” and “Firefox.” For its part, in December, MLCommons released the first iteration of a public 86,000-hour dataset for AI researchers, with later versions due to branch into more languages and accents.
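
As a rough illustration of how that metadata can be put to work, the sketch below tallies accents in a Common Voice release; the file name "validated.tsv" and the "accent" column follow older releases of the dataset and should be treated as assumptions.

```python
# Sketch of filtering Mozilla Common Voice metadata by accent. The column name
# "accent" and the file name "validated.tsv" follow older Common Voice releases
# and may differ in newer versions; treat them as assumptions.
import csv
from collections import Counter

def accent_counts(tsv_path="validated.tsv"):
    counts = Counter()
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            accent = (row.get("accent") or "").strip()
            if accent:
                counts[accent] += 1
    return counts

# e.g. accent_counts() -> Counter({'united_states': ..., 'india': ..., ...})
```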

“The organizations that have a huge amount of speech are often large organizations, but speech is something that has many applications,” Reddi said. “The question is, how do you get this into the hands of small organizations that don’t have the same scale as big entities like Google and Microsoft? If they have a pipeline, they can just focus on what they’re building.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

Researchers find that labels in computer vision datasets poorly capture racial diversity

February 9, 2021   Big Data

Datasets are a primary driver of progress in computer vision, and many computer vision applications require datasets that include human faces. These datasets often have labels denoting racial identity, expressed as a category assigned to faces. But historically, little attention has been paid to the validity, construction, and stability of these categories. Race is an abstract, fuzzy notion, and highly consistent representations of a racial group across datasets could be indicative of stereotyping.

Northeastern University researchers sought to study these face labels in the context of racial categories and fair AI. In a paper, they argue that labels are unreliable as indicators of identity because some labels are more consistently defined than others, and because datasets appear to “systematically” encode stereotypes of racial categories.

Their timely research comes after Deborah Raji and coauthor Genevieve Fried published a pivotal study examining facial recognition datasets compiled over 43 years. They found that researchers, driven by the exploding data requirements of machine learning, gradually abandoned asking for people’s consent, leading them to unintentionally include photos of minors, use racist and sexist labels, and have inconsistent quality and lighting.

Racial labels are used in computer vision either without definition or with only loose and nebulous definitions, the coauthors observe from the datasets they analyzed (FairFace, BFW, RFW, and LAOFIW). There are myriad systems of racial classification and terminology, some of debatable coherence, with one dataset grouping together “people with ancestral origins in Sub-Saharan Africa, India, Bangladesh, Bhutan, among others.” Other datasets use labels that could be considered offensive, like “Mongoloid.”

Moreover, a number of computer vision datasets use the label “Indian/South Asian,” which the researchers point to as an example of the pitfalls of racial categories. If the “Indian” label refers only to the country of India, it’s arbitrary in the sense that the borders of India represent the partitioning of a colonial empire on political grounds. Indeed, racial labels largely correspond with geographic regions, including populations with a range of languages, cultures, separation in space and time, and phenotypes. Labels like “South Asian” should include populations in Northeast India, who might exhibit traits more common in East Asia, but ethnic groups span racial lines and labels can fractionalize them, placing some members in one racial category and others in a different category.

“The often employed, standard set of racial categories — e.g., ‘Asian,’ ‘Black,’ ‘White,’ ‘South Asian’ — is, at a glance, incapable of representing a substantial number of humans,” the coauthors wrote. “It obviously excludes indigenous peoples of the Americas, and it is unclear where the hundreds of millions of people who live in the Near East, Middle East, or North Africa should be placed. One can consider extending the number of racial categories used, but racial categories will always be incapable of expressing multiracial individuals, or racially ambiguous individuals. National origin or ethnic origin can be utilized, but the borders of countries are often the results of historical circumstance and don’t reflect differences in appearance, and many countries are not racially homogeneous.”

Equally problematically, the researchers found that faces in the datasets they analyzed were systematically the subject of racial disagreements among annotators. All datasets seemed to include and recognize a very specific type of person as Black — a stereotype — while having more expansive (and less consistent) definitions for other racial categories. Furthermore, the consistency of racial perception varied across ethnic groups, with Filipinos in one dataset being less consistently seen as Asian than Koreans, for example.
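
One simple way to quantify that kind of inconsistency, sketched below purely for illustration (it is not the authors' code), is to measure how often annotators agree with the majority label for each face, broken down by category.

```python
# Illustrative sketch of measuring how consistently annotators assign a racial
# label to each face: for every image we take the majority label and report, per
# category, the average share of annotators who agreed with it. Lower values
# indicate less stable categories. Not the Northeastern authors' code.
from collections import Counter, defaultdict

def label_consistency(annotations):
    """annotations: dict of image_id -> list of labels from different annotators."""
    per_category = defaultdict(list)
    for image_id, labels in annotations.items():
        majority_label, majority_count = Counter(labels).most_common(1)[0]
        per_category[majority_label].append(majority_count / len(labels))
    return {cat: sum(v) / len(v) for cat, v in per_category.items()}

example = {
    "img_001": ["Black", "Black", "Black"],
    "img_002": ["Asian", "South Asian", "Asian"],
    "img_003": ["White", "White", "Middle Eastern"],
}
print(label_consistency(example))  # e.g. {'Black': 1.0, 'Asian': 0.67, 'White': 0.67}
```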

“It is possible to explain some of the results purely probabilistically – blonde hair is relatively uncommon outside of Northern Europe, so blond hair is a strong signal of being from Northern Europe, and thus, belonging to the White category. But if the datasets are biased towards images collected from individuals in the U.S., then East Africans may not be included in the datasets, which results in high disagreement on the racial label to assign to Ethiopians relative to the low disagreement on the Black racial category in general,” the coauthors explained.

These racial labeling biases could be reproduced and amplified if left unaddressed, the coauthors warn, taking on validity with dangerous consequences when divorced from cultural context. Indeed, numerous studies — including the landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Raji — and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to various biases. One frequent confounder is technology and techniques that favor lighter skin, which include everything from sepia-tinged film to low-contrast digital cameras. These prejudices can be encoded in algorithms such that their performance on darker-skinned people falls short of that on those with lighter skin.

“A dataset can have equal amounts of individuals across racial categories, but exclude ethnicities or individuals who don’t fit into stereotypes,” they wrote. “It is tempting to believe fairness can be purely mathematical and independent of the categories used to construct groups, but measuring the fairness of systems in practice, or understanding the impact of computer vision in relation to the physical world, necessarily requires references to groups which exist in the real world, however loosely.”

Researchers find that debiasing doesn’t eliminate racism from hate speech detection models

February 6, 2021   Big Data

Current AI hate speech and toxic language detection systems exhibit problematic and discriminatory behavior, research has shown. At the core of the issue are training data biases, which often arise during the dataset creation process. When trained on biased datasets, models acquire and exacerbate biases, for example flagging text by Black authors as more toxic than text by white authors.

Toxicity detection systems are employed by a range of online platforms including Facebook, Twitter, YouTube, and various publications. While one of the premier providers of these systems, Alphabet-owned Jigsaw, claims it has taken pains to remove bias from its models following a study showing they fared poorly on Black-authored speech, it’s unclear to what extent the same is true of other AI-powered solutions.

To see whether current model debiasing approaches can mitigate biases in toxic language detection, researchers at the Allen Institute investigated techniques to address lexical and dialectal imbalances in datasets. Lexical biases associate toxicity with the presence of certain words, like profanities, while dialectal biases correlate toxicity with “markers” of language variants like African-American English (AAE).

In the course of their work, the researchers looked at one debiasing method designed to tackle “predefined biases” (e.g., lexical and dialectal). They also explored a process that filters “easy” training examples with correlations that might mislead a hate speech detection model.

According to the researchers, both approaches face challenges in mitigating biases from a model trained on a biased dataset for toxic language detection. In their experiments, while filtering reduced bias in the data, models trained on filtered datasets still picked up lexical and dialectal biases. Even “debiased” models disproportionately flagged certain snippets of text as toxic. Perhaps more discouragingly, mitigating dialectal bias didn’t appear to change a model’s propensity to label text by Black authors as more toxic than text by white authors.
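
A rough sketch of how such a disparity can be surfaced is to compare a classifier's false-positive rate on non-toxic text across dialect groups; the `classify_toxic` function below is a stand-in for any toxicity model, not the Allen Institute system.

```python
# Sketch of one way to surface dialectal bias: compare a toxicity classifier's
# false-positive rate on human-rated non-toxic text across dialect groups.
# `classify_toxic` is a placeholder for any model under test.
def false_positive_rate(examples, classify_toxic):
    """examples: list of (text, dialect) pairs that human raters judged non-toxic."""
    flagged, total = {}, {}
    for text, dialect in examples:
        total[dialect] = total.get(dialect, 0) + 1
        if classify_toxic(text):
            flagged[dialect] = flagged.get(dialect, 0) + 1
    return {d: flagged.get(d, 0) / total[d] for d in total}

# A biased model would show something like {'aae': 0.30, 'white_aligned': 0.08}.
```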

In the interest of thoroughness, the researchers embarked on a proof-of-concept study involving relabeling examples of supposedly toxic text whose translations from AAE to “white-aligned English” were deemed nontoxic. They used OpenAI’s GPT-3 to perform the translations and create a synthetic dataset — a dataset, they say, that resulted in a model less prone to dialectal and racial biases.

“Overall, our findings indicate that debiasing a model already trained on biased toxic language data can be challenging,” wrote the researchers, who caution against deploying their proof-of-concept approach because of its limitations and ethical implications. “Translating” the language a Black person might use into the language a white person might use both robs the original language of its richness and makes potentially racist assumptions about both parties. Moreover, the researchers note that GPT-3 likely wasn’t exposed to many African American English varieties during training, making it ill-suited for this purpose.

“Our findings suggest that instead of solely relying on development of automatic debiasing for existing, imperfect datasets, future work should focus primarily on the quality of the underlying data for hate speech detection, such as accounting for speaker identity and dialect,” the researchers wrote. “Indeed, such efforts could act as an important step towards making systems less discriminatory, and hence safe and usable.”

Researchers claim that AI-translated text is less ‘lexically’ rich than human translations

February 3, 2021   Big Data

Human interpreters make choices unique to them, consciously or unconsciously, when translating one language into another. They might explicate, normalize, or condense and summarize, creating fingerprints known informally as “translationese.” In machine learning, generating accurate translations has been the main objective thus far. But this might be coming at the expense of translation richness and diversity.

In a new study, researchers at Tilburg University and the University of Maryland attempt to quantify the lexical and grammatical diversity of “machine translationese” — i.e., the fingerprints made by AI translation algorithms. They claim to have found a “quantitatively measurable” difference between the linguistic richness of machine translation systems’ training data and their translations, which could be a product of statistical bias.

The researchers looked at a range of machine learning model architectures, including Transformer-based and long short-term memory (LSTM) neural machine translation as well as phrase-based statistical machine translation. In experiments, they tasked each with translating between English, French, and Spanish and compared the original text with the translations using nine different metrics.

The researchers report that in experiments, the original training data — a collection of reference translations — always had a higher lexical diversity than the machine translations regardless of the type of model used. In other words, the reference translations were consistently more diverse in terms of vocabulary and synonym usage than the translations from the models.
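
The simplest such measure is the type-token ratio; the sketch below illustrates the comparison on toy strings (the paper itself uses nine metrics, not this one alone).

```python
# Sketch of the kind of lexical-diversity comparison the study describes, using
# the simple type-token ratio as a stand-in for the paper's nine metrics.
def type_token_ratio(text):
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

reference = "the quick brown fox leaps over the idle hound"
machine   = "the quick brown fox jumps over the quick brown fox"
print(type_token_ratio(reference), type_token_ratio(machine))
# A consistently lower ratio for machine output indicates reduced lexical diversity.
```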

The coauthors point out that while the loss of lexical diversity could be a desirable side effect of machine translation systems (in terms of simplification or consistency), the loss of morphological richness is problematic as it can prevent systems from making grammatically correct choices. Bias can emerge, too, with machine translation systems having a stronger negative impact in terms of diversity and richness on morphologically richer languages like Spanish and French.

“As [machine translation] systems have reached a quality that is (arguably) close to that of human translations and as such are being used widely on a daily basis, we believe it is time to look into the potential effects of [machine translation] algorithms on language itself,” the researchers wrote in a paper describing their work. “All [of our] metrics indicate that the original training data has more lexical and morphological diversity compared to translations produced by the [machine translation] systems … If machine translationese (and other types of ‘NLPese’) is a simplified version of the training data, what does that imply from a sociolinguistic perspective and how could this affect language on a longer term?”

The coauthors propose no solutions to the machine translation problems they claim to have uncovered. However, they believe their metrics could drive future research on the subject.

Facebook researchers propose ‘pre-finetuning’ to improve language model performance

February 2, 2021   Big Data

Machine learning researchers have achieved remarkable success with language model pretraining, which uses self-supervision, a training technique that doesn’t require labeled data. Pretraining refers to training a model with one task to help it recognize patterns that can be applied to a range of other tasks. In this way, pretraining imitates the way human beings process new knowledge. That is, using parameters of tasks that have been learned before, models learn to adapt to new and unfamiliar tasks.

For many natural language tasks, however, training examples for related problems exist. In an attempt to leverage these, researchers at Facebook propose “pre-finetuning,” a methodology of training language models that involves a learning step with over 4.8 million training examples performed on around 50 classification, summarization, question-answering, and commonsense reasoning datasets. They claim that pre-finetuning consistently improves performance for pretrained models while also significantly improving sample efficiency during fine-tuning.

It’s an approach that has been attempted before, often with success. In a 2019 study, researchers at the Allen Institute noticed that pre-finetuning a BERT model on a multiple choice question dataset appeared to teach the model something about multiple choice questions in general. A subsequent study found that pre-finetuning increased a model’s robustness to name swaps, where the names of different people were exchanged within a sentence the model had to answer questions about.

In order to ensure that their pre-finetuning stage incorporated general language representations, the researchers included tasks in four different domains: classification, commonsense reasoning, machine reading comprehension, and summarization. They call their pre-finetuned models MUPPET, which roughly stands for “Massive Multi-task Representation with Pre-finetuning.”
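
Schematically, pre-finetuning amounts to multi-task learning with a shared encoder and one head per task. The PyTorch sketch below illustrates that recipe only; it is not Facebook's MUPPET code, and the tiny bag-of-embeddings encoder and task list are stand-ins for RoBERTa/BART and the roughly 50 real datasets.

```python
# Schematic sketch of multi-task pre-finetuning: one shared encoder with a
# separate head per task, trained on batches drawn from many datasets.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size, hidden, task_num_labels):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)          # shared encoder stand-in
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, n)       # one head per task
                                    for t, n in task_num_labels.items()})

    def forward(self, token_ids, task):
        return self.heads[task](self.embed(token_ids))

tasks = {"nli": 3, "sentiment": 2, "boolq": 2}                    # hypothetical task mix
model = MultiTaskModel(vocab_size=30522, hidden=64, task_num_labels=tasks)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3):                                                # toy training loop
    for task, num_labels in tasks.items():
        x = torch.randint(0, 30522, (8, 16))                      # fake token ids
        y = torch.randint(0, num_labels, (8,))
        loss = loss_fn(model(x, task), y)
        loss.backward()                                           # accumulate across tasks
    opt.step()
    opt.zero_grad()
```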

After pre-finetuning RoBERTa and BART, two popular pretrained models for natural language understanding, the researchers tested their performance on widely used benchmarks including RTE, BoolQ, RACE, SQuAD, and MNLI. Interestingly, the results show that pre-finetuning can hurt performance when only a few tasks are used, up to a critical point (usually around 15 tasks), beyond which performance improves in proportion to the number of tasks. MUPPET models outperform their vanilla pretrained counterparts, and leveraging representations learned across 34 to 40 tasks enables the models to reach even higher accuracies with less data than a baseline RoBERTa model.

“These [performance] gains are particularly strong in the low resource regime, where there is relatively little labeled data for fine-tuning,” the researchers wrote in a paper describing their work. “We show that we can effectively learn more robust representations through multitask learning at scale. … Our work shows how even seemingly very different datasets, for example, summarization and extractive QA, can help each other by improving the model’s representations.”

Researchers propose Porcupine, a compiler for homomorphic encryption

January 23, 2021   Big Data

Homomorphic encryption (HE) is a privacy-preserving technology that enables computational workloads to be performed directly on encrypted data. HE enables secure remote computation, as cloud service providers can compute on data without viewing highly sensitive content. But despite its appeal, performance and programmability challenges remain a barrier to HE’s widespread adoption.

Realizing the potential of HE will likely require developing a compiler that can translate a plaintext, unencrypted codebase into encrypted code on the fly. In a step toward this, researchers at Facebook, New York University, and Stanford created Porcupine, a “synthesizing compiler” for HE. They say it results in speedups of up to 51% compared to heuristic-driven, entirely hand-optimized code.

Given a plaintext code reference, Porcupine synthesizes HE code that performs the same computation, the researchers explain. Internally, Porcupine models instruction noise, latency, behavior, and HE program semantics with a component called Quill. Quill enables Porcupine to reason about and search for HE kernels that are verifiably correct while minimizing the code’s latency and noise accumulation. The result is a suite that automates and optimizes the mapping and scheduling of plaintext to HE code.

In experiments, the researchers evaluated Porcupine using a range of image processing and linear algebra programs. According to the researchers, for small programs, Porcupine was able to find the same optimized implementations as hand-written baselines. And on larger, more complex programs, Porcupine discovered optimizations like factorization and even application-specific optimizations involving separable filters.

“Our results demonstrate the efficacy and generality of our synthesis-based compilation approach and further motivates the benefits of automated reasoning in HE for both performance and productivity,” the researchers wrote. “Porcupine abstracts away the details of constructing correct HE computation so that application designers can concentrate on other design considerations.”

Enthusiasm for HE has given rise to a cottage industry of startups aiming to bring it to production systems. Newark, New Jersey-based Duality Technologies, which recently attracted funding from one of Intel’s venture capital arms, pitches its HE platform as a privacy-preserving solution for “numerous” enterprises, particularly those in regulated industries. Banks can conduct privacy-enhanced financial crime investigations across institutions, so goes the company’s sales pitch, while scientists can tap it to collaborate on research involving patient records.

But HE offers no magic bullet. Even leading techniques can calculate only polynomial functions — a nonstarter for the many activation functions in machine learning that are non-polynomial. Plus, operations on encrypted data can involve only additions and multiplications of integers, which poses a challenge in cases where learning algorithms require floating point computations.
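
To illustrate the basic property HE relies on, the toy (and completely insecure) Paillier-style sketch below shows an additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. Real schemes such as the BFV/CKKS families targeted by compilers like Porcupine use different math and far larger parameters.

```python
# Toy, insecure illustration of an additive homomorphism (Paillier-style):
# adding plaintexts corresponds to multiplying ciphertexts. For illustration
# only; real HE deployments use vastly larger parameters and different schemes.
import math, random

p, q = 17, 19                      # toy primes; never use sizes like this in practice
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda u: (u - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # r must be coprime with n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(20), encrypt(22)
print(decrypt((a * b) % n2))       # 42: multiplying ciphertexts adds plaintexts
```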

Researchers propose using the game Overcooked to benchmark collaborative AI systems

January 15, 2021   Big Data

Deep reinforcement learning systems are among the most capable in AI, particularly in the robotics domain. However, in the real world, these systems encounter a number of situations and behaviors to which they weren’t exposed during development.

In a step toward systems that can collaborate with humans in order to help them accomplish their goals, researchers at Microsoft, the University of California, Berkeley, and the University of Nottingham developed a methodology for applying a testing paradigm to human-AI collaboration that can be demonstrated in a simplified version of the game Overcooked. Players in Overcooked control a number of chefs in kitchens filled with obstacles and hazards to prepare meals to order under a time limit.

The team asserts that Overcooked, while not necessarily designed with robustness benchmarking in mind, can successfully test potential edge cases in states a system should be able to handle as well as the partners the system should be able to play with. For example, in Overcooked, systems must contend with scenarios like when plates are accidentally left on counters and when a partner stays put for a while because they’re thinking or away from their keyboard.

Above: Screen captures from the researchers’ test environment.

The researchers investigated a number of techniques for improving system robustness, including training a system with a diverse population of other collaborative systems. Over the course of experiments in Overcooked, they observed whether several test systems could recognize when to get out of the way (like when a partner was carrying an ingredient) and when to pick up and deliver orders after a partner has been idling for a while.
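
The sketch below illustrates the unit-test flavor of such a suite: hand-written edge-case states, each paired with a predicate a robust policy should satisfy. The state fields, action names, and policy interface are hypothetical, not the authors' Overcooked test suite.

```python
# Sketch of a robustness "unit test" harness for a collaborative agent. The
# states and actions are hypothetical stand-ins, not the paper's test suite.
from dataclasses import dataclass
from typing import Optional

@dataclass
class State:
    plate_on_counter: bool = False
    partner_idle_steps: int = 0
    partner_carrying: Optional[str] = None

def test_picks_up_stranded_plate(policy):
    return policy(State(plate_on_counter=True)) == "pick_up_plate"

def test_yields_to_carrying_partner(policy):
    return policy(State(partner_carrying="onion")) != "block_corridor"

def test_covers_idle_partner(policy):
    return policy(State(partner_idle_steps=50)) in {"deliver_order", "pick_up_plate"}

def run_suite(policy):
    tests = [test_picks_up_stranded_plate, test_yields_to_carrying_partner,
             test_covers_idle_partner]
    return sum(t(policy) for t in tests) / len(tests)  # fraction of tests passed

print(run_suite(lambda s: "deliver_order"))  # a trivial baseline policy
```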

According to the researchers, current deep reinforcement agents aren’t very robust — at least not as measured by Overcooked. None of the systems they tested scored above 65% in the video game, suggesting, the researchers say, that Overcooked can serve as a useful human-AI collaboration metric in the future.

“We emphasize that our primary finding is that our [Overcooked] test suite provides information that may not be available by simply considering validation reward, and our conclusions for specific techniques are more preliminary,” the researchers wrote in a paper describing their work. “A natural extension of our work is to expand the use of unit tests to other domains besides human-AI collaboration … An alternative direction for future work is to explore meta learning, in order to train the agent to adapt online to the specific human partner it is playing with. This could lead to significant gains, especially on agent robustness with memory.”

Stanford researchers propose AI that figures out how to use real-world objects

January 10, 2021   Big Data

One longstanding goal of AI research is to allow robots to meaningfully interact with real-world environments. In a recent paper, researchers at Stanford and Facebook took a step toward this by extracting information related to actions like pushing or pulling objects with movable parts and using it to train an AI model. For example, given a drawer, their model can predict that applying a pulling force on the handle would open the drawer.

As the researchers note, humans interact with a plethora of objects around them. What makes this possible is our understanding of what can be done with each object, where this interaction may occur, and how we must move our bodies to accomplish it. Not only do people understand what actions will be successful, but they intuitively know which ones will not.

The coauthors considered long-term interactions with objects as sequences of short-term “atomic” interactions, like pushing and pulling. This limited the scope of their work to plausible short-term interactions a robot could perform given the current state of an object. These interactions were further decomposed into “where” and “how” — for example, which handle on a cabinet a robot should pull and whether a robot should pull parallel or perpendicular to the handle.

These observations allowed the researchers to formulate their task as one of dense visual prediction. They developed a model that, given a depth or color image of an object, learned to infer whether a certain action could be performed and how it should be executed. For each pixel, the model provided an “actionability” score, action proposals, and success likelihoods.
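
A schematic PyTorch sketch of that kind of dense prediction head is shown below; the architecture and channel counts are illustrative only and do not reproduce the Stanford/Facebook model.

```python
# Schematic sketch of dense per-pixel prediction: for every pixel, an
# "actionability" score, a proposed action direction, and a success likelihood.
import torch
import torch.nn as nn

class ActionabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.actionability = nn.Conv2d(32, 1, 1)   # per-pixel actionability score
        self.direction = nn.Conv2d(32, 3, 1)       # per-pixel 3D action direction proposal
        self.success = nn.Conv2d(32, 1, 1)         # per-pixel success likelihood

    def forward(self, image):
        f = self.backbone(image)
        return (torch.sigmoid(self.actionability(f)),
                nn.functional.normalize(self.direction(f), dim=1),
                torch.sigmoid(self.success(f)))

img = torch.rand(1, 3, 64, 64)                     # a single RGB observation
scores, directions, success = ActionabilityNet()(img)
print(scores.shape, directions.shape, success.shape)  # (1,1,64,64) (1,3,64,64) (1,1,64,64)
```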

“Our approach allows an agent to learn these by simply interacting with various objects, and recording the outcomes of its actions — labeling ones that cause a desirable state change as successful,” the coauthors wrote. “We empirically show that our method successfully learns to predict possible actions for novel objects, and does so even for previously unseen categories.”

The researchers used a simulator called SAPIEN for learning and testing their approach across six types of interactions covering 972 shapes over 15 commonly seen indoor object categories. In experiments, they visualized the model’s action scoring predictions over real-world 3D scans from open source datasets. While they concede that there’s no guarantee for the predictions over pixels outside the articulated parts, the results made sense if motion was allowed for the entire object.

“Our [model] learns to extract geometric features that are action-specific and gripper-aware. For example, for pulling, it predicted higher scores over high-curvature regions such as part boundaries and handles, while for pushing, almost all flat surface pixels belonging to a pushable part are equally highlighted and the pixels around handles are reasonably predicted to be not pushable due to object-gripper collisions … While we use simulated environments for learning as they allow efficient interaction, we also find that our learned system generalizes to real-world scans and images.”

The researchers admit that their work has limitations. For one, the model can only take a single frame as input, which introduces ambiguities if the articulated part is in motion. It’s also limited to hard-coded motion trajectories. In future work, however, the coauthors plan to generalize the model to freeform interactions.

Researchers design AI that can infer whole floor plans from short video clips

January 7, 2021   Big Data

Floor plans are useful for visualizing spaces, planning routes, and communicating architectural designs. A robot entering a new building, for instance, can use a floor plan to quickly sense the overall layout. Creating floor plans typically requires a full walkthrough so 3D sensors and cameras can capture the entirety of a space. But researchers at Facebook, the University of Texas at Austin, and Carnegie Mellon University are exploring an AI technique that leverages visuals and audio to reconstruct a floor plan from a short video clip.

The researchers assert that audio provides spatial and semantic signals complementing the mapping capabilities of images. They say this is because sound is inherently driven by the geometry of objects. Audio reflections bounce off surfaces and reveal the shape of a room, far beyond a camera’s field of view. Sounds heard from afar — even multiple rooms away — can reveal the existence of “free spaces” where sounding objects might exist (e.g., a dog barking in another room). Moreover, hearing sounds from different directions exposes layouts based on the activities or things those sounds represent. A shower running might suggest the direction of the bathroom, for example, while microwave beeps suggest a kitchen.

The researchers’ approach, which they call AV-Map, aims to convert short videos with multichannel audio into 2D floor plans. A machine learning model leverages sequences of audio and visual data to reason about the structure and semantics of the floor plan, finally fusing information from audio and video using a decoder component. The floor plans AV-Map generates, which extend significantly beyond the area directly observable in the video, show free space and occupied regions divided into a discrete set of semantic room labels (e.g., family room and kitchen).
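
The sketch below illustrates the general shape of such audio-visual fusion in PyTorch: separate encoders for a video frame and a multichannel audio spectrogram, concatenated features, and a decoder that emits per-cell room-class logits. All shapes and layers are illustrative assumptions, not the AV-Map architecture.

```python
# Schematic sketch of audio-visual fusion for floor plan prediction. Layers and
# shapes are illustrative only; this is not the AV-Map model.
import torch
import torch.nn as nn

class AVMapSketch(nn.Module):
    def __init__(self, num_room_classes=10, map_size=32):
        super().__init__()
        self.visual = nn.Sequential(nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.audio = nn.Sequential(nn.Conv2d(2, 16, 5, stride=4), nn.ReLU(),   # 2-channel spectrogram
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.decode = nn.Linear(32, num_room_classes * map_size * map_size)
        self.num_room_classes, self.map_size = num_room_classes, map_size

    def forward(self, frame, spectrogram):
        fused = torch.cat([self.visual(frame), self.audio(spectrogram)], dim=1)
        out = self.decode(fused)
        return out.view(-1, self.num_room_classes, self.map_size, self.map_size)

frame = torch.rand(1, 3, 128, 128)        # one video frame
spec = torch.rand(1, 2, 128, 128)         # binaural audio spectrogram
print(AVMapSketch()(frame, spec).shape)   # (1, 10, 32, 32): per-cell room-class logits
```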

The team experimented with two settings, active and passive, in digital environments from the popular Matterport3D and SoundSpaces datasets loaded into Facebook’s AI Habitat. In the first, they used a virtual camera to emit a known sound while it moved throughout the rooms of a model home. In the second, they relied only on naturally occurring sounds made by objects and people inside the home.

Across videos recorded in 85 large, real-world, multiroom environments within AI Habitat, the researchers say AV-Map not only consistently outperformed traditional vision-based mapping but improved the state-of-the-art technique for extrapolating occupancy maps beyond visible regions. With just a few glimpses spanning 26% of an area, AV-Map could estimate the whole area with 66% accuracy.

“A short video walk through a house can reconstruct the visible portions of the floorplan but is blind to many areas. We introduce audio-visual floor plan reconstruction, where sounds in the environment help infer both the geometric properties of the hidden areas as well as the semantic labels of the unobserved rooms (e.g., sounds of a person cooking behind a wall to the camera’s left suggest the kitchen),” the researchers wrote in a paper detailing AV-Map. “In future work, we plan to consider extensions to multi-level floor plans and connect our mapping idea to a robotic agent actively controlling the camera … To our knowledge, ours is the first attempt to infer floor plans from audio-visual data.”

Uber researchers propose AI language model that emphasizes positive and polite responses

January 5, 2021   Big Data

AI-powered assistants like Siri, Cortana, Alexa, and Google Assistant are pervasive. But for these assistants to engage users and help them to achieve their goals, they need to exhibit appropriate social behavior and provide informative replies. Studies show that users respond better to social language in the sense that they’re more responsive and likelier to complete tasks. Inspired by this, researchers affiliated with Uber and Carnegie Mellon developed a machine learning model that injects social language into an assistant’s responses while preserving their integrity.

The researchers focused on the customer service domain, specifically a use case where customer service personnel helped drivers sign up with a ride-sharing provider like Uber or Lyft. They first conducted a study to suss out the relationship between customer service representatives’ use of friendly language and drivers’ responsiveness and completion of their first ride-sharing trip. Then, they developed a machine learning model for an assistant that includes a social language understanding and language generation component.

In their study, the researchers found that the “politeness level” of customer service representative messages correlated with driver responsiveness and completion of their first trip. Building on this, they trained their model on a dataset of over 233,000 messages from drivers and corresponding responses from customer service representatives. The responses had labels indicating how generally polite and positive they were, chiefly as judged by human evaluators.

Post-training, the researchers used automated and human-driven techniques to evaluate the politeness and positivity of their model’s messages. They found it could vary the politeness of its responses while preserving the meaning of its messages, but that it was less successful in maintaining overall positivity. They attribute this to a potential mismatch between what they thought they were measuring and manipulating and what they actually measured and manipulated.
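
As a toy illustration of one piece of the idea, the sketch below scores candidate support replies with a crude marker-based politeness heuristic and surfaces the most polite phrasing; the marker lists are hypothetical stand-ins for the learned politeness model described in the paper.

```python
# Toy sketch: rank candidate support replies by a crude politeness score and
# suggest the most polite one. The marker lists are hypothetical stand-ins for
# the learned politeness model described in the paper.
POLITE_MARKERS = {"please", "thanks", "thank", "glad", "happy to", "appreciate"}
IMPOLITE_MARKERS = {"must", "immediately", "failure", "ignored"}

def politeness_score(reply):
    text = reply.lower()
    return (sum(m in text for m in POLITE_MARKERS)
            - sum(m in text for m in IMPOLITE_MARKERS))

def suggest_reply(candidates):
    """Pick the candidate with the highest politeness score."""
    return max(candidates, key=politeness_score)

candidates = [
    "Send the documents immediately or your account will be closed.",
    "Thanks for reaching out! Could you please upload the documents when you get a chance?",
]
print(suggest_reply(candidates))
```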

“A common explanation for the negative association of positivity with driver responsiveness in … and the lack of an effect of positivity enhancement on generated agent responses … might be a discrepancy between the concept of language positivity and its operationalization as positive sentiment,” the researchers wrote in a paper detailing their work. “[Despite this, we believe] the customer support services can be improved by utilizing the model to provide suggested replies to customer service representatives so that they can (1) respond quicker and (2) adhere to the best practices (e.g. using more polite and positive language) while still achieving the goal that the drivers and the ride-sharing providers share, i.e., getting drivers on the road.”

The work comes as Gartner predicts that by the year 2020, only 10% of customer-company interactions will be conducted via voice. According to the 2016 Aspect Consumer Experience Index research, 71% of consumers want the ability to solve most customer service issues on their own, up 7 points from the 2015 index. And according to that same Aspect report, 44% said that they would prefer to use a chatbot for all customer service interactions compared with a human.
