Researchers’ AI can perform 3D motion capture with any off-the-shelf camera

August 25, 2020   Big Data

Motion capture, the process of recording people's movements, traditionally requires purpose-built equipment, cameras, and software. But researchers at the Max Planck Institute and Facebook Reality Labs claim they've developed a machine learning system, PhysCap, that works with any off-the-shelf DSLR camera running at 25 frames per second. In a paper expected to be published in the journal ACM Transactions on Graphics in November 2020, the team details PhysCap, which they say is the first method of its kind for real-time, physically plausible 3D motion capture that accounts for environmental constraints like floor placement. It ostensibly achieves state-of-the-art accuracy on existing benchmarks and qualitatively improves stability at training time.

Motion capture is a core part of modern film, game, and even app development. There have been countless attempts to make motion capture practical for amateur videographers, from a $2,500 suit to a commercially available framework that leverages Microsoft's depth-sensing Kinect. But they're imperfect: even the best human pose-estimation systems struggle to produce smooth animations, yielding 3D models with improper balance, inaccurate body leaning, and other artifacts of instability.

By contrast, PhysCap reportedly captures physically and anatomically correct poses that adhere to physics constraints.

In its first stage, PhysCap estimates 3D body poses in a purely kinematic way, using a convolutional neural network (CNN) that infers combined 2D and 3D joint positions from video. After some refinement, the second stage commences, in which a second CNN predicts foot contact and motion states for every frame. (This CNN detects heel and forefoot placement on the ground and classifies the observed poses as "stationary" or "non-stationary.") In the final stage, the kinematic pose estimates from the first stage (in both 2D and 3D) are reproduced as closely as possible while accounting for physical constraints like gravity, collisions, and foot placement.
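The three-stage pipeline described above can be sketched roughly as follows. This is an illustrative mock-up, not the authors' implementation: both CNNs are stubbed out with placeholder functions, and the joint count, foot-joint indices, and thresholds are invented for the sketch.

```python
# Hypothetical sketch of a PhysCap-style three-stage pipeline.
# All names, indices, and thresholds are illustrative assumptions.
import numpy as np

NUM_JOINTS = 16  # assumed skeleton size
HEEL, FOREFOOT = 5, 6  # assumed foot-joint indices

def stage1_kinematic_pose(frame: np.ndarray) -> np.ndarray:
    """Stage 1: a CNN would regress combined 2D/3D joint positions
    from the video frame. Stubbed with a deterministic dummy pose."""
    rng = np.random.default_rng(int(frame.mean()) % 2**32)
    return rng.standard_normal((NUM_JOINTS, 3))

def stage2_contact_and_state(pose: np.ndarray, prev_pose: np.ndarray):
    """Stage 2: a second CNN would detect heel/forefoot ground contact
    and classify the pose as stationary or non-stationary.
    Approximated here with simple thresholds on height and motion."""
    contacts = {
        "heel": bool(pose[HEEL, 2] < 0.05),
        "forefoot": bool(pose[FOREFOOT, 2] < 0.05),
    }
    motion = float(np.linalg.norm(pose - prev_pose))
    state = "stationary" if motion < 0.1 else "non-stationary"
    return contacts, state

def stage3_physics_refine(pose: np.ndarray, contacts: dict,
                          floor_z: float = 0.0) -> np.ndarray:
    """Stage 3: physics-based optimization would reproduce the kinematic
    pose as closely as possible under constraints (gravity, collisions,
    foot placement). Here we only resolve floor penetration."""
    refined = pose.copy()
    refined[:, 2] = np.maximum(refined[:, 2], floor_z)
    return refined

frame = np.zeros((8, 8, 3))            # dummy video frame
pose = stage1_kinematic_pose(frame)
contacts, state = stage2_contact_and_state(pose, prev_pose=pose)
refined = stage3_physics_refine(pose, contacts)
```

The real system replaces each stub with a trained network or a physics-based optimizer; the sketch only shows how the stages hand data to one another.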

In experiments, the researchers tested PhysCap with a Sony DSC-RX0 camera and a PC with 32GB of RAM, a GeForce RTX 2070 graphics card, and an eight-core Ryzen 7 processor, with which they captured and processed six motion sequences performed by two actors in different scenes. The coauthors found that while PhysCap generalized well across scenes with different backgrounds, it sometimes mispredicted foot contacts and, therefore, foot velocity. Another limitation was the need for a calibrated floor plane and a ground plane in the scene, which the researchers note is harder to find outdoors.

To address these limitations, the researchers plan to investigate modeling hand-scene interactions and contacts between the legs and body in sitting and lying poses. "Since the output of PhysCap is environment-aware and the returned root position is global, it is directly suitable for virtual character animation, without any further post-processing," the researchers wrote. "Here, applications in character animation, virtual and augmented reality, telepresence, or human-computer interaction, are only a few examples of high importance for graphics."


Big Data – VentureBeat

© 2021 Business Intelligence Info