Tag Archives: VIDEO

Cisco is bringing individual and team insights to Webex video calls

March 31, 2021   Big Data

Starting this summer, Cisco’s Webex will begin to serve up insights for video calls to a select group of users for individuals, teams, and organizations. Examples include engagement insights, like how often you had your video on or showed up on time and the people or teams within an organization that you speak with most often.

The goal, Cisco VP Jeetu Patel told VentureBeat in a phone interview, is to make video calls better for people living in the hybrid world between in-person meetings in the office and virtual meetings at home. The tricky part, he said, is considering what information is good for an individual to know while not giving people the impression that Webex is, for example, flagging employees who are routinely late to meetings to managers.

“Let’s say you did 12 meetings today, and in six of those meetings with four people or less, you actually spoke for 90% of the time. That would be a really bad thing to give your boss, but a really good thing for you to have so you can say, ‘Oh, I should probably do a better job listening,’” he said. “The privacy on that front is not at the organizational level. It’s at the individual level. So when we provide insights like that to an individual, the individual owns the data, not the organization, because we don’t believe that without your explicit permission, you’d want to have your boss see that.”
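For illustration only, the kind of per-person talk-time insight Patel describes could be derived from meeting logs along these lines. The data structure, field names, and 90% threshold below are assumptions, not Webex's implementation:

```python
# Hypothetical meeting logs: per-meeting participant count and speaking time.
# Flag small meetings (4 people or fewer) where "you" dominated the conversation.
meetings = [
    {"participants": 4, "speaking_seconds": {"you": 1620, "others": 180}},
    {"participants": 9, "speaking_seconds": {"you": 300, "others": 2400}},
]

for i, meeting in enumerate(meetings, start=1):
    total = sum(meeting["speaking_seconds"].values())
    share = meeting["speaking_seconds"]["you"] / total
    if meeting["participants"] <= 4 and share >= 0.9:
        print(f"Meeting {i}: you spoke {share:.0%} of the time -- consider listening more.")
```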

Webex has introduced a series of new features in recent months, some powered by artificial intelligence, to change how people share information in video calls. Toward this end, Patel said, “We’ve probably invested about a billion dollars or so in the past two years in AI.”

Above: Individual insights

Gesture recognition means that people in video calls can now raise their hand or give a thumbs up or thumbs down to ask to speak or register feedback. Another AI-powered feature on the way will crop the faces of people who attend in-person meetings for the person who’s working from home or remotely.

“Even though there are three people sitting in a conference room, we’ll actually break the stream into three separate boxes and show it to you, and our hardware will actually do that,” he said.

Patel has overseen the acquisition of three companies since joining Cisco last summer, after serving as chief product officer at Box. Last month, Cisco closed its acquisition of IMImobile for $730 million in part to beef up its AI capabilities. Last summer, Cisco announced plans to acquire BabbleLabs, an AI startup focused on filtering audio so that the sound of someone doing dishes nearby, a lawnmower, or loud background noise can be reduced or eliminated. And earlier this year, Cisco acquired Slido, a startup that makes engagement features for video calls like word clouds or upvoting questions. Such features can allow a meeting to take the structure of a town hall, with transparency around the top questions for employees within an organization since everyone can see the questions that are being posted.

“Engagement should not be measured based on having a judgment on someone saying, ‘I’m judging that you look sad, and therefore I’m going to do certain things.’ … At that point in time, in my mind, you could cross a boundary where there’s more bad that can come out of that than good,” he said.

In 2019, Cisco acquired Voicea to power speech-to-text transcription of meetings. Closed captioning and live translation are also available in Webex calls.

Deciding where to draw the line on which AI-powered features or insights to introduce in video calls can be a challenge with nuance. Earlier this year, Microsoft Research did a study with AffectiveSpotlight on AI for recognizing confusion, engagement, or head nods in meetings. If taken in the aggregate, picking up cues from the audience could be really helpful, particularly for large organizations. But if affective AI for video calls led to critique of how often a person smiles or shows certain forms of expression, it could be considered invasive, or counterproductive, or biased to certain groups of people.

Video analysis of expression today can have major shortcomings. A group of journalists in Germany recently demonstrated that placing a bookshelf in the background or putting on glasses can change affective AI evaluations of a person in a video.

It shouldn’t matter whether a person is an extrovert or prefers not to talk in group settings as long as they fulfill their job duties. And some people talk a lot but have nothing much to say, while others talk less often but deliver sharp insights or sage advice. It just depends on the team, role, and scenario.

“I’d rather you give explicit permission than something you pick up because one, it’s bad if you misread [certain stats]. And two, there’s a fine line between ‘This is super productive’ and ‘We can’t do this because it violates my privacy or it’s just outright creepy,’” Patel said.

Cisco plans to roll out Webex People Insights globally over the span of the next year starting with select users in the U.S. this summer, announcing the news today as part of Cisco Live. In other Cisco Live news, on Tuesday Cisco announced plans to combine networking, security, and IT infrastructure offerings and work with the Duo authentication platform it acquired in 2018.

Big Data – VentureBeat

Ring rolls out end-to-end video encryption after a class action lawsuit

January 13, 2021   Big Data

In September, Amazon-owned Ring announced that it would bring end-to-end video encryption to its lineup of home security devices. While the company already encrypted videos in storage and during transmission, end-to-end encryption secures videos on-device, preventing third parties without special keys from decrypting and viewing the recordings. The feature launches today in technical preview for compatible Ring products.

The rollout of end-to-end encryption comes after dozens of plaintiffs filed a class action lawsuit against Ring, alleging they had been subjected to death threats, racial slurs, and blackmail after their Ring cameras were hacked. In 2019, a data leak exposed the personal information of over 3,000 Ring users, including log-in emails, passwords, time zones, and the names people give to specific Ring cameras. Following the breach, Ring began requiring two-step verification for user sign-ins and launched a compromised password check feature that cross-references login credentials against a list of known compromised passwords.
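Ring has not published how its compromised password check is implemented. As a generic illustration of the idea, the public Have I Been Pwned range API offers the same capability with k-anonymity: only the first five hex characters of the password's SHA-1 hash ever leave the device. This sketch uses that public API and is not Ring's code:

```python
import hashlib
import urllib.request

def password_is_compromised(password: str) -> bool:
    """Check a password against the public Have I Been Pwned range API.

    Only a 5-character SHA-1 prefix is sent, so the service never sees the
    full hash. This is a generic illustration, not Ring's implementation.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; a match means the password is known.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

print(password_is_compromised("password123"))  # True for well-known breached passwords
```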

In a whitepaper, Ring explains that end-to-end encryption, which is available as a setting within the Ring app, is designed so users can view videos on enrolled smartphones only. Videos are encrypted with keys that are themselves encrypted with an algorithm that creates a public and private key. The public key encrypts, but the private key is required to decrypt. Only users have access to the private key, which is stored on their smartphone and decrypts the symmetric key, and by extension, encrypted videos.
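Ring's whitepaper doesn't include code, but the scheme it describes is standard envelope (hybrid) encryption: each video is encrypted with a symmetric key, and that key is wrapped with the user's public key so only the enrolled phone, which holds the private key, can unwrap it. A minimal sketch using the Python cryptography library, with toy key handling standing in for Ring's actual key management:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Camera side: encrypt the video with a fresh symmetric key.
video_key = Fernet.generate_key()
ciphertext = Fernet(video_key).encrypt(b"...raw video bytes...")

# The symmetric key is wrapped with the user's public key; only the holder
# of the private key (the enrolled phone) can unwrap it.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(video_key, oaep)

# Phone side: unwrap the symmetric key, then decrypt the video locally.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"...raw video bytes..."
```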

When a user opts into end-to-end encryption, the Ring app presents a 10-word auto-generated passphrase used to secure the cryptographic keys. (Ring says these words are randomly selected from a dictionary of 7,776.) The passphrase, which can be used to enroll additional smartphones, is generated on-device. But the public portion of the instance key pair and the account data key pair are copied to the Ring cloud after being signed by the account-signing key, as are the locally encrypted private portions of the account-signing key pair and the account data key pair.
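Ten words drawn from a 7,776-word list is the familiar Diceware pattern; each word contributes log2(7776) ≈ 12.9 bits, so a 10-word passphrase carries roughly 129 bits of entropy. A minimal sketch (not Ring's code) assuming a local copy of the EFF large wordlist, which contains exactly 7,776 entries; the file path is an assumption:

```python
import math
import secrets

# Assumption: a local copy of the EFF large wordlist, one entry per line
# (optionally prefixed with dice digits, hence taking the last field).
with open("eff_large_wordlist.txt") as f:
    words = [line.split()[-1] for line in f if line.strip()]

passphrase = " ".join(secrets.choice(words) for _ in range(10))
entropy_bits = 10 * math.log2(len(words))  # ~129 bits for a 7,776-word list
print(passphrase, f"({entropy_bits:.0f} bits of entropy)")
```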

Ring notes that end-to-end encryption disables certain features, including AI-dependent features that decrypt videos for processing work like motion verification and people-only mode. However, Live View, which decrypts video locally on-device, will continue to run while end-to-end encryption is enabled. And users can still share videos through Ring’s controversial Neighbors Public Safety Service, which connects residents with local law enforcement, by downloading an end-to-end encrypted video to their smartphone, where it is saved in decrypted form.

Users can switch off end-to-end encryption at any time, but any videos encrypted with end-to-end encryption can’t be decrypted; the keys to access those videos are removed permanently in the process. Conversely, turning on end-to-end encryption doesn’t encrypt any videos created before enrollment because the service only encrypts videos created post-enrollment.

Ring recently made headlines for a deal it reportedly struck with over 400 police departments nationwide that would allow authorities to request that owners volunteer footage from Ring cameras within a specific time and location. Ring, which has said it would not hand over footage if confronted with a subpoena but would comply when given a search warrant, has law enforcement partnerships in more than 1,300 cities.

Advocacy groups like Fight for the Future and the Electronic Frontier Foundation have accused Ring of using its cameras and Neighbors app (which delivers safety alerts) to build a private surveillance network via police partnerships. The Electronic Frontier Foundation in particular has singled Ring out for marketing strategies that foster fear and promote a sale-spurring “vicious cycle,” and for “[facilitating] reporting of so-called ‘suspicious’ behavior that really amounts to racial profiling.”

Big Data – VentureBeat

Researchers design AI that can infer whole floor plans from short video clips

January 7, 2021   Big Data

Floor plans are useful for visualizing spaces, planning routes, and communicating architectural designs. A robot entering a new building, for instance, can use a floor plan to quickly sense the overall layout. Creating floor plans typically requires a full walkthrough so 3D sensors and cameras can capture the entirety of a space. But researchers at Facebook, the University of Texas at Austin, and Carnegie Mellon University are exploring an AI technique that leverages visuals and audio to reconstruct a floor plan from a short video clip.

The researchers assert that audio provides spatial and semantic signals complementing the mapping capabilities of images. They say this is because sound is inherently driven by the geometry of objects. Audio reflections bounce off surfaces and reveal the shape of a room, far beyond a camera’s field of view. Sounds heard from afar — even multiple rooms away — can reveal the existence of “free spaces” where sounding objects might exist (e.g., a dog barking in another room). Moreover, hearing sounds from different directions exposes layouts based on the activities or things those sounds represent. A shower running might suggest the direction of the bathroom, for example, while microwave beeps suggest a kitchen.

The researchers’ approach, which they call AV-Map, aims to convert short videos with multichannel audio into 2D floor plans. A machine learning model leverages sequences of audio and visual data to reason about the structure and semantics of the floor plan, finally fusing information from audio and video using a decoder component. The floor plans AV-Map generates, which extend significantly beyond the area directly observable in the video, show free space and occupied regions divided into a discrete set of semantic room labels (e.g., family room and kitchen).
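The paper's full architecture isn't reproduced here, but the general shape, separate audio and visual sequence encoders whose features are fused and decoded into a top-down grid of per-cell room labels, can be sketched in a few lines of PyTorch. Every layer size and name below is an illustrative assumption, not the authors' model:

```python
import torch
import torch.nn as nn

class AudioVisualMapper(nn.Module):
    """Toy two-stream encoder plus fusion decoder in the spirit of AV-Map.

    Shapes and layers are illustrative assumptions, not the paper's model.
    """
    def __init__(self, n_room_classes: int = 10, map_size: int = 32):
        super().__init__()
        self.visual_enc = nn.GRU(input_size=512, hidden_size=256, batch_first=True)
        self.audio_enc = nn.GRU(input_size=128, hidden_size=256, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(512, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_room_classes * map_size * map_size),
        )
        self.n_room_classes, self.map_size = n_room_classes, map_size

    def forward(self, visual_seq, audio_seq):
        _, v = self.visual_enc(visual_seq)          # last hidden state: (1, B, 256)
        _, a = self.audio_enc(audio_seq)            # (1, B, 256)
        fused = torch.cat([v[-1], a[-1]], dim=-1)   # (B, 512)
        logits = self.decoder(fused)
        return logits.view(-1, self.n_room_classes, self.map_size, self.map_size)

# One 20-step clip: 512-d visual features and 128-d audio features per step.
model = AudioVisualMapper()
floor_logits = model(torch.randn(1, 20, 512), torch.randn(1, 20, 128))
print(floor_logits.shape)  # torch.Size([1, 10, 32, 32]) -- per-cell room-label scores
```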

The team experimented with two settings, active and passive, in digital environments from the popular Matterport3D and SoundSpaces datasets loaded into Facebook’s AI Habitat. In the first, they used a virtual camera to emit a known sound while it moved throughout the rooms of a model home. In the second, they relied only on naturally occurring sounds made by objects and people inside the home.

Across videos recorded in 85 large, real-world, multiroom environments within AI Habitat, the researchers say AV-Map not only consistently outperformed traditional vision-based mapping but improved the state-of-the-art technique for extrapolating occupancy maps beyond visible regions. With just a few glimpses spanning 26% of an area, AV-Map could estimate the whole area with 66% accuracy.

“A short video walk through a house can reconstruct the visible portions of the floorplan but is blind to many areas. We introduce audio-visual floor plan reconstruction, where sounds in the environment help infer both the geometric properties of the hidden areas as well as the semantic labels of the unobserved rooms (e.g., sounds of a person cooking behind a wall to the camera’s left suggest the kitchen),” the researchers wrote in a paper detailing AV-Map. “In future work, we plan to consider extensions to multi-level floor plans and connect our mapping idea to a robotic agent actively controlling the camera … To our knowledge, ours is the first attempt to infer floor plans from audio-visual data.”

Big Data – VentureBeat

AI that directs drones to film ‘exciting’ shots could lower video production costs

November 24, 2020   Big Data

Because of their ability to detect, track, and follow objects of interest while maintaining safe distances, drones have become an important tool for professional and amateur filmmakers alike. Even so, quadcopters’ camera controls remain difficult to master. Drones might take different paths for the same scenes even if their positions, velocities, and angles are carefully tuned, potentially ruining the consistency of a shot.

In search of a solution, Carnegie Mellon, University of Sao Paulo, and Facebook researchers developed a framework that enables users to define drone camera shots working from labels like “exciting,” “enjoyable,” and “establishing.” Using a software simulator, they generated a database of video clips with a diverse set of shot types and then leveraged crowdsourcing and AI to learn the relationship between the labels and certain semantic descriptors.

Videography can be a costly endeavor. Filming a short commercial runs $1,500 to $3,500 on the low end, a hefty expense for small-to-medium-sized businesses. This leads some companies to pursue in-house solutions, but not all have the expertise required to execute on a vision. AI like Facebook’s, as well as Disney’s and Pixar’s, could lighten the load in a meaningful way.

The coauthors of this new framework began by conducting a series of experiments to determine the “minimal perceptually valid step sizes” — i.e., the minimum number of shots a drone had to take — for various shot parameters. Next, they built a dataset of 200 videos using these steps and tasked volunteers from Amazon Mechanical Turk with assigning scores to semantic descriptors. The scores informed a machine learning model that mapped the descriptors to parameters that could guide the drone through shots. Lastly, the team deployed the framework to a real-world Parrot Bepop 2 drone, which they claim managed to generalize well to different actors, activities, and settings.
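The exact learning setup isn't spelled out in the article, but the core idea, fitting a model that maps crowd-rated descriptor scores to concrete shot parameters, can be sketched with ordinary regression. The feature names and training values below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: crowd-rated descriptor scores
# (exciting, enjoyable, establishing) paired with the shot parameters
# that produced each clip (distance_m, height_m, yaw_deg_per_s).
descriptor_scores = np.array([
    [0.9, 0.7, 0.1],
    [0.2, 0.5, 0.9],
    [0.6, 0.8, 0.3],
    [0.1, 0.3, 0.8],
])
shot_parameters = np.array([
    [3.0, 1.5, 40.0],
    [12.0, 6.0, 5.0],
    [5.0, 2.5, 25.0],
    [10.0, 5.0, 8.0],
])

model = Ridge(alpha=1.0).fit(descriptor_scores, shot_parameters)

# Ask for a shot that should feel highly "exciting" and not "establishing".
requested = np.array([[0.95, 0.6, 0.05]])
print(model.predict(requested))  # predicted [distance, height, yaw rate]
```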

The researchers assert that while the framework targets nontechnical users, experts could adapt it to gain more control over the model’s outcome. For example, they could learn separate generative models for individual shot types and exert more direction over the model’s inputs and outputs.

“Our … model is able to successfully generate shots that are rated by participants as having the expected degrees of expression for each descriptor,” the researchers wrote. “Furthermore, the model generalizes well to other simulated scenes and to real-world footages, which strongly suggests that our semantic control space is not overly attached to specific features of the training environment nor to a single set of actor motions.”

In the future, the researchers hope to explore a larger set of parameters to control each shot, including lens zoom and potentially even soundtracks. They’d also like to extend the framework to take into account features like terrain and scenery.

Big Data – VentureBeat

Accelerate Your Digital Transformation with PowerBanking [VIDEO]

November 11, 2020   Microsoft Dynamics CRM

PowerObjects currently partners with many of the largest retail and commercial banks globally, empowering them to deliver omnichannel customer service, business process automation, and intelligent customer insights. All while enabling banks to adhere to stringent data protection and governance policies. And we can do it for your bank, as well! Why are we so successful helping banks transform…

Source

PowerObjects- Bringing Focus to Dynamics CRM

AI Weekly: Nvidia’s Maxine opens the door to deepfakes and bias in video calls

October 10, 2020   Big Data
Will AI power video chats of the future? That’s what Nvidia implied this week with the unveiling of Maxine, a platform that provides developers with a suite of GPU-accelerated AI conferencing software. Maxine brings AI effects including gaze correction, super-resolution, noise cancellation, face relighting, and more to end users, while in the process reducing how much bandwidth videoconferencing consumes. Quality-preserving compression is a welcome innovation at a time when videoconferencing is contributing to record bandwidth usage. But Maxine’s other, more cosmetic features raise uncomfortable questions about AI’s negative — and possibly prejudicial — impact.

A quick recap: Maxine employs AI models called generative adversarial networks (GANs) to modify faces in video feeds. Top-performing GANs can create realistic portraits of people who don’t exist, for instance, or snapshots of fictional apartment buildings. In Maxine’s case, they can enhance the lighting in a video feed and recomposite frames in real time.

Bias in computer vision algorithms is pervasive, with Zoom’s virtual backgrounds and Twitter’s automatic photo-cropping tool disfavoring people with darker skin. Nvidia hasn’t detailed the datasets or AI model training techniques it used to develop Maxine, but it’s not outside of the realm of possibility that the platform might not, for instance, manipulate Black faces as effectively as light-skinned faces. We’ve reached out to Nvidia for comment.

Beyond the bias issue, there’s the fact that facial enhancement algorithms aren’t always good for mental health. Studies by Boston Medical Center and others show that filters and photo editing can take a toll on people’s self-esteem and trigger disorders like body dysmorphia. In response, Google earlier this month said it would turn off by default its smartphones’ “beauty” filters that smooth out pimples, freckles, wrinkles, and other skin imperfections. “When you’re not aware that a camera or photo app has applied a filter, the photos can negatively impact mental wellbeing,” the company said in a statement. “These default filters can quietly set a beauty standard that some people compare themselves against.”

That’s not to mention how Maxine might be used to get around deepfake detection. Several of the platform’s features analyze the facial points of people on a call and then algorithmically reanimate the faces in the video on the other side, which could interfere with the ability of a system to identify whether a recording has been edited. Nvidia will presumably build in safeguards to prevent this — currently, Maxine is available to developers only in early access — but the potential for abuse is a question the company hasn’t so far addressed.

None of this is to suggest that Maxine is malicious by design. Gaze correction, face relighting, upscaling, and compression seem useful. But the issues Maxine raises point to a lack of consideration for the harms its technology might cause, a tech industry misstep so common it’s become a cliche. The best-case scenario is that Nvidia takes steps (if it hasn’t already) to minimize the ill effects that might arise. The fact that the company didn’t reserve airtime to spell out these steps at Maxine’s unveiling, however, doesn’t instill confidence.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Big Data – VentureBeat

Transform the Patient Experience with the Microsoft Healthcare Bot [VIDEO]

October 1, 2020   Microsoft Dynamics CRM

PowerObjects is deeply entrenched in the healthcare industry. Our mission is to advocate for patients through the use of Dynamics 365 and the Microsoft Business Applications platform. In recent years, we’ve delivered dozens of digital transformations for healthcare providers big and small. We speak from experience when we say that nothing is more important in this industry than the patient…

Source

PowerObjects- Bringing Focus to Dynamics CRM

Get 360-Degree Client Views with PowerCapital [VIDEO]

September 3, 2020   Microsoft Dynamics CRM

How can Capital Markets firms deliver consistent excellent customer service if they’re unable to tap into the reliable 360-degree client views that make it possible? Fortunately, PowerObjects has the answer. Built on Microsoft Dynamics 365, PowerCapital is a custom solution accelerator designed by Capital Markets experts at PowerObjects and based on their years of experience within the industry.

Source

PowerObjects- Bringing Focus to Dynamics CRM

Video Consultations Give Oh My Glasses Renewed Vision

April 9, 2020   NetSuite

Posted by Tom Hansford, Content Specialist

Japanese eyewear retailer Oh My Glasses is famous for providing sophisticated and fashionable glasses. With eight stores across six cities, the brand prides itself on exceptional customer service. Because as anyone who wears glasses knows, picking the right frame is a big decision.

So, when the business was forced to reduce store opening times due to social distancing restrictions, the marketing team had to think on their feet. They needed to come up with a new way to engage with customers, whilst still delivering an incredible service.

2020 Vision

Oh My Glasses decided to offer frame-fitting consultations via Zoom, adding a welcome human component to its ecommerce site. Customers can now head to the online store and, with the help of a trained team member, select five frames they like based on face shape, style preference and colour. Next, these five frames are posted to the customer, who can try the glasses on at home free of charge. After five days, customers return the frames, with no commitment to purchase.

Specs Appeal 

Oh My Glasses marketing manager, Yosuke Watanabe, said that the new initiative is already generating a great response from customers.

“Our customers can enjoy a professional fitting consultation, helping them find the right glasses,” Watanabe said. “What’s nice is that they can now choose from over 7000 glasses that we feature online, rather than being restricted to what’s in a single store. More choice is really appealing.”

The service is also giving people who are trapped at home something fun to do when they can’t go shopping.

“People are stuck inside, so our service helps them engage with someone and hopefully get some awesome new glasses,” he said. “They enjoy the process.”

Not only is the new service benefiting customers, but it also benefits team members who would otherwise not be working.

“Our employees get to work. They provide the same service as they would in store, just on a video call instead,” Watanabe said.

Future Focus

The new video frame-fitting service is still in its infancy, but Watanabe is delighted with how well it’s been received.

“It’s still early days, but we will consider keeping the service going when store opening restrictions are lifted,” he said.

He revealed there was initial uncertainty around launching the Zoom-based service, as some stakeholders in the business felt as though they weren’t ready.

“What we’ve learnt is that nothing is perfect,” Watanabe said. “In this new world, you just have to get out there and try things. Your customers will understand it probably won’t be exactly right. And that’s okay. At least you’re trying to help them.”

Check out Oh My Glasses’ extensive range of eyewear by visiting its online store.

Posted on Wed, April 8, 2020 by NetSuite

The NetSuite Blog

Microsoft’s AI determines whether statements about video clips are true

March 28, 2020   Big Data

In a paper published on the preprint server Arxiv.org, researchers affiliated with Carnegie Mellon, the University of California at Santa Barbara, and Microsoft’s Dynamics 365 AI Research describe a challenge — video-and-language inference — that tasks AI with inferring whether a statement is entailed or contradicted by a given video clip. The idea is to spur investigations into video-and-language understanding, they say, which could enhance tools used in the enterprise for automatic meeting transcription.

As the researchers explain, video-and-language inference requires a thorough interpretation of both visual and textual clues. To this end, they introduce a video data set comprising realistic scenes paired with statements written by crowdsourced workers on Amazon Mechanical Turk, who watched the videos accompanied by subtitles. The workers wrote statements based on their understanding of both the videos and subtitles, which not only describe explicit information in the video (e.g., objects, locations, characters, and social activity) but also reveal comprehension of complex plots (understanding events, interpreting human emotions and relations, and inferring causal relations between events).

In total, the data set contains 95,322 video-statement pairs and 15,887 movie clips from YouTube and TV series — including Friends, Desperate Housewives, How I Met Your Mother, and Modern Family — spanning over 582 hours. Each roughly 30-second video is paired with six statements, either positive or negative, that identify characters, recognize actions, reason about conversations, infer reasons, or make reference to human dynamics. (In order to prevent bias from creeping in, when collecting negative statements, the researchers asked annotators to use a positive statement as a reference and only modify a small portion of it to make it negative.)

To benchmark the data set, the coauthors used a bi-directional long short-term memory model, a type of AI model capable of learning long-term dependencies, to encode video features as numerical representations. A separate model encoded statements and subtitles. Given a video, subtitle, and statement, yet another model — which was trained on 80% of the data set, with 10% reserved for validation and 10% for testing — determined whether the statement entailed or contradicted the video and subtitles. They say that the best-performing baseline achieved 59.45% accuracy, compared with human evaluators’ 85.20% accuracy.
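As a rough sketch of that baseline pattern, rather than the authors' exact model, the following PyTorch snippet encodes precomputed video features and statement tokens with bidirectional LSTMs, concatenates the summaries, and classifies entailed versus contradicted. All dimensions, the vocabulary size, and the fusion scheme are assumptions:

```python
import torch
import torch.nn as nn

class VideoStatementEntailment(nn.Module):
    """Toy BiLSTM baseline: encode video features and statement tokens,
    fuse them, and predict entailed vs. contradicted. Sizes are assumptions."""
    def __init__(self, vocab_size: int = 20000, video_dim: int = 1024):
        super().__init__()
        self.video_lstm = nn.LSTM(video_dim, 256, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab_size, 300)
        self.text_lstm = nn.LSTM(300, 256, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(4 * 256, 2)  # 512 video dims + 512 text dims

    def forward(self, video_feats, statement_ids):
        _, (v_h, _) = self.video_lstm(video_feats)            # (2, B, 256)
        _, (t_h, _) = self.text_lstm(self.embed(statement_ids))
        v = torch.cat([v_h[0], v_h[1]], dim=-1)               # forward + backward: (B, 512)
        t = torch.cat([t_h[0], t_h[1]], dim=-1)
        return self.classifier(torch.cat([v, t], dim=-1))     # (B, 2) logits

# A batch of 2 clips (30 frames of 1024-d features) and 12-token statements.
model = VideoStatementEntailment()
logits = model(torch.randn(2, 30, 1024), torch.randint(0, 20000, (2, 12)))
print(logits.shape)  # torch.Size([2, 2])
```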

“The gap between the baseline models and human performance is significant. We encourage the community to participate in this task and invent stronger methods to push the state of the art on multimodal inference,” wrote the researchers. “Possible future directions include developing models to localize key frames, as well as better utilizing the alignment between video and subtitles to improve reasoning ability.”

The research follows a study by Microsoft Research Asia and Harbin Institute of Technology that sought to generate live video captions with AI by capturing the representations among comments, video, and audio. The system — the code for which is available on GitHub — matches the most relevant comments with videos from a candidate set so that it jointly learns cross-modal representations.

Big Data – VentureBeat