Category Archives: Big Data

IBM releases Qiskit modules that use quantum computers to improve machine learning

April 11, 2021   Big Data

IBM is releasing Qiskit Machine Learning, a set of new application modules that’s part of its open source quantum software. The new feature is the latest expansion of the company’s broader effort to get more developers to begin experimenting with quantum computers.

According to a blog post by the Qiskit Applications Team, the machine learning modules promise to help optimize machine learning by using quantum computers for some parts of the process.

“Quantum computation offers another potential avenue to increase the power of machine learning models, and the corresponding literature is growing at an incredible pace,” the team wrote. “Quantum machine learning (QML) proposes new types of models that leverage quantum computers’ unique capabilities to, for example, work in exponentially higher-dimensional feature spaces to improve the accuracy of models.”
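
As a concrete illustration, here is a minimal sketch of a quantum-kernel classifier built with the qiskit-machine-learning package. The module and class names (ZZFeatureMap, QuantumKernel, QSVC) follow the initial 2021 release and may have moved in later versions; X_train, y_train, and X_test are placeholder datasets.

```python
# Minimal sketch of a quantum-kernel classifier with Qiskit Machine Learning.
# Names follow the initial 2021 release of qiskit-machine-learning; later
# versions reorganized some classes. X_train / y_train / X_test are placeholders.
from qiskit import BasicAer
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import QuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Encode two classical features into a parameterized quantum circuit.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)

# Run on a local simulator here; the same code can target IBM's cloud backends.
quantum_instance = QuantumInstance(BasicAer.get_backend("qasm_simulator"), shots=1024)

# Kernel entries are estimated by executing circuits, which is where the
# "exponentially higher-dimensional feature space" mentioned above comes in.
kernel = QuantumKernel(feature_map=feature_map, quantum_instance=quantum_instance)

# QSVC is a scikit-learn-style support vector classifier built on that kernel.
qsvc = QSVC(quantum_kernel=kernel)
# qsvc.fit(X_train, y_train)
# predictions = qsvc.predict(X_test)
```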

Rather than replacing current computer architectures, IBM is betting that quantum computers will gain traction in the coming years by taking on very specific tasks that are offloaded from a classical computing system to a quantum platform. AI and machine learning are among the areas where IBM has said it’s hopeful that quantum can make an impact.

To make quantum more accessible, last year IBM introduced an open source quantum programming framework called Qiskit. The company has said Qiskit has the potential to speed up some applications by 100 times.

In the case of machine learning, the hope is that a system that offloads tasks to a quantum system could accelerate the training time. However, challenges remain, such as how to get large data sets in and out of the quantum machine without adding time that would cancel out any gains by the quantum calculations.

Developers who use Qiskit to improve their algorithms will be able to test them on IBM’s cloud-based quantum computing platform.


AI Weekly: Continual learning offers a path toward more humanlike AI

April 10, 2021   Big Data

State-of-the-art AI systems are remarkably capable, but they suffer from a key limitation: they are static. Algorithms are trained once on a dataset and rarely again, making them incapable of learning new information without retraining. This stands in contrast to the human brain, which learns constantly, using knowledge gained over time and building on it as it encounters new information. While there’s been progress toward bridging the gap, solving the problem of “continual learning” remains a grand challenge in AI.

This challenge motivated a team of AI and neuroscience researchers to found ContinualAI, a nonprofit organization and open community of continual and lifelong learning enthusiasts. ContinualAI recently announced Avalanche, a library of tools compiled over the course of a year from over 40 contributors to make continual learning research easier and more reproducible. The group also hosts conference-style presentations, sponsors workshops and AI competitions, and maintains a repository of tutorials, code, and guides.

As Vincenzo Lomonaco, cofounding president and assistant professor at the University of Pisa, explains, ContinualAI is one of the largest organizations on a topic its members consider fundamental for the future of AI. “Even before the COVID-19 pandemic began, ContinualAI was founded with the idea of pushing the boundaries of science through distributed, open collaboration,” he told VentureBeat via email. “We provide a comprehensive platform to produce, discuss and share original research in AI. And we do this completely for free, for anyone.”

Even highly sophisticated deep learning algorithms can experience catastrophic forgetting, also known as catastrophic interference, a phenomenon where deep networks fail to recall what they’ve learned from a training dataset. The result is that the networks have to be constantly reminded of the knowledge they’ve gained or risk becoming “stuck” with their most recent “memories.”
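
To make the failure mode concrete, below is a toy sketch of experience replay, one of the simplest defenses against catastrophic forgetting. It is illustrative only and is not taken from ContinualAI's Avalanche library; the model, the task lists, and the buffer sizes are all placeholders.

```python
# Toy illustration of experience replay, a simple defense against catastrophic
# forgetting. Illustrative only; not ContinualAI's Avalanche API. `model` and
# `tasks` (lists of (inputs, labels) mini-batch tensors) are placeholders.
import random
from torch import nn, optim

def train_continually(model, tasks, buffer_size=200, replay_per_step=8):
    """tasks: a sequence of lists of (inputs, labels) mini-batches, one list per task."""
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    replay_buffer = []  # retains a few mini-batches from earlier tasks

    for task in tasks:
        for inputs, labels in task:
            # Mix the current mini-batch with rehearsed samples from old tasks so
            # the gradient step does not simply overwrite earlier learning.
            rehearsal = random.sample(replay_buffer, min(replay_per_step, len(replay_buffer)))
            optimizer.zero_grad()
            loss = sum(criterion(model(x), y) for x, y in [(inputs, labels)] + rehearsal)
            loss.backward()
            optimizer.step()
        # Keep a bounded sample of this task for future rehearsal.
        replay_buffer.extend(random.sample(task, min(buffer_size, len(task))))
        replay_buffer = replay_buffer[-buffer_size:]
    return model
```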

OpenAI research scientist Jeff Clune, who helped to cofound Uber AI Labs in 2017, has called catastrophic forgetting the “Achilles’ heel” of machine learning and believes that solving it is the fastest path to artificial general intelligence (AGI). Last February, Clune coauthored a paper detailing ANML, an algorithm that managed to learn 600 sequential tasks with minimal catastrophic forgetting by “meta-learning” solutions to problems instead of manually engineering solutions. Separately, Alphabet’s DeepMind has published research suggesting that catastrophic forgetting isn’t an insurmountable challenge for neural networks. And Facebook is advancing a number of techniques and benchmarks for continual learning, including a model that it claims is effective in preventing the forgetting of task-specific skills.

But while the past several years have seen a resurgence of research into the issue, catastrophic forgetting largely remains unsolved, according to Keiland Cooper, a cofounding member of ContinualAI and a neuroscience research associate at the University of California, Irvine. “The potential of continual learning exceeds catastrophic forgetting and begins to touch on more interesting questions of implementing other cognitive learning properties in AI,” Cooper told VentureBeat. “Transfer learning is one example, where when humans or animals learn something previously, sometimes this learning can be applied to a new context or aid learning in other domains … Even more alluring is that continual learning is an attempt to push AI from narrow, savant-like systems to broader, more general ones.”

Even if continual learning doesn’t yield the sort of AGI depicted in science fiction, Cooper notes that there are immediate advantages to it across a range of domains. Cutting-edge models are being trained on increasingly larger datasets in search of better performance, but this training comes at a cost, whether that’s the weeks spent waiting for training to finish or the environmental impact of the electricity it consumes.

“Say you run a certain AI organization that built a natural language model that was trained over weeks on 45 terabytes of data for a few million dollars,” Cooper explained. “If you want to teach that model something new, well, you’d very likely have to start from scratch or risk overwriting what it had already learned, unless you added continual learning additions to the model. Moreover, at some point, the cost to store that data will be exceedingly high for an organization, or even impossible. Beyond this, there are many cases where you can only see the data once and so retraining isn’t even an option.”

While the blueprint for a continual learning AI system remains elusive, ContinualAI aims to connect researchers and stakeholders interested in the area and support and provide a platform for projects and research. It’s grown to over 1,000 members in the three years since its founding.

“For me personally, while there has been a renewed interest in continual learning in AI research, the neuroscience of how humans and animals can accomplish these feats is still largely unknown,” Cooper said. “I’d love to see more of an interaction with AI researchers, cognitive scientists, and neuroscientists to communicate and build upon each of their fields’ ideas toward a common goal of understanding one of the most vital aspects of learning and intelligence. I think an organization like ContinualAI is best positioned to do just that, which allows for the sharing of ideas without the boundaries of the academic or industry walls, siloed fields, or distant geolocation.”

Beyond the mission of disseminating information about continual learning, Lomonaco believes that ContinualAI has the potential to become a reference point for a more inclusive and collaborative way of doing research in AI. “Elite university and private company labs still work mostly behind closed doors, [but] we truly believe in inclusion and diversity rather than selective elitism. We favor transparency and open-source rather than protective IP licenses. We make sure anyone has access to the learning resources she needs to achieve her potential.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


Amazon launches ML-powered maintenance tool Lookout for Equipment in general availability

April 9, 2021   Big Data

Amazon today announced the general availability of Lookout for Equipment, a service that uses machine learning to help customers perform maintenance on equipment in their facilities. Launched in preview last year during Amazon Web Services (AWS) re:Invent 2020, Lookout for Equipment ingests sensor data from a customer’s industrial equipment and then trains a model to predict early warning signs of machine failure or suboptimal performance.

Predictive maintenance technologies have been used for decades in jet engines and gas turbines, and companies like GE Digital (with Predix) and Petasense offer Wi-Fi-enabled, cloud- and AI-driven sensors. According to a recent report by analysts at Markets and Markets, predictive factory maintenance could be worth $12.3 billion by 2025. Beyond Amazon, startups like Augury are vying for a slice of the segment.

With Lookout for Equipment, industrial customers can build a predictive maintenance solution for a single facility or multiple facilities. To get started, companies upload their sensor data — like pressure, flow rate, RPMs, temperature, and power — to Amazon Simple Storage Service (S3) and provide the relevant S3 bucket location to Lookout for Equipment. The service will automatically sift through the data, look for patterns, and build a model that’s tailored to the customer’s operating environment. Lookout for Equipment will then use the model to analyze incoming sensor data and identify early warning signs of machine failure or malfunction.
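
As a rough sketch of that first step, the snippet below uploads a CSV of historical sensor readings to S3 with boto3 and then opens the Lookout for Equipment client. The bucket, key, and file names are hypothetical, and the dataset and model creation that follows is done through the service's console or API.

```python
# Hypothetical first step: push historical sensor data to S3 so Lookout for
# Equipment can ingest it. Bucket, key, and file names are placeholders.
import boto3

s3 = boto3.client("s3")

# Historical sensor readings (pressure, flow rate, RPM, temperature, power) as CSV.
s3.upload_file(
    Filename="pump_sensors_2020.csv",         # local export of sensor readings
    Bucket="acme-industrial-telemetry",       # hypothetical bucket name
    Key="lookout/pump-17/pump_sensors_2020.csv",
)

# Lookout for Equipment is then pointed at the bucket/prefix above. Its boto3
# client is named "lookoutequipment"; for example, list datasets already registered:
lookout = boto3.client("lookoutequipment")
print(lookout.list_datasets())
```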

For each alert, Lookout for Equipment will specify which sensors are indicating an issue and measure the magnitude of its impact on the detected event. For example, if Lookout for Equipment spotted a problem on a pump with 50 sensors, the service could show which five sensors indicate an issue on a specific motor and relate that issue to the motor’s power current and temperature.

“Many industrial and manufacturing companies have heavily invested in physical sensors and other technology with the aim of improving the maintenance of their equipment. But even with this gear in place, companies are not in a position to deploy machine learning models on top of the reams of data due to a lack of resources and the scarcity of data scientists,” VP of machine learning at AWS Swami Sivasubramanian said in a press release. “Today, we’re excited to announce the general availability of Amazon Lookout for Equipment, a new service that enables customers to benefit from custom machine learning models that are built for their specific environment to quickly and easily identify abnormal machine behavior — so that they can take action to avoid the impact and expense of equipment downtime.”

Lookout for Equipment is available via the AWS console as well as through supporting partners in the AWS Partner Network. It launches today in US East (N. Virginia), EU (Ireland), and Asia Pacific (Seoul) server regions, with availability in additional regions in the coming months.

The launch of Lookout for Equipment follows the general availability of Lookout for Metrics, a fully managed service that uses machine learning to monitor key factors impacting the health of enterprises. Both products are complemented by Amazon Monitron, an end-to-end equipment monitoring system to enable predictive maintenance with sensors, a gateway, an AWS cloud instance, and a mobile app.


Tasktop nabs $100M to turn DevOps metrics into visualizations at scale

April 8, 2021   Big Data

Value stream management (VSM) platform Tasktop today announced that it raised $100 million, bringing its total raised to over $129 million. The company says it plans to use the funding to accelerate growth while expanding the size of its customer base.

As traditional businesses pour billions into digital transformation initiatives, they often struggle with the complexity of the teams, tools, and metrics at the core of those investments. Technical leaders deeply understand the software development process and business leaders know the investment strategies, but the two aren’t always aligned. In a study, Geneca found that 75% of executives surveyed admitted that their projects were either “always” or “usually” doomed right from the start.

Vancouver, Canada-based Tasktop, which was founded in 2007, offers a VSM platform designed to reduce time to market and increase the velocity of software development. Sitting above the software development toolchain, Tasktop integrates with software development tools like Jira Software, ServiceNow, Azure DevOps, and more to allow organizations to see potential blockers.


VSM was born out of the frustration that most enterprises aren’t adequately adaptive. According to Forrester, only 16% of enterprises say that they can release software more than once a month. For Tasktop’s part, the company asserts that VSM allows organizations to break down silos as well as identify and remove bottlenecks, eliminate waste, and accelerate delivery.

Tasktop’s platform overlays the value stream to provide abstractions, visualizations, and diagnostics that measure all types of software delivery work. Connectors let customers send work between different dev tools, eliminating duplicate data entry and automating traceability. And Tasktop’s testing regimen runs 500,000 tests daily over 300 tool versions to ensure they work properly, handling tooling and API changes to minimize outages and delays.

Coinciding with the new funding, Tasktop this morning launched a dashboard within its Tasktop Viz product — VSM Portfolio Insights — that rolls up analytics generated at the individual product value stream level to the executive plane. The dashboard presents consolidated insights into the performance, quality, value, and impact of delivery, including:

  • The progress of the shift from project to product-based IT
  • The ability to respond rapidly to the market
  • The business processes capable of acceleration
  • The value creation and value protection areas currently lacking appropriate
    investment


Since the birth of VSM, the market category has grown exponentially compared with the longer-tail development of agile and DevOps. The expansion speaks for itself with players like ServiceNow, IBM, Digital.ai, and of course Tasktop joining the fray. Last December, Tasktop announced record year-over-year growth with a 30% uptick in both revenue and customers. The company now claims to serve leading brands, including over half of the Fortune 100.

Sumeru Equity Partners led Tasktop’s latest funding round, a strategic investment. Management and existing investors also participated.


Snorkel AI’s app development platform lures $35M

April 7, 2021   Big Data

Snorkel AI, a startup developing data labeling tools aimed at enterprises, today announced that it raised $35 million in a series B round led by Lightspeed Venture Partners. The funding marks the launch of the company’s Application Studio, a visual builder with templated solutions for common AI use cases based on best practices from academic institutions.

According to a 2020 Cognilytica report, 80% of AI development time is spent on manually gathering, organizing, and labeling the data that’s used to train machine learning models. Hand labeling is notoriously expensive and slow, with limited leeway for development teams to build, iterate, adapt, or audit apps. In a recent survey conducted by startup CrowdFlower, data scientists said that they spend 60% of their time just organizing and cleaning data, compared with 4% on refining algorithms.

Snorkel AI hopes to address this with tools that let customers create and manage training data, train models, and analyze and iterate AI systems. Founded by a team spun out of the Stanford AI Lab, Snorkel AI claims to offer the first AI app development platform, Snorkel Flow, that labels and manages machine learning training data programmatically.
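
Snorkel Flow itself is proprietary, but the open source snorkel library that grew out of the same Stanford research shows the core idea of programmatic labeling: write small labeling functions, apply them to unlabeled data, and let a label model reconcile their noisy votes. Below is a minimal sketch; the spam-versus-ham task and the df_train DataFrame are placeholder examples.

```python
# Minimal sketch of programmatic labeling with the open source snorkel library
# (not the commercial Snorkel Flow product). The data here is a toy placeholder.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, SPAM, HAM = -1, 1, 0

df_train = pd.DataFrame({"text": [
    "win money now http://spam.example",
    "see you at lunch",
    "free prizes http://scam.example click here",
    "meeting moved to 3pm, bring the slides",
]})

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages with URLs are likely spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages are likely benign.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

# Apply the labeling functions to every row, producing a noisy label matrix.
applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_message])
L_train = applier.apply(df=df_train)

# The label model denoises and combines the conflicting votes into
# probabilistic training labels, with no hand labeling required.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=500, seed=123)
probs = label_model.predict_proba(L=L_train)
```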


Application Studio will expand the Snorkel AI platform’s capabilities in a number of ways, the company says, by introducing prebuilt solution templates based on industry-specific use cases. Customers can leverage templates for contract intelligence, news analytics, and customer interaction routing as well as common AI tasks such as text and document classification, named entity recognition, and information extraction. Application Studio also provides packaged app-specific preprocessors, programmatic labeling templates, and high-performance open source models that can be trained with private data, in addition to collaborative workflows that decompose apps into modular parts.

Beyond this, Application Studio offers a feature that versions the entire development pipeline from datasets to user contributions. With a few lines of code, apps can be adapted to new data or goals. And they keep training data labeling and orchestration in-house, mitigating data breach and data bias risks.

Application Studio is in preview and will be generally available later this year within Snorkel Flow, Snorkel AI says.

Palo Alto, California-based Snorkel AI’s latest fundraising round brings the startup’s total raised to date to $50 million, which 40-employee Snorkel AI says will be used to scale its engineering team and acquire new customers. Previous investors Greylock, GV, In-Q-Tel, and Nepenthe Capital, along with new investor Walden and funds and accounts managed by BlackRock, also participated in the series B.


Intel touts latest Xeon processor for scaling 5G networks

April 6, 2021   Big Data

Intel launched its latest datacenter platform in the form of the 3rd Gen Intel Xeon Scalable processors.

The Santa Clara, California-based chipmaker said that the processors deliver a 46% performance increase on datacenter workloads. The server chips with integrated AI will power cloud-native datacenters and applications such as 5G networks, cryptography, drug discovery, and confidential computing. For 5G, the new chips deliver on average 62% more performance on network and 5G workloads.

In an online briefing, Intel executive vice president Navin Shenoy said Intel has added advanced security with Intel Software Guard Extension and Intel Crypto Acceleration. Intel has shipped more than 200,000 chips for revenue in the first quarter, and it boasts more than 250 design wins for the chips with 50 partners, 15 telecom equipment and communications firms, and 20 high-performance computing labs.

AT&T said it is seeing 1.9 times higher throughput and 33% more memory capacity with the combination of the Intel Xeon Scalable processors and Intel Optane Persistent Memory, so the network can serve the same number of subscribers at higher resolution or a greater number of subscribers at the same resolution. Verizon and Vodafone also said they’re using the new Xeons. With the chips, Intel said communication service providers can increase 5G user plane function performance by up to 42%.

The chip uses Intel’s 10-nanometer manufacturing process (roughly comparable, in naming terms, to rivals’ 7-nanometer processes), and it delivers up to 40 cores per processor and up to 2.65 times higher average performance compared to five-year-old systems.

Intel CEO Pat Gelsinger said in an online briefing that over the past year companies have been forced to undertake a warp-speed cloudification of infrastructure to serve remote workforces, and he said the new processors have flexible architecture for advanced security and built-in AI to handle processing from the edge to the cloud.

“Technology is like magic,” he said. “It has the power to improve the lives of every person on the planet. It’s a new day at Intel. We are no longer just the CPU company.”

Above: Intel Xeon (Image Credit: Intel)

He said Intel combines software, silicon, and manufacturing to differentiate itself from rivals. The company will operate internal factories, strategically use foundry services to make Intel chips with the help of outside contract manufacturers, and offer its own foundry services to others.

“With a backdrop of fierce competition, Intel is leading with its strengths with its 3rd Gen Xeon processors,” said Patrick Moorhead, an analyst at Moor Insights & Strategy, in a message to VentureBeat. “The company is offering a platform approach to provide its partners solutions incorporating CPUs, storage, memory, FPGAs, and networking ASICs. This is in addition to its ability to leverage resources for co-marketing and co-development. I also believe the company is differentiated with its on-chip ML inference and cryptographic capabilities versus its closest competitors.”

The latest hardware and software optimizations deliver 74% faster AI performance compared with the prior generation, Intel said, along with up to 1.5 times higher performance across a broad mix of 20 popular AI workloads versus the AMD Epyc 7763 and up to 1.3 times higher performance on the same mix versus the Nvidia A100 GPU.

Above: Intel CEO Pat Gelsinger touts the latest Intel Xeon Scalable processors.

Shenoy said Intel’s security-focused SGX protects sensitive code and data with the smallest potential attack surface within the system. It is now available on two-socket Xeon Scalable processors with enclaves that can isolate and process up to a terabyte of code and data to support the demands of mainstream workloads.

And Shenoy said Intel Crypto Acceleration delivers performance across a variety of important cryptographic algorithms. Businesses that run encryption-intensive workloads, such as online retailers who process millions of customer transactions per day, can leverage this protection without impacting user response times or overall system performance.

Intel said that more than 800 of the world’s cloud service providers run on Intel Xeon Scalable processors, and all of the largest cloud service providers are planning to offer cloud services in 2021 powered by the newest chips. HP Enterprise said it has launched new computers across eight different models with the new Xeons, and it uses AMD’s latest Epyc processors as well.


Government audit of AI with ties to white supremacy finds no AI

April 6, 2021   Big Data

In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero’s analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee.

Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general’s office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.

“Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time,” reads a letter Utah State Auditor John Dougall released last week.

The incident, which VentureBeat previously referred to as part of a “fight for the soul of machine learning,” demonstrates why government officials must evaluate claims made by companies vying for contracts and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies’ capabilities or turn out to be charlatans or white supremacists — constituting a public nuisance or worse. The audit result also suggests a lack of scrutiny can undermine public trust in AI and the governments that deploy them.

Dougall carried out the audit with help from the Commission on Protecting Privacy and Preventing Discrimination, a group his office formed weeks after news of the company’s white supremacist associations and Utah state contract. Banjo had previously claimed that its Live Time technology could detect active shooter incidents, child abduction cases, and traffic accidents from video footage or social media activity. In the wake of the controversy, Banjo appointed a new CEO and rebranded under the name safeXai.

“The touted example of the system assisting in ‘solving’ a simulated child abduction was not validated by the AGO and was simply accepted based on Banjo’s representation. In other words, it would appear that the result could have been that of a skilled operator as Live Time lacked the advertised AI technology,” Dougall states in a seven-page letter sharing audit results.

According to Vice, which previously reported that Banjo used a secret company and fake apps to scrape data from social media, Banjo and Patton had gained support from politicians like U.S. Senator Mike Lee (R-UT) and Utah State Attorney General Sean Reyes. In a letter accompanying the audit, Reyes commended the results of the investigation and said the finding of no discrimination was consistent with the conclusion the state attorney general’s office reached because there simply wasn’t any AI to evaluate.

“The subsequent negative information that came out about Mr. Patton was contained in records that were sealed and/or would not have been available in a robust criminal background check,” Reyes said in a letter accompanying the audit findings. “Based on our first-hand experience and close observation, we are convinced the horrible mistakes of the founder’s youth never carried over in any malevolent way to Banjo, his other initiatives, attitudes, or character.”

Alongside those conclusions are a series of recommendations for Utah state agencies and employees involved in awarding such contracts. Recommendations for anyone considering AI contracts include questions they should be asking third-party vendors and the need to conduct an in-depth review of vendors’ claims and the algorithms themselves.

“The government entity must have a plan to oversee the vendor and vendor’s solution to ensure the protection of privacy and the prevention of discrimination, especially as new features/capabilities are included,” reads one of the listed recommendations. Among other recommendations are the creation of a vulnerability reporting process and evaluation procedures, but no specifics were provided.

While some cities have put surveillance technology review processes in place, local and state adoption of private vendors’ surveillance technology is currently happening in a lot of places with little scrutiny. This lack of oversight could also become an issue for the federal government. The Government by Algorithm report Stanford University and New York University jointly published last year found that roughly half of algorithms used by federal government agencies come from third-party vendors.

The federal government is currently funding an initiative to create tech for public safety, like the kind Banjo claimed to have developed. The National Institute of Standards and Technology (NIST) routinely assesses the quality of facial recognition systems and has helped assess the role the federal government should play in creating industry standards. Last year, it introduced ASAPS, a competition in which the government is encouraging AI startups and researchers to create systems that can tell if an injured person needs an ambulance, whether the sight of smoke and flames requires a firefighter response, and whether police should be alerted in an altercation. These determinations would be based on a dataset incorporating data ranging from social media posts to 911 calls and camera footage. Such technology could save lives, but it could also lead to higher rates of contact with police, which can also cost lives. It could even fuel repressive surveillance states like the kind used in Xinjiang to identify and control Muslim minority groups like the Uyghurs.

Best practices for government procurement officers seeking contracts with third parties selling AI were introduced in 2018 by U.K. government officials, the World Economic Forum (WEF), and companies like Salesforce. Hailed as one of the first such guidelines in the world, the document recommends defining public benefit and risk and encourages open practices as a way to earn public trust.

“Without clear guidance on how to ensure accountability, transparency, and explainability, governments may fail in their responsibility to meet public expectations of both expert and democratic oversight of algorithmic decision-making and may inadvertently create new risks or harms,” the British-led report reads. The U.K. released official procurement guidelines in June 2020, but weeks later a grading algorithm scandal sparked widespread protests.

People concerned about the potential for things to go wrong have called on policymakers to implement additional legal safeguards. Last month, a group of current and former Google employees urged Congress to adopt strengthened whistleblower protections in order to give tech workers a way to speak out when AI poses a public harm. A week before that, the National Security Commission on Artificial Intelligence called on Congress to give federal government employees who work for agencies critical to national security a way to report misuse or inappropriate deployment of AI. That group also recommends tens of billions of dollars in investment to democratize AI and create an accredited university to train AI talent for government agencies.

In other developments at the intersection of algorithms and accountability, the documentary Coded Bias, which calls AI part of the battle for civil rights in the 21st century and examines government use of surveillance technology, started streaming on Netflix today.

Last year, the cities of Amsterdam and Helsinki created public algorithm registries so citizens know which government agency is responsible for deploying an algorithm and have a mechanism for accountability or reform if necessary. And as part of a 2019 symposium about common law in the age of AI, NYU professor of critical law Jason Schultz and AI Now Institute cofounder Kate Crawford called for businesses that work with government agencies to be treated as state actors and considered liable for harm the way government employees and agencies are.


Pivoting to privacy-first: Why this is an adapt-or-die moment

April 4, 2021   Big Data

Operating in the digital advertising ecosystem isn’t for the faint of heart, and that’s never been truer than it is in 2021. The landscape is undergoing unprecedented transitions right now as we make a much-needed pivot to a privacy-first reality, and a lot of business models, practices, and technologies are not going to survive the upheaval. That said, I’m not here to make doomsday predictions. In fact, there are a lot of reasons for genuine optimism right now.

As an industry, we’re heading in the right direction, and when we emerge on the other side of important transitions — including Google’s removal of third-party cookie support in Chrome and Apple’s limitations on IDFA — our industry will be stronger as a whole, as will consumer protections. Let’s take a look at the principles that will define the digital advertising and marketing world of the future, as well as the players that operate within it.

To win, you have to embrace industry change

Google gave the industry more than two years’ warning of its plans to end third-party cookie support on Chrome in 2022. Since then, a number of companies and industry organizations have rolled up their sleeves and started planning for what has long been an inevitability. Those that leaned into the conversation, digesting Google’s position and anticipating how the cookieless future would look, weren’t surprised when Google clarified in March 2021 that it isn’t planning to build or use alternate identifiers within its ecosystem.

The simple fact is that burying your head in the sand or digging your heels in as it relates to changes of this magnitude isn’t an option. Industry consternation, and even legal pushbacks, might delay implementation of certain policy shifts, but that’s all they will do — delay the inevitable. The writing is on the wall: Greater privacy controls are coming to the digital landscape, and the companies that succeed in the future will be the ones that embrace — and even help to accelerate — this transition.

Don’t put all your eggs into one basket

If the panic that followed Google’s cookieless announcement taught us anything, it should have been this: The digital marketing ecosystem can’t allow itself to become overly reliant on any single technology or provider. The future belongs to those that put interoperability at the heart of their approach.

Moving forward from the cookie, there are a few truths we must recognize. One is that there’s no single universal identifier that’s going to step forward to fill the entirety of the void left by third-party cookies. A number of companies are moving forward with plans for their own universal identifiers, and taken together, these identifiers will help to illuminate user identity on a portion of the open web (i.e., non-Google properties). They will be an important part of the ecosystem but by no means a silver bullet to comprehensive cross-channel, personalized advertising.

Another massive component of the post-cookie landscape will be behavioral cohorts, embodied most prominently in Google’s Federated Learning of Cohorts (FLoC) construct. Through FLoC, Google will be creating targetable groups of anonymous users who navigate the internet in similar ways. The good news is that, through FLoC, nearly all of Chrome’s users will become addressable in a fully private manner, whereas only a portion of them were addressable via cookies. As such, marketers and their partners will need to build solutions that accommodate FLoC and other cohort-driven approaches. But at the same time, they also need to look beyond what Google’s putting into the marketplace in order to continue effective cross-channel marketing and personalization across the broader landscape.

Ultimately, companies that can bring their own ground truth of consumer understanding to the table, and then extend their insights through the most important identifiers and behavioral cohort solutions, will prove the most adaptable to future marketplace shifts. The days of putting all your digital eggs into one ecosystem basket are long gone.

An always-on crystal ball

The next 12 months are going to be transformative in our industry. In 24 months, we’ll all be a lot wiser. We will have taken universal IDs and behavioral cohorts for a few laps around the track, and we’ll have a much stronger sense of the role that they can and will play in furthering our consumer connections and understanding. Likewise, the innovators of our industry will have gotten to work on rewriting the internet economy around the new privacy-first reality, and we’ll all be reaping the benefits of their novel ideas and solutions.

Along the way, of course, we will see a lot of companies pivoting. This might be a period of rapid transformation, but there’s no reason to believe a period of stagnation awaits us on the other side. The future, as always, belongs to the nimble — the ones that anticipate and adapt while others resist. Now is the time to be fearless in building the future of our industry in a way that is sustainable for companies and consumers alike.

Tom Craig is CTO at Resonate.


IBM bets homomorphic encryption is ready to deliver stronger data security for early adopters

April 3, 2021   Big Data

The topics of security and data have become almost inseparable as enterprises move more workloads to the cloud. But unlocking new uses for that data, particularly driving richer AI and machine learning, will require next-generation security.

To that end, companies have been developing confidential computing to allow data to remain encrypted while it is being processed. But as a complement to that, a security process known as fully homomorphic encryption is now on the verge of making its way out of the labs after a long gestation period and into the hands of early adopters.

Researchers like homomorphic encryption because it provides a certain type of security that can follow the data throughout its journey across systems. In contrast, confidential computing tends to be more reliant upon special hardware, which can be powerful but also limiting in some respects.

Companies such as Microsoft and Intel have been big proponents of homomorphic encryption. Last December, IBM made a splash when it released its first homomorphic encryption services. That package included education material, support, and prototyping environments for companies that want to experiment.

In a recent media presentation on the future of cryptography, IBM director of strategy and emerging technology Eric Maass explained why the company is so bullish on “fully homomorphic encryption” or FHE.

“FHE is a unique form of encryption and it’s going to allow us to compute upon data that’s still in an encrypted state,” Maass said.

Evolving encryption

First, some context. There are three general categories of encryption. The two classic ones are encryption for data at rest, which protects stored data, and encryption for data in transit, which protects the confidentiality of data as it’s being transmitted over a network.

The third one is the piece that has been missing: The ability to compute on that data while it’s still encrypted.

That last one is key to unlocking all sorts of new use cases. That’s because until now, for someone to process that data, it would have to be unencrypted, which creates a window of vulnerability. That makes companies reluctant to share highly sensitive data involving finance or health.

“With FHE, the ability to actually keep the data encrypted and never exposing it during the computation process, this has been somewhat akin to a missing leg in a three-legged crypto stool,” Maass said. “We’ve had the ability to encrypt the data at rest and in transit, but we have not historically had the ability to keep the data encrypted while it’s being utilized.”

With FHE, the data can remain encrypted when being used by an application. Imagine, for instance, a navigation app on a phone that can give directions without actually being able to see any personal information or location.
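
Full FHE toolkits support arbitrary computation on ciphertexts. As a small taste of the underlying idea of computing on data you cannot read, the sketch below uses the third-party python-paillier (phe) library, which is only additively homomorphic, so it illustrates the concept rather than IBM's FHE offering; the numbers are invented.

```python
# Toy illustration of computing on encrypted data using the python-paillier
# (phe) library. Paillier is only *partially* homomorphic (additions and
# scalar multiplications), so this shows the concept, not IBM's FHE stack.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A client encrypts sensitive readings before sending them to a server.
readings = [135.2, 140.8, 128.5]
encrypted = [public_key.encrypt(r) for r in readings]

# The server computes on ciphertexts without ever seeing the plaintext values:
# here, a sum followed by a scaled adjustment.
encrypted_total = encrypted[0] + encrypted[1] + encrypted[2]
encrypted_adjusted = encrypted_total * 0.5 + 10  # scalar operations are allowed

# Only the key holder can decrypt the result.
print(private_key.decrypt(encrypted_adjusted))  # 0.5 * sum(readings) + 10
```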

Companies are potentially interested in FHE because they could then apply AI to data such as finance and health while being able to promise users that the company has no way to actually view or access the underlying data.

While the concept of homomorphic encryption has been of interest for decades, the problem is that FHE has taken a huge amount of compute power, so much so that it has been too expensive to be practicable.

But in recent years, researchers have made big advances.

For instance, Maass noted that in 2011, it took 30 minutes to process a single bit using FHE. By 2015, researchers could compare two entire human genomes using FHE in less than an hour.

“IBM has been working on FHE for more than a decade and we’re finally reaching an apex where we believe this is ready for clients to begin adopting in a more widespread manner,” Maass said. “And that becomes the next challenge: widespread adoption. There are currently very few organizations here that have the skills and expertise to use FHE.”

FHE ready for its closeup

During the presentation, AI security group manager Omri Soceanu ran an FHE simulation involving health data being transferred to a hospital. In this scenario, an AI algorithm was being used to analyze DNA for genetic issues that may reveal risks for prior medical conditions.

Typically, that patient data would have to be decrypted first, which could raise both regulatory and privacy issues. But with FHE, it remains encrypted, thus avoiding those issues. In this case, the data is sent encrypted, remains so while being analyzed, and the results are returned in an encrypted state as well.

What’s also important to note is that this system was put in place using just a dozen lines of code, a big reduction from the hundreds of lines of code that until recently have been required. By reducing that complexity, IBM wants to make FHE more accessible to teams that don’t necessarily have cryptography expertise.

Finally, Soceanu explained that the simulation was completed in 0.069 seconds. Just five years ago, he said, the same simulation took a few hours.

“Working on FHE, we wanted to allow our customers to take advantage of all the benefits of working in the cloud while adhering to different privacy regulations and concerns,” he said. “What only a few years ago was only theoretically possible is becoming a reality. Our goal is to make this transition as seamless as possible, improving performance and allowing data scientists and developers, without any crypto skills, a frictionless move to analytics over encrypted data.”

Next steps

To accelerate that development, IBM Research has released open-source toolkits while IBM Security launched its first commercial FHE service in December.

“This is aimed at helping our clients start to begin to prototype and experiment with fully homomorphic encryption with two primary goals,” Maass said. “First, getting our clients educated on how to build FHE enabled applications, and then giving them the tools and hosting environments in order to run those types of applications.”

Maass said IBM envisions FHE in the near term being attractive to highly regulated industries such as financial services and healthcare.

“They have both the need to unlock the value of that data, but also face extreme pressures to secure and preserve the privacy of the data that they’re computing upon,” he said.

But he expects over time that a wider range of businesses will benefit from FHE. Many sectors want to improve their use of data which is becoming a competitive differentiator. That includes using FHE to help drive new forms of collaboration and monetization. As that happens, IBM hopes these new security models will drive wider enterprise adoption of hybrid cloud platforms.

The company sees a day, for instance, when due diligence for mergers and acquisitions is done online without violating the privacy of shareholders, or when airlines, hotels, and restaurants could use FHE to offer packages and promotions without giving their partners access to details of closely held customer datasets.

Maass said: “FHE will allow us to secure that type of collaboration, extracting the value of the data, while still preserving the privacy of it.”


AI Weekly: Here’s how enterprises say they’re deploying AI responsibly

April 3, 2021   Big Data

Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

But some evidence suggests that organizations are implementing AI less responsibly than they internally believe. A recent Boston Consulting Group survey of 1,000 enterprises found that fewer than half of those that had achieved AI at scale had fully mature, responsible AI implementations.

The lagging adoption of responsible AI belies the value that these practices can bring to bear. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.

To get a sense of the extent to which brands are thinking about — and practicing — the tenets of responsible AI, VentureBeat surveyed executives at companies that claim to be using AI in a tangible capacity. Their responses reveal that a single definition of “responsible AI” remains elusive. At the same time, they show an awareness of the consequences of opting not to deploy AI thoughtfully.

Companies in enterprise automation

ServiceNow was the only company VentureBeat surveyed to admit that there’s no clear definition of what constitutes responsible AI usage. “Every company really needs to be thinking about how to implement AI and machine learning responsibly,” ServiceNow chief innovation officer Dave Wright told VentureBeat. “[But] every company has to define it for themselves, which unfortunately means there’s a lot of potential for harm to occur.”

According to Wright, ServiceNow’s responsible AI approach encompasses the three pillars of diversity, transparency, and privacy. When building an AI product, the company brings in a variety of perspectives and has them agree on what counts as fair, ethical, and responsible before development begins. ServiceNow also ensures that its algorithms remain explainable in the sense that it’s clear why they arrive at their predictions. Lastly, the company says it limits and obscures the amount of personally identifiable information it collects to train its algorithms. Toward this end, ServiceNow is investigating “synthetic AI” that could allow developers to train algorithms without handling real data and the sensitive information it contains.

“At the end of the day, responsible AI usage is something that only happens when we pay close attention to how AI is used at all levels of our organization. It has to be an executive-level priority,” Wright said.

Automation Anywhere says it established AI and bot ethical principles to provide guidelines to its employees, customers, and partners. They include monitoring the results of any process automated using AI or machine learning so as to prevent them from producing outputs that might reflect racial, sexist, or other biases.

“New technologies are a two-edged sword. While they can free humans to realize their potential in entirely new ways, sometimes these technologies can also, unfortunately, entrap humans in bad behavior and otherwise lead to negative outcomes,” Automation Anywhere CTO Prince Kohli told VentureBeat via email. “[W]e have made the responsible use of AI and machine learning one of our top priorities since our founding, and have implemented a variety of initiatives to achieve this.”

Beyond the principles, Automation Anywhere created an AI committee charged with challenging employees to consider ethics in their internal and external actions. For example, engineers must seek to address the threat of job loss raised by AI and machine learning technologies and the concerns of customers from an “all-inclusive” range of different minority groups. The committee also reevaluates Automation Anywhere’s principles on a regular basis so that they evolve with emerging AI technologies.

Splunk SVP and CTO Tim Tully, who anticipates the industry will see a renewed focus on transparent AI practices over the next two years, says that Splunk’s approach to putting “responsible AI” into practice is fourfold. First, the company makes sure that the algorithms it’s developing and operating are in alignment with governance policies. Then, Splunk prioritizes talent to work with its AI and machine learning algorithms to “[drive] continual improvement.” Splunk also takes steps to bake security into its R&D processes while keeping “honesty, transparency, and fairness” top of mind throughout the building lifecycle.

“In the next few years, we’ll see newfound industry focus on transparent AI practices and principles — from more standardized ethical frameworks, to additional ethics training mandates, to more proactively considering the societal implications of our algorithms — as AI and machine learning algorithms increasingly weave themselves into our daily lives,” Tully said. “AI and machine learning was a hot topic before 2020 disrupted everything, and over the course of the pandemic, adoption has only increased.”

Companies in hiring and recruitment

LinkedIn says that it doesn’t look at bias in algorithms in isolation but rather identifies which biases cause harm to users and works to eliminate them. Two years ago, the company launched an initiative called Project Every Member to take a more rigorous approach to reducing and eliminating unintended consequences in the services it builds. By using inequality A/B testing throughout the product design process, LinkedIn says it aims to build trustworthy, robust AI systems and datasets with integrity that comply with laws and “benefit society.”
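LinkedIn hasn’t published the mechanics of its inequality A/B testing here, but the underlying idea — comparing an outcome metric across member groups and flagging statistically significant gaps — can be sketched roughly as follows. The metric, counts, and significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def group_gap_test(successes_a, total_a, successes_b, total_b):
    """Two-proportion z-test on an outcome rate (e.g. job-alert click-through)
    between two member groups; a significant gap flags possible harm."""
    p_a, p_b = successes_a / total_a, successes_b / total_b
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = np.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_a - p_b, p_value

# Hypothetical usage during an experiment review:
# gap, p = group_gap_test(480, 10_000, 390, 10_000)
# if p < 0.05:
#     print(f"Outcome gap of {gap:.2%} between groups warrants review")
```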

For example, LinkedIn says it uses differential privacy in its LinkedIn Salary product to allow members to gain salary insights from others without exposing any individual member’s information. And the company claims its Smart Replies product, which taps machine learning to suggest responses to conversations, was built to prioritize member privacy and avoid gender-specific replies.
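The article doesn’t describe the exact mechanism LinkedIn Salary uses, but a common building block of differential privacy is the Laplace mechanism: clip each contribution so one person’s effect is bounded, compute the aggregate, and add noise scaled to the aggregate’s sensitivity. The sketch below assumes a simple noisy mean; the bounds and epsilon are placeholder values.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Return a differentially private estimate of the mean of `values`."""
    clipped = np.clip(values, lower, upper)           # bound each contribution
    true_mean = clipped.mean()
    sensitivity = (upper - lower) / len(clipped)      # max change from one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Hypothetical usage: release a salary insight only as a noisy aggregate.
# salaries = np.array([95_000, 120_000, 87_500])
# print(dp_mean(salaries, lower=30_000, upper=500_000, epsilon=1.0))
```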

“Responsible AI is very hard to do without company-wide alignment. ‘Members first’ is a core company value, and it is a guiding principle in our design process,” a spokesperson told VentureBeat via email. “We can positively influence the career decisions of more than 744 million people around the world.”

Mailchimp, which uses AI to, among other things, provide personalized product recommendations for shoppers, tells VentureBeat that it trains each of its data scientists in the fields that they’re modeling. (For example, data scientists at the company working on products related to marketing receive training in marketing.) However, Mailchimp also admits that its systems are trained on data gathered by human-powered processes that can lead to a number of quality-related problems, including errors in the data, data drift, and bias.
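Data drift of the kind Mailchimp describes is typically caught by comparing the distribution a model was trained on with the distribution it sees in production. A minimal sketch using a two-sample Kolmogorov-Smirnov test is shown below; the feature names and threshold are assumptions, not Mailchimp’s actual tooling.

```python
from scipy.stats import ks_2samp

def drift_check(train_feature, live_feature, alpha: float = 0.05):
    """Flag distribution drift between the training sample of a feature and
    the values observed in production, using a two-sample KS test."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    drifted = p_value < alpha
    return drifted, stat, p_value

# Hypothetical usage in a scheduled monitoring job:
# drifted, stat, p = drift_check(train_df["order_value"], live_df["order_value"])
# if drifted:
#     print(f"Feature drift detected (KS={stat:.3f}); consider retraining")
```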

“Using AI responsibly takes a lot of work. It takes planning and effort to gather enough data, to validate that data, and to train your data scientists,” Mailchimp chief data science officer David Dewey told VentureBeat. “And it takes diligence and foresight to understand the cost of failure and adapt accordingly.”

For its part, Zendesk says it places an emphasis on a diversity of perspectives where its AI adoption is concerned. The company claims that, broadly, its data scientists examine processes to ensure that its software is beneficial and unbiased, follows strong ethical principles, and secures the data that makes its AI work. “As we continue to leverage AI and machine learning for efficiency and productivity, Zendesk remains committed to continuously examining our processes to ensure transparency, accountability and ethical alignment in our use of these exciting and game-changing technologies, particularly in the world of customer experience,” Zendesk president of products Adrian McDermott told VentureBeat.

Companies in marketing and management

Adobe EVP, general counsel, and corporate secretary Dana Rao points to the company’s ethics principles as an example of its commitment to responsible AI. Last year, Adobe launched an AI ethics committee and review board to help guide its product development teams and review new AI-powered features and products prior to release. At the product development stage, Adobe says its engineers use an AI impact assessment tool created by the committee to capture the potential ethical impact of any AI feature to avoid perpetuating biases.

“The continued advancement of AI puts greater accountability on us to address bias, test for potential misuse, and inform our community about how AI is used,” Rao said. “As the world evolves, it is no longer sufficient to deliver the world’s best technology for creating digital experiences; we want our technology to be used for the good of our customers and society.”

Among the first AI-powered features the committee reviewed was Neural Filters in Adobe Photoshop, which lets users add non-destructive, generative filters to create things that weren’t previously in images (e.g., facial expressions and hairstyles). In accordance with its principles, Adobe added an option within Photoshop to report whether the Neural Filters output a biased result. Adobe monitors these reports to identify undesirable outcomes, and its product teams address them by updating the AI model in the cloud.
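Adobe hasn’t described the plumbing behind its in-product reporting option, but a feedback loop of this sort usually reduces to counting reports per feature and flagging any feature whose report rate crosses a review threshold. The class and threshold below are hypothetical, intended only to illustrate the loop Rao describes.

```python
from collections import Counter

class BiasReportMonitor:
    """Aggregate in-product 'report a biased result' clicks per filter and
    surface filters whose report rate crosses a review threshold."""

    def __init__(self, threshold: float = 0.01):
        self.threshold = threshold   # fraction of uses that triggers review
        self.reports = Counter()
        self.usages = Counter()

    def record_usage(self, filter_name: str, reported: bool = False) -> None:
        self.usages[filter_name] += 1
        if reported:
            self.reports[filter_name] += 1

    def filters_needing_review(self) -> list[str]:
        return [
            name for name, used in self.usages.items()
            if used and self.reports[name] / used >= self.threshold
        ]

# Hypothetical usage:
# monitor = BiasReportMonitor(threshold=0.01)
# monitor.record_usage("smart_portrait", reported=True)
# print(monitor.filters_needing_review())
```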

Adobe says that while evaluating Neural Filters, one review board member flagged that the AI didn’t properly model the hairstyle of a particular ethnic group. Based on this feedback, the company’s engineering teams updated the AI dataset before Neural Filters was released.

“This constant feedback loop with our user community helps further mitigate bias and uphold our values as a company — something that the review board helped implement,” Rao said. “Today, we continue to scale this review process for all of the new AI-powered features being generated across our products.”

As for Hootsuite CTO Ryan Donovan, he believes that responsible AI ultimately begins and ends with transparency. Brands should demonstrate where and how they’re using AI — an ideal that Hootsuite strives to achieve, he says.

“As a consumer, for instance, I fully appreciate the implementation of bots to respond to high level customer service inquiries. However, I hate when brands or organizations masquerade those bots off as human, either through a lack of transparent labelling or assigning them human monikers,” Donovan told VentureBeat via email. “At Hootsuite, where we do use AI within our product, we have consciously endeavored to label it distinctly — suggested times to post, suggested replies, and schedule for me being the most obvious.”

ADP SVP of product development Jack Berkowitz says that responsible AI at ADP starts with the ethical use of data. In this context, “ethical use of data” means looking carefully at what the goal of an AI system is and the right way to achieve it.

“When AI is baked into technology, it comes with inherently heightened concerns, because it means an absence of direct human involvement in producing results,” Berkowitz said. “But a computer only considers the information you give it and only the questions you ask, and that’s why we believe human oversight is key.”

ADP retains an AI and data ethics board of experts in tech, privacy, law, and auditing that works with teams across the company to evaluate the way they use data. It also provides guidance to teams developing new uses and follows up to ensure the outputs are desirable. The board reviews ideas and evaluates potential uses to determine whether data is used fairly and in compliance with legal requirements and ADP’s own standards. If an idea falls short of meeting transparency, fairness, accuracy, privacy, and accountability requirements, it doesn’t move forward within the company, Berkowitz says.

Marketing platform HubSpot similarly says its AI projects undergo a peer review for ethical considerations and bias. According to senior machine learning engineer Sadhbh Stapleton Doyle, the company uses proxy data and external datasets to “stress test” its models for fairness. In addition to model cards, HubSpot maintains a knowledge base of ways to detect and mitigate bias.
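HubSpot doesn’t specify which fairness metrics its stress tests use, but one common check is the disparate impact ratio: the positive-prediction rate for a protected group divided by that of a reference group, with values under roughly 0.8 often treated as a flag for review. The sketch below assumes binary predictions and illustrative group labels; it is not HubSpot’s method.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray,
                     protected, reference) -> float:
    """Ratio of positive-prediction rates between a protected group and a
    reference group, computed on a held-out or proxy 'stress test' set."""
    rate_protected = y_pred[group == protected].mean()
    rate_reference = y_pred[group == reference].mean()
    return rate_protected / rate_reference

# Hypothetical usage with a proxy stress-test dataset:
# preds = model.predict(proxy_test_features)
# ratio = disparate_impact(preds, proxy_groups, protected="B", reference="A")
# assert ratio >= 0.8, f"Fairness stress test failed: disparate impact {ratio:.2f}"
```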

The road ahead

A number of companies declined to tell VentureBeat how they’re deploying AI responsibly in their organizations, highlighting one of the major challenges in the field: transparency. A spokesperson for UiPath said that the robotic process automation startup “wouldn’t be able to weigh in” on responsible AI. Zoom, which recently faced allegations that its face-detection algorithm erased Black faces when applying virtual backgrounds, chose not to comment. And Intuit told VentureBeat that it had nothing to share on the topic.

Of course, transparency isn’t the end-all-be-all when it comes to responsible AI. For example, Google, which loudly trumpets its responsible AI practices, was recently the subject of a boycott by AI researchers over the company’s firing of Timnit Gebru and Margaret Mitchell, coleaders of a team working to make AI systems more ethical. Facebook also purports to be implementing AI responsibly, but to date, the company has failed to present evidence that its algorithms don’t encourage polarization on its platforms.

Returning to the Boston Consulting Group survey, Steven Mills, the firm’s chief ethics officer and a coauthor of the report, noted that the depth and breadth of most responsible AI efforts fall short of what’s needed to truly ensure responsible AI. Organizations’ responsible AI programs typically neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation because they’re difficult to address.

Greater oversight is a potential remedy. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

As Salesforce principal architect of ethical AI practice Kathy Baxter told VentureBeat in a recent interview, AI can result in harmful, unintended consequences if algorithms aren’t trained and designed inclusively. Technology alone can’t solve systemic health and social inequities, she asserts. In order to be effective, technology must be built and used responsibly — because no matter how good a tool is, people won’t use it unless they trust it.

“Ultimately, I believe the benefits of AI should be accessible to everyone, but it is not enough to deliver only the technological capabilities of AI,” Baxter said. “Responsible AI is technology developed inclusively, with a consideration towards specific design principles to mitigate, as much as possible, unforeseen consequences of deployment — and it’s our responsibility to ensure that AI is safe and inclusive. At the end of the day, technology alone cannot solve systemic health and social inequities.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat
