• Home
  • About Us
  • Contact Us
  • Privacy Policy
  • Special Offers
Business Intelligence Info
  • Business Intelligence
    • BI News and Info
    • Big Data
    • Mobile and Cloud
    • Self-Service BI
  • CRM
    • CRM News and Info
    • InfusionSoft
    • Microsoft Dynamics CRM
    • NetSuite
    • OnContact
    • Salesforce
    • Workbooks
  • Data Mining
    • Pentaho
    • Sisense
    • Tableau
    • TIBCO Spotfire
  • Data Warehousing
    • DWH News and Info
    • IBM DB2
    • Microsoft SQL Server
    • Oracle
    • Teradata
  • Predictive Analytics
    • FICO
    • KNIME
    • Mathematica
    • Matlab
    • Minitab
    • RapidMiner
    • Revolution
    • SAP
    • SAS/SPSS
  • Humor

Tag Archives: Study

AI, ML Not Yet a Plug-and-Play Proposition for Marketers: Study

March 25, 2021   CRM News and Info

By Jack M. Germain

Mar 24, 2021 5:00 AM PT

Personalization and automation, two of the hottest buzzwords in the lexicon of CRM practitioners, are all the rage for marketers these days. But only 14 percent of organizations are using artificial intelligence and/or machine language to automate their marketing campaigns.

A global survey Rackspace Technology conducted in January reveals that the majority of organizations worldwide lack the internal resources to support these critical high-powered CRM initiatives.

Rackspace, a multicloud technology solutions company, sought the status of AI/ML use in marketing in its survey “Are Organizations Succeeding at AI and ML?” conducted in the Americas, APJ, and EMEA regions of the world.

Responses indicated that while many organizations eagerly want to incorporate AI and ML tactics into operations, they typically lack the expertise and existing infrastructure needed to implement mature and successful AI/ML programs.

This study shines a light on the struggle to balance the potential benefits of AI and ML against the ongoing challenges of getting these initiatives off the ground.

While some early adopters are already seeing the benefits of these technologies, others are still trying to navigate common pain points such as lack of internal knowledge, outdated technology stacks, poor data quality, or the inability to measure ROI, according to researchers.

The most successful uses for AI/ML almost always deal with intelligence that helps companies understand their customers and their behavior, noted Jeff DeVerter, CTO of Rackspace Technology. The results from these projects are the easiest to directly tie back to revenue — either gaining new customers or better serving and retaining existing ones.

“The highest benefits our participants saw from AI/ML adoption is increased productivity (33 percent) and improved customer satisfaction (32 percent), he told CRM Buyer.

Some Survey Surprises

Three most significant findings about AI/ML adoption surfaced from the survey results. Researchers were actually surprised by a few things, noted DeVerter.

One is that more organizations seem to have an established plan for AI/ML and are actively developing AI/ML projects. The average spend per year is US$ 1.06 million. That, by comparison to total IT budgets, is small.

“It is not insignificant,” said DeVerter.

Another interesting finding is the percentage of companies who recognize that while they may not have established AI/ML practices today, they definitely view the tech as part of their future.

Only 17 percent of the participants stated they were approaching or had factory of model production. The survey found that 51 percent of the participants are exploring what AI/ML is and how to put it into production.

Meanwhile, 31 percent of the participating organizations are moving from pilot to an AI/ML solution in production.

Smart Goals Fall Short

Once the slow adoption rate is resolved, a more glowing vision for AI and ML awaits organizations. The success factor will not be realized until after the data has been curated.

“The best way to gauge success is how deeply business employees (not IT) start asking questions of the findings and then request more results,” said DeVerter.

This shows their trust in the results and adoption/acceptance of the technology as a real tool to help them do their job better, he added.

The survey results show less than stellar interest and efforts among many organizations regarding use of AI/ML technologies to enhance marketing efforts.

For instance, 30 percent of organizations are using AI/ML to create personalized customer journeys. Of current plans to use AI/ML, 36 percent of respondents want to understand customers better.

Current plans to use AI/ML target being able to deliver personalized content for customers for 33 percent of the responding organizations. Another 29 percent of respondents want to understand the effectiveness of marketing channels and content.

Failure Causes Common

One of the initial stumbling blocks to moving into adopting AI/ML strategies for organizations is getting out of the exploration phase. Potential adopters are still exploring how to implement mature AI/ML capabilities, the researchers found.

A mere 17 percent of respondents reported mature AI and ML capabilities with a model factory framework in place. In addition, the majority of respondents (82 percent) said they are still exploring how to implement AI or struggling to operationalize AI and ML models.

AI/ML implementation often fails from a lack of internal resources. More than one-third (34 percent) of respondents reported artificial intelligence R&D initiatives that have been tested and abandoned or failed.

The failures underscore the complexities of building and running a productive AI and ML program. The top causes for failure were nearly evenly divided among four categories:

  • Lack of data quality (34 percent)
  • Lack of expertise within the organization (34 percent)
  • Lack of production-ready data (31 percent)
  • Poorly conceived strategy (31 percent).

Untapped Smart Benefits

Successful AI/ML implementation has clear benefits for early adopters, according to the report. As organizations look to the future, IT and operations are the leading areas where companies plan on adding AI and ML capabilities.

The data reveals that organizations see AI and ML potential in a variety of business units. Among them are IT (43 percent), operations (33 percent), customer service (32 percent), and finance (32 percent).

Further, organizations that have successfully implemented AI and ML programs report increased productivity (33 percent) and improved customer satisfaction (32 percent) as the top benefits.

Successfully reaching those benefits requires organizations to carefully define their key performance indicators (KPIs). That is critical to measuring AI/ML return on investment, noted Rackspace.

Along with the difficulty of deploying AI and ML projects, comes the difficulty of measurement. The top key performance indicators used to measure AI/ML success include profit margins (52 percent), revenue growth (51 percent), data analysis (46 percent), and customer satisfaction/net promoter scores (46 percent).

Hopping Over Adoption Hurdles

The hurdles to smooth adoption of AI/ML technology are fairly consistent. Organizations need to do one essential thing to more quickly get beyond all the barriers, according to DeVerter.

That one key thing is to establish a data office to oversee the validity of the data used. AI/ML is absolutely beholden to the source data to which they apply their machine learning models.

“As such, AI/ML projects can become suspect if the source data is not cleaned and validated by a data office,” said DeVerter.

He noted that 34 percent of participants stated their R&D projects failed due to lack of data quality. Also, 31 percent said it was due to lack of production ready data.

“Unfortunately, not all companies have this office or its equivalent whose mission is to validate and curate approved corporate datasets. With the success of early AI/ML project, companies must establish the data office role in tandem to their AI/ML projects,” DeVerter explained.

In-House or Outsource Efforts?

Many organizations are still determining whether they will build internal AI/ML support or outsource it to a trusted partner, according to the survey results. But given the high risk of implementation failure, the majority of organizations (62 percent) are, to some degree, working with an experienced provider to navigate the complexities of AI and ML development.

“In nearly every industry, we are seeing IT decision-makers turn to artificial intelligence and machine learning to improve efficiency and customer satisfaction,” said Tolga Tarhan, chief technology officer at Rackspace Technology.

Before diving headfirst into an AI/ML initiative, organizations should clean their data and data processes, he reiterated. In other words, get the right data into the right systems in a reliable and cost-effective manner, he explained.

To address adoption obstacles, the majority of organizations (62 percent) work with an experienced provider to navigate the complexities of AI and machine learning development. This solution gives organizations access to expertise and technology that can accelerate development and increase the overall success of a project, according to the report.

The Cost of Customer Knowledge

As noted earlier, organizations adopting AI/ML strategies spend an average of $ 1.06 million per year on initiatives. That spend is spread across current and planned projects to grow revenue, drive innovation, increase productivity, and enhance user experience.

The most common ways that businesses reported using AI and machine learning functionality are as a component of data analytics (40 percent), a driver of innovation (38 percent), and through its application to embedded systems (35 percent). These point to the need for businesses to innovate and spur differentiation, and illustrate how AI and ML technologies can be used to drive an innovation engine.

That spend also supports upcoming AI and machine learning initiatives, the report noted. AI and machine learning projects currently in the planning phase lean more toward customer experience enhancements, with four of the top ten ranked areas specifically focused on improving these customer relationships:

  • Offering new services (38 percent)
  • Understanding customers better (36 percent)
  • Delivering personalized content for customers (33 percent)
  • Understanding the effectiveness of content marketing channels and content (29 percent)

Automation to the Rescue

Citing one key element in the marketing battle — constantly changing supply and demand — Mark William Lewis, CTO of Netalico Commerce explained how automation can aid marketers and retailers.

“With the ever-changing buyer environment, retail marketers have to adjust strategies on the fly. The best way to stand out against competitors is to use intent data to inform the message, medium, and timing of marketing touchpoints,” he told CRM Buyer.

For example, after a consumer views a backpack on a retailer’s website, intent data can automate an email reminder with comparable backpack options. If there is no response to the email, an automated piece of direct mail can be triggered that highlights the backpacks and a 25 percent off coupon, Lewis explained.

Survey Dynamics

The survey occurred between December 2020 and January 2021. It is based on the responses of 1,870 IT decision-makers across manufacturing, digital native, financial services, retail, government/public sector, and healthcare sectors in the Americas, Europe, Asia and the Middle East.

A copy of the full report is available here. Before downloading the report, you must fill in a form with your name, email address, and company affiliation. No promotional consideration or transmission of data from Rackspace is received by this publication, or its parent company ECT News Network, when our readers download the report.
end enn AI, ML Not Yet a Plug and Play Proposition for Marketers: Study


Jack%20M.%20Germain AI, ML Not Yet a Plug and Play Proposition for Marketers: Study
Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics. Email Jack.

Let’s block ads! (Why?)

CRM Buyer

Read More

Salesforce Research wields AI to study medicine, economics, and speech

February 21, 2021   Big Data
 Salesforce Research wields AI to study medicine, economics, and speech

Data: Meet ad creative

From TikTok to Instagram, Facebook to YouTube, and more, learn how data is key to ensuring ad creative will actually perform on every platform.

Register Now


In 2015, Salesforce researchers working out of a basement under a Palo Alto West Elm furniture store developed the prototype of what would become Einstein, Salesforce’s AI platform that powers predictions across its products. As of November, Einstein is serving over 80 billion predictions per day for tens of thousands of businesses and millions of users. But while the technology remains core to Salesforce’s business, it’s but one of many areas of research under the purview of Salesforce Research, Salesforce’s AI R&D division.

Salesforce Research, whose mission is to advance AI techniques that pave the path for new products, applications, and research directions, is an outgrowth of Salesforce CEO Mark Benioff’s commitment to AI as a revenue driver. In 2016, when Salesforce first announced Einstein, Benioff characterized AI as “the next platform” on which he predicted companies’ future applications and capabilities will be built. The next year, Salesforce released research suggesting that AI’s impact through customer relationship management software alone will add over $ 1 trillion to gross domestic products around the globe and create 800,000 new jobs.

Today, Salesforce Research’s work spans a number of domains including computer vision, deep learning, speech, natural language processing, and reinforcement learning. Far from exclusively commercial in nature, the division’s projects run the gamut from drones that use AI to spot great white sharks to a system that’s able to identify signs of breast cancer from images of tissue. Work continues even as the pandemic forces Salesforce’s scientists out of the office for the foreseeable future. Just this past year, Salesforce Research released an environment — the AI Economist —  for understanding how AI could improve economic design, a tool for testing natural language model robustness, and a framework spelling out the uses, risks, and biases of AI models.

According to Einstein GM Marco Casalaina, the bulk of Salesforce Research’s work falls into one of two categories: pure research or applied research. Pure research includes things like the AI Economist, which isn’t immediately relevant to tasks that Salesforce or its customers do today. Applied research, on the other hand, has a clear business motivation and use case.

One particularly active subfield of applied research at Salesforce Research is speech. Last spring, as customer service representatives were increasingly ordered to work from home in Manila, the U.S., and elsewhere, some companies began to turn to AI to bridge the resulting gaps in service. Casalaina says that this spurred work on the call center side of Salesforce’s business.

“We’re doing a lot of work for our customers … with regard to real-time voice cues. We offer this whole coaching process for customer service representatives that takes place after the call,” Casalaina told VentureBeat in a recent interview. “The technology identifies moments that were good or bad but that were coachable in some fashion. We’re also working on a number of capabilities like auto escalations and wrap-up, as well as using the contents of calls to prefill fields for you and make your life a little bit easier.”

Medicine

AI with health care applications is another research pillar at Salesforce, Richard Socher, former chief scientist at Salesforce, told VentureBeat during a phone interview. Socher, who came to Salesforce following the acquisition of MetaMind in 2016, left Salesforce Research in July 2020 to found search engine startup You.com but remains a scientist emeritus at Salesforce.

“Medical computer vision in particular can be highly impactful,” Socher said. “What’s interesting is that the human visual system hasn’t necessarily developed to be very good at reading x-rays, CT scans, MRI scans in three dimensions, or more importantly images of cells that might indicate a cancer … The challenge is predicting diagnoses and treatment.”

To develop, train, and benchmark predictive health care models, Salesforce Research draws from a proprietary database comprising tens of terabytes of data collected from clinics, hospitals, and other points of care in the U.S. It’s anonymized and deidentified, and Andre Esteva, head of medical AI at Salesforce Research, says that Salesforce is committed to adopting privacy-preserving techniques like federated learning that ensure patients a level of anonymity.

“The next frontier is around precision medicine and personalizing therapies,” Esteva told VentureBeat. “It’s not just what’s present in an image or what’s present on a patient, but what the patient’s future look like, especially if we decide to put them on a therapy. We use AI to take all of the patient’s data — their medical images records, their lifestyle. Decisions are made, and the algorithm predicts if they’ll live or die, whether they’ll live in a healthy state or unhealthy, and so forth.”

Toward this end, in December, Salesforce Research open-sourced ReceptorNet, a machine learning system researchers at the division developed in partnership with clinicians at the University of Southern California’s Lawrence J. Ellison Institute for Transformative Medicine of USC. The system, which can determine a critical biomarker for oncologists when deciding on the appropriate treatment for breast cancer patients, achieved 92% accuracy in a study published in the journal Nature Communications.

Typically, breast cancer cells extracted during a biopsy or surgery are tested to see if they contain proteins that act as estrogen or progesterone receptors. When the hormones estrogen and progesterone attach to these receptors, they fuel the cancer growth. But these types of biopsy images are less widely available and require a pathologist to review.

In contrast, ReceptorNet determines hormone receptor status via hematoxylin and eosin (H&E) staining, which takes into account the shape, size, and structure of cells. Salesforce researchers trained the system on several thousand H&E image slides from cancer patients in “dozens” of hospitals around the world.

Research has shown that much of the data used to train algorithms for diagnosing diseases may perpetuate inequalities. Recently, a team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers identified most of the U.S. data for studies involving medical uses of AI as coming from California, New York, and Massachusetts.

But Salesforce claims that when it analyzed ReceptorNet for signs of age-, race-, and geography-related bias, it found that there was statically no difference in its performance. The company also says that the algorithm delivered accurate predictions regardless of differences in the preparation of tissue samples.

“On breast cancer classification, we were able to classify some images without a costly and time-intensive staining process,” Socher said. “Long story short, this is one of the areas where AI can solve a problem such that it could be helpful in end applications.”

In a related project detailed in a paper published last March, scientists at Salesforce Research developed an AI system called ProGen that can generate proteins in a “controllable fashion.” Given the desired properties of a protein, like a molecular function or a cellular component, ProGen creates proteins by treating the amino acids making up the protein like words in a paragraph.

The Salesforce Research team behind ProGen trained the model on a dataset of over 280 million protein sequences and associated metadata — the largest publicly available. The model took each training sample and formulated a guessing game per amino acid. For over a million rounds of training, ProGen attempted to predict the next amino acids from the previous amino acids, and over time, the model learned to generate proteins with sequences it hadn’t seen before.

In the future, Salesforce researchers intend to refine ProGen’s ability to synthesize novel proteins, whether undiscovered or nonexistent, by honing in on specific protein properties.

Ethics

Salesforce Research’s ethical AI work straddles applied and pure research. There’s been increased interest in it from customers, according to Casalaina, who says he’s had a number of conversations with clients about the ethics of AI over the past six months.

In January, Salesforce researchers released Robustness Gym, which aims to unify a patchwork of libraries to bolster natural language model testing strategies. Robustness Gym provides guidance on how certain variables can help prioritize what evaluations to run. Specifically, it describes the influence of a task via a structure and known prior evaluations, as well as needs such as testing generalization, fairness, or security; and constraints like expertise, compute access, and human resources.

In the study of natural language, robustness testing tends to be the exception rather than the norm. One report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.

In a case study, Salesforce Research had a sentiment modeling team at a “major technology company” measure the bias of their model using Robustness Gym. After testing the system, the modeling team found a performance degradation of up to 18%.

In a more recent study published in July, Salesforce researchers proposed a new way to mitigate gender bias in word embeddings, the word representations used to train AI models to summarize, translate languages, and perform other prediction tasks. Word embeddings capture semantic and syntactic meanings of words and relationships with other words, which is why they’re commonly employed in natural language processing. But they have a tendency to inherit gender bias.

Salesforce’s proposed solution, Double-Hard Debias, transforms the embedding space into an ostensibly genderless one. It transforms word embeddings into a “subspace” that can be used to find the dimension that encodes frequency information distracting from the encoded genders. Then, it “projects away” the gender component along this dimension to obtain revised embeddings before executing another debiasing action.

To evaluate Double-Hard Debias, the researchers tested it against the WinoBias data set, which consists of pro-gender-stereotype and anti-gender-stereotype sentences. Double-Hard Debias reduced the bias score of embeddings obtained using the GloVe algorithm from 15 (on two types of sentences) to 7.7 while preserving the semantic information.

Future work

Looking ahead, as the pandemic makes clear the benefits of automation, Casalaina expects that this will remain a core area of focus for Salesforce Research. He expects that chatbots built to answer customer questions will become more capable than they currently are, for example, as well as robotic process automation technologies that handle repetitive backroom tasks.

There are numbers to back up Casalaina’s assertions. In November, Salesforce reported a 300% increase in Einstein Bot sessions since February of this year, a 680% year-over-year increase compared to 2019. That’s in addition to a 700% increase in predictions for agent assistance and service automation and a 300% increase in daily predictions for Einstein for Commerce in Q3 2020. As for Einstein for Marketing Cloud and Einstein for Sales, email and mobile personalization predictions were up 67% in Q3, and there was a 32% increase in converting prospects to buyers using Einstein Lead Scoring.

“The goal is here — and at Salesforce Research broadly — is to remove the groundwork for people. A lot of focus is put on the model, the goodness of the model, and all that stuff,” Casalaina said. “But that’s only 20% of the equation. The 80% part of it is how humans use it.”

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more

Become a member

Let’s block ads! (Why?)

Big Data – VentureBeat

Read More

3 Reasons to Read our Latest Case Study on canfitpro

February 10, 2021   CRM News and Info
crmnav 3 Reasons to Read our Latest Case Study on canfitpro

Canfitpro is the largest provider of fitness education in the Canadian industry. Founded in 1993, the company strives to deliver accessible, quality education to over 24,000 members including other member services such as tradeshows, conferences, and certifications. To help maintain their member-driven education company and continue to deliver as a voice for fitness professionals canfitpro turned to CRM Dynamics back in February of this year to implement Dynamics 365.

Unfortunately, the year has not gone as per plan, and like all businesses canfitpro had to quickly adapt to the new normal that came in March of 2020. This case study examines the company’s personalized path to Digital Transformation and the impact it had on their business processes and overall survival.  In canfitpro’s IT Manager, Michael Best own words, “We would not have survived the COVID 19 pandemic without CRM Dynamics and our Microsoft Dynamics 365 Portal.”

Here are three reasons we believe the Case study will be of value:

  1. You have a legacy CRM 

With low useability, no centralized data, a lack of extensibility, and very costly – canfitpro’s custom legacy system could no longer keep pace with the organization’s needs.

  1. You are considering a switch or planning to enhance your CRM

Learn why Dynamics 365 Sales and Customer Service was the right choice for canfitpro including the associated organizational benefits that came along with this implementation: Both to the business and to the roles of the Marketing, Sales, and IT teams.

  1. You want to know the KPIs, returns, and real-world value of a Dynamics CRM

The Dynamics solution was able to accelerate the time to market of new products, servicing of customers, and a multitude of marketing benefits to help communicate with and improve their reach to clients.

If you are also just curious about how canfitpro pulled through 2020. Click here to download the case study and feel free to reach out to our experts if you have any questions.

Let’s block ads! (Why?)

CRM Software Blog | Dynamics 365

Read More

University of Michigan study advocates ban of facial recognition in schools

August 11, 2020   Big Data
 University of Michigan study advocates ban of facial recognition in schools

A newly published study by University of Michigan researchers shows facial recognition technology in schools presents multiple problems and has limited efficacy. Led by Shobita Parthasarathy, director of the university’s Science, Technology, and Public Policy (STPP) program, the research says the technology isn’t suited for security purposes and can actively promote racial discrimination, normalize surveillance, and erode privacy while institutionalizing inaccuracy and marginalizing non-conforming students.

The study follows the New York legislature’s passage of a moratorium on the use of facial recognition and other forms of biometric identification in schools until 2022. The bill, which came in response to the launch of facial recognition by the Lockport City School District, was among the first in the nation to explicitly regulate or ban use of the technology in schools. That development came after companies including Amazon, IBM, and Microsoft halted or ended the sale of facial recognition products in response to the first wave of Black Lives Matter protests in the U.S.

The University of Michigan study — a part of STPP’s Technology Assessment Project — employs an analogical case comparison method to look at previous uses of security technology (CCTV cameras, metal detectors, and biometric technologies) and anticipate the implications of facial recognition. While its conclusions aren’t novel, it takes a strong stance against commercial products it asserts could harm students and educators far more than it helps them.

For instance, the coauthors claim that facial recognition would disproportionately target and discriminate against people of color, particularly Black and Latinx communities. At the same time, they say that facial recognition would create new rules for dress and appearance and punish students who don’t fit into narrow standards of acceptability, causing problems whenever a school relies on it to automate activities like taking attendance or purchasing lunch.

Indeed, countless studies have shown facial recognition to be susceptible to bias. A paper last fall by University of Colorado, Boulder researchers showed that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Separate benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest that facial recognition technology exhibits racial and gender bias and facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time.

Facial recognition will take existing racial biases and make them worse, causing more surveillance and humiliation of Black and brown students, the University of Michigan researchers argue. And it will make surveillance a part of everyday life, laying the groundwork for expansion to other uses. Lockport portends this — while the schools’ privacy policy stated that its facial recognition watchlist wouldn’t include students and would only cover non-students deemed a threat, the district superintendent ultimately oversaw which individuals were added to the system and the school board president couldn’t guarantee student photos would never be included for disciplinary reasons.

The University of Michigan study’s coauthors also maintain that facial recognition in schools will create new kinds of student data that will be sold and bought by private corporations. Data collected for one purpose will be used in other ways, such that it will become impossible for students to provide full and informed consent to data collection or control. A legal remedy to this was proposed last week by Sen. Jeff Merkley (D-OR) and Sen. Bernie Sanders (I-VT) in the National Biometric Information Privacy Act, which would make it illegal for businesses to collect, purchase, or trade biometric information obtained from customers without permission. But few protections exist in most U.S. states as of now.

For these reasons, the researchers recommend a nationwide ban on facial recognition in schools. However, they provide policy recommendations for schools that deem the technology “absolutely necessary.” Among other steps, they propose a five-year moratorium on the use of facial recognition technology in schools; convening a national advisory committee to investigate facial recognition and its implications; establishing technology offices to help schools navigate the technical, social, ethical, and racial challenges of facial recognition; and deleting facial recognition data at the end of each academic year or when students graduate or leave the district.

A number of efforts to use facial recognition systems within schools have been met with resistance from parents, students, alumni, community members, and lawmakers alike. At the college level, a media firestorm erupted after a University of Colorado professor was revealed to have secretly photographed thousands of students, employees, and visitors on public sidewalks for a military anti-terrorism project. University of California San Diego researchers admitted to studying footage of students’ facial expressions to predict engagement levels. And last year, the University of California Los Angeles proposed using facial recognition software for security surveillance as part of a larger campus security policy.

Let’s block ads! (Why?)

Big Data – VentureBeat

Read More

NIST study finds that masks defeat most facial recognition algorithms

July 28, 2020   Big Data

VB Transform

Watch every session from the AI event of the year

On-Demand

Watch Now

In a report published today by the National Institutes of Science and Technology (NIST), a physical sciences laboratory and non-regulatory agency of the U.S. Department of Commerce, researchers attempted to evaluate the performance of facial recognition algorithms on faces partially covered by protective masks. They report that the 89 commercial facial recognition algorithms from Panasonic, Canon, Tencent, and others they tested had error rates between 5% and 50% in matching digitally applied masks with photos of the same person without a mask.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” Mei Ngan, a NIST computer scientist and a coauthor of the report, said in a statement. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”

The study — part of a series from NIST’s Face Recognition Vendor Test (FRVT) program conducted in collaboration with the Department of Homeland Security’s Science and Technology Directorate, the Office of Biometric Identity Management, and Customs and Border Protection — explored how well each of the algorithms was able to perform “one-to-one” matching, where a photo is compared with a different photo of the same person. (NIST notes this sort of technique is often used in smartphone unlocking and passport identity verification systems.) The team applied the algorithms to a set of about 6 million photos used in previous FRVT studies, but they didn’t test “one-to-many” matching, which is used to determine whether a person in a photo matches any in a database of known images.

Because real-world masks differ, the researchers came up with nine mask variants to test, which included differences in shape, color, and nose coverage. The digital masks were black or a light blue approximately the same color as a blue surgical mask, while the shapes ranged from round masks covering the nose and mouth to a type as wide as the wearer’s face. The wider masks had high, medium, and low variants that covered the nose to varying degrees.

 NIST study finds that masks defeat most facial recognition algorithms

According to the researchers, algorithm accuracy with masked faces declined “substantially” across the board. Using unmasked images, the most accurate algorithms failed to authenticate a person about 0.3% of the time, and masked images raised even these top algorithms’ failure rate to about 5%, while many “otherwise competent” algorithms failed between 20% and 50% of the time.

In addition, masked images more frequently caused algorithms to be unable to process a face, meaning they couldn’t extract features well enough to make an effective comparison. The more of the nose a mask covered, the lower the algorithm’s accuracy; accuracy degraded with greater nose coverage. Error rates were generally lower with round masks and black masks as opposed to surgical blue ones. And while false negatives increased, false positives remained stable or modestly declined. (A false negative indicates an algorithm failed to match two photos of the same person, while a false positive indicates it incorrectly identified a match between photos of two different people.)

“With respect to accuracy with face masks, we expect the technology to continue to improve,” continued Ngan. “But the data we’ve taken so far underscores one of the ideas common to previous FRVT tests: Individual algorithms perform differently. Users should get to know the algorithm they are using thoroughly and test its performance in their own work environment.”

The results of the study align with a VentureBeat article earlier this year that found that facial recognition algorithms used by Google and Apple struggled to recognize mask-wearing users. But crucially, NIST didn’t take into account systems designed specifically to identify mask wearers, like those from Chinese company Hanwang and researchers affiliated with Wuhan University. In an op-ed in April, Northeastern University professor Woodrow Hartzog characterized masks as a temporary technological speed bump that won’t stand in the way of increased facial recognition use in the age of COVID-19. Already, companies like Clearview AI are attempting to sell facial recognition to state agencies for the purpose of tracking people infected with COVID-19.

Perhaps in recognition of this, this summer, NIST plans to test algorithms created with face masks in mind and conduct tests with one-to-many searches and other variations.

Let’s block ads! (Why?)

Big Data – VentureBeat

Read More

‘Fundamentally flawed’ study describes facial recognition system designed to identify non-binary people

July 15, 2020   Big Data
 ‘Fundamentally flawed’ study describes facial recognition system designed to identify non binary people

VB Transform

The AI event for business leaders

Hosted Online

July 14 – 17

Register Today

Last Chance: Register for Transform, VB’s AI event of the year, hosted online July 15-17.


In a paper published on the preprint server Arxiv.org, coauthors affiliated with Harvard and Autodesk propose extending current facial recognition systems’ capabilities to identify “gender minority subgroups” such as the LGBTQ and non-binary communities. They claim the corpora they created — a “racially balanced” database capturing a subset of LGBTQ people and an “inclusive-gender” database — can mitigate bias in gender classification algorithms. But according to the University of Washington AI researcher Os Keyes, who wasn’t involved with the research, the paper appears to conceive of gender in a way that’s not only contradictory, but dangerous.

“The researchers go back and forth between treating gender as physiologically and visually modeled in a fixed way, and being more flexible and contextual,” Keyes said. “I don’t know the researchers’ backgrounds, but I’m at best skeptical that they ever spoke to trans people about this project.”

Facial recognition is problematic on its face — so much so that the Association for Computing Machinery (ACM) and American Civil Liberties Union (ACLU) continue to call for moratoriums on all forms of it. (San Francisco, Oakland, Boston, and five other Massachusetts communities have banned the use of facial recognition by local departments, and after the height of the recent Black Lives Matter protests in the U.S., companies including Amazon, IBM, and Microsoft halted or ended the sale of facial recognition products.) Benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) have found that facial recognition exhibits race, gender bias, and poor performance on people who don’t conform to a single gender identity. And facial recognition can be wildly inaccurate, misclassifying people upwards of 96% of the time.

In spite of this, the paper’s coauthors — perhaps with the best of intentions — sought to improve the performance of facial recognition systems when they’re applied to transgender and non-binary people. They posit that current facial recognition algorithms are likely to amplify societal gender bias and that the lack of LGBTQ representation in popular benchmark databases leads to a “false sense of progress” on gender classification tasks in machine learning, potentially harming the self-confidence and psychology of those misgendered by the algorithms.

That’s reasonable, according to Keyes, but the researchers’ assumptions about gender are not.

“They settle on treating gender as fixed, and modeling non-binary people as a ‘third gender’ category in between men and women, which isn’t what non-binary means at all,” Keyes said. “People can be non-binary and present in very different ways, identify in very different ways, [and] have many different life histories and trajectories and desired forms of treatment.”

Equally problematic is that the researchers cite and draw support from a controversial study implying all gender transformation procedures, including hormone replacement therapy (HRT), cause “significant” face variations over time, both in shape and texture. Advocacy groups like GLAAD and the Human Rights Campaign have denounced the study as “junk science” that “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike.”

“This junk science … draws on a lot of (frankly, creepy) evolutionary biology and sexology studies that treat queerness as originating in ‘too much’ or ‘not enough’ testosterone in the womb,” Keyes said. “Again, those studies haven’t been validated — they’re attractive because they imply that gay people are too feminine, or lesbians too masculine, and reinforce social stereotypes. Depending on them and endorsing them in a study the authors claim is for mitigating discrimination is absolutely bewildering.”

The first of the researchers’ databases — the “inclusive database” — contains 12,000 images of 168 unique identities, including 29 white males, 25 white females, 23 Asian males, 23 Asian females, 33 African males, and 35 African females from different geographic regions, 21 of whom (9% of the database) identify as LGBTQ. The second — the non-binary gender benchmark database — comprises 2,000 headshots of 67 public figures labeled as “non-binary” on Wikipedia.

Keyes takes issue with the second data set, which they argue is non-representative because it’s self-selecting and because of the way appearance tends to be policed in celebrity culture. “People of color, disabled people, poor people need not apply — certainly not as frequently,” Keyes said. “It’s sort of akin to fixing bias against women by adding a data set exclusively of women with pigtails; even if it ‘works,’ it’s probably of little use to anyone who doesn’t fit a very narrow range of appearances.”

The researchers trained several image classification algorithms on a “racially-imbalanced” but popular facial image database — the Open University of Israel’s Adience — augmented with images from their own data sets (1,500 images from the inclusive database and 1,019 images from the non-binary database). They then applied various machine learning techniques to mitigate algorithmic bias and boost the models’ accuracy, which they claim enabled the best-performing model to predict non-binary people with 91.97% accuracy.

The results in any case ignore the fact that “trans-inclusive” systems for non-consensually defining someone’s gender are a contradiction in terms, according to Keyes.  “When you have a technology that is built on the idea that how people look determines, rigidly, how you should classify and treat them, there’s absolutely no space for queerness,” they said. “Rather than making gender recognition systems just, or fair, what projects like this really do is provide a veneer of inclusion that serves mostly to legitimize the surveillance systems being built — indeed, it’s of no surprise to me that the authors end by suggesting that, if there are problems with their models, they can be fixed by gathering more data; by surveilling more non-binary people.”

Let’s block ads! (Why?)

Big Data – VentureBeat

Read More

The Columbia COVID Study

May 21, 2020   Humor

Really interesting study about COVID-19 out of Columbia University. This study based their model on actual data at the county level (rather than state or even country), which is important because virus transmission is always local, and used 7-day averages, which is important to avoid data noise.

The result that is getting all the headlines is that if the US had started distancing measures just 1 week earlier than we did, then the death toll would have been reduced 55% (less than half the people would have died). And if the US had started 2 weeks earlier, then they would have been reduced by 83%.

This is all about exponential growth. Small changes early on lead to huge changes in results later.

A study result that is even more interesting to me is that they actually showed that when distancing measures are relaxed, there is a significant delay in response. Here’s the important quote from the study:

… a decline of daily confirmed cases continues for almost two weeks after easing of control measures. … This decreasing trend, caused by the NPIs [non-pharmaceutical interventions] in place prior to May 4, 2020 coupled with the lag between infection acquisition and case confirmation, conveys a false signal that the pandemic is well under control. Unfortunately, due to high remaining population susceptibility, a large resurgence of both cases and deaths follows.

The original study itself is a bit of a slog. The NY Times has a easier-to-read summary. If you don’t have access to that, I’m sure there will be more articles about this soon (although the further away you get from the original study the more it gets muddied by the media).

Bottom line:

  • Timing is everything in the spread of a virus. If you wait until things are bad, it is way too late. Just a one-week delay killed 36,000 people by May 3 (more than ten times the death toll from 9/11). It will continue to kill more in the future if we don’t get this under control.
  • Opening things up and then waiting two weeks to see what happens is a trap. We are already falling for that trap. If we fall for it a second time, then shame on us.
 If you liked this, you might also like these related posts:
  1. How South Korea responded to COVID-19
  2. A Study in Bad Leadership
  3. Read this NOW
  4. Exponentially Increasing
  5. The Cost of Our Health

Let’s block ads! (Why?)

Political Irony

Read More

Google’s ML-fairness-gym lets researchers study the long-term effects of AI’s decisions

February 6, 2020   Big Data

Determining whether an AI system is maintaining fairness in its predictions requires an understanding of models’ short- and long-term effects, which might be informed by disparities in error metrics on a number of static data sets. In some cases, it’s necessary to consider the context in which the AI system operates in addition to the aforementioned error metrics, which is why Google researchers developed ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments.

ML-fairness-gym — which was published in open source on Github this week –is designed to be used to research the long-term effects of automated systems by simulating decision-making using OpenAI’s Gym framework. AI-controlled agents interact with digital environments in a loop, and at each step, an agent chooses an action that affects the environment’s state.  The environment then reveals an observation that the agent uses to inform its next actions, so that the environment models the system and dynamics of a problem and the observations serve as data.

For instance, given the classic lending problem, where the probability that groups of applicants pay back a bank loan is a function of their credit score, the bank acts as the agent and receives applicants, their scores, and their membership in the form of environmental observations. It makes a decision — accepting or rejecting a loan — and the environment models whether the applicant successfully repays or defaults and adjusts their credit score accordingly. Throughout, ML-fairness-gym simulates the outcomes so that the effects of the bank’s policies on fairness to the applicants can be assessed.

ML-fairness-gym in this way cleverly avoids the pitfalls of static data set analysis. If the test sets (i.e., corpora used to evaluate model performance) in classical fairness evaluations are generated from existing systems, they may be incomplete or reflect the biases inherent to those systems. Furthermore, the actions informed by the output of AI systems can have effects that might influence their future input.

 Google’s ML fairness gym lets researchers study the long term effects of AI’s decisions

Above: In the lending problem scenario, this graph illustrates changing credit score distributions for two groups over 100 steps of simulation.

Image Credit: Google

“We created the ML-fairness-gym framework to help ML practitioners bring simulation-based analysis to their ML systems, an approach that has proven effective in many fields for analyzing dynamic systems where closed form analysis is difficult,” wrote Google Research software engineer Hansa Srinivasan in a blog post.

Several environments that simulate the repercussions of different automated decisions are available, including one for college admissions, lending, attention allocation, and infectious disease. (The ML-fairness-gym team cautions that the environments aren’t meant to be hyper-realistic and that best-performing policies won’t necessarily translate to the real world.) Each have a set of experiments corresponding to published papers, which are meant to provide examples of ways ML-fairness-gym can be used to investigate outcomes.

The researchers recommend using ML-fairness-gym to explore phenomena like censoring in the observation mechanism, errors from the learning algorithm, and interactions between the decision policy and the environment. The simulations allow for the auditing of agents to assess the fairness of decision policies based on observed data, which can motivate data collection policies. And they can be used in concert with reinforcement learning algorithms — algorithms that spur on agents with rewards — to derive new policies with potentially novel fairness properties.

In recent months, a number of corporations, government agencies, and independent researchers have made attempts at tackling the so-called “black box” problem in AI — the opaqueness of some AI systems — with varying degrees of success.

“Machine learning systems have been increasingly deployed to aid in high-impact decision-making, such as determining criminal sentencing, child welfare assessments, who receives medical attention and many other settings,” continued Srinivasan. “We’re excited about the potential of the ML-fairness-gym to help other researchers and machine learning developers better understand the effects that machine learning algorithms have on our society, and to inform the development of more responsible and fair machine learning systems.”

In 2017, the U.S. Defense Advanced Research Projects Agency launched DARPA XAI, a program that aims to produce “glass box” models that can be easily understood without sacrificing performance. In August, scientists from IBM proposed a “factsheet” for AI that would provide information about a model’s vulnerabilities, bias, susceptibility to adversarial attacks, and other characteristics. A recent Boston University study proposed a framework to improve AI fairness. And Microsoft, IBM, Accenture, and Facebook have developed automated tools to detect and mitigate bias in AI algorithms.

Let’s block ads! (Why?)

Big Data – VentureBeat

Read More

Generational In-store Retail Preferences not What You Might Think, Study Finds

June 30, 2019   NetSuite
ocpc businesssolution cx 608166811 Generational In store Retail Preferences not What You Might Think, Study Finds

Posted by Barney Beal, Content Director

Today’s retailers, already contending with evolving customer preferences, aging IT infrastructure poorly suited to adapt to modern demands, and competition not only from Amazon but also from manufacturers and distributors now face a new challenge—meeting customer expectations across generations.

A recent study conducted by Oracle NetSuite, Wakefield Research and The Retail Doctor, found some significant differences in expectations, and also challenged a few stereotypes across four generations: baby boomers, Gen X, millennials and Gen Z.

The study surveyed 1,200 consumers and 400 retail executives across the U.S., U.K. and Australia.

Generations Have Different Expectations for What’s In-Store

Some of the most profound, and most surprising, differences appear in the in-store shopping experience. Know who’s more likely to do more in-store shopping this year? No, not the baby boomers (13%) and Gen X (29%). It’s the “digital native” Gen Z and millennials (43%), according to the study. More of the younger generations view the retail experience positively (57%) than Gen X (40%) and baby boomers (13%).

The results do not suggest, however, that retailers can sit back and revel in the fact that they have appealed to the coveted younger demographic with the in-store experience.

First, there’s a case to be made that young people still like going out shopping with their friends, whether they’re digitally literate, smartphone users or not. Moreover, when they do get in the store, generations view their interactions with staff very differently. Among Gen Z, 42% are more annoyed by increased interaction by retail associates. Millennials (56%), Gen X (44%) and baby boomers (43%) all noted they would feel more welcomed by more in-store interactions.

What’s the takeaway for retailers? Clearly, Gen Z is the first truly “digital native” generation. They know what they’re looking for, they’ve often researched a purchase before they enter the store and, when they do need help, they expect the store associate to know more than they do. They also want an associate that isn’t pushing products that aren’t relevant. That means store associates need to be sufficiently trained and equipped to handle any situation that arises, whether that’s recommending complementary products or, if something is out of stock, shipping it to the customer’s home from a warehouse or even a nearby store. Provide a poor experience and the customer is likely to take their complaints to social media—something true of all generations.

Invest in Social Media, with Tempered Expectations 

However, social media is not the panacea many retailers believe it to be, according to the study. While almost all retail executives (98%) think that engaging customers on social media is important to building stronger relationships, overall only 12% of consumers think social media has a significant impact on the way they think or feel about a brand.

For those that do engage with brands over social media, the results more closely follow expectations with Gen Z consumers (38%) much more likely than other generations to engage with retailers on social to get to know the brand compared to millennials (25%), Gen X (27 %) and baby boomers (21%). Similarly, Gen Z (65%) consumers and millennials (62%) believe social media platforms have an impact on their relationship with brands, while more than half of baby boomers (53%) and 29% of Gen X consumers do not engage with brands on social media.

The takeaway here for retailers? Those that want to reach millennials and Gen Z had better make a commitment to social. Social media is a form of trust; those that are not active on social channels are viewed as less trustworthy. For example, Gen Z expects brands to showcase their personality online.

If a retailer wants to connect with these shoppers and influence their purchases, it needs to be active and responsive on social media, because its younger customers already are. Many Gen Z apparel shoppers post pictures of items they are considering, and 46% then make a purchase based on the feedback (social proof) they receive on that post.

Access the full report for more details on generational differences in retail expectations and what retail executives are thinking.

Posted on Tue, June 25, 2019 by NetSuite


The NetSuite Blog


New study: “Digital natives” value brick and mortar stores more than their parents or grandparents

June 26, 2019   NetSuite

Global Study Highlights the Varying Shopping Expectations of Different Generations and the Role of Technology in Personalizing Retail

REDWOOD CITY, CA.—June 25, 2019—Despite clear differences in expectations among shoppers of different generations, almost half of retailers (44 percent) have made no progress in tailoring the in-store shopping experience, according to a recent study conducted by Oracle NetSuite, Wakefield Research and The Retail Doctor. The global study of 1,200 consumers and 400 retail executives across the U.S., U.K. and Australia dispelled stereotypes about generational shopping behavior and found big differences in expectations across baby boomers, Gen X, millennials and Gen Z.

“We have seen decades of diminishing experiences in brick and mortar stores, and the differences identified in these results point to its impact on consumers over the years,” said Bob Phibbs, CEO, The Retail Doctor. “Retailers have fallen behind in offering in-store experiences that balance personalization and customer service, but there’s an opportunity to take the reins back. The expectation from consumers is clear, and it’s up to retailers to offer engaging and custom experiences that will cater to shoppers across a diverse group of generations.”

Beauty is in the eye of the beholder: Retailers struggle to keep stride with generational shoppers

The in-store shopping experience remains an important part of the retail environment for all generations, but the progress retailers are making to improve the in-store experience is being viewed differently by different generations.

  • Despite the stereotypes of “digital natives”, Gen Z and millennials (43 percent) are most likely to do more in-store shopping this year, followed by Gen X (29 percent) and baby boomers (13 percent).
  • Gen Z and millennials (57 percent) had the most positive view of the current retail environment, feeling it was more inviting, followed by Gen X (40 percent). Baby boomers (27 percent) were more likely than consumers overall to find the current retail environment less inviting.
  • Gen Z valued in-store interaction the least, with 42 percent feeling more annoyed by increased interaction with retail associates. In contrast, millennials (56 percent), Gen X (44 percent) and baby boomers (43 percent) all noted they would feel more welcomed by more in-store interactions.

Retailers view emerging technologies through rose-colored glasses

While more than three quarters of retail executives (79 percent) believe having AI and VR in stores will increase sales, the study found that these technologies are not yet widely accepted by any generation.

  • Overall, only 14 percent of consumers believe that emerging technologies like AI and VR will have a significant impact on their purchase decisions.
  • Emerging tech in retail stores is most attractive to millennials (50 percent), followed by Gen Z (38 percent), Gen X (35 percent) and baby boomers (20 percent).
  • Perceptions of VR varied widely across generations: 58 percent of Gen Z said VR would have some influence on their purchase decisions, while 59 percent of baby boomers said it would have no influence at all.

Insta-famous brands reach Gen Z and millennial consumers, but not as much as retailers think

While almost all retail executives (98 percent) think that engaging customers on social media is important to building stronger relationships with them, the study found a big disconnect with consumers across all generations.

  • Overall, only 12 percent of consumers think their engagement with brands on social media has a significant impact on the way they think or feel about a brand.
  • Among those who engage with brands on social media, Gen Z consumers (38 percent) are much more likely than millennials (25 percent) and baby boomers (21 percent) to engage with retailers on social to get to know the brand.
  • Gen Z (65 percent) consumers and millennials (63 percent) believe their engagement with brands on social media platforms has an impact on their relationship with those brands.
  • More than half of baby boomers (53 percent) and 29 percent of Gen X consumers do not engage with brands on social media.

“After all the talk about brick and mortar stores being dead, it’s interesting to see that ‘digital natives’ are more likely to increase their shopping in physical stores this year than any other generation,” said Greg Zakowicz, senior commerce marketing analyst, Oracle NetSuite. “Stepping back, these findings fit with broader trends we have been seeing around the importance of immediacy and underline why retailers cannot afford to make assumptions about the needs and expectations of different generations. It really is a complex puzzle and, as this study clearly shows, retailers need to think carefully about how they meet the needs of different generations.”

To read more about NetSuite’s insights into the report’s findings, visit NetSuite’s cloud blog.

Methodology
For this survey, 1,200 consumers and 400 retail executives were surveyed about the overall retail environment, in-store and online shopping experiences, and advanced technologies. Both retailers and consumers were surveyed in three global markets: the U.S., U.K. and Australia, with retail executives representing organizations with between $10 million and $100 million in annual sales.

About Wakefield Research
Wakefield is a full-service market research firm that uncovers insights for brands to help them solve problems and grow their business. Wakefield Research is a partner to the world’s leading consumer and B2B brands, including 50 of the Fortune 100. Wakefield Research conducts qualitative and quantitative research in 70 countries. For more information, please visit https://www.wakefieldresearch.com

About The Retail Doctor
The Retail Doctor is a New York-based retail consulting firm created by expert retail consultant and leading business mentor Bob Phibbs. With over 30 years of experience in retail, Bob has worked as a consultant, speaker, and entrepreneur, helping businesses revolutionize their brand and grow their success. Bob is also the author of three highly-praised books, including The Retail Doctor’s Guide to Growing Your Business (WILEY). His clients include some of the largest retail brands in the world including Bernina, Brother, Caesars Palace, Hunter Douglas, Lego, Omega and Yamaha. For more information, please visit www.retaildoc.com

About Oracle NetSuite
For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Safe Harbor
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


NetSuite's Latest Press Coverage
