
Tag Archives: Enterprise

Sisense and Signals Analytics Bring the Power of External Data to the Enterprise

December 30, 2020   Sisense

We’re stronger when we work together. In our Partner Showcase, we highlight the amazing integrations, joint projects, and new functionalities created and sustained by working with our technology partners at companies like AWS, Google, and others. 

Business teams constantly want to know how their companies are performing — against their internal goals and those of the market they compete in. They benchmark their performance against their previous results, what their customers are asking for, what their customers are buying, and ideally what their customers will buy. To get their answers, businesses typically rely on data sources that are all internal, showing decision-makers only part of the picture.

That’s now in the past. Today, through a strategic partnership, Signals Analytics and Sisense are making it easy to incorporate external data analytics into a company’s main BI environment. The result is a broader, more holistic view of the market coupled with more actionable and granular insights. Infusing these analytics everywhere democratizes data usage and access to critical insights across the enterprise.

Organizations who are truly data-driven know how to leverage a wide range of internal and external data sources in their decision-making. The integration of Signals Analytics in the Sisense business intelligence environment gets them there faster and seamlessly, without the need for specialized resources to build complex systems.

Kobi Gershoni, Signals Analytics co-founder and chief research officer

Why external data analytics?

The integration of Signals Analytics with the Sisense platform delivers on the promise of advanced analytics — infusing intelligence at the right place and the right time, upleveling standard decisions to strategic decisions, and speeding the time to deployment. Combining internal and external data unlocks powerful insights that can drive innovation, product development, marketing, partnerships, acquisitions, and more. 

Primary use cases for external data analytics

External data is uniquely well-suited to inform decision points across the product life cycle, from identifying unmet needs to predicting sales for specific attributes, positioning against the competition, measuring outcomes, and more. By incorporating a wide range of external data sources that are connected and contextualized, users benefit from a more holistic picture of the market.

For example, when combining product reviews, product listings, social media, blogs, forums, news sites, and more with sales data, the accuracy rate for predictive analytics jumps from 36% to over 70%. Similar results are seen when going from social listening alone to using a fully connected and contextualized external data set to generate predictions.  

The Sisense and Signals Analytics partnership: What you need to know

  • Signals Analytics provides the connected and contextualized datasets for specific fast-moving consumer goods (FMCG) categories
  • Sisense users can tap into one of the broadest external datasets available and unleash the power of this connected data in their Sisense dashboards
  • The ROI of the analytics investment dramatically increases when combining historical data, sales, inventory, and customer data with Signals Analytics data

Integrate external data analytics in your Sisense environment in three easy steps

Step 1: Connect

From your Sisense UI, use the Snowflake data connector to connect to the Signals Analytics Data Mart. The data can be queried live in the Sisense ElastiCube.

Step 2: Select

Once the data connection has been established, select the data types needed by filtering the relevant “Catalog.”

Step 3: Visualize

Select the dimensions, measures, and filters to apply, then visualize.
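
For readers who want to see what that connection looks like in code rather than through the Sisense UI, here is a minimal sketch of querying a Snowflake-hosted data mart from Python. The account, credentials, database, and table names are hypothetical placeholders, not the actual Signals Analytics schema.

    # Illustrative only: in Sisense this is done through the built-in Snowflake
    # connector UI. All identifiers below are hypothetical placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="your_account",
        user="your_user",
        password="your_password",
        warehouse="ANALYTICS_WH",
        database="SIGNALS_DATA_MART",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Hypothetical catalog filter mirroring Step 2 ("Select")
        cur.execute(
            "SELECT catalog, dimension, measure_value "
            "FROM external_signals WHERE catalog = %s LIMIT 100",
            ("FMCG_BEVERAGES",),
        )
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()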

More data sources, better decisions

Your company is sitting on a large supply of data, but unless and until you find the right datasets to complement it, the questions you can answer and the insights you can harness from it will be limited. Whatever your company does and whatever questions you are trying to answer, mashing up data from a variety of sources, inside the right platform, is vital to surfacing game-changing insights.

To get started on the next leg of your analytics journey, start a free trial or become a partner.

Tags: advanced analytics | analytics implementation


Blog – Sisense


9 trends in enterprise database technology

December 30, 2020   Big Data



The database has always revolved around rock-solid reliability. Data goes in and then comes out in exactly the same way. Occasionally, the bits will be cleaned up and normalized so all of the dates are in the same format and the text is in the same character set, but other than that, nothing should be different.

That consistency is what makes the database essential for any enterprise — allowing it to conduct things like ecommerce transactions. It’s also why the database remains distinct from the data warehouse, another technology that is expanding its mission for slower-twitch things like analysis. The database acts as the undeniable record of the enterprise, the single source of truth.

Now databases are changing. Their focus is shifting and they’re accepting more responsibilities and offering smarter answers. In short, they’re expanding and taking over more and more of the stack.

Many of us might not notice because we’ve been running the same database for years without a change. Why mess with something that works? But as new options and features come along, it makes sense to rethink the architectures of data flows and take advantage of all the new options. Yes, the data will still be returned exactly as expected, but it will be kept safer and presented in a way that’s easier to use.

Many drivers of the change are startups built around a revolutionary new product, like multi-cloud scaling or blockchain assurance. For each new approach to storing information, there are usually several well-funded startups competing to dominate the space and often several others still in stealth mode.

The major companies are often not far behind. While it can take more time to add features to existing products, the big companies are finding ways to expand, sometimes by revising old offerings or by creating new ones in their own skunkworks. Amazon, for instance, is the master at rolling out new ways to store data. Its cloud has at least 11 different products called databases, and that doesn’t include the flat file options.

The other major cloud providers aren’t far behind. Microsoft has migrated its steadfast SQL Server to Azure and found ways to offer a half-dozen open source competitors, like MySQL. Google delivers both managed versions of relational databases and large distributed and replicated versions of NoSQL key/value pairs.

The old standards are also adding new features that often deliver much of the same promise as the startups while continuing support of older versions. Oracle, for instance, has been offering cloud versions of its database while adding new query formats (JSON) and better performance to handle the endless flood of incoming data.

IBM is also moving Db2 to the cloud while adding new features like integration with artificial intelligence algorithms that analyze the data. It’s also supporting the major open source relational databases while building out a hybrid version that merges Oracle compatibility with the PostgreSQL engine.

Among the myriad changes to old database standards and new emerging players, here (in no particular order) are nine key ways databases are being reborn.

1. Better query language

SQL may continue to do the heavy lifting around the world. But newer options for querying — like GraphQL — are making it easier for front-end developers to find the data they need to present to the user and receive it in a format that can be dropped right into the user interface.

GraphQL follows the standard JavaScript format for serializing objects, making it easier for middle- and front-end code to parse it. It also hides some of the complexity of JOINs, making it simpler for end users to grab just the data they need. Developers are already adding tools like Apollo Studio, an IDE for exploring queries, or Hasura, an open source front-end that wraps GraphQL around legacy databases like PostgreSQL.
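
To make the contrast with SQL concrete, here is a minimal sketch of a GraphQL request issued over HTTP from Python. The endpoint and the orders/customer fields are hypothetical; real schemas vary by service, but the pattern of asking for exactly the fields you need and getting JSON back in the same shape is the point.

    # Minimal GraphQL query over HTTP. Endpoint URL and schema fields are hypothetical.
    import requests

    query = """
    query RecentOrders($limit: Int!) {
      orders(limit: $limit) {
        id
        total
        customer { name email }
      }
    }
    """

    resp = requests.post(
        "https://api.example.com/graphql",  # hypothetical endpoint
        json={"query": query, "variables": {"limit": 5}},
        timeout=10,
    )
    resp.raise_for_status()
    # The response arrives as JSON shaped like the query, ready for the UI.
    print(resp.json()["data"]["orders"])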

2. Streaming databases follow vast flows

The model for a standard database is a big ledger, much like the ones clerks would maintain in fat bound books. Streaming databases like ksqlDB are built to watch an endless stream of data events and answer questions about them. Instead of imagining that the data is a permanent table, the streaming database embraces the endlessly changing possibilities as data flows through it.
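
As a conceptual illustration (not ksqlDB’s actual API), the sketch below contrasts a one-off table lookup with a streaming query that keeps its answer continuously up to date as events arrive.

    # Conceptual sketch: a streaming query maintains a continuously updated
    # answer as events arrive, rather than scanning a static table once.
    from collections import defaultdict
    from typing import Dict, Iterable

    def running_counts(events: Iterable[dict]) -> Iterable[Dict[str, int]]:
        """Emit an updated per-page view count after every incoming event."""
        counts: Dict[str, int] = defaultdict(int)
        for event in events:
            counts[event["page"]] += 1
            yield dict(counts)  # each yield is the "current answer" to the query

    stream = [{"page": "/home"}, {"page": "/pricing"}, {"page": "/home"}]
    for snapshot in running_counts(stream):
        print(snapshot)
    # {'/home': 1} -> {'/home': 1, '/pricing': 1} -> {'/home': 2, '/pricing': 1}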

3. Time-series database

Most databases have special column formats for tracking date stamps. Time-series databases like InfluxDB or Prometheus do more than just store the time. They track and index the data for fast queries, like how many times a user logged in between January 15 and March 12. These are often special cases of streaming databases where the data in the streams is being tracked and indexed for changes over time.
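
The core idea can be sketched in a few lines: keep timestamps indexed (here, simply sorted) so a date-range question becomes two binary searches instead of a full scan. Real time-series engines such as InfluxDB and Prometheus use far more elaborate storage; this is only the principle.

    # Conceptual sketch of time-series indexing: sorted timestamps make
    # range counts cheap.
    from bisect import bisect_left, bisect_right
    from datetime import datetime

    login_times = sorted([
        datetime(2021, 1, 10, 9, 30),
        datetime(2021, 1, 20, 14, 5),
        datetime(2021, 2, 2, 8, 45),
        datetime(2021, 3, 15, 17, 0),
    ])

    def logins_between(start: datetime, end: datetime) -> int:
        return bisect_right(login_times, end) - bisect_left(login_times, start)

    print(logins_between(datetime(2021, 1, 15), datetime(2021, 3, 12)))  # -> 2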

4. Homomorphic encryption

Cryptographers were once happy to lock up data in a safe. Now some are developing a technique called homomorphic encryption to make decisions and answer queries on encrypted data without actually decrypting it, a feature that vastly simplifies cloud security and data sharing. This allows computers and data analysts to work with data without knowing what’s in it. The methods are far from comprehensive, but companies like IBM are already delivering toolkits that can answer some useful database queries.
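
A toy example makes the property concrete. Textbook (unpadded) RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields the encryption of the product of the plaintexts. This is insecure and unlike the lattice-based schemes used in practice, but it shows how a server can compute on data it cannot read.

    # Toy demo of the homomorphic idea with textbook RSA: E(a) * E(b) mod n == E(a*b).
    # Insecure and illustrative only; production schemes work very differently.
    p, q, e = 61, 53, 17
    n = p * q                   # public modulus
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)         # private exponent (Python 3.8+)

    def encrypt(m): return pow(m, e, n)
    def decrypt(c): return pow(c, d, n)

    a, b = 12, 7
    product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
    # The "server" multiplied ciphertexts without ever seeing 12 or 7.
    assert decrypt(product_of_ciphertexts) == a * b
    print(decrypt(product_of_ciphertexts))  # 84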

5. In-memory database

The original goal of a database was to organize data so it could be available in the future, even when electricity is removed. The trouble is that sometimes even storing the data to persistent disks takes too much time, and it may not be worth the effort. Some applications can survive the occasional loss of data (would the world end if some social media snark disappeared?), and fast performance is more important than disaster recovery. So in-memory databases like Amazon’s ElastiCache are designed for applications that are willing to trade permanence for lightning-fast response times.
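
A minimal sketch of the trade-off, using Redis (the engine behind many ElastiCache deployments): reads and writes are answered from memory, and the application accepts that a node failure can lose recent data. The host and key names are placeholders.

    # Sketch of in-memory caching with Redis. Host and key names are placeholders.
    import redis

    r = redis.Redis(host="my-cache.example.com", port=6379, decode_responses=True)

    # Writes land in memory and return almost immediately; expire after one hour.
    r.set("session:42:last_seen", "2020-12-30T10:15:00Z", ex=3600)

    # Reads are equally fast, but the data may not survive a node failure.
    print(r.get("session:42:last_seen"))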

6. Microservice engines

Developers have traditionally built their code as a separate layer that lives outside the database itself, and this code treats the database as a black box. But some are noticing that the databases are so feature-rich they can act as microservice engines on their own. PostgreSQL, for instance, now allows embedded procedures to commit full transactions and initiate new ones before spitting out answers in JSON. Developers are recognizing that the embedded code that has been part of databases like Oracle for years may be just enough to build many of the microservices imagined by today’s architects.
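
As a rough sketch of that pattern, the query below asks PostgreSQL itself to assemble the JSON payload, so the calling service needs no mapping layer. The table, columns, and connection details are hypothetical.

    # Sketch: PostgreSQL builds the JSON response; the caller just forwards it.
    # Connection string and "orders" table are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=shop user=app password=secret host=localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT json_agg(row_to_json(o))
            FROM (SELECT id, status, total FROM orders WHERE status = %s) AS o
            """,
            ("open",),
        )
        payload = cur.fetchone()[0]  # already JSON-shaped, ready for an HTTP endpoint
        print(payload)
    conn.close()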

Jupyter notebooks started out as a way for data scientists to bundle their answers with the Python code that produced them. Then data scientists started integrating the data access with the notebooks, which meant going where the information was stored: the database. Today, SQL is easy to integrate, and users are becoming comfortable using the notebooks to access the database and generate smart reports that integrate with data science (Julia or R) and machine learning tools. The newer Jupyter Lab interface is turning the classic notebook into a full-service IDE, complete with extensions that pull data directly from SQL databases.

7. Graph databases

The network of connections between people or things is one of the dominant data types on the internet, so it’s no surprise that databases are evolving to make it easier to store and analyze these relationships.

Neo4j now offers a visualization tool (Bloom) and a collection of data science functions for developing complex reports about the network. GraphDB is focusing on developing “semantic graphs” that use natural language to capture linguistic structures for big analytic projects. TerminusDB is aimed at creating knowledge graphs with a versioning system much like Git. All of them bring efficiency to storing a complex set of relationships that don’t fit neatly into standard tables.
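
Here is a minimal sketch of querying relationships with the official Neo4j Python driver; the URI, credentials, and Person/KNOWS schema are hypothetical.

    # Sketch of a relationship query with the Neo4j Python driver.
    # URI, credentials, and schema are hypothetical.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def friends_of_friends(tx, name):
        result = tx.run(
            "MATCH (p:Person {name: $name})-[:KNOWS]-()-[:KNOWS]-(fof:Person) "
            "WHERE fof.name <> $name "
            "RETURN DISTINCT fof.name AS name",
            name=name,
        )
        return [record["name"] for record in result]

    with driver.session() as session:
        print(session.read_transaction(friends_of_friends, "Alice"))
    driver.close()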

8. Merging data storage with transport

Databases were once hidden repositories to keep data safe in the back office. Delivering this information to the user was the job of other code. Now, databases like Firebase treat the user’s phone or laptop as just another location for replicating data.

Databases like FaunaDB are baking replication into the stack, thus saving the DBA from moving the bits. Now, developers don’t need to think about getting information to the user. They can just read and write from the local data store and assume the database will handle the grubby details of marshaling the bytes across the network while keeping them consistent.

9. Data everywhere

A few years ago, all the major browsers began supporting the Local Storage and Indexed Storage APIs, making it easier for web applications to store significant amounts of data on the client’s machine. The early implementations limited the data to 5MB, but some have bumped the limits to 10MB. The response time is much faster, and it will also work even when the internet connection is down. The database is not just running on one box in your datacenter, but in every client machine running your code.



Big Data – VentureBeat


Accelerate Your Enterprise Connectivity with TIBCO Cloud

December 24, 2020   TIBCO Spotfire

Reading Time: 2 minutes

Businesses today face unprecedented challenges responding to market volatility, increasing the importance of business agility as the means to successfully adapt to changing market conditions. One key enabler of business agility is the ability to quickly connect digital assets no matter where they are hosted to create new capabilities or streamline processes. 

Accelerated Connectivity = Greater Agility

This growing pressure means many teams are looking for ways to reduce time to market for new connectivity applications as much as possible. As the number of apps created across the company increases, there is an opportunity to streamline development through the reuse of digital assets like APIs. However, tracking down API specs and manually documenting them for consumption in applications is time-consuming work that takes attention away from more valuable tasks.

New connections could be created even faster if there was a way for developers to easily discover APIs created by different teams across the business to reuse in new integration apps. To help our customers navigate this challenge and reduce time to market, the TIBCO Cloud makes it easy for application developers across your business to share and discover digital assets. 

Service Discovery with TIBCO Cloud 

The TIBCO Cloud provides a centralized resource for developers to discover and share APIs or other REST services created across the TIBCO Cloud. Using a simple drop-down box, developers can easily find and consume APIs or register a new API that can be reused by other developers. 


In today’s volatile market environment, even the smallest time saving can make a difference as businesses quickly pivot to meet new demands. By promoting the reuse of assets across teams, the registry lets developers focus on more valuable work, such as defining and building new product capabilities.


Learn More 

For an example using API discovery to streamline integration app development, check out this demo where you will learn how to easily use TIBCO Cloud Integration to automate your marketing lead generation process. 


The TIBCO Blog


Zoomin has raised $21 million to unify enterprise product content

December 16, 2020   Big Data


Zoomin, a “knowledge orchestration” platform that helps users extract answers from enterprise documentation, today revealed it has raised $21 million. Investors include Salesforce Ventures, Bessemer Venture Partners, and Viola Growth, and the investment, which had been undisclosed before today, came in the form of several tranches starting in 2018.

Zoomin was founded in 2015 and has hubs in New York and Tel Aviv. The company aims to help businesses make their vast pools of technical content easier to find and more usable. Companies may have many thousands of manuals, guides, training paraphernalia, online community discussions, and more, but all this disparate content is typically created and managed by different teams, people, and systems and often exists in silos. Zoomin “unifies” this content and delivers it in a more “intuitive and personalized way,” according to CEO and cofounder Gal Oron.

White label

Zoomin’s product can perhaps be crudely described as a white label search engine for enterprise product content, though Oron argues traditional federated search solutions focus on indexing content and taking users from their point of search to whatever external channel contains the results. Zoomin, on the other hand, can bring answers to the user wherever they conduct the search from.

“This means they don’t need to navigate across different sites and experience the fragmentation and drop-off that naturally accompanies this kind of ‘context switching,’” Oron told VentureBeat.

How Zoomin is used largely depends on what the customer needs from it. It could be a standalone technical resource center, perhaps something akin to a companywide intranet or even a public portal, transforming disparate static content into a dynamic search interface replete with filters, auto-suggestions, recommendations, and more. Or it could be a widget that offers content relevant to the context of a given situation, baked into the customer’s own applications, such as a customer relationship management (CRM) tool.

“In some cases, customers replace their existing portals with Zoomin, in other cases they keep their portal but use Zoomin to create an enhanced, intuitive, personalized experience,” Oron added.

Above: Zoomin “in-product”

Zoomin ships with various integration options, including REST APIs, JavaScript APIs, and command line interfaces (CLIs). It also offers prebuilt apps that can be downloaded, customized, and integrated with Salesforce or ServiceNow.

Above: Zoomin integrated into Salesforce

Under the hood, Zoomin says it uses both supervised and unsupervised machine learning (ML) models, developed and trained in-house, alongside off-the-shelf ML services.

“Zoomin’s knowledge graph ties together enterprise content, users, and interactions, powering the platform’s text analysis and classification, dynamic ranking, content recommendations, and predictive insights,” Oron explained.

Analytics also play a sizable role in Zoomin’s offering, including “traffic insights” that detail where traffic is coming from (including the referring domain and location); “content insights” that surface which topics and publications receive the most engagement; and “search insights” that give companies search pattern data that can be used to tweak the UX.

“These insights are designed to help our customers understand what users are searching for, learn which search terms are yielding no results, analyze the usage of search filters, and more,” Oron added.

Above: Zoomin: Content freshness analytics

Although Zoomin has operated fairly under the radar, it has amassed a number of notable clients, including now Adobe-owned Workfront, Chinese hospitality giant Shiji, and cybersecurity veteran Imperva.

Zoomin was entirely bootstrapped up until Bessemer’s inaugural investment in 2018, which was followed by Salesforce Ventures’ investment in 2019. Both VC firms reinvested in the startup this year, alongside Israel’s Viola Growth.



Big Data – VentureBeat


What enterprise CISOs need to know about AI and cybersecurity

November 19, 2020   Big Data


Hari Sivaraman is the Head of AI Content Strategy at VentureBeat.


Modern day enterprise security is like guarding a fortress that is being attacked on all fronts, from digital infrastructure to applications to network endpoints.

That complexity is why AI technologies such as deep learning and machine learning have emerged as a game-changing defensive weapon in the enterprise’s arsenal over the past three years. No other technology can keep up: AI can rapidly analyze billions of data points and glean patterns to help a company act intelligently and instantaneously to neutralize many potential threats.

Beginning about five years ago, investors started pumping hundreds of millions of dollars into a wave of new security startups that leverage AI, including CrowdStrike, Darktrace, Vectra AI, and Vade Secure, among others. (More on these companies lower down).

But it’s important to note that cyber criminals can themselves leverage increasingly easy-to-use AI solutions as potent weapons against the enterprise. They can unleash counter attacks against AI-led defenses, in a never-ending battle of one-upmanship. Or they can hack into the AI itself. After all, most AI algorithms rely on training data, and if hackers can mess with the training data, they can distort the algorithms that power effective defense. Cyber criminals can also develop their own AI programs to find vulnerabilities much faster than they used to, and often faster than the defending companies can plug them.

Humans are the strongest link

So how does an enterprise CISO ensure the optimal use of this technology to secure the enterprise? The answer lies in leveraging something called Moravec’s paradox, which suggests that tasks that are easy for computers/AI are difficult for humans and vice-versa. In other words, combine the best technology with the CISO’s human intelligence resources.

If clear guidelines can be distilled into training data for AI, technology can do a far better job than humans at detecting security threats. For instance, if there are guidelines on certain kinds of IP addresses or websites known to be sources of malicious activity, the AI can be trained to look for them, take action, learn from this, and become smarter at detecting such activity in the future. When such attacks happen at scale, AI will do a far more efficient job of spotting and neutralizing such threats compared to humans.
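
As a rough illustration of that supervised approach (not any vendor’s actual pipeline), the sketch below trains a classifier on labeled examples of past activity and then scores new events. The features, data, and labels are invented for the example.

    # Illustrative sketch of supervised threat scoring; features and data are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [requests_per_minute, failed_logins, bad_ip_reputation_score]
    X_train = [
        [3, 0, 0.1], [5, 1, 0.2], [4, 0, 0.0],              # labeled benign (0)
        [250, 40, 0.9], [180, 25, 0.8], [300, 60, 0.95],    # labeled malicious (1)
    ]
    y_train = [0, 0, 0, 1, 1, 1]

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score new, unseen activity and flag anything likely malicious.
    new_events = [[4, 1, 0.15], [220, 35, 0.85]]
    print(model.predict_proba(new_events)[:, 1])  # probability of "malicious"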

On the other hand, humans are better at judgement-based daily decisions, which can be difficult for computers. For instance, say a well-disguised spear phishing email mentions a piece of information that only an insider could have known. A vigilant human security expert with that knowledge will be able to connect the dots, recognize that this is probably an insider attack, and flag the email as suspicious. AI will find it difficult to perform this kind of abductive reasoning and arrive at such a decision. Even if you cover some such use cases with appropriate training data, it is nigh on impossible to cover all the scenarios. As every AI expert will tell you, AI is not ready to replace human general intelligence, or what we call ‘wisdom’, in the foreseeable future.

But…humans could also be the weakest link

At the same time, humans can be your weakest link. For instance, most phishing attacks rely on the naivety and ignorance of an untrained user, tricking them into unwittingly revealing information or performing an action that opens up the enterprise to attack. If your people are not trained to recognize such threats, the risks increase dramatically.

The key is to know that AI and human intelligence can join forces to form a formidable defense against cybersecurity threats. AI, while a game-changing weapon in the fight against cybercrime, cannot be left unsupervised, at least in the foreseeable future, and will always need assistance from trained, experienced security professionals and a vigilant workforce. This two-factor AI plus human intelligence (HI) security, if implemented fastidiously as a policy guideline across the enterprise, will go a long way toward winning the war against cybercrime.

7 AI-based cybersecurity companies

Below is more about the leading emerging AI-first cybersecurity companies. Each of them bites off a section of enterprise security needs. A robust cybersecurity strategy, which has to defend at all points, is almost impossible for a single company to manage. Attack fronts include hardware infrastructure (data centers and clouds), desktops, mobile devices (cellphones, laptops, tablets, external storage devices, etc.), IoT devices, software applications, data, data pipelines, operational processes, physical sites including home offices, communication channels (email, chat, social networks), insider attacks, and perhaps most importantly, employee and contractor security awareness training. With bad actors leveraging an ever widening range of attack techniques against enterprises (phishing, malware, DoS, DDoS, MitM, XSS, etc.), security technical leaders need all the help they can get.

CrowdStrike

CrowdStrike’s Falcon suite comprises cloud-native, AI-powered cybersecurity products for companies of all sizes. These products cover next-gen antivirus, endpoint detection and response, threat intelligence, threat hunting, IT hygiene, incident response, and proactive services. CrowdStrike says it uses ‘signatureless’ artificial intelligence/machine learning, which means it does not rely on a signature (i.e. a unique set of characteristics within the virus that differentiates it from other viruses). The AI can detect hitherto unknown threats using something it calls an Indicator of Attack (IOA) — a way to determine the intent of a potential attack — to stop known and unknown threats in real time. Based in Sunnyvale, California, the company has raised $481 million in funding and says it has almost 5,000 customers. It has grown rapidly by focusing mainly on its endpoint threat detection and response product, Falcon Prevent, which leverages behavioral pattern matching techniques on crowd-sourced data. It gained recognition for handling the high-profile DNC cyber attacks in 2016.

Darktrace

Darktrace offers cloud-native, self-learning, AI-based enterprise cybersecurity. The system works by understanding your organization’s ‘DNA’ and its normal healthy state. It then uses machine learning to identify any deviations from this healthy state, i.e. any intrusions that can affect the health of the enterprise, and triggers instantaneous and autonomous defense mechanisms. In this way, it describes itself as similar to antibodies in a human immune system. It protects the enterprise on various fronts, including workforce devices and IoT, SaaS, and email. It leverages unsupervised machine learning techniques in a system called Antigena to scan for potential threats and stop attacks before they can happen. The Cambridge, U.K.- and San Francisco-based company has raised more than $230M in funding and says it has more than 4,000 customers.

Vectra

Vectra’s Cognito NDR platform uses behavioral detection algorithms to analyze metadata from captured packets, revealing hidden and unknown attackers in real time, whether traffic is encrypted or not. By providing real-time attack visibility and non-stop automated threat hunting powered by always-learning behavioral models, it cuts cybercriminal dwell times and speeds up response times. The Cognito product uses a combination of supervised and unsupervised machine learning and deep learning techniques to glean patterns and act upon them automatically. The San Jose, California-headquartered Vectra has raised $223M in funding and claims “thousands” of enterprise clients.

SparkCognition

SparkCognition’s DeepArmor is an AI-built endpoint cybersecurity solution for enterprises that provides protection against known software vulnerabilities exploitable by cyber criminals. It protects against attack vectors such as ransomware, viruses, and malware, and offers threat visibility and management. DeepArmor’s technology leverages big data, NLP, and SparkCognition’s patented machine learning algorithms to protect enterprises from what it says are more than 400 million new malware variants discovered each year. Lenovo partnered with SparkCognition in October 2019 to launch DeepArmor Small Business. SparkCognition has raised roughly $175M in funding and boasts “thousands” of enterprise clients.

Vade Secure

Vade Secure is one of the leading products in predictive email defense. It claims to protect a billion mailboxes across 76 countries. Its product helps protect users from advanced email security threats, including phishing, spear phishing, and malware. Vade Secure’s AI products leverage a multi-layered approach, including supervised machine learning models trained on a massive dataset of more than 600 million mailboxes administered by the world’s largest ISPs. The France- and U.S.-based company has raised almost $100 million in funding and says it has more than 5,000 clients.

SAP NS2 

SAP NS2’s approach is to apply the latest advancements in AI and machine learning to problems like cybersecurity and counterterrorism, working with a variety of U.S. security agencies and enterprises. Its technology adopts the philosophy that security in this new era requires a balance of human and machine intelligence. In 2019, NS2 won the Defense Security Service James S. Cogswell Outstanding Industrial Security Achievement Award.

Blue Hexagon

Blue Hexagon offers deep learning-based real-time security for network threat detection and response in both enterprise network and cloud environments. It claims to deliver industry-leading sub-second threat detection with full AI-verdict explanation, threat categorization, and kill chain analysis (i.e. the structure of an attack, starting with identifying the target, the counter attack used to nullify the target, and proof of the destruction of the target). The Sunnyvale, California-based company has raised $37M in funding.

VentureBeat is the host of Transform, the world’s leading AI event focused on business and technology decision makers in applied AI, and in our July 2021 event (12-16 July), AI in cybersecurity will be one of the key areas we will be focusing on. Register early and join us to learn more.

The author will be speaking at the DTX Cyber Security event next week. Register early to learn more.





Big Data – VentureBeat


The secrets of small data: How machine learning finally reached the enterprise

October 9, 2020   Big Data


Over the past decade, “big data” has become Silicon Valley’s biggest buzzword. When they’re trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there’s a catch: Most companies are limited to “small” data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you’re trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular — transfer learning and collective learning — have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved only for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.


Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees’ IT support requests with over 90% precision.

Image Credit: Moveworks

From DIY to open source

Of course, data isn’t the only prerequisite for a world-class machine learning model — there’s also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models — including the famous BERT model for understanding language, which Google released in 2018 — have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named “large” version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it’s open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn’t possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock them. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

Massive progress with small data

So here’s the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply that model to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT — designed to understand generic language — and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we’ve only begun to realize the enormous potential of small data.
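
A minimal sketch of that pattern using the open source Hugging Face transformers library: load a pretrained BERT checkpoint and fine-tune it on a handful of labeled, domain-specific examples. The tiny dataset and intent labels here are hypothetical, not Moveworks’ actual training setup.

    # Sketch of transfer learning: fine-tune pretrained BERT on a few labeled examples.
    # The examples and labels are hypothetical.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2   # e.g. 0 = password reset, 1 = VPN issue
    )

    texts = ["i'm locked out of okta again", "the vpn keeps dropping on the call"]
    labels = torch.tensor([0, 1])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for _ in range(3):                      # a few passes over the small dataset
        outputs = model(**batch, labels=labels)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(outputs.loss.item())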


Above: Transfer learning leverages knowledge from a related domain — typically one with a greater supply of training data — to augment the small data of a given ML use case.

Image Credit: Moveworks

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called “few-shot learning,” a term that’s quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created — such as the landmark GPT-3 model and its 175 billion parameters, which is orders of magnitude more than BERT — have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its “tangential domain,” GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn’t need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise — ones for which training data is almost nonexistent.

The power of the collective

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:


Above: Collective learning involves abstracting data — in this case, sentences — with ML to uncover universal patterns and structures.

Image Credit: Moveworks

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers’ data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we’re witnessing the emergence of a new kind of workplace — one powered by machine learning on small data.


Big Data – VentureBeat


One True Connected Enterprise for Government Organizations

August 12, 2020   Microsoft Dynamics CRM

Government organizations are somewhat unique in that they do not exist to sell to or service customers so much as to provide services to citizens. In today’s blogpost, we’ll explore this industry from a technology perspective. Enjoy!

Common Challenges in Government Agencies

Disengaged Citizens – Constituents quickly disengage when their government organizations fail to deliver the ‘always-on’…

Source


PowerObjects- Bringing Focus to Dynamics CRM


Launch your product at Transform 2020 — The AI event for enterprise decision makers

June 27, 2020   Big Data

VentureBeat is on the lookout for disruptive AI companies of all sizes that are ready to present new tech products or services on the main stage at Transform 2020: Accelerating your business with AI, July 15-17 in San Francisco.

Transform is the first major virtual AI event for business decision makers this year, and will have more than a thousand attendees.

Companies selected for our AI Showcase will present in front of hundreds, if not thousands, of industry decision makers. Every presenter will receive coverage from VentureBeat, placing your company squarely in front of our growing reader base of over 6 million monthly readers.

To be sure, with just three weeks left, the Showcase is almost full, but we do have some slots left.


Who should apply? Dynamic companies that have a new product to unveil that is immediately relevant to leaders and practitioners in the areas of AI and ML. Ideally, the companies will have a compelling use case that can incorporate a product demo, multimedia, and other creative ways of presenting their technology or solution on stage.

In total, up to thirteen B2B and/or B2C candidates will be selected from our pool of applicants. We welcome companies of all sizes. If you have a story to tell, and an AI product or service with tangible business results and demonstrative use cases, please submit your application here.

Want to ensure your AI startup gets exposure at Transform? Be sure to also look into our Expo. Last year, over 50 innovative companies showcased their technology at our inaugural Transform Expo and networked with senior-level execs from some of the most notable brands and tech companies. This year, one expo participant will be voted to present on stage as part of the Tech Showcase. Apply here.


Big Data – VentureBeat


Dathena raises $12 million for AI that monitors and classifies sensitive enterprise data

May 14, 2020   Big Data

Dathena, a startup employing AI and machine learning to identify and protect sensitive enterprise data, today announced that it secured $12 million in equity financing. The fresh capital follows the opening of the company’s U.S. headquarters in New York City, and CEO Christopher Muffat says the funding will support Dathena’s R&D efforts and the hiring of stateside staff across sales, marketing, and customer success teams.

Dathena’s platform is designed to help companies protect against breaches while remaining in compliance with GDPR, the California Consumer Privacy Act (CCPA), and other such regulations. About 37% of businesses operating in the EU say they’ve had to hire at least six new employees to achieve GDPR compliance (according to Netsparker), and among those that have experienced a data breach, the average cost amounted to $3.6 million, or $141 for each record lost (according to the Ponemon Institute).

Five products form Dathena’s suite: Discovery, Privacy, Classify, Protect, and Detect.

Discovery and Privacy reveal what kind of files are stored throughout an organization, where they’re located, and who has access to them at any one time. In addition, they identify business-critical data, sort by toxicity and risk, and eliminate duplicates while mapping all personally identifiable data into a comprehensive backend library.


Classify filters data by business category and level of confidentiality, automatically labeling sensitive information and delivering visualizations of files to managers and administrators. It learns organizations’ documentation patterns for data in motion and at rest, achieving 80% accuracy out of the box and up to 99% after deployment, Dathena claims.

As for Protect and Detect, they ensure classification tags and keywords export to third-party information security systems and shed light on who, how, when, and where employees are creating and accessing data. Detect, for its part, provides continuous monitoring and spotlighting of data access and storage anomalies, as well as smart alerts and integrated workflows for IT team remediation.

Dathena competes with well-funded startups like OneTrust and Bitglass in the $120 billion data protection market, but it claims to have over 200,000 users and “a multitude” of enterprise clients. Muffat expects to see an uptick in inquiries in the coming months as remote work motivated by the pandemic gives rise to data protection challenges.

Jungle Ventures led this latest funding round (a series A), with participation from Caphorn and Enterprise Singapore’s SEEDS Capital and existing investors Cerracap Ventures and MS&AD Ventures. It follows a previously undisclosed seed round in November 2018.

Muffat, who worked with HSBC as a lead investigator and headed information risk management at Barclays, founded Dathena in 2016. More than 100 employees work out of its headquarters in Singapore and offices in Geneva.



Big Data – VentureBeat


4 Things You Need to Know for Successful Enterprise CRM Integration

May 4, 2020   CRM News and Info

The enterprise IT environment is complex. Many systems, technologies and practices that were developed at various times coexist in the same world. With expectations for technological advancements at their peak, we’re tasked with enabling these systems to work together harmoniously to support the continuous sharing of information.

Systems and data must connect to allow full use of capabilities as if all information were native to each. There also must be many ways to present information to end users, even as the underlying data constantly evolves. Given this complexity, it’s not surprising that an intelligent CRM system in an enterprise environment requires specialized insight and know-how to ensure a seamless integration that’s both relevant and current.

What does an enterprise intelligent CRM solution look like? Any smart solution first and foremost must be designed as an enterprise-first application: one that is flexible, scalable, upgradable and ultimately easily deployable.

To a great extent, success in the enterprise environment depends on the use of “enterprise literate technology,” as opposed to reliance on “technology literate enterprise” concepts that were common in the past.

Enterprise literate technology understands an organization’s business needs and is built with the standards of what the company needs to accomplish in mind. It can help predict outcomes, better facilitate operations, and even automate day-to-day tasks. With today’s functional and technical complexity, heterogeneous integration requirements, and need for scale and security, CRM integration success depends on an enterprise literate technology approach.

From an enterprise management standpoint, the first task is to find a platform designed for this purpose. The following four considerations are critical when sourcing intelligent CRM software for enterprise systems.

1. Flexibility and Context Matter

Enterprise CRM solutions should be tailored to meet business specific needs. The primary goal for any CRM solution should be a seamless user experience, delivered through a fully integrated single application that is responsive across all devices and platforms.

A good choice will offer a comprehensive customer view by integrating enterprise and third-party data sources through a customized, context-aware system. Systems like this automatically manage all communication and context passing between the CRM system and other third-party portlets needed to ensure everything works together as a single integrated system.

What’s more, enterprise intelligent CRM with built-in artificial intelligence has the ability to incorporate additional data sources to allow users to leverage available information effectively in order to better service customers.

From automating workflows to eliminating burdensome administrative tasks, the flexibility of an intelligent enterprise solution provides a deeper understanding of the customer, which helps with decision-making processes and simplifies day-to-day operations.

2. Is It Scalable?

CRM is more than just software. It’s an essential strategic growth tool that all enterprises need. One of the main motivators for any enterprise CRM provider should be to equip an organization with the simple tools needed in the early stages of a business, as well as the capabilities to scale up as the business expands and grows.

Implementing a reliable, fully scalable CRM system in the beginning as part of an overarching business strategy ensures seamless growth down the road. After all, a good foundation — which includes a company’s ability to scale up to meet the demands of growth and challenges as they arise — is fundamental to any organization’s success.

With an enterprise-first approach to CRM, organizations can rest assured that the simple coordination of all the different customer touchpoints in a business — from sales and marketing to face-to-face interactions with customers — will be facilitated effectively through one centralized platform as the business grows.

This is what makes ongoing collaboration, information sharing and task management simple and efficient. Keep in mind all of this must be done with a robust and complex security system, too. A truly scalable solution allows organizations to configure security to protect their customers and match their requirements as the business expands, while allowing information to be shared across the company dependably.

3. What are the Integration Capabilities? Is It Upgradable?

When CRM integration is an afterthought, failure is inevitable. Instead, the technology must allow for integration with a wide variety of client platforms, data sources and applications, since every organization has different needs.

Without integration for items such as social media and news feeds, for instance, it is next to impossible to take customer management and experience to the next level. For organizations and CRM solutions alike, a comprehensive understanding based on the client’s complete relationship with the company is essential for customers to receive the expected level of personalized, continuously evolving service.

Look for and integrate a system that understands the nature and concerns of enterprise operations — including its ability to upgrade as needed. Technical and functional challenges are to be expected, but a well-integrated system supports upgradability from the beginning.

Among other things, it is important to not force unnecessary data replication or compromises on data security, visibility and residency. Instead, organizations should deploy an enterprise-first way of thinking that provides configurable, world-class functionality along with the business controls needed to take operations to the next level.

4. Deployment Should Be Simple

A purpose-built enterprise CRM software system should make deployment simple. It should be compatible with a variety of open source and common software platforms. An intelligent platform takes the “how” out of the equation by translating processes easily. It should be easy to tailor it to specific business needs.

The user’s job should be simply to dictate what the system should do, not how it will be done. Both the functional and technical complexity should be handled by the software itself, which should be sophisticated enough to render information-sharing appropriately across all common Web applications and technology devices.

Easy deployment insulates a software investment from the shifting underlying technology landscape, and ultimately makes the development and maintenance of a CRM solution more manageable over time.

There is unmatchable value in deploying an enterprise CRM system that can be customized for an organization’s specific business needs to give the ROI and improved customer experience companies seek.

Integrating an enterprise intelligent CRM system can seem daunting, but it doesn’t have to be. Every business has a unique set of needs, and selecting robust software that allows for thoughtful integration of data and applications can improve operations and help businesses expand and grow, while making the transition seamless and mitigating unnecessary stress.


Adam Edmonds is VP of products at NexJ Systems, which specializes in CRM products for global financial services institutions. With almost two decades of experience developing customer management solutions in financial services and insurance, Edmonds is responsible for establishing overall product vision and designing easy-to-use solutions that solve financial advisors’ problems.


CRM Buyer
