
Tag Archives: already

AI models from Microsoft and Google already surpass human performance on the SuperGLUE language benchmark

January 6, 2021   Big Data



In late 2019, researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind proposed SuperGLUE, a new benchmark for AI designed to summarize research progress on a diverse set of language tasks. Building on the GLUE benchmark, which had been introduced one year prior, SuperGLUE includes a set of more difficult language understanding challenges, improved resources, and a publicly available leaderboard.

When SuperGLUE was introduced, there was a nearly 20-point gap between the best-performing model and human performance on the leaderboard. But as of early January, two models — one from Microsoft called DeBERTa and a second from Google called T5 + Meena — have surpassed the human baselines, becoming the first to do so.

Sam Bowman, assistant professor at NYU’s Center for Data Science, said the achievement reflected innovations in machine learning, including self-supervised learning, where models learn from unlabeled datasets with recipes for adapting the insights to target tasks. “These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago,” he said. “There’s no reason to believe that SuperGLUE will be able to detect further progress in natural language processing, at least beyond a small remaining margin.”

But SuperGLUE isn’t a perfect test, nor a complete one, of human language ability. In a blog post, the Microsoft team behind DeBERTa themselves noted that their model is “by no means” reaching the human-level intelligence of natural language understanding. They say this will require research breakthroughs, along with new benchmarks to measure them and their effects.

SuperGLUE

As the researchers wrote in the paper introducing SuperGLUE, their benchmark is intended to be a simple, hard-to-game measure of advances toward general-purpose language understanding technologies for English. It comprises eight language understanding tasks drawn from existing data and accompanied by a performance metric as well as an analysis toolkit.

The tasks are:

  • Boolean Questions (BoolQ) requires models to respond to a question about a short passage from a Wikipedia article that contains the answer. The questions come from Google users, who submit them via Google Search.
  • CommitmentBank (CB) tasks models with identifying a hypothesis contained within a text excerpt from sources including the Wall Street Journal and determining whether this hypothesis holds true.
  • Choice of Plausible Alternatives (COPA) provides a premise sentence about topics from blogs and a photography-related encyclopedia from which models must determine either the cause or effect from two possible choices.
  • Multi-Sentence Reading Comprehension (MultiRC) is a question-answer task where each example consists of a context paragraph, a question about that paragraph, and a list of possible answers. A model must predict which answers are true and false.
  • Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) has models predict masked-out words and phrases from a list of choices in passages from CNN and the Daily Mail, where the same words or phrases might be expressed using multiple different forms, all of which are considered correct.
  • Recognizing Textual Entailment (RTE) challenges natural language models to identify whether the truth of one text excerpt follows from another text excerpt.
  • Word-in-Context (WiC) provides models two text snippets and a polysemous word (i.e., word with multiple meanings) and requires them to determine whether the word is used with the same sense in both sentences.
  • Winograd Schema Challenge (WSC) is a task where models, given passages from fiction books, must answer multiple-choice questions about the antecedent of ambiguous pronouns. It’s designed to be an improvement on the Turing Test.
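
All eight tasks are distributed as ordinary labeled datasets. As an illustration (not part of the benchmark itself), here is a minimal Python sketch that pulls BoolQ through the Hugging Face datasets package, assuming that package and its public “super_glue” configuration are available:

from datasets import load_dataset

# BoolQ: yes/no questions, each paired with a Wikipedia passage.
boolq = load_dataset("super_glue", "boolq")
example = boolq["train"][0]
print(example["question"])  # a question submitted via Google Search
print(example["passage"])   # the Wikipedia passage containing the answer
print(example["label"])     # 1 = yes, 0 = no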

SuperGLUE also attempts to measure gender bias in models with Winogender Schemas, pairs of sentences that differ only by the gender of one pronoun in the sentence. However, the researchers note that Winogender has limitations in that it offers only positive predictive value: While a poor bias score is clear evidence that a model exhibits gender bias, a good score doesn’t mean the model is unbiased. Moreover, it doesn’t include all forms of gender or social bias, making it a coarse measure of prejudice.

To establish human performance baselines, the researchers drew on existing literature for WiC, MultiRC, RTE, and ReCoRD and hired crowdworker annotators through Amazon’s Mechanical Turk platform. Each worker, who was paid an average of $23.75 an hour, completed a short training phase before annotating up to 30 samples of selected test sets using instructions and an FAQ page.

Architectural improvements

The Google team hasn’t yet detailed the improvements that led to its model’s record-setting performance on SuperGLUE, but the Microsoft researchers behind DeBERTa detailed their work in a blog post published earlier this morning. DeBERTa isn’t new — it was open-sourced last year — but the researchers say they trained a larger version with 1.5 billion parameters (i.e., the internal variables that the model uses to make predictions). It’ll be released in open source and integrated into the next version of Microsoft’s Turing natural language representation model, which supports products like Bing, Office, Dynamics, and Azure Cognitive Services.

DeBERTa is pretrained through masked language modeling (MLM), a fill-in-the-blank task where a model is taught to use the words surrounding a masked “token” to predict what the masked word should be. DeBERTa uses both the content and position information of context words for MLM, such that it’s able to recognize that “store” and “mall” in the sentence “a new store opened beside the new mall” play different syntactic roles, for example.
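
Masked language modeling is easy to see in action. The sketch below uses the Hugging Face transformers fill-mask pipeline; the checkpoint name is an assumption, and any MLM-pretrained model with a [MASK] token would serve:

from transformers import pipeline

# Ask an MLM-pretrained model to fill in the blank from context.
# Model name is an assumption; swap in any fill-mask checkpoint.
fill = pipeline("fill-mask", model="microsoft/deberta-base")
for candidate in fill("A new store opened beside the new [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))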

Unlike some other models, DeBERTa accounts for words’ absolute positions in the language modeling process. Moreover, it computes the parameters within the model that transform input data and measure the strength of word-word dependencies based on words’ relative positions. For example, DeBERTa would understand the dependency between the words “deep” and “learning” is much stronger when they occur next to each other than when they occur in different sentences.
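
A toy calculation shows the flavor of this “disentangled” scoring, in which each token carries a content vector and a relative-position vector, and the attention score sums content-to-content, content-to-position, and position-to-content terms. Dimensions, initialization, and scaling here are illustrative, not the model’s exact formulation:

import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, max_rel = 6, 8, 4
content = rng.normal(size=(seq_len, dim))        # one content vector per token
rel_embed = rng.normal(size=(2 * max_rel, dim))  # shared relative-position table

def rel(i, j):
    # Clamp the signed distance i - j into the table's index range.
    return int(np.clip(i - j + max_rel, 0, 2 * max_rel - 1))

scores = np.zeros((seq_len, seq_len))
for i in range(seq_len):
    for j in range(seq_len):
        c2c = content[i] @ content[j]              # content-to-content
        c2p = content[i] @ rel_embed[rel(i, j)]    # content-to-position
        p2c = content[j] @ rel_embed[rel(j, i)]    # position-to-content
        scores[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * dim)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
print(attn.round(2))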

DeBERTa also benefits from adversarial training, a technique that leverages adversarial examples derived from small variations made to training data. These adversarial examples are fed to the model during the training process, improving its generalizability.
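
A minimal sketch of this idea, in the generic style of perturbing embeddings in the direction that most increases the loss; DeBERTa’s own recipe differs in detail, and model and loss_fn here are hypothetical stand-ins:

import torch

def adversarial_loss(model, embeddings, labels, loss_fn, epsilon=1e-2):
    # Clean pass, keeping the graph so the clean loss can also be backpropped.
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), labels)
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    # Nudge each embedding the way that most increases the loss, then
    # also train on the perturbed variant to improve generalizability.
    perturbed = (embeddings + epsilon * grad.sign()).detach()
    return clean_loss + loss_fn(model(perturbed), labels)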

The Microsoft researchers hope to next explore how to enable DeBERTa to generalize to novel compositions of familiar subtasks or basic problem-solving skills, a concept known as compositional generalization. One path forward might be incorporating so-called compositional structures more explicitly, which could entail combining AI with symbolic reasoning; in other words, manipulating symbols and expressions according to mathematical and logical rules.

“DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI,” the Microsoft researchers wrote. “[But unlike DeBERTa,] humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration.”

New benchmarks

According to Bowman, no successor to SuperGLUE is forthcoming, at least not in the near term. But there’s growing consensus within the AI research community that future benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they’re to be useful.

For example, a number of studies show that popular benchmarks do a poor job of estimating real-world AI performance. One recent report found that 60%-70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
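
The leakage finding is easy to picture: if test answers occur verbatim in the training text, accuracy can reflect memorization rather than understanding. A crude sketch of such a check, with invented data:

train_corpus = " ".join([
    "the eiffel tower is in paris",
    "water boils at 100 degrees celsius",
]).lower()
test_answers = ["Paris", "100 degrees celsius", "Mount Everest"]
# Count answers that already appear verbatim in the training text.
leaked = [a for a in test_answers if a.lower() in train_corpus]
print(f"{len(leaked)}/{len(test_answers)} answers found in training data")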

Part of the problem stems from the fact that language models like OpenAI’s GPT-3, Google’s T5 + Meena, and Microsoft’s DeBERTa learn to write humanlike text by internalizing examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs.

As a result, language models often amplify the biases encoded in this public data; a portion of the training data is not uncommonly sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

Most existing language benchmarks fail to capture this. Informed by the findings of the two years since SuperGLUE’s introduction, perhaps future ones will.


Big Data – VentureBeat


Biden All-Female Communications Team Won’t Tell Nation What’s Wrong, Nation Should Already Know

December 2, 2020   Humor

WASHINGTON, D.C.—Biden’s transition team has announced they will be appointing an all-female communications team. According to sources, the team will not tell the nation what’s wrong, since the nation should already know.

“It’s fine. Everything’s fine. Nothing’s wrong, OK!?” said Jen Psaki in her first press conference as a part of Biden’s team. “Why would you think I’m not fine? Ugh… if you have to ask, I’m not going to tell you.”

Insiders close to Biden say the communications team will hold periodic press conferences where they will just glare at reporters with an icy look and make them try to guess what’s wrong. If the reporters fail to understand their highly advanced non-verbal communication, they will smile sweetly and walk out of the room before slamming the door as hard as they can.

“This is a huge step for this country,” said Communication Director Kate Bedingfield to reporters. “We need to move beyond archaic and male-centric methods of communication that use things like clear language and written words. We hope this will help deepen the country’s level of intimacy with the Biden administration and open up new channels of understanding and communication.”

The press has been frantically buying flowers, chocolates, and jewelry for the communications team in hopes of receiving some clue as to what the heck is going on. The team responded by rolling their eyes and going to bed early due to a really bad headache.


ANTZ-IN-PANTZ ……


July 10, 2018 Windows updates cause SQL startup issues due to “TCP port is already in use” errors

July 29, 2018   BI News and Info

We have recently become aware of a regression in one of the TCP/IP functions that manage the TCP port pool, introduced in the July 10, 2018 Windows updates for Windows 7/Server 2008 R2 and Windows 8.1/Server 2012 R2.

This regression may cause the restart of the SQL Server service to fail with the error, “TCP port is already in use”. We have also observed this issue preventing Availability Group listeners from coming online during both planned and unexpected failover events. When this occurs, you may observe errors similar to below in the SQL ERRORLOGs:

Error: 26023, Severity: 16, State: 1.
Server TCP provider failed to listen on [ <IP ADDRESS> <ipv4> <PORT>]. Tcp port is already in use.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x2740, status code 0xa. Reason: Unable to initialize the TCP/IP listener. Only one usage of each socket address (protocol/network address/port) is normally permitted.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x2740, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. Only one usage of each socket address (protocol/network address/port) is normally permitted.
Error: 17826, Severity: 18, State: 3.
Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
Error: 17120, Severity: 16, State: 1.
SQL Server could not spawn FRunCommunicationsManager thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

If the issue is impacting an Availability Group listener, you may also observe the below error in addition to the above:

Error: 26075, Severity: 16, State: 1.
Failed to start a listener for virtual network name ‘<LISTENER NAME>’. Error: 10048.

Additionally, you may also observe the following errors in the Windows System logs:

The SQL Server (<INSTANCE NAME>) service entered the stopped state.
The SQL Server (<INSTANCE NAME>) service terminated with the following service-specific error:  Only one usage of each socket address (protocol/network address/port) is normally permitted.

And if the instance is part of a cluster:

Cluster resource ‘SQL Server (<INSTANCE NAME>)’ of type ‘SQL Server’ in clustered role ‘SQL Server (<INSTANCE NAME>)’ failed. Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
The Cluster service failed to bring clustered role ‘SQL Server (<INSTANCE NAME>)’ completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered role.

It is also possible for this issue to impact the creation of a new Availability Group listener. In such scenarios, you may encounter an error like below from SQL Server Management Studio:

The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [<LISTENER NAME>:<PORT>]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the “ALTER AVAILABILITY GROUP (Transact-SQL)” topic in SQL Server Books Online. (Microsoft SQL Server, Error: 19486)

For this scenario, you may see errors similar to below in the SQL ERRORLOGs:

Error: 19476, Severity: 16, State: 4.
The attempt to create the network name and IP address for the listener failed. If this is a WSFC availability group, the WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. Otherwise, contact your primary support provider.
The Service Broker endpoint is in disabled or stopped state.
Error: 26023, Severity: 16, State: 1.
Server TCP provider failed to listen on [ <IP ADDRESS> <PORT>]. Tcp port is already in use.
Error: 26075, Severity: 16, State: 1.
Failed to start a listener for virtual network name ‘<LISTENER NAME>:’. Error: 10048.
Stopped listening on virtual network name ‘<LISTENER NAME>:’. No user action is required.
Error: 10800, Severity: 16, State: 1.
The listener for the WSFC resource ‘<RESOURCE GUID>’ failed to start, and returned error code 10048, ‘Only one usage of each socket address (protocol/network address/port) is normally permitted.‘. For more information about this error code, see “System Error Codes” in the Windows Development Documentation.
Error: 19452, Severity: 16, State: 1.
The availability group listener (network name) with Windows Server Failover Clustering resource ID ‘<RESOURCE GUID>’, DNS name ‘<LISTENER NAME>’, port <PORT> failed to start with a permanent error: 10048. Verify port numbers, DNS names and other related network configuration, then retry the operation.

Solution:

The Windows team has already released hotfixes to address this issue and we have had multiple customers already confirm that these hotfixes have resolved issues related to this regression. The below tables list the KB articles for the patches that introduced the regression and the KB articles for their correlating hotfixes.

For Windows 7/Server 2008 R2

For Windows Server 2012

For Windows 8.1/Server 2012 R2

You can choose to install either of the applicable KBs that fix the regression in order to resolve issues with the SQL service or Availability Group listeners failing to start or come online with “TCP port is already in use” errors caused by this regression. For example, if your system has KB4338815, you can install either KB4338831 or KB4345424 to fix the regression. The difference between the two is that KB4345424 provides only the fix for the regression, whereas KB4338831 includes all of the fixes from KB4338815 as well as some additional quality improvements as a preview of the next Monthly Rollup update (which includes the fix for the regression).

In addition to the monthly rollup/security-only updates mentioned above, this regression was also introduced in updates for specific Windows 10/Server 2016 builds. Please note that the build-specific updates do not have a correlating hotfix-only patch; therefore, each build has only one applicable patch to address the regression, as noted in the table below.

KB that introduced the regression                KB that fixes the regression
July 10, 2018—KB4338819 (OS Build 17134.165)     July 16, 2018—KB4345421 (OS Build 17134.167)
July 10, 2018—KB4338825 (OS Build 16299.547)     July 16, 2018—KB4345420 (OS Build 16299.551)
July 10, 2018—KB4338826 (OS Build 15063.1206)    July 16, 2018—KB4345419 (OS Build 15063.1209)
July 10, 2018—KB4338814 (OS Build 14393.2363)    July 16, 2018—KB4345418 (OS Build 14393.2368)
July 10, 2018—KB4338829 (OS Build 10240.17914)   July 16, 2018—KB4345455 (OS Build 10240.17918)

There can be other causes of “TCP port is already in use” errors that prevent SQL resources from starting or coming online and that are not due to the regression mentioned above. If you are encountering similar errors but do not have the July 10, 2018 updates installed on your system, or you already have the fix installed, then you may find our colleague Chris Thompson’s blog – https://blogs.msdn.microsoft.com/sql_pfe_blog/2016/10/05/tcp-port-is-already-in-use/ – useful in identifying whether any other process(es) may be using the port meant for your SQL instance(s).
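
As a quick first check, you can simply test whether the listener port can still be bound. A minimal Python sketch; the address and port are assumptions, so substitute your instance’s listener settings:

import socket

def port_is_free(host="0.0.0.0", port=1433):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True   # bind succeeded: nothing currently owns the port
        except OSError:
            return False  # e.g. error 10048, "address already in use"

if not port_is_free():
    print("Port 1433 is in use; find the owning process (netstat -abno) "
          "before restarting SQL Server.")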


CSS SQL Server Engineers


Why You Should Already Have a Data Governance Strategy

June 19, 2018   Sisense

Garbage in, garbage out. This motto has been true ever since punched cards and teletype terminals. Today’s sophisticated IT systems depend just as much on good quality data to bring value to their users, whether in accounting, production, or business intelligence. However, data doesn’t automatically format itself properly, any more than it proactively tells you where it’s hiding or how it should be used. No, data just is. If you want your business data to satisfy criteria of availability, usability, integrity, and security, you need a data governance strategy.

Data governance in general is an overarching strategy for organizations to ensure the data they use is clean, accurate, usable, and secure. Data stakeholders from business units, the compliance department, and IT are best positioned to lead data governance, although the matter is important enough to warrant CEO attention too. Some organizations go as far as appointing a Data Governance Officer to take overall charge. The high-level goal is to have consistent, reliable data sets to evaluate enterprise performance and make management decisions.

Ad-hoc approaches are likely to come back to haunt you. Data governance has to become systematic, as big data multiplies in type and volume, and users seek to answer more complex business questions. Typically, that means setting up standards and processes for acquiring and handling data, as well as procedures to make sure those processes are being followed. If you’re wondering whether it’s all worth it, the following five reasons may convince you.


Reason 1: Ensure data availability

Even business intelligence (BI) systems won’t look very smart if users cannot find the data needed to power them. In particular, self-service BI means that the data must be easy enough to locate and to use. After years of hearing about the sinfulness of organizational silos, it should be clear that even if individual departments “own” data, the governance of that data must be done in the same way across the organization. Authorization to use the data may be restricted, as in the case of sensitive customer data, but users should not be unaware of its existence when it could help them in their work.

Availability is also a matter of having appropriate data that is easy enough to use. With a trend nowadays to store unstructured data from different sources in non-relational databases or data lakes, it can be difficult to know what kind of data is being acquired and how to process it. Data governance is therefore a matter of first setting up data capture to acquire what your enterprise and its different departments need, rather than everything under the sun. Governance then also ensures that data schemas are applied to organize data when it is stored, or that tools are available for users to process data, for example to run business analytics from non-relational (NoSQL) databases.
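
For illustration, schema enforcement at ingestion can be as simple as validating each incoming record against a declared layout before it lands where analysts will find it. The field names and rules below are invented:

# A minimal schema-on-read sketch: records landing in a data lake are
# checked against a declared schema before analysts touch them.
EXPECTED = {"customer_id": int, "region": str, "order_total": float}

def conforms(record: dict) -> bool:
    return (record.keys() == EXPECTED.keys()
            and all(isinstance(record[k], t) for k, t in EXPECTED.items()))

raw = [
    {"customer_id": 17, "region": "EMEA", "order_total": 99.5},
    {"customer_id": "17", "region": "EMEA"},   # wrong type, missing field
]
clean = [r for r in raw if conforms(r)]
print(f"kept {len(clean)} of {len(raw)} records")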

Reason 2: Ensure users are working with consistent data

When the CFO and the COO work from different sets of data and reach different conclusions about the same subjects, things are going to be difficult. The same is true at all other levels in an enterprise. Users must have access to consistent, reliable data, so that comparisons make sense and conclusions can be checked. This is already a good reason for making sure that data governance is driven across the organization, by a team of executives, managers, and data stewards with the knowledge and authority to make sure the same rules are followed by all.

Global data governance initiatives may also grow out of attempts to improve data quality at departmental levels, where individual systems and databases were not planned for information sharing. The data governance team must deal with such situations, for instance, by harmonizing departmental information resources. Increased consistency in data means fewer arguments at executive level, less doubt about the validity of data being analyzed, and higher confidence in decision making.

Reason 3: Determine which data to keep and which to delete

The risks of data hoarding are the same as those of physical hoarding. IT servers and storage units full of useless junk make it hard to locate any data of value or to do anything useful with it afterwards. Users end up basing important business decisions on stale or irrelevant data, IT department expenses mushroom, and vulnerability to data breaches increases. The problem is unfortunately common: 33% of the data stored by organizations is simply ROT (redundant, obsolete, or trivial), according to the Veritas Data Genomics Index 2017 survey.

Yet things don’t have to be that way. Most data does not have to be kept for decades, “just in case.” As an example, retailing leader Walmart uses only the last four weeks’ transactional data for its daily merchandising analytics. It is part of good data governance strategy to carefully consider which data is important to the organization and which should be destroyed. Data governance also includes procedures for employees to make sure data is not unnecessarily duplicated, as well as policies for systematic data retirement (for instance, for archiving or destruction) according to age or other pertinent criteria.
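
As a sketch of such a retirement policy, the snippet below splits records into active and retired sets by age, along the lines of the four-week window attributed to Walmart above; the record layout is invented:

from datetime import datetime, timedelta

RETENTION = timedelta(weeks=4)

def retire(records, now=None):
    # Partition records into those still inside the retention window
    # and those due for archiving or destruction.
    now = now or datetime.utcnow()
    keep, archive = [], []
    for rec in records:
        (keep if now - rec["created"] <= RETENTION else archive).append(rec)
    return keep, archive

records = [
    {"id": 1, "created": datetime.utcnow() - timedelta(weeks=1)},
    {"id": 2, "created": datetime.utcnow() - timedelta(weeks=9)},
]
active, retired = retire(records)
print(len(active), "active,", len(retired), "to archive or destroy")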

Reason 4: Resolve analysis and reporting issues

An important dimension in data governance is the consistency across an organization of its metrics, as well as the data driving them. Without clearly recorded standards for metrics, people may use the same word, yet mean different things. Business analytics are a case in point, when analytics tools vary from one department to another. Self-service analytics or business intelligence can be a boon to an enterprise, but only if people interpret metrics and reports in a consistent way.

When reports lack clarity, the temptation is often to blame technology. The root cause, however, is often the misconfiguration of the tools and systems involved. It may even be their faulty application, as in the case of reporting tools being wrongly applied to production databases, triggering performance problems that mean neither transactions nor analytics are satisfactorily accomplished. Ripping out and replacing fundamentally sound systems is not the solution. Instead, improved data governance brings more benefit, faster, and for far less cost.

Reason 5: Security and compliance with laws concerning data governance

Consequences for non-compliance with data regulations can be enormous, especially where private individuals’ information is concerned. A case in point: the European General Data Protection Regulation (GDPR), effective May 2018, sets non-compliance fines of up to some $22 million (€20 million) or four percent of the offender’s worldwide turnover, whichever is higher, for data misuse or breach affecting European citizens.

Effective data governance helps an organization to avoid such issues, by defining how its data is to be acquired, stored, backed up, and secured against accidents, theft, or misuse. These definitions also include provision for audits and controls to ensure that the procedures are followed. Realistically, organizations will also conduct suitable awareness campaigns to make sure that all employees working with confidential company, customer, or partner data understand the importance of data governance and its rules. Education and awareness campaigns will become increasingly important as user access to self-service solutions increases, as will the levels of data security already inherent in those solutions.

Conclusion

If you think about data as a strategic asset, the idea of governance becomes natural. Company finances must be kept in order with the necessary oversight and audits, workplace safety must be guaranteed and respect the relevant regulations, so why should data – often a key differentiator and a confidential commodity – be any different? As IT self-service and end-user empowerment grow, the importance of good data governance increases too. Business user autonomy in spotting trends and taking decisions can help an enterprise become more responsive and competitive, but not if it is founded on data anarchy.

Effective data governance is also a continuing process. Policy definition, review, adaptation, and audit, together with compliance reviews and quality control, are all regularly performed or repeated as a data governance life cycle. As such, data governance is never finished, because new sources, uses, and regulations about data are never finished either. For contexts such as business intelligence, especially in a self-service environment, good data governance helps users to use the right data in the right way, to generate business insights correctly and take sound business decisions.


Tags: Data Analysis | Data Governance


Blog – Sisense


Technology vs Humanity – The Future is already here. A film by…

May 17, 2017   Big Data

[unable to retrieve full-text content]



Technology vs Humanity – The Future is already here. A film by Futurist …

Privacy, Big Data, Human Futures by Gerd Leonhard


SuiteWorld Software Keynote: If it’s not already, ASC 606 should be your top NEXT

May 6, 2017   NetSuite

Posted by Barney Beal, Content Director

The three primary forces affecting software companies today are the demand for growth, evolving business models and regulatory changes, NetSuite SVP of Sales Marc Huffman said onstage at the software industry keynote at last week’s SuiteWorld 2017 conference.


Judging from the rest of the keynote, it might be that last item that’s most pressing. While investors continue to demand fast growth from the software companies they’ve backed, and business models keep evolving from product-based to services-based to subscriptions and every combination thereof, regulatory compliance, specifically around ASC 606, took center stage.

Perhaps nowhere was that made clearer than in comments from Scott Davidson, CFO of Hortonworks, a NetSuite customer, and Prasad Cadambi, a partner at KPMG who served with the Financial Accounting Standards Board when the ASC 606 rules were written.

“You need to jump into ASC 606 ASAP. You’re not going to get another deferral,” Cadambi said, noting that previous decisions to delay the implementation to 2018 for public companies and 2019 for private companies are unlikely to happen again.

“When you say get started ASAP, I would say that might even be too late,” said Davidson. Hortonworks began preparing for ASC 606 in the third quarter of last year.

Yet, many businesses may in fact be too late. An informal poll of NetSuite customers found that 60 percent have not begun to prepare for ASC 606 according to Huffman.

But, there’s good news. NetSuite has built out capabilities to account for the accounting changes in ASC 606. Notably, NetSuite has built revenue recognition and billing system software together in the same engine, according to William Schonbrun, director of product marketing.

“Some things are inherently meant to stay together — billing and rev rec should not be separated,” he said. “I’ve heard some of the smartest people in software saying keeping these apart and having a lost in translation moment in rev rec is reckless. And I agree. You have an unfair advantage with NetSuite. Take it.”
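
For readers unfamiliar with the mechanics, one core ASC 606 step is allocating a bundled contract’s transaction price across its performance obligations in proportion to their standalone selling prices, which is exactly where billing and rev rec data meet. A minimal illustrative sketch with invented figures (this is not NetSuite’s engine):

def allocate(contract_price, standalone_prices):
    # Split the contract price pro rata by standalone selling price.
    total = sum(standalone_prices.values())
    return {item: round(contract_price * p / total, 2)
            for item, p in standalone_prices.items()}

# A $100k bundle of license, support, and services sold together.
print(allocate(100_000, {"license": 70_000, "support": 30_000, "services": 20_000}))
# {'license': 58333.33, 'support': 25000.0, 'services': 16666.67}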

Yet, adapting to ASC 606 is going to take more than software.

“People call it an accounting impact,” KPMG’s Cadambi said. “Think of this more as a business model change and what you need to do.”

ASC 606 will have an impact on areas outside of accounting, notably investor relations and sales compensation. Many large software companies are already preparing, Cadambi said, including Microsoft.

“Some very large companies are very nervous,” he said. “I am in charge of 65 companies in my office and we are really behind.”

Davidson said Hortonworks is taking a detailed, methodical approach to preparing for ASC 606 in three phases. The first phase was a gap analysis determining where 605 and 606 differed for the company and documenting those processes. Phase two focused on existing systems and processes like sales ops, sales compensation, and planning and forecasting. Phase three will be the deployment, which Davidson expects to happen before Dec. 15.

It has not been without its challenges. Like many software businesses, Hortonworks has some intricacies specific to its business. Because it works with Hadoop, the open source software for analyzing Big Data, its engineering team gets involved as well.

“Our engineering team deals with customers,” Davidson said. “We co-develop some technology with our customers. It feels like a support arrangement but there’s deliverables with work and materials. The pricing can look a lot different from traditional SaaS.”

Davidson also urged software companies to work with auditors.

“It’s going to have a big impact on forecasting, how you deal with investors and the board,” he said. “A high level of communication for change management is critical. Start last year.”

For more on ASC 606, download a demo on preparing for the new rev rec with NetSuite.

Posted on Fri, May 5, 2017 by NetSuite


The NetSuite Blog


“The Novel Was Already Dying At An Alarming Rate Without My Assistance”

January 28, 2016   Humor


There’s never been greater access to books than there is right now, but all progress comes with a price. If print fiction and histories and such should disappear or become merely a luxury item, digital media would change the act of reading in unexpected ways over time.

Some see screen reading promoting a decline in analytical skills, but the human brain sure seems able to adapt to new forms once it becomes acclimated. Even as someone raised on paper books, I’m not worried that what’s lost in translation will be greater than what’s gained. Of course, I say that while still primarily using dead-tree volumes.

In a smart BBC Future article, Rachel Nuwer traces the fuzzy history of e-books and considers the future of reading. Some experts she interviews hope for a “bi-literate” society that values both the paperback and the Kindle. That would be a great outcome, but I don’t know how realistic a scenario it is. The opening:

When Peter James published his novel Host on two floppy disks in 1993, he was ill-prepared for the “venomous backlash” that would follow. Journalists and fellow writers berated and condemned him; one reporter even dragged a PC and a generator out to the beach to demonstrate the ridiculousness of this new form of reading. “I was front-page news of many newspapers around the world, accused of killing the novel,” James told pop.edit.lit. “[But] I pointed out that the novel was already dying at an alarming rate without my assistance.”

Shortly after Host’s debut, James also issued a prediction: that e-books would spike in popularity once they became as easy and enjoyable to read as printed books. What was a novelty in the 90s, in other words, would eventually mature to the point that it threatened traditional books with extinction. Two decades later, James’ vision is well on its way to being realised.

That e-books have surged in popularity in recent years is not news, but where they are headed – and what effect this will ultimately have on the printed word – is unknown. Are printed books destined to eventually join the ranks of clay tablets, scrolls and typewritten pages, to be displayed in collectors’ glass cases with other curious items of the distant past?

And if all of this is so, should we be concerned?•


Afflictor.com


Cancelling a Bulk Deletion Job that has Already Started

September 2, 2015   Microsoft Dynamics CRM

Sometimes when you are working in Dynamics CRM, you come across instances where you need to run bulk deletions. Bulk deletion jobs can take several hours, so they are handled asynchronously by the CRM Async service to ensure that the overall performance of CRM is not affected while the bulk deletion is in progress. Executing a bulk deletion is an easy process, but sometimes the user needs to stop a deletion job that has already started. What do you do then? The good news is that you can absolutely cancel a bulk deletion even after it has started, and today we are going to show you exactly how!

Say you kick off a bulk deletion job and then realize that, oops, you did something wrong and you need to stop or cancel it immediately. Once the job has an In Progress status, it cannot be cancelled. CRM will display the error message shown below if the user tries to cancel the bulk deletion job while it’s in the In Progress status.

[Screenshots: the error dialog CRM displays when you try to cancel an In Progress bulk deletion job]

To stop or cancel a job in progress, follow these simple steps:

1. Stop the Async service. To do this, you will need to have access to the CRM Asynchronous Service Box and should be an administrator to stop, start or restart the asynchronous processing service.

  • Open Services from Control Panel -> Administrative Tools -> Services.
  • Find Microsoft Dynamics CRM Asynchronous Processing Service, select it, and then click Stop on the ribbon.

[Screenshot: stopping the Microsoft Dynamics CRM Asynchronous Processing Service in the Services console]

*This works for on-premises and partner hosted deployments only and not for CRM online.

2. The job status will then change to Waiting for Resources.

3. While the job is in the Waiting for Resources status, select the job you wish to stop and Cancel it via the More Actions dropdown menu as shown below.

[Screenshot: cancelling the job from the More Actions dropdown menu]

4. To finish, start the Async service up again.

And that’s all it takes! Please note that these steps only work for on-premise and partner-hosted deployments and are not applicable for CRM Online.
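
For administrators who do this often, steps 1 and 4 can be scripted with the built-in Windows sc tool. A minimal sketch, assuming the service’s internal name is MSCRMAsyncService (confirm it in the Services console first); as noted above, this applies to on-premises and partner-hosted deployments only:

import subprocess

def set_async_service(action):  # action is "stop" or "start"
    # Drives the Windows service manager; requires an elevated prompt.
    subprocess.run(["sc", action, "MSCRMAsyncService"], check=True)

set_async_service("stop")
# ... cancel the bulk deletion job while it is Waiting for Resources ...
set_async_service("start")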

That’s all for today, readers. For more tips and tricks on working with Dynamics CRM, subscribe to our blog, and if you love learning all about CRM, consider attending this year’s PowerUp event November 10—11 in Minneapolis, MN. Featuring over 60 breakout sessions, PowerUp welcomes attendees with all levels of experience. PowerUp is the CRM event of the year and you won’t want to miss it!

Until next time, happy CRM’ing!



PowerObjects- Bringing Focus to Dynamics CRM


Artificial intelligence: don’t fear AI. It’s already on your…

June 29, 2015   BI News and Info

Artificial intelligence: don’t fear AI. It’s already on your phone – and useful
Charles Arthur, theguardian.com

When Joe Weizenbaum found his secretary using a computer program he had created, he was so upset he devoted the rest of his life to warning people not to use its technology. The program was “Eliza”, which gives a passable imitation of a…


A Smarter Planet


The Internet of Things is already a $2 billion business for…

January 28, 2015   BI News and Info

The Internet of Things is already a $2 billion business for Intel
By Vlad Savov, theverge.com

There was no escaping the Internet of Things at CES 2015; it was the omnipresent theme of the expo, but don’t let that fool you into thinking it’s a far-off future concept. Intel’s latest earnings report demonstrates that the IoT age is already up…


A Smarter Planet
