
Tag Archives: Slow

another slow news day that needs retroactive continuity, so Matt Gaetz is not a witch

November 18, 2019   Humor


Some civil disobedience acts are better than others, like storming a SCIF, because to Gaetz, some kayfabe stunts are more legal than others during an impeachment inquiry. On the bright side, at least Gaetz didn’t melt or burst into flames.

Kondrat’yev, 35, in August pleaded guilty to assaulting a U.S. Congress member. She faces up to a year in prison, a $100,000 fine or five years of probation.

She admitted to throwing a cup of red liquid at Gaetz while he was leaving a Pensacola restaurant following his “Open Gaetz” event on June 1.

Kondrat’yev was part of a group of protesters outside the event, and she reportedly held a sign that read “Gaetz wipe the blood off your hands, A+ rating NRA, save our kids vote Gaetz out 2020,” referring to the congressman’s rating by the National Rifle Association, Roll Call reported at the time.

If Kondrat’yev had used glitter, would there be a need to rewrite the narrative?

There’s a concept in pop culture called “retconning,” short for retroactive continuity. The idea is that people working on continuations of existing stories — such as the people making a new “Star Wars” film, for example — might want to go in a new direction that breaks from the established narrative. To solve the dilemma, they might, say, include a scene showing an established character from a past film suddenly revealing a new family member. Change what’s known and head in a new direction.

On Tuesday evening, with public impeachment hearings looming, President Trump’s personal attorney Rudolph W. Giuliani attempted to retcon the Ukraine story. In an essay published by the Wall Street Journal, Giuliani attempted to fit an exculpatory narrative into what’s known about how Trump tried to pressure Ukraine into announcing new investigations that would benefit him politically.

Giuliani offered a unique description of Trump’s call with Ukrainian President Volodymyr Zelensky.

“In particular, Messrs. Zelensky and Trump discussed Ukrainian interference in the 2016 U.S. presidential election,” Giuliani writes. “A Ukrainian court ruled in December last year that the National Anti-Corruption Bureau and Ukrainian lawmaker Serhiy Leshchenko illegally interfered in the 2016 election by releasing documents related to Paul Manafort.” Giuliani goes on to riff on a 2017 Politico article that has become to the Ukraine scandal what Carter Page was to the Russia probe.

www.washingtonpost.com/…

Retconning (sic) resembles revisionist history, because the news needs pizzazz.

Trumpism has become the vehicle for logical fallacy.


But seriously, there’s just enough steak but not enough sizzle for a meatless Friday that will feature another witness to the crimes of Amigos and Shreks.


Journalists are supposed to position us as a public, and then specialize in verified fact. But here, it’s the House Democrats treating people as a public, and presenting verified facts, while Reuters journalists position their readers as an audience craving a jolt. https://t.co/gVqlZoiNJA

— Jay Rosen (@jayrosen_nyu) November 14, 2019


Analysis: Nuremberg Trials lay out disturbing details, but lack pizzazz necessary to capture public attention. Unlike the Blitz, which, like, totally persuaded people to get out of bed and pay attention to the nightly news.
– @jonallendc, Nov 20th, 1945, probably.

— Elie Mystal (@ElieNYC) November 14, 2019


My final tweet on this tweet. Framing like this makes me realize we learn nothing. Time is a flat circle. The rest of us are in Dante’s Inferno or simply Cassandra tweeting in the void. Also, the rest of us just have to work harder & be better for 2020. https://t.co/eRb4DSbdN3

— Wajahat Ali (@WajahatAli) November 14, 2019


I don’t understand the analysis stories calling today’s hearing boring. Most of the information was already known from the closed-door hearings. Hearings are virtually never “fun” or “exciting.” And it’s a pretty unsophisticated take from anyone who does Washington for a living.

— Jennifer Epstein (@jeneps) November 14, 2019

GOP wants moar cowbell in the impeachment inquiry, thus the demand for more witnesses like the whistleblower


Glad to see the White House believes the country deserves to hear from witnesses who’ve spoken with Trump. So I assume the White House will reverse itself and encourage Giuliani and Bolton and Mulvaney to testify. https://t.co/qdYb4XGD3f

— Bill Kristol (@BillKristol) November 13, 2019


What happened in the five months since Zelensky got elected, Ukraine aid was delayed, Trump and Rudy involvement and when the hold was eventually released after the controversy got Hill attention. The key dates: https://t.co/RtipsW5NZ0

— Manu Raju (@mkraju) November 14, 2019


This is mob speak. “Insurance” means that he has dirt on Trump.

This is how EVERYTHING works in their world. Get leverage over others to ensure loyalty.

This works great….until they are all cornered and start setting fire to each other.

— Frederick C. Trump (@TrumpFrederick) November 14, 2019

moranbetterDemocrats

Artificial stupidity: ‘Move slow and fix things’ could be the mantra AI needs

October 6, 2019   Big Data

“Let’s not use society as a test-bed for technologies that we’re not sure yet how they’re going to change society,” warned Carly Kind, director at the Ada Lovelace Institute, an artificial intelligence (AI) research body based in the U.K. “Let’s try to think through some of these issues — move slower and fix things, rather than move fast and break things.”

Kind was speaking as part of a recent panel discussion at Digital Frontrunners, a conference in Copenhagen that focused on the impact of AI and other next-gen technologies on society.

The “move fast and break things” ethos embodied by Facebook’s rise to internet dominance is one that has been borrowed by many a Silicon Valley startup: develop and swiftly ship an MVP (minimum viable product), iterate, learn from mistakes, and repeat. These principles are relatively harmless when it comes to developing a photo-sharing app, social network, or mobile messaging service, but in the 15 years since Facebook came to the fore, the technology industry has evolved into a very different beast. Large-scale data breaches are a near-daily occurrence, data-harvesting on an industrial level is threatening democracies, and artificial intelligence (AI) is now permeating just about every facet of society — often to humans’ chagrin.

Although Facebook officially ditched its “move fast and break things” mantra five years ago, it seems that the crux of many of today’s technology problems comes down to the fact that companies have moved (and continue to move) too fast — “full-steam ahead, and to hell with the consequences.”

‘Artificial stupidity’


Above: 3D rendering of robots speaking no evil, hearing no evil, seeing no evil.

Image Credit: Getty Images / Westend61

This week, news emerged that Congress has been investigating how facial recognition technology is being used by the military in the U.S. and abroad, noting that the technology is just not accurate enough yet.

“The operational benefits of facial recognition technology for the warfighter are promising,” a letter from Congress read. “However, overreliance on this emerging technology could also have disastrous consequences if faulty or inaccurate facial scans result in the inadvertent targeting of civilians or the compromise of mission requirements.”

The letter went on to note that the “accuracy rates for images depicting black and female subjects were consistently lower than for those of white and male subjects.”

While there are countless other examples of how far AI still has to go in terms of addressing biases in the algorithms, the broader issue at play here is that AI just isn’t good or trustworthy enough across the spectrum.

“Everyone wants to be at the cutting edge, or the bleeding edge — from universities, to companies, to government,” said Dr. Kristinn R. Thórisson, an AI researcher and founder of the Icelandic Institute for Intelligent Machines, speaking in the same panel discussion as Carly Kind. “And they think artificial intelligence is the next [big] thing. But we’re actually in the age of artificial stupidity.”

Thórisson is a leading proponent of what is known as artificial general intelligence (AGI), which is concerned with integrating disparate systems to create a more complex AI with humanlike attributes, such as self-learning, reasoning, and planning. Depending on who you ask, AGI is coming in 5 years, it’s a long way off, or it’s never happening — Thórisson, however, evidently does believe that AGI will happen one day. When that will be, he is not so sure — but what he is sure of is that today’s machines are not as smart as some may think.

“You use the word ‘understanding’ a lot when you’re talking about AI, and it used to be that people put ‘understanding’ in quotation marks when they talked about it in the context of AI,” Thórisson said. “When it comes down to it, these machines don’t really understand anything, and that’s the problem.”

For all the positive spins on how amazing AI now is in terms of trumping humans at poker, AlphaGo, or Honor of Kings, there are numerous examples of AI fails in the wild. By most accounts, driverless cars are nearly ready for prime time, but there is other evidence to suggest that there are still some obstacles to overcome before they can be left to their own devices.

For instance, news emerged this week that regulators are investigating Tesla’s recently launched automated Smart Summon feature, which allows drivers to remotely beckon their car inside a parking lot. In the wake of the feature’s official rollout last week, a number of users posted videos online showing crashes, near-crashes, and a general comical state of affairs.

So, @elonmusk – My first test of Smart Summon didn’t go so well. @Tesla #Tesla #Model3 pic.twitter.com/yC1oBWdq1I

— Roddie Hasan – راضي (@eiddor) September 28, 2019

This isn’t to pour scorn on the huge advances that have been made by autonomous carmakers, but it shows that the fierce battle to bring self-driving vehicles to market can sometimes lead to half-baked products that perhaps aren’t quite ready for public consumption.

Crossroads

The growing tension — between consumers, corporations, governments, and academia — around the impact of AI technology on society is palpable. With the tech industry prizing innovation and speed over iterative testing at a slower pace, there is a danger of things getting out of hand — the quest to “be first,” or to secure lucrative contracts and keep shareholders happy, might just be too alluring.

All the big companies, from Facebook, Amazon, and Google through to Apple, Microsoft, and Uber, are competing on multiple business fronts, with AI a common thread permeating it all. There has been a concerted push to vacuum up all the best AI talent, either through acquiring startups or simply hiring the top minds from the best universities. And then there is the issue of securing big-name clients with big dollars to spend — Amazon and Microsoft are currently locking horns to win a $10 billion Pentagon contract for delivering AI and cloud services.

In the midst of all this, tech firms are facing increasing pressure over their provision of facial recognition services (FRS) to the government and law enforcement. Back in January, a coalition of more than 85 advocacy groups penned an open letter to Google, Microsoft, and Amazon, urging them to cease selling facial recognition software to authorities — before it’s too late.

“Companies can’t continue to pretend that the ‘break then fix’ approach works,” said Nicole Ozer, technology and civil liberties director for the American Civil Liberties Union (ACLU) of California. “History has clearly taught us that the government will exploit technologies like face surveillance to target communities of color, religious minorities, and immigrants. We are at a crossroads with face surveillance, and the choices made by these companies now will determine whether the next generation will have to fear being tracked by the government for attending a protest, going to their place of worship, or simply living their lives.”

Then in April, two dozen AI researchers working across the technology and academia sphere called on Amazon specifically to stop selling its Rekognition facial recognition software to law enforcement agencies. The crux of the problem, according to the researchers, was that there isn’t sufficient regulation to control how the technology is used.

Above: An illustration shows Amazon Rekognition’s support for detecting faces in crowds.

Image Credit: Amazon

“We call on Amazon to stop selling Rekognition to law enforcement as legislation and safeguards to prevent misuse are not in place,” it said. “There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties.”

However, Amazon later went on record to say that it would serve any federal government with facial recognition technology — so long as it’s legal.

These controversies are not limited to the U.S. either — it’s a global problem that countries and companies everywhere are having to tackle. London’s King’s Cross railway station hit the headlines in August when it was found to have deployed facial recognition technology in CCTV security cameras, leading to questions not only around ethics, but also legality. A separate report also revealed that local police had submitted photos of seven people for use in conjunction with King’s Cross’s facial recognition system, in a deal that was not disclosed until yesterday.

All these examples serve to feed the argument that AI development is outpacing society’s ability to put adequate checks and balances in place.

Pushback

Digital technology has often moved too fast for regulation or external oversight to keep up, but we’re now starting to see major regulatory pushbacks — particularly relating to data privacy. The California Consumer Privacy Act (CCPA), which is due to take effect on Jan 1, 2020, is designed to enhance privacy rights of consumers living across the state, while Europe is also currently weighing a new ePrivacy Regulation, which covers an individual’s right to privacy regarding electronic communications.

But the biggest regulatory advance in recent times has been Europe’s General Data Protection Regulation (GDPR), which stipulates all manner of rules around how companies should manage and protect their customers’ data. Huge fines await any company that contravenes GDPR, as Google found out earlier this year when it was hit with a €50 million ($57 million) fine by French data privacy body CNIL for “lack of transparency” over how it personalized ads. Elsewhere, British Airways (BA) and hotel giant Marriott were slapped with $230 million and $123 million fines respectively over gargantuan data breaches. Such fines may serve as incentives for companies to better manage data in the future, but in some respects the regulations we’re starting to see now are too little too late — the privacy ship has sailed.

“Rolling back is a really difficult thing to do — we’ve seen it around the whole data protection field of regulation, where technology moves much faster than regulation can move,” Kind said. “All these companies went ahead and started doing all these practices; now we have things like the GDPR trying to pull some of that back, and it’s very difficult.”

Looking back at the past 15 years or so, a time during which cloud computing and ubiquitous computing have taken hold, there are perhaps lessons to be learned in terms of how society proceeds with AI research, development, and deployment.

“Let’s slow things down a bit before we roll out some of this stuff, so that we do actually understand the societal impacts before we forge ahead,” Kind continued. “I think what’s at stake is so vast.”

Big Data – VentureBeat

Manufacturing Industry Slow to Adopt Emerging Digital Marketing Software

May 28, 2019   CRM News and Info

Earlier this year, we set off on a joint venture with our friends at London Research to conduct a major marketing survey. Our goal was to gain a better understanding of how marketers are using digital marketing software (marketing automation, email service providers, customer relationship management systems, etc.) and to learn about their existing pain points. Specifically, we were curious how marketers in certain verticals were (or weren’t, as it were) using marketing automation software, which is why we focused heavily on marketers in the financial services, tech, and manufacturing industries.

As it turns out, manufacturing marketers are still taking their first steps toward digital marketing maturity, as the overwhelming majority of marketing professionals in that sphere are not currently using marketing automation software — or even an email service provider (ESP). Subsequently, there’s a major opportunity for manufacturing and distribution companies to gain a significant advantage over their competition by adopting marketing automation platforms and working with their vendor to get the most out of this powerful and dynamic software.

Keep reading to learn more about how manufacturing marketers are using marketing automation, as well as the largest challenge they’re facing with their software. And, of course, be sure to download the full State of Marketing Automation report for even more useful insights.

How Is the Manufacturing Industry Utilizing Marketing Automation Software?

According to our research, the manufacturing industry appears to be far behind the technology vertical and on par with its counterparts in financial services as far as digital marketing software adoption is concerned. For instance, manufacturing companies are slightly more likely to use marketing automation software than financial services companies but far less likely than the tech industry to do so. The survey results also suggest that this reluctance among manufacturers to adopt marketing automation is likely to continue, with only 19% of manufacturers saying they plan to invest in marketing software this year (compared to 48% of financial services businesses).

Further, while many marketers in the tech industry are using marketing automation software exclusively (that is, without supplementing their platforms with an ESP), only 21% of manufacturing marketers can say the same. In fact, 26% of manufacturing companies continue to use both marketing automation and an ESP. While that figure is lower than in the other industries surveyed, it still speaks to the fact that many marketers continue to invest in redundant technologies or are simply not getting everything they need from their current systems. It also implies a fundamental misunderstanding of the power and capabilities of marketing automation, despite their significant investment in the software.

Why Manufacturers Need Marketing Automation Software to Improve Their Marketing Efforts

So, while the numbers for marketers in the manufacturing sector aren’t quite where we would expect them to be at this point, the immensely bright silver lining is that now is the perfect time to capitalize on a rare opportunity to differentiate your company from your competitors. Implementing marketing automation software and learning how to use it to its full potential can make your marketing and sales just as streamlined as your production processes.

There are a variety of factors leading to the inevitable evolution beyond the traditional batch-and-blast email approach offered by ESPs — including GDPR and other compliance regulations, the need to scale marketing efforts more purposefully, and meeting consumer expectations for more personalized content. Further, marketers are now expected to hand sales-ready leads to their sales teams, which means an increased focus on aligning marketing software with marketing strategy, especially as it pertains to segmentation, lead scoring and nurturing, and website personalization.

Thankfully, now is the time to strike, as many manufacturing and distribution companies have seen amazing success since implementing the Act-On platform for their business. For instance, Allegis — a leading international supplier of latches, handles, hinges, and related products — credits Act-On with helping them conserve resources, eliminate spam complaints, and increase their email open rate by 15-20%. Additionally, Absolute Exhibits — a full-service, turnkey exhibit house that designs and creates custom tradeshow exhibits — has experienced an 85% increase in revenue since utilizing the Act-On platform. In fact, they’ve been so successful, they’re actually hiring more personnel to keep up with demand.

If your company is looking to achieve these kinds of results, we’d like to speak with you further about how Act-On makes marketing personal.

The Major Marketing Challenge Facing Manufacturing Marketers

Although not many manufacturing and distribution marketers are using marketing automation software as of yet, those who do are often in the unenviable position of not being able to measure their return on investment, which is one of the great unrealized promises of these platforms for this vertical. Marketers who can’t quantify their success are unable to optimize successful campaigns, pause poor-performing efforts, or justify the investment to their key stakeholders.

This might be due to the fact that many departments using marketing automation have been unsuccessful at integrating their new software with their existing CRM, or haven’t even tried to connect the two. When this happens, accurately tracking performance and ROI becomes difficult at best, not to mention extremely labor-intensive. Further, siloing these two powerful pieces of software deepens the wedge between marketing and sales, eliminating efficiencies and causing further division between these historically opposed departments.

Act-On seamlessly integrates with several of the most popular CRMs — such as Salesforce, SugarCRM, NetSuite, and Microsoft Dynamics, among several others. Without ever having to leave their CRM, manufacturing marketers can now view each aspect of a prospect’s engagement history. This helps manufacturing marketers and sales professionals better track their website visitors, engage those visitors with more targeted, segmented content, prioritize hot leads, and identify and pursue real opportunities.

Download the Full State of Marketing Automation Report to See How You Stack Up!

The infographic above is awesome, but it only tells a fraction of the story. If you’re interested in learning more about how, why, and to what ends marketers across oceans and industries are using digital marketing software, please download the State of Marketing Automation report in its entirety. And if you’d like to learn more about Act-On’s commitment to marketing made personal, you can take a virtual tour by clicking here.

Act-On Blog

Accelerating the Slow March Towards Digitization in the Insurance Industry

October 23, 2018   FICO

Let’s face it – while many see digital disruption as the future in insurance, new entrants into the space have in fact been hindered by the product complexity, distribution systems and legacy infrastructure that the insurance industry carries with it. The landscape is changing, however – consumers (and businesses) are moving quickly to insurers that offer targeted, more transparent, 7×24 services. Digitization is indeed alive in insurance; it is just taking longer to infiltrate. But it is arriving, and insurers that move quickly to upgrade their technologies – particularly decision management systems – will gain (and grow) a distinct advantage over those who wait.

Is the software to blame?

Like the industry itself, the insurance software that is used to help make decisions is also stuck between business as usual and the immense possibilities offered by digitization. While policy and claim management system vendors have evolved product functionality and performance, and even offer cloud options, one-size-fits-all approaches have actually hindered insurers that are looking to “break out of the pack” and apply their own organizational DNA to transform their businesses. And indeed, with the massive growth in areas such as machine learning, analytics and data science, there has to be some way to incorporate the best of those capabilities without having to rip and replace existing investments.

Let’s consider a specific use case to highlight how businesses can introduce disruptive digitization.

What’s hampering intelligent automation in commercial underwriting

Historically, commercial underwriting has been a broker-driven, highly manual process, which creates delays that introduce unnecessary risk (particularly if human biases enter the picture) as well as frustration for the customer. Even partially automating the process can deliver substantial profit improvements, but there’s the rub – to automate effectively, you need an intelligent, agile decision infrastructure in place that consistently flexes in response to new data and analytic insights.

Five things you should be doing to digitize

In the commercial underwriting world (and more broadly in any line of business), you may already have some or many of the core ingredients you need to digitize. Any one or more of the above issues – not to mention company-specific issues such as resistance to change, an inability to prioritize projects, or simply too many silos – is likely holding you back. While different approaches fit specific organizational needs and appetites, our own work with insurers in commercial underwriting and beyond typically includes these elements:

  • Adopt a decision-first approach: Align your decision strategies first, and then add data and analytics.
  • Improve your agility by optimizing resources: Empower business experts to rapidly modify, simulate, and measure decisions – without getting stuck in the IT queue.
  • Automate high-volume operational decisions quickly and effectively: Highly repeatable, “need for speed” decisions can be automated, and encompass any combination of data, analytics and other decision assets.
  • Collaborate to operationalize and institutionalize your decision strategies: Decision-first collaboration goes beyond “breaking silos” – it enables business analysts and policy managers to work closely together to harmonize and simulate strategies, reuse decision assets, and drive measurable value across the enterprise.
  • (Carefully) consider a Cloud-based platform as a digitization accelerator: IT is being inundated with requests to do more, even as data, analytic and decision systems proliferate. Adopting a Cloud-based strategy is on the shopping list, and a platform-centric approach can help.

In our next blog, we’ll dig deeper into these five areas to understand the value of a decision-first approach to digitizing your business. In the meantime, download our executive brief to learn how you can put these concepts to work in your organization.

FICO

6/18 Webinar: My Power BI report is slow: what should I do? by Marco Russo

June 17, 2018   Self-Service BI

This week we have one of our crowd favorites and rock star MVPs, Marco Russo, who has volunteered to cover the topic:

My Power BI report is slow: what should I do?

Abstract: You created a wonderful Power BI report, but when you open it you have to wait too long. Changing a slicer selection is also slow. Where should you start analyzing the problem? What can you do to optimize performance?
This session will guide you through analyzing the possible reasons for a slow Power BI report. By using Task Manager and DAX Studio, you will be able to determine whether you should change the report layout, or whether something in the DAX formulas or in the data model is responsible for the slow response. At the end of this session, you will understand how to locate the performance bottleneck in a Power BI report, so you can focus your attention on the biggest issue.

When: 6/18/2018 9AM PST

Where: https://www.youtube.com/watch?v=B-h3Pohtn1Y 

About the Presenter: 
Marco Russo
Consultant and Mentor, SQLBI
Marco Russo is a Business Intelligence consultant and mentor. He has worked with Analysis Services since 1999, and written several books about Power Pivot, Power BI, Analysis Services Tabular, and the DAX language. With Alberto Ferrari, he writes the content published on www.sqlbi.com, mentoring companies’ users about the new Microsoft BI technologies. Marco is also a speaker at international conferences such as Microsoft Ignite, PASS Summit, PASS BA Conference, and SQLBits.

https://www.sqlbi.com/author/marco-russo/

Microsoft Power BI Blog | Microsoft Power BI

Dynamics 365 for Customer Engagement Slow Form Loads for One User

June 12, 2018   Microsoft Dynamics CRM

I recently helped out on an issue with slow Dynamics 365 form loads. It was somewhat unique because the poor performance was only observed for one user, for one entity type (PhoneCall). However, every PhoneCall record that the user opened had the issue. We started with a fairly typical approach, investigating business rules, JavaScript, synchronous retrieve plugins, and all the other customization types we might see execute when a form loads. Disabling or removing any or all of them seemed to make no difference at all. We also investigated the roles and teams that this user was a member of, testing with other similar users and not seeing the same issue.


After some thought, we decided to query the UserEntityUISettings record for this user and record type. This entity stores one record per user, per entity type the user accesses. Its primary purpose is to cache the formxml from the last time the user accessed one of these records, and to keep a cache of the records the user viewed, commonly referred to as Most Recently Used (MRU) data. This is displayed in the Dynamics 365 navigation in a dropdown next to the entity name, like this:

[Screenshot: the Most Recently Used dropdown next to the entity name in the Dynamics 365 navigation]

Since this issue affected only one user and only one entity type, a problem with a UserEntityUISettings record potentially makes sense here. I asked the user to query their UserEntityUISettings for PhoneCall and send me the results. Here is a sample query they can execute in the browser to retrieve this information:


<org>.crm.dynamics.com/api/data/v8.2/userentityuisettingsset?$filter=_owninguser_value eq <user guid> and objecttypecode eq 4210


The RecentlyViewedXml column typically returns 5-10 recently viewed records in XML format; the XML contains the data type, primary name, and ID of each record. In the case of the user with the issue, the XML was very large and contained 17,259 records. Trying to render this massive dataset in every form the user opened would almost certainly cause a performance problem.


It’s important to mention that the application is in charge of keeping this XML at a manageable size, and that there was an old defect identified that prevented this cleanup. That defect has long since been corrected in the application; however, we’ve observed that if these records have already grown to an unmanageable size, the cleanup never happens or times out and fails. Therefore a one-time cleanup for affected users is a viable long-term solution and not just a stop-gap.


One thing that makes cleaning up this data very challenging is that it is stored in two places: first in the UserEntityUISettings record in the database, as we discussed, but also cached in HTML DOM storage in the browser. You can see this by navigating to Dynamics 365, opening the F12 developer tools in your browser, typing localStorage in the console, and pressing Enter. This cache/database relationship is not one-directional as you might think; the two actually try to keep each other in sync. Therefore, if we delete everything from the RecentlyViewedXml field in the database, the next time we access Dynamics 365 the browser cache will upload all the bad data back to the server and we won’t observe any performance improvement. There needs to be a tandem effort to clear the localStorage cache and the server data at the same time (or very close to it).


To assist with this effort, I created a solution that uses supported SDK methods to delete the data from the UserEntityUISettings record and clear the localStorage cache. Since it needs to execute in the browser of the affected user, the solution includes a dashboard that can be shared with users. When the user navigates to the dashboard, they have the option of selecting a single entity to clear the data for, or clearing it for all entities. The output window provides progress updates and lets the user know once the task is complete; they will not need to do any other steps such as clearing history or closing the browser.


A view of the dashboard is included below.


[Screenshot: the cleanup dashboard included in the solution]

Hope this helps,

Matt


Dynamics CRM in the Field

How an Insurance Company Leveraged Marketing Automation in an Industry That’s Slow to Change

February 9, 2018   CRM News and Info

Marketing Challenges for the Insurance Industry

Tony: How would you describe the challenges that face your industry as a whole? Are there any disruptions that are impacting the way you approach marketing?

John: One of the greatest challenges insurance companies face today is how to manage the rapid advances in MarTech. New technologies that help engage buyers are constantly emerging, but insurance companies are typically slow to adopt them. We’re very old school in that regard.

Tony: Have you seen that at Abram?

John: Yes. Our audience is multi-generational, and while younger agents tend to embrace new online tools, many agency owners are older and aren’t aware of them. We had to find ways to effectively communicate about our products to both groups.

Things were further complicated by the fact that we have a large portfolio of offerings, and much of our revenue comes from upselling customers with products they may not even know they need. We had to educate them before we could sell to them, and they have many niches in their business, so our communications had to be both frequent and targeted. We just weren’t set up to do that.

Tony: How so?

John: We had a very old CRM called ACT, and we were using Constant Contact for email. Both platforms had functional limitations, and they weren’t integrated, so we had to manually manage, segment, and update our data, which was very time-consuming. It was difficult to track the activity of our buyers, and we had very little insight into how effectively we were engaging them. We were operating in the dark.

Marketing Automation as a Catalyst for Change

Tony: Your solution was to adopt Act-On and SugarCRM. How did you get there?

John: We did a thorough evaluation of every major marketing automation platform, including Marketo, Pardot, and HubSpot, but we ultimately chose Act-On because it had all the functionality we needed and was very easy to use. The platform came highly recommended by customers of yours that I knew personally, and the demos confirmed we could quickly build emails and automated campaigns and easily segment our audience. We saw that Act-On’s reports would give us the campaign insight we were missing, and the company’s active contact pricing model made the platform very cost-effective.

We were also evaluating CRMs at the same time, and we really liked Sugar, so it was critical that our marketing automation platform could integrate with it. Once we saw that Act-On and Sugar worked together seamlessly, we were sold.

Tony: Many companies that want to modernize their MarTech worry that adopting new platforms will be complicated.

John: We were one of those companies. My biggest concern was that using Act-On would create an IT burden, but implementation was fast and smooth. The onboarding process was well structured and got us up to speed quickly. We’re all non-technical marketers, but Act-On’s intuitive interface made it easy to access the entire platform.

Tony: How have you used Act-On to engage your buyers?

John: When a prospect is entered into Sugar, they’re immediately enrolled into a new-customer campaign that automatically nurtures them for three months. As we gain more information about them from their interaction with our content and site visits with our sales reps, we move them into more targeted nurture campaigns.

Tony: What kinds of content do you send them?

John: Most of our messaging is product-centric, and the more we know about a buyer, the easier it is to tailor it to their needs. In addition to email campaigns, we send newsletters with business tips, thought leadership articles, and online tools that provide a service, such as generating an instant quote or estimating replacement costs.

We also run monthly webinars designed for both prospects and customers, and we use Act-On to manage the entire process – from email invitations through registration and follow up campaigns.

Tony: How were you running your webinars before Act-On?

John: We didn’t have any. Act-On’s integration with WebEx is what enabled us to launch the program.

Tony: You said Act-On’s integration with Sugar was a deciding factor. What does your sales team think?

John: They love Act-On. They love the buyer data it gives them, and they love being able to access it directly through Act-On’s dashboard in Sugar.

Our reps sell to a wide customer base, which makes it hard to keep track of where buyers are in the purchase process and what they care most about. The insight Act-On provides makes every exchange sales has with customers more meaningful and effective. They can pick up any conversation as if they just left it, even if a long period of time has passed.

They also love using Act-On’s templates to personalize their emails and send small campaigns to segments within their audience. It gives them autonomy and makes them feel empowered.

Reaping the Benefits of Marketing Automation

Tony: How has Act-On impacted your business? What kind of results have you seen?

John: The ROI with Act-On has been great. Our time-to-value was very quick – we ran our first campaign in less than two months. And our engagement rates have soared. Our email open rates have improved by as much as 75%.

The data Act-On provides has dramatically shortened the buying process. The insight Act-On provides allows marketing to optimize their programs, and helps sales move through the discovery process with prospects more quickly. These enhancements have cut our sales cycle in half. In fact, our email campaigns frequently convert a lead without any further involvement from sales, and prospects can even finalize their purchase by completing the necessary forms online. It’s an amazing way to work.

Tony: One of your biggest pain points before Act-On was the amount of time you had to spend managing and segmenting your lists. Has that changed?

John: Yes – significantly. Act-On has enabled us to implement campaigns five times faster. It’s like night and day. And Act-On’s customer support is excellent – knowledgeable and responsive. Any time we’ve had an issue, they’ve helped us resolve it quickly.

Act-On University is a terrific online resource, too. We can answer many of our questions without even having to call for support.

Tony: It sounds like Act-On has fundamentally changed the way you engage your audience. How has it changed the way they experience your company?

John: It’s deepened our relationship with them by allowing us to send them meaningful content at the times they need it most. Personalizing our exchanges with them creates a sense of trust that we understand their concerns and are looking out for their best interests.

Tony: I think many marketers will be inspired by your success. What advice would you have for those who are just beginning their marketing automation journey?

John: Make sure to fully evaluate your needs and the ability of your users, and let those drive the product you choose. Act-On has worked out so well for us because we had clear requirements and were confident the platform would meet them.

And keep in mind what your needs will be down the road. One of the nice things about Act-On is that there’s so much functionality we have yet to tap into. It’s one of the keys to making sure our future remains bright.

Act-On Blog

The Quick and the Dead Slow: Importing CSV Files into Azure Data Warehouse

September 15, 2017   BI News and Info

The series so far:

  1. Creating a Custom .NET Activity Pipeline for Azure Data Factory
  2. Using the Copy Wizard for the Azure Data Factory
  3. The Quick and the Dead Slow: Importing CSV Files into Azure Data Warehouse

In my previous article, I described a way to get data from an endpoint into an Azure Data Warehouse (called ADW from now on in this article). On a conceptual level, that worked well; however, there are a few things to consider for the sake of performance and cost, which become especially important when you regularly import large amounts of data.

The final architecture of the data load in my previous article looked like this:

[Diagram: FTP server → Blob storage → Azure Data Warehouse]

As we can see, the files are taken from an FTP server, copied to blob storage and then imported into the Azure Data Warehouse from there. This means that we will not achieve great performance, especially when loading larger amounts of data, because of the intermediate step of copying data through blob storage.

The fastest way to import data into an Azure Data Warehouse is to use Polybase, and there are some requirements to be met before Polybase can step in.

Just to give an example of what happens when Polybase can be used: I was recently working on an import of a dataset of CSV files with an approximate size of 1 TB. I started by choosing an ‘intermediate storage’ approach (as in the picture above), and it was going to take 9 days to complete, and that with an Azure Data Warehouse scaled to 600 DWUs. (For more information about ADW scalability and DWUs, have a look at https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.) Given that an ADW with 600 DWUs costs 7.02 EUR/hour, I was pretty confident that my project accountant would have been unhappy with the cost, which would have been about 1,500 EUR for this load (9 days × 24 hours × 7.02 EUR/hour). Instead, by making sure I met the criteria for Polybase, I managed to import the entire 1 TB of data into my Azure Data Warehouse in about 3 hours, i.e. at a cost of about 21 EUR.

In this article we will base our work on the idea of my previous article, however we will change the architecture in order to save time and resources. In this case we will do the following:

  1. Download the files from the FTP (ftp://neoftp.sci.gsfc.nasa.gov/csv/) to our Azure storage
  2. Decompress the files
  3. Look into Polybase requirements and import the data reasonably fast into our ADW

Download the files from FTP

In this section we will not spend too much time on describing how to get the files from the FTP site, because the method is very similar to the one described in my previous article. For downloading the files from the FTP, we will be using the Copy Data wizard. The only difference is that in this case, because the files on the FTP server are compressed, we will need to use Blob storage as a destination. In the download process, we will instruct the ADF to decompress the files as they are being downloaded.

The reason for this is that Polybase does not yet support direct import from compressed files.

To get the files, we need to start the Copy Data wizard from our Data Factory:

[Screenshot: launching the Copy Data wizard from the Data Factory blade]

then configure the FTP server properties:

[Screenshot: FTP server connection properties]

Then we need to select the folder we want to process recursively:

[Screenshot: selecting the FTP folder to process recursively]

Now choose ‘Azure Data Lake Store’ (ADL) as the destination. Of course, there are many ways to go about it, but in this case I chose ADL because I want to demonstrate the power of U-SQL scripting for preparing the data for Polybase import.

For the ADL destination, I will be using Service-to-Service authentication. This is also a requirement for Polybase loads, so now is a great time to create an Active Directory App which will carry out the task of authenticating our data operations.

Creating a Service-to-Service authentication

In the Azure portal we need to go to Azure Active Directory blade and from there to ‘App registrations’, and click on ‘New App Registration’.

[Screenshot: Azure Active Directory App registrations blade]

In the next screen we give a name to our app and we create it:

[Screenshot: naming and creating the new app registration]

Now that we have created our application, we need to gather its properties for later use, and also we need to create a key for it.

After creating the application we search for it in the ‘New application registration’ tab, and we click on the application we just created:

[Screenshot: locating the newly created application]

We need to note the Application ID in the next screen:

[Screenshot: application properties showing the Application ID]

And next we need to create a key for the app by clicking on the Keys link to the right:

[Screenshot: the Keys blade for the application]

Make sure to write down the key, since it is impossible to retrieve it at a later time.

Back to the ADL destination in the ADF pipeline

Now that we have created the Azure Active Directory App, we are ready to use Service-to-Service authentication for the FTP files to be downloaded and extracted to our data lake.

[Screenshot: Azure Data Lake Store destination settings]

In the above case, we need to specify the Subscription, the Data Lake account and the Tenant ID.

The ‘Service principal id’ term is a bit inconsistent: in this field we need to paste the Application ID we gathered from the properties of our Azure AD app. The ‘Service principal key’ is the key we created for the app.

After we click ‘Next’ in the screen above, we will be asked where to store the files in our ADL. For this purpose I have created a folder called Aura. For the copy behaviour, I have chosen ‘Flatten hierarchy’. This means that I will get as many files as there are on the FTP, but in a single folder.

[Screenshot: output folder and copy behaviour settings]

In the next screen we are asked to specify the properties of the destination flat file. This is a very important step, since Polybase has a very specific set of expectations for the format of the file, and if these requirements are not met, then we will need to use an intermediary storage to process the files and prepare them for import (and this, as we discussed above, is extremely slow and costly).

Here are the requirements for using Polybase:

The input dataset is of type AzureBlob or AzureDataLakeStore, and the format type under type properties is OrcFormat, or TextFormat with the following configurations:

  • rowDelimiter must be \n.
  • nullValue is set to empty string (“”), or treatEmptyAsNull is set to true.
  • encodingName is set to utf-8, which is default value.
  • escapeChar, quoteChar, firstRowAsHeader, and skipLineCount are not specified.
  • There is no skipHeaderLineCount setting under BlobSource or AzureDataLakeStore for the Copy activity in the pipeline.
  • There is no sliceIdentifierColumnName setting under SqlDWSink for the Copy activity in the pipeline.
  • There is no columnMapping being used in the associated in Copy activity.

The following screen looks like this by default:

[Screenshot: default destination file format settings]

Usually, we would set up the above screen properly so that we can get the files ready for Polybase directly. For this article, however, I will leave the settings as they are because I would like to demonstrate a data preparation step by using U-SQL.

U-SQL is a language used together with Data Lake, and it is a hybrid between T-SQL (the SELECT statement) and C# (used, for example, in the WHERE clause). The U-SQL language is extremely flexible and scalable. For more information on U-SQL, check the online documentation here.

Another reason to use U-SQL in this case is that Polybase does not support column mapping, and my data has over 3,000 columns. This poses a challenge: in SQL Server and in ADW there is a limit of 1024 columns per table, which means that in this particular case I need to resort to U-SQL to make sure the data is managed correctly.

So, I click ‘Next’ and end up at the final screen, ready to run the pipeline.

[Screenshot: pipeline summary, ready to run]

Creating an ADW login and user to load the data

When the Azure Data Warehouse was created, we had to specify a user with a password to connect to it. The permissions of that login are not very restricted, and because of this we will now create a dedicated login and database user to do our data import.

The following T-SQL code will create this:

CREATE LOGIN DataImporter
   WITH PASSWORD = 'fsklj2#%3245%#&skjhgdfk236'
GO

CREATE USER DataImporter
   FOR LOGIN DataImporter
   WITH DEFAULT_SCHEMA = dbo
GO

USE [myDB]
GO

CREATE USER [DataImporter] FOR LOGIN [DataImporter] WITH DEFAULT_SCHEMA = [dbo]
GO

-- Add user to the database owner role
EXEC sp_addrolemember N'db_owner', N'DataImporter'
GO


Creating the ADW table

For this article, we will create a small table called NEOData with only a few columns. Here is the T-SQL:

CREATE TABLE [dbo].[NeoData]
(
   [Column0]  [varchar](200) NULL,
   [Column1]  [varchar](200) NULL,
   [Column2]  [varchar](200) NULL,
   [Column3]  [varchar](200) NULL,
   [Column4]  [varchar](200) NULL,
   [Column5]  [varchar](200) NULL,
   [Column6]  [varchar](200) NULL,
   [Column7]  [varchar](200) NULL,
   [Column8]  [varchar](200) NULL,
   [Column9]  [varchar](200) NULL,
   [Column10] [varchar](200) NULL
)
WITH
(
   DISTRIBUTION = ROUND_ROBIN,
   HEAP
)

Note: it is still true, even in Azure Data Warehouse, that heaps are the fastest structure to import data into.
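If query performance matters once the load is done, a common follow-up is to reshape the heap into a clustered columnstore table with CTAS. This is only a sketch under the assumption that the heap is kept purely as a loading target; the new table name and distribution choice below are illustrative, not part of the original pipeline:

-- Load into the heap first, then rebuild into a clustered columnstore
-- table for analytical query performance.
CREATE TABLE dbo.NeoData_Columnstore
WITH
(
   DISTRIBUTION = ROUND_ROBIN,
   CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.NeoData;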

Selecting columns to work with

So far we have a Data Lake with the downloaded files from the FTP server, which were extracted from the GZip. In other words, we have our CSV files in the Data Lake.

There is a challenge in this case because the CSV files we have downloaded have 3600 columns. As mentioned, ADW has a limit of 1024 columns per table, and in our case our data science team is only interested in the first 11 columns anyway.

In a case like this, we can use the flexibility of U-SQL, combined with Azure Data Lake Analytics views (you can read more about U-SQL views at https://msdn.microsoft.com/en-us/library/azure/mt771901.aspx).

To do this, we need to construct a view which uses an Extractor in U-SQL which contains all 3600 columns and specifies their data type. In our case all columns are of the float datatype.

Then we need to create a second view, which uses the first view to select only the first 11 columns from it.

And finally, we can output the file from the result of the second view.

Conceptually the code will look like:

CREATE VIEW IF NOT EXISTS dbo.view1
    AS
EXTRACT col1 float,
        col2 float,
        col3 float
FROM "someFile.txt"
USING Extractors.Csv();

CREATE VIEW IF NOT EXISTS dbo.view2
    AS
SELECT col1,
       col2
FROM dbo.view1;

@input =
    SELECT *
    FROM dbo.view2;

OUTPUT @input
TO "adl://somethingHere.azuredatalakestore.net/Aura/ReadyForImport/Aura.txt"
USING Outputters.Csv(rowDelimiter: "\n", encoding: Encoding.UTF8, quoting: false);

There are several ways to prepare the actual U-SQL script which we will run, and usually it is a great help to use Visual Studio and the Azure Data Lake Explorer add-in. The add-in allows us to browse the files in our Data Lake, right-click on one of them, and choose “Create EXTRACT Script” from the context menu. In this case, however, that would take a very long time, since the file is so wide.

Another way is to simply use Excel to write out Column1 to Column3600 and append the data type.

Either way, our final U-SQL script will look similar to this:

CREATE VIEW IF NOT EXISTS dbo.view1
    AS
EXTRACT [Column1]    float?,
        [Column2]    float?,
        [Column3]    float?,
        ...
        [Column3600] float?
FROM "someFile.txt"
USING Extractors.Csv();

CREATE VIEW IF NOT EXISTS dbo.view2
    AS
SELECT  [Column1],
        [Column2],
        [Column3],
        [Column4],
        [Column5],
        [Column6],
        [Column7],
        [Column8],
        [Column9],
        [Column10],
        [Column11]
FROM dbo.view1;

@input =
    SELECT *
    FROM dbo.view2;

OUTPUT @input
TO "adl://somethingHere.azuredatalakestore.net/Aura/ReadyForImport/Aura.txt"
USING Outputters.Csv(rowDelimiter: "\n", encoding: Encoding.UTF8, quoting: false);

As mentioned above, view1 is used to extract the data from the CSV files, and view2 subsets the data from view1. Finally, the result of view2 is written back to our Data Lake. The parameters of the outputter are very important, since they satisfy the requirements for using Polybase to push the data to the Data Warehouse in the fastest way in the next step.

And finally, it is important to boost the parallelism of the U-SQL processing before submitting the job, since it might take a while with the default setting. In my case I am using a parallelism of 120.

[Screenshot: setting the U-SQL job parallelism before submitting]

U-SQL scales very well. In my case of about 500 MB of CSV files, it took about 2 minutes for the above script to produce a 22 MB CSV file, reducing the width from 3600 columns to 11.

Importing the file to the Data Warehouse with Polybase

When the U-SQL script above is ready, we can finally import the file that we produced to our Data Warehouse.

To import the data, we are going to use the Copy Data wizard, with which we are already familiar, to create an ADF pipeline. It is just a matter of setting up ADL as the source and ADW as the destination, using Service-to-Service authentication for ADL and the DataImporter credential for ADW. After setting all of this up, it is very important to verify on the last screen that NO staging storage account is used and that Polybase is allowed:

[Screenshot: the final Copy Data wizard screen, with no staging storage account and Polybase allowed]

Finally, the architecture looks like this, with a direct import from ADL to ADW:

[Diagram: the final architecture, a direct import from ADL to ADW]

Monitoring and performance of the pipeline

After a couple of minutes, the pipeline is finished, and we get the following information:

[Screenshot: pipeline run monitoring details]

Notice that it took about a minute to import 21 MB of data and 277K rows. This is with 100 DWUs for the Data Warehouse, which costs 1.17 EUR per hour.

If we wanted the import to be faster, we could scale up the Data Warehouse to 600 DWUs, for example.
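For reference, scaling is a single T-SQL statement run against the master database of the logical server. The database name below is hypothetical, and the second query is just one way to watch the operation's progress.

-- Hypothetical Data Warehouse name; run this in the master database of the logical server.
ALTER DATABASE [MyDataWarehouse] MODIFY (SERVICE_OBJECTIVE = 'DW600');

-- Optionally, check how far the scale operation has progressed:
SELECT operation, state_desc, percent_complete
FROM sys.dm_operation_status
ORDER BY start_time DESC;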

Having a scalable Data Warehouse is great because you can scale up when the resource is under heavy use (for imports and for busy read periods). The downside, however, is that existing connections are terminated while scaling is in progress, which means downtime.

On a final note, all the good old rules from data warehousing still apply when it comes to speedy data imports. For example, it is still faster to insert into a heap than into any other table structure. And let's not forget to create and update statistics after the import!
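For illustration only, assuming the data lands in a staging heap named dbo.AuraStaging with the column names used earlier (both names are assumptions), that statistics step might look like this:

-- Assumed staging table and column names, purely for illustration.
CREATE STATISTICS stat_Column1 ON dbo.AuraStaging ([Column1]);
CREATE STATISTICS stat_Column2 ON dbo.AuraStaging ([Column2]);
-- ...repeat for the columns you join and filter on, then refresh after each load:
UPDATE STATISTICS dbo.AuraStaging;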

Conclusion:

When you are paying for a resource by the hour, you soon become increasingly interested in how long a data import takes. In this article we explored the options and considerations involved in importing data into an Azure Data Warehouse in a fast and economical way. We saw that the old data warehousing rules are still valid, and that Polybase is a great tool for speedy imports of large volumes of data.

SQL – Simple Talk

Why is My Database Application so Slow?

August 25, 2017   BI News and Info

Slow applications affect end users first, but the impact is soon felt by the whole team, including the DBA, Dev team, network admin and the system admins looking after the hardware.

With so many people involved, each with their own view on the likely cause, it can be hard to pin down where the bottlenecks really are.

Broadly, there are two main causes of performance issues with a SQL Server application:

  • Network problems – relating to the speed and capacity of the “pipe” connecting your SQL application client to the database
  • Slow processing times – relating to the speed and efficiency with which requests are processed, at either end of the pipe.

In this article, we’re going to look in a bit more detail about how to diagnose these and get to the bottom of performance issues.

Network problems

Network performance issues broadly break down into problems relating to the speed of responses across the network (latency) or to the capacity of the network (bandwidth) i.e. how much data it can transmit in a set time.

Of course, the two are interconnected. If network traffic generated by your app (or other apps on the same network) is overwhelming the available bandwidth, this in turn could increase latency.

Latency

Latency is the time it takes to send a TCP packet between the app and the SQL Server. You incur latency on the way up to the DB and on the way down. People generally talk about latency in terms of round trip times, i.e. the time to get there and back.

Figure 1 shows a 60-millisecond round trip.


Figure 1

Bandwidth

Bandwidth is the amount of data that can be sent or received in a given amount of time, normally measured in kb/s (kilobits per second) or Mb/s (megabits per second).

People often talk about the ‘size of your pipe’ when discussing bandwidth and it’s a good analogy (plus it sounds naughty): the fatter your pipe, the more data you can get through it at once.

If your app needs to receive a 10-megabyte response (that’s 80 megabits!) and you have a 20 Mb/s connection, the response will take at least 4 seconds to be received. If you have a 10Mb/s connection it will take at least 8 seconds to be received. If other people on your network are streaming Game of Thrones, that will reduce the available bandwidth for you to use.

Application problems: slow processing times

Whenever the client sends a request to SQL Server to retrieve the required data set, the total processing time needed to fulfill that request comprises both:

  • App processing time: how long it takes for the app to process the data from the previous response, before sending the next request
  • SQL processing time: how long SQL spends processing the request before sending the response

Figure 2 provides a simple illustration of this concept.


Figure 2

Where is the time being spent?

We spend a lot of time investigating the performance of Client/Server SQL applications, and there are an overwhelming number of different tools, scripts and methods to help you troubleshoot any number of different types of performance issue.

So how, when confronted with slow application response times, do we pinpoint quickly the root cause of the problem? The flowchart in Figure 3 shows a systematic way of approaching the problem.


Figure 3

When investigating performance problems, you may have more than one issue. It’s worth looking at a few different parts of the app. Is it a general problem? Or are some parts much slower than others?

It’s best to start small. It will make life easier if you can focus on a specific area of the app that is particularly slow, for example let’s say when you click the “Select All” button on the invoices page, it takes 10 seconds to load the results. Focusing on a small repeatable workflow will let you isolate the issue.

The next question to answer, of course, is Why is it taking 10 seconds? The first and easiest way to narrow the issue down is to run the app as close to the SQL Server as possible, on the same machine, or on the same LAN.

If, having effectively removed any network latency and bandwidth constraints, it suddenly takes a second or less to select all the invoices, then you need to investigate which network problems might be eating up the rest of the time.

If the app is still taking about 10 seconds to load the results then, congratulations, you've eliminated two of the four possible issues (latency and bandwidth)! Now you need to look at where the bulk of this processing time is being spent.

Let’s take a closer look at how to work out where the bulk of this time is being spent. You’ll need either Wireshark or SQL Profiler (whichever you’re more comfortable with).

Investigating application processing times

You’ll see the time in one of two places: between sending a response to the app and getting the next request (the app processing time) or between issuing a request to SQL Server and getting a response (the SQL processing time).

To work out which one is causing your issue you can use either Wireshark or SQL Profiler as both can tell us the approximate app and SQL processing time (although the exact numbers may differ slightly).

Using Wireshark

We can use Wireshark to capture the network traffic while the workflow is executing. Using Wireshark allows us to filter out non-application traffic and look at the time difference between all the packets in the workflow.

To calculate the approximate application processing time:

  1. Capture the packets for the workflow: Start a Wireshark capture and run the app workflow, remember to stop the capture once the workflow is complete. Remember to select the relevant network interface and note that you’ll need to run the app on a different machine from the database for Wireshark to see the traffic. Make sure you’re not running any other local SQL applications other than the one you’re trying to capture.
  2. Get rid of the non-app traffic by applying the filter tds and then File | Export Specified Packets, giving a filename and making sure “Displayed” is selected. Open this new file in Wireshark.
  3. Show the time difference between the current and previous packet, simply by adding the time delta column, as follows:
    1. Select Edit | Preferences | Appearance | Columns
    2. Click the + button, change the type dropdown to “Delta Time” and the Title to “Delta“
  4. Filter the traffic to just Requests:

    (tds.type == 0x01 || tds.type==0x03 || tds.type == 0x0E) && tds.packet_number == 1

    The above filter will show just the first TDS packet in each request, and the Delta column will now show the time between the last response packet of the previous request and the next request. Make sure the packets are ordered by the “No.” column as this will ensure that the packets are in the order they were sent/received.

  5. Export as a CSV, by navigating File | Export Packet Dissections | As CSV
  6. Calculate app processing time in seconds – open the CSV in Excel and sum up the values in the Delta column.

To get approximate SQL processing time:

  1. Reopen the file you created in step 2. above in Wireshark, filter the traffic to just responses:

    tds.type == 0x04 && tds.packet_number == 1

    The above filter will show just the first TDS packet in each response, and the Delta column will now show the time between the last request packet of the previous request and the first response packet sent back from the SQL Server. Again, ensure the packets are ordered by the “No.” column.

  2. Export as a CSV, by navigating File | Export Packet Dissections | As CSV
  3. Calculate SQL processing time in seconds – open the CSV in Excel and sum up the values in the Delta column.

Using SQL Profiler

Although collecting diagnostic data using SQL Profiler is known to add some overhead to your workflow, it should still give you a broad picture of the processing times. You can minimize this overhead by running a server-side trace, and then exporting the data as described below. Alternatively, if you're confident with Extended Events and XQuery, you should be able to get similar data via that route.
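As a rough sketch of that alternative (the session name and file path below are assumptions, not a prescribed setup), an Extended Events session capturing the same start and complete events could look like this:

-- A minimal sketch of an Extended Events session capturing query start/complete events.
-- The session name and file path are assumptions; adjust them for your environment.
CREATE EVENT SESSION [AppTimings] ON SERVER
ADD EVENT sqlserver.rpc_starting,
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_starting,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file (SET filename = N'C:\Temp\AppTimings.xel')
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO
ALTER EVENT SESSION [AppTimings] ON SERVER STATE = START;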

Start by capturing a Profiler trace of the workflow, just using the “Standard (default)” trace template. Make sure that nothing else is hitting the database at the same time so you’re only capturing your traffic. Once you’ve captured the workload in the trace, save it to a trace table using File | Save As | Trace Table.

In SQL Management Studio, query the table you created with the following two queries to give you the approximate app and SQL processing times:

/* Calculate approximate SQL processing time for RPCs and SQL batch queries */
SELECT SUM(DATEDIFF(MILLISECOND, StartTime, EndTime)) AS 'SQL Processing Time in ms'
FROM TraceTable
WHERE EventClass IN (10, 12);
-- Sums the time difference between the start and end times
-- for event classes 10 (RPC:Completed) and 12 (SQL:BatchCompleted)

/* Calculate approximate app processing time */
WITH Events
AS (SELECT *
    FROM TraceTable
    WHERE EventClass IN (10, 11, 12, 13)
   )
SELECT SUM(DATEDIFF(MILLISECOND, PreviousRow.EndTime, CurrentRow.StartTime)) AS 'App Processing Time in ms'
FROM Events AS CurrentRow
    JOIN Events AS PreviousRow
        ON CurrentRow.RowNumber = PreviousRow.RowNumber + 1
WHERE CurrentRow.EventClass IN (11, 13)
      AND PreviousRow.EventClass IN (10, 12);
-- Sums the time difference between an end-of-query event
-- (either 10 RPC:Completed or 12 SQL:BatchCompleted)
-- and the next query-starting event
-- (either 11 RPC:Starting or 13 SQL:BatchStarting)


Investigating latency and bandwidth issues

If the application is fast when run locally, it looks like you have network issues. At this point, you will need to know the latency between the application and SQL Server. You can get a rough idea of this from a ping, which will tell you the time it takes to make a round trip between the two. Try and take the measurement when the network is at low load as high network load can increase ping times.

If you count the number of queries the application issues, you can work out how much time is taken by latency.

To get the number of queries from Wireshark, you can apply the following filter and then look at the “displayed” count in the status bar:

(tds.type==0x01||tds.type==0x03||tds.type==0x0E)&&tds.packet_number==1

To get the number of queries in SQL Profiler, create a trace table as described previously, and run the following query:

SELECT COUNT(1) FROM TraceTable WHERE EventClass IN (11, 13);

You need to multiply this query count by the network latency (the ping value). For example, if the application sends 100 queries and your network latency is 60ms then the total transit time is 100 * 60 = 6000ms (6 seconds), whereas on a LAN it would take 100 * 1 = 100ms (0.1 second).
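If you prefer to let SQL Server do the arithmetic, the following sketch reuses the trace table from the previous section; the 60 ms value is just the example latency measured above.

-- Estimated transit time = number of queries x measured round-trip latency.
DECLARE @LatencyMs INT = 60;   -- substitute your own ping time
SELECT COUNT(*) * @LatencyMs AS EstimatedTransitTimeMs
FROM TraceTable
WHERE EventClass IN (11, 13);  -- 11 = RPC:Starting, 13 = SQL:BatchStarting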

This should tell you if latency is your issue. If it’s not, then you have a bandwidth issue.

Wait a moment, though. We haven't explicitly seen a bandwidth issue; we've just ruled out the other issues. How do we confirm it? Great question. It's a bit fiddlier, I'm afraid.

If you have a network-level device with traffic monitoring, and a dedicated connection to the SQL server, you can look and see if your workflow is saturating the available bandwidth.

Alternatively, you need to look at how much bandwidth the application uses when you know you don’t have a bandwidth bottleneck. To do this, you again need to run the application close to the database, capture the packets in Wireshark, and examine the bandwidth used by the application. Again, make sure you’re not running any other local SQL applications other than the one you’re trying to capture.

Once you have completed the capture in Wireshark:

  1. Use the filter: tds
  2. Click Statistics | Conversations and tick the box “Limit to display filter“. You should then see your App workflows conversation in the conversations window.
  3. The bandwidth used is shown as “Bytes A -> B” and “Bytes B -> A“

Repeat the capture while running the application over the high latency network, and look again at the bandwidth used. If there is a large discrepancy between the two, then you are probably bandwidth constrained.

Of course, for an accurate comparison, you need to be running SQL Server and the application on similar hardware, in both tests. If, for example, SQL Server is running on less powerful hardware, it will generate less traffic across the network in a given time.

Root cause analysis

It's quite possible you have multiple problems! However, after going through the steps outlined above, you should be able to account for all the time being spent to process the workflow. If the 10 seconds of processing time turns out to comprise 6 seconds of SQL processing time, 3 seconds of transit time, and 1 second of application processing time, then you know how to prioritize your investigations.

If the main problem is slow SQL processing times, then there is a lot of information out there about tuning and tracking down issues. For example, since we already captured a Profiler trace, Gail Shaw’s articles give a good overview of how to find the procedures and batches within the trace that contributed most to performance problems. Also, Jonathan Kehayias’s book is great for a deeper dive into troubleshooting common performance problems in SQL Server.

Conversely, if most of the time is being spent in client-side processing, you may want to consider profiling your application code to locate the issues. There are lots of profiling tools out there depending on your programming language (for example, for .NET languages you can use ANTS from Redgate or dotTrace from JetBrains).

If you’re suffering from network bandwidth issues, then you may need to limit the size of the data you’re requesting. For example, don’t use “SELECT *” when you’re requesting data. Return only the necessary columns, and use WHERE or HAVING filters to return only the necessary rows.
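As a simple illustration, with a hypothetical invoices table, the difference looks like this:

-- Instead of dragging every column and row across the network:
-- SELECT * FROM dbo.Invoices;
-- return only the columns and rows the screen actually needs:
SELECT InvoiceID, CustomerID, InvoiceDate, TotalDue
FROM dbo.Invoices
WHERE InvoiceDate >= DATEADD(MONTH, -3, GETDATE());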

A very common cause of performance problems, in our experience, is running “chatty” applications over high latency networks. A chatty application is one that sends many duplicate and unnecessary queries, making way more network round trips than necessary.

Often, these applications were originally developed on, and deployed to, a high-speed LAN and so the ‘chattiness’ never really caused a problem. What happens though, when the data moves to a different location, such as to the Cloud? Or a customer on a different continent tries to access it? Or you need to build geographically diverse disaster recovery environments? If you consider every query on a 1ms LAN will be 60x slower on a 60ms WAN, you can see how this can kill your performance.

In short, when writing a Client/Server application, you need to avoid frequent execution of the same queries, to minimize the number of necessary round trips to collect the required data. The two most common approaches to this are:

  • Rewriting the code – for example, you can aggregate and filter multiple data sets on the server to avoid having to make a query per data set (see the sketch after this list), although it's not always possible to change the application
  • Using query prefetching and caching – there are WAN optimization tools that will do this, but they are sometimes expensive, and hard to configure to get high performance, without introducing bugs into the application
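As a hypothetical example of the first approach, an app that runs a separate "total for this customer" query for every row on screen could instead fetch all the totals in a single round trip:

-- Hypothetical schema: one aggregated query replaces hundreds of per-customer queries.
SELECT c.CustomerID,
       c.CustomerName,
       SUM(i.TotalDue) AS TotalInvoiced
FROM dbo.Customers AS c
    JOIN dbo.Invoices AS i
        ON i.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.CustomerName;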

We’ve done a lot of research into these problems, while developing our Data Accelerator tool, and have adopted an approach that uses machine learning to predict what your application is about to do, and prefetch the required data, so it’s ready just in time for when the application requests it.

In Conclusion

Make sure you work out where your problem is before you spend a lot of time and money on a possible solution. We've seen companies spend vast amounts of money and man hours optimizing SQL queries, when their biggest problems have been application performance issues. Conversely, we've seen companies putting more and more RAM or CPUs into a SQL Server when this will never make up for the extra time the network latency is adding. If you can pinpoint where the workflow processing time is really being spent, you can direct your time and effort in the right way.

Hopefully this has given you some ideas of how to investigate your own app’s performance, or start to track down any issues you might have.

SQL – Simple Talk

Slow cooker

March 27, 2017   Humor

Posted by Krisgo

[Image: Slow cooker]



About Krisgo

I'm a mom, that has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, member of the community band, stay-at-home-mom to full time worker, I've done it all – almost! I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting – come back soon!

Deep Fried Bits