Business Intelligence Info

Tag Archives: Know

New to Formula One Racing? The Top 5 Facts You Need to Know

April 3, 2021   TIBCO Spotfire

Reading Time: 3 minutes

There are some big changes happening in Formula One, which means it’s a great time to start watching if you are new to the circuit. With the recent Mercedes-AMG Petronas car launch, new 2021 regulations, and shifting race schedules, there’s a lot to catch up on for long-time fans and newbies alike!

Want to impress your friends with your F1 knowledge? Here are the top 5 facts new F1 fans need to know.

1. The Language of Formula One: Explained 

Formula One has a lot of industry-specific terms, which have come to the forefront with the recent car launches. New regulations talk a lot about downforce, dirty air, and floor edges. But what are they and what do they mean?

Here are some key terms, explained:

  • Downforce, also called negative lift, pushes the car onto the track and helps with grip
  • Dirty Air, related to slipstream, is the wind turbulence caused by the car in front–allowing the car behind it to reduce drag
  • Floor Edges refers to the parts of the bottom of the racecar that help with aerodynamics (new regulations have changed what this can look like in 2021)

2. Racing All Over the Map: Why It Matters

If you’ve ever watched James Bond, you’re familiar with the prestige of Monaco. The flashing lights, shiny cars, bedazzled stars–all that is true of a race weekend–but don’t let the glamour fool you. Racing is hard work.

The most important thing you need to know about Formula One circuits is that each racetrack comes with its own set of challenges, whether it’s altitude, angles, speed, or climate–each team has to meticulously prepare for every race.

Here are some quick facts about some of the hardest tracks:

  • Monaco Grand Prix: Cars race through the streets of Monaco, taking tight turns onto narrow alleys and through a tunnel–it’s the only race that doesn’t adhere to all F1 safety standards.
  • Bahrain Grand Prix: This race has tricky turns and major environmental factors due to the sandy desert conditions. Sand blowing on the track can reduce visibility and traction.
  • Singapore Grand Prix: Imagine sauna-like humidity and heat along with grueling driving conditions and blinding lights for two hours straight–this is the Singapore Grand Prix. Just one mistake can wreck a car–no runoffs included.

3. What’s With All the Flags? A Quick Explanation

Flags identify when cars must slow down or exit the track based on external conditions. Believe it or not, racing flags can actually change the outcome of a Grand Prix. A black flag can end a race for the best of drivers. Here’s what they mean:

[Image: chart of F1 racing flags and what each flag means]

4. Formula One in the Media: What to Watch

Formula One is no stranger to the movies and every die-hard fan has seen them all. The easiest way to understand the sport (and your racing friends) is to see F1 in action both on and off the big screen.

Here are some of the most famous movies and tv shows you can binge:

5. Mercedes Reigns Supreme: the Ultimate Racing Champions

So who should you keep an eye out for during race weekends?

Mercedes-AMG Petronas is a 7-time world champion in F1. Their premiere driver, Lewis Hamilton, broke over 4 major F1 records just last year. And over our five years of partnership, the Team has amassed more than 50 race wins by using data insights to inform car design, race strategy, and driver performance.

For the latest information on Formula One, keep up to date here, where you can learn how the Mercedes-AMG Petronas Formula One Team uses data and analytics as a competitive advantage to fuel victory on and off the track. 

And follow along on all the excitement with TIBCO on our LinkedIn, Facebook, Twitter, and Instagram accounts with the hashtag #TIBCOfast. See you in Bahrain on the 26th!

The TIBCO Blog

Knowledge is Power! How to Know What’s Important to Your Business

February 19, 2021   TIBCO Spotfire

Reading Time: 3 minutes

Know your customers! Your people are your most valuable asset! Your company is only as good as your products! While there is wisdom in these business truisms, they are easier said than done when it comes to your most valuable assets–your customers, employees, products, and more.

In Meditationes Sacrae and Human Philosophy in 1597, Sir Francis Bacon wrote that “Knowledge is Power.” To empower your engagement with your customers, employees, and products, start by seeking greater knowledge.

Why Knowledge is Powerful for Customer Engagement:

Knowledge is definitely power when engaging, delighting, and creating intimate relationships with customers. But gaining this knowledge requires an all-encompassing 360-degree view of your customer data.  

For example, as a service provider, you have a wide variety of data—cable, broadband, and mobile usage, billing, operations support, and service history data. But unless this data is properly accessed and aligned, it won’t help you delight or better serve your customers. And doing it wrong might actually annoy them, for example, pitching your customers something they already own or that bears no relevance to them.

Capturing your customer data, gift-wrapping it with business insight, and presenting it back to them as a relevant offer informed by their unique situation is much more effective than your guesses about their interests. Knowing your customers helps ensure that every point of engagement is a path to their delight, loyalty, pocketbooks, and a boon to your bottom line. 

Why Knowledge is Powerful for Innovation:

Now apply this to a 360-degree view of employees, where your goal is to maximize your people’s innovation and productivity. With an overarching view of their skills, experiences, interests, and aspirations, you can optimize your talent pool. 

For example, if you are a pharmaceutical firm working on a new drug, a 360-degree view of your R&D programs might uncover bottlenecks, as well as staff member capabilities you can use to keep them on track. That’s a productivity win, a time to market win, and a business win.

Why Knowledge is Powerful for Process Optimization:

Now, consider how a similar 360-degree view of your products can help you optimize your supply chain. With a complete picture of all your supply sources and demand destinations, you have all the data you need to build the right product and deliver it to the right people, at the right time.  

With this up-to-the-minute supply chain snapshot, you will never build more than what you can sell, and you can quickly divert products wherever needed if things change. Real-time and correct information lets you lean out the supply chain to increase turns, reduce carrying costs, and eliminate waste. 

Why Knowing is Difficult

But gathering and garnering this kind of knowledge is not easy when your customer data is distributed, diverse, unaligned, and siloed across multiple systems. For example, is customer X in your billing system the same customer X across your call center, operations support, and marketing systems?

Where MDM fits:

Your data will remain stuck in silos if it is not rationalized. This is where Master Data Management (MDM) comes in. MDM is the catalyst for you to rationalize your diverse customer IDs, product IDs, employee IDs, and more together, thereby creating the linkages you need to know what is truly common and what is not. 

As a result, your most important shared data—your customers, employees, suppliers, assets, locations, materials, products, legal entities, financial accounts, reference data—is consistent and accurate, no matter when it was captured and where it currently resides.   
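To make the idea of those linkages concrete, here is a minimal, purely illustrative T-SQL sketch. The tables and identifiers (CustomerMaster, CustomerCrosswalk, the 'Billing' source system, 'X-1001') are hypothetical, not the schema of any particular MDM product; a real MDM tool manages the matching, merging, and survivorship rules for you.

CREATE TABLE dbo.CustomerMaster (
    MasterCustomerID INT PRIMARY KEY,
    FullName         NVARCHAR(200),
    Email            NVARCHAR(200)
);

-- One row per source-system record, linked to the single mastered customer
CREATE TABLE dbo.CustomerCrosswalk (
    MasterCustomerID INT NOT NULL
        REFERENCES dbo.CustomerMaster (MasterCustomerID),
    SourceSystem     NVARCHAR(50) NOT NULL,   -- e.g. 'Billing', 'CallCenter', 'Marketing'
    SourceCustomerID NVARCHAR(50) NOT NULL,
    PRIMARY KEY (SourceSystem, SourceCustomerID)
);

-- Given a billing ID, find the same customer's identifiers in every other system
SELECT cm.MasterCustomerID,
       cm.FullName,
       cw.SourceSystem,
       cw.SourceCustomerID
FROM dbo.CustomerCrosswalk AS billing
INNER JOIN dbo.CustomerMaster AS cm
    ON cm.MasterCustomerID = billing.MasterCustomerID
INNER JOIN dbo.CustomerCrosswalk AS cw
    ON cw.MasterCustomerID = cm.MasterCustomerID
WHERE billing.SourceSystem = 'Billing'
  AND billing.SourceCustomerID = 'X-1001';

Once every system's ID resolves to one master ID, the 360-degree views described below have a reliable spine on which to hang the detail data.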

Where Data Virtualization fits:

Leveraging your MDM foundation, you can use data virtualization (DV) to integrate related non-master data required to complete your 360-degree views. With DV, you can create a complete history of customer engagement with as much detail as you require. And because it is virtual, you can leave the data where it already resides, and yet still query it and deliver it whenever a 360-degree view is needed.

Armed with the complete picture, you can personalize customer service, make the smartest next best offer, and engage the way your customers prefer. It helps your iPad-using repairman know they have the right parts when making a service call. It empowers your marketers to efficiently offer services they know, not just hope, your customers will want. It empowers your call center reps to focus on how to help, because they already know about the customer's past experiences and current situation.

With DV, you also eliminate the need for your application developers to learn how to access the data their business applications consume. Instead, your call center developers can focus 100% on optimizing call center rep productivity. Ecommerce developers can focus on your shopping experience. Point of sale developers can focus on ease of use. All of them build on your easy-to-understand, easy-to-use, DV-enabled 360-degree data views.


Know What’s Important to your Business 

Knowledge becomes power when you can use your data to help your business. And combining master data management and data virtualization can accelerate your education. 

Up your knowledge by exploring these resources and learn how to gain a competitive advantage with a comprehensive, modern data management strategy.  

The TIBCO Blog

How to know if federated learning should be part of your data strategy

February 13, 2021   Big Data



AI researchers and practitioners are developing and releasing new systems and methods at an extremely fast pace, and it can be difficult for enterprises to gauge which particular AI technologies are most likely to help their businesses. This article — the first part of a two-part series — will try to help you determine if federated learning (FL), a fairly new piece of privacy-preserving AI technology, is appropriate for a use case you have in mind.

FL’s core purpose is to enable use of AI in situations where data privacy or confidentiality concerns currently block adoption. Let’s unpack this a bit. The purpose of AI systems/methods/algorithms is to take data and autonomously create pieces of software, AI models, that transform data into actionable insight. Modern AI methods typically require a lot of data, collect all the necessary data at some central location, and run a learning algorithm on the data to learn/create the model.

However, when that data is confidential, siloed, and owned by different entities, gathering the data at a central location is not possible. Federated learning very cleverly gets around this problem by moving the learning algorithm or code to the data, rather than bringing the data to the code.

As an example, consider the problem of creating an AI model to predict whether patients have COVID-19 given their lung CT-scans. The data for this problem, the CT-scans, is obviously confidential and is owned by many different entities — hospitals, medical facilities, and research centers. It is also stored all across the world, under many different jurisdictions. You want to combine all of that data because you want your AI models to detect every form in which the disease manifests in CT-scans. However, combining all this data in one location is not possible for obvious confidentiality and jurisdictional reasons.

Here’s how federated learning bypasses the confidentiality issue: A worker node, which is a computing system capable of machine learning, is deployed at each hospital or facility. This worker node has full access to the confidential data at the location. During federated learning, each such worker node creates a local AI model using the data at the facility and sends over the model to the central FL server. Note that the confidential data never leaves the client site — only the model does. A central server combines all of the insights from the local models into a single global model. This global model is sent back to all the local worker nodes, which now have insights from all the other worker nodes without having seen their data. When you iterate on these steps many times, you end up with a model that is, in many cases, equivalent to a model that would have been built if you’d trained it on all the data in the same place. (See a schematic illustration of the process below.)

Federated learning was initially developed by Google as a way to train the Android keyboard app (GBoard) to predict what the user will type next. Here the confidential data being used is the text that the user is typing. However, it turns out that the data confidentiality issue appears in many guises across industries. Indeed, you will face this problem if you are

* a maker of autonomous vehicles and you want to combine confidential video and image data from across your vehicles to build a better vision system,

* a bank and want to build a machine learning based anti-money laundering system by combining data from various jurisdictions, possibly even other banks,

* a supply chain platform provider and you want to build a better risk assessment or route optimization system by combining confidential sales data from multiple businesses,

* a cell service provider and you want to build a machine learning model to optimize routes by combining confidential data from cell towers,

* a consortium of farmers and you want to build a model to detect crop disease using confidential disease data from the members of your consortium,

* a consortium of additive manufacturers and you want to build AI-based process controllers and quality assurance systems using confidential build data from members of your consortium.

An important point to note about the above examples is that the owner of the data can be different units of the same organization but in different jurisdictions (as in the bank and cell service provider examples) or clients of the same organization (as in the autonomous vehicle maker and supply chain service provider examples), or completely independent units (as in the consortium of farmers and additive manufacturer examples above, and the hospitals/COVID-19 example described earlier).

You can use the following checklist to see if federated learning makes sense for you:

I will end by emphasizing that, if you are able to combine your data into a central location, you should probably go for that option (barring cases where you want to, for instance, future-proof your solution). Centralizing your data may make for better final model performance than you’ll be able to achieve with federated learning, and the effort required to deploy an AI solution will be significantly lower. However, if it turns out that FL is just what you need to remove your barrier to AI adoption, part two of this series will provide you with an overview of how to do that.

M M Hassan Mahmud is a Senior AI and Machine Learning Technologist at Digital Catapult, with a background in machine learning within academia and industry.

Big Data – VentureBeat

DBA in training: Know your server’s limits

December 11, 2020   BI News and Info

The series so far:

  1. DBA in training: So, you want to be a DBA…
  2. DBA in training: Preparing for interviews
  3. DBA in training: Know your environment(s)
  4. DBA in training: Security
  5. DBA in training: Backups, SLAs, and restore strategies
  6. DBA in training: Know your server’s limits

Having taken steps to map your database applications to the databases and address your security and backups, you need to turn your attention to your server’s limits.

What do I mean by limits? Certainly, this is an allusion to how you will monitor your server drive capacity, but I also mean how you will track resource limits, such as latency, CPU, memory, and wait stats. Understanding all of these terms, what normal values are for your server, and what to do to help if the values are not normal, will help to keep your servers as healthy as possible.

These measures are big, in-depth topics in and of themselves. This will only serve to get you started. Links to more in-depth resources are included with each topic, and you will doubtless find others as you progress through your career.

Drive Space

Whether you are hosting servers on-prem or in the cloud, and however your drives may be configured, you need to know how much space your files are consuming, and at what rate. Understanding these measures is essential to helping you to both manage your data (in case you find that it is time to implement archiving, for instance) and your drives (i.e., you’ve managed your data as carefully as you can, and you simply need more space). It is also vital to help you to plan for drive expansion and to provide justification for your requests. Whatever you do, avoid filling the drives. If your drives fill, everything will come to a screeching halt, while you and an unhappy Infrastructure team drop everything to fix it. If you are using Azure Managed Instances, you can increase the space as well. Storage limits and pricing in the cloud will depend on a number of factors – too many to explore here.

How can you monitor for drive capacity? Glenn Berry to the rescue! His diagnostic queries earned him the nickname “Dr. DMV”, and they are indispensable when assessing the health of your servers. They consist of nearly 80 queries, which assess nearly anything you can imagine at the instance and database levels. He is good about updating these once a month, and they work with Azure as well as SQL Server. If you do not like manually exporting your results to Excel and prefer using PowerShell instead, his queries work with that as well. This should get you started. This example (Query 25 of his SQL Server 2016 Diagnostic Information Queries) will give you the information you need for drive space:

SELECT DISTINCT
       vs.volume_mount_point,
       vs.file_system_type,
       vs.logical_volume_name,
       CONVERT(DECIMAL(18, 2), vs.total_bytes / 1073741824.0) AS [Total Size (GB)],
       CONVERT(DECIMAL(18, 2), vs.available_bytes / 1073741824.0) AS [Available Size (GB)],
       CAST(CAST(vs.available_bytes AS FLOAT) /
            CAST(vs.total_bytes AS FLOAT) AS DECIMAL(18, 2)) * 100 AS [Space Free %]
FROM sys.master_files AS f WITH (NOLOCK)
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id]) AS vs
OPTION (RECOMPILE);

Tracking the results of this diagnostic query should help to get you started in monitoring your space and checking where you are. Regular tracking of your drive space will help you to see how quickly it is growing and to help you plan when (and how much) to expand them.

To help you track your database growth, you might try something like this query, which I have used countless times. It comes from here and is based on backup file sizes:

DECLARE @startDate DATETIME;
SET @startDate = GETDATE();

SELECT PVT.DatabaseName
    ,PVT.[0]
    ,PVT.[-1]
    ,PVT.[-2]
    ,PVT.[-3]
    ,PVT.[-4]
    ,PVT.[-5]
    ,PVT.[-6]
    ,PVT.[-7]
    ,PVT.[-8]
    ,PVT.[-9]
    ,PVT.[-10]
    ,PVT.[-11]
    ,PVT.[-12]
FROM (
    SELECT BS.database_name AS DatabaseName
        ,DATEDIFF(mm, @startDate, BS.backup_start_date) AS MonthsAgo
        ,CONVERT(NUMERIC(10, 1), AVG(BF.file_size / 1048576.0)) AS AvgSizeMB
    FROM msdb.dbo.backupset AS BS
    INNER JOIN msdb.dbo.backupfile AS BF
        ON BS.backup_set_id = BF.backup_set_id
    WHERE BS.database_name NOT IN (
            'master'
            ,'msdb'
            ,'model'
            ,'tempdb'
            )
        AND BS.database_name IN (
            SELECT db_name(database_id)
            FROM master.SYS.DATABASES
            WHERE state_desc = 'ONLINE'
            )
        AND BF.[file_type] = 'D'
        AND BS.backup_start_date BETWEEN DATEADD(yy, -1, @startDate) AND @startDate
    GROUP BY BS.database_name
        ,DATEDIFF(mm, @startDate, BS.backup_start_date)
    ) AS BCKSTAT
PIVOT(SUM(BCKSTAT.AvgSizeMB) FOR BCKSTAT.MonthsAgo IN (
            [0]
            ,[-1]
            ,[-2]
            ,[-3]
            ,[-4]
            ,[-5]
            ,[-6]
            ,[-7]
            ,[-8]
            ,[-9]
            ,[-10]
            ,[-11]
            ,[-12]
            )) AS PVT
ORDER BY PVT.DatabaseName;

This gives you an idea of how quickly the databases on your servers have grown over the last twelve months. It can also help you to predict trends over time if there are specific times of year that you see spikes that you need to get ahead of. Between the two, you will have a much better idea of where you stand in terms of space. Before asking for more though, your Infrastructure and network teams will thank you if you carefully manage what you have first. Look at options to make the best use of the space you have. Perhaps some data archival is an option, or compression would work well to reduce space. If you have a reputation for carefully managing space before asking for more, you will have less to justify when you do make the request.

If you have SQL Monitor, you can watch disk growth and project how much you will have left in a year.

[Image: SQL Monitor disk usage page showing current and projected disk space]

Know Your Resource Limits

You should now have some idea of how much space you currently have and how quickly your databases are consuming it. Time to look at resource consumption. There are a host of metrics that assess your server’s resource consumption – some more useful than others. For the purposes of this discussion, we will limit it to the basics – latency, CPU, and memory.

Latency

Latency means delay. There are two types of latency: Read latency and write latency. They tend to be lumped together under the term I/O latency (or just latency).

What is a normal number for latency, and what is bad? Paul Randal defines bad latency as starting at 20 ms, but after you assess your environment and tune it as far as you can, you may realize that 20 ms is your normal, at least for some of your servers. The point is that you know what and where your latencies are, and you work toward getting that number as low as you possibly can.

Well, that sounds right, you are probably thinking. How do you do that? You begin by baselining – collecting data on your server performance and maintaining it over time, so that you can see what is normal. Baselining is very similar to a doctor keeping track of your vital signs and labs. It’s common knowledge that 98.6 F is a baseline “normal” temperature, for instance, but normal for you may be 97.8 F instead. A “normal” blood pressure may be 120/80, but for you, 140/90 is as good as it gets, even on medication. Your doctor knows this because they have asked you to modify your diet, exercise and take blood pressure medication, and it is not going down any more than that. Therefore, 140/90 is a normal blood pressure for you. Alternatively, maybe you modified your diet as much as you are willing to, but are faithful to take your medications, and you exercise when you think about it. In that case, your blood pressure could still go down some, but for now, 140/90 is your normal.

The same is true for your servers. Maybe one of your newer servers is on the small side. It does not have a lot of activity yet, but historical data is in the process of back loading into one of the databases for future use. It has 5 ms of read latency and 10 ms of write latency as its normal.

Contrast that with another server in your environment, which gets bulk loaded with huge amounts of data daily. The server is seven years old and stores data from the dawn of time. The data is used for reports that issue a few times a day. It has 5 ms of read latency, but 30 ms of write latency. You know that there are some queries that are candidates for tuning, but other priorities are preventing that from happening. You also realize that this is an older server approaching end of life, but there is no more budget this year for better hardware, so 30 ms of write latency is normal – at least for now. It is not optimal, but you are doing what you can to stay on top of it. The idea is to be as proactive as possible and to spare yourself any nasty surprises.

To understand your baselines, you must collect your data points on a continuous basis. If you are new and no one is screaming about slowness yet, you might have the luxury of a month to begin your determination of what is normal for your environment. You may not. Nevertheless, start collecting it now, and do not stop. The longer you collect information on your I/O latency (and the other points discussed in this article), the clearer the picture becomes. Moreover, you can measurably demonstrate the improvements you made!
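A simple way to collect those latency data points is the sys.dm_io_virtual_file_stats DMV, which exposes cumulative read and write stalls per database file. This is a minimal sketch; because the counters accumulate from the last instance restart, capture the output on a schedule and difference successive snapshots to build your baseline.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END   AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.[file_id] = vfs.[file_id]
ORDER BY avg_write_latency_ms DESC;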

If you find that your latency is in the problem zone, the article I referred to before has some great places to begin troubleshooting. Try to be a good citizen first and look at all the alternatives Paul suggests before throwing hardware at the problem. Many times, you are in a position to help. I once had a situation where we were implementing some software, and a developer wrote a “one ring to rule them all” view that wound up crashing the dev server – twice. By simply getting rid of unnecessary columns in the SELECT statement, we reduced the reads in the query from over 217 million to about 344,000. CPU reduced from over 129,000 to 1. If we could have implemented indexing, we could have lowered the reads to 71. On those days, you feel like a hero, and if your server could speak, you would hear the sigh of relief from taking the weight off its shoulders.

Other query issues can also lead to unnecessary latency. One place to check is your tempdb. Here, you want to look for queries inappropriately using temporary structures. You may find, for instance, that temp tables are loaded with thousands of rows of data that are not required, or they are filtered after the temp table is already populated. By filtering the table on the insert, you will save reads – sometimes, a lot of them! You could find a table variable that would perform better as an indexed temp table. Another place to look is at indexing. Duplicate indexes can cause latency issues, as well as bad indexing, which causes SQL Server to throw up its hands and decide that it would be easier to read the entire table rather than to try to use the index you gave it.

CPU

Think of CPU as a measure of how hard your queries are making SQL Server think. Since SQL Server is licensed by logical core, that may lead you to wonder what the problem is with using your CPU. The answer is, nothing – as long as it is normal CPU usage.

So, what is “normal”? Again, baselining will tell you (or your server monitoring software will). Look for sustained spikes of high CPU activity rather than short spurts. Needless to say, if your CPU is pegged at 100% for 20 minutes, that is a problem! On the other hand, if you see 90% CPU usage for short spurts of time, that may be okay. If you do find a server with CPU issues, sp_BlitzCache is helpful to track down possible problem queries. You can sort by reads or CPU. Even better, you will get concrete suggestions to help.
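If you also want to look directly at the plan cache alongside sp_BlitzCache, this minimal sketch pulls the top CPU consumers from sys.dm_exec_query_stats. Keep in mind it only covers plans still in cache, so treat it as a sample rather than a complete history.

SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.[text], (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.[text])
                  ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;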

If you have SQL Monitor, you can also sort the top queries by CPU time to find queries taking the most CPU time.

[Image: SQL Monitor top 10 queries screen]

Some of the most insidious consumers of CPU are implicit conversions. Implicit conversions occur when SQL Server must compare two different data types, usually in a JOIN or an equality comparison. SQL Server will try to figure out the “apples to oranges” comparison for you using something called data type precedence, but you will pay in CPU for SQL Server to figure this out – for every agonizing row.

Implicit conversions are not easy to see. Sometimes, the two columns in the implicit conversion have the same name, but under the covers have two different data types. Or it can be more subtle – for instance, an NVARCHAR value without the “N’” used. Worse, you won’t even always see them on execution plans unless you go through the XML, so without monitoring for them, you may never know that you have an issue with them. Yet these invisible performance killers can peg your CPU. Running sp_BlitzCache on your servers will find these and help you with them.
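As a small illustration (the table and column names here are hypothetical), comparing a VARCHAR column to an NVARCHAR literal is all it takes to force a CONVERT_IMPLICIT on the column side and, usually, a scan instead of a seek:

-- Assume dbo.Customers.AccountNumber is VARCHAR(20) and indexed
SELECT CustomerID
FROM dbo.Customers
WHERE AccountNumber = N'10003';   -- NVARCHAR literal: the column is implicitly converted on every row

-- Matching the literal's type to the column keeps the predicate sargable
SELECT CustomerID
FROM dbo.Customers
WHERE AccountNumber = '10003';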

High numbers of recompiles can also cause CPU issues. You might encounter these when code contains the WITH RECOMPILE hint to avoid parameter sniffing issues. If you have stored procedures using WITH RECOMPILE at the top of the procedure, one thing you can try is to see if you have any other alternatives. Maybe only part of the sproc needs the recompile hint instead of the whole thing. It is possible to use the recompile hint at the statement level instead of for the entire stored procedure. On the other hand, maybe a rewrite is in order. BlitzCache will catch stored procedures with RECOMPILE and bring them to your attention.

Memory

When discussing memory issues in SQL Server, a good percentage of DBAs will immediately think of the Page Life Expectancy (PLE) metric. Page life expectancy is a measure of how long a data page stays in memory before it is flushed from the buffer pool. However, PLE can be a faulty indicator of memory performance. For one thing, PLE is skewed by bad execution plans where excessive memory grants are given but not used. In this case, you have a query problem rather than a true memory pressure issue. For another, many people still go by the dated value of 300 seconds as the low limit of PLE, which was an arbitrary measure when first introduced over twenty years ago – it should actually be much higher. How much? It depends on your baseline. If you really love PLE and rely on it as an indicator anyway, look for sustained dips over long periods, then drill down to find out their causes. Chances are that it will still be some bad queries, but the upside is that you may be able to help with that.
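If you do decide to track PLE as part of your baseline, it is exposed through sys.dm_os_performance_counters; a minimal sketch:

SELECT [object_name],
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';  -- on NUMA hardware, also check the per-node '%Buffer Node%' counters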

What other things might be causing memory pressure? Bad table architecture can be the culprit. Wide tables with fixed rows that waste space still have to be loaded (with the wasted space!) and can quickly become a problem. The fewer data pages that can be loaded at a time, the less churn you will see in your cache. If possible, try to address these issues.

While you are at it, check your max memory setting. If it is set to 2147483647, that means that SQL Server can use all the memory on the OS. Make sure to give the OS some headspace, and do not allow any occasion for SQL Server to use all the memory.
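You can check the current setting with sp_configure, and change it if needed. The value below is only an example; size it for your own server, leaving headroom for the operating system:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Current setting; 2147483647 means SQL Server is effectively unlimited
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap SQL Server at 28 GB on a 32 GB server
-- EXEC sp_configure 'max server memory (MB)', 28672;
-- RECONFIGURE;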

If you are using in-memory OLTP, there are additional things for you to consider. This site will help you with monitoring memory usage for those tables.

Bad indexing can be another possible issue. Here, look for page splits and fragmentation, or missing indexes. If SQL Server can use a smaller copy of the table (a nonclustered index) rather than loading the entire table into memory, the benefits become obvious!

If none of these issues apply to you (or if you find that you just do not have enough memory), you may need to throw some hardware at it. There is nothing wrong with the need for hardware, if it is the proven, best solution.

Summary

Before you can tell whether your SQL Server is performing as well as expected, you need to know what normal performance is for that server. This article covered the three main resources (disk latency, CPU, and memory) that you should baseline and continue to monitor over time.

SQL – Simple Talk

You must know about this shortcut key in #PowerBI Desktop

December 7, 2020   Self-Service BI

Working with the field list on a large model in Power BI Desktop can quickly leave you with a lot of expanded tables, which you then end up collapsing one by one.


Don’t do that

Even though all that clicking might improve your chances of beating your kids in Fortnite – it probably won’t – do one of the following instead.

If you want to use your mouse

Click the show/hide pane in the header of the Fields panel


This will collapse all expanded tables in the field list at once – plus if you have used the search field – it will clear that as well.

But if you want to do it using the keyboard, use

ALT + SHIFT + 1

This will collapse all the expanded tables as well.

Here is a link to the documentation about shortcut keys in Power BI Desktop – run through them – there might be some that can save you a click or two:

Keyboard shortcuts in Power BI Desktop – Power BI | Microsoft Docs

Erik Svensen – Blog about Power BI, Power Apps, Power Query

Biden All-Female Communications Team Won’t Tell Nation What’s Wrong, Nation Should Already Know

December 2, 2020   Humor

WASHINGTON, D.C.—Biden’s transition team has announced they will be appointing an all-female communications team. According to sources, the team will not tell the nation what’s wrong, since the nation should already know.

“It’s fine. Everything’s fine. Nothing’s wrong, OK!?” said Jen Psaki in her first press conference as a part of Biden’s team. “Why would you think I’m not fine? Ugh… if you have to ask, I’m not going to tell you.”

Insiders close to Biden say the communications team will hold periodic press conferences where they will just glare at reporters with an icy look and make them try to guess what’s wrong. If the reporters fail to understand their highly advanced non-verbal communication, they will smile sweetly and walk out of the room before slamming the door as hard as they can.

“This is a huge step for this country,” said Communication Director Kate Bedingfield to reporters. “We need to move beyond archaic and male-centric methods of communication that use things like clear language and written words. We hope this will help deepen the country’s level of intimacy with the Biden administration and open up new channels of understanding and communication.”

The press has been frantically buying flowers, chocolates, and jewelry for the communications team in hopes of receiving some clue as to what the heck is going on. The team responded by rolling their eyes and going to bed early due to a really bad headache.

ANTZ-IN-PANTZ ……

What enterprise CISOs need to know about AI and cybersecurity

November 19, 2020   Big Data


Hari Sivaraman is the Head of AI Content Strategy at Venturebeat.


Modern day enterprise security is like guarding a fortress that is being attacked on all fronts, from digital infrastructure to applications to network endpoints.

That complexity is why AI technologies such as deep learning and machine learning have emerged as a game-changing defensive weapon in the enterprise’s arsenal over the past three years. There is no other technology that can keep up. It has the ability to rapidly analyze billions of data points, and glean patterns to help a company act intelligently and instantaneously to neutralize many potential threats.

Beginning about five years ago, investors started pumping hundreds of millions of dollars into a wave of new security startups that leverage AI, including CrowdStrike, Darktrace, Vectra AI, and Vade Secure, among others. (More on these companies lower down).

But it’s important to note that cyber criminals can themselves leverage increasingly easy-to-use AI solutions as potent weapons against the enterprise. They can unleash counter attacks against AI-led defenses, in a never-ending battle of one-upmanship. Or they can hack into the AI itself. After all, most AI algorithms rely on training data, and if hackers can mess with the training data, they can distort the algorithms that power effective defense. Cyber criminals can also develop their own AI programs to find vulnerabilities much faster than they used to, and often faster than the defending companies can plug them.

Humans are the strongest link

So how does an enterprise CISO ensure the optimal use of this technology to secure the enterprise? The answer lies in leveraging something called Moravec’s paradox, which suggests that tasks that are easy for computers/AI are difficult for humans and vice-versa. In other words, combine the best technology with the CISO’s human intelligence resources.

If clear guidelines can be distilled in the form of training data for AI, technology can do a far better job than humans at detecting security threats. For instance, if there are guidelines on certain kinds of IP addresses or websites that are known for being the source of malicious malware activity, the AI can be trained to look for them, take action, learn from this, and become smarter at detecting such activity in the future. When such attacks happen at scale, AI will do a far more efficient job of spotting and neutralizing such threats compared to humans.

On the other hand, humans are better at judgement-based daily decisions, which might be difficult for computers. For instance, let’s say a particular well-disguised spear phishing email mentions a piece of information that only an insider ‘could’ have known. A vigilant human security expert with that knowledge and intelligence will be able to connect the dots, detect that this is ‘probably’ an insider attack, and flag the email as suspicious. It’s important to note that, in this instance, AI will find it difficult to perform this kind of abductive reasoning and arrive at such a decision. Even if you cover some such use cases with appropriate training data, it is nigh on impossible to cover all the scenarios. As every AI expert will tell you, AI is not quite ready to replace human general intelligence, or what we call ‘wisdom’, in the foreseeable future.

But…humans could also be the weakest link

At the same time, humans can be your weakest link. For instance, most phishing attacks rely on the naivety and ignorance of an untrained user and get them to unwittingly reveal information or perform an action that opens up the enterprise to attack. If all your people are not trained to recognize such threats, the risks increase dramatically.

The key is to know that AI and human intelligence can join forces and form a formidable defense against cybersecurity threats. AI, while being a game-changing potent weapon in the fight against cybercrime, cannot be left unsupervised, at least in the foreseeable future, and will always need assistance from trained, experienced security professionals and a vigilant workforce. This two-factor AI plus human intelligence (HI) security, if implemented fastidiously as a policy guideline across the enterprise, will go a long way in winning the war against cybercrime.

7 AI-based cybersecurity companies

Below is more about the leading emerging AI-first cybersecurity companies. Each of them bites off a section of enterprise security needs. A robust cybersecurity strategy, which has to defend at all points, is almost impossible for a single company to manage. Attack fronts include hardware infrastructure (data centers and clouds), desktops, mobile devices (cellphones, laptops, tablets, external storage devices, etc.), IoT devices, software applications, data, data pipelines, operational processes, physical sites including home offices, communication channels (email, chat, social networks), insider attacks, and perhaps most importantly, employee and contractor security awareness training. With bad actors leveraging an ever-widening range of attack techniques against enterprises (phishing, malware, DoS, DDoS, MitM, XSS, etc.), security technical leaders need all the help they can get.

CrowdStrike

CrowdStrike’s Falcon suite of products comprises cloud-native, AI-powered cyber security solutions for companies of all sizes. These products cover next-gen antivirus, endpoint detection and response, threat intelligence, threat hunting, IT hygiene, incident response, and proactive services. CrowdStrike says it uses something called ‘signatureless’ artificial intelligence/machine learning, which means it does not rely on a signature (i.e. a unique set of characteristics within the virus that differentiates it from other viruses). The AI can detect hitherto unknown threats using something it calls Indicator of Attack (IOA) — a way to determine the intent of a potential attack — to stop known and unknown threats in real time. Based in Sunnyvale, California, the company has raised $ 481 million in funding and says it has almost 5,000 customers. The company has grown rapidly by focusing mainly on its endpoint threat detection and response product called Falcon Prevent, which leverages behavioral pattern matching techniques from crowd-sourced data. It gained recognition for handling the high-profile DNC cyber attacks in 2016.

Darktrace

Darktrace offers cloud-native, self learning, AI-based enterprise cyber security. The system works by understanding your organization’s ‘DNA’ and its normal healthy state. It then uses machine learning to identify any deviations from this healthy state, i.e. any intrusions that can affect the health of the enterprise and then triggers instantaneous and autonomous defense mechanisms. In this way, it describes itself as similar to antibodies in a human immune system. It protects the enterprise on various fronts including workforce devices and IoT, SaaS, and email. It leverages unsupervised machine learning techniques in a system called Antigena to scan for potential threats and stop attacks before they can happen. The Cambridge, U.K.- and San Francisco, U.S.-based company has raised more than $ 230M in funding and says it has more than 4,000 customers.

Vectra

Vectra’s Cognito NDR platform uses behavioral detection algorithms to analyze metadata from captured packets revealing hidden and unknown attackers in real time, whether traffic is encrypted or not. By providing real-time attack visibility and non-stop automated threat hunting that’s powered by always-learning behavioral models, it cuts cybercriminal dwell times and speeds up response times. The Cognito product uses a combination of supervised and unsupervised machine learning and deep learning techniques to glean patterns and act upon them automatically. The San Jose, California-headquartered Vectra has raised $ 223M in funding and claims “thousands” of enterprise clients.

SparkCognition

SparkCognition’s DeepArmor is an AI-built end-point cybersecurity solution for enterprises that provides protection against known software vulnerabilities exploitable by cyber criminals. It protects against attack vectors such as ransomware, viruses, malware,  and offers threat visibility and management. DeepArmor’s technology leverages big data, NLP, and SparkCognition’s patented machine learning algorithms to protect enterprises from what it says are more than 400 million new malware variants discovered each year. Lenovo partnered with SparkCognition in October 2019 to launch DeepArmor Small Business. SparkCognition has raised roughly $ 175M in funding and boasts “thousands” of enterprise clients.

Vade Secure

Vade Secure is one of the leading products in predictive email defense. It claims it protects a  billion mailboxes across 76 countries. Its product helps protect users from advanced email security threats, including phishing, spear phishing, and malware. Vade Secure’s AI products leverage a multi-layered approach, including using supervised machine learning models trained on a massive dataset of more than 600 million mailboxes administered by the world’s largest ISPs. The France- and U.S.-based company has raised almost  $ 100 million in funding and says it has more than 5,000 clients.

SAP NS2 

SAP NS2’s approach is to apply the latest advancements in AI and machine learning to problems like cybersecurity and counterterrorism, working with a variety of U.S. security agencies and enterprises. Its technology adopts the philosophy that security in this new era requires a balance of human and machine intelligence. In 2019, NS2 won the Defense Security Service James S. Cogswell Outstanding Industrial Security Achievement Award.

Blue Hexagon

Blue Hexagon offers deep learning-based real-time security for network threat detection and response in both enterprise network and cloud environments. It claims to deliver industry-leading sub-second threat detection with full AI-verdict explanation, threat categorization, and killchain (i.e. the structure of an attack starting with identifying the target, counter attack used to nullify the target, and proof of the destruction of the target). The Sunnyvale, California-based company has raised $ 37M in funding.

Big Data – VentureBeat

What your business needs to know about CPRA

November 8, 2020   Big Data


After winning a narrower-than-expected mandate of 56% on November 3, the California Privacy Rights Act (CPRA) has now passed. The new act overhauls the preexisting California Consumer Privacy Act (CCPA) and is a landmark moment for consumer privacy.

In essence, the CPRA closes some potential loopholes in the CCPA – but the changes are not uniformly more stringent for businesses (as I’ll show in a moment). It also moves California’s data protection laws closer to the EU’s GDPR standard. When the CPRA becomes legally enforceable in 2023, California residents will have a right to know where, when, and why businesses use their personally identifiable data. With many of the world’s leading tech companies based in California, this act will have national and potentially global repercussions.

The increased privacy is undoubtedly good news to consumers. But the act’s passage is likely to create concern among businesses that depend on customer data. With stricter enforcement, harsher penalties, and more onerous obligations, many companies are likely to wonder whether this new law will make operating more difficult.

While many of the finer details of the CPRA are likely to change before it becomes enforceable, here’s what your business needs to know right now.

Will you be subject to the CPRA?

The preexisting CCPA law applied only to businesses that:

1) had more than $ 25 million in gross revenue

2) derived 50% or more of their annual revenue from selling consumers’ personal information, or

3) bought, sold, or shared for commercial purposes the personal information of 50,000 or more consumers, households, or devices.

The CPRA keeps most of these requirements intact but makes a few changes. First, the revenue requirement (point 1 above) is now clearer: A company must have made $ 25 million in gross revenue in the previous calendar year to become subject to the law.

Second, when it comes to personal information (point 2), sharing is now considered the same as selling. While the CCPA applied to businesses that made more than half their revenue from selling data, the CPRA now also applies to companies that make half their revenue from sharing personal information with third parties.

Finally, point 3 is now more lenient, with the threshold for personal information-based businesses raised from 50,000 consumers, households, or devices to 100,000.

For businesses wondering if they can avoid regulations for sister companies under the same brand, the CPRA has clarified what the term “common branding” means: “a shared name, service mark, or trademark, such that the average consumer would understand that two or more entities are commonly owned.”

It also specifies that a sister business will fall under the CPRA if it has “personal information shared with it by the CPRA-subject business.” In practical terms, this means that two related businesses (one of which is subject to the CPRA) that might share a trademark but be different legal identities, will be subject to the CPRA only if they share data. The same joint responsibility for consumer information also applies to partnerships where a shared interest of more than 40% exists, regardless of branding.

So with the CPRA, some businesses are now more likely to become subject to data protection legislation while others may no longer fall under the Californian legislation.

For organizations that operate multiple legal entities, it is still ideal to have a one-size-fits-all approach to consumer data privacy. By allowing non-subject businesses to self-certify that they are compliant, the CPRA also gives companies an opportunity to be transparent with their customers about data usage even if they do not necessarily need to be.

Consumers have a right to know why you’re collecting their ‘sensitive personal information’

The CPRA will give consumers additional rights to determine how businesses use their data. As well as receiving the right to correct their personal information and know for how long a company might store it, under the CPRA, consumers will be able to opt-out of geolocation-based ads and of allowing their sensitive personal information to be used.

The concept of “sensitive personal information” is itself a new legal definition created by the CPRA. Race/ethnic origin, health information, religious beliefs, sexual orientation, Social Security number, biometric/genetic information, and personal message contents all fall under this definition.

Businesses also need to be careful when it comes to dealing with data they have already collected. Suppose a company plans to reuse a customer’s data for a purpose that is “incompatible with the disclosed purposes for which the personal information was collected.” In that case, the customer needs to be informed of this change.

Similarly to the CCPA, employee data now falls under the CPRA. While this won’t be legally enforceable until 2023, one stipulation of the CPRA is that businesses will need to be transparent with their staff regarding data collection.

Businesses will soon need to give consumers more comprehensive opt-out abilities whenever they interact with them, but it may still take a while before unified standards around these procedures become commonplace. Undoubtedly there will be more than one way to communicate consumer requirements within the CPRA framework. Besides opt-out forms, businesses may increase their use of the Global Privacy Control standard, a browser add-on that simplifies opt-out processes. However, as geolocated targeting becomes more legally problematic, companies may need to reconsider reliance on some forms of targeted advertising.

There will be fines for data breaches

The CPRA stipulates that “businesses should also be held directly accountable to consumers for data security breaches.” As well as requiring businesses to “notify consumers when their sensitive information has been compromised,” the CPRA sets out financial penalties. Companies that allow customer data to be leaked will face fines of up to $ 2,500 or $ 7,500 (for data belonging to minors) per violation. The newly formed California Privacy Protection Agency will be authorized to enforce these fines.

While in the short term, a relatively limited budget is likely to mean the agency will undertake only a few large scale instances of legal action, every business will face increased financial risk related to data breaches. As the CPRA raises the stakes for businesses regarding data protection, threat actors are likely to be emboldened further. In the EU, the GDPR has been linked to increased ransomware incidences as hackers use the threat of fines as leverage to extract larger ransoms from their victims.

In this respect, compliance will mean adopting stronger organizational security postures through increased multi-factor authentication use and zero trust protocols. It is likely to drive up the costs of cybersecurity business insurance as well.

You have until 2023 but shouldn’t delay

While the CPRA will not become law until January 1, 2023, its regulations will apply to all information collected from January 1, 2022, onwards. So, as of now, you have over two years to prepare. However, as seen in polls from earlier this year, the vast majority of businesses have yet to comply with even currently-enforceable CCPA legislation.

The timeline for compliance with CPRA is relatively generous. As both regulators and businesses rush to catch up with their new obligations, it is unlikely that companies will face a torrent of legal action in the short term.

Nevertheless, in the longer term, the CPRA is likely to drive further legislation across the US. This law may be the beginning of a push towards federal-level data protection regulations, which will have similar rules, requirements, and penalties for businesses, regardless of where their customers are. Companies should start preparing for a future where customer data is legally protected now.

Rob Shavell is a cofounder and CEO of online privacy company Abine / DeleteMe and has been a vocal proponent of privacy legislation reform, including as a public advocate of the California Privacy Rights Act (CPRA).



Big Data – VentureBeat


The Dynamics CRM Sales Process: Everything You Need to Know

September 17, 2020   Microsoft Dynamics CRM

For any business, small or large, success ultimately depends on sales. No matter how great the product is or how much is spent on marketing, the primary goal for any organization is to increase revenue. Microsoft Dynamics CRM is designed to support the sales process from beginning to end – right from acquiring a new lead through the close of a sale. By guiding you through a defined set of stages and actions, it helps you close more deals and increase revenue.

What is the Goal of the Dynamics CRM Sales Process?

The Dynamics CRM sales process aims to generate potential sales opportunities and nurture leads for businesses. It is designed to support the sales process from acquiring a new lead through the close of a sale, and to generate accurate sales forecasts. By automating and optimizing several stages, it helps streamline the sales process while improving the closure rate. It also helps you track and measure every sales activity and understand every number and component of the sales funnel in order to grow revenue. Using the Dynamics CRM sales process, you can zero in on the right leads and build a robust sales pipeline.

Microsoft Dynamics CRM Sales Process Life Cycle

The Dynamics CRM sales process life cycle provides a streamlined way to generate potential sales opportunities for your business. Because Dynamics CRM stores all of the information about new leads, it helps you track follow-up communication, including phone calls, emails, and appointments, and aids in qualifying leads into accounts and opportunities. Here’s a look at the Dynamics CRM sales process life cycle:

1. Lead Capture

A lead represents any person or organization that your organization might have the potential to do business with. Leads are generally captured through online or offline advertising, trade shows, direct communication, and other marketing campaigns. In Dynamics CRM, to create an entry in the system, you enter the name of the lead on the lead form and record a short description of the inquiry in the “topic” field. Once a lead has been captured, follow-up activities such as emails, phone calls, and appointments are conducted to gather more information about the lead and to proceed to the next stage, lead qualification.
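For teams that integrate other systems with Dynamics CRM, a lead can also be created programmatically through the Dynamics 365 Web API. The sketch below is a minimal illustration only: the organization URL, access token, and helper function are placeholders, and your instance version and authentication setup may differ.

```python
# Minimal sketch: creating a lead via the Dynamics 365 Web API (v9.x).
# ORG_URL and ACCESS_TOKEN are placeholders; obtain a real token via Azure AD OAuth.
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical organization URL
ACCESS_TOKEN = "<azure-ad-bearer-token>"       # placeholder

def create_lead(topic: str, first_name: str, last_name: str, company: str) -> str:
    """Create a lead record and return the URI of the new record."""
    response = requests.post(
        f"{ORG_URL}/api/data/v9.2/leads",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
            "Content-Type": "application/json",
        },
        json={
            "subject": topic,          # the "topic" field shown on the lead form
            "firstname": first_name,
            "lastname": last_name,
            "companyname": company,
        },
        timeout=30,
    )
    response.raise_for_status()
    # The Web API returns the new record's URI in the OData-EntityId response header.
    return response.headers["OData-EntityId"]

# Example: create_lead("Interested in CRM rollout", "Ada", "Lovelace", "Example Corp")
```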

2. Account Creation

Accounts are entities with which your organization has a relationship. In Dynamics CRM, this is where all account information is stored in the database. Accounts can include prospects, vendors, business partners, and more.

3. Contact Setup

Contacts are individuals with whom your organization has a relationship; these are generally customers or contacts of customers. Contacts are often related to an account, but some organizations and businesses may serve or sell to individual consumers as well. Because contacts in Dynamics CRM integrate with contacts in Microsoft Outlook, any contact you set up, or any change you make to contact record fields in CRM, will automatically be reflected in Outlook, depending on the synchronization settings.

4. Opportunity Management

Opportunities in Dynamics CRM represent potential sales to a specific customer; when an opportunity is created, it is added to the sales pipeline. An opportunity is won when the customer accepts the quote; if the customer declines, the opportunity is lost and the sales life cycle ends. Opportunities let you measure the success of marketing efforts by tracking sales back to the original lead source and source campaign, and when an opportunity is lost, the reasons for the loss can be recorded.

5. Product Catalog

Dynamics CRM enables you to maintain a product catalog with multiple customizable price lists and discount lists for all of the business’s needs. You can create quotes, invoices, and orders directly from Dynamics CRM and factor in territory-based pricing, discount lists, product or price bundles, and more. You can also define product relationships to facilitate product substitutions, highlight cross-sell and up-sell opportunities, and record write-in discounts.

6. Quote Management

A quote lists the products or services the customer is interested in, along with a defined price list and any discounts. After reviewing it, the customer can either accept the quote and place the order or decline it. In Dynamics CRM, you can create quotes in two ways: from an opportunity, using system-calculated pricing, or as a new quote from scratch. Multiple quotes can be created from one opportunity to cover special pricing offers.

7. Order Management

An order is the confirmation of a sale that is to be invoiced and passed on for further processing, such as logistics. Because orders are accepted quotes, they document exactly which products or services the customer is buying. In Dynamics CRM, orders can be created by selecting the “Create order” button on an active quote.

8. Invoice Management

Invoices represent the final stage of the sales cycle. After an order is placed successfully, an invoice is generated in the Dynamics CRM system. You can either create an invoice directly from a specific order screen or navigate to the invoice section and select a new invoice. You can create more than one invoice for an opportunity or an order.

9. Sales Business Process

Every organization follows a defined business process to capture sales information and close a sale. The Dynamics CRM business process flow guides users through this process without confusion. You can either use the out-of-the-box business process flow or define a custom one that matches your sales process life cycle.

How Can Your Business Optimize the CRM Sales Process?

Dynamics CRM is intended to help boost your sales by guiding you through the journey from prospect to customer, and by helping marketers create lead generation campaigns and monitor where leads are coming from. Here are five ways your business can optimize costs and the sales process using Dynamics CRM:

1. Goal Management

Microsoft Dynamics CRM benchmarks performance against KPIs and gives sales directors, managers, and other sales professionals easy-to-follow charts and reports for real-time monitoring of individual and team progress towards sales goals.

2. New Sales Opportunities

Using Dynamics CRM, you can take advantage of your data and segment it based on user behavior, such as the pages customers have visited or the products they are interested in. This makes it easy to identify new sales opportunities.

3. Sales Dashboards

Dynamics CRM provides live sales reporting dashboards that enable you to monitor active leads and sales opportunities and react with timely, informed decisions. These dashboards, which include charts, statistics, sales metrics, and KPI graphics, provide real-time visibility that can help raise productivity, increase sales, improve operational efficiency, and optimize the business.

4. 360-Degree Customer View

Dynamics CRM provides full information about sales data to anyone who needs it – from sales and marketing to finance and management. You can align your sales activities with your business and achieve better sales. By leveraging contextual information available in Dynamics CRM, you can also create up-sell and cross-sell campaigns within minutes.

5. Mobile Sales

Dynamics CRM’s mobile app gives field sales teams the ability to work efficiently on the go. With anytime access to up-to-date sales information, field agents stay in the loop. Mobile CRM also lets sales agents capture and update CRM data even in locations with intermittent or no internet connectivity, syncing changes once a connection is available.

Improve Sales Efficiency

A well-defined sales process is key to successfully managing your sales staff as well as your sales pipeline. Successful companies are far more likely to have a formal, structured sales process in place, which increases the efficiency and likelihood of sales and ultimately increases revenue in the long run. The Dynamics CRM sales process aims to prioritize, collaborate on, and organize sales activities efficiently, directly improving sales pipelines and business results.

Using the Dynamics CRM sales process, you can move from one stage of the sales process to the next with ease, set the right goals using built-in dashboards, and get actionable insights that help you close more deals. You can drive more consistent sales interactions with leads and prospects, improve sales efficiency, and give your organization the competitive advantage it needs to succeed in today’s fast-paced, ever-changing environment.

Learn more about how to properly leverage Dynamics CRM to improve your sales processes by contacting an expert at Synoptek.


CRM Software Blog | Dynamics 365


10 machine learning algorithms you need to know

September 7, 2020   BI News and Info

If you’ve just started to explore the ways that machine learning can impact your business, the first questions you’re likely to come across are: What are the different types of machine learning algorithms? What are they good for? And which one should I choose for my project? This post will help you answer those questions.

There are a few different ways to categorize machine learning algorithms. One way is based on what the training data looks like. There are three different categories used by data scientists with respect to training data:

  • Supervised, where the algorithms are trained based on labeled historical data—which has often been annotated by humans—to try and predict future results.
  • Unsupervised, by contrast, uses unlabeled data that the algorithms try to make sense of by extracting rules or patterns on their own.
  • Semi-supervised, which mixes the two approaches above, usually with most of the data unlabeled and a small amount labeled.

Another way to classify algorithms—and one that’s more practical from a business perspective—is to categorize them based on how they work and what kinds of problems they can solve, which is what we’ll do here.

There are three basic categories here as well: regression, clustering, and classification algorithms. Let’s jump into each.

Regression algorithms

There are basically two kinds of regression algorithms that we commonly see in business environments. These are based on the same regression that might be familiar to you from statistics.

1. Linear regression

Described very simply, linear regression fits a line through a set of data points, relating a dependent variable (plotted on the y-axis) to an explanatory variable (plotted on the x-axis).

Linear regression is a commonly used statistical model that can be thought of as a kind of Swiss Army knife for understanding numerical data. For example, linear regression can be used to understand the impact of price changes on goods and services by mapping a product’s various prices against its sales, in order to help guide pricing decisions. Depending on the specific use case, variants of linear regression, including ridge regression, lasso regression, and polynomial regression, might be suitable as well.
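As a minimal sketch of the price-versus-sales example, assuming Python with scikit-learn and invented figures:

```python
# Minimal sketch: fit a line relating unit price (explanatory) to units sold (dependent).
import numpy as np
from sklearn.linear_model import LinearRegression

prices = np.array([[9.99], [12.49], [14.99], [17.49], [19.99]])  # explanatory variable
units_sold = np.array([520, 470, 410, 360, 300])                 # dependent variable

model = LinearRegression().fit(prices, units_sold)
print("Slope (change in units per $1 price increase):", model.coef_[0])
print("Predicted sales at $15.99:", model.predict([[15.99]])[0])
```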

2. ARIMA

ARIMA (“autoregressive integrated moving average”) models can be considered a special type of regression model.

ARIMA models allow you to explore time-dependent data because they treat data points as a sequence rather than as independent observations. For this reason, ARIMA models are especially useful for time-series analyses such as demand and price forecasting.
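A minimal sketch, assuming Python with statsmodels and an invented monthly demand series:

```python
# Minimal sketch: forecast the next three months of demand with an ARIMA(1, 1, 1) model.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

demand = pd.Series(
    [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118],  # invented figures
    index=pd.date_range("2020-01-01", periods=12, freq="MS"),
)

model = ARIMA(demand, order=(1, 1, 1)).fit()
print(model.forecast(steps=3))  # forecast for the next three months
```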

Clustering algorithms

Clustering algorithms are typically used to find groups in a dataset, and there are a few different types of algorithms that can do this.

3. k-means clustering

k-means clustering is generally used to group data points with related characteristics into a chosen number (k) of clusters.

Businesses looking to develop customer segmentation strategies might use k-means clustering to better target marketing campaigns to the groups of customers most likely to respond. Another use case is detecting insurance fraud: historical cases known to be fraudulent can be clustered and used as a reference when examining current claims.
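A minimal sketch of customer segmentation, assuming Python with scikit-learn; the two features (annual spend, visits per month) and the figures are invented:

```python
# Minimal sketch: split toy customer data into two segments with k-means.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [200, 1], [250, 2], [300, 1],      # low-spend, infrequent visitors
    [900, 8], [950, 10], [1000, 9],    # high-spend, frequent visitors
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(customers)
print("Segment per customer:", kmeans.labels_)
print("Segment centers:", kmeans.cluster_centers_)
```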

4. Agglomerative & divisive clustering

Agglomerative clustering is a method for finding hierarchical relationships between data clusters.

It uses a bottom-up approach, putting each individual data point into its own cluster and then merging similar clusters together. By contrast, divisive clustering takes the opposite approach: it starts with all data points in a single cluster and then repeatedly splits it into smaller clusters.

A timely use case for these clustering algorithms is tracking viruses. By using DNA analysis, scientists are able to better understand mutation rates and transmission patterns.
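A minimal sketch of the bottom-up approach, assuming Python with scikit-learn (which ships an agglomerative implementation but not a divisive one); the points are invented:

```python
# Minimal sketch: merge nearby points into clusters, bottom-up.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

points = np.array([[1, 1], [1.2, 0.9], [5, 5], [5.1, 4.8], [9, 9], [9.2, 9.1]])

clustering = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(points)
print(clustering.labels_)  # each pair of nearby points ends up in its own cluster
```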

Classification algorithms

Classification algorithms are similar to clustering algorithms, but while clustering algorithms are used to both find the categories in data and sort data points into those categories, classification algorithms sort data into predefined categories.

5. k-nearest neighbors

Not to be confused with k-means clustering, k-nearest neighbors is a pattern classification method that takes a new data point, scans through all past examples, and identifies the k examples most similar to it.

k-nearest neighbors is often used for activity analysis in credit card transactions, comparing each transaction to previous ones. Abnormal behavior, like using a credit card to make a purchase in another country, might trigger a call from the card issuer’s fraud detection unit. The algorithm can also be used for visual pattern recognition, and it’s now frequently used as part of retailers’ loss prevention tactics.
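A minimal sketch of that idea, assuming Python with scikit-learn; the transaction features (amount, distance from home in km) and labels are invented:

```python
# Minimal sketch: classify a new transaction by its three most similar past transactions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

past_transactions = np.array([
    [25, 2], [40, 5], [12, 1], [60, 8],        # previously normal
    [900, 4000], [1200, 6500], [700, 3500],    # previously flagged as fraud
])
labels = ["ok", "ok", "ok", "ok", "fraud", "fraud", "fraud"]

knn = KNeighborsClassifier(n_neighbors=3).fit(past_transactions, labels)
print(knn.predict([[850, 5200]]))  # a large purchase far from home looks like past fraud
```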

6. Tree-based algorithms

Tree-based algorithms, including decision trees, random forests, and gradient-boosted trees, are used to solve classification problems. Decision trees excel at understanding data sets that have many categorical variables and can be effective even when some data is missing.

They’re primarily used for predictive modeling, and are helpful in marketing, answering questions like “which tactics should we be doing more of?” A decision tree might help an email marketer decide which customers would be more likely to order based on specific offers.

A random forest algorithm uses multiple trees to come up with a more complete analysis. In a random forest algorithm, multiple trees are created, and the forest uses the average decisions of its trees to make a prediction.

Gradient-boosted trees (GBTs) also use decision trees but rely on an iterative approach to correct for any mistakes in the individual decision tree models. GBTs are widely considered to be one of the most powerful predictive methods available to data scientists and can be used by manufacturers to optimize the pricing of a product or service for maximum profit, among other use cases.
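A minimal sketch comparing a random forest and gradient-boosted trees on a toy “will this customer respond to an offer?” problem, assuming Python with scikit-learn and invented data:

```python
# Minimal sketch: train both ensemble types on the same toy data and compare predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Features: [past purchases, days since last order]; label 1 = responded to a past offer.
X = np.array([[5, 10], [0, 300], [8, 5], [1, 200], [6, 20], [0, 400], [7, 15], [2, 250]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
gbt = GradientBoostingClassifier(random_state=0).fit(X, y)

new_customer = [[4, 30]]
print("Random forest P(respond):", forest.predict_proba(new_customer)[0][1])
print("Gradient boosting P(respond):", gbt.predict_proba(new_customer)[0][1])
```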

7. Support vector machine

A support vector machine (SVM) is, according to some practitioners, the most popular machine learning algorithm. It’s a classification (or sometimes regression) algorithm used to separate a dataset into classes; for example, two classes might be separated by the line that best divides them.

There could be an infinite number of lines that do the job, but SVM helps find the optimal line. Data scientists are using SVMs in a wide variety of business applications, including classifying images, detecting faces, recognizing handwriting, and bioinformatics.
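A minimal sketch, assuming Python with scikit-learn and a small, linearly separable toy dataset:

```python
# Minimal sketch: a linear SVM finds the boundary with the widest margin between two classes.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

svm = SVC(kernel="linear").fit(X, y)
print("Support vectors:", svm.support_vectors_)      # the points that define the boundary
print("Prediction for [3, 2]:", svm.predict([[3, 2]]))  # a point near the first group
```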

8. Neural networks

Neural networks are a set of algorithms designed to recognize patterns and mimic, as much as possible, the human brain. Neural nets, like the brain, are able to adapt to changing conditions, even ones that weren’t originally intended.

A neural net can be taught to recognize, say, an image of a dog by providing a training set of images of dogs. Once the algorithm processes the training set, it can then classify novel images into ‘dogs’ or ‘not dogs’. Neural networks work on more than just images, though, and can be used for text, audio, time-series data, and more. There are many different types of neural networks, all optimized for the specific tasks they’re intended to work on.
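A minimal sketch of a small feed-forward network, assuming Python with scikit-learn; real image recognition would use a convolutional network on pixel data, so the two numeric features here are purely illustrative stand-ins:

```python
# Minimal sketch: a tiny multi-layer perceptron classifying toy "dog" / "not dog" examples.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = "dog", 0 = "not dog" (toy labels)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(net.predict([[0.12, 0.88]]))  # expected to fall in the "dog" class
```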

Some of the business applications for neural networks are weather prediction, face detection and recognition, transcribing speech into text, and stock market forecasting. Marketers are using neural networks to target specific content and offers to customers who would be most ready to act on the content.

Deep learning is really a subset of neural networks, in which algorithms ‘learn’ by analyzing large datasets. Deep learning has a myriad of business uses, and in many cases it can outperform more general machine learning algorithms. Because deep learning generally doesn’t require humans to hand-craft features, it excels at tasks like text understanding, voice and image recognition, autonomous driving, and many others.

Other algorithm types

In addition to the above categories, there are other types of algorithms that can be used during model creation and training to help the process, like fuzzy matching and feature selection algorithms.

9. Fuzzy matching

Fuzzy matching is a type of clustering algorithm that can make matches even when items aren’t exactly the same, due to data issues like typos. For some natural language processing tasks, preprocessing with fuzzy matching can improve results by three to five percent.

A typical use case is customer profile management. Fuzzy matching lets you identify very similar addresses as the same, so that a single record ID and source file are used instead of two near-duplicate entries.
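A minimal sketch using only the Python standard library; the addresses are invented and the 0.85 similarity threshold is an arbitrary choice:

```python
# Minimal sketch: treat two near-identical addresses as the same record.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two strings, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

addr_1 = "221B Baker Street, London"
addr_2 = "221b Baker St., London"

if similarity(addr_1, addr_2) > 0.85:   # threshold chosen arbitrarily for illustration
    print("Treat as the same customer record")
else:
    print("Keep as separate records")
```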

10. Feature selection algorithms

Feature selection algorithms are used to whittle down the number of input features for a model. Fewer input variables can lower the computational cost of running a model and may also improve its performance.

Commonly used techniques like PCA and MRMR aim to retain as much information as possible in a reduced subset of features. Using a subset of features can be beneficial because your model may be less confused by noise, and the computation time of your algorithm will go down. Feature selection has been used to map business competitor relationships, for example.
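A minimal sketch of reducing five noisy features to two components with PCA, assuming Python with scikit-learn (MRMR is not part of scikit-learn and would need a separate package); the data is synthetic:

```python
# Minimal sketch: project five partly redundant features down to two principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))               # two underlying "real" signals
noise = rng.normal(scale=0.1, size=(100, 3))   # small noise used to build redundant features
X = np.hstack([
    base,                                      # the two original features
    base[:, :1] + noise[:, :1],                # a noisy copy of feature 1
    base[:, 1:] + noise[:, 1:2],               # a noisy copy of feature 2
    noise[:, 2:],                              # a pure-noise feature
])

pca = PCA(n_components=2).fit(X)
print("Variance explained by 2 components:", pca.explained_variance_ratio_.sum())
X_reduced = pca.transform(X)                   # use X_reduced as the model input
print("Reduced shape:", X_reduced.shape)
```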

If you want to dive deeper into machine learning, including how to get your first project off the ground, check out RapidMiner’s Human’s Guide to Machine Learning Projects.


RapidMiner
