Tag Archives: tune

Tune In for a New Webinar Series this January – CRM for Dynamics 365


PowerObjects is putting on a nine-session series of Microsoft Dynamics 365 educational webinars that will cover a range of topics, from apps to design to what's staying the same. Although change can sometimes be daunting, this is not one of those times: this update will make your CRM journey even better! And as a bonus, with PowerObjects as your CRM partner, you can rest assured knowing that you have an entire team of CRM experts to guide you every step of the way.

Our CRM for Dynamics 365 webinar series will kick off on Thursday, January 5th and continue every following Tuesday and Thursday at 1:00 pm until Thursday, February 2nd. Add the entire series (both the “Tuesday Sessions” and “Thursday Sessions”) to your calendar and then join us on the days that work for your schedule.

CRM for Dynamics 365: What’s New Overview
Thursday, January 5, 1:00-1:30 pm

This session will explore and demo what’s new with the CRM for Dynamics 365 update. For those on previous versions of Microsoft Dynamics CRM, you’ll learn what’s changing and what’s staying the same. 

Speaker
Tad Thompson CRM-MCT
Senior Technical Advancement Developer, PowerObjects

Service CRM for Dynamics 365: What’s New in Field Service
Tuesday, January 10, 1:00-1:30 pm

The service industry continues to become more transient with our mobile workforce, but the need for system tracking and maintenance remains constant. This session will explore how the new Field Service functionality can anticipate, automate, and even help prevent system disturbances.

Speakers
Dan Cefaratti
Practice Director-Field Service, PowerObjects

Bill Kern
Senior Architect-Field Service Practice, PowerObjects

CRM for Dynamics 365: App for Outlook
Thursday, January 12, 1:00-1:30 pm

Outlook and Microsoft Dynamics CRM have always played nicely together, but now their superpowers have been combined to be better, stronger, and more streamlined. In this session, we will see how old friends like knowledge base articles, sales literature, and email templates can be pulled into emails directly from CRM. The result will help you work faster and more efficiently while still tracking all client activities from Outlook into CRM.

Speaker
Avni Pandya
CRM Training Consultant, PowerObjects

CRM for Dynamics 365: Relationship Insights
Tuesday, January 17, 1:00-1:30 pm

We will explore the new relationship insights functionality that sales managers and CRM administrators have been craving, displayed visually within your CRM for Dynamics 365. See the amount of time dedicated to an opportunity, or the health of a client relationship, at the record level without having to ask the account executive.

Speaker
Tad Thompson CRM-MCT

Senior Technical Advancement Developer, PowerObjects

CRM for Dynamics 365: Mobile Features
Thursday, January 19, 1:00-1:30 pm

During this session we will see all the new bells and whistles that empower us to do our jobs from our pockets. Learn about the new streamlined interface that won't do the job for you, but nudges you in the right direction by putting content and priorities a tile away.

Speaker
Abe Saldana

Senior Technical Architect, PowerObjects

CRM for Dynamics 365: What’s New in Portals
Tuesday, January 24, 1:00-1:30 pm

Portals are the gateway to your CRM for Microsoft Dynamics 365. They give users access, but not too much access. This session will explore how to improve your customers', business partners', or employees' experience via the updated functionality. If you currently have a portal or are looking to add one, don't miss this session!

Speaker
Tad Thompson CRM-MCT

Senior Technical Advancement Developer, PowerObjects

CRM for Dynamics 365: Designing the User Experience
Thursday, January 26, 1:00-1:30 pm

Designing your CRM for Dynamics 365 just got a little bit easier. We're talking about any business application needed for your CRM for Dynamics 365 system: entities, charts, and business process flows are some examples. During this session, we'll talk about the drag-and-drop functionality for designing these applications and, get this, the drag-and-drop designer extends to the site map. Yes, that means designing with NO CODE.

Speaker
Avni Pandya
CRM Training Consultant, PowerObjects

CRM for Dynamics 365: Learning Paths
Tuesday, January 31, 1:00-1:30 pm

Have you ever leveraged a “how to” YouTube video?  Or used a step-by-step guide to complete a task?  In this session, we will demo this functionality and explore the CRM for Dynamics 365 Learning Paths.  That’s right, learn CRM while navigating CRM!  Learn how this will look and feel within your system and start planning on how you can leverage this for training and onboarding needs!

Speaker
Gretchen Opferkew CRM MVP

Director of Education, PowerObjects

CRM for Dynamics 365: What’s New in Project Service
Thursday, February 2, 1:00 pm

Project managers, rejoice! This session will dive into how Microsoft Project can now connect to CRM for Dynamics 365. Staffing that project just got easier! Build your project plan, staff it, and repeat, now that you have this new functionality at your disposal.

Speaker
Robert Justen

Solution Design Consultant, Sales, PowerObjects

A recording of each webinar will be published on our website for anyone who can’t attend. Register for the webinar series and we’ll send you a follow-up email with a link to the recorded presentation. That way you can watch it when it’s convenient for you!

To learn even more about Dynamics 365, check out our upcoming course specially dedicated to Updates for Dynamics 365 as well as our upcoming CRM Boot Camps:

CRM Boot Camp for Microsoft Dynamics 365 | Minneapolis
CRM Fast Track for Microsoft Dynamics 365 | Minneapolis
CRM Boot Camp for Microsoft Dynamics 365 | New York

Happy CRM’ing!


PowerObjects- Bringing Focus to Dynamics CRM

Big Data SQL Quick Start. My query is running too slow, or how to tune Big Data SQL – Part 13

In my previous posts, I talked about different features of Big Data SQL. Everything is clear (I hope), but when you start to run real queries you may have doubts: is this the maximum performance I can get from this cluster? In this article, I would like to explain the steps required for performance tuning of Big Data SQL.

SQL Monitoring.

First of all, Big Data SQL is Oracle SQL. You can start debugging Big Data SQL performance (and other) issues the same way as for any Oracle SQL: with SQL Monitor. To start working with it, you may need to install OEM, or use its lightweight counterpart, Database Express. If you don't want to or can't use GUI tools, you can also use SQL Monitor from SQL*Plus, as shown here. Some performance problems may be unrelated to Hadoop and may be general Oracle Database issues, such as heavy use of the TEMP tablespace.
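As an illustration (the sql_id below is hypothetical, and the DBMS_SQLTUNE package is assumed to be available on your database version), a text-format SQL Monitor report can be pulled directly from SQL*Plus:

```sql
-- Pull a text-format SQL Monitor report for a single statement.
-- Replace the sql_id with one taken from V$SQL_MONITOR.
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => 'an05rsj1xy901',  -- hypothetical sql_id
         type         => 'TEXT',
         report_level => 'ALL')
FROM   dual;
```

The same report is what OEM and Database Express render graphically, so the wait events discussed below appear in it as well.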

Many of the wait events are standard Oracle Database events; there are only a couple that are specific to Big Data SQL:

1) "cell external table smart scan" – the typical event for Big Data SQL; it tells us that something (a scan) is happening on the Hadoop side.


2) "External Procedure call" – this event is also natural for Big Data SQL: through extproc, the database fetches the metadata and determines the block locations on HDFS for future planning. But if you observe a lot of "External Procedure call" wait events, it could be a bad sign. Usually it means that you are fetching HDFS blocks on the database side and parsing/processing them there (without offloading).
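A quick way to compare the two events is to look at cumulative wait statistics; this is a sketch against the standard V$SYSTEM_EVENT view, using the event names described above:

```sql
-- A small share of "External Procedure call" time is normal (metadata
-- fetch and planning); a large share relative to "cell external table
-- smart scan" hints that blocks are processed without offloading.
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event IN ('cell external table smart scan',
                 'External Procedure call');
```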


Quarantine.

If your query has failed a few times, it may be placed in quarantine. This works as in Exadata: SQL statements that are in quarantine will not be processed on the cell side; instead, the data will be shipped to the database and processed there (the "External Procedure call" wait event will tell you about this).

To check which queries are in quarantine, run:

[Linux] $ dcli -C bdscli -e "list quarantine"

And to drop everything out of quarantine:

[Linux] $ dcli -C 'bdscli -e "drop quarantine all"'

Storage Indexes.

Storage Indexes (SI) are a very powerful performance feature. I explained how they work here. I don't recommend disabling them: in most cases, SI brings a great performance boost. But they have one downside: the first few runs are slower than without SI. Again, I don't recommend disabling them. If you want consistent performance with SI, I advise warming them up by running, a few times, a query that returns exactly 0 rows. This can be done with a WHERE predicate that is never true, for example:

SQL> select * from customers WHERE age = -1 and passport_id = 0;

The first run will be slow, but after a few runs the query will finish within a couple of seconds.

Data types.

Well, let's imagine that you have made sure that everything that can work on the cell side does work there (in other words, you don't see a lot of "External Procedure call" wait events), you don't have any Oracle Database-related problems, and the Storage Indexes are warmed up, but you still think the query could run faster.

The next thing to check is the datatype definitions in Oracle Database and Hive. In a nutshell: with the wrong datatype definitions, you may run several times slower. Ideally, you just pass the data from the Hadoop level to the database layer without any transformation; otherwise, you burn a lot of CPU resources on the cell side. I put all the details here, so be very careful with your Oracle DDLs.
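For illustration, here is a sketch of a matching pair of definitions (the table, column, and directory names are made up; the com.oracle.bigdata.tablename access parameter points the ORACLE_HIVE driver at the Hive table):

```sql
-- Hive side (assumed): CREATE TABLE customers
--                        (customer_id int, name string, age int);
-- Oracle side: declare columns that map one-to-one onto the Hive
-- types, so the cells can hand rows over without costly conversions.
CREATE TABLE customers_ext (
  customer_id NUMBER(10),
  name        VARCHAR2(4000),
  age         NUMBER(3)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=default.customers)
)
REJECT LIMIT UNLIMITED;
```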

File Formats.

Big Data SQL has a lot of improvements for working with text files (like CSV); it processes them in a C engine.

You may also get some profit from columnar file formats like Parquet or ORC. The main optimization is Predicate Push Down. Another big optimization you can get with columnar file formats is to list fewer columns. Avoid queries like:

SQL> select * from customers

instead, list the minimum number of columns: 

SQL> select col1, col2 from customers

If you are creating Parquet files, it may also be useful to reduce the page size in order to reduce Big Data SQL memory consumption.

For example, you could do this with Hive by creating a new table:

hive> CREATE TABLE new_tab
      STORED AS PARQUET TBLPROPERTIES ("parquet.page.size"="65536")
      AS SELECT * FROM old_tab;

What is your bottleneck?

It's very important to understand where your bottleneck is. Big Data SQL is a complex product that involves two sides: Database and Hadoop. Each side has a few components that could limit your performance. For the database side, I recommend OEM. Hadoop is easier to debug with Cloudera Manager (it has plenty of pre-collected and predefined charts, which you can find on the Charts tab).


What is the whole picture?

Many thanks to Marty Gubar for the diagram below, which shows the overall picture of Big Data SQL processing:

[Diagram: end-to-end flow of a Big Data SQL query]

Whenever you run a query, Oracle Database first obtains the list of Hive partitions. This is the first Big Data SQL optimization: you read only the data that you need. After this, the database obtains the list of blocks and plans the scan in a way that evenly distributes the workload. After column pruning, the database runs the scan on the Hadoop tier. If Storage Indexes exist, they are applied as a first step. After this (in the case of Parquet or ORC files), Big Data SQL applies Predicate Push Down and starts to fetch the data. The data is stored in a Hadoop format and needs to be converted to Oracle types. Finally, Big Data SQL runs Smart Scan (filtering) over the rest of the data (whatever was not pruned out by Storage Indexes or Predicate Push Down).


The Data Warehouse Insider

Frequently used knobs to tune a busy SQL Server

In calendar year 2014, the SQL Server escalation team had the opportunity to work on several interesting and challenging customer issues. One trend we noticed is that many customers were migrating from old versions of SQL Server running on lean hardware to newer versions of SQL Server on powerful hardware configurations. A typical example would look like this: SQL 2005 + Windows 2003 on 16 cores and 128 GB RAM, migrated to SQL 2012 + Windows 2012 on 64 cores and 1 TB RAM. The application workload and patterns remained pretty much the same. These servers normally handle workloads of several thousand batches per second. Under these circumstances, the normal expectation is that throughput and performance will increase in line with the increase in the capabilities of the hardware and software. That is usually the case. But there are some scenarios where you need to take additional precautions or perform some configuration changes. These changes were done for specific user scenarios and workload patterns that encountered a specific bottleneck or scalability challenge.

As we worked through these issues, we started to capture the common configuration changes and updates that were required on these newer hardware machines. The difference in throughput and performance was very noticeable on these systems once these configuration changes were implemented. The changes include the following:

- SQL Server product updates [Cumulative Updates for SQL Server 2012 and SQL Server 2014]

- Trace flags to enable certain scalability updates

- Configuration options in SQL Server related to scalability and concurrency

- Configuration options in Windows related to scalability and concurrency
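As a hedged sketch (the values below are illustrative only; the authoritative settings and trace flags are in KB 2964518 and depend on your workload and testing), changes of this kind are typically applied through sp_configure and startup trace flags:

```sql
-- Example only: limit parallelism on a large-core box and list the
-- trace flags currently enabled. Do not copy these values blindly.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

DBCC TRACESTATUS(-1);  -- shows globally enabled trace flags
```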

All these recommendations are now available in the knowledge base article 2964518:

Recommended updates and configuration options for SQL Server 2012 and SQL Server 2014 used with high-performance workloads

As we continue to find new updates or tuning options that are widely used, we will add them to this article. Note that these recommendations are primarily applicable to SQL Server 2012 and SQL Server 2014. A few of these options are available in previous versions, and you can utilize them where applicable.

If you are bringing new servers online or migrating existing workloads to upgraded hardware and software, please consider all these updates and configuration options. They can save a lot of troubleshooting time and provide you with a smooth transition to powerful and faster systems. Our team uses this as a checklist while troubleshooting, to make sure that SQL Servers running on newer hardware are using the appropriate and recommended configuration.

Several members of my team and the SQL Server product group contributed to various efforts related to these recommendations and product updates. We also worked with members of our SQL Server MVP group [thank you Aaron Bertrand and Glenn Berry] to ensure these recommendations are widely applicable and acceptable for performance tuning.

We hope that you will implement these updates and configuration changes in your SQL Server environment and realize good performance and scalability gains.

Suresh B. Kandoth

SQL Server Escalation Team

Microsoft SQL Server


CSS SQL Server Engineers