Tag Archives: Management

D365 In Focus: 4 Misconceptions About Change Management [VIDEO]


At PowerObjects, we know that your organization’s people are your biggest asset. We want to make sure we bring them along on this Microsoft Dynamics 365 journey with us! As change managers, we focus on how your team is going to adapt to new processes and how to make your implementation smoother. In today’s Dynamics 365 In Focus video, Sara Jo discusses four common misconceptions we hear from prospects when talking about change management on a Dynamics 365 implementation!


PowerObjects- Bringing Focus to Dynamics CRM

4 Key Benefits of Omnichannel Order Management for Retailers

Posted by Ian McCue, Senior Associate Content Manager

Efficiently and cost-effectively getting a shirt, hat and shoes delivered to a customer in Billings, Mont., when a retailer’s stock spans warehouses from Burlington, Vt., to Bellevue, Wash., and a physical store in Baltimore is no small feat. Yet, for even the smallest retailers, it’s imperative to balance customer expectations with profitability — whether the order is shipped from a warehouse, shipped from a store, drop shipped or picked up in store.

That’s why optimizing order management is a crucial piece of a successful omnichannel strategy. Order management is now an art form. Businesses devote endless hours to improving order management because it is so critical to maintaining margins and keeping customers happy.

The lion’s share of that burden falls on employees, who typically must comb through all the fulfillment options and choose the best one based on criteria like closest fulfillment location, lowest shipping costs, potential for splitting or bundling orders and maintaining safety stock. When handled manually, this quickly becomes a tedious, time-consuming process of looking through reports for actual and potential issues.

Automating that process and enabling employees to manage by exception can maximize opportunities and minimize mistakes to make fulfillment a strategic differentiator for your business.

Optimizing Fulfillment Across Channels

At its core, an order management system (OMS) will evaluate all channels and supply sources to find the best way to fulfill an order with intelligent and automated order sourcing and allocation.

In other words, an order management system should make complex fulfillment decisions much easier. For instance, if a complete order cannot be shipped from the closest warehouse or retail store, the system runs through the options of splitting shipments across warehouses by distance, splitting shipments without restrictions on fulfillment source or location and, if it makes sense, backordering to the closest warehouse. The beauty of an OMS is that it finds the best method for that specific order without an employee touching it.

An OMS can also resolve a problem when something goes wrong with an order. If sufficient inventory is not actually available at the location that was supposed to fulfill the order, the system automatically reroutes it, choosing the next best location and shipping from there.
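To make that decision flow concrete, here is a rough, hypothetical sketch of the sourcing logic in PowerShell. The function name, data shapes and scoring rule (cheapest eligible location wins) are our own simplifications for illustration, not any particular OMS implementation:

#Hypothetical sketch of automated order sourcing -- illustrative only
function Select-FulfillmentPlan {
    param (
        [hashtable]$OrderLines,   # e.g. @{ 'SHIRT-01' = 1; 'HAT-02' = 1 }
        [object[]]$Locations      # objects with Name, ShipCost, and a Stock hashtable
    )

    #Prefer a single location that can ship the complete order, cheapest first
    $single = $Locations |
        Where-Object {
            $loc = $_
            -not ($OrderLines.Keys | Where-Object { $loc.Stock[$_] -lt $OrderLines[$_] })
        } |
        Sort-Object ShipCost |
        Select-Object -First 1

    if ($single) {
        return [pscustomobject]@{ Type = 'Single'; From = $single.Name }
    }

    #Otherwise split the order: each line ships from the cheapest location holding it,
    #and anything no location can cover is backordered
    $lines = foreach ($sku in $OrderLines.Keys) {
        $loc = $Locations |
            Where-Object { $_.Stock[$sku] -ge $OrderLines[$sku] } |
            Sort-Object ShipCost |
            Select-Object -First 1
        $from = if ($loc) { $loc.Name } else { 'Backorder' }
        [pscustomobject]@{ Sku = $sku; From = $from }
    }
    return [pscustomobject]@{ Type = 'Split'; Lines = $lines }
}

#Example call with made-up locations
$plan = Select-FulfillmentPlan -OrderLines @{ 'SHIRT-01' = 1 } -Locations @(
    [pscustomobject]@{ Name = 'Baltimore Store'; ShipCost = 6; Stock = @{ 'SHIRT-01' = 3 } },
    [pscustomobject]@{ Name = 'Burlington DC';   ShipCost = 9; Stock = @{ 'SHIRT-01' = 120 } }
)

A production OMS layers in the other criteria mentioned above (distance-based splitting, split and bundle rules, safety stock) and reroutes automatically when the chosen location comes up short.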

Benefits for Retailers

Increased customer satisfaction: Using the closest possible fulfillment centers will get packages to customers faster. And by ensuring ecommerce and in-store channels can be utilized for fulfillment, retailers have an easier time solving customers’ problems.


Decreased labor costs: Reducing the amount of time and labor devoted to making sure the business is optimizing inventory movement will mean significant savings. Staff members previously devoted to order planning can spend time on more value-added tasks to generate additional revenue instead of routing orders. In addition, in-store staff can handle a greater portion of order fulfillment, reducing the load on warehouse employees and optimizing your labor force.

Decreased shipping costs: By optimizing fulfillment based on the closest location, the items included in an order, available inventory and more, retailers can reduce shipping costs. The OMS will always find the cheapest way to get an order to a customer while still meeting their expectations.

Increased sales and margins: Offering shoppers additional delivery options will increase conversions. In-store pick-up, for example, is vital for someone who needs a product right away. Also, safety stock can be dramatically reduced – if not eliminated – with an order management system. Ensuring any order can be fulfilled regardless of channel means safety stock – which can become quite expensive – is no longer necessary. Retailers can keep inventory in retail stores while allowing all channels to sell.

Any omnichannel strategy simply isn’t possible without a functionally rich, reliable order management system. It will help you deliver exceptional customer experiences at a lower, sustainable cost.

Learn more about solutions NetSuite offers to help with your order management challenges.

Posted on Wed, May 9, 2018 by NetSuite


The NetSuite Blog

TIBCO Named a Leader in Gartner MQ for API Management Four Times in a Row


TIBCO is excited to be named a leader in Full Lifecycle API Management for the fourth time in a row by Gartner. We’re especially excited since only two other vendors have achieved this accomplishment. It truly highlights the valuable and unique capabilities that our API management platform, TIBCO Mashery®, offers in this space.

Customers like retail-giant Macy’s have used Mashery® to decrease customer check-out times, increase conversion rates, and do away with unnecessary customer friction. Leading car advice website, Edmunds.com, turned their APIs into a multi-million dollar revenue generator through Mashery®. One of the richest image search sites, Getty Images, uses Mashery® to offer customers a competitive solution by providing 50 million+ images, as well as videos, through its Connect API.

APIs have evolved from simple external exposure points to the fundamental glue that connects today’s enterprise application architecture. You truly cannot realize a digital platform without them.

But, to realize the full value of your APIs and avoid the pitfalls of exposing your systems, it’s vital to deploy a technology that enables and simplifies key processes related to all aspects of API management. It used to be that API design and management were separate processes maintained by different systems. However, to achieve a connected digital business, it is becoming increasingly important for the enterprise to utilize a seamless design, deployment, and integration workflow, while maintaining an end-to-end view of the API lifecycle in a central tool.

Achieving a connected digital business with a comprehensive API Platform

The new generation of API Platforms (which Gartner refers to as “Full Lifecycle API Management”) is evolving to span everything from API creation, integration, orchestration, security, and optimization to lifecycle management, developer engagement, and analytics. Based on its vendor rankings, this new Gartner Magic Quadrant report is clear validation that a holistic view of API management is fundamental to solving the pervasive integration problem.

Mashery® is right at the forefront of this evolution. The software features API gateways to simplify security and management, an API developer portal, and API creation tools that allow for quick generation of APIs. Mashery also offers the flexibility of on-premises, cloud, or hybrid deployments, as well as analytics on your API usage and performance, all in one easy-to-use platform.

IoT, Edge, Serverless, Cloud-Native, and Your API Programs

As your applications develop to support IoT, Edge, serverless, and cloud-native, your API platform must similarly evolve. We continue to evolve Mashery® to support these new use-cases and keep it ahead of the market in executing the vision that enterprises now expect. We are constantly evolving and developing our technology to help consumers take advantage of today’s latest technologies in order to solve pervasive problems.

For instance, our TIBCO Cloud™ Integration platform now allows for all pervasive integration use cases from cloud to cloud, cloud to on-premises, on-premises to edge and everything in between. TIBCO-supported open source projects Flogo® and Mashling™ enable the latest capabilities in microservices, serverless, and FaaS in a lightweight package. Our TIBCO Cloud™ offering enables you to do messaging, integration, analytics and full life cycle API management by simply signing up for an account.

The value of a full life-cycle API management platform

APIs are an incredibly valuable tool for digital transformation — they unlock data, increase agility, encourage innovation and speed project time-to-value. This is why a successful API program is vitally important to your business. Our leadership position validates TIBCO’s vision and execution to extend the Mashery® platform to solve for problems across the entire API lifecycle and pervasive integration.

Welcome to API management today: the ability to monetize and manage your APIs as a new business opportunity.

To try Mashery®  on TIBCO Cloud, please visit Mashery.com.


The TIBCO Blog

Released: Microsoft Azure SQL Database Management Pack (7.0.4.0)

We are happy to announce the release of Azure SQL DB Management Pack. You can download the management pack and find a summary of features at:

Microsoft System Center Management Pack for Microsoft Azure SQL Database

New Features and Fixes

  • Fixed issue: The management pack may stop working due to a conflict between its Azure REST API libraries and the ones that ship with the Microsoft Azure Management Pack
  • Provided a few minor UI improvements to the Add Monitoring Wizard

We are looking forward to your feedback.


SQL Server Release Services

Facebook, Cambridge Analytica, and the Need to Audit your API Management


If you run an open API program, the current controversy surrounding Cambridge Analytica’s use of Facebook data to create psychographic profiles of millions of Facebook users should concern you, and not just because of how your profile data may have been used.

I recall being very surprised at how much data I could access through Facebook’s application programming interface (API) back when they first released it. I could easily navigate through a specific user’s news feed and friends list and all but replicate that user’s web of social interactivity with only a handful of calls. Facebook opened this data to allow developers to create games and applications that enhanced the core purpose of Facebook at the time — connecting people and allowing them to share their lives with their friends online. While the terms of service made it clear that data was not intended to be captured and stored, there was also nothing stopping a developer from breaking those rules — and nothing Facebook could do to easily tell if the rules had been violated.
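To give a sense of how little effort that traversal took, the sketch below recreates the style of those early, pre-2015 Graph API calls in PowerShell. It is illustrative only: the token and user ID are placeholders, and this level of friend and feed access has long since been locked down.

#Illustrative only: pre-2015-style Graph API calls of the kind described above
#The token and user ID are placeholders; this access is no longer granted
$token  = '<app-user-access-token>'
$userId = '<some-user-id>'

#One call for the user's friends list...
$friends = Invoke-RestMethod -Uri "https://graph.facebook.com/$userId/friends?access_token=$token"

#...and one per friend for that friend's feed, enough to replicate a web of social interactivity
foreach ($friend in $friends.data) {
    $feed = Invoke-RestMethod -Uri "https://graph.facebook.com/$($friend.id)/feed?access_token=$token"
    "{0}: {1} recent posts" -f $friend.name, $feed.data.Count
}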

Subsequent updates to the Facebook API limited the access to much of that data, but the genie was already out of the bottle. It appears the data Cambridge Analytica used may have been gathered some time prior to 2015, before those limits were put in place.

It isn’t just Facebook

Facebook is taking a big hit in all this controversy, but there’s a part of me that feels it’s somewhat undeserved. The same data that may have been used to target specific audiences with messages of questionable veracity also allowed companies like Zynga to flourish and helped Facebook evolve from a simple social bulletin board to a genuine social platform. I don’t believe any of this was malicious on Facebook’s part. I think it’s the unintended consequences of a drive toward radical openness marred by a culture of “move fast and break things.”

Read the full story here

Learn how TIBCO can help you implement, scale, and secure your own API ecosystem.


The TIBCO Blog

Assigning Resource Management Permissions for Azure Data Lake Store (Part 2)

This is part 2 in a short series on Azure Data Lake permissions. 

Part 1 – Granting Permissions in Azure Data Lake
Part 2 – Assigning Resource Management Permissions for Azure Data Lake Store {you are here}
Part 3 – Assigning Data Permissions for Azure Data Lake Store
Part 4 – Using a Service Principal for Azure Data Lake Store
Part 5 – Assigning Permissions for Azure Data Lake Analytics

In this section, we’re covering the “service permissions” for the purpose of managing Azure Data Lake Store (ADLS). Granting a role on the resource allows someone to view or manage the configuration and settings for that particular Azure service (i.e., although we’re talking about ADLS, this post is applicable to Azure services in general). RBAC, or role-based access control, includes the familiar built-in Azure roles such as reader, contributor, or owner (you can create custom roles as well).

Tips for Assigning Roles for the ADLS Service

Setting permissions for the service and for the data stored in ADLS always involves two separate processes, with one exception: when you define an owner for the ADLS service in Azure, that owner is automatically granted ‘superuser’ (full) access to manage the ADLS resource in Azure *AND* full access to the data. Any RBAC role other than owner needs data access specifically assigned via ACLs. This is a good thing, because not all system administrators need to see the data, and not all data access users/groups/service principals need access to the service itself. This type of separation is true for certain other services too, such as Azure SQL Database.

Try to use groups whenever you can to grant access, rather than individual accounts. This is a consistent best practice for managing security across many types of systems.

If you are using resource groups in Azure the way they are intended to be used, you may be able to define service permissions at the resource group level rather than at the individual resource level (although the example shown here sets RBAC for ADLS specifically). Managing permissions at the resource group level reduces maintenance, assuming your resource group isn’t too broadly defined.
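If you do go that route, the role assignment is simply scoped to the resource group instead of a single resource. Here is a minimal sketch using the same AzureRM cmdlets as the full script later in this post; the group and resource group names are placeholders, and it assumes you have already logged in with Login-AzureRmAccount:

#Sketch: grant Contributor at resource group scope instead of on a single resource
$group = Get-AzureRmADGroup -SearchString 'YourAADGroupName'

New-AzureRmRoleAssignment `
    -ObjectId $group.Id `
    -RoleDefinitionName 'Contributor' `
    -ResourceGroupName 'YourResourceGroupName'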

Typically, automated processes that do need access to the data (discussed in Part 3) don’t need any access to the ADLS resource itself. However, if access to the Azure portal, or the ability to manage the ADLS service (such as through ARM or PowerShell), is needed, then the appropriate RBAC entry is necessary.

In Part 4 we’ll talk a bit about using service principals. I’ve found that frequently a service principal needs data access (ACLs), but not any RBAC access to the service.

The RBAC functionality is consistent across Azure services. When roles are updated for an Azure resource, the change is recorded in the Activity Log:

(Screenshot: the role assignment change recorded in the Azure Activity Log)

Defining RBAC Permissions in the Azure Portal

Setting up permissions can be done in the portal in the Access control (IAM) pane. (By the way, the IAM acronym stands for Identity and Access Management.)

(Screenshot: the Access control (IAM) pane for the ADLS account in the Azure portal)

Defining RBAC Permissions via PowerShell Script

The technique shown above in the portal is convenient for quick changes, for learning, or for “one-off” scenarios. However, in an enterprise solution, and for production environments, it’s a better practice to handle permissions via a script so you can do things such as:

  • Promote changes through different environments
  • Pass off scripts to an administrator to run in production
  • Include permission settings in source control

In the following PowerShell script, we are assigning contributor permissions to an AAD group:

(Screenshot: PowerShell script assigning the Contributor role to an AAD group)

Here’s a copy/paste friendly script from the above screenshot:

#-----------------------------------------

#Input Area
$subscriptionName = 'YourSubscriptionName'
$resourceGroupName = 'YourResourceGroupName'
$resourceName = 'YourResourceName'
$groupName = 'YourAADGroupName'
$userRole = 'Contributor'

#-----------------------------------------

#Manual login into Azure
Login-AzureRmAccount -SubscriptionName $subscriptionName

#-----------------------------------------

#Look up the resource and the AAD group so the role assignment can use their IDs
$resourceId = Get-AzureRmResource `
    -ResourceGroupName $resourceGroupName `
    -ResourceName $resourceName
$groupId = Get-AzureRmADGroup `
    -SearchString $groupName

#Assign the role to the group at the scope of the ADLS account
New-AzureRmRoleAssignment `
    -ObjectId $groupId.Id `
    -RoleDefinitionName $userRole `
    -Scope $resourceId.ResourceId

The online examples for the New-AzureRmRoleAssignment cmdlet enumerate the IDs or GUIDs directly, which makes things clear for learning but isn’t ideal for operationalized scripts. Therefore, the purpose of $resourceId and $groupId above is to do the work of looking up those GUIDs so you don’t have to do it manually.
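Once the script runs, it is easy to confirm what actually landed on the resource. A quick check, reusing the variables from the script above:

#Confirm which identities now hold roles at the scope of the ADLS account
Get-AzureRmRoleAssignment -Scope $resourceId.ResourceId |
    Select-Object DisplayName, RoleDefinitionName, Scope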

Personally, I like using PowerShell instead of ARM (Azure Resource Manager) templates for certain things, such as permissions, but you do have additional options beyond what I’ve discussed here based on what you’re most comfortable with.

Finding More Information

Get Started with Role-Based Access Control in Azure

Want to Know More?

My next all-day workshop on Architecting a Data Lake is in Raleigh, NC on April 13, 2018


Blog – SQL Chick

Released: Public Preview for SQL Server Management Packs Update (7.0.3.0)

We are getting ready to update the SQL Server Management Packs. Please install and use this public preview and send us your feedback (sqlmpsfeedback@microsoft.com)! We appreciate the time and effort you spend on these previews which make the final product so much better.

Please download at:

Microsoft System Center Management Packs (Community Technical Preview) for SQL Server

Included in the download are Microsoft System Center Management Packs for SQL Server 2008/2008 R2/2012/2014/2016 (7.0.3.0).

Please read if you have more than 1500 database files

When a SQL Server instance has about 1500 or more database files, the Data Files discovery generates output that exceeds the SCOM agent's 4 MB threshold, and the discovery fails. We added a new "Discover DB Group Seed" discovery so the MP can monitor such instances smoothly. If the data files in your environment cannot be discovered because of the 4 MB threshold, just enable the new discovery. Note that you should not enable it unless you face the problem described above.
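If you prefer to script the override rather than set it in the console, something along these lines should work with the Operations Manager PowerShell module. The discovery display name filter and the override management pack name shown here are assumptions for illustration, so verify them against the MP user guide before running it:

#Sketch: enable the new seed discovery via an override
#Verify the exact discovery display name and use your own unsealed override management pack
Import-Module OperationsManager

$discovery  = Get-SCOMDiscovery -DisplayName '*Discover DB Group Seed*'
$overrideMp = Get-SCOMManagementPack -DisplayName 'Your SQL MP Overrides'

Enable-SCOMDiscovery -Discovery $discovery -ManagementPack $overrideMp -Enforce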

New SQL Server 2008-2012 MP Features and Fixes

  • Fixed issue: Agent tasks do not have any SQL MP Run as Profile mapped
  • Implemented caching of data received from WMI to reduce the number of requests to WMI
  • Changed ps1 data source scripts to avoid the “Pipe is being closed” error
  • Disabled Latency Disk Read/Write performance rules by default
  • Added the actual value of available disk space in DB Space monitoring alerts
  • Updated some of the display strings
  • Increased SQL Command timeout in the data source scripts up to 60 seconds (previously it was 30 seconds)

New SQL Server 2014-2016 MP Features and Fixes

  • Added a new discovery to deal with cases when a SQL Server instance has a vast number of database files (roughly 1500 or more); it is disabled by default
  • Implemented caching of data received from WMI to reduce the number of requests to WMI
  • Improved the output of discoveries of Database Filegroups and Files so they do not contain unnecessary data
  • Changed ps1 data source scripts to avoid the “Pipe is being closed” error
  • Disabled Latency Disk Read/Write performance rules by default
  • Added the actual value of available disk space in DB Space monitoring alerts
  • Updated some of the display strings
  • Increased SQL Command timeout in the data source scripts up to 60 seconds (previously it was 30 seconds)

For more details, please refer to the user guides that can be downloaded along with the corresponding Management Packs.
We are looking forward to hearing your feedback at sqlmpsfeedback@microsoft.com. 


SQL Server Release Services

Webcast: Methods of Forecasting for Capacity Management

The importance of forecasting in Capacity Management cannot be overstated. Making statements or predictions about future events requires careful analysis of all available information, and those events can be anything from the state of resource consumption, to service levels, to changes in the computing environment at future points in time.


Syncsort has released its webcast, "Methods of Forecasting for Capacity Management," which dives into the importance of forecasting, along with the proper ways to go about it.

The presentation covers topics such as: why we forecast, forecasting scenarios, techniques, and virtualization.

Download the on-demand webcast and learn the importance of forecasting for Capacity Management.


Syncsort + Trillium Software Blog

Dynamics 365 User Adoption: End Users are the Most Important! (But Management is Important, too)


End users of your CRM system (or any other system!) are the most important building blocks to a successful implementation. Why then are they so often overlooked? Why does management so often dictate requirements without a single consideration of how the end user will react?

“They need to learn how to use this to do their job; they will figure it out.” – Management

Picture this: An organization gathers all major decision makers and managers into a room for a requirements gathering session. The Project Manager says, “what metrics do you want to report on?” The Sales Manager wants 15 fields. The Customer Service Manager needs another 24 fields. The Operations Manager needs 12 different fields. The CEO is looking for roll-up metrics that require another 10 fields. All of a sudden, the Project Manager leaves requirements gathering meeting #1 with 61 new required fields to add to the solution.

I’m willing to bet that many readers have experienced a version of this picture I have painted for you.

How will adding 61 required fields to forms in CRM affect your salespeople? Your Customer Service reps? Do these configuration changes add value to the business goals? How much time does a salesperson lose selling by entering 61 required fields in CRM when they may only need 3?

Now, picture this second scenario: An organization gathers a sampling of end users into a room for a requirements gathering session. There are tenured sales reps, inside sales reps, customer service reps, marketing associates and others from across all areas of the organization. The Project Manager says, “how can this system help you to do your job more efficiently?” The sales reps talk about manual reporting they do weekly for the Sales Managers. Customer Service reps discuss how many screen pops, tabs and programs they go back and forth between on any given call. The Project Manager leaves the requirements gathering meeting #1 with a different to-do list. His/her challenge is now to leverage technology to alleviate pain points for these end users and optimize business processes.

Now, that’s not to say that management shouldn’t be involved! I would recommend having Business Requirements Meeting #1 play out as mentioned above, with the end users. THEN, the management meeting should occur.

Simply put, end users should be involved starting at the requirements gathering stage. Walk the fine line of customizing for your end users while meeting business requirements from management. Apart from involving them in requirements meetings, you can also do ride-alongs and job shadowing – anything to better understand what they are doing and where technology can help improve the process.

Need help walking that fine line? We are User Adoption experts at Beringer Technology Group.  Let us help you make sure you don’t miss the mark when it comes to User Adoption.


This blog is the first in a series that will focus on a deep dive in User Adoption. User Adoption is so very important in a CRM implementation and often overlooked. So, what can you do to help encourage adoption for a system? Over the next several months, we will look at ten ways to help with User Adoption at your organization.


Beringer Technology Group is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365 and CRM for Distribution. We also provide expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing, and Unified Communication Systems.


CRM Software Blog | Dynamics 365

Capacity Management: How to Keep Your Mainframe Cruising

Businesses and IT organizations are changing, but the mainframe continues to be an important hub of business activity. Mainframe capacity management can ensure effective and efficient operation to help you maximize your return on investment and keep customers happy well into the future.

Meeting Enterprise Operational Needs

A friend of mine bought a 20-foot boat last summer so he could take his family tubing, water skiing, and fishing.  It’s fast, fairly easy to operate, and relatively inexpensive to fix and maintain.  One person can do most of the tasks required to get the boat in and out of the water and it doesn’t take long to learn how to do most tasks involving a small craft like his.


This summer, my family is planning on spending a week’s vacation on a cruise liner. In doing some research, I found that the ship, at top speed, is only half as fast as my friend’s 20-foot boat, and common sense (and a very long 1997 movie) tells me that it can’t turn nearly as quickly in the water. And it takes thousands of people to make a cruise ship do what it does, from deck hands and entertainment staff all the way up through the ship’s captain.

In other words, a novice may be able to keep a speedboat running, but that doesn’t mean he knows how to handle a cruise liner. But when managed well, a cruise liner is a smoothly operated vessel that will make thousands of people satisfied customers.

A mistake in the operation of that speedboat will make a few people unhappy or put a few people at risk, while a mistake in the operation of the cruise liner could affect thousands of people and risk the health and future of the operating company.


Changing Course with Mainframe Capacity Management

Think of the mainframe as that cruise liner. Many organizations have invested a lot of time and money into making sure their mainframes are operating at peak efficiency – much of that knowledge, however, is held by long-tenured, very experienced employees who may be thinking about buying their own fishing boats and heading into retirement.

More specifically, does the end of working life for the baby boomer generation mean problems for the many businesses for which the mainframe is still the most mission-critical server?

Like the cruise ship, changing course with a mainframe requires planning, expertise, and resolve. Organizations have been talking about replacing the mainframe for decades, but find it is still the most efficient and most secure data processor around. Today, without customers even knowing, it is the backbone for customer-facing, web-enabled applications and services. The mainframe isn’t going anywhere. But those mainframe experts might be, and organizations have to plan for that possibility.

The story sounds dire, but it doesn’t have to be. athene®, Syncsort’s capacity management software, provides automation around key processes and requires little-to-no mainframe expertise to operate in a cost-effective manner.

As a cross-platform solution, athene allows organizations to bring data into a centralized Capacity Management Information System (CMIS) from all components that comprise a service and provides a 360° view of those services in a single dashboard. It also has predictive analytics that give organizations insight into how the mainframe is performing today and will perform in the future, even with changes in the hardware or the workload.

Evaluate your mainframe capacity management process and keep it sailing smoothly – take our Maturity Survey today! Answer 20 quick questions about your organization and its processes, and you’ll immediately receive an initial maturity level as well as a comprehensive report with suggestions on how to improve your maturity.


Syncsort + Trillium Software Blog