Tag Archives: Benchmarking
MIP Benchmarking: Don’t Abuse the Standards

Recently, a performance announcement from a FICO competitor caused a big shakeup in the mathematical optimization community. My colleague Timo Berthold and I wrote a detailed post about MIP benchmarking on the FICO Community blog, but I thought it was worth sharing the gist of it here.
At issue is how developers benchmark the performance of mathematical optimization tools, particularly mixed integer programming (MIP) solvers. The community already has a clear set of standards for MIP benchmarking (see MIPLIB2010), which have evolved over time.
In the case at hand, there were clear issues with the way the competitor generated and discussed their results:
- When a test set is clearly defined as the “benchmark set”, picking subsets of its instances to justify general claims about performance is a bad and misleading practice. It is particularly troubling when statements can be read as if they held for the full set rather than only for a subset. Read more about the MIPLIB2017 benchmark set below, after the bullets.
- Even if one were to present comparative results on a subset, those results should (a) be put into context with the results on the full set, and (b) explicitly name which instances belong to the subset. Neither happened in this particular case.
- Every community has its standard measure of performance. In computational MIP, this is the shifted geometric mean(1) of running times. There are a few minor standards as well, such as node counts and the number of solved instances. However, no one should use a measure that is non-standard to the community for a comparison without explaining it in detail. In the case mentioned above, this did not happen consistently in the majority of the ongoing communications.
- Non-standard measures can be tricky or misleading. In this particular case, the PAR10(2) measure computes a penalized score, not a speedup factor, since it multiplies some of the involved running times by penalty terms. A PAR10 score therefore cannot and must not be used to make a statement such as “solver A is x times faster than solver B”, as happened here. In our opinion, this is a good argument against using PAR10 for computational MIP in general, since ratios of PAR10 scores do not translate into meaningful statements about relative speed; the sketch after this list illustrates why.
- Doing all of the above, and publishing results so far off the official numbers just days before a new benchmark set and new results were due to be published, is bad practice.
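To make the difference between the two measures concrete, here is a minimal sketch in Python. The 10-second shift and the 10x timeout penalty follow common practice for these measures, but the function names, defaults, and example numbers are ours, purely for illustration:

```python
import math

def shifted_geometric_mean(times, shift=10.0):
    """Standard MIP measure: geometric mean of (time + shift), minus shift.

    The shift (commonly 10 seconds) damps the influence of very easy
    instances that solve in fractions of a second.
    """
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift

def par10(times, time_limit):
    """PAR10: instances that hit the time limit are counted at 10x the limit.

    The result is a penalized score, not a running time, so the ratio of
    two PAR10 scores is not a speedup factor.
    """
    penalized = [10 * time_limit if t >= time_limit else t for t in times]
    return sum(penalized) / len(penalized)

# One timeout dominates PAR10 but barely moves the shifted geometric mean:
solver_a = [2.0, 4.0, 3600.0]   # fast, but one timeout at a 3600s limit
solver_b = [20.0, 40.0, 80.0]   # slower on easy instances, no timeout
print(shifted_geometric_mean(solver_a), shifted_geometric_mean(solver_b))
print(par10(solver_a, 3600), par10(solver_b, 3600))
```

On these made-up numbers, solver B’s PAR10 score is more than two orders of magnitude better than solver A’s, while the shifted geometric means differ by less than a factor of two. Quoting the PAR10 ratio as “solver B is 250 times faster” would clearly misrepresent the data.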
What Is FICO’s Take on MIP Benchmarking?
When we present comparisons of FICO Xpress Optimization against competitors on MIPLIB or other sets from Hans Mittelmann’s benchmark site, FICO always uses the numbers presented there and will continue to do so.
FICO strikes a careful balance between putting effort into benchmarking and delivering value to customers. We believe in the strength of the mixed-integer programming community to define its standards, and we see ourselves as active members of this community. We feel honored that FICO representatives were part of the MIPLIB2010 and MIPLIB2017 committees, and we stand by the results of those international research and norm-defining projects.
It is an exciting time for mathematical optimization, and we hope that the great community spirit can be kept up.
For a fuller description, see our original post.
Three global contact center benchmarking mistakes to avoid
If you’re like most global contact center managers today, you’re on the hunt for information to gain a competitive edge. Enter benchmarking.
Contact center benchmarking is the process of uncovering the secrets to standout companies’ success. Since you can’t know whether you’re successful if you don’t know what your goals are, benchmarking and identifying best practices are critical to business. The process helps you establish the contact center metrics to use, such as key performance indicators (KPIs), to assess both a call center’s overall performance and each agent’s performance specifically.
Still, while global contact center benchmarking sounds straightforward, it’s not easy. It requires in-depth analysis, subtle insight and contextualization; indeed, contact center leaders need to understand that practices that bring success to one organization may not qualify as best practices for another. Here are three common mistakes in contact center benchmarking and thoughts on how you can avoid making them.
Creating best practices in a vacuum. During a benchmarking exercise, contact center A, which devotes 10% of its contact center agents’ time to ongoing training, determines that a leader in the same industry, contact center B, devotes 5% of its contact center agents’ time to ongoing training. Since ongoing agent training undoubtedly factors into cost per call, contact center A decides to also establish 5% as a target. But just because that 5% number works for contact center B does not mean it will also work for contact center A.
A deeper analysis indicates that contact center B recently implemented a new desktop system that had automated process management features, which streamlined processes. It also had an extensive knowledge database in place, which put key information about products and support at agents’ fingertips. Both technologies reduced the need for agent training.
Training can influence virtually every common KPI, including first call resolution and average handle time, but so can systems such as the one contact center B implemented. So if contact center A creates a new training “best practice” in a silo without an equivalent desktop system, measures of customer experience are likely to suffer. A more informed response would be for contact center A to consider implementing an improved desktop system, which in turn might reduce the time required for ongoing agent training and make a number closer to the 5% ongoing training rate realistic.
The takeaway: Consider how support systems and other variables in the contact center influence what determines a best practice.
Cherry-picking key metrics. During a benchmarking exercise, contact center A, which has an average agent handle time of four minutes and an interactive voice response (IVR) completion rate of 10%, used two other companies’ contact centers in the same industry to determine the best standalone targets for these two metrics. Contact center A found these results:
- Contact center B’s agents have average handle times of three minutes and an IVR completion rate of 5%.
- Contact center C’s agents have average handle times of five minutes and an IVR completion rate of 50%.
Contact center A then used these numbers to establish KPIs that called for an average handle time of three minutes and an IVR completion rate of 50%, because it considered these numbers “best practices.” Deeper analysis reveals a more complex picture. Assuming a total of 1,000 calls go into the IVR at both companies, and that the goal of each company is to minimize labor expense, contact center C has the better overall result as measured by the minutes agents spend interacting with customers. Contact center B requires 2,850 minutes of total handle time (950 agent calls at three minutes each), while contact center C requires 2,500 minutes (500 agent calls at five minutes each).
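A minimal sketch of that arithmetic in Python (the function name is ours, purely for illustration):

```python
def agent_minutes(calls_into_ivr, ivr_completion_rate, avg_handle_minutes):
    """Total agent handle time for the calls the IVR does not complete."""
    agent_calls = calls_into_ivr * (1 - ivr_completion_rate)
    return agent_calls * avg_handle_minutes

print(agent_minutes(1000, 0.05, 3))  # contact center B: 2850.0 minutes
print(agent_minutes(1000, 0.50, 5))  # contact center C: 2500.0 minutes
```

Despite the longer average handle time, contact center C’s much higher IVR completion rate leaves it with fewer agent-handled calls and less total handle time.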
While setting aggressive goals of three minutes average handle time and a 50% IVR completion rate might seem beneficial, are they realistic? For most organizations, having a higher IVR completion rate results in higher average handle time because agents often handle more complex calls.
The takeaway: Examine relationships between key contact center metrics and other data points.
Overlooking differences in measurement practices. During a benchmarking exercise, contact center A, which has a 6% call abandonment rate, determined that contact center B, part of a company in the same industry, has a 3% abandonment rate, and therefore established a 3% target as a KPI because it considered that rate a “best practice.”
A deeper analysis identifies a key difference in how the two abandon rates are calculated. Contact center A factors all abandoned calls into its calculation, regardless of when they occur. In contrast, contact center B does not count calls that are abandoned in less than five seconds (a common practice). This difference accounts for contact center A’s additional 3% of abandoned calls and, consequently, its 6% abandon rate.
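A minimal sketch of how the two measurement practices diverge (the function and the example wait times are ours; the five-second cutoff is the common practice described above):

```python
def abandon_rate(abandon_waits_sec, total_calls, min_abandon_sec=0.0):
    """Share of calls abandoned, optionally ignoring very short abandons.

    abandon_waits_sec: seconds each abandoned caller waited before hanging up.
    min_abandon_sec:   abandons shorter than this are not counted
                       (0 for contact center A, 5 for contact center B).
    """
    counted = sum(1 for t in abandon_waits_sec if t >= min_abandon_sec)
    return counted / total_calls

waits = [2.0] * 30 + [30.0] * 30   # 30 quick hang-ups, 30 longer abandons
print(abandon_rate(waits, 1000))                     # 0.06 -> 6% (method A)
print(abandon_rate(waits, 1000, min_abandon_sec=5))  # 0.03 -> 3% (method B)
```

The same underlying caller behavior yields a 6% rate under one counting convention and 3% under the other, which is exactly the gap the benchmarking exercise mistook for a performance difference.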
The takeaway: Understand the differences in how various items are measured.
Contact center benchmarking allows companies to determine how to improve processes and results by analyzing how other organizations perform similar functions and processes. But benchmarking cannot be performed blindly with the assumption that a best practice at one organization will have the same results at another. Instead, contact center benchmarking must be executed with acute awareness of the complexities that influence any given metric.
Administrator Certification Brings Skills Validation and Benchmarking to Cloud ERP
Posted by Joanne Coleman, Director of NetSuite Certification
The rapid rise of cloud ERP has created significant advantages for companies that have adopted it. It has freed them from the hardware, IT maintenance and upgrade nightmares of on-premises software and provided scalable, customizable systems that can rapidly adapt to new conditions or market opportunities. But the rapid rise of cloud ERP has also left some businesses struggling to find qualified help and unsure of the qualifications of the help they can find.
The new NetSuite Administrator Certification Program, the industry’s first initiative to validate the competencies of administrators managing a cloud ERP business environment, is changing that. The certification program is designed to help organizations get the maximum value from their NetSuite investment by benchmarking the skills and knowledge of NetSuite administrators and making adjustments as necessary. NetSuite Administrator Certification will also be an important data point for organizations to consider as they recruit for an open NetSuite administrator position and size up the qualifications of candidates.
For current or prospective administrators, NetSuite Administrator Certification validates that they have mastered managing configurations, customizations, user roles and permissions, and NetSuite’s twice-yearly updates, as well as their ability to support end-users and the overall cloud ERP environment. It can also mean better employment marketability and earning power as the NetSuite customer base continues to grow. Certified administrators receive a NetSuite Administrator Certification logo that can be used on business cards and in digital signature blocks. Administrator Certification is also of interest to NetSuite partners that employ administrators in offering NetSuite-based business process outsourcing (BPO) services to client organizations.
For more information, including training opportunities with seasoned NetSuite instructors with a combined 200 years of experience in NetSuite ERP implementations, please visit www.netsuite.com/certification.
The NetSuite Administrator Certification Program joins two similar programs introduced in the past year, the NetSuite Certified ERP Consultant and NetSuite Certified SuiteCloud Developer programs, which validate the skills of consultants and developers.
Beyond ensuring administrators have the right skills, certification also reduces the risk that an under-skilled administrator damages the business with bad customizations or a misunderstanding of standard out-of-the-box workflows.
Registration for the NetSuite Administrator Certification program is now open at the NetSuite Certification web page. Administrator Certification is based on passing two rigorous exams that test both baseline and advanced knowledge; candidates who have already passed the baseline SuiteFoundation exam do not need to retake it. Testing is available at more than 600 locations around the world or as an online proctored exam with the use of an external camera. Generally, candidates should possess at least one year of experience configuring and managing a robust NetSuite implementation. The candidate should be able to perform the day-to-day tasks of managing the application, with a sound understanding of the standard business processes, standard accounting practices, advanced features, options and capabilities of NetSuite.
The best way to prepare for any NetSuite certification exam is to visit the NetSuite Certification webpage and review the Study Guide for each exam, especially the table in the middle that lists all the test objectives. Any less familiar areas can be researched in SuiteAnswers, and a list of recommended NetSuite Training classes is also provided for each exam.
Follow @SuiteTraining on Twitter for real-time updates on training courses, events, promotions, and training webinars.