MIP Benchmarking: Don’t Abuse the Standards

November 18, 2018   FICO

Recently, a performance announcement from a FICO competitor caused a big shakeup in the mathematical optimization community. My colleague Timo Berthold and I wrote a detailed post about MIP benchmarking on the FICO Community blog, but I thought it was worth sharing the gist of it here.

At issue is how developers benchmark the performance of mathematical optimization tools, particularly mixed integer programming (MIP) solvers. The community already has a clear set of standards for MIP benchmarking (see MIPLIB2010), which have evolved over time.

In the case at hand, there were clear issues with the way the competitor generated and discussed their results:

  1. When a test set is explicitly defined as the “benchmark set”, picking subsets of its instances to justify general claims about performance is a bad and misleading practice. It is particularly troubling when statements can be read as though they held for the full set rather than only for a subset.
  2. Even if one were to present comparative results on a subset, those results should (a) be put into context against the results on the full set, and (b) the instances belonging to the subset should be explicitly named. Neither happened in this particular case.
  3. Every community has its standard measure of performance. In computational MIP, this is the shifted geometric mean(1) of running times. There are a few, let’s say, “minor standards”, such as node counts and the number of solved instances. However, no one should use a measure for comparison that is non-standard in the community without explaining it in detail. In the case mentioned above, this was not done consistently in the majority of the ongoing communications.
  4. Non-standard measures can be tricky or misleading. In this particular case, the PAR10(2) measure computes a score, not a speedup factor, since it multiplies some of the involved values by penalty terms. Therefore, a PAR10 score cannot and must not be used to make a statement such as “solver A is x times faster than solver B”, as happened here. PAR10 is not a speed factor. In our opinion, this is a good argument against using PAR10 for computational MIP in general, since its results do not represent a quantitative statement.
  5. Doing all of the above and publishing a result so far off the official numbers, just days before a new benchmark set and its results are to be published, is bad practice.
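To make concrete why a PAR10 ratio is not a speedup factor, here is a minimal sketch comparing the two measures. The runtimes, the 10-second shift, and the 3600-second time limit are illustrative assumptions, not numbers from the announcement in question:

```python
import math

def shifted_geometric_mean(times, shift=10.0):
    """Shifted geometric mean of running times in seconds.
    The shift (commonly 10s in MIP benchmarking) damps the
    influence of very small runtimes on the aggregate."""
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift

def par10(times, timeout):
    """PAR10: arithmetic mean of runtimes where each timed-out
    run is counted as 10 * timeout. The result is a penalized
    score, not a measure of speed."""
    return sum(10 * timeout if t >= timeout else t for t in times) / len(times)

# Hypothetical runtimes for two solvers on 4 instances, 3600s limit.
a = [12.0, 95.0, 640.0, 3600.0]   # faster on every solved instance, one timeout
b = [30.0, 120.0, 900.0, 2400.0]  # slower throughout, but solves everything

# par10(a) = 9186.75 and par10(b) = 862.5, a ratio of about 10.7 --
# driven almost entirely by the 10x timeout penalty, even though
# solver a is faster on each instance it solves.
print(shifted_geometric_mean(a), shifted_geometric_mean(b))
print(par10(a, 3600), par10(b, 3600))
```

The single timed-out instance makes solver a look roughly ten times “slower” by PAR10, which is exactly the kind of penalty-inflated ratio that must not be quoted as a speed factor.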

What Is FICO’s Take on MIP Benchmarking?

When we present comparisons of FICO Xpress Optimization against competitors on MIPLIB or other sets from Hans Mittelmann’s benchmark site, FICO always uses the numbers presented there and will continue to do so.

FICO strikes a careful balance between putting effort into benchmarking and delivering value to customers. We believe in the strength of the mixed-integer programming community to define its standards, and we see ourselves as active members of this community. We feel honored that FICO representatives were part of the MIPLIB2010 and MIPLIB2017 committees, and we stand by the results of those international research and norm-defining projects.

It is an exciting time for mathematical optimization, and we hope that the great community spirit can be kept up.

For a fuller description, see our original post.
