Ironstream: Metrics for Mainframe Metal’s Superheated Performance

Optimizing OpIntel

Operational Intelligence (or “OpIntel”) has gained momentum as a buzzword, but the concept has been central to the broader category of systems management since the dawn of the mainframe era. For economic reasons, early tools tended to be unwieldy beasts designed to be run by data center nerds.

Their status was elevated somewhat during what might be called the First Timesharing Era. Between 1964 and 1969, roughly 150 companies were founded with timesharing services as a primary offering, according to computerhistory.org.

Software called the Compatible Time-Sharing System (CTSS) was developed at MIT’s Computation Center and first demonstrated in 1961. Timesharing was also pioneered by the Dartmouth Time Sharing System (DTSS), developed at Dartmouth College and running on GE hardware from 1964, and later by IBM’s Time-Sharing Option (TSO), which, surprisingly, has lasted for decades. TSO first shipped as an option for OS/360 MVT in 1971 and became a standard component with MVS in 1974.

Tymshare was perhaps the best-known firm of the First Timesharing Era. Formed in 1964, Tymshare evolved into a business that, to supply one data point, booked $15M in sales in the first half of 1974 (source: Computerworld, September 11, 1974). (That is roughly $76M in today’s dollars.)

Tymshare co-founder Tom O’Rourke visited the trading floor of the New York Stock Exchange when Tymshare was first listed.

Tymshare’s competitors included GE, CSC Infonet, and National CSS (the “CSS” in NCSS came from CTSS); one of NCSS’s founders, Dick Orenstein, was an original CTSS author. NCSS’s 1975 full-year sales were $32.6M ($145M in 2015 dollars).

More on NCSS later.

The IBM System/370 was launched in 1970, but the earlier IBM System/360 Model 67 was the first model to support timesharing using CP/CMS, a product traceable to CTSS. Tom Van Vleck, whom some describe as a software pioneer, worked at the MIT Urban Systems Lab back in the day, where he describes himself as working “. . . with a team of system programmers who were as green as I, . . . trying to satisfy the civil engineers etc. who would much rather have control of their own machine, except they couldn’t afford it.”

This recollection is apt and accurate. It echoes a dialog — some would say, a conflict — between domain specialists who want to manage computing resources that can be tailored to their own purposes (like Van Vleck’s civil engineers), rather than centrally managed assets. It’s a dialog that veers from below-the-surface simmering to open warfare, one that dogs CIOs to this day.

CP/CMS evolved into VM/370, which, as Van Vleck explained it, evolved through a combination of technology and commerce into “a real product.”

The point of this historical diversion? That improvements in systems management tooling were not driven by the desire to improve usability for “system programmers” and their troublesome ilk. Systems management had to improve because downtime wasn’t billable, and because a resource utilization “report” turned out to greatly resemble a timesharing customer invoice.

NCSS and “System Overhead”

Let’s return to NCSS. Orenstein provides a useful historical account of how NCSS arrived at its earliest pricing, which it first implemented in December 1968.

“We saw that General Electric (also in the timesharing business) was charging $.40 [$2.72 in 2015] per minute of computer usage on their GE computers, so we did. However, their minute was to be counting all system overhead time and ours wasn’t. Also, we charged $5 per hour (for connect time, and I think we thought this was not a significant thing, so we put in a charge just so people would ‘hang-up’ if they weren’t using the computer) [about $34 in 2015]. And we charged $10 per month [$68 in 2015] for 110,000 bytes of storage.”

To state the obvious, the business success of NCSS would depend not only on its use of “CSS,” but also on the ability of its technology to meter user compute cycles, disk space, and (for this was relatively new at the time) connect time.
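
As a back-of-the-envelope illustration (not an actual NCSS artifact), the rates Orenstein quotes translate directly into an invoice calculation once those three quantities are metered. A minimal sketch in Python, with usage figures invented for the example:

```python
# Illustrative only: a monthly invoice computed from the 1968 NCSS rates
# quoted above. The usage figures passed in are made up for the example.
CPU_RATE_PER_MINUTE = 0.40      # $ per minute of computer usage (excluding system overhead)
CONNECT_RATE_PER_HOUR = 5.00    # $ per hour of connect time
STORAGE_RATE_PER_MONTH = 10.00  # $ per month per 110,000 bytes of storage

def monthly_invoice(cpu_minutes, connect_hours, storage_units):
    """Return the total monthly charge; storage_units counts 110,000-byte blocks."""
    return (cpu_minutes * CPU_RATE_PER_MINUTE
            + connect_hours * CONNECT_RATE_PER_HOUR
            + storage_units * STORAGE_RATE_PER_MONTH)

# Example: 300 CPU minutes + 40 connect hours + 2 storage blocks -> $340.00
print(monthly_invoice(300, 40, 2))
```

No metering, no invoice: which is why a utilization report and a customer bill looked so much alike.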

Orenstein must have had a good understanding of the systems management needs as they connected to the NCSS bottom line, because at the tender age of 29, he was made CEO of NCSS.

Fast Forward Three or Four Decades

If all this seems too foreign and irrelevant for younger readers, consider, by analogy, today’s newish metrics for Amazon Web Services “timesharing,” provided by tools like Amazon CloudWatch: EC2 instances, table sizes, database instances, app logs, and custom metrics. Users can grab a near-real-time stream of system events and set up rules that match events and route them to one or more targets. From a technology viewpoint, this is strictly evolutionary stuff, but it reflects current expectations for service management.
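
To make the analogy concrete, here is a minimal sketch using the boto3 Python SDK that publishes and then reads back a custom CloudWatch metric; the namespace, metric name, and dimension values are invented for the example, not AWS defaults.

```python
# A sketch of CloudWatch custom metrics as present-day "usage metering".
# Namespace, metric, and dimension names below are hypothetical.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a data point, e.g. billable CPU seconds consumed by one workload.
cloudwatch.put_metric_data(
    Namespace="Example/OpIntel",
    MetricData=[{
        "MetricName": "BillableCpuSeconds",
        "Dimensions": [{"Name": "Workload", "Value": "batch-settlement"}],
        "Timestamp": datetime.utcnow(),
        "Value": 42.0,
        "Unit": "Seconds",
    }],
)

# Read back hourly totals for the last day -- not unlike a utilization report
# that doubles as an invoice line.
stats = cloudwatch.get_metric_statistics(
    Namespace="Example/OpIntel",
    MetricName="BillableCpuSeconds",
    Dimensions=[{"Name": "Workload", "Value": "batch-settlement"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```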

Pair this trend with mainframe DevOps, described in an IBM introduction to the topic for that audience as driven by “. . .the explosion of inter-dependent multi-platform . . . truly composite, business applications.” Mainframe data centers should expect to host configurations with greatly shortened lifecycles. All this adds to the need for automation of OpIntel for billing, reliability, deployment speed and scalability.

That automation, at its core, involves pushing log data at Big Data volume and velocity into analytic “containers” where appropriate tools can be deployed.

It’s a familiar paradigm, old-timers like Orenstein might say, but applied to a much-evolved setting.

Still More Metal: Edging Toward OpIntel Big Data

Big Data, both in the cloud and on z/OS, is edging toward streaming analytics; for leading-edge data centers it is already a fait accompli. As seasoned business intelligence experts know, ETL can be central to creating tractable datasets that are amenable to analysis.

Syncsort Ironstream® can push z/OS SMF records, SYSLOG, Log4j, and other event data into resources such as Splunk, where analysts can leverage Splunk’s visualization tools to act on the newly received OpIntel.

Syncsort Ironstream makes it simple to collect, transform, and securely forward mainframe logs into Splunk Enterprise, Splunk Enterprise Security, and Splunk Cloud platforms.
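
As a rough sketch of what acting on that newly received OpIntel can look like programmatically, the following Python snippet runs a one-shot search against Splunk’s REST search API to summarize recently forwarded mainframe events. The host, credentials, index name, and sourcetype pattern are hypothetical and will differ per installation.

```python
# Query Splunk over its REST API for mainframe events forwarded by Ironstream.
# Host, credentials, index, and sourcetype below are placeholders.
import requests

SPLUNK = "https://splunk.example.com:8089"   # Splunk management port

response = requests.post(
    f"{SPLUNK}/services/search/jobs",
    auth=("admin", "changeme"),              # placeholder credentials
    verify=False,                            # demo only; validate certs in practice
    data={
        "search": "search index=mainframe sourcetype=smf* earliest=-1h "
                  "| stats count by sourcetype",
        "exec_mode": "oneshot",              # run synchronously, return results
        "output_mode": "json",
    },
)
response.raise_for_status()
for row in response.json().get("results", []):
    print(row["sourcetype"], row["count"])
```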

Sometimes, that OpIntel may have a critical security component. According to Splunk’s VP of Marketing, Kevin Conklin, Splunkbase is the “ideal vehicle” to further Splunk’s real time Anomaly Detection application. If so, mainframe developers may come to see Ironstream as a new “alloy,” designed for availability and flexibility.

IBM itself describes z Systems as “optimized for real time analytics,” but IBM’s high performance metal could still go underutilized or improperly configured. Recognizing and acting upon such events — accumulated at streaming velocity, variety and volume — will require a combination of IBM and third party products, partly due to the speed of innovation in the Big Data sector.

Even when products like Ironstream function in a traditional ETL role for systems management, the role is no less critical. Writing about business intelligence and analytics for Big Data in MIS Quarterly (Vol 36 No. 4, 2012), R. Chiang and V. Storey reiterate that “Design of data marts and tools for extraction, transformation, and load (ETL) are essential for converting and integrating enterprise-specific data” (p. 1166).

Pairing the mainframe with suitable data and tooling for systems management was a good call in the timesharing era of 1976. Not pairing today’s superheated z/OS machines with equivalent tooling in the Big Data era of 2016 may be dangerously close to driving blind.

