Late last month we looked at IBM UK’s arguments that the mainframe is more viable than 15 downsized alternatives (CI No 1,976), but sceptics are critical of IBM’s methods and measurements. First, consider the benchmarks published by Arthur Andersen in 1989. The report states that each customer’s requirements are unique, and that it would be inappropriate to use the results and configurations as a basis for system sizing. Further, it would be inappropriate to use the results to determine price-performance. By citing this report as evidence of the cost-effectiveness of the mainframe, IBM appears to honour neither the spirit nor the letter of Andersen’s study. Secondly, the team did not verify the accuracy of all the numbers reported by the performance measurement tools, although it did reconcile the reported results with whatever other information was available. The summary of the results states that throughput at 70% CPU utilisation was derived by selecting the benchmark point for each system closest to 70% utilisation and extrapolating the number of transactions that would be processed at 70% utilisation. The arts of extrapolation and derivation rather than scientific measurement?
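The extrapolation in question amounts to simple proportional scaling. A minimal sketch of the arithmetic, assuming throughput scales linearly with CPU utilisation (the figures below are illustrative, not taken from the Andersen report):

```python
# Estimate throughput at a target CPU utilisation by scaling the
# nearest measured benchmark point linearly. Figures are illustrative.
def extrapolate_throughput(measured_tps, measured_util, target_util=0.70):
    """Scale transactions per second proportionally to CPU utilisation."""
    return measured_tps * (target_util / measured_util)

# A system measured at 62 tps with the CPU 65% busy would be credited
# with roughly 66.8 tps at 70% utilisation.
estimate = extrapolate_throughput(62.0, 0.65)
print(round(estimate, 1))  # 66.8
```

The weakness the sceptics point to is visible in the formula itself: it assumes throughput remains linear in CPU utilisation between the measured point and 70%, which measurement alone cannot confirm.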

Unannounced

Thirdly, the report refers to appendices describing RAMP-C and the various testing and measurement procedures. IBM has chosen not to provide these details. Finally, the machines measured by IBM were unannounced at the time of the report. Some commentators question the wisdom of associating a three-year-old benchmark with machines measured earlier this year. However, even if there were no reservations about citing the report, several key points remain unclear: for example, the configuration details and the true cost of ownership. Initially, Arthur Parker said that IBM would be willing to provide more details on how the systems were configured. Yet, when that information was requested, IBM replied that …our objective in analysing the cost-performance of various systems was to establish a comparable comparison point – not to define a typical customer configuration. The purpose of the analysis was to highlight some of the economic aspects in moving applications from a central machine to multiple smaller systems that are often overlooked. In the context of this analysis, there is no additional detail that we can give. So much for glasnost. If IBM’s objective is to establish a comparable comparison point, then we need to know exactly what is being compared: like-with-like, as Parker says, or the proverbial apples with oranges. The Andersen report stresses that …no single benchmark workload can adequately represent the broad diversity of commercial transaction processing applications. Response times, throughputs, and the relative positions of systems may vary under a different workload. IBM has provided no details on the workload except to say that it was a commercial one. Next, IBM says that RAMP-C benchmarks actually favour the AS/400 and do not take account of MVS functionality. But proponents of all three systems could claim to be disadvantaged by some aspect of RAMP-C. IBM’s use of Erlang theory, or a derivation of it, to determine warehouse capacity is unusual.
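Erlang theory, in its classic Erlang-B form, gives the probability that an offered load finds every server busy, which is why it is the standard tool for sizing telephone trunks. A sketch, assuming IBM applied something like the standard Erlang-B recursion (the load figure and blocking target are hypothetical, chosen only to show the shape of the calculation):

```python
# Erlang-B blocking probability: the chance that an offered load of
# `erlangs` finds all `servers` busy, computed with the standard
# recurrence B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1)).
def erlang_b(erlangs, servers):
    b = 1.0
    for k in range(1, servers + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

# Hypothetical sizing question: how many parallel engines keep the
# chance of a transaction finding every engine busy below 1%,
# given an offered load of 8 Erlangs?
load = 8.0
servers = 1
while erlang_b(load, servers) > 0.01:
    servers += 1
print(servers)  # 15
```

The model assumes calls (or transactions) that find no free server are simply lost, which is reasonable for telephone trunks but, as the commentators quoted below argue, not obviously right for a data processing workload that queues rather than blocks.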

By Janice McGinn

It seems to be an attempt to show that a single faster engine can process a number of transactions more quickly than a greater number of slower systems, but commentators say that while the theory is valid when applied to telecommunications, it is not necessarily appropriate for assessing the peak load handling of data processing warehouses. Most importantly, the question of cost, the true cost of ownership, is never really addressed by IBM. The true cost of ownership is not based on list prices; after all, IBM does not publish mainframe list prices in the UK. According to the broking company Econocom UK Ltd, IBM’s costings over a five-year period are not entirely unrealistic, although it has strong reservations about the lack of configuration information. In the absence of more detail, Econocom made certain assumptions in order to gain a fair comparison.

In the second-user market, the RS/6000 with about 18Gb of disk comes out at UKP80,000, about 60% of IBM’s list price, and software is available at 15% below list. Assuming the AS/400 is configured to 20.4Gb of memory with six 9336s and 12 9304s so that it is comparable with the Unix box, it costs UKP156,000, again about 60% of list. The 9121-130 is more difficult: since IBM refuses to say how many disk drives were installed on the system, along with every other salient configuration detail, Econocom’s reaction is guarded. The 9121 is a feature-sensitive machine, which makes it difficult to place in a specific price band. The hardware itself has been trading at around UKP45,000 in the UK, about 50% of list price. Econocom says that UKP613,000 over a five-year period seems very high, but the 9343 and 9345s are not yet available on the second-user market, which may account for the apparently inflated price, and users are much more likely to use 9335s. Again, IBM’s unwillingness to provide configuration details makes it impossible to compare the three systems, but IBM has shown itself more than willing to match second-user prices and to provide used kit if pressed. As regards the cost of software, IBM has raised prices by 6% over the past year, and there is no indication in these figures whether future increases have been accounted for; the same applies to hardware maintenance. Also, it is simplistic to base software costs on a one-time charge alone: users ought to compare the cost of one-time charges with monthly licences, and other considerations include when a user enters the product price cycle, the rate of inflation, and price multipliers.
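The point about one-time charges versus monthly licences is simple arithmetic. A sketch, with entirely hypothetical prices, comparing the two over a five-year period and allowing for annual rises of the 6% kind mentioned above:

```python
# Compare a one-time software charge against a monthly licence over
# five years. All prices are hypothetical, for illustration only.
def five_year_cost_otc(one_time_charge, annual_maintenance, annual_rise=0.06):
    """One-time charge plus maintenance that rises by `annual_rise` a year."""
    total = one_time_charge
    fee = annual_maintenance
    for _ in range(5):
        total += fee
        fee *= 1 + annual_rise
    return total

def five_year_cost_mlc(monthly_licence, annual_rise=0.06):
    """Monthly licence charge, repriced upward once a year."""
    total = 0.0
    monthly = monthly_licence
    for _ in range(5):
        total += monthly * 12
        monthly *= 1 + annual_rise
    return total

# A UKP50,000 one-time charge with UKP5,000/year maintenance versus
# a UKP1,200/month licence: which comes out dearer over five years
# depends entirely on the assumed terms and the rate of increases.
print(round(five_year_cost_otc(50_000, 5_000)))  # 78185
print(round(five_year_cost_mlc(1_200)))          # 81174
```

Even this toy comparison shows why a single one-time figure tells a user little: the answer turns on the assumed price increases and on where in the product price cycle the purchase falls.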

Future costs

Nonetheless, IBM’s argument about the cost-effectiveness of the mainframe is not necessarily wrong, and costings published by Xephon Plc in its Mainframe Market Monitor substantiate IBM’s claims. Despite its higher cost per MIPS, says Xephon, the mainframe does not show any distinct cost disadvantage, and future costs are more likely to move in the mainframe’s favour. So, how should users react to IBM’s arguments? There are a number of points to bear in mind. The presentation is defensive and ill-thought-out. No data processing manager or finance director ought to accept IBM’s costings without being shown how it arrived at those figures. That means examining configuration detail and comparing costs with second-user prices. However, the brokerage community complains that IBM is reluctant to provide users with the details necessary for exhaustive costings and comparisons. IBM may claim to be undergoing a cultural revolution, but the ‘trust-me-I’m-a-doctor’ syndrome persists. We do not know whether IBM’s statistics are accurate, since there is so little detail: no statement of allowance for experimental error and no indication of the extent to which the figures are true benchmarks or derivative. Finally, the use of list pricing is a red herring. The issue is neither list prices nor discounted prices: to downsize or not is about choosing an appropriate system for individual workloads and business requirements.