IBM Corp has finally recognised that traditional supercomputers are running out of gas, and is making a strike into the already hotly-contended market for massively parallel supercomputing. And its Rios RISC is the key. In terms of floating point operations, boasted Herb Budd, European director of Scientific & Technical Solutions, at Supercomputing Europe ’92 in Paris last month, IBM tops the lot – the RS/6000 Model 560 just announced offers 100 MFLOPS peak performance per processor, and over 30 Linpack MFLOPS. Where is RISC going? Budd asked rhetorically. In the next five, six, seven years, he forecast, suitably vaguely, IBM’s per-processor performance will be somewhere between 500 MFLOPS and 1 GFLOPS. That said, Budd went on to explain IBM’s new supercomputing strategy, which aims to create a new family of highly parallel supercomputers, to be brought rapidly to market by the new Highly Parallel Supercomputing Systems Laboratory in Kingston, New York (CI No 1,860). There, IBM will design and develop a series of scalable parallel computers based on the Rios RISC technology used in the RS/6000. The market for vectors won’t go away, said Budd, but the exciting work, and the price-performance, is in massive parallelism. The new development effort combines resources from IBM’s Enterprise Systems, Advanced Workstations, Research and Federal Sector divisions. The group will use multiple RISC processors to build a scientific parallel machine theoretically scalable to TeraFLOPS performance. 
IBM’s supercomputing strategy is multi-pronged, and will involve the continued enhancement of the ES/9000 and ES/3090 mainframe vector facility – we will not at any point in the game forget our mainframes; the development of a stand-alone highly parallel system using multiple RISCs, optionally front-ended by the ES/9000; clusters of three to 32 economical RS/6000s, as already available, to serve as an entry-level parallel server, batch server or data server; and development alliances with other companies to complement these offerings. Initial delivery of the first low-end system being developed at the new New York laboratory is expected to be announced later this year, with follow-on systems with additional processors to be offered at intervals throughout the rest of the decade. The new scalable systems will build on experience gained with the parallel RS/6000 systems developed at the Rome European Centre for Scientific & Engineering Computing and the Stavanger European Petroleum Application Centre. The parallel systems will run AIX, OSF/1 Unix and Posix. IBM emphasises that its Application Programming Interface will be key to its strategy, enabling easy software porting from small clusters of RISCs to large TeraFLOPS-scale machines. In addition, IBM expresses recognition of the need for standards, and is collaborating with Cambridge, Massachusetts-based Thinking Machines Corp, from which it will gain access to high-level development languages, and an understanding of how Fortran code can run across different processors. (Budd was adamant that IBM has no plans to build a machine with Thinking Machines, and said that Thinking Machines’ interest in the collaboration was in memory, semiconductors and disk technology.) 
Recognising that the challenge of the massive parallelism game is in the software, IBM’s first offering under its massive parallelism initiative is a cluster of eight to 16 RS/6000s, which can be configured as a parallel server, so that at least users can begin to parallelise their code. Anxious that the mainframe not be made redundant, IBM stresses that its intention is that the RS/6000 clusters be interconnected to ES/9000 mainframes as soon as possible. In a couple of years, Budd said, IBM will open up the mainframe with faster interconnections – faster than the HIPPI channel – making it easier for the RS/6000 to talk to the mainframe. Customers with seismic processing applications, he explained, require data to be farmed out from the host to the RS/6000s at very high speed, so the performance of the link is crucial. According to Budd, clusters of RS/6000s will work together reasonably effectively by the end of the year, though the harmonising of the systems – so that they can operate efficiently in parallel on one task – will be a gradual process. The cluster offerings are not being marketed as a special product with a name or number, but they will begin to look more like one united machine, with a reduced footprint and unified packaging.

Vulcan project

As time goes on, these RS/6000 clusters will be gradually integrated with technology from a long-running IBM research project code-named Vulcan. It is Vulcan technology that is expected to take IBM into the TFLOPS realm by 1995. Randy Mouilic, from IBM’s T J Watson Research Centre in Yorktown Heights, New York, giving a technical briefing at the Paris show, said that the Vulcan machine would feature a mixed multiple-data architecture supporting both single-instruction and multiple-instruction working. Vulcan is intended to exploit RISC and disk technology, to feature a high-performance processor interconnect, and will run a low-level operating system kernel residing on each processor node. It will offer an integrated programming environment and some form of unified system management programming. The key to Vulcan, Mouilic said, is its message switch. The architecture will offer high bandwidth: each processor will send and receive one byte of data every cycle, with the clock set at 20 cycles per microsecond – a 20MHz clock. Meanwhile, Budd’s message to new customers of the RS/6000 and ES/9000 mainframes is invest in this approach because we’re sticking to it. – Sue Norris
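Mouilic’s figures invite a back-of-envelope calculation: one byte each way per cycle at 20 cycles per microsecond works out to 20Mbytes per second per processor in each direction. The sketch below reproduces that arithmetic from the quoted figures only; the processor counts used to illustrate aggregate switch traffic are hypothetical, not from the briefing.

```python
# Back-of-envelope Vulcan link-bandwidth arithmetic, using only the
# figures quoted above: one byte sent and one received per cycle,
# clock set at 20 cycles per microsecond (a 20MHz clock).

CYCLES_PER_MICROSECOND = 20
BYTES_PER_CYCLE = 1  # per direction, per processor


def link_bandwidth_bytes_per_second() -> int:
    """Per-processor bandwidth in each direction, in bytes per second."""
    cycles_per_second = CYCLES_PER_MICROSECOND * 1_000_000
    return cycles_per_second * BYTES_PER_CYCLE


if __name__ == "__main__":
    bw = link_bandwidth_bytes_per_second()
    print(f"per-processor link bandwidth: {bw / 1e6:.0f} Mbytes/s each way")
    # Illustrative (hypothetical) processor counts, to show the aggregate
    # traffic a message switch would have to carry in each direction:
    for n in (32, 256, 1024):
        print(f"{n:5d} processors -> {n * bw / 1e9:.2f} Gbytes/s aggregate")
```

At 20Mbytes per second each way, even a modest cluster pushes the switch into the gigabytes-per-second range, which is presumably why Mouilic called the message switch the key to Vulcan.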