"Microsoft says that by the end of the decade a single personal computer will be able to run Citicorp," a top Amdahl executive quipped recently. "That may be true, but you couldn't run it all from the desktop." So began a briefing to consultants, which covered the role of client-server and mainframe technology in the computing infrastructures of the 1990s. Amdahl Corp, predictably, pooh-poohed competitors' claims that the mainframe will die over the next few years. Less predictable (and less sane?) were its predictions for integration within today's volatile market. The main prediction was this: there will be a mainframe Unix operating system in most Unix enterprises by the end of the decade. Amdahl is quick to admit that applications development for proprietary mainframe environments is lagging behind, primarily because of out-of-date 1980s corporate computing structures that are too hierarchical and too fast-expanding; the company estimates an average development lag of two years on centralised systems. A case for client-server technology, then? Perhaps, but Amdahl points out some problems unique to client-server.
Worst of both worlds
First off, it's hard to client-serverise many monolithic applications, because they are so tightly integrated and because their databases often have many closely-tied pointers and links that make them difficult to decentralise. Secondly, it's difficult to know exactly what kind of client-server system to implement: the company has pinpointed 24 different types of implementation, placing varying amounts of client emphasis on what it sees as the five major sections of an application infrastructure – user interface, data entry and editing, application logic, data access and storage management. The answer, according to Amdahl, seems to be to merge the two types of system within an organisation, so that the mainframe, now acting as an enterprise server, interacts with satellite servers and clients – which, coincidentally, ensures Amdahl continued business for the future. The problem with this best-of-both-worlds solution is that, unless managed properly, it can evolve into a worst-of-both-worlds scenario. Data processing managers shouldn't give too many users access to the power that such an implementation offers, because this loosens control, according to Amdahl. To avoid this, the company recommends that users work out where to place the lion's share of the functionality within a system early on, by matching the characteristics of an application against the characteristics of the various components of the system. In this way users can implement high-function or low-function servers and clients on an informed basis, rather than tacking clients and servers onto a centralised system willy-nilly. Mainframe systems, as the company proudly points out, are rich in performance, memory, data storage and bandwidth.
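To make the idea concrete, here is a minimal sketch in C of one such placement decision. The five layer names are Amdahl's; the C types and the particular split shown are invented for illustration, not any real Amdahl product interface.

    /* A hypothetical sketch of Amdahl's five-layer application split.
       The layer names come from the briefing; the types and the
       placement chosen are invented for illustration. */
    #include <stdio.h>

    enum layer { USER_INTERFACE, DATA_ENTRY_EDITING, APPLICATION_LOGIC,
                 DATA_ACCESS, STORAGE_MANAGEMENT, NUM_LAYERS };
    enum tier { CLIENT, SERVER };

    static const char *layer_name[NUM_LAYERS] = {
        "user interface", "data entry and editing", "application logic",
        "data access", "storage management"
    };

    int main(void)
    {
        /* One placement from Amdahl's taxonomy: a high-function client
           that keeps the first three layers on the desktop and leaves
           data access and storage management to the server. */
        enum tier placement[NUM_LAYERS] = {
            CLIENT, CLIENT, CLIENT, SERVER, SERVER
        };
        int i;

        for (i = 0; i < NUM_LAYERS; i++)
            printf("%-25s -> %s\n", layer_name[i],
                   placement[i] == CLIENT ? "client" : "server");
        return 0;
    }

Shuffling entries between CLIENT and SERVER walks through the rest of the taxonomy – Amdahl's point being that the split should be chosen deliberately, by matching application characteristics to component characteristics, not arrived at by default.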
Client workstations lack the memory capacity of the mainframe, and suffer especially in storage and bandwidth terms – hence the initial Citicorp jibe. Amdahl puts the last piece into the puzzle by recommending its Huron applications development system as an easy means of chopping and changing the power balance in a client-server system between centralised and satellite systems. The first generation of client-server development saw printer and file servers with little means of communication, while the second saw front-end tools and database servers linking into back-end SQL servers at the storage management level. The problem here was that islands of information were developing that had little way of communicating with each other. The third generation saw this situation addressed with the aid of distributed database management systems, which enabled users to hook into different databases on distributed servers through technologies such as EDA/SQL, or standards such as Microsoft Corp's Open Data Base Connectivity or IBM Corp's Distributed Relational Database Architecture (if anyone actually supported the latter).
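As a flavour of what that third-generation hook-up looks like from the client end, the following is a minimal C sketch against the ODBC call-level interface. The ODBC calls themselves are standard; the "ENTERPRISE" data source name, the credentials and the customers table are hypothetical stand-ins.

    /* Minimal sketch: a client pulling rows from a remote database
       through ODBC. Data source, credentials and table are invented. */
    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    int main(void)
    {
        HENV henv;
        HDBC hdbc;
        HSTMT hstmt;
        SQLCHAR name[64];
        SQLLEN len;

        SQLAllocEnv(&henv);             /* ODBC environment */
        SQLAllocConnect(henv, &hdbc);   /* connection handle */

        /* The driver behind the data source name decides where the data
           actually lives: this code is the same whether the back end is
           a departmental SQL server or an enterprise server. */
        if (SQLConnect(hdbc, (SQLCHAR *)"ENTERPRISE", SQL_NTS,
                       (SQLCHAR *)"user", SQL_NTS,
                       (SQLCHAR *)"secret", SQL_NTS) == SQL_SUCCESS) {
            SQLAllocStmt(hdbc, &hstmt);
            SQLExecDirect(hstmt,
                          (SQLCHAR *)"SELECT name FROM customers",
                          SQL_NTS);
            while (SQLFetch(hstmt) == SQL_SUCCESS) {
                SQLGetData(hstmt, 1, SQL_C_CHAR, name,
                           sizeof name, &len);
                printf("%s\n", name);
            }
            SQLFreeStmt(hstmt, SQL_DROP);
            SQLDisconnect(hdbc);
        }
        SQLFreeConnect(hdbc);
        SQLFreeEnv(henv);
        return 0;
    }

The same shape of program could sit behind EDA/SQL or Distributed Relational Database Architecture; the point of the standards battle was precisely that the client should not have to care.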
Such systems can be serviced using larger community servers, which replace groups of smaller servers, reducing the points of control and therefore the complexity of a system. These community servers are used for processing application logic and are in turn linked to the enterprise servers – the centralised systems. This is the current state of the (black?) art. The next, fourth-generation step – and the one that we are still taking – is consolidation of data. With corporate data being much more valuable than the application itself, Amdahl believes it makes more sense to put all of it into one data repository. The company says this will help ensure the safety of mission-critical systems by whittling down the number of vendors on which customers have to depend, but this is questionable: Amdahl seems to believe that many vendors may have disappeared by the end of the decade, which could be true, but is putting all your eggs in one basket going to reduce the risk in the long run? The other benefit of pooling your data, according to Amdahl, is that it will help to reduce computer auditing problems. This pooling of data and operating systems onto one large back-end monster is presumably where the mainframe Unix implementation comes in, although the definition begs the question of whether these Unix servers will have much in common architecturally with the traditional mainframe – they are more likely to be relatively simple parallel processors. Amdahl says it will be ready with the technology to implement its predictions, and it is certainly working flat out to reach that point. Its Antares Corp venture with Electronic Data Systems Corp, using Huron and inCase, gives it a strong client-server applications development technology, and its recent OEM deal with Sun Microsystems Inc – for Sparc servers and the Solaris operating system, and to integrate its UTS mainframe Unix into Solaris – underlines its acceptance of the client-server religion.
Having to pretend
The amusing truth is, of course, that the company has little choice, and has evidently followed IBM Corp down the same hole. When the mainframe industry began, customers were only ever going to buy from one company, because traditional, hierarchical corporate structures rarely demanded anything more than the dumb terminal-to-host arrangement. As user companies emerge from their recessionary cocoons and start airing their new, streamlined management wings, their information technology structures are having to become just as streamlined to stay on top of things. Amdahl is now having to pretend that the mainframe's goal was always to integrate with intelligent client systems and that it was never intended to tie customers down. In fact, Amdahl is admitting that what it was selling as a mainframe five years ago will now be nothing more than an enterprise server, and that these days much more than that is needed to exploit the rest of the market. The customer has cut loose, and it's the system vendor that's cocooned now.

By Danny Bradbury