From Software Futures, a sister publication
A recent survey from consulting firm Bloor Research identified some 148 products currently available that claim to be able to support client/server application development. That’s a lot of products.
By Gary Flood
About 125 too many, according to Bloor Research, whose latest tome has provided Software Futures with quite a few interesting hours' worth of reading this past month. Authors Steve Barrie and Philip Howard have compiled a very worthwhile set of comparisons, opinions, provocative statements, and research findings on what's good and bad about object-oriented, CASE/model-based, and 4GL software offerings. Anyone serious about understanding and making informed decisions in the high-end, mission-critical client/server development market would be well served by buying this 600-page report.

Whenever a report based on product comparisons like this comes out, there's always a faint howl off-camera from the vendors concerned, who usually cry foul about some aspect of the methodology or feature matrix. Just as predictable is the way we all immediately thumb our way to the charts to see who gets the best and worst ratings – journalists, fellow analysts, and customers alike. Bloor knows this, and Barrie sums up the philosophical approach well: "There is a certain element of the personal in our view, but it's clearly based on our view of the world, and we're always very clear about our methodology. That's why we always include the questionnaire and survey results used."
Three camps
Bloor has split the products surveyed into the three camps mentioned earlier – model-, object-, and 4GL-based tools. All products have been graded according to a common set of criteria. Capabilities have been categorized into repository capability, application partitioning, development environment, language/OO capability, modeling tools support, scalability, RAD (rapid application development), and deployment tools support. Also included are marks for ease of use, portability, distributed support, and interoperability. In the model (often originally CASE-based) arena the researchers have poked at Texas Instruments' Composer, Oracle's Designer/2000 and Developer/2000, Software AG's Natural Lightstorm, Sapiens' ObjectPool, Seer's HPS, and USoft's USoft Developer. In the object category you may not be too surprised to find CA's OpenRoad (via its acquisition of Ingres), the eponymous Dynasty and Forté tools, Informix's NewEra, Neuron Data's Elements, Nat Systems' NatStar, Hitachi's ObjectIQ, Template Software's SNAP, and IBM's VisualAge. And finally the spotlight falls on such 4GL luminaries as Gupta/Centura's offering, Magic from Magic Software Enterprises, Antares' ObjectStar, Progress 8 from Progress, Compuware's Uniface 6, the Sybase 4GL, Unify's Vision, and IBM's VisualGen (recently renamed VisualAge Generator).
Buy this report
It's tempting at this point to blow Bloor's chances of selling any copies of this report at all by revealing in detail how each offering has been rated. However, that would be slightly dirty pool, in our view. If you're a serious buyer, developer, or watcher of this market, buy this report. (We have no commercial link with Bloor whatsoever, by the way; we just think it's a great bit of reference material you should have.) But we will give you a flavor of what to expect by asking Barrie what the headline findings were. "Before we sat down to write the report, I was sure that the CASE tools would come out best in terms of full coverage of our requirements for second-generation client/server – in fact I privately thought Seer*HPS would fit best. Instead, I was surprised to find that the OO tools have matured to an extent that enterprises really should now be looking at them. Though some aren't doing quite so well at their marketing as they should, objects fit so well into the whole picture of the distributed computing environment we're recommending should be built. The message to management is that they should be looking at a lot of these lesser-known tools; they'll get surprising value for money." It's only fair to point out that some may disagree with a lot of what Bloor considers necessary for tools to really cut it as worthy of the name second-generation. But a lot wouldn't. The company mandates a strict set of criteria, some prerequisite, some merely desirable. As a development language, a tool must support application development at least as well as a 3GL like Cobol can; it must support batch processing, external system calls (to legacy systems), and reuse of objects or subroutines, and it has to know about event-driven processing. Merely desirable: syntax checking.
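The report doesn't prescribe any syntax for its "event-driven processing" criterion, but the idea reduces to handlers registered against named events and invoked when those events fire, rather than a fixed top-to-bottom procedure. A minimal sketch, in Python and with names entirely of our own invention (no surveyed product works this way specifically):

```python
# Minimal event-driven dispatch: handlers are registered against named
# events and invoked, in registration order, when an event is raised.
# All names here are illustrative, not drawn from any surveyed tool.

class EventDispatcher:
    def __init__(self):
        self._handlers = {}  # event name -> list of handler callables

    def on(self, event, handler):
        """Register a handler for the named event."""
        self._handlers.setdefault(event, []).append(handler)

    def raise_event(self, event, payload=None):
        """Invoke every handler for the event; collect their results."""
        return [handler(payload) for handler in self._handlers.get(event, [])]


dispatcher = EventDispatcher()
dispatcher.on("order_entered", lambda order: f"validated {order}")
dispatcher.on("order_entered", lambda order: f"queued {order} for batch")

print(dispatcher.raise_event("order_entered", "PO-1044"))
```

The point of the criterion is the inversion of control: the tool's runtime, not the application programmer's main loop, decides when application logic runs.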
In performance terms, a tool must provide asynchronous (ie non-blocking) processing at the desktop level, and it must offer compiled executables, multithreaded server capability or TP monitor support, scalability to at least hundreds of concurrent users, and access to network and messaging facilities. Merely desirable are the ability to offer load balancing and a means to monitor network traffic between client and server. In the fashionable area of partitioning, vendors won't cut the mustard for Bloor unless their products offer their own messaging mechanism/architecture, partition at a lower level than SQL messaging, allow creation and modification of partitions by very simple means (preferably graphical), and support three tiers (defined in this sense as dialogue, business, and server objects). Desirable in this sector: failover capability (ie replication of partitions), trigger and stored procedure support for target databases, and dynamic partitioning capabilities.
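The "asynchronous processing at the desktop level" requirement simply means the client doesn't freeze while a server call is in flight. A sketch of the shape of the thing, in Python with a hypothetical server stub (Bloor's criterion says nothing about threads specifically; this is one common way to satisfy it):

```python
# Sketch of non-blocking client-side processing: the "desktop" submits
# a request and keeps working while a worker thread completes the call.
# server_call is a stand-in for a real remote request, not a real API.

import queue
import threading
import time


def server_call(request):
    """Hypothetical remote server request."""
    time.sleep(0.05)  # simulate network latency
    return f"result for {request}"


def submit_async(request):
    """Return a queue immediately; the result arrives on it later."""
    results = queue.Queue()

    def worker():
        results.put(server_call(request))

    threading.Thread(target=worker, daemon=True).start()
    return results


pending = submit_async("customer lookup")
# The desktop is free to repaint the screen or take keystrokes here,
# instead of blocking on the wire...
print("repainted screen")
print(pending.get())  # collect the server's answer when it's ready
```

A blocking client would do neither of the two prints until the server answered; here the first happens immediately, which is the whole of what the criterion demands.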
Tough questions about objects
As a last reason for checking out this study, we'd like to let you know that although Bloor's analysts are favorable toward objects, they haven't quite entered the OO ashram just yet. They're not afraid of asking some tough questions about what we think we know about objects – and why we'd want to use them. For instance: "When you read popular accounts of distributed objects technology, you could be forgiven for being left with the impression that there is something intrinsically favorable about an application being distributed…. At present only a small proportion of commercial applications are inherently distributed at the server side. Typically, where multiple, geographically remote servers are used in an application, this is due to some historical accident or some technological inconvenience… there is no need for applications to have the paraphernalia of distributed computing, with all of the development complication and performance overhead that this entails." Similarly, the authors ask, heretically, whether your distributed objects really need to be object-oriented – and whether distributed objects fit your client/server computing strategy at all. "We do not see that a large proportion of enterprises are ready to make such a sophisticated use of distributed object technology," they say. "We think that enterprises will not be ignoring this technology, rather that they will introduce it into their mainstream client/server computing strategy in an incremental way, taking advantage of improved functionality and sophistication as it emerges from the ORB vendors and their associates." Don't write that check – or have lunch with that salesman or MD – until you've read this report. Ignorance is very much not strength in the second-generation client/server world.
Enterprise Client/Server Development Tools: An Evaluation And Comparison, 1996
Bloor Research Group. Tel: +44 (0)1908 373311
Sales enquiries in US: InfoEdge, +1 203-363-7150.