Robert Palmer’s chief executive perspectives speech at Windows World was liberally sprinkled with demonstrations of just what you can do with the aid of fast processors like Alpha. Most impressive was a demonstration of real-time speech-to-text, in which Digital Equipment Corp’s chief tempted fate by getting the machine to transcribe a few words. Not only did the computer get them right, but Palmer didn’t slow his delivery at all, and the printed words appeared a fraction of a second later.

The demonstration came courtesy of a piece of software called Sphinx II from Carnegie Mellon University. Sphinx II uses hidden Markov modelling, a set of algorithms developed in the mid-to-late 1980s, to achieve its results. What is new, according to DEC consultant engineer Lawrence Stewart, is that hardware now has the horsepower to drive them in real time.

The version that Palmer played with has a vocabulary of 2,500 words, very small by human standards, though enough for application-specific work such as booking airline tickets or controlling computer front ends. Larger versions are already in the works, though: Stewart says the company already has a system with a vocabulary of 20,000 words that will happily cope with most stories read out of the Wall Street Journal.

But the real significance of the demonstration is that, for the first time, high-powered personal computers are capable of keeping up with normal speech without… the… speaker… doing… this. The delay that appeared in the demo, Stewart says, is constant, irrespective of the length of the sentence being interpreted. From here on in, says Stewart, increased performance and memory will simply lead to larger vocabularies. The next real challenges lie in natural language processing: enabling the machine to understand exactly what is meant by the rambling, illogical verbalisations in which humans tend to indulge.
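The article mentions hidden Markov modelling only in passing. For readers curious about the underlying idea, here is a toy Viterbi decoder in Python: given a sequence of observations, it recovers the most probable sequence of hidden states. The states, probabilities and observations below are invented for illustration and bear no relation to Sphinx II’s actual acoustic models, which operate over thousands of states representing speech sounds.

```python
# Toy hidden Markov model decoding via the Viterbi algorithm.
# NOT Sphinx II's implementation: a real recogniser decodes acoustic
# frames; here two made-up hidden states explain daily activities.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, state sequence) of the best path for obs."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}

    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # choose the best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path

    prob, last = max((V[-1][s], s) for s in states)
    return prob, path[last]

# Illustrative model: hidden weather states emit observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

prob, seq = viterbi(["walk", "shop", "clean"], states,
                    start_p, trans_p, emit_p)
print(seq)  # → ['Sunny', 'Rainy', 'Rainy']
```

Speech recognisers apply the same dynamic-programming trick at vastly larger scale, which is why, as Stewart notes, raw hardware horsepower was the missing ingredient for real-time operation.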