Get Ready for the Talking Data Center


    It seems a given at this point that automation will play a major role in IT infrastructure management going forward. From there, it is only a small step to artificial intelligence and cognitive computing, which could turn the data center into a largely autonomous entity.

    But what will life, and work, be like in an automated environment, and how will humans interact with the intelligent systems that are managing the bulk of the operational workload?

    Among the more intriguing aspects of this ongoing development are the twin fields of speech recognition and voice simulation. This is one area in which science fiction may have gotten it right with characters like HAL and the Starship Enterprise’s onboard computer: an overarching data environment that can process human commands and queries through speech rather than typing, clicking or tapping.

    This technology is a lot closer than many people realize. Digital assistants like Siri and Cortana are only the vanguard of what is expected to be a rapidly evolving technology that will quickly shed the stilted computer-speak we hear today in favor of more natural, flowing diction.

    Microsoft recently announced a breakthrough in its “conversational speech recognition system” that is said to produce a more intuitive, responsive environment that is closer than ever to the way humans converse with each other. The system has a word error rate of 6.3 percent on the National Institute of Standards and Technology’s (NIST) 2000 Switchboard benchmark, says the UK Express, the lowest score to date. While the technology, which is based on neural networking and other emerging developments, will likely make its way into Windows 10, it isn’t much of a stretch to imagine a Cortana-like interface for Azure-Windows Server hybrid cloud management.
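For readers curious about the metric behind that 6.3 percent figure, word error rate is simply the word-level edit distance between what the system transcribed and a reference transcript, divided by the length of the reference. A minimal sketch (the function name and sample strings here are illustrative, not drawn from Microsoft's system):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of a four-word reference -> 25 percent WER.
print(word_error_rate("provision two new nodes", "provision new nodes"))  # 0.25
```

A 6.3 percent rate means roughly one word in sixteen is misheard, dropped, or invented, which is close to the error rate of human transcribers on the same benchmark.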

    Google is on a similar track with its DeepMind program, seeking to replace traditional concatenative text-to-speech (TTS) techniques, in which speech is generated by recombining earlier recorded fragments, with a system called WaveNet that relies on statistical sampling to analyze and interpret live audio waveforms. Again, neural networking forms the underlying framework for the system, which is said to generate raw audio at a rate of 16,000 samples per second. Google says the technology is powerful enough to incorporate emotional context and accents, and can even generate speech without text in ways that may one day evolve into a native computer language – sort of like a certain blue-and-white astromech droid who has starred in a few movies over the years.
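The sample-by-sample generation described above can be sketched in toy form: an autoregressive loop in which each new audio sample is produced conditioned on everything generated so far. In WaveNet the predictor is a deep neural network; the stand-in below (a noisy sine tone, with invented names throughout) exists only to keep the sketch runnable:

```python
import math
import random

SAMPLE_RATE = 16000  # samples per second, the rate cited for WaveNet

def next_sample(history):
    """Stand-in predictor. In WaveNet this is a neural network that
    emits a probability distribution over the next sample value;
    here we just return a decaying-noise sine tone."""
    t = len(history) / SAMPLE_RATE
    return math.sin(2 * math.pi * 440 * t) + random.gauss(0, 0.01)

def generate(seconds):
    # Autoregressive loop: each sample is drawn one at a time,
    # conditioned on all previously generated samples.
    audio = []
    for _ in range(int(seconds * SAMPLE_RATE)):
        audio.append(next_sample(audio))
    return audio

clip = generate(0.5)
print(len(clip))  # 8000 samples for half a second at 16 kHz
```

The expensive part in the real system is that the network runs once per sample, 16,000 times per second of audio, which is why raw-waveform generation was long considered impractical for production TTS.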

    IBM, of course, has a keen interest in natural-sounding voice simulation for platforms like Watson. The company recently tapped Nvidia’s Tesla P100 Pascal GPU and NVLink high-speed interconnect to augment its intelligent analytics and cognitive computing business, which is now posting revenues on the order of $5 billion per quarter, says investment newsletter Guru Focus. The intent is to pair the P100 with IBM’s Power CPU to up the ante in areas like deep learning and AI. At the same time, Chinese web services company Baidu is employing Nvidia platforms for projects ranging from self-driving cars to its Deep Speech 2 speech recognition program.

    It will be a strange world indeed when IT managers can simply walk into a room and tell an ambient computer presence to provision new resources for a certain workload or run a diagnostic on last night’s batch job, but this is where the technology seems to be headed.

    It may take a while for the new hire to fit in, but with time and hard work, it stands every chance of disproving the doubters and becoming the most valued member of the team.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

