Speech to voice?
You may have heard it when calling a company or watching a product presentation online: a computer-generated voice, produced by text-to-speech synthesis. To hear what we mean, open Google Translate and try out its pronunciation in a few different languages. It sounds a little robotic, right?
“Translators not needed anymore?” The beginning of a new era in translation!
Today, if you call a bank in the US, you will almost certainly talk to a computer that can answer simple questions about your account and connect you to a real person if necessary. Several products on the market today, including the Xbox Kinect, use voice input to answer simple questions or to navigate a user interface. In fact, Microsoft Windows and Office products have included speech recognition since the late 1990s, and this functionality has been invaluable to customers with accessibility needs.
Until recently, though, even the best speech recognition systems still had word error rates of 20-25% on arbitrary speech.
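For readers unfamiliar with the metric: word error rate (WER) is the number of word-level substitutions, insertions, and deletions needed to turn a recognizer's output into the reference transcript, divided by the number of words in the reference. Below is a minimal Python sketch of that calculation; the two example sentences are invented purely for illustration.

```python
# Minimal word error rate (WER) sketch: word-level edit distance
# (substitutions + insertions + deletions) divided by the number of
# reference words. The example sentences below are made up.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

if __name__ == "__main__":
    ref = "please transfer fifty dollars to my savings account"
    hyp = "please transfer fifteen dollars to my savings"
    print(f"WER: {wer(ref, hyp):.0%}")  # 2 errors / 8 words = 25%
```

A 20-25% WER means roughly one word in every four or five comes out wrong.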
Just over two years ago, researchers at Microsoft Research and the University of Toronto made another breakthrough. Using a technique called deep neural networks, which is patterned after the behavior of the human brain, they were able to train speech recognizers that are more discriminative and more accurate than those produced by earlier methods.
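To give a rough sense of what training a speech recognizer with a neural network involves, here is a purely illustrative sketch: a tiny feed-forward network, trained with plain gradient descent, that classifies synthetic "acoustic feature" frames into phone classes. The data, layer sizes, and training loop are all invented for the example; real deep-neural-network acoustic models stack many more layers and are trained on thousands of hours of actual speech.

```python
import numpy as np

# Toy illustration only: a one-hidden-layer "acoustic model" mapping a
# 39-dimensional feature frame (think MFCCs) to one of 10 phone classes.
# Real DNN acoustic models stack many such layers and use real speech data.
rng = np.random.default_rng(0)
n_frames, n_features, n_hidden, n_phones = 512, 39, 64, 10

X = rng.normal(size=(n_frames, n_features))        # fake acoustic frames
true_proj = rng.normal(size=(n_features, n_phones))
y = (X @ true_proj).argmax(axis=1)                 # fake phone labels

W1 = rng.normal(scale=0.1, size=(n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_phones));   b2 = np.zeros(n_phones)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)               # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)     # softmax over phone classes

lr = 0.5
for step in range(300):                            # full-batch gradient descent
    h, p = forward(X)
    g = p.copy()
    g[np.arange(n_frames), y] -= 1.0               # softmax cross-entropy gradient
    g /= n_frames
    grad_h = (g @ W2.T) * (h > 0)                  # backprop through the ReLU
    W2 -= lr * (h.T @ g);      b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ grad_h); b1 -= lr * grad_h.sum(axis=0)

_, p = forward(X)
print("frame accuracy on the toy data:", (p.argmax(axis=1) == y).mean())
```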
According to Rick Rashid, Microsoft’s Chief Research Officer: “During my October 25 presentation in China, I had the opportunity to showcase the latest results of this work. We have been able to reduce the word error rate for speech by over 30% compared to previous methods. This means that rather than having one word in 4 or 5 incorrect, now the error rate is one word in 7 or 8. While still far from perfect, this is the most dramatic change in accuracy since the introduction of hidden Markov modeling in 1979, and as we add more data to the training we believe that we will get even better results.”
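As a quick back-of-the-envelope check on those figures, assuming the 20-25% starting error rates mentioned above and the roughly 30% relative reduction Rashid describes:

```python
# Back-of-the-envelope check: apply a 30% relative reduction to the
# 20-25% word error rates of earlier systems ("one word in 4 or 5").
for old_wer in (0.25, 0.20):
    new_wer = old_wer * (1 - 0.30)
    print(f"{old_wer:.0%} -> {new_wer:.1%} (about 1 word in {1 / new_wer:.0f})")
# Output: 25% -> 17.5% (about 1 word in 6)
#         20% -> 14.0% (about 1 word in 7)
# A reduction somewhat over 30%, as Rashid says, is what pushes the
# result toward "one word in 7 or 8".
```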