The most recent study was conducted by a team from Microsoft AI and Research with the aim of improving accuracy and achieving human parity, despite human advantages such as the ability to cooperate and to draw on context and experience. The researchers reduced the error rate by around 12 percent, primarily by improving the language model and the neural net-based acoustic model of Microsoft’s speech recognition system. Significantly, they also enabled the speech recognizer to make use of entire conversations instead of just snippets, which allowed it to better predict which words or phrases would most probably come next. In other words, the researchers taught the system to take in the whole picture when working out what it was hearing, letting it adapt its transcriptions to context, just as humans do naturally in conversation. The recordings that formed the basis of both studies came from the Switchboard collection, a research corpus of thousands of telephone conversations that has been used to test speech recognition systems since the early 1990s. Microsoft’s speech recognition system is used right now in Cortana, Microsoft Cognitive Services, and Presentation Translator.
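The benefit of conditioning on more context can be illustrated with a toy count-based n-gram language model. This is only a minimal sketch of the general idea, not Microsoft's system (which uses neural language models), and the corpus and function names here are purely illustrative:

```python
from collections import Counter, defaultdict

def train_ngram(sentences, n):
    """Count-based n-gram model: map an (n-1)-word context to next-word counts."""
    model = defaultdict(Counter)
    for words in sentences:
        padded = ["<s>"] * (n - 1) + words
        for i in range(len(words)):
            context = tuple(padded[i:i + n - 1])
            model[context][padded[i + n - 1]] += 1
    return model

def predict(model, context):
    """Most likely next word given the context, or None if the context is unseen."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

# Tiny illustrative corpus.
corpus = [
    "we can recognize speech".split(),
    "we can recognize speech".split(),
    "i can wreck a nice beach".split(),
]

bigram = train_ngram(corpus, 2)   # one word of context
trigram = train_ngram(corpus, 3)  # two words of context
```

With only one word of context, "can" is most often followed by "recognize"; widening the context to "i can" flips the prediction to "wreck", showing how a longer window disambiguates what the next word is likely to be.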