HTML5 may help Web pages talk, listen

Sometime in the near future, users might not only read Web pages but also hold conversations with them, at least if a new activity group in the W3C (World Wide Web Consortium) bears fruit.

The W3C is investigating the possibility of incorporating voice recognition and speech synthesis interfaces within Web pages. A new incubator group will file a report a year from now summarizing the feasibility of adding voice and speech features to HTML, the W3C's standard markup language for Web pages.

AT&T, Google, Microsoft and the Mozilla Foundation, among others, all have engineers participating in this effort.

The human voice and the Web are not strangers: Google includes a voice-based Web search app in its Android smartphone operating system, and Microsoft promises robust voice-driven features in its upcoming Windows Phone 7.

The HTML Speech Incubator Group is studying the feasibility of developing a standard Web interface for both speech recognition and synthesis, said group chair Dan Burnett, who is also director of speech technologies and standards at voice response system provider Voxeo.

Such an interface could be used across multiple browsers. Using built-in or plug-in voice recognition and speech synthesis engines, browsers could read pages aloud or let users fill out Web forms by speaking.
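To make the idea concrete, here is a minimal sketch of how such an interface might be exposed to page scripts, covering the two scenarios described above: reading a page aloud and filling a form by voice. The constructor and property names are illustrative assumptions, not anything the incubator group has standardized.

    // Hypothetical recognition constructor, declared so the sketch is self-contained;
    // real browsers may expose a different name, or none at all yet.
    declare const SpeechRecognition: new () => {
      lang: string;
      onresult: (event: { results: { [i: number]: { [j: number]: { transcript: string } } } }) => void;
      start(): void;
    };

    // Read a page element aloud using a built-in or plug-in synthesis engine.
    function readAloud(element: HTMLElement): void {
      const utterance = new SpeechSynthesisUtterance(element.textContent ?? "");
      utterance.lang = document.documentElement.lang || "en-US";
      window.speechSynthesis.speak(utterance);
    }

    // Fill a form field from a spoken phrase using a recognition engine.
    function dictateInto(input: HTMLInputElement): void {
      const recognizer = new SpeechRecognition();
      recognizer.lang = "en-US";
      recognizer.onresult = (event) => {
        input.value = event.results[0][0].transcript;
      };
      recognizer.start();
    }

Because the engines would sit behind a common browser interface, the same script could work whether the speech processing happens locally or through a plug-in.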

While this work may overlap with another voice-based W3C effort, VoiceXML, the two are somewhat different, Burnett said. VoiceXML wouldn't work very well for the Web, given that it was designed primarily for voice-driven applications, such as the telephone-based voice response systems in which it is widely used. Like HTML itself, the proposed voice capabilities would be stateless, meaning they would not require a dedicated session with the user.

Burnett noted that while the report would discuss the feasibility of establishing a set of interfaces, the work of developing the interfaces themselves, should they be warranted, would be taken on by another W3C group, such as the HTML Working Group.

The W3C has been busy with speech technologies on a number of other fronts as well. It recently released version 3.0 of VoiceXML, in which the working group added semantic descriptions of the features and organized the functionality into modules.

The W3C also plans shortly to release version 1.1 of SSML (the Speech Synthesis Markup Language), which is often used in conjunction with VoiceXML. The new version will add support for Asian languages and give developers more flexibility in voice selection and in handling content in unexpected languages.
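As a rough illustration of the multilingual case the revision targets, the sketch below embeds an SSML document that mixes English and Japanese content. The voice name, the markup details beyond the basic SSML namespace, and the synthesizeSsml() call are illustrative assumptions rather than details taken from the specification.

    // An SSML document mixing languages; a synthesis engine would pick an
    // appropriate voice for the Japanese passage.
    const ssml = `
    <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
      Your order has shipped.
      <voice name="Mizuki">
        <lang xml:lang="ja-JP">ご注文の商品を発送しました。</lang>
      </voice>
    </speak>`;

    // Hypothetical engine entry point, declared only so the sketch is self-contained.
    declare function synthesizeSsml(markup: string): Promise<void>;

    synthesizeSsml(ssml);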
