W3C talks up next-gen multi-lingual talking Web


Consortium publishes early draft of SSML 1.1.

The World Wide Web Consortium (W3C) aims to encourage support for multi-lingual voice applications with the release of the First Public Working Draft of Speech Synthesis Markup Language (SSML) 1.1.

W3C estimates that, within three years, the web will contain significantly more content in Chinese and Indian languages, among others.

The consortium added that, in many of the regions where these languages are spoken, people can access the Web more easily through a less expensive mobile handset than through a desktop computer.

It estimates that the world has more than 10 times as many mobile phones as Internet-connected personal computers.

With an improved SSML, people worldwide will have an increased ability to listen to synthesised speech through mobile phones, desktop computers and other devices.

This will extend the reach of computation and information delivery to nearly every corner of the globe, according to the W3C.

SSML 1.1 improves on SSML 1.0 by adding support for more conventions and practices in the world's languages.

One new feature helps to disambiguate "word boundaries" in languages that do not use white space as a boundary, including Chinese, Thai and Japanese.
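A minimal sketch of how such markup might look, assuming the draft's explicit word-token element (the element name and the sample Chinese phrase here are illustrative, not taken from the article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="zh-CN">
  <!-- In scripts written without white space, each token element
       marks a word boundary explicitly for the synthesiser -->
  <token>我们</token><token>明天</token><token>见面</token>
</speak>
```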

SSML 1.1 allows references to language-specific pronunciation alphabets, clarifying the relationship between the author's specified speaking voice and the language being spoken.
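As a rough illustration, the existing phoneme element already takes an alphabet attribute; the 1.1 draft's contribution is allowing that attribute to reference additional, language-specific alphabets beyond "ipa" (the word and transcription below are illustrative):

```xml
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-GB">
  <!-- "ipa" dates from SSML 1.0; 1.1 is intended to accommodate
       other registered, language-specific pronunciation alphabets -->
  <phoneme alphabet="ipa" ph="təˈmɑːtəʊ">tomato</phoneme>
</speak>
```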

It also provides finer-grained control over lexicon activation and entry usage, along with features to better integrate with existing and upcoming Speech Interface Framework specifications.
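The finer-grained lexicon control might be sketched as follows, assuming a lookup-style element that activates a named lexicon only for part of the document (the lexicon URI and identifiers are hypothetical):

```xml
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <!-- Hypothetical lexicon document; xml:id lets later markup
       refer to this lexicon by name -->
  <lexicon xml:id="acronyms" uri="http://example.com/acronyms.pls"/>
  <!-- The custom lexicon applies only inside this scope -->
  <lookup ref="acronyms">
    The W3C publishes open specifications.
  </lookup>
  This sentence falls back to the synthesiser's built-in lexicon.
</speak>
```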
Copyright © v3.co.uk
