Hi,

When I looked at the notification stuff I noticed that speech should be a possible output. And there are some text messages already... :-)

My idea was to view the speech synthesizer as - a synthesizer! Compare reading an mp3-encoded file with reading a text file:

* You have to decode it to a waveform.
* You have to output it to a multimedia device.
* The data is preferably transferred over the interfaces in its most compressed format - in this case, text.

=> The text-to-speech part could be an arts plugin...

But then it hit me:

* The spoken messages need to be localized!
=> Languages do not use letters in the same way.
=> You will need a speech engine per language.
=> And sometimes that won't be enough - you will need additional hints.
=> The hints have to be standardized so that an alternative engine can be used for a language.
=> So - how will you encode the sentences?

My suggestion: reuse the already translated i18n sentences as much as possible.

* Button names, menu names, tooltips, ...

With a way of adding extra hints and/or more text.

It is possible that what you really want is a "qt-speech" - one that only says the item with focus. In addition to this you will need the proposed interfaces - to handle user-modifiable fields... (like when writing or reading this mail...)

And one more thing: I often read English text with Swedish localization, and vice versa...

/RogerL

--
Roger Larsson
Skellefteå
Sweden

>> Visit http://mail.kde.org/mailman/listinfo/kde-devel#unsub to unsubscribe <<
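The "one speech engine per language, selected by the message's locale" idea could be sketched roughly as below. This is only a minimal illustration, not an actual aRts or Qt API; all class and function names (SpeechEngine, EngineRegistry, etc.) are hypothetical, and real engines would render a waveform rather than return a tagged string.

```cpp
#include <map>
#include <string>

// Hypothetical: one engine per language, analogous to choosing an
// audio decoder by codec. Names are illustrative only.
class SpeechEngine {
public:
    virtual ~SpeechEngine() {}
    // A real engine would synthesize a waveform; here we just tag the
    // text so the dispatch is visible.
    virtual std::string speak(const std::string& text) const = 0;
};

class EnglishEngine : public SpeechEngine {
public:
    std::string speak(const std::string& text) const override {
        return "[en] " + text;
    }
};

class SwedishEngine : public SpeechEngine {
public:
    std::string speak(const std::string& text) const override {
        return "[sv] " + text;
    }
};

// Registry: pick the engine matching the locale of the (already
// translated) i18n string, so English UI text read under a Swedish
// localization still goes to the English engine, and vice versa.
class EngineRegistry {
    std::map<std::string, const SpeechEngine*> engines_;
public:
    void add(const std::string& locale, const SpeechEngine* e) {
        engines_[locale] = e;
    }
    // Returns 0 when no engine is installed for the locale, so the
    // caller can fall back (e.g. to plain text display).
    const SpeechEngine* lookup(const std::string& locale) const {
        std::map<std::string, const SpeechEngine*>::const_iterator it =
            engines_.find(locale);
        return it == engines_.end() ? 0 : it->second;
    }
};
```

The standardized per-language "hints" the mail asks for would travel alongside the text, so any alternative engine registered for the same locale could interpret them.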