[Kde-accessibility] Speech Dispatcher, KDE, dcop, and perlbox

david powell achiestdragon at gmail.com
Tue Nov 22 02:33:15 CET 2005


On Tuesday 22 November 2005 12:44 am, Luke Yelavich wrote:
> On Tue, Nov 22, 2005 at 10:53:49AM EST, Gary Cramblitt wrote:
> > On Monday 21 November 2005 09:51 am, david powell wrote:
> > > I ask as I have been having a look at Perlbox; that is speech to text
> > > and text to speech. It supports KDE, but it does not seem to integrate
> > > with KTTSD and drives Festival directly, so the TTS side has all the
> > > /dev/dsp problems that not using aRts/ALSA/GStreamer normally brings.
> > > I just think it would benefit from being able to use KTTSD in KDE,
> > > though they try to support other X window managers too, and there is a
> > > console mode for non-X use. Speech to text is in some ways another
> > > accessibility requirement; for some forms of disability it would be a
> > > very useful addition.
> >
> > We need yet another TTS system like we need a hole in our heads.

Agreed, but it is the speech-to-text part of Perlbox that I was interested in.
Their TTS seems to suffer the same problem that we have, but without the
driver base we currently offer in KTTSD.


> >
> > As for audio.  Frankly the situation is a mess.  As Hynek Hanke recently
> > put it, NONE of the existing audio frameworks is really suitable for
> > accessibility TTS.  ALSA suffers from poor documentation and mixing
> > problems. aRts has its problems and of course is dependent on KDE. 
> > GStreamer has high latency (and to make matters worse, the latest
> > releases are ABI incompatible with earlier releases).  NAS and NMM also
> > have problems.  Hynek could probably give a more detailed and accurate
> > list of the problems than I.
> >
> > The freestandards.org Accessibility Group drafted a set of accessibility
> > audio requirements and Olaf presented it at aKademy 2005.  Unfortunately,
> > I think like 2 people attended the talk, so the requirements haven't
> > received wide recognition.  Most of the multimedia framework guys are
> > concentrating on playing music and videos and aren't really thinking much
> > about our needs.  I don't think the situation will improve anytime soon.
>
> I just thought I would join in here. Myself and others from the Ubuntu
> project are looking at doing some extra work on improving Ubuntu's
> accessibility. I guess the recommendations are not trivial, but is there
> anywhere I can find this information? I am meeting with a few others
> later this week about what we would like to do, accessibility-wise,
> for Ubuntu.
>
> > My personal opinion is that ALSA is our best hope.  Most of the other
> > multimedia frameworks work with ALSA, the ALSA guys claim that the dmix
> > problems are finally solved in the latest versions, and if someone could
> > clean up the documentation problem...
> >
> > But actually, I'm just tired of struggling with audio issues.

The Linux sound system needs sorting out. If it were only TTS, little would
get done, but it affects almost every audio application in one way or
another; this is not just an accessibility problem. Any application using
sound is going to hit some problems; it is just the way we need to use it
that makes us more aware of its downfalls.

> > I want to 
> > expend my energy solving accessibility problems; not solving multimedia
> > framework problems.
>
> Another issue to consider is the use of hardware speech synthesizers. I
> am well aware that more people these days use software speech, however
> there are many of us who prefer to use hardware speech, for a few
> reasons. Have you thought of the best way for implementing support for
> that?
>
There is provision in KTTSD to drive a synth by running a command-line
program, if the hardware synth has software that lets you issue basic
say_text "message to speak" style commands.
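As a concrete illustration of that command-line route, here is a minimal
sketch of a wrapper script that a command-based talker could invoke. The
`say_text` command name is a placeholder for whatever program the synth
vendor actually ships, and the carriage-return stripping is an assumption
about what a serial-attached synth might dislike; check the synth's own
documentation:

```shell
#!/bin/sh
# Hypothetical wrapper for a hardware synth's vendor-supplied CLI.
# SYNTH_CMD defaults to "say_text", a placeholder name; override it
# with the real command your synth's software provides.
SYNTH_CMD="${SYNTH_CMD:-say_text}"

speak() {
    # Drop carriage returns that a serial link might not like (assumption).
    text=$(printf '%s' "$1" | tr -d '\r')
    "$SYNTH_CMD" "$text"
}

# A command-line talker would call this script as:  wrapper.sh "text to speak"
if [ $# -gt 0 ]; then
    speak "$1"
fi
```

Keeping the vendor command behind a wrapper like this also gives one place
to add synth-specific quirks (escaping, rate or voice flags) later.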

Other than that, it is information on the synths, and the lack of any,
that is the biggest problem. If we had details of them it would be
easier to say for certain.

The problem a hardware synth may have is that, once given text to speak,
there is often no way of stopping it, and it is the interface method used
for that sort of command that is difficult to fit into the driver model
that would currently work with one.

Another solution I have had success with is a second sound card: one is
driven by aRts/ALSA for normal sounds, and KTTSD uses the ALSA driver on
the second card for speech. I still get some applications failing when
they try to use the first card at the same time, but I have no problems
with the speech on the second card.
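That two-card arrangement can be captured in ALSA configuration. A minimal
~/.asoundrc sketch, assuming the desktop sound is on card 0 and the speech
card enumerates as card 1 (check `aplay -l` for the actual indices on your
system):

```
# ~/.asoundrc sketch (assumption: desktop audio on card 0, speech on card 1)
pcm.speech {
    type hw
    card 1
    device 0
}
```

KTTSD's ALSA output could then be pointed at the "speech" device, leaving
card 0 free for aRts and ordinary applications.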

> My personal feeling is to use SD again, and write hardware synth modules
> to work with hardware speech synths. Then a desktop environment
> independent configuration suite could be developed for easy
> configuration from either the console, KDE, or GNOME.
>
I think that whatever we try to do, configuration should be available
from the console, KDE, or GNOME.

> Apologies if I may sound a bit ahead of myself here, but I also do
> believe that SD is the better way to go.
>

dave

> Luke
> _______________________________________________
> kde-accessibility mailing list
> kde-accessibility at kde.org
> https://mail.kde.org/mailman/listinfo/kde-accessibility

