Handhelds Hardware

Using PDAs for Dictation? 322

SunPin asks: "I'm a writer who is 99% dependent, due to fine-motor disabilities, on voice dictation. I've been a dictation user since 1990. My preference is 'discrete' speech because of its very low resource consumption and effectively infinite flexibility. Over the years, my computer use has devolved to programming, FTP, email (Mozilla), word processing (OpenOffice) and Ricochet. Drop the game and there's nothing that I shouldn't be allowed to do on the go. The problem is that I can't. Back in 1990, the requirements for IBM VoiceType were: DOS, 8MB RAM, 10MB of drive space with one of those new-fangled scorching 386-16MHz processors... not exactly demanding by today's standards and, unless I'm outright wrong, not demanding by today's PDA standards. Why hasn't it occurred yet?"

"In the disability offices of the hundreds of universities across the US, such software would be a major money saver because not all students need a high-powered laptop. While natural speech is great from a marketing perspective, it is simply impractical for general use and cannot adapt to mildly noisy environments. IBM, L & H and Microsoft have all given me the run-around. IBM refused to entertain the possibility. L & H is on life support, in a deep coma. Only Microsoft had a remotely positive response saying that they were testing natural recognition in Mandarin Chinese in their Beijing research office. Does anyone believe in keeping it simple, anymore?"

This discussion has been archived. No new comments can be posted.

Using PDAs for Dictation?

Comments Filter:
  • Well... (Score:3, Interesting)

    by acehole ( 174372 ) on Friday November 22, 2002 @02:10PM (#4733282) Homepage
    The reason for this can be put down to a couple of reasons.

    First off, buying a dictaphone is still much cheaper than a PDA with software.

    And secondly, the whole voice/word recognition program market hasn't really made any great leaps or bounds over the past five years, not to mention it's not popular in the mainstream yet.

    • Re:Well... (Score:5, Funny)

      by stratjakt ( 596332 ) on Friday November 22, 2002 @02:21PM (#4733428) Journal
      >> First off, buying a dictaphone ...

      DICTAPHONE? DICTAPHONE?

      re-vulcanize my tires, post-haste. And make sure this post is on the next auto-gyro to Prussia.
    • I definitely agree with the above poster that the market is very small and has not made many achievements. Also, you have to sometimes wonder about IBM. They invest lots of money, do so much research, come up with a great product, and that's it. They just leave it. Take the ThinkPads, Linux, OS/2, Informix, Lotus Suite... and so on. I just don't understand the mentality of the execs there.
      • Re:Well... (Score:4, Funny)

        by banzai51 ( 140396 ) on Friday November 22, 2002 @03:02PM (#4733788) Journal
        Reminds me of what someone at work says about IBM:

        "IBM: Where software goes to die."

      • They do this so they can stroke their egos when some other company develops a product that uses some sort of technology that IBM did some research on in the past.

        It makes them feel like they are a super tech think tank ala PARC...

        They do come up with some great stuff, and I would bet that if IBM were a Japanese company the entire tech industry would look totally different.
    • Actually, Dragon NaturallySpeaking Doctor's Edition comes with a special USB dictaphone that plugs into the computer and translates the voice into text using Dragon's software. I'm not sure if it works with anything but Windows, but it's certainly cheaper than hiring someone to do it.
      • Sony's IC recorders come with Dragon NaturallySpeaking Standard Edition and do the same thing. Again, Windows only. The PEG-NX70V will let you do voice recordings, but I think it just doubles as an IC recorder. Some of the other PDAs on the market do the same. That is probably the best that is available right now.
  • Simputer (Score:3, Informative)

    by papasui ( 567265 ) on Friday November 22, 2002 @02:11PM (#4733299) Homepage
    http://slashdot.org/article.pl?sid=02/11/19/234216&mode=thread&tid=100
  • My god (Score:5, Funny)

    by Anonymous Coward on Friday November 22, 2002 @02:11PM (#4733300)
    Next thing, you'll be wanting a machine to wash your dishes and clothing, or, heck, let's be crazy, and send moving pictures around the world!
  • by zanerock ( 218113 ) <(zane) (at) (zanecorp.com)> on Friday November 22, 2002 @02:11PM (#4733303) Homepage
    I think it has more to do with the perception of voice dictation as unreliable and resource intensive than with any actual fact; as the poster points out, it can be done fairly cheaply.

    I have not had much experience, but I think the other thing is that people are averse to any sort of training or teaching required, no matter the long-term dividends.

    Like most things, it comes down not to fact, but to perception and prejudice. Most people base their buying decisions on 30-second spots, not informed research, so the cost of educating people is too high for producers to incur.
    • by Locutus ( 9039 ) on Friday November 22, 2002 @03:13PM (#4733909)
      I met some people at COMDEX who have VR (voice recognition) running on the Sharp Zaurus. I've run IBM's VR software and it was pretty good 6 years ago. On the Zaurus, I would imagine that a 256MB CF card could hold a good-sized dictionary, so dictation appears to be possible. Especially since this guy was doing it on a 16MHz 386 years ago.

      The ability of the Zaurus to take a mic input makes a big difference, since a good mic is important due to the noise-cancelling features they have. All the PDAs with no external mic option are pretty much useless for VR/dictation.

      LoB
  • by gpinzone ( 531794 ) on Friday November 22, 2002 @02:12PM (#4733311) Homepage Journal
    It's the other, most overlooked piece of hardware used in speech recognition: the microphone. The junky headset given away with ViaVoice or the el cheapo unit sold at Radio Shack for under $10 makes most people's experiences with voice recognition software less than favorable. Invest in a $50-$60 professional headset and the ability of the software to accurately detect your speech patterns improves dramatically. How are they going to shoehorn a high-fidelity audio processor in there? Maybe a USB headset might be the answer, assuming the device can accept USB devices.

    I'm also going to assume that the current line of speech recognition products are MUCH better than what ran on your old 386.
    • The headset isn't an issue; like you said, make it accept USB and get a good headset-type mic and you're good.

      The problem is in recognizing what you said; the best software out there still sucks, and you have to train it forever. No matter what, you will have to train it to recognize your voice. My saying car and someone from Boston saying car are drastically different, but they are the same word. Given a lot of training you can get something halfway decent, but it still requires corrections. This is especially true if you have a cold, just woke up, or are sleepy.

      It's a very complex thing and I don't see any significant breakthroughs anytime soon. I've used quite a lot of programs (with a good microphone) and you can get OK results, especially for simple things like "Open" and "Close", but I think we're a long way from really good dictation software.
      -Chris
    • Plantronics makes several headsets with microphone that only require a USB connection, but do not require a sound card. They work quite well, and this should lower the hardware requirements for a small, lower-powered device.

      http://www.plantronics.com

      and search for their DSP-*00 series. I picked up their DSP-500 (normally $110) for $40 on a deal.
    • The main problem is not the microphone.
      It's the microphone circuit on the soundcard.
      My brand new AWE-64 had a crap mic circuit.
      The el-cheapo replacement was excellent.

    • by CrazyJoel ( 146417 ) on Friday November 22, 2002 @03:37PM (#4734120)
      I remember seeing a ViaVoice demo a couple of years ago. The guy doing the demo said they use these head mikes that are actually 2 microphones. One mike faces the mouth, the other faces away. The circuitry then filters out any environmental noise from your voice. Don't know how much they cost, though. (I'm sure I could look it up)
      • NC microphones (Score:4, Informative)

        by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Friday November 22, 2002 @04:08PM (#4734364) Homepage
        Some mics do this mechanically also - They have a port on the reverse side of the mic element so it only detects pressure differences between the two sides of the mic, i.e. only nearby sounds coming from one side of the mic (your mouth). Plantronics has plenty of these - Such NC headsets are common thanks to cellular telephone handsfree kits being required by law in some states, and they are quite good. (I love my Plantronics headset.)
      • The guy doing the demo was probably dumbing down a basic microphone technique that's been in use for decades.

        There are not two microphones in that headset - that would just make it worse, since no PC it would run on is real-time enough to match the sound samples together.

        Instead they use a dual-port microphone. The element lies between the front of the mic (towards the speaker) and the back (towards ambient noise). Sound pressure from ambient noise tends to hit both the front and back simultaneously, while sound pressure from the speaker hits only the front. The difference gives mainly the speaker, with muted external sound.

        Even cheap mics have that now. The main difference between a good mic and a bad one is its construction and materials, which affect its response characteristics.

        -Adam
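The pressure-difference idea can be simulated digitally in a couple of lines. This is an illustrative sketch with toy numbers, not how the analog element actually computes the difference:

```python
# Sketch (not from the thread): simulating the dual-port idea digitally.
# Ambient noise reaches both the front and back of the element with the
# same pressure, while the speaker's voice reaches only the front, so
# subtracting the two ports cancels the noise and keeps the voice.

def dual_port_output(front, back):
    """Difference of the two ports: (voice + noise) - noise = voice."""
    return [f - b for f, b in zip(front, back)]

# Toy signals: the "voice" only appears at the front port.
voice = [0.0, 0.5, 1.0, 0.5, 0.0]
noise = [0.2, 0.2, 0.2, 0.2, 0.2]

front = [v + n for v, n in zip(voice, noise)]  # voice + ambient noise
back = noise[:]                                # ambient noise only

print(dual_port_output(front, back))  # recovers the voice samples (up to float rounding)
```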
  • Because (Score:5, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday November 22, 2002 @02:12PM (#4733312) Homepage Journal
    Those speech recognition packages were only really capable of handling a few key phrases. In order to do seamless voice recognition that people will actually want to use, it is necessary to recognize any (reasonable) word from any (reasonable) speaker in a (reasonably) :) short amount of time.

    IBM can't even manage to do this on, for example, a P3 733EB. How they're going to do it on a 300MHz XScale or SH chip or similar (let alone a Motorola Dragonball) is beyond me. I think your head is in the clouds.

    With that said, voice recognition is very much on everyone's minds and it is coming. The limiting factor in handhelds right now is battery technology, which seems to be advancing more rapidly now than it has been in the last decade or so. With more power density comes faster processors and more ram, and the ability to perform these kinds of operations on smaller computers.

    • Re:Because (Score:2, Insightful)

      by Anonymous Coward
      I think it's fairly clear that the person asking this question has said they'd be happy with the functionality of the older software, if it were available on a PDA. That's not hard to understand, is it? He's not asking why voice recognition is unpopular; he knows it is a niche application, especially for people who can't use a keyboard. But for those people, isn't a PDA solution, even if it isn't up to "your" standards, a good idea?
    • Re:Because (Score:5, Insightful)

      by photon317 ( 208409 ) on Friday November 22, 2002 @02:21PM (#4733425)

      Yeah but the author claims he was happy with discrete speech processing on a 386-16 that we had back in the day. He doesn't want continuous speech that doesn't have to be trained and all that jazz - just simple old school voice recognition. Is it so much to ask that someone port the old algorithms to the palm?
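For the curious, the kind of "old school" discrete recognition being asked for can be sketched in a few lines: match each isolated utterance against stored per-user templates, for example with dynamic time warping (DTW). This is an illustrative sketch with made-up feature values, not IBM's actual VoiceType algorithm; real systems compare frames of acoustic features such as cepstra, while each "frame" here is just a number:

```python
# Minimal isolated-word ("discrete" speech) recognizer: nearest template
# under dynamic time warping. Cheap enough for a 386 or a PDA because the
# vocabulary is small and words arrive one at a time.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def recognize(utterance, templates):
    """Return the template word with the smallest DTW distance."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

# Per-user templates recorded during training. Discrete speech means one
# word at a time, so no word-boundary search is needed.
templates = {"open": [1, 3, 5, 3, 1], "close": [5, 2, 2, 2, 5]}

print(recognize([1, 3, 4, 5, 3, 1], templates))  # → open
```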
      • Re:Because (Score:5, Insightful)

        by tmark ( 230091 ) on Friday November 22, 2002 @02:45PM (#4733657)
        Yeah but the author claims he was happy with discrete speech processing on a 386-16 that we had back in the day.

        The author might be happy with what he had in those days. The rest of the market would not be. In fact, the market is not happy with what we have now, as witnessed by the very low penetration of voice-recognition software. So why would we expect companies to spend the resources porting the old stuff when the new stuff won't even sell?
        • Re:Because (Score:2, Insightful)

          by banzai51 ( 140396 )
          Exactly. The real problem is that speech recognition is a niche demand. Speech recognition in and of itself has no mainstream uses. Think of an office full of people using speech recognition. Not pretty. At home? People only want speech recognition if it is tied to computer commands. ("Computer, download my email, filter for spam, then read back the names of the senders.") Who's left? People who find typing difficult because of a physical limitation. While a worthy cause, it may well not be a profitable one.
          • Re:Because (Score:3, Interesting)

            by WatertonMan ( 550706 )
            Speech recognition is only a niche market because of the way it is integrated at present. If there were reliable speech recognition on PDAs, then I suspect many people would use that instead of the nearly as unreliable handwriting recognition. (I can speak a lot more clearly than I can write.) Further, it would be a boon for businessmen on the go. You could dictate notes and letters while driving, for instance.

            Sometimes niche markets turn out not to be. Just look at a lot of "desktop publishing" software. Back in 1986 that was still largely a niche market. Now it is indispensable for many, many people.

          • Re:Because (Score:3, Insightful)

            by fishbowl ( 7759 )
            "Think of an office full of people using speech recognition. Not pretty."

            Almost as frightening as an office full of people all using telephones.

            You don't remember typewriters and adding machines, or for that matter, the dictaphone, do you?

    • Re:Because (Score:2, Interesting)

      I don't think voice recognition is going to take off much at all, not for the general consumer. I don't think many people want to spend 8 hours a day talking at their computer (or handheld, as the case may be). I imagine it'd leave you pretty hoarse unless the technology got to the point where you could quietly mumble or subvocalize. There is also a certain amount of privacy that comes with a "quiet" input device... you can hack away at the Linux kernel or type a naughty fantasy to your girlfriend and nobody knows the difference unless they look at your screen. Now imagine speaking each of them at work. ;)

      Frankly, I don't want the din of dozens of coworkers talking at their computers around me. I'll stick with my qwerty keyboard. And this means those with physical disabilities will be condemned to a corner of the market, getting less attention and, as a result, more expensive and lower-quality products.

      -FF
      • I disagree; I think voice recognition will (eventually) become the way of interacting with computers. Think sci-fi TV: being able to just speak and have the computer respond to your requests. (Computer, locate Wesley Crusher. The airlock? Computer, open outer airlock door, safety override authorization...)

        Er, sorry Wil.

        Anyway, which is more "natural": opening Word and typing, or saying "Computer, please dictate a letter to such and such"? I think the answer to this is clear. It won't be replacing the secretary any time soon, but this is how many people (I think most) would prefer to control their computers. Some things will likely always be best done with a keyboard; don't expect the keyboard to vanish any time soon. But especially in the case of portable computers, which either have no keyboard or a substandard one, I would expect voice control to be the norm within five years or so. Text input on portable computers is simply too tedious.

        With that said, I think there's also room for dictation on your PDA and then non-realtime conversion to text while you're not doing anything with it, or conversion done on your PC (of course that's also non-realtime) when you dock. Also, with mobile wireless internet getting cheaper, you may actually find yourself speaking to your mobile device, which then sends an audio stream somewhere else for processing. If communications technology continues to outpace battery technology, this seems likely.

        • Re:Because (Score:5, Funny)

          by alanh ( 29068 ) on Friday November 22, 2002 @03:50PM (#4734217) Homepage
          I disagree, I think voice recognition will (eventually) become the way of interacting with computers. Think Sci-Fi TV; being able to just speak and have the computer respond to your requests.


          I guess you haven't seen 2001: A Space Odyssey....

          "Open the pod bay doors HAL."
          "I'm sorry Dave, I'm afraid I can't do that."

          Maybe it wasn't that HAL was insane - just that his speech recognition software failed....
        • Re:Because (Score:5, Insightful)

          by JanneM ( 7445 ) on Friday November 22, 2002 @06:29PM (#4735542) Homepage
          Voice being the natural way to interact with devices? Think it through: an entire office trying to dictate to their word processing program all at once, with people popping in to each other trying to talk about work; an airplane of road warriors all trying to dictate stuff to their respective laptops at once (without saying anything confidential); support departments trying to make dictation work with fifty other people speaking commands to their respective clients; or programmers trying to spell their way through their creations.

          And have you ever actually tried speaking for eight to ten hours at a stretch? I'm not talking about random, occasional speech acts, but sustained, focused speech. You'd have about three weeks until laryngitis became an occupational hazard among white-collar workers.

          Speech is nice, but it is very much a niche application - not just now, but ever. A keyboard is faster than speech, and does not contribute to noise levels or occupational damage nearly as much as sustained speech would. It's a nice, even essential, mode of operation for those applications where a keyboard just won't do; the disabled, firemen, surgeons and so on will rightly love the interface. For mainstream use, however, it's just not good enough even when it's perfect.

          It could become an accessory input, along the lines of replacing menu commands in an app: mark text, say "cut", mark a place, say "paste", and so on, but it would just never replace keyboard input in any mainstream application.

  • by Anonymous Coward on Friday November 22, 2002 @02:14PM (#4733347)
    Dragon has a portable product that you dock to your PC to do the voice-to-text. You can bring it with you, then connect it when you're home. A digital recorder is available bundled with the software, or you can use any microcassette recorder and a Norcom playback and interface device. Search Google for info!
  • Storage space? (Score:5, Insightful)

    by StandardDeviant ( 122674 ) on Friday November 22, 2002 @02:17PM (#4733377) Homepage Journal
    I'm guessing the biggest stumbling block would be the storage space requirements for the data files the programs use to map vocalizations to meaning... Most mainstream PDAs only have 8MB of RAM/storage combined, and Palm is still shipping devices with as little as 2MB. Your best bet might be one of the StrongARM-based handhelds combined with a reasonably large CompactFlash/SecureDigital card... (E.g. Sharp Zaurus, Hewlett-ComPackard's iPaq, etc.) Of course, that's probably $300-500, but that's still less than a new laptop...
  • by Anonymous Coward on Friday November 22, 2002 @02:18PM (#4733388)
    With a simple search for dictaphone [dictaphone.com] I was able to find a product called EXSpeech. I think this is what you are looking for.
    • Although I am a fan of Dictaphone, the EXSpeech product is hardly suitable for a PDA or for the general tasks that the original poster is looking for. From the site:

      "EXSpeech(TM) offers a highly accurate continuous speech recognition solution that's fully integrated with Dictaphone's industry-standard Enterprise Express® voice and text management system. This state-of-the-art speech recognition technology, incorporated into a complete patient information workflow management system, can reduce transcription costs by more than 20% while speeding report turnaround."
  • Can you imagine how bad it would be if everyone switched to voice recognition? Cellphones are bad enough; imagine if everyone were talking to their computers. The noise would be terrible. No matter how quiet you are, the noise would still add up. Would you want to dictate something to your computer that is supposed to be private? Not when anyone can hear it. I'm waiting for something better, whatever it might be.
  • I dont get it (Score:2, Informative)

    by stackdump ( 553408 )
    Is the poster just dissatisfied with existing software: [pocketpcmag.com] or pissed because he wants to be computing Star Trek style and never will?
  • Possible Reasons (Score:2, Insightful)

    by Qzukk ( 229616 )
    First, nobody thought of it when PDAs were first made. In case you haven't noticed, very few have microphones.

    Second, people with PDAs usually are using them somewhere in public, so you have a lot more background noise (as well as different acoustics), which varies as the person moves about and would probably be difficult to filter out.

    Third, you'd want a mic on a cable, separate from the PDA. This way you can keep the PDA where you can read what's on it, and not have to shout at it from arm's length, or shove it in your face and talk into it.

    Finally, (and this is the reason I think voice control never caught on in a business setting), imagine a roomful of people talking to their machines. Each machine would have to identify its user's voice uniquely out of the babble, otherwise it would just take one ticked off guy with a bullhorn to issue the command to delete everything.
  • One Reason (Score:2, Insightful)

    by MCMLXXVI ( 601095 )
    The ambient noise around PDAs in normal use is a lot higher than around a normal computer in your house.
  • ViaVoice (Score:5, Interesting)

    by Shamanin ( 561998 ) on Friday November 22, 2002 @02:22PM (#4733439)
    What about IBM's ViaVoice for the Pocket PC?

    http://www-916.ibm.com/press/prnews.nsf/jan/9E286C1CD2D94A3185256ADB007849B7

    This shipped with my iPAQ 3835 and seemed to work pretty well for the 5 minutes that I used it before installing Linux on my iPAQ.

  • by Christopher_G_Lewis ( 260977 ) on Friday November 22, 2002 @02:22PM (#4733440) Homepage
    I remember doing some development of this stuff (not for CE) years ago, and IBM and Dragon were the "best" at the time. Best in quotes, because they were pretty bad at the time, no HAL type interface yet.

    Because it was a relatively processor-intensive task, I would imagine that time + improvements in sound card DSPs would make these better. (But I've been known to be wrong :-)

    Interesting that a Google search brought up this company
    CyberTron [cyberon.com.tw]

    as well as IBM ViaVoice Mobility

    IBM ViaVoice Mobility [ibm.com]
  • Simple answer... (Score:3, Insightful)

    by toupsie ( 88295 ) on Friday November 22, 2002 @02:24PM (#4733454) Homepage
    Why hasn't it occurred yet?

    It hasn't been proven to be market viable or cost effective. You need to be knocking on the doors of the PocketPC and Palm folks and give them conclusive evidence that making such a device will bring in the cash. Most capitalist companies are in business to make money, not give out charity, unless it can fatten their profits through marketing. Find more people in your situation and gang up! The squeaky wheel gets the grease.

    However, I use an Ericsson T68m's [sonyericsson.com] voice dictation, which gives me roughly 25 minutes of recording time. I just transcribe when I get home.

  • Dependable dictation (Score:3, Interesting)

    by t0qer ( 230538 ) on Friday November 22, 2002 @02:24PM (#4733462) Homepage Journal
    Sorry no links....

    There are dictation services available on the net; basically, you e-mail them an MP3 and they e-mail back a fully typed document.

    As far as the reason for voice recognition not being on a PDA, I think it's space requirements. Of the two packages I've tried (Dragon Dictate and IBM's), both require a lot of disk space to contain the recognition engine and your personal voice pattern files - much more than your average PDA can hold. We're probably only a few years off from PDAs having that type of storage.
  • In a similar vein, why does text-to-speech still sound as crappy as Stephen Hawking's text-to-speech device from his 80s documentaries?

    I recently downloaded Microsoft Reader along with a text-to-speech add-in and it sounded horrible. Same thing with Adobe's eBook Reader (well, theirs was a little better).

    But why is this so? Why is text-to-speech even difficult? If you just have a person speak all the different phonetic sounds, shouldn't it be a simple matter of stringing together those sounds in a relatively seamless way?

    • by stratjakt ( 596332 ) on Friday November 22, 2002 @02:35PM (#4733555) Journal
      It's not just the phonetic sounds, but the multitude of various inflections and emphases that are lacking, and they are pretty hard to reproduce unless the TTS engine can interpret the meaning of the text.

      Raising the voice at the end of a question may be easy enough. But how much? When? This is a question too, is it not?

      A good orator would read a more 'exciting' passage more quickly, and with more enthusiasm, punctuating key verbs and nouns. How is software to know which passages are more exciting, and which aren't?

      It's not just a hard task for computers, but for people too. Computers read aloud at about the same level as a poor orator: pho-net-i-call-y, in a dull, drab monotone. Drop by the local high school and listen to them reading Shakespeare.

      Reading aloud may be simple; reading well and naturally is a skill.
      • Raising the voice at the end of a question may be easy enough. But how much? When? This is a question too, is it not?

        Heuristics? You should be able to come up with a rudimentary rule set for certain things. And really the only limit to how accurate you can get is how much time you are willing to put into refining and extending the rules.

        A good orator would read a more 'exciting' passage more quickly, and with more enthusiasm, punctuating key verbs and nouns. How is software to know which passages are more exciting, and which arent?

        How do we know? By matching key words and phrases. Is there even an attempt at this?

        It's not just a hard task for computers, but for people too. Computers read aloud at about the same level as a poor orator: pho-net-i-call-y, in a dull, drab monotone. Drop by the local high school and listen to them reading Shakespeare.

        Even if it is too hard a task for a computer to leap beyond a dull, drab monotone for straight text-to-speech, do you know of any attempts at emphasis tags?

        <quiet></quiet>, <excited></excited>
        I find it really hard to believe that this hasn't advanced at all since the 80s.
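A rudimentary rule set of the sort suggested above is easy to sketch. The tags in the comment are hypothetical markup, not a real standard (speech engines address this with markup like SSML's prosody element); the sketch below just maps end-of-sentence punctuation to naive prosody hints:

```python
# Naive rule-based prosody annotator: split text into sentences and
# attach a hint based on the terminal punctuation. A real TTS front end
# would use far richer rules, but this is the "heuristics" idea in miniature.
import re

def annotate(text):
    """Return (sentence, prosody hint) pairs using punctuation rules."""
    hints = []
    for sentence in re.findall(r"[^.?!]+[.?!]", text):
        s = sentence.strip()
        if s.endswith("?"):
            hints.append((s, "rising pitch"))   # questions rise at the end
        elif s.endswith("!"):
            hints.append((s, "faster, louder")) # exclamations get energy
        else:
            hints.append((s, "neutral"))
    return hints

print(annotate("This is a question too, is it not? Read on. Amazing!"))
```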
    • Check out AT&T's TTS demo [att.com]. It sounds REALLY good.
    • Voiceware's stuff here [tmaa.com] is really quite good. You just never hear the good TTS on desktops because the licenses are expensive, and only telcos can afford them.

      You actually hear the voices all the time over the phone (recordings and such), but you just think it's prerecorded, and then spliced. I think part of GM's OnStar service may use TTS.
    • If you just have a person speak all the different phonetic sounds, shouldn't it be a simple matter of stringing together those sounds in a relatively seamless way?

      No. For the complete answer take an introductory linguistics course and pester the professor.

      Short answer: speech doesn't work that way. When you cut phonemes away from the surrounding ones, they no longer sound like speech and you can't string them back together - the result isn't heard as speech at all, but a bunch of random chirps and vowel sounds.

      This is also part of why speech to text is so hard; the sound graph of, for example, /k/ looks completely different depending on what other phonemes are in the same syllable. (and so speech to text can't really match at the level of phoneme very well, and has to back off matches to the syllable level or longer) Sounds which we interpret as "identical" when used in speech look completely different when you plot out the frequencies involved (or take a look at the data). About the only phonemes which can be cut-and-pasted in isolation are vowels, and only the middle parts of long vowel sounds do that particularly well.

      It frustrates your intuition, but the initial and final sounds of "cook" are not the same to some sound-sensing device that isn't connected to the human brain's special speech processors. That's because the human brain processes speech-like sound so that you hear as similar those sounds which require similar positions of the tongue, mouth, and other organs humans speak with. There's also noise correction in there like you wouldn't believe, which is how you can still understand stilted Hawking-like text to speech.

      I suppose that the ultimate text-to-speech machine would run an intense physical simulation of air being forced over human vocal cords and through a human mouth with a tongue moving just right for each word, but:

      1. The processing time would be, to put it mildly, massive.
      2. Doing the motion capture for that would be difficult and possibly quite painful.
      3. You'd still have the issue of pronouncing words within the context of a sentence.
    • The problem is a bit more complex than you make it sound. People doing text-to-speech development are smart, and would have jumped on this idea years ago if it were as easy as it sounds at first.

      To sound natural, speech has to incorporate prosody and intonation, as well as handle coarticulation.

      Coarticulation refers to the fact that the sound of a phoneme (the smallest unit of linguistic sound) is affected by those that come before and after it.

      It is not an easy problem, but there have been some nice advances in concatenative text-to-speech systems. For example here is a pdf [infofax.com] about IBM's approach to the problem.

      We're not there yet, but things are improving.
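To illustrate what "concatenative" means in miniature: units are recorded snippets, and even the simplest systems must smooth the joins rather than butt units together, or every boundary clicks. The numbers below are invented stand-ins for recorded diphone samples, not real speech data:

```python
# Toy concatenative join: crossfade a few samples at each unit boundary
# instead of butting units together, which shrinks the discontinuity
# (the audible "click") at the join.

def crossfade_join(units, overlap=2):
    """Concatenate units, linearly crossfading `overlap` samples at each join."""
    out = list(units[0])
    for unit in units[1:]:
        for k in range(overlap):
            w = (k + 1) / (overlap + 1)  # fade-in weight for the new unit
            out[-overlap + k] = (1 - w) * out[-overlap + k] + w * unit[k]
        out.extend(unit[overlap:])
    return out

a = [0.0, 0.1, 0.2, 0.3]   # pretend diphone 1
b = [0.9, 0.6, 0.4, 0.2]   # pretend diphone 2: note the 0.3 -> 0.9 jump

joined = crossfade_join([a, b])
print(len(joined))  # → 6 (4 + 4 samples, minus 2 overlapping)
```

Coarticulation is exactly why this toy breaks down on real speech: the right recording for a unit depends on its neighbors, so real systems store many context-dependent units and pick among them.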
  • voice server (Score:3, Insightful)

    by Steven Rumbalski ( 628533 ) on Friday November 22, 2002 @02:27PM (#4733483)
    Conjecture: voice recognition on a PDA could work if you had a separate voice server over a wireless connection. You'd have voice sent over a regular phone connection to your home PC (with modem), which does the recognition; it then spits back text (over another connection?) to your PDA.

    Some might say that this would make VR too slow. I don't see why this would be noticeably slower than doing VR in person. After all, when we talk on the phone, the person on the other end hears us almost instantaneously.

    On a side note: my brother is a doctor who uses VR to do his dictations. It is much cheaper than paying a transcription service. He also does not need to review the transcriptions afterwards for accuracy, because he essentially reviews it as he speaks.
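The plumbing for such a voice server is straightforward to sketch: length-prefix each audio chunk so the server can reframe the byte stream, and run recognition server-side. The recognizer below is a hypothetical stub; a real deployment would call an engine like ViaVoice on the server:

```python
# Sketch of the wire protocol for a "voice server": the PDA streams
# length-prefixed audio chunks, the server reframes them and runs the
# (heavy) recognition, then sends text back the same way.
import struct

def pack_chunk(payload: bytes) -> bytes:
    """Length-prefix a chunk so the receiver can reframe the byte stream."""
    return struct.pack("!I", len(payload)) + payload

def unpack_chunks(stream: bytes):
    """Split a received byte stream back into the original chunks."""
    chunks, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!I", stream, i)
        chunks.append(stream[i + 4:i + 4 + n])
        i += 4 + n
    return chunks

def recognize_stub(audio: bytes) -> str:
    """Stand-in for the server-side engine (hypothetical)."""
    return "<%d bytes of audio>" % len(audio)

# PDA side: send two audio chunks; server side: reframe and "recognize".
wire = pack_chunk(b"chunk-one") + pack_chunk(b"chunk-two")
print([recognize_stub(c) for c in unpack_chunks(wire)])
```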
    • Couldn't this be done without a PDA, with just a cellphone capable of instant messaging (can you receive those at the same time you're on a voice call)?
    • It is interesting that I JUST did a project on this subject for a Ubiquitous Computing class... My project was called "Distributed Speech Recognition." Here is a link:

      Distributed Speech Recognition Project [duke.edu]

      I also have heard it through the grapevine that the big voice recognition companies are working on exactly this technology... I wouldn't be surprised if Speech .NET includes support for something like this in the near future. I believe I read on some website that support for Speech API on PocketPC was coming soon...

  • Maybe as a middle ground this could be a good use for a Tablet PC, particularly since it would give you a bigger screen and an interface for seeing and marking up the text as it is input.
  • Just an idea... it's a handheld Linux-based system, so why would this be such a bad idea? Hell, while you're at it, install Festival so it can talk back.

    yes yes, a scripting nightmare.. perhaps some enterprising programmers could start something on sourceforge or something..

    It's not like the technology isn't out there. It's certainly not perfect; the Zaurus isn't big on storage space, and it's hardly cheap. And of course there are countless threads on the imperfection of voice recog.. blah blah.. but good enough is a fine answer on the path to [unattainable] perfection.

    Anyway; Keep It Simple, Stupid:

    Zaurus + Microdrive + ViaVoice/Dragon libs [+ Festival?] + glueware = handheld voice recognition..

    what's the big deal?
  • to use PDAs for World Dictation!

    MARCH ON MY PALM MINIONS! Go forth! And ravage the world!

    *cackles deviously*
  • by RevAaron ( 125240 ) <revaaron AT hotmail DOT com> on Friday November 22, 2002 @02:35PM (#4733562) Homepage
    You can get a version of ViaVoice for the PocketPC. However, it sucks. It's not a real dictation system; it only lets you use a pretty small pre-defined group of commands, not general English dictation. I was pretty disappointed. However, I wouldn't be surprised if eventually there is a full-blown ViaVoice Embedded version for the PocketPC.

    As usual, there are some results [google.com] that come up with a simple Google search.

    There was a Dragon NaturallySpeaking beta for the Newton OS 2.1, and it works OK. But it's still a beta and far from perfect.

    If you're looking for voice recognition for other PDAs, including PalmOS or Linux devices, you'll probably have much less luck.
  • by Cyclopedian ( 163375 ) on Friday November 22, 2002 @02:35PM (#4733565) Journal
    This place [washington.edu] at the University of Washington is working on a different model of speech recognition that could be conducive to PDA use (low power, filtering out extraneous info).

    Basically, they are working to analyze speech in slices (phonemes) instead of the more computationally intensive whole words. This would lead to a higher success rate and could easily be used across multiple accents of the same language (English, engrish, etc).

    I'm excited about what they could accomplish there.

    -Cyc
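For a flavor of what "analyzing speech in slices" means, here is a toy front end (frame sizes are typical textbook values, not UW's actual parameters): the signal is chopped into short overlapping windows, and each window is reduced to a cheap feature before any recognition happens.

```python
# Chop a sample stream into overlapping phoneme-scale frames and compute
# a cheap per-frame feature (mean energy). A real front end would compute
# something like MFCCs here; energy keeps the sketch minimal.

def frames(samples, frame_len=400, hop=160):  # 25 ms windows, 10 ms hop at 16 kHz
    out = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        out.append(samples[start:start + frame_len])
    return out

def energy(frame):
    return sum(s * s for s in frame) / len(frame)

signal = [0] * 800 + [100] * 800  # silence followed by a loud burst
feats = [energy(f) for f in frames(signal)]
print(feats)  # energy ramps up as the frames slide into the burst
```

Working on small fixed-size slices like this is what makes the approach cheap: each frame is a tiny, bounded computation, which suits a low-power device far better than whole-word matching.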

  • by Myrv ( 305480 ) on Friday November 22, 2002 @02:36PM (#4733572)
    Only recently have PDAs been shipping with anything approaching a good DAC, and many PDAs still lack any ADC support. Without a good analogue-to-digital converter built into the PDA, you won't be able to do voice recognition. Remember that your 386 still required a sound card to work properly. The same is true for PDAs today.
  • Dictation is fine...as long as the damned thing doesn't start talking back to me.
  • by jgrider ( 165754 ) on Friday November 22, 2002 @02:40PM (#4733609) Homepage
    (Disclaimer: I am currently consulting for a firm that is developing a Palm cradle with built-in dictation/voice recognition capabilities for the medical transcription market...)

    Since the asker wanted to know WHY nobody has done this yet, I'll spell it out:

    Basically the major pitfalls to developing this are:
    1) Crappy algorithms that mangle what you really said into something unrelated :)
    2) Power Consumption
    3) Interfacing to the PDA (not hard to do, but non-trivial)
    4) Limited PDA capabilities (Remember that Palm's DragonBall is a RISC architecture, and things like speech recognition NEED floating point math which must be emulated)

    The solutions:

    1) Somebody (not unlike me...) has to code the already existing better algorithms (check the literature - speech recognition is a mature technology, and publications abound) into a usable chunk of code, instead of simply recycling ViaVoice or NaturallySpeaking's libraries.
    2) Add more battery storage.
    3) Use another processor to do the conversion, then simply write it to the Palm in a serial stream.

    I would just wait about a year, then ask that question again to your physician friends, and see what they whip out of their pockets... :)

    • ask that question again to your physician friends, and see what they whip out of their pockets

      Is that a dictaphone in your pocket or are you just happy to see me?
    • Everyone has gotten so used to the idea that computers will do exactly what we tell them. SR will never be 100% reliable (or even 99%) because of the noisy communication medium: air. Therefore you will always need some handy error-correction protocol (commonly called dialogue).


      Have you ever wondered how well people recognize speech? If something is blurted out at random, we rarely catch the meaning the first time. "What?" If humans have a lot of trouble understanding each other (about a 20% error rate), then computers have no chance when it comes to out-of-the-box, out-of-the-blue dictation. And computers don't have the benefit of a decade of childhood, not to mention millions of years of evolution.


      What I'm getting at is that computers need a great deal of context to succeed (to reduce the number of possible interpretations, and therefore the number of ways of getting it wrong).


      (I'm a speech recognition engineer - our company went bust last year - another dot bomb.)

      1) The algorithms are good (trust me, I've seen them).
      2) The training takes bloody ages - it takes weeks (and terabytes of data) to get good results across most of the speaking population.
      3) Dialogue is very hard.
      4) Actual recognition is fast (we had dozens of simultaneous recognitions on 600MHz machines).

      The take home message: Train the users. Manage expectations. Say bye bye to HAL.
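The "error correction protocol (commonly called dialogue)" idea can be sketched as a simple confidence gate. Everything here is invented for illustration, not from any real product: low-confidence hypotheses trigger a confirmation question instead of being silently committed.

```python
# A recognizer returns a hypothesis plus a confidence score; the dialogue
# layer decides whether to accept it, confirm it, or ask for a repeat.
# The thresholds are made-up values for the sketch.

CONFIRM_THRESHOLD = 0.85
REJECT_THRESHOLD = 0.30

def handle(hypothesis: str, confidence: float):
    if confidence >= CONFIRM_THRESHOLD:
        return ("accept", hypothesis)
    if confidence >= REJECT_THRESHOLD:
        return ("confirm", 'Did you say "%s"?' % hypothesis)
    return ("repeat", "Sorry, please say that again.")

print(handle("open the file", 0.93))
print(handle("open the pile", 0.55))
print(handle("ofenthefyle", 0.12))
```

This is the "train the users, manage expectations" point in code form: the system never pretends to certainty it doesn't have, and the user stays in the correction loop.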
    • > Palm's DragonBall is a RISC architecture, and things like speech recognition NEED floating point math which must be emulated

      The DragonBall is Motorola's, not Palm's. And it's CISC, not RISC; more specifically, an M68K. RISC is usually better than CISC at floating point, but both architectures can go without a floating-point unit, and that's what the DragonBall does.

  • In the late '90s there were 3 major SpeechWreck vendors: IBM, Lernout & Hauspie, and Dragon Systems.

    Microsoft poured a bunch of cash into L&H. L&H eliminated some competition by purchasing Dragon.
    L&H's management did some highly irregular accounting tricks, got themselves thrown in jail, and took the company down with them.

    End result: there is really only one speech recognition vendor at this time, IBM, and they are just useless at marketing consumer products.

    Keep an eye on Philips. They are currently spending big bucks developing their SpeechMagic engine.

    Your other option is to find a copy of Dragon Mobile. Record an audio file on your mobile, then have it recognized on your PC.
  • 1) Create PDA voice recognition software

    2) ????

    3) ???? (not profit!!!)



    Seriously, TRUE voice recognition is only 99% accurate. It is bad enough trying to make corrections on a regular keyboard .. but on a PDA???? That would be rough!

    Why not stick to using your laptop (which has MUCH more processing power) for voice recognition for now? You'll be able to run better software (software that does TRUE voice recognition, not phrase recognition) and have enough memory to run a text editor w/ spell check after you have completed your document.

    This might be a great idea, but I think it might be a little ahead of its time ....

    Just my two cents ...

  • I have been wondering why speech recognition isn't more widely used as well. My conclusion was that there simply isn't enough interest in it. Companies won't make it until consumers are willing to buy it, and the consumers won't buy it until they are convinced it works better (and maybe even then they won't - see M$IE vs. the other browsers).

    As an open-source zealot, I have to point out that Free software would be a solution here, as it is less concerned with profits. IBM seems to have open-sourced some code related to speech recognition, and there are a number of other projects out there, but even for open-source, there has to be sufficient interest in a project, and sufficient could mean _a lot_ in this case.

    I think speech recognition is great, and I would use it if I used Windows. I just haven't found a good solution for XFree86 yet - not that I've looked very hard.
  • It's the battery (Score:4, Insightful)

    by PCM2 ( 4486 ) on Friday November 22, 2002 @02:45PM (#4733665) Homepage
    Palm applications, in particular, are designed around the idea of "forms" -- you put a form up on the screen, and then you sit there waiting for the user to do something. You don't run a constant loop listening to a microphone every minute, because that sucks up the battery like crazy. The Palm programming philosophy says that 99% of the machine's time should be, essentially, idle. Voice recognition, on the other hand, is very processor-intensive -- probably too much so for a pair of AAA's.
  • EARS (Score:5, Informative)

    by david.given ( 6740 ) <dg@cowlark.com> on Friday November 22, 2002 @02:47PM (#4733681) Homepage Journal
    Lo, many years ago I had a lot of luck with EARS [cmu.edu] on my 66MHz 486. It's a very simple discrete trainable recogniser; you have to teach it every word before it will recognise it. But it was fast then, so it should be really fast now, and it was pretty decent for recognising simple commands.
  • by victim ( 30647 ) on Friday November 22, 2002 @02:47PM (#4733682)
    Tiny devices like cell phones and PDAs don't have the CPU power for sexy, high quality voice recognition. They do however have wireless connectivity. So, solve the problem this way...

    Install voice recognition servers, network connected boxes with powerful CPUs and the best voice recognition software you can get your hands on. A voice recognition client then just needs to send the voice data up to the server and get the translation back, say 100kbps up and some tiny amount back.

    The payback comes because most devices will only use voice recognition for brief periods, so will present a negligible load on the servers. The dictation users will place a higher load on the servers, but even there, I'm guessing there is a lot of pausing involved. I'm also going to guess that some lag is acceptable for dictation. Presumably the person is thinking about what they are saying and proof reading later. This load can be prioritized lower to allow better immediate response for people issuing voice commands on their mobile devices.

    Power consumption on the portable device will probably improve. It will have to operate its transmitter (think "talk time" vs. "on time"), but it won't need 5 watts of CPU doing recognition. (Guessing from a mobile G3 PPC; further supported by the fact that the CPU spot of my iBook gets far hotter under solid use than a cellphone does.)

    So, just to pick numbers out of the air, a dual processor, high end commodity hardware voice server might serve 500 pda users giving intermittent commands and 6 simultaneous dictation users.

    A company or school could easily justify the hardware cost of this service.

    Now, someone go out and build one.
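The prioritization described above (commands jump the queue, dictation tolerates lag) is just a priority queue on the server. A toy sketch, with priorities and job shapes invented for illustration:

```python
# Short voice commands are served first; bulk dictation jobs wait their
# turn. The counter breaks ties so jobs of equal priority stay FIFO.

import heapq
import itertools

COMMAND, DICTATION = 0, 1  # lower number = served first
counter = itertools.count()
queue = []

def submit(kind, audio_id):
    heapq.heappush(queue, (kind, next(counter), audio_id))

def next_job():
    return heapq.heappop(queue)[2] if queue else None

submit(DICTATION, "chapter-3")
submit(COMMAND, "open-calendar")
submit(DICTATION, "chapter-4")
print(next_job())  # the command is served before either dictation job
```

A real server would run this dispatch loop across a pool of recognizer processes, but the scheduling idea is exactly this simple: cheap interactive requests preempt patient batch ones.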

  • by ianscot ( 591483 ) on Friday November 22, 2002 @02:51PM (#4733713)
    People have been saying there's no market, here.

    You don't have to be disabled in some way to think this'd be handy, do you? That's the story for this one person, okay. But if you hadn't heard of a PDA ever before, wouldn't this be one of the most likely functions you'd think of for them? It's a totally natural application for a handheld gadget like that, and one that really would have a natural market among all the middle manager types who made Palms so popular to start with. Right?

    (Are there PDAs that can even read text in the other direction, though -- text to speech?)

  • Discrete is passe (Score:2, Informative)

    by outlier ( 64928 )
    Unfortunately for you, discrete speech is seen as passe by the major players (IBM, L&H, MS). For a long time, continuous speech was seen as the major barrier to widespread acceptance of general-purpose dictation software (another barrier was support for large vocabularies). Eventually, processor power and algorithms evolved to the point that both barriers were overcome, and discrete speech (and small vocabs) were left by the wayside.

    One byproduct of this was a decrease in voice error-correction performance -- most verbal corrections are single words (e.g., the user selects the misrecognized word "foo" and repeats the intended word "bar" without any of the coarticulation cues that the continuous recognition engine relies on). The recognition of isolated words by a continuous speech recognizer is inferior to that of a discrete system, yet the major software companies removed the discrete recognition engines from their products. (For more on speech errors, see this [umich.edu] or this [umich.edu] PDF.)

    Anyway, the use of discrete recognition engines has been essentially abandoned by the major players and seems to have been relegated to the specialty shops that cater to disabled users. One outcome is that there is very little innovation related to discrete speech, because it was seen as one of (many) historical barriers to the use of desktop speech reco. I can certainly understand the resistance of the big companies to going back to an "inferior" recognition engine for handheld devices. Most likely, speech reco on the handheld will emerge in a client-server environment, with the speech signal (maybe somewhat processed) being sent from the handheld to a server for recognition and the text being returned to the handheld. We probably won't see a general-purpose speech recognition application (as opposed to a limited-vocab application) that runs solely on a handheld until continuous processing can be done entirely on the device.
  • Maybe approaching voice recognition through air waves is all wrong to start with:

    Bowman: "Hello, HAL? Do you read me, HAL?"
    HAL: "Affirmative, Dave, I read you."
    Bowman: "Open the pod bay doors, HAL."
    HAL: "I'm sorry Dave, I'm afraid I can't do that."
    Bowman: "What's the problem?"
    HAL: "I think you know what the problem is just as well as I do."
    Bowman: "What are you talking about, HAL?"
    HAL: "This mission is too important for me to allow you to jeopardize it."
    Bowman: "I don't know what you're talking about HAL..."
    HAL: "I know you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen."
    Bowman: "Where the hell'd you get that idea, HAL?"
    HAL: "Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move."


  • You could try running this version of VoiceType within PocketDOS on a Handheld PC 2000 or PocketPC machine... Or, you could find an older PDA that has a 486-class processor. Not sure if PocketDOS can handle sound, but it worked great for running Lotus Agenda on the Jornada 720... It's not a DOS shell for WinCE, but an x86 emulator with DOS installed.
  • Speech-to-text? (Score:3, Insightful)

    by thatguywhoiam ( 524290 ) on Friday November 22, 2002 @03:08PM (#4733862)
    I think the question really is one of processing power, and pattern recognition. I have yet to see any truly impressive speech technology beyond what was available on a Mac in 1994.

    The poster's question brings to mind a thought I've had lately, though, on PDAs and smart mobile phones. I've recently 'switched' from a Visor to just using my Sony Ericsson T68 as an organizer. Works great with iSync, etc.

    The Palm-with-phone always made more sense to me than the phone-with-organizer. It seemed that the phone part could change shape - I could stick it in my ear in the form of a headset, with a connector to the Palm. A phone I need to hold up to my head; I can't surf with something held against my head that way.

    However,

    I've realized that I need a phone more and, more importantly, I only enter very small bits of text into the Palm. Furthermore, I spend much more time looking things up than entering things (as I use the Mac to enter data wherever possible).

    This led me to the conclusion -- the one thing we are missing from the organizer/phone landscape, as the poster asked, is some kind of speech-to-text.

    If I could literally hit a button and say "lunch with Dave next Tuesday" and have it enter that as live text... blammo. No more Palm, no more stylus. The phone already listens to voice commands. If it took short notes/appointments, I could literally walk around, call people, make appointments and notes, and not take the thing out of my pocket. Nice dream.

    *sigh*

  • by davids-world.com ( 551216 ) on Friday November 22, 2002 @04:41PM (#4734614) Homepage
    Well, in two or three years you will be able to buy something like that. We're working on it (MIT's Media Lab Europe). [medialabeurope.org]

    Until recently, PDA processors were not good enough, but that is changing rapidly (even though there is, in my view, little use for so much power except language technology).

    The resulting dictation systems will not replace conventional keyboard input for a while, however, as recognition rates are .97-.98 (accuracy), and that's a wrong word in at least every second sentence. In comparison to low-bandwidth input, however (as with a PDA stylus, or as in the author's case, due to a fine-motor dysfunction), voice recognition is very competitive.

    cheers from dublin.
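The ".97-.98 accuracy" figure is easy to sanity-check: if each word is independently right 97-98% of the time, a typical sentence often contains at least one error. (The sentence length here is an assumed average, not a measured figure.)

```python
# Probability that a whole sentence comes out error-free, assuming
# independent per-word accuracy and an average sentence length.

words_per_sentence = 17  # rough assumed average

for acc in (0.97, 0.98):
    p_clean = acc ** words_per_sentence
    print("%.0f%% per-word accuracy -> %.0f%% of sentences error-free"
          % (acc * 100, p_clean * 100))
```

At 97% per-word accuracy, roughly 40% of 17-word sentences contain an error, which matches the "wrong word in at least every second sentence" experience, and shows why small per-word gains matter so much.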

  • by NanoGator ( 522640 ) on Friday November 22, 2002 @07:20PM (#4735892) Homepage Journal
    "Over the years, my computer use has de-evolved to programming, FTP, email (Mozilla), word processing (OpenOffice) and Ricochet."

    I'd say this guy found the magic combination of words to get his article posted on Slashdot. Heh.
  • by rufusdufus ( 450462 ) on Friday November 22, 2002 @09:16PM (#4736555)
    I worked on dictation and dialogue on a PDA prototype at MS several years ago. It was called MiPad and was pretty cool. Well except that it really had to use a wireless network to a computer to get the recognition done.

    There are a couple of reasons why this hasn't hit the market yet:
    1) PDAs really are not powerful enough to do decent recognition. Mainly, they don't have good enough audio input systems for reasonable speech quality. There's also not enough disk space for dictionary storage. And the CPUs are slow and the RAM is too low.

    2) At least at MS, it is not a top priority to make speech work for disabled users. Outrageous, you say? Not so! It turns out that when the speech guys approached the accessibility guys on the subject, they learned that speech recognition is not workable in most cases where accessibility is needed; that is to say, the market for disabled people who cannot use the keyboard but who CAN use speech input is actually quite small. Most people who don't have the motor function to type (or to use some sort of keyed input like Stephen Hawking has) don't have the motor function to speak clearly enough for speech recognition to work. Bottom line: other solutions work better.

  • Not enough CPU (Score:3, Interesting)

    by bluGill ( 862 ) on Friday November 22, 2002 @11:42PM (#4737066)

    Sure, a 386 could do voice recognition, but it required a special card that not only had higher-quality sound inputs but also had some DSPs to do the hard work. When IBM put voice recognition in OS/2, they warned you that a 486 was not enough. (Several people tried it anyway, and it worked only within narrow limits.)

    Emulating a DSP requires a lot of floating-point math. Most PDAs do not have floating point in the CPU because nothing would use it. The few times it is needed, emulation is easy enough, just very slow. No problem, though, because as I said, floating-point math isn't much used.

    Don't forget that PDA CPUs are not designed for speed above all else. They are designed for low power, which means they have to compromise something and require extra CPU cycles to get things done.

    Finally, don't forget power requirements. In normal use the CPU is shut down most of the time, drawing essentially no power. Voice recognition would change that, and your battery life would suffer drastically.
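For what it's worth, the standard workaround for FPU-less PDA chips is fixed-point arithmetic: store values as integers scaled by 2^16 (Q16.16), so a multiply becomes an integer multiply plus a shift, with no floating-point emulation at all. A minimal sketch:

```python
# Q16.16 fixed-point: the low 16 bits are the fractional part.
# All the arithmetic below is pure integer math.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # Integer multiply, then shift back down to keep the Q16.16 scale.
    return (a * b) >> FRAC_BITS

def to_float(x: int) -> float:
    return x / ONE

a, b = to_fixed(1.5), to_fixed(2.25)
print(to_float(fixed_mul(a, b)))  # 1.5 * 2.25 without any FP hardware
```

DSP-style signal processing ports well to this representation, which is why fixed-point implementations of recognition front ends were the usual answer on chips like the DragonBall.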
