Hardware

Best Device For Gesture Based Input?

jotaeleemeese writes: "A few days ago there was a discussion about gesture navigation in the Opera browser, which prompted me to buy Black & White, download Opera, and get the evaluation version of Sensiva. Being a trackball user, I found gesture navigation too cumbersome, and a mouse was not much better. Then I thought a pen-based device or a touchpad could be ideal for this kind of input, but before investing my hard cash in something, I would like opinions from /.ers who have already tried these or other programs using gesture recognition, and what the results have been."
  • by Anonymous Coward

    What about brain gestures from this device?

    http://www.IBVA.com

  • by Anonymous Coward
    Having a gesture-sensitive glove would make it fairly difficult to type, I would think.
  • by Anonymous Coward
    ::Kind of off topic:: I, myself, have been a user and teacher of Alias|Wavefront's 3D applications for a few years now, and really like their user interfaces, particularly their interface for Maya. It has a somewhat steeper learning curve than a web browser, but once you've got it you can fly through the application (after eight hours, though, I start looking for the spacebar/right-click context-sensitive menus in other applications, especially Netscape and Photoshop). Seeing as how I'm an animator, I'm always looking at what other packages offer. I remember playing with Blender (free 3D app for Linux, BeOS, or Windows) [blender.nl] for a while. It had gestures, and that was years ago. I think I remember the gestures being fairly useful, but I don't remember much more. Someone should comment on how wonderful or horrible they are. -Me
  • by Anonymous Coward
    I too have been using Opera religiously since finding the gesture-based navigation. I have everyone who works within 20' of me hooked as well.
    I tried Sensiva, and did find that to be a pain in the rear. It will be interesting to see if gestures can be well implemented across more than just one application.
  • Try PS2Rate [mediaone.net]. It's a little program that allows you to manually set the sampling rate for your mouse.
  • by Anonymous Coward
    Quantel (UK mfg. of very high-end broadcast paint and compositing systems) has been using gestural input on their devices for years. The pen gestures work great for oft-repeated tasks. On a paint system where most of your work is done with a pen anyway, it makes a lot of sense, but if most of your input is through the keyboard it can be very annoying to have to switch constantly from pen to keyboard. If you do get a pen, go for a large tablet: the closer the tablet size is to the size of your screen, the easier it is to coordinate your pen movement with what you see on the screen.
  • by Anonymous Coward
    As opposed to actually typing...
  • by Anonymous Coward
    I use the Wacom Graphire tablet, and have been for several months now, and I love it. I have used the Sensiva software that came with it a little, but it did get a bit cumbersome, mainly because I didn't take the time to learn it properly (the same reason I haven't got a Palm yet: as simple as the new input method is (I know it only takes a minute or two), I just know I won't take the time to learn it). I'm sure there are people out there who love it, but I cannot see myself (or anyone, really) using it with a mouse. I recommend a pen tablet if anything, and the Graphire works great in Win98. I just upgraded to RH 7.1 and it finally works there too, but it is not as configurable, and I can't get it how I had it in the Windows set-up.

    The other thing with the Graphire is that you get a cool cordless mouse to go with it, of which I have only one complaint. It wasn't designed to be able to click either of the outside buttons while rolling the wheel in the middle. I'm thinking about doing some reconstructive surgery with a craft knife so that pushing a button doesn't push down on the wheel and stop it from turning!
  • by Anonymous Coward on Thursday April 26, 2001 @12:01PM (#263140)
    It will let you enter the precise x-y-z coordinates of your gesture.
  • by Anonymous Coward on Thursday April 26, 2001 @12:05PM (#263141)
    I'm playing B&W with a Wacom Intuos tablet/pen combo. Casting miracles/spells with the pen in midair (over the pad) is like waving a magic wand. It is so cool.
  • by Anonymous Coward on Thursday April 26, 2001 @12:12PM (#263142)
    Mice are terrible for gesture based interfaces. A pen-based interface with a touchscreen works the best, like on a palm pilot. Large touch-screens are still out of reach for home PC's, so you're out of luck there. But browsing the web on my laptop with a trackpad (not eraser) makes me think that a trackpad might work well, especially one set to interpret a tap as a click.

    Frankly, though, I think gesture based interfaces are overrated.

  • by Anonymous Coward on Thursday April 26, 2001 @12:49PM (#263143)
    That was fun for a while until my hand wouldn't fit into it anymore.
  • by abischof ( 255 ) <alex@NOSpaM.spamcop.net> on Thursday April 26, 2001 @12:12PM (#263144) Homepage
    If you're looking for a fully gesture-based input system, you may be interested in the KeyBowl [keybowl.com], a no-key no-wrist-movement "keyboard". Contrary to what the pictures might imply, the two domes don't rotate. Rather, letters are formed by sliding the domes [keybowl.com] while keeping the wrists straight.

    I don't own one of these (I have a Kinesis Contoured [kinesis-ergo.com] keyboard, which I'm very pleased with), but I'd be half-tempted to buy a KeyBowl if they weren't almost $400 [keybowl.com] (%$#!).

    Alex Bischoff
    ---
  • I believe the whole concept of gesture-based menus was first pioneered (and put into production) by Alias|Wavefront...

    The cool thing is that gesture-based menus have been part of the Alias|Wavefront products since 1996.

    I don't know what a "gesture-based menu" is, but I was using the gesture (stroke) interface of Mentor Graphics (circuit/system CAD) package back in '93. I'm sure it existed even before then.

    The nice thing about Mentor is that you can do things in many different ways: menus, keystrokes, gestures, scripts and so forth. It's flexible without being cumbersome. Really, it's probably the best computer user experience I've ever had.

    This was with a mouse, BTW, which seemed perfectly natural to me. The middle button initiated a stroke, but I think this was configurable.

    --

  • by jandrese ( 485 ) <kensama@vt.edu> on Thursday April 26, 2001 @12:03PM (#263146) Homepage Journal
    One thing I noticed about Black and White is that if your input device has very low resolution (say your computer is overtaxed and only servicing interrupts every 100ms or so), then the gesture-based input can be a real pain, but when I'm on a fast enough machine (with a good precision mouse) the gestures are easy to perform. The problem with slow input is that when you go around a curve, the mouse may only register at two or three points along the curve, and your software will interpolate that into a straight line between those points. If what you are trying to draw is curved, then there is a good chance the recognition software will get it wrong.

    Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.
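
    The sparse-sampling problem described above can be put in rough numbers. A minimal sketch (the figures are hypothetical, not taken from the game): when a recognizer connects consecutive samples of a circular arc with straight segments, the worst gap between arc and segment is the sagitta of each chord, r * (1 - cos(theta/2)).

```python
import math

def max_chord_deviation(radius, samples):
    """Worst-case gap (the sagitta) between a quarter-circle arc of the
    given radius and the polyline formed by `samples` evenly spaced
    sample points joined with straight segments."""
    # Angle subtended by each chord between consecutive samples.
    theta = (math.pi / 2) / (samples - 1)
    return radius * (1 - math.cos(theta / 2))

# A ~200-pixel curved gesture sampled only 3 times (slow polling) leaves a
# gap of roughly 15 px, so the stroke reads as straight segments; 20 samples
# keep the deviation well under a pixel.
coarse = max_chord_deviation(200, 3)
fine = max_chord_deviation(200, 20)
```

    Under that (simplified) model, the recognizer isn't "wrong" so much as starved: at low sampling rates, the curved stroke it is handed really is a couple of straight lines.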
  • by Caine ( 784 ) on Thursday April 26, 2001 @05:19PM (#263147)
    Right click the application while holding shift. Choose "Run as...", enter the user you want it to run as (administrator in this case), enter password. *Presto*.
  • Wow - that *does* look cool...

    Now...any chance the pad/touchscreen works in any other OS than Windows? If it did, I'd consider buying one - unfortunately I doubt that's the case.

    ** keying email to IBM sales **

  • I recently downloaded Opera 5.11 in all of its gesture glory.

    I must say that I'm really impressed and don't find it difficult at all to use the mouse. It took me a day I'd say to get the perfect feel so that I do the correct gesture with the minimal effort required.

    I think people who don't like mice aren't going to like using the mouse... for gestures or otherwise. So, to anyone who was swayed by the article author: the mouse works fine... just give it a shot.

    And of course a pen will work wonders too. I have one but most people don't (should change, they're awesome!) but I think a glove would let us express the most gestures... heh heh no puns. ;)

    I don't know... maybe I'm just a post-8-bit-NES, Power Glove-dreaming child stuck in this adult body.

    Later
  • Oops. I really need to use the preview button.. hehe

    (five lashes for spreading bad internet grammar, spelling, and bad taste)

    :P
  • Me, too.

    Only problem is when I move the mouse to do something, and then change my mind and move it somewhere else... and end up closing the window. Pisses me off every time. I'm developing the habit of "squiggling" when I change my mind...!

    --
  • Try setting your monitor's color depth and resolution to the ones you use in-game. I couldn't get my B&W to work on Win2000 until I changed the resolution manually.

    Hope that helps,
    ~Squiggle
  • Very few topics get me started. Here's one. The glove showed that people would be willing to use new and different interfaces, blindly. Hence the creation of lightpens, 4D mice, and other devices. The important aspect of the glove was not the way it was used or its shortcomings, but the fact that it provided an altogether new set of inputs. The reason it was unpopular (and gloves continue to be) is that you expect to lose a comfortable and efficient interface to the glove. The glove that comes out with a gesture to activate/deactivate it quickly (be it touching your forefinger to your thumb or a button on a keyboard) will garner significant shelf space at Fry's.

    If it wasn't blatantly apparent by now, console platforms perform well in the market based on two simple factors that have nothing to do with the games or technology (in 90% of the cases): the number of buttons and the design of the controller. These are essentially the concepts I am examining. There is no reason you would need to move your hands/arms in some theoretical 3D shape that could not be represented by a combination of inputs including a 2D gesture and a button or approximate physical location (the upper right corner). If you had a system with so many commands that you had to include 3D gestures (because you ran out of 2D gestures or button combinations), everyone can see that the user wouldn't be able to comfortably remember it all anyway. A scenario that is a little more realistic is a glove (or set of gloves) that you wear while your hands rest above a desk. The key is not which interface is the best, but which is the best way to switch between them.

    Often wrong but never in doubt.
    I am Jack9.
    Everyone knows me.
  • Yeah, I totally love my Intuos 6x8. The whole cordless/batteryless thing is a huge bonus.

    ---
  • I like that; it's a lot like the interface in William Gibson's "Idoru": basically glasses and gloves that plug into your belt CPU, with a 3D GUI environment that the gloves are used to manipulate. THAT is where I would like to see wearable PCs head.
  • by Ravenscall ( 12240 ) on Thursday April 26, 2001 @11:56AM (#263156)
    I think something like the old Nintendo power glove would be great, hit a button, gesture, hit a button to confirm, all of it fingertip controlled. Either that, or a touchpad/screen. What would be more natural for gesture based control than touching the screen and making the gesture?

  • The gesturing feature in B&W is very cool, IMHO. I haven't tried it with a trackball, but I've seen it work with a mouse and pretty well with a touchpad.

    I was fascinated by the gestural interface on the shuttles in Earth: Final Conflict. In many ways it seems to be a very intuitive and natural way to control things. It would be interesting to have a gesture recognition device that would allow us to use our actual hand (or hands) to make gestures without cumbersome things like gloves. It would, of course, have additional advantages: people who know sign language could even use a sophisticated system to type.

    In the meantime, I'll settle for the traditional way of giving people the finger. ;-)
  • I'm all for touching one button instead of moving my arm around in unintuitive motions that just increase RSI risks.
  • There's a large possibility that I just have issues with writing... my handwriting is horrid, so I can imagine what it looks like in the game... I think I may be curving some motions without realizing it too because I keep drawing the gestures quite large on the 12x9 tablet, maybe I should try something a bit smaller. Have you found a way to disable the tip? I find that gliding over the tablet makes me make mistakes that touching the tablet would not (namely slipping and hitting the tip against it)
  • by JDLazarus ( 15077 ) on Thursday April 26, 2001 @12:06PM (#263160) Homepage
    I have a Wacom tablet (12x9) for drawing purposes, and I figured this would work nicely for doing gesture-based motion in B&W... bah... it's worthless for that... I've found that Black & White either has something in it to stop perfect motion from counting (so you can't use pen tablets) or I'm a worse pen-writer than I thought... I mean, a giant "R" should work no matter what, and it's not hard to make a spiral. Basically, I stick with the Trackman Marble. It works. (Oh, and the funky Wacom "4D" mouse doesn't help either... wonder if I could configure its thumbwheel to do nifty easy zooming for me, though... hmm.)

    -Laz
  • by jamesneal ( 15488 ) on Thursday April 26, 2001 @12:12PM (#263161)
    I believe the whole concept of gesture-based menus was first pioneered (and put into production) by Alias|Wavefront, which is designed to be used with a Wacom tablet-- pens work much better than mice for gesturing.

    The idea is that the human brain isn't good at discerning differences between short distances, such as "Move the mouse pointer to the menu bar, click within a .5 inch box, scroll down 2.5 inches to the appropriate menu item and release", however it's quite good at producing and remembering changes in directions. So, for instance, File|Save would be "Up, Left".

    With just two gestures, it's possible to represent over 48 different actions. Add a third gesture, and that number goes to 288. Their research showed that their average subject had no problem remembering four levels deep!

    Gesture interfaces are especially useful as a user-interface for blind people, where it's just not possible to choose items from a menu visually.

    The cool thing is that gesture-based menus have been part of the Alias|Wavefront products since 1996.

    -James
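
    The "Up, Left" idea above can be sketched in a few lines. This is a hypothetical illustration of direction quantization, not Alias|Wavefront's actual recognizer: each mouse delta is snapped to the nearest of 8 compass directions, and a menu path is just the resulting sequence of directions.

```python
import math

# 8 compass directions, counter-clockwise starting at East, matching the
# octants of atan2. (A real marking-menu recognizer would also segment the
# stroke into runs of consistent direction; that step is omitted here.)
DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def segment_direction(dx, dy):
    """Snap a mouse delta to the nearest of 8 compass directions.
    Screen y grows downward, so dy is negated."""
    angle = math.atan2(-dy, dx)                # -pi .. pi
    octant = round(angle / (math.pi / 4)) % 8  # nearest multiple of 45 deg
    return DIRS[octant]

# The File|Save example from the comment, "Up, Left", as two deltas:
stroke = [segment_direction(0, -30), segment_direction(-30, 0)]  # ["N", "W"]
```

    With 8 directions per level, each extra stroke multiplies the number of reachable items, which is why a two- or three-stroke gesture can address so many menu entries.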

  • Funny, to get B&W to work in Windows 2000 for me, all I had to do was "install the damn thing". And it worked, imagine that!

  • In my experience, many games need to be run as administrator in order to run in Win2K. This may be your problem.

    Since there's no real way to run things in "su" in windows, you can use a nice little trick to make it work - rename the executable "setup.exe", and then it will ask you if you want to run it as administrator.

    So. I renamed the B&W executable, and now it asks me for the admin password when I want to run it, and it works fine.
    --
  • by bobdehnhardt ( 18286 ) on Thursday April 26, 2001 @12:26PM (#263164)
    I gesture at my computer constantly. It doesn't increase my productivity or improve my computing, but it does make me feel a whole lot better....
  • by Black Parrot ( 19622 ) on Thursday April 26, 2001 @01:01PM (#263165)
    For surfing p()orn sites and playing Tomb Raider, I have found that a life-size inflatable doll makes the best gesture-based "input" device.

    --
  • I am surprised that I haven't seen anyone suggest the idea that, although this would be a fantastic breakthrough for limited use, we're stepping back not moving forward.

    What prompted the idea for me was the concept that with only two gestures it's possible to represent over 48 different actions.

    Don't get me wrong, let me preface by saying that I believe strongly that evolution is a good thing, but are we trying to do away with the keyboard or what?

    I have watched users very closely over the past 10 years. What I have been very disappointed to notice is people's increased dependence on the mouse.

    When I was a regular Windows user I would push my mouse off to the side once a month and navigate entirely with the keyboard. You'd be amazed at how quickly you learn keyboard shortcuts you didn't even know existed. I'm not saying that I was more productive that day... far from it... but I *was* more productive the other days as I didn't need to keep reaching for my mouse to perform common tasks.

    Today I watch people use their mouse to position the cursor at the end of a highlighted string of text so they can backspace over the entire string. Slightly better, I'll watch someone use their mouse to re-highlight an already highlighted string of text and delete it. Both are indicators to me that most users (present company excluded) just don't get it!

    Yes, it would be pretty cool, and perhaps even quite useful, to use a minimal set of gestures in a browser. Until the introduction of the tab key in IE (and Netscape shortly after) there was no way to navigate in a browser without the mouse and I've hated it. Lynx is the only tool you can use to browse w/o a mouse that I have tried, and it's painful. Otherwise, unless I want to put both hands on the keyboard to go back I'm forced to move my mouse to the top-left corner and click back, or right-click and select back. Either way it's annoying. This is where I think gestures make sense.

    Where I don't think gestures make sense is ordinary tasks like File // Save, Edit // Paste (all hail middle-click), etc. Speaking of middle-clicks, hooray for Netscape's "Open Link in New Window" functionality. Can you imagine the typical user (again, present company excluded) trying to learn these motions, or the simple concept behind it?

    How intuitive do most of you find console gaming? To start with, take *any* of the fighting games for N64, PS(2), or DC... do those key-combinations make the slightest sense to you at all? Sure, the basics do, but what about all the "special moves" you have to learn if you want to win a level? How often do special moves require flawless execution of 6 or directions/buttons? Even THPS2 has too many *assignable* combinations to ever use knowingly for all the different characters.

    Now take that concept and apply it to all the separate applications in the Windows space. Every one of them is going to think their gestures should be different. Look how long it took to reach an acceptable standard for copy-n-paste... vs. ... and it's STILL not a given that either will work unless the software you're using is from the same vendor as the OS.

    Take the concept and apply it to the KDE/Gnome war as well - no one is going to standardize on anything other than what they believe to be the best way to do it.

    In closing, I'm all for limited gestures; it sounds like a really incredible concept. I just hope things don't get carried away to the point that people forget how to use their keyboards even more than they have now. A mouse is a wretched interface, although it provides some very useful assistance. Perhaps certain applications (web browsers for those who truly surf and do nothing productive) will benefit tremendously from this new paradigm, but for people who make a living on a computer there currently is no substitute for the keyboard. Unfortunately those same people are putting a roof over their heads by writing the software that the other 95% of the population uses and can probably continue to use by gestures alone. =(
  • by LL ( 20038 ) on Thursday April 26, 2001 @12:11PM (#263168)
    Essentially the mouse has 2.5 degrees of freedom (relative x,y + a button to activate). A stylus is absolute x,y + tap code (Morse code?). There are other devices which go up to 6 (3D + twist, turn, roll); however, this basically mirrors the major axes of your forearm. Imagine extending your forearm away from your chest, clenching a fist, and moving or rotating the fist. Some other devices (e.g. wands) have been invented to mirror broad gestures, but I don't think the tech is quite there yet to do real-time sign language, which relies on the alignment/spacing of the fingers. The problem is, as always, software applications which can recognise the gestures. Anything with 2.5 degrees of freedom can be adapted to map to mouse-based input, but you are really limiting yourself once you go to higher degrees of freedom. It would be easier to work out the complexity of the system you want to control, then work out the dimensions you can partition it into, then work out the type of device which best suits your needs. Note that even a high-end stylus can add extra levels of complexity such as angle, pressure, acceleration... think calligraphy. The other factor is human support, in that it is tiring to continuously hold up your pinkie all the time. Tricks are to mount the coordinate device on something like glasses or a helmet.

    LL
  • Any experience with getting these working in *nix?

    No, the driver gives new meaning to closed source. You can't even download it - you have to call Logitech to get another copy of the CD sent to you if you lose yours. It's not because it's big, either - the drivers aren't that sizable. Logitech loses points for that one. That's the only driver CD I actually keep.
  • by Brento ( 26177 ) <brento@@@brentozar...com> on Thursday April 26, 2001 @11:59AM (#263170) Homepage
    As a guy who plays Black & White with a Logitech iFeel mouse, I've gotta say your initial take on mice needs to be revisited. Having the mouse kick back when you do something right, wrong, powerful, whatever, that means a lot, and it helps you get used to doing things the right way.

    The only drawback is that it's too tiring for day-to-day use. I usually leave the feedback turned off when surfing the web, for example, because it just beats your wrists to death as you glide over a zillion links. I've got carpal tunnel, and the buzz that it makes when jumping over hyperlinks makes my wrists feel like they've been typing for hours.

    It's remarkably cheap, too - it was $45 on the shelf the last time I looked.
  • It's just the last thing on the specs that makes me doubt that there will be Linux drivers for this:

    Recommendation: "IBM recommends Windows 2000 Professional for business."

    Too bad, it looks great

    ----------------------------------------------
  • Yes, "Graphic Tablet" is nice for fluid motions. There are several technologies, but most look like a tablet with a pen or a puck. Little friction and very sensitive to your motions. Ever try handwriting with a mouse? It's much easier on a tablet with a pen.
  • HOLY SHIT!!! There is NO WAY that stops carpal tunnel syndrome if you're an EMACS user!!!
  • sorry, but ...

    s/want/wont/

    you used the wrong word ;)

  • Just call me Fiddler Crab! :-)

  • It sounds like a nice idea, but using a glove or a touchscreen for hours will kill your arm.


    See gorilla arm [astrian.net]
  • Having both played Black and White with a mouse and done some various artwork and basic navigation with a Wacom tablet 'pen', I think that using a pen for this type of interface would be ideal. It's a fairly accurate device, and it allows for the 'motion' type commands that B&W needs. With a mouse these gestures are a pain in the ass, since they're essentially 'drawing'.

    However, I'd like to add that a device such as a pen is fairly cumbersome for the traditional GUI, since the two are based on entirely different principles.

    -------
    CAIMLAS

  • Now that's the most logical statement I've seen on /. in a loooong time.

    Black & White sucks. Gesture controls suck. Get a fucking clue people.
    --
  • by Velox_SwiftFox ( 57902 ) on Thursday April 26, 2001 @12:43PM (#263179)
    I want a good lips-and-tongue input device. It would take advantage of the deftness anyone who can talk (and even someone lacking vocal cords) has....
  • by joib ( 70841 ) on Thursday April 26, 2001 @12:39PM (#263180)
    Whoa, hold it! Mankind is still a _veeeery_ long way from making something like direct brain control of a computer possible. I looked at that site, and while it certainly seemed impressive, you have to keep in mind that what they did was some really simple things. I've been working a little with MEG (ok, very little...), and I can tell you it ain't simple nor cheap stuff. The equipment I worked with was located underground, in a room magnetically shielded by a couple of layers of mu-metal, cooled by liquid helium (to make the SQUIDs superconducting and to reduce thermal noise), and used gradiometer coils to reduce the effect of external noise. SQUIDs being the most sensitive measurement equipment man has ever built, one could still notice stuff like a bus driving by, even though the nearest road was at about 100m distance. Anyway, the noise was at about 50% of signal amplitude. Of course, a lot of the "noise" was the patient thinking all kinds of other thoughts; it's kind of hard to not think at all. So to make something like brain control of anything reasonably complicated, we need orders of magnitude better measurement equipment, and lots and lots and lots and lots of research into how the brain actually works. I wouldn't count on something like me thinking about my first kiss (or uh, ahem, _that_ first time...:)) and then the mental picture in my head appearing on the screen happening in my lifetime. OTOH, maybe I should be happy about it; could be embarrassing if other people were around...:)

  • If you're talking about Black & White, why use a physical device? A netcam would be awesome. The moving around would be odd, but casting spells would be cool. Maybe a two-camera system could be put together to capture real 3-D movement, something I don't expect to see for quite a while.

    Now if you're talking about 3rd-person shooters, I love the Strategic Commander and a mouse. You physically move the device to move about in the world, and with two programmable buttons under each finger you can easily select weapons, jump, reload, and whatever fills your heart with glee. Then I use the mouse to look/aim the weapons. I also like to use a wireless mouse and keyboard; it's always nice to be able to move the mouse clear across the desk without ever having to pick it up or tug on the cord for slack. Another advantage with the Strategic Commander: I often put the keyboard on top of the computer or monitor. You don't need it, because you can just program one of the buttons on the Commander to do whatever you need, so there's no large clunky keyboard in your way (leaving plenty of room for soda and chips)! Now if they only made a wireless Commander I'd be set!
  • In essence the "gesture recognition" system is tracing out a 2-dimensional shape, with (in the case of Black and White, for instance) accuracy affecting performance. Of course the benefit of it is mostly ease of use.

    With that in mind, a fast, intuitive "pointing" device is best for this purpose. A touchpad allows you to sketch these shapes out with your finger, and in fact Black and White with one (with the accuracy set right!) is great. I imagine a graphics tablet would be great for this purpose as well.

    On a related note, has anyone else here seen the "menu" system in Sacrifice? Instead of it being a dropdown system, it's cross-shaped, with new crosses spawned from each vertex. Very fast, and no need for great accuracy when choosing selections. Couldn't find a screenshot I'm afraid, but it's similar to the "circular" menu systems seen around.

    /fross
  • I run Black & White at work on a Win2k machine and at home on Win98. It runs, not without trouble of course, but it's playable.
    On both systems I use a Razer Boomslang 2000, which of course has a higher refresh rate than you would believe. It really doesn't help that much. What is more important, from what I can tell, is how much time the processor has to spend on handling other things (like graphics) and how much time it spends on getting input from the mouse.
  • Sure it would kill your arm for the first couple of days, but use it for just 15 minutes 3 times a day and see improvement in as little as 2 weeks. This could be the next big exercise infomercial for geeks.
  • by gblues ( 90260 ) on Thursday April 26, 2001 @12:17PM (#263185)
    If you've ever used a Palm for anything more than solitaire, you've probably come into contact with Graffiti. Graffiti is basically a gesture-based means of inputting alphanumeric data. It's a very small step to make the Graffiti gestures perform macros instead of representing a single letter.
  • Pen-based gestures were the one part of the NewtonOS that worked well even on the first machines. To erase something, you scribbled up and down over it. To select something, you circled it. To move something that was selected, you tapped and dragged.
  • You sir, are a fucking idiot. Read his post again.

    echo $email | sed s/[A-Z]//g | rot13
  • by AJGriff ( 94198 ) on Thursday April 26, 2001 @12:29PM (#263188)
    I've been playing Black & White every day since I got it about a month ago, and I think I had this same dilemma when I first started.

    I got the hang of the controls quickly, but eventually I found that the mouse works well for maneuvering around, but was still unreliable when casting miracles. Often the gestures for casting a miracle would have to be slow and deliberate, and even then would take 2-3 tries to work. When you're trying to cast a miracle in a hurry, this is unacceptable.

    I dug out a touchpad I won as a door prize at a conference (which I hadn't opened until then) and gave it a shot. Casting miracles with a touchpad was an incredible improvement; the movement was much more intuitive to do with a finger rather than your whole hand. However, maneuvering in Black & White with a touchpad is a pain, and in some situations it was impossible to use (fighting and other quick point-and-click motions).

    So I got a serial mouse and started using the mouse to play the game and the touchpad to cast miracles. This, IMHO, is the way the game was meant to be played. After a while, you'll find things that work better with the touchpad, such as interacting with your creature, and things that work better with the mouse, i.e. fighting and catching followers to feed to your creature. Give 'em both a shot; it's worth the $50 or so for a touchpad.

  • I am currently using a Dell CPxJ for browsing with Opera. I had the regular drivers installed for the PS2 mouse...they blew for this. I downloaded the Alps drivers. This allowed me to click and drag, right click, everything could be set so I just placed my finger and dragged...all's done.

    I think the key to anything is find something you are comfortable with, and then just make it work. Don't spend a lot of money on something you aren't going to be happy with. And when you do get it, don't half ass it!

  • I saw a doco on theremins today, and this article got me thinking about how a theremin would work as an input device for a PC.
    Extraordinary instrument anyway.
  • The "default" Sensiva gestures are unfortunately ungainly and boring, and thus I've implemented 5 gestures that I use regularly...

    1) Mouse down and to the left, minimize: this SOO beats moving up to the corner of the screen; the trick is to have Sensiva set to be VERY forgiving.

    2) Mouse up and to the right, maximize.

    3) Mouse straight down: close. This is a miracle; their default action is some lengthy thing that I would never ever use, but just doing a very very short down drag (approx 1 cm) is great.

    4) Mouse left and then up, move window to other monitor

    5) Mouse left and down, turn off secondary monitor (with UltraMon, a must for anyone who has multiple monitors)

    Anyway, hopefully this will give you some ideas, and remember to turn off the plugins thing; it just gets confusing and annoying.
  • So instead of just walking down the street with an earbug cellphone, laughing and talking apparently to yourself as people are wont to do these days, you could be stumbling around making inane "Johnny Mnemonic" gestures too? Tres, tres cool...

    Dork.

  • Oh god it's happening. I've been surrounded by bad grammer and spleling four sew lawn that its starting to inflect my own riting skils.
  • I tried a hi-end wacom penpad with Sensiva. No fun at all. It failed to recognise many gestures and obviously expects people to push a mouse all over the available screen surface, rather than just make an elegant little gesture. Pointer: These gesture interfaces can be seen at work in the films of the Xerox work back in the 70s. In fact PARC invented not only the mouse, but also the penpad, I believe. The latest instantiation of Smalltalk, Squeak has a learning gesture recogniser buried in it somewhere, according to an Alan Kay interview I saw somewhere. And, as Philippe once asked, what will we do when the Xerox ideas run out ?
  • I have the Wacom 6x8 and I use it for the gesture part of B&W. I do everything else with the mouse, since it seems more natural to move and manipulate things with the mouse than the pen, but fighting and casting miracles with the Wacom pen is very easy.

    It was like you described at first, though; I think I was doing something weird drawing the gestures with the pen, but after a few successful gestures I can pull them off with no problem and in quick succession. I don't think B&W has anything to stop perfect gestures. Seeing the red trail on the screen certainly helps too..

  • by Dungeon Dweller ( 134014 ) on Thursday April 26, 2001 @12:36PM (#263199)
    Graffiti (handwriting recognition in PDAs) is very much the same as the gesture-based input in these systems.

    Think of the act of dragging your mouse as the act of writing a little scribble to represent a letter that is placed serially along an input stream.

    They are very similar.
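    The similarity is easy to see in code: a unistroke recognizer of the Graffiti sort can be as simple as quantizing the mouse trail into compass directions and looking the resulting string up in a template table. A minimal sketch (the gesture names, directions, and thresholds below are invented for illustration, not taken from any shipping product):

```python
import math

# Quantize a mouse/pen trail into 4-direction "chain codes" and match the
# resulting string against stored templates -- the same basic idea behind
# Graffiti-style unistroke recognition.

def chain_code(points, min_dist=10):
    """Convert a list of (x, y) samples into a direction string like 'SW'."""
    code = []
    px, py = points[0]
    for x, y in points[1:]:
        dx, dy = x - px, y - py
        if math.hypot(dx, dy) < min_dist:
            continue  # ignore jitter below the movement threshold
        if abs(dx) > abs(dy):
            d = "E" if dx > 0 else "W"
        else:
            d = "S" if dy > 0 else "N"  # screen y grows downward
        if not code or code[-1] != d:
            code.append(d)  # only record changes of direction
        px, py = x, y
    return "".join(code)

# Hypothetical gesture table, loosely following the window gestures
# mentioned elsewhere in this thread.
TEMPLATES = {"S": "close", "SW": "minimize", "NE": "maximize"}

def recognize(points):
    return TEMPLATES.get(chain_code(points))
```

    A real recognizer would add diagonal directions and fuzzier matching, but even this much is enough to make strokes fire macros instead of representing single letters.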

  • Gestures are a nice thing to have in a window manager.

    There was a patch to let FVWM2 use gestures, and I once modified wmx to use the same guy's library.

    Pester your friendly neighborhood window manager team for gesture support, it's fun.

    If anyone actually cares, and uses wmx, I probably still have the patch.
  • by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Thursday April 26, 2001 @03:58PM (#263203) Homepage Journal
    I've yet to be able to cast the fireball in B&W, yet I am an avid FPS player and I can kick major (42:1 kill/death ratio) ass in HL, CS, etc. Gestures are just a pain in the ass for people like me who have shitty handwriting to begin with. Come on, if I had good hand-eye coordination (there are 2 types of hand-eye coordination: one is true hand-eye coordination and is related to catching balls and such, I don't have that; the other is pushing a button really fucking fast repetitively to make blood appear on the screen, I have _A LOT_ of that :)) I wouldn't be using a frigging computer, folks! Sheesh. . . . .

    I find that hotkeys are A LOT easier; just put the letter in small print in the lower right-hand corner of the gesture icons in B&W and you would remain uncluttered. . . .

    Shit, 4 out of 5 times I cannot even draw the circle in B&W to cast a shield spell. . . . Damn, I _HATE_ gestures. If it wasn't for the M hotkey and the R hotkey I wouldn't be able to play the game at all without zipping back to my temple to cast spells each time!

    Gestures easier in a browser? Excuse me, WTF? Hmm, let's see now.

    Enter, ooh, enter the data! Nice big key there, easy to hit. Backspace, go back a page. Once again. . . easy to do.

    Also, don't forget that the amount of mouse movement you use for gestures is equal to or GREATER than the distance that you would move your mouse to click on an icon! And if you're just browsing then your icons are all up in the toolbar anyway, right along with your mouse. Hell, just use tab to go between items and enter to enter the data; hell, I've browsed without using the mouse at all. WTF would I want to complicate things even more?
  • A company called Essential Reality [essentialreality.com] has a product in the works that is actually based on their earlier work on the Nintendo Power Glove (if I'm not mistaken). It's called the P5 [essentialreality.com]. I saw a prototype at GDC this year and talked with one of the engineers on the project. It looks very interesting to me.

    If you happen to be going to E3 this year, you can check out their latest version there. The unit is slated to sell for $149, which might really make it a possibility for widespread use/adoption.
    ----
    PointlessGames.com -- Go waste some time.
    MassMOG.com -- Visit the site; Use the word.

  • by Perdo ( 151843 ) on Thursday April 26, 2001 @06:19PM (#263205) Homepage Journal
    For drawing pictures freehand on a 'puter nothing beats it. Pressure sensitive and integrates with Adobe and Corel. Darker, fatter lines when you press hard, lighter thinner lines when you ease up. You can actually sketch with this thing. Has a similar feel to a soft pencil or the spongy tipped ink pens. Put a piece of soft plastic over the tablet to provide a better feel of resistance to pen strokes. Nothing rough though. Anything rough will actually give you the effect of gravestone rubbings. It transfers the grain of the paper you are using to provide resistance directly to the screen. Yes, it is that sensitive.
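    The pressure-to-line-width behavior described above can be sketched in a few lines. The curve and constants here are illustrative guesses, not Wacom's actual driver behavior:

```python
def stroke_width(pressure, min_w=0.5, max_w=8.0, gamma=1.5):
    """Map a normalized tablet pressure reading (0.0-1.0) to a brush
    width in pixels. The gamma curve makes light touches thinner than
    a linear mapping would, mimicking a soft pencil. All numbers here
    are made up for illustration.
    """
    pressure = max(0.0, min(1.0, pressure))  # clamp out-of-range readings
    return min_w + (max_w - min_w) * pressure ** gamma
```

    Painting apps apply a mapping like this per sample, which is why pressing hard gives the darker, fatter lines the poster describes.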
  • by Bloodrage ( 157297 ) on Thursday April 26, 2001 @03:17PM (#263208) Homepage
    I noticed that some people seem to assume that the gestures in B&W are in the plane of the screen. The manual reads that these gestures are 'drawn on the ground'. I've found that gestures are more successful in the game if they are distorted to account for the slope of the terrain. Hence, doing a 'perfect' gesture on a tablet or with a mouse won't work too well, it needs to be put into perspective...
  • by Moosifer ( 168884 ) on Thursday April 26, 2001 @12:41PM (#263209)
  • by fjordboy ( 169716 ) on Thursday April 26, 2001 @11:55AM (#263210) Homepage
    I have used a pen-based system at my school to do drafting and some drawing, and I found it quite a pain. I was able to use the pen for normal mouse-type movements, but the thing really got to be a pain. It eventually started to hurt my wrist, because it wasn't quite the right size and i had to move my fingers around strangely to grip it. So...i don't really like the pens...

    I also use a glidepad occasionally on my father's laptop. I hated it at first, but then I got used to it and I can use that more effectively than a mouse (as long as i avoid finger drag). If you go the glidepad route..make sure you get one that you can click by tapping. :)

  • A whole 5 levels, and WAY too much wood required to do anything.
    Yes, I agree on that point. You need stupid amounts of wood. Add to that the way the 'advisors' tell you useless things like 'we're running out of wood'... but don't tell you which effing village is out of wood. After a while you have stacks of villages, but still get generic messages that don't tell you where to look.
    And how do you become evil? I taught my creature to eat people, I destroy entire villages, I set people on fire, fling them into mountains, sacrifice 'em all over the place, starve them to death and I'm a GOOD God? They got some good weed down at Lionhead, uh-huh.
    I don't know what you're doing, but I'm on the 4th world and my god's hand looks pretty damn evil... lots of icky veins and so on.

    Burn villages, electrocute children, throw around the elderly, get your giant cow to eat the innocent, that sort of thing.

    I wasn't even trying to be evil, those damn villagers just got in the way :-p It's not my fault they're impressed by carnage :-p
  • I've only got experience of a mouse with gesture recognition, so I can't speak for any other device.

    What I have seen is how much the 'refresh rate' of the mouse's position (temporal frequency?) affects the usability of gestures.

    I've bought Black and White, and it has serious issues on Windows 2000. As in it doesn't run at all. Fantastic.

    I've got a triple-boot machine (Slackware/Win98/Win2k), so I'm forced to run B&W in Windows 98 where the update rate of the mouse is pretty appalling.

    Getting B&W to recognise some of the more complex gestures is a pain because the time between updates of mouse position gives the gesture considerably more 'jaggy' edges, making it look less like what you actually did with the mouse.

    Windows 2000 has the refresh rate pretty high, so I'd have thought it's far easier to use gestures successfully on there.

    I've not used the mouse much under Linux; my dedicated Linux box doesn't have a monitor, let alone a mouse, I just use it over ssh or X-Win32, so I don't know if the PS/2 refresh rate has been increased (or is configurable); the last I saw was that it wasn't particularly fast.

    Opera's gestures are fairly simple (so far), not nearly as complex as some of B&W's gestures, so the rate isn't as critical. But, add more complex ones and you will see the difference.
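    One standard fix for a jaggy, under-sampled trail is to resample it into a fixed number of evenly spaced points before the recognizer ever sees it, so a fast stroke at a low report rate and a slow stroke at a high one look alike. A sketch of that pre-processing step (not what Opera or B&W actually do, as far as I know):

```python
import math

def resample(points, n=32):
    """Resample a polyline of (x, y) samples into n evenly spaced points,
    so a gesture recognizer sees the same point density whether the mouse
    reported 10 positions or 100. This is the usual first step in
    template-based stroke recognizers."""
    dists = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)  # desired spacing between output points
    out = [points[0]]
    acc = 0.0
    for (ax, ay), (bx, by), d in zip(points, points[1:], dists):
        # emit interpolated points every `step` units of path length
        while d > 0 and acc + d >= step:
            t = (step - acc) / d
            ax, ay = ax + t * (bx - ax), ay + t * (by - ay)
            out.append((ax, ay))
            d -= step - acc
            acc = 0.0
        acc += d
    while len(out) < n:      # guard against float rounding at the end
        out.append(points[-1])
    return out[:n]
```

    After resampling, even a trail captured at a sluggish PS/2 report rate gives the matcher a smooth, dense stroke to work with.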

    It's not a new technology by any stretch of the imagination (emacs strokes mode anyone?) but it's very useful; even something as simple as Opera's 'back' gesture is so convenient, I wonder 'why didn't they put this in earlier!'.

    Nice one, Mr. Molyneux; he was always the king of games back in the good old days of Atari STs, and now something from his latest game seems to have started a bit of a trend elsewhere in the software business.
  • You need stupid amounts of wood.

    You can get huge quantities of wood by multiple clicking on the Wood miracle as you cast it. This is not a cheat, the game was designed to allow it and there are hidden drawbacks (which I'll leave you to find) but there is NO shortage of wood in B&W.

    After a while you have stacks of villages, but still have generic messages that don't tell you where to look.

    You're falling into a trap which the game sets for you (and is related to the wood issue). One option is to just train your creature to look after the villages for you. It can be done.

    TWW

  • by Archangel Michael ( 180766 ) on Thursday April 26, 2001 @12:42PM (#263215) Journal
    This was an ancient piece of software that took your average Joystick and turned it into a mouse. Buttons = Buttons. Navigation was by moving your Joystick around. Beautiful and elegant. I can only imagine the "enhancements" that could be made with all the new force feedback and related controls on the new joysticks.

    Imagine feeling a "kick" in the joystick to warn you not to click on the "Goat sex" link.
  • I can see the pen gesture. But I think that this is better suited to a 3d interface down the road a bit.

    I get a smile out of the idea of people controlling their computers with the equivalent of wands, magic wands. In terms of a 3d interface it makes sense, complete with custom interfaces with funny symbols on them in a circle around the user.

    ;-)

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • by Yunzil ( 181064 ) on Thursday April 26, 2001 @01:49PM (#263217) Homepage
    BTW, Black and White sucks.

    Nah, it's great.

    A whole 5 levels,

    Which take a long time to complete, at least for me.

    and WAY too much wood required to do anything. If I wanted to do the same task over and over and over again for hours on end...

    Yeah, I think the wood requirements are too high, but you can do this:
    Step 1: Teach creature to cast wood miracle
    Step 2: Attach creature to village store with the compassion leash

    And how do you become evil?

    I dunno, my problem is becoming good; my hand looks all orange and veiny, with sharp fingernails.

  • by psyclone ( 187154 ) on Thursday April 26, 2001 @12:08PM (#263219)
    A Nintendo-style glove was what I was thinking as well... possibly more adapted to current technology (such as electronic gyroscopes to record subtle movement if desired). Basically the virtual-reality-style interface shown in Johnny Mnemonic. (Of course you wouldn't have to hold your hands up; you could just rest them and be limited to more finger movement -- with 'goggles' for a viewport, you could even have a virtual keyboard.)
  • This has some potentially interesting implications for the p0rn industry...

    I can imagine sharing computers will become a little unpopular. :-)
  • Detecting hand gestures, eye movement, body position, etc., can be done through a webcam. Current ones don't have a high refresh rate, so the gestures couldn't include ones with much motion.

    Being able to detect where the user looks would open up a field for new user interface inventions. Being able to detect whether there is a person sitting in front of the computer could be useful too.

    Or reading the user's expression (smiling, laughing, scowling, yawning, etc). Scowl and look at the paper clip to make it go away, rate web comics depending on if you smile or laugh when looking at them. Have the IRC client insert "lol" automatically. Aargh.
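    Detecting "someone is moving in front of the camera" is the easy first step of all this. A crude frame-differencing sketch (threshold invented for illustration; real gesture or expression recognition needs vastly more than this):

```python
import numpy as np

def motion_score(prev_frame, frame, threshold=30):
    """Return the fraction of pixels that changed noticeably between two
    grayscale webcam frames (uint8 arrays of the same shape). A cheap way
    to tell whether anything is moving in front of the computer -- the
    first building block of camera-based input, nothing more."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return float(np.mean(diff > threshold))
```

    A score near 0.0 means a static scene; spikes mean motion, which a gesture system could use to decide when to start analyzing frames more carefully.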

  • I've been playing Black and White a bit, which uses gesture-based control for spells and such. Perhaps it is a poor implementation, but I don't like it at all.

    Gesture-based input is error-prone and can be frustrating. Try playing a multiplayer game of B&W and see how mad you get when you do a gesture incorrectly in the middle of a battle. If only I could have simply pressed a button instead!

    I can see gestures being useful for web browsing (draw a 'B' to go back), but I can also see it being more of a hindrance than it's worth.

  • by 3-State Bit ( 225583 ) on Thursday April 26, 2001 @01:38PM (#263231)
    Our company is developing software for true "gesture recognition". Basically, it takes a number of arbitrary points of view (from higher-quality [not "web"] cameras) and calculates the location of three-space objects from them. The only "set-up" hardware-wise is holding up a calibrator (a scepter-like device) by its handle and pressing a button to mechanically (the mechanics so far are just toy-like; the important aspect of the calibrator is its gradations, a proprietary system serving the purpose of interlocking rulers) turn it 360 degrees a couple of times. It doesn't even matter if you move it while you do it, as long as you don't move it too fast to have distinct, clear frames. As long as there is a line of sight between the cameras and the calibrator, the software will be able to calculate their positions relative to the calibrator. Afterward, our software is able to keep a running matrix of all three-space that is visible to at least two cameras. Using five cameras, it's possible to have more or less a total view (well, a total opaque view) of the three-space in front of your monitor, for instance, and one of the five cameras is only necessary when you happen to be blocking one of the other necessary ones. All this is very processor-intensive, but so far it's very straightforward. Basically, simple trigonometry. We haven't been working on optimization tricks, since our 800 quad xeon test server already does 30 frames per second with five cameras at 800 by 600. So our process looks like this:
    1. Synchronize a "frame" from the point of view of every camera. You must already know their "absolute" positions, which are relative to some zero-point (determined by the original location of the calibrator).
    2. For each pixel that a given camera sees:
      • Assume that you are seeing a pixel at the nearest point that the second camera in your stereo set could also see. To draw a human comparison, bring your finger closer and closer to your eye, until with your other eye it passes the line of your nose and you can't see it anymore. This is the "closest point".
      • Calculate where this point would appear in the other camera, as well as the surrounding blocks of pixels, and see whether it matches what the other camera in the stereo pair actually sees.
      • If it doesn't match, assume that it must be farther than you initially assume. Repeat process.
      • Repeat until you "converge"... i.e., get images where many pixels in the area "line up" as calculated by the assumption that they are at absolute point x,y,z. This process actually is very similar to what your eye does, if you ever notice, when it's scanning for how far away something is. At first it assumes it's close, then keeps looking farther and farther away until the two images are brought together. Your brain is the only thing bringing the two images together! Your eyes are still an inch point five apart, silly. :) In the same way, for each pixel (or rather, group of pixels large enough to identify a small area on an object), our software's "brain" converges the image for various distances until it finds a match.
      • If you cannot find a match, assume that the other camera in the pair is not seeing that particular pixel, either because something near you is blocking the nearest area that the other camera is seeing, or because something near the other camera is blocking the line of sight that goes to what you're seeing, or because it's outside the line of sight of another camera entirely. This last is easiest because you don't even need to scan the pixels you know only one camera sees.
    3. Repeat this process for each stereo pair.
    4. Assemble every picture you have an absolute coordinate from (that a stereo pair can see) into a three-space.
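    A toy version of the depth sweep in step 2, for the simplified case of two rectified (horizontally aligned) cameras, where "assume the nearest point first" becomes "try the largest disparity first" (entirely illustrative, nothing like the multi-camera system described):

```python
import numpy as np

# Toy block-matching depth sweep for ONE pixel, assuming two horizontally
# aligned (rectified) cameras. Patch size and disparity range are arbitrary.

def disparity_sweep(left, right, x, y, patch=3, max_disp=64):
    """For pixel (x, y) of the left image, slide along the same row of the
    right image, starting from the largest disparity (the nearest possible
    point) and working outward, keeping the best-matching patch.
    With rectified cameras, depth = focal_length * baseline / disparity."""
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_err = 0, float("inf")
    for d in range(max_disp, 0, -1):  # nearest guess first, as in step 2
        if x - d - h < 0:
            continue  # candidate patch would fall off the image
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        err = np.sum((ref - cand) ** 2)  # sum of squared differences
        if err < best_err:
            best_err, best_d = err, d
    return best_d
```

    Run per pixel and per stereo pair, this is exactly the processor-intensive part; practical systems vectorize the whole sweep rather than looping like this.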
    Note that I've left out such things as massaging the images from the different cameras in various ways (color, brightness, etc.) to get them close enough to compare, using more or less fuzzy "matches" depending on how much you might expect an object to differ at different angles, and calculating lighting sources based on the calibrator. While these are serious issues, they're really basic math stuff that's well-explored in the field of optical recognition, and it's basically a cut-and-paste of components, and, like I said, a $5,000 server can do thirty frames per second without having any graphics hardware specifically enabled for this stuff. The number of three-space "pixels" it ends up getting varies with conditions, but you can always do well enough to read standard braille that's reasonably close in proximity (1.5 feet) to a stereo pair of cameras. Needless to say, there are more useful applications of this kind of technology than reading braille on your computer screen :). This leads me to the real area we're flinging resources at:
    Developing a gesture recognition system. I did not mean to outline everything I did above, but it really is not involved, and a lot more viable than some people think. Anyway, the interesting thing about the three-space that you develop from the process above is that it is very easily analyzable. Not only do you have a solid "block" of where pixels are, but it's easy to tell the lines that separate, for instance, individual fingers that overlap. In fact, the human brain uses more picture analysis than stereoscopic analysis, and our system is actually more precise than the human brain at finding the exact location of a point two or three feet away relative to a point near it, if you are given no color clues! When looking at a hand, therefore, we can pretty much take the basic shape of a hand and (here is where we get tricky) apply a very fuzzy algorithm for fitting it to the hand that we actually see. It is "fuzzy" almost to the extent of being neural-netty (although we control it very much), since it not only needs to choose between an infinite number of ways that two hands can contort themselves, but also learn the size of individual aspects of it (which changes slightly), and their shape, and for this purpose it also takes into account where the hand "used" to be in the previous frame, how fast it was moving over the previous few frames, and how likely it is to move in a certain way, with respect to speed and with respect to what positions are unnatural. All this is necessary to get 30 frames per second, because we aren't just interested in the "position" of the hand, but its important aspects (the relative bend in each joint). To test, we have another application that is ONLY given the absolute position of the hands and the relative joints we are measuring, and then reconstructs the hands visually.
You can therefore have all three programs running, the stereoscopic analyzer feeding the hand-position recognizer data, and the hand-position recognizer feeding the renderer data, so that your screen shows how the renderer is getting the info about where your hands are. Mostly, however you move your hands will be reflected on the screen, but if you move them very quickly and unusually you can still confuse the hand-position analyzer and get an image that's out of sync with what your hand actually is doing. This is independent of the stereoscopic analyzer, which comes up with the correct data, which if you feed directly to the renderer you see always matches what your hand is doing, at 30 fps.
    So now I've outlined how we get the position of joints, which includes quite a bit of fuzziness. But by far the most fuzziness is not in this, but in the actual "recognition" of a GESTURE. We've already gotten the first-generation information about what a gesture is by spending several hours each in front of a test server set up for it, already equipped with a popular voice command system, and agreeing to surf the web and do various other tasks the voice command system is equipped for (we didn't make that, it's just purchased off the floor somewhere) while also doing the gesture we have set up for each command. So we end up with "sample" gestures to analyze, and have already manually looked at the major indicators and drawn them up and programmed them. The way we did it the first time is very crude, however, eyeing each sample ourselves as we did, but we are now in the process of collecting second-generation information, so that when a user successfully uses a gesture and doesn't complain that it wasn't what he wanted, that particular instance of gesturing gets put into the database of gesturing instances associated with a gesture, and we are developing fuzzy logic to link these gestures more closely and reliably. The gestures make sense for the most part, such as having your right thumb open to the left with your other fingers closed, in a quick leftward motion to go back, or up and with a quick rightward motion to go forward. Stopping is pushing your palm forward toward the screen, closing a window is putting your finger and thumb together and drawing your hand back, as if you're flicking the window away, and refresh is a sweeping gesture with your palm toward you, from bottom left toward top-right (only a small part of the way). The software recognizes a "gesture" because you perform it particularly fast and deliberately, so if you're playing with your hands slowly, it doesn't misrecognize any of these.
    Anyway I'm getting really tired of typing all this, and even though there is much, much, more, I'm just kidding. Wouldn't all this be cool though?

    ~
  • Assuming you have a ps2 mouse, there are "overclocking" programs that will increase the mouse refresh rate for you in win9x. Go look around (sorry no links, I use a usb mouse so I don't know where to find it).
  • Gloves seem somewhat outdated in a strange way. They never caught on, and are a little cumbersome for day-to-day activity (ever tried typing in gloves?). But what about something smaller? MEMS technology is developing better and smaller solutions every day - and accelerometers could be an answer.

    Instead of putting a glove on your hand, how about sticking a small IR/Bluetooth/802.11/whatever MEMS device to the back of your hand, and go for it. Wave your hand around, and have the sensors track the motion in 3D. I'm fairly certain that in time (if not already) the sensors will be sensitive and reliable enough for you to be able to select screen items by "pressing" them.
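    The catch with accelerometer-only tracking is that position comes from integrating acceleration twice, so any sensor bias snowballs. A bare-bones 1-D sketch under the generous assumption of a perfectly calibrated sensor:

```python
def integrate_motion(accels, dt):
    """Dead-reckon velocity and position from acceleration samples
    (m/s^2) taken every dt seconds. Any constant bias in the sensor
    grows linearly in velocity and quadratically in position, which is
    why real systems fuse accelerometers with some other reference."""
    v, x = 0.0, 0.0
    velocities, positions = [], []
    for a in accels:
        v += a * dt  # first integration: velocity
        x += v * dt  # second integration: position
        velocities.append(v)
        positions.append(x)
    return velocities, positions
```

    Even tiny calibration errors make the computed hand position wander within seconds, which is one plausible reason this kind of device has stayed on the drawing board.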

    This would be a wonderful way to go for interfaces (for selection, at least, not text input), as it would work with a device of any size, and is totally portable.

    My 2c on the world of hi-tech.

  • A few PS/2 refresh rate adjusters...

    Mouseadjuster [tweakfiles.com]
    PS/2Rate [tweakfiles.com]

    Those should solve your problems - I've used one, and it really smooths things out.

    -- Chris

  • by spoocr ( 237489 ) on Thursday April 26, 2001 @12:09PM (#263237)
    I use a Wacom [wacom.com] tablet for digital artwork, as well as occasionally using it as an input device, and don't find it to be cumbersome or uncomfortable at all. A mouse is definitely faster for cursor navigation, but the pen is precise. It feels very natural, and you have a great deal more control over the cursor than you do with a mouse.

    Say you have a "Z" gesture (Haven't played B&W or used the new version of Opera yet, so don't know exactly what types of gestures we're talking here) - you could try to do it with a mouse, but it could be difficult. With a tablet, it's as simple as writing a Z as you normally would on paper. Quite simple.

    The tablets aren't too expensive, either. I chose Wacom as they're recognized as the industry leader among artists (Higher pressure sensitivity and more gee-whiz bells and whistles), but most any tablet could work. I got mine (6x8 intuos) for $130, refurbished. No problems with it. It's a serial interface, but that doesn't bother me too much. You can get the smaller "average" version for about $70 or $80, I think, and it'd work just as well. I certainly consider it to be worth the price.

    -- Chris

  • by litheum ( 242650 ) on Thursday April 26, 2001 @11:54AM (#263240)
    well, "to each his own" i guess.... i use opera religiously now and i find myself trying to use gestures in all other programs that i use. i think that it really lets the mouse live up to its capabilities.
  • by Thangorodrim ( 258629 ) on Thursday April 26, 2001 @12:07PM (#263243)
    ...anything that attaches to the head. It seems counter-intuitive, but could you imagine using a helmet or visor to play Quake? I can see the whiplash lawsuits at Id's door now... "Our client, D34thFr0mAB0V3, suffered a fractured sternum while attempting to 'rail-snap' a third party..." -t
  • by matrix29 ( 259235 ) on Friday April 27, 2001 @03:56AM (#263244) Homepage
    So instead of just walking down the street with an earbug cellphone laughing and talking apparently to yourself as people are wont to do these days, you could be stumbling around making inane "Johnny Mnemonic" gestures too? Tres, tres cool...
    Hey, I've got an idea. Rewrite the basic gestures to be Mime gestures. Then we can have people walking by and resizing an invisible box or doing "touching the invisible wall" when moving windows. You could have them pulling an invisible rope to scroll the windows and walking against the wind to move deeper into directory areas and file folder rooms.
    Silly, but cheap fun for children. My perfect concept for user accessories is a Hexadecimal (from Reboot) harlequin mask. If you could attach a bronze cup to one of the interface gloves there could even be an easy way to get free money.
    The only problem is a frightening world of faceless mimes walking around the city streets. That's plenty of nightmare fuel for me.
  • You are failing to make the distinction between feedback and gesture devices. They are separate entities that benefit when combined. Feedback joysticks have been around for some time now, and other feedback devices (the iFeel mouse and others, probably) are on the market now. Hell, N64 controllers have feedback modules. Gesture-based input need not have feedback, but it helps.
  • Same here. Ever since that story last week about Opera I've been hooked. Every now and then I'll find a site that doesn't work with Opera and I have to open it in IE. I then try to Right-Click:Left to go back, and nothing happens... Seriously thinking about registering. It's been a long time since I've found a program so useful.

    For those of you who do use the program, be sure to set it in the options to report as Opera, that way when admins look at the User-Agent logs it will show up properly. Just maybe, we will get people to take Opera seriously when developing sites.

    Now, to something actually on topic. I use a Logitech Trackman Marble+ at home, and a standard MS Mouse at work. I like my Trackball better for doing the mouse gestures, but I generally prefer it over a mouse anyway. I guess to each his own, but I really don't think the input device matters, as long as you have the standard drag motions mastered.
  • I worked at GO for almost four years. Our PenPoint operating system was gesture-centric, and used a pen directly on the screen. For desktop development we used Wacom tablets. I found pens much easier to use than mice. If memory serves there was even a Wacom tablet with an LCD screen behind it, so you could have an external monitor with pen input.
  • by r_j_prahad ( 309298 ) <r_j_prahad@hotm[ ].com ['ail' in gap]> on Thursday April 26, 2001 @12:49PM (#263251)
    I wish the Windoze PC at work would understand the simple gesture I keep making at it. A single extended finger, highlighted against a background of deep blue. I'm going to end up with a repetitive stress injury that's going to be tough to explain to the claims adjustor.
  • by Liquid-Gecka ( 319494 ) on Thursday April 26, 2001 @12:14PM (#263253)
    IBM has a new laptop [ibm.com] that is awesome for gesture navigation. It is large and heavy, but it opens up with a notebook on one side and the laptop/monitor on the other. It has both a normal laptop mouse and a pen mouse. The pen mouse can be used on the screen, or on the pad beside the laptop. It comes with a documentation program that allows you to write/draw into the software itself =) It's _REALLY_ cool... the pen allows you to do gesture-type actions just like you were writing them down!
  • Here is how to disable the tip for B&W:
    1. Start B&W, then minimize it (this is so you can add app-specific settings for B&W).
    2. Go into the control panel.
    3. Add applications (B&W).
    4. Select B&W in the app list.
    5. Select Stylus.
    6. Go to the Tool Buttons tab.
    7. Click near the tip where it says 'Left Click'. A hidden menu will appear.
    8. Select 'ignored'. Click yes to the warning (it may be a good idea to set something else to 'left click').
    9. Click apply.
    Now when B&W is in the foreground, the tip will be ignored. Very few people ever peek at the Wacom control panel because the pen works so well out of the box. Learn to harness its power and it will set you free!
  • by Magumbo ( 414471 ) on Thursday April 26, 2001 @12:23PM (#263259)
    It seems people either love or hate these pen devices. Personally I use a USB 6x8 Wacom tablet with the Intuos pen, and totally love it. It works well under linux, macos, and win2k.

    You can customize how you want it to behave (map the screen to the tablet, or use a mouse-like interface), the pressure sensitivity thresholds, macros for the two buttons, angle behavior, and eraser behavior/sensitivity. On Windows and Mac you can easily set these independently for different programs. Another cool feature is that you can buy multiple pens (which I find pretty comfortable, btw) and have independent settings for each one.

    I'll be the first to admit it does take a while to get used to using one. But after playing around with it for a while I fell in love with it.

    They are a bit costly, but well worth it. Last I heard, Wacom was selling refurbished ones at nice discounts.

    --

  • by 8934tioegkldxf ( 442242 ) on Thursday April 26, 2001 @12:20PM (#263260)
    I find the mouse is excellent for this sort of thing. However, I have a Logitech Mouseman which fits my hand perfectly and I have very high sensitivity set and acceleration turned on. A gesture for me means moving my mouse within an area no bigger than about 1/4" x 1/4". Most people have their mouse sensitivity set way too low.

    The only better device would be a 3D glove, since you could do 3D motions, which gives a much larger domain for your gestures to be in, probably making it both easier to remember them and less likely you'll mess them up. But don't sneeze, or you may delete your root directory.

    BTW, Black and White sucks. A whole 5 levels, and WAY too much wood required to do anything. If I wanted to do the same task over and over and over again for hours on end I'd get a job in a factory and get paid for it. And how do you become evil? I taught my creature to eat people, I destroy entire villages, I set people on fire, fling them into mountains, sacrifice 'em all over the place, starve them to death and I'm a GOOD God? They got some good weed down at Lionhead, uh-huh.
  • by Jim42688 ( 445645 ) on Thursday April 26, 2001 @11:55AM (#263262)
    I use the VersaPad from Interlink Electronics ($69.95). It works VERY well with all these programs, and I strongly suggest getting one. Even if you have a mouse and want to enhance gesture recognition, it is a very good deal.
  • The best gesture package I've ever used was built into Synaptics' Windows 98 driver. (Synaptics makes virtually all touchpads for notebooks/laptops.) It would recognize gestures drawn on the touchpad, no clicking required, and it was forgiving too. It also supports two-finger gestures: touchpads can sense when you touch them with two fingers, which greatly reduces the chance of accidental gestures. Those can become quite annoying, because if one happens, it will most likely happen again and again. Too bad their Windows 2000 drivers don't have this feature, and they don't make their own Linux drivers either. In Opera, gestures feel primitive, but it -is- a good start. For one thing, you have to perform the direction controls exactly. To do a "back" gesture, you're supposed to click, move the mouse left, and let go; but most right-handed people moving the mouse left will also drift slightly downward. Opera does not understand this, so sometimes I accidentally minimize the window (left-down) instead of going back.
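  • The strict matching the parent describes could be relaxed with a simple angle-tolerance check: quantize the drag vector to the nearest axis only if it falls within a cone around that axis, and ignore anything ambiguous. A minimal sketch (not Opera's actual code; all names are hypothetical):

    ```python
    import math

    # Map a mouse-drag vector (dx, dy) to a gesture direction, accepting
    # drift up to `tolerance_deg` either side of each axis. dy grows
    # downward, as in typical screen coordinates.
    def classify_gesture(dx, dy, tolerance_deg=30):
        angle = math.degrees(math.atan2(-dy, dx)) % 360  # 0=right, 90=up
        directions = {"right": 0, "up": 90, "left": 180, "down": 270}
        for name, center in directions.items():
            diff = min(abs(angle - center), 360 - abs(angle - center))
            if diff <= tolerance_deg:
                return name
        return None  # ambiguous drag: ignore rather than misfire

    # A leftward drag with slight downward drift still reads as "back":
    print(classify_gesture(-100, 20))  # -> left
    ```

    With a 30-degree cone, the left-and-slightly-down drag that trips up Opera classifies as "left", while a true 45-degree diagonal is rejected instead of being guessed at.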
