AI Taught How To Play Ms. Pac-Man

trogador writes with the news that researchers are working to teach AIs how to play games as an exercise in reinforcement learning. Software constructs have been taught to play games like chess and checkers since the '50s, but the Department of Information Systems at Eötvös University in Hungary is working to adapt that thinking to more modern titles. Besides Ms. Pac-Man, games like Tetris and Baldur's Gate assist these programs in mapping different behaviors onto their artificial test subjects. "Szita and Lorincz chose Ms. Pac-Man for their study because the game enabled them to test a variety of teaching methods. In the original Pac-Man, released in 1979, players must eat dots, avoid being eaten by four ghosts, and score big points by eating flashing ghosts. Therefore, a player's movements depend heavily on the movements of ghosts. However, the ghosts' routes are deterministic, enabling players to find patterns and predict future movements. In Ms. Pac-Man, on the other hand, the ghosts' routes are randomized, so that players can't figure out an optimal action sequence in advance."
This discussion has been archived. No new comments can be posted.


  • Not Really (Score:5, Funny)

    by ilikepi314 ( 1217898 ) on Saturday January 19, 2008 @03:23PM (#22111506)
    It just lied that it could play Ms. Pac-Man so it could get more reward food.
  • by mastermemorex ( 1119537 ) on Saturday January 19, 2008 @03:25PM (#22111532)
    live fast, eat chips, big ones are the best, and avoid the ghosts with ugly faces
  • by jelizondo ( 183861 ) * <jerry.elizondo@g ... m minus language> on Saturday January 19, 2008 @03:25PM (#22111534)

    In Ms. Pac-Man, on the other hand, the ghosts' routes are randomized, so that players can't figure out an optimal action sequence in advance.

    I feel I'm beginning to understand ...

    Perhaps the greatest achievement of AI would be to understand female behavior

  • Bad idea (Score:5, Funny)

    by QuickFox ( 311231 ) on Saturday January 19, 2008 @03:30PM (#22111572)
    As if everybody didn't already waste too much time on games, do we have to teach programs to waste time too?
  • so... (Score:5, Funny)

    by dashslotter ( 1093743 ) on Saturday January 19, 2008 @03:31PM (#22111582) Homepage
    Who's Al?
  • by eyenot ( 102141 ) <eyenot@hotmail.com> on Saturday January 19, 2008 @03:32PM (#22111590) Homepage
    I think most press releases re: AI are misleading. I highly doubt there is anything like "AI" behind the program they have that attempts to solve Ms. Pac-Man. Consider if you wrote an "AI" that started off with what you as a human start off with: the ability to see the screen and understand what the various graphics depict or mean; how to control the pac character; what the basic goals and obstacles are; and a desire to rack up points. An "Artificial Intelligence" (AI) would be able to start with that much and build its skill level as it plays. Presumably it would quickly build a talent that can beat average humans, then most humans, then eventually all humans, since it has faster reflexes and doesn't get tired (or make errors once it's learned). That, I think, would justify a press release "AI learns to play Ms. Pac-Man". However, scripting something that plays the game as well as you can imagine it should be played doesn't seem to be news any more than "scripters automate online game play". I only note this because the article mentioned "teaching" the "AI"; that's not very scientific, considering you're trying to see something learn, and should be maintaining scientific control over the learning process.
    • by bunratty ( 545641 ) on Saturday January 19, 2008 @04:10PM (#22111920)
      AI can be as simple as basic search algorithms such as breadth-first, A*, and minimax. When you play any board game against a machine, that's AI. When you get driving directions from a computer, that's AI. It stands to reason that AI is behind a computer playing Ms. Pac-Man. And in this case, the computer generates playing policies on its own, so it really is learning, improving its performance based on previous experience.
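      (Editor's aside: the "driving directions" case above is just A* search. Below is a minimal, generic textbook sketch on a toy 4-connected grid; the maze, the wall layout, and the Manhattan-distance heuristic are all made up for illustration and have nothing to do with TFA.)

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; cells equal to 1 are walls."""
    def h(p):  # Manhattan distance: an admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
# The wall row forces the route around the right-hand side.
path = astar(maze, (0, 0), (2, 0))
```

      The heuristic only prioritizes the search order; correctness comes from tracking the best-known cost `best_g` per cell, which is the same machinery a driving-directions service runs at much larger scale.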
      • by eyenot ( 102141 )
        Well, suffice it to say, I am simply not of the "camp" that believes "artificial intelligence" should be applied so cleanly and popularly to something that obviously does not do much "learning" on its own, at all.
        • Re: (Score:3, Insightful)

          by bunratty ( 545641 )
          I think you're referring to general or strong AI, which hasn't been developed yet. All we have now is weak AI, which, even when it seems to demonstrate "learning", is really just running mechanical search algorithms and heuristics really, really fast.
          • by eyenot ( 102141 )
            I'm just stating that there's no need for such a general application of the term "AI" so as to render it practically useless (dict. "hyper-realised"). It's akin to the term "nano-technology" being used now to refer to anything very small that's somehow industrially useful but not biological, such as buckytubes and other fullerene structures. A carbon-lattice, curled-up tube -- seen by itself, out of context -- may have come just as easily from natural processes as "technological" (which is how they were dis
            • by bunratty ( 545641 ) on Saturday January 19, 2008 @06:01PM (#22112884)
              But that is indeed how the term has been used for decades. What you describe is taught in AI classes and is described in AI books. It's the only kind of AI we have. As such, the term isn't useless. If you want to refer to original thought by a computer, use the term "strong AI," which hasn't been invented yet.
              • If you want to refer to original thought by a computer, use the term "strong AI," which hasn't been invented yet.

                Invented, no — evolved, yes.

              • Whilst this is indeed a clear case of weak AI, it's not quite as simple as the weak AI vs. strong AI thing. Weak AI in itself can be broken down into different levels; the AI mentioned in this article seems to be just a run-of-the-mill application of symbolic AI.

                Because programs like this are the ones that for some reason make the headlines, they're also the ones that make people think "well, AI is a bit of a let down then really, isn't it", but weak AI goes further than just symbo
      • AI can be as simple as basic search algorithms such as breadth-first, A*, and minimax. When you play any board game against a machine, that's AI.


        I'd argue that. IMHO, it's the heuristics used to evaluate the positions discovered by the search that's the AI part of it.

    • AI seems to be nothing more than trying random outputs and using feedback to reinforce outputs that resulted in success. It's sort of funny: my first recollection of this was in 1962, when a student in my grade school class performed this exercise for a project. A game was played repeatedly with losing moves recorded, developing a chart. Playing from the chart, the game was eventually unbeatable by fellow students. The more things change, the more they stay the same.

      I wro
      • by eyenot ( 102141 )
        Earlier, more ambitious ideals of AI are what I based my criticism of this research on. Granted, there was a great deal of failure, but there was also some slight innovation (such as the tiny "e-life" routines) that went along with perhaps too much fear of returning to older methods or applying anything new back to them.
      • by Unoti ( 731964 )

        If we ever achieve AI it will be with a core of code that can generate modules of code that attempt different strategies, in other words grows a brain as program code and database, not just a matrix recording true - false results from random permutation outputs.

        In that case, the future is already here. You should look into the work of John Koza [wikipedia.org] and others. Their work involves generating code, real computer generated programs, not a matrix of lookup tables. I highly recommend his books, they are eye ope

      • If we ever achieve AI it will be with a core of code that can generate modules of code that attempt different strategies, in other words grows a brain as program code and database, not just a matrix recording true - false results from random permutation outputs.

        I think you're referring to strong AI. Note that what you describe is not sufficient for strong AI. Doug Lenat used a technique like what you describe with Automated Mathematician [wikipedia.org] in 1977, but didn't succeed at doing much even in the limited field o

    • by GroeFaZ ( 850443 )
      I only note this because the article mentioned "teaching" the "AI"; that's not very scientific, considering you're trying to see something learn, and should be maintaining scientific control over the learning process.

      All machine learning methods can be controlled; that's not the problem. The learning models either have parameters that can be retained or changed at will between runs, or they don't have parameters, which means the conditions are always the same, which serves the same purpose. The outcome ca
      • by eyenot ( 102141 )
        I made a response to a later comment about brute forcing the pseudo-random element, and how if the designers had thought that through and simply included the obvious RNG and seeding subroutine, the script could have jumped that hurdle ahead of time and might already have shown the quantification of risk-taking in luring ghosts and timing pills (though I predict that those behaviours simply won't win out over avoiding risk and racking up points through stamina and perfection). I just don't think "AI" is a te
    • by Hado ( 923277 ) on Saturday January 19, 2008 @04:49PM (#22112234)
      I feel I must comment since I am familiar with the AI used in this case: Reinforcement Learning. RL is a method of finding a mapping of states to actions in a setting where rewards can be obtained. The interesting part is that RL algorithms can learn to behave optimally when only very basic information is given. For instance, it should be enough to simply give small rewards for eating the dots and large punishments for being caught by a ghost. There are many theoretical results in the field that also hold in the case of stochastic environments (such as when the ghosts move randomly). In a sense you don't have control over the learning process, at least not in the sense that you control what exactly happens and which actions get tried. However, in the end, theoretically perfect behavior can still be learned. This may take quite some time, but fortunately good behaviors usually emerge much sooner.

      That being said, it is relatively easy to apply these techniques to games such as Ms. Pacman. Much harder problems have already been solved using RL algorithms. What seems missing in the article (though I don't know if this is also the case in the actual research) is comparisons with other RL methods than their own. Though their approach sounds promising and it's nice that they beat some human players, this is not uncommon in games for RL.
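      (Editor's aside: the "cross-entropy" method Szita and Lorincz used is a general-purpose stochastic optimizer, which can tune the weights of a rule-based policy like theirs. Below is a toy sketch of just the optimizer in isolation; the quadratic objective stands in for a real policy-evaluation score, and the population and elite sizes are arbitrary illustrative choices, not the paper's settings.)

```python
import random

def cross_entropy_maximize(score, dim, iters=30, pop=50, elite=10):
    """Cross-entropy method: sample candidate weight vectors from a
    Gaussian, keep the top `elite` scorers, refit the Gaussian to
    those elites, and repeat until the distribution concentrates."""
    mean, std = [0.0] * dim, [1.0] * dim
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=score, reverse=True)
        top = samples[:elite]
        # Refit mean and std to the elite set (floor std to keep exploring).
        mean = [sum(w[i] for w in top) / elite for i in range(dim)]
        std = [max(1e-3,
                   (sum((w[i] - mean[i]) ** 2 for w in top) / elite) ** 0.5)
               for i in range(dim)]
    return mean

random.seed(0)  # deterministic run for the example
# Stand-in objective, peaked at weights (2, -1); in the paper's setting
# this would instead be "average game score using these rule weights".
best = cross_entropy_maximize(
    lambda w: -((w[0] - 2) ** 2 + (w[1] + 1) ** 2), dim=2)
```

      The appeal for games is that the objective can be a noisy black box (just play some games and record the score); no gradients are needed.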
      • by eyenot ( 102141 )
        Conversely, punishments could be forgone since it's only a game, and the script could be left with the in-game punishment provided, which is failing to make it as far and push the limits of the process, and rewards are already provided in-game in the form of the score. I can see how application of "RL" algorithms to the script itself might reveal some things about the application of scripted behaviour to the learning process, but I personally don't feel that this script constitutes AI (see other responses?
    • Re: (Score:2, Informative)

      by Tyir ( 622669 )
      Actually, what you describe is exactly what Reinforcement Learning (RL) is. RL can be considered a subbranch of AI. In RL, an agent starts by knowing nothing about the environment. It explores the environment by taking available actions, in this domain, the actions would be exactly the actions available to the human players. It also has a reward signal R, which is used to train the agent to do the correct thing. Completing the level will probably give a high reward, encountering a ghost will give a negative
      • by eyenot ( 102141 )
        Understood, and I have no clinical knowledge of AI at all, and can only make general assumptions. However, I see what has been made here for Ms. Pac-Man (and for other specific environments) more as a combination between strategics and data-mining. The data to be mined in Ms. Pac-Man would be the predictability of the randomized ghost paths. I haven't looked at the arcade game code, but there is probably some pseudo-RNG involved that is seeded by a timer (if this is correct, in fact the ghosts would behave the
        • this "AI" isn't really learning anything, it's just dealing with missing variables. It can't make any cognitive leaps from the human equivalent of "intuition", it can't re-apply what it's learned (though in this specific case that's probably more due to the restraints of the tiny and simplistic environment), and if I read the article correctly (nor did I read the research paper) it doesn't properly make informed decisions, and all of its actions are entirely predetermined

          That's the only AI we've ever devel

          • by cnettel ( 836611 )

            this "AI" isn't really learning anything, it's just dealing with missing variables. It can't make any cognitive leaps from the human equivalent of "intuition", it can't re-apply what it's learned (though in this specific case that's probably more due to the restraints of the tiny and simplistic environment), and if I read the article correctly (nor did I read the research paper) it doesn't properly make informed decisions, and all of its actions are entirely predetermined

            That's the only AI we've ever developed. As you point out, it's completely incapable of doing anything original. It's called weak AI, as opposed to strong AI [wikipedia.org], which exhibits general intelligence. Strong AI is strictly limited to science fiction at this point.

            All along, we've also seen a shift in specific tasks, where we once thought that they would require strong AI. I would expect machine translation to be one area where larger data sets and only slightly more complex models (which are possible to train, thanks to the larger datasets), might result in the conclusion that good translation actually doesn't require understanding, or that this weak AI, at some level, shows equivalent understanding, even though it still wouldn't be able to practice it generally.

    • The summary was wrong, should read "AI programmed to play Pacman" I agree that AI is overhyped. Now we can debate the definition of "AI" for days but the fact is, this is simple programming. You tell the computer how to do something, and it does it (heh, i know it's not that simple, but the idea is that simple). AI is a fun topic. But ultimately the question of really defining Artificial intelligence is connected to how we define Human thought. In an abstract sense, humans are just programmable meat b
  • Average score of only 8186 (vs. 8064 by humans). Nothing really amazing here; if the AI could soundly trounce the best humans on a regular basis I might be impressed, but I can consistently score above 10000, and I'm not very good. TFA also notes that humans make better decisions on scoring points, while the AI shows some survival ability. Sounds like they need a better Ms. Pacman program.
  • by waveformwafflehouse ( 1221950 ) on Saturday January 19, 2008 @03:51PM (#22111748) Homepage
    So now we're teaching our AI that it's a round, dot hungry trans-gender Miss-Man being chased by ghosts?
  • When the AI manages to play (and beat!) Baldur's Gate, I'll be seriously impressed. Pacman/Tetris simply aren't that exciting.
    • I can imagine it will be able to kill kobolds, but i wonder how will it pick the right dialogue options :)
  • Oblig (Score:5, Funny)

    by MobileTatsu-NJG ( 946591 ) on Saturday January 19, 2008 @03:55PM (#22111782)
    The most interesting development came when the machine suddenly stopped killing ghosts and simply displayed the message: "The only way to win is not to play!"
    • by notnAP ( 846325 )
      nah...

      Much more interesting was the point a few minutes ago, when the researchers watched the AI somehow manage to change the game to Missile Command, at the same time that they noticed outside a massive rocket laun

  • Now we just need one that can play WoW for my friends so they can get their lives back!
  • by roystgnr ( 4015 ) * <roy AT stogners DOT org> on Saturday January 19, 2008 @04:02PM (#22111842) Homepage
    The new AI game playing routines can handle Ms. Pacman, Tetris, and Baldur's Gate. Can their mathematics routines find sums of integers, roots of quadratics, and proofs of Fermat's Last Theorem?
    • No, current AI does not exhibit general intelligence. That would be strong AI. We haven't developed it yet. The article is about weak AI, "the use of software to study or accomplish specific problem solving or reasoning tasks that do not encompass (or in some cases, are completely outside of) the full range of human cognitive abilities."
      • > No, current AI does not exhibit general intelligence. That would be strong AI.

        Whatever passes for "AI" would be better called Artificial Ignorance.

        The "strong AI" you mentioned, I would call Artificial Intelligence, and until Comp Sci and Biologists get a clue what consciousness is, we'll be forever stuck in the Artificial Ignorance mode.

    • The new AI game playing routines can handle Ms. Pacman, Tetris, and Baldur's Gate. Can their mathematics routines find sums of integers, roots of quadratics, and proofs of Fermat's Last Theorem?

      Certainly Baldur's Gate, taken as a three-part epic campaign, is an immeasurably harder problem for AI than the other two; you have to be able to understand a whole lot of English dialogue, for a start. But the core game mechanism is very constrained. It's second-edition Dungeons and Dragons, a standardised rule se

  • Perfect Game? (Score:2, Interesting)

    by Sangui ( 1128165 )
    I've never been a big Ms. Pac-Man player (always preferred the original), but when there's an AI that can pull off a perfect game, then I'll be impressed, like that guy who got a perfect score on Pac-Man without losing a life in the 80's. When the AI can do that, it's done something. Not breaking 10,000 points? Meh.
    • Re: (Score:3, Interesting)

      by greg1104 ( 461138 )
      The first perfect (meaning all the possible points were collected) game of Pac-Man wasn't until 1999 and was played by Billy Mitchell. It took him 17 years of playing to get that good. Here's some background [tripod.com]. That page has one of my favorite quotes about the ill effects of video games:

      Imagine a world in which Billy Mitchell never encountered Pac-Man. Put to good use his sharp mind, excellent hand-eye coordination, incredibly long attention span and his prodigious talent for problem-solving probably would

    • First of all, AI is in a rather basic stage compared to what we tend to expect from it. AI isn't going to be doing anything impressive any time soon, but that doesn't make progress less significant.

      Second of all, getting a perfect score on Pac-Man without losing a life isn't that impressive to me, considering that by learning a handful of patterns you can play a perfect game (as long as you don't fuck up and mis-time a turn or something).

  • I'd almost be more impressed if it could have learned the routes of the ghosts in Pac-Man vs. learning to avoid random movements. The AI in the Ms. Pac-Man game just needs to run away, while to succeed in Pac-Man you need to first realize there are planned routes and then learn them.
  • Other uses (Score:2, Insightful)

    by TheSpengo ( 1148351 )
    This is cool, being able to choose smart moves against a random opponent could have a lot of uses in enemy AI in other games too. The unpredictability of a human opponent has always been an issue when creating realistic AI. It always kind of bugged me that even in new advanced games like Crysis, enemies will sometimes move in the most stupid ways possible. The next generation of FPS AI could use something similar to this.
  • Can the AI play Tic Tac Toe?
  • That's it? Get right out of town!
  • Honest to Jebus, I was writing Netrek bots in 1994 that used a genetic algorithm to self-guide their development, and you don't get more "random" than human opponents. When all those Quake bots hit the scene a couple of years later, it was already old hat as far as I was concerned, and now some Korean MMOs are almost entirely populated by robots. Are people really still getting grants for this?
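    (Editor's aside: the genetic-algorithm loop behind that sort of self-guiding bot really is tiny. Below is a bare-bones sketch on the classic "one-max" toy problem, with tournament selection, one-point crossover, and bitwise mutation. This is a generic illustration, obviously not the Netrek code, and the rates and population sizes are arbitrary.)

```python
import random

def genetic_algorithm(fitness, genome_len, generations=60,
                      pop_size=40, mut_rate=0.02):
    """Bare-bones GA: evolve bitstrings toward higher fitness."""
    random.seed(1)  # deterministic run for the example
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # 2-way tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, genome_len)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < mut_rate)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "One-max": fitness is the number of 1 bits, so the optimum is all ones.
best = genetic_algorithm(sum, genome_len=20)
```

    Swap the one-max fitness for "score achieved in a match" and the bitstring for an encoding of bot behaviour, and you have the self-guided development the grandparent describes.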
    • some Korean MMOs are almost entirely populated by robots

      Keep in mind that MMO "robots" (more typically called "bots") are mostly automated scripts that utilize very specialized record-and-playback functionality combined with techniques for screen analysis, such as recognizing the name of a piece of text used as a navigation marker. These bots exploit the predictable and repetitive nature of MMOs, such as the fact that a particular creature will always spawn in the same location, that a vendor will be in the same place all the time, that the same sequence of ac

      • My point, which was admittedly very badly made, is that as the environments that real humans actually care about have become more complex, that 'successful' (i.e. extant in the wild) 'AIs' have become more primitive. You could view that as lazy devolution, or as honing away the parts that nobody (in the real, funds-delivering) world actually cares about. I guess your experience indicates that it's the latter case.

        Academic AI research still seems to be producing solutions to the problems of the 1980s. T

    • by Sigma 7 ( 266129 )

      When all those Quake bots hit the scene a couple of years later, it was already old hat as far as I was concerned

      IIRC, the Quake bots were severely limited by the platform they were running on. In particular, they couldn't "see" the map directly and had to rely on waypoints. These waypoints took up space in the 600/768 entity limit which made the bots fail if you tried using them on large maps.

      These bots also need to know how maps work - either by seeing players proceed through the map or by having a developer setup waypoints for the bots. In the first case, bots would be confused by complex map structures beyond

  • So we're now creating AIs that are learning how to eat things and we have that run on meat [cnn.com].

    Nope. I don't see any way how this could result in the destruction of the human race!
  • ... between the AI and humans tested playing the game appears to be (last para) that the humans were able to adapt their tactics.

    E.g., they learned to lure ghosts close to Ms. Pac-Man so they would be easier to catch and eat once they became edible.

    I'm sure this tactic could be programmed as a new rule and added at the appropriate position on the AI's 'priority' list.
    But until this 'cross-entropy' learning method (and any other AI learning technique for that) can truly teach the AI to adapt by itself - from it'
  • by Old Wolf ( 56093 ) on Saturday January 19, 2008 @04:34PM (#22112100)
    ...and we've had Angband Borg [phial.com] for some time (which is very impressive!)
    • As I recall, Angband has a game-breaking strategy that requires spending hundreds of thousands of turns on the first level. It's too tedious for most humans, but computers don't get bored. The presence of this strategy makes writing an AI for Angband easier than it is for most other games.
  • Would you like to Play a game?
  • First Ms. Pac-Man... next Skynet.... I'd pull the plug if I were teaching the PC before it's too late.
  • ...that the only difference was that Ms Pac Man had a bow in her hair.
  • Tetris, Ms. Pac-Man, and Baldur's Gate... One of these things just doesn't belong!
  • Thought John Koza had Genetic Programs playing pac man over 15 years ago.
  • There is an xscreensaver hack [jwz.org] that is a pacman game with various level styles. I suspect that the monsters in that are a bit more random in their movement. However, the monsters move slower than pacman, and the pacman currently seems rather stupid, running towards monsters, and just collecting air when there's still plenty of pills to pick up. It would be nice to work on the AI in that, then I'd get a more interesting screensaver to watch.
  • I rocked at this game back in the day.
    Now I've been pwned by a largish calculator.

  • > However, the ghosts' routes are deterministic, enabling players to find patterns and predict future
    > movements. In Ms. Pac-Man, on the other hand, the ghosts' routes are randomized, so that players can't
    > figure out an optimal action sequence in advance.

    How sure are they that this AI hasn't simply learned how the random number generator works, so it CAN predict the ghosts' movement patterns? Unless the random number generator is reseeded at unpredictable and unmanipulable intervals, then
    • So TFA implies the techniques they used are different: that they actually taught the machine how to play the game and gave it a rules-based AI, as opposed to something like genetically evolving a program that executes arbitrary code to map the inputs to the outputs. But I'm not very familiar with these techniques ...
  • Besides Ms. Pac-Man, games like Tetris and Baldur's Gate assist these programs in mapping different behaviors onto their artificial test subjects

    BALDUR'S GATE?!
