Boston Dynamics' Robot Dog Can Now Read Gauges, Spot Spills, and Reason (ieee.org)
Boston Dynamics has integrated a Google DeepMind model into its robotic dog Spot, giving it more autonomous reasoning for industrial inspection tasks like spotting spills and reading gauges. Spot can also now recognize when to call on other AI tools. IEEE Spectrum reports: Boston Dynamics is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is announcing that its quadruped robot Spot is now equipped with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that brings usability and intelligence to complex tasks.
[T]he focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot is now able to autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding what's going on in the environment around it. "Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," Marco da Silva, vice president and general manager of Spot at Boston Dynamics, says in a press release. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously."
You can watch a demo of Spot's new capabilities on YouTube.
Reason (Score:5, Insightful)
I highly fucking doubt that their robot or AI can reason.
Re: (Score:2)
There are no machines that can reason for any practical purpose. (The depth is missing.) Why would an inferior computing platform, of all things, be able to?
Re: (Score:2, Insightful)
There are no machines that can reason for any practical purpose.
Is finding, reporting, and fixing latent bugs in C or C++ code a practical purpose? Because they're doing a damn good job of that.
Re: (Score:2)
The problem with "all" the AI companies today is their CEOs and their sales and marketing departments! They took the easy route of the grift instead of the honest route: "Advanced automation! The path to an AGI future!" But that ship has sailed in the minds of the users now.
Re: (Score:3, Insightful)
There's no magic required to reason.
All something needs to reason is the ability to apply logic to a set of data, and LLMs are perfectly able to apply logic to a set of data.
Re: Reason (Score:1)
Point of order. LLMs are not applying logic to a data set. They are applying statistical probability models to a data set. Not quite the same thing.
Re: (Score:2)
High order approximations of logic can absolutely be implemented stochastically. If they couldn't, you'd have to concede that humans can't logic.
Neurons do not fire reliably. At the base, your neural circuitry is fundamentally probabilistic.
The phrase "LLMs are not applying logic to a data set. They are applying statistical probability models to a data set." is as pointless as saying, "The Universe can be modeled with a Markov Chain."
Re: (Score:1)
Sorry, I didn't mean to twist your panties. I was merely pointing out (pedantically, I admit) that the rules of logic are not identical to the rules of statistics. LLMs (and arguably all current generations of AI models) are inherently statistical systems. This can be pretty easily demonstrated by the fact that all current LLMs can be coaxed into saying things that are illogical. As you point out, you can "approximate" logic and arrive at logical conclusions with these systems, but you are not doing so using a logic engine. At the philosophical level, logic exists outside of our neurons. Its rules are (as far as we accept anything in reality) independent of human existence.
Re: (Score:2)
are inherently statistical systems
So is your brain. Inherently. That was the point.
This can be pretty easily demonstrated by the fact that all current LLMs can be coaxed into saying things that are illogical
Ah, yes. Can't do that with a human.
As you point out you can "approximate" logic and arrive at logical conclusions with these systems
I was actually referring to your brain.
but you are not doing so using a logic engine
lol. Do you feel that your brain has a logic engine that doesn't sit upon a purely probabilistic circuit?
At the philosophical level, logic exists outside of our neurons
Absolutely. Logic exists outside of everything.
Human logic does not exist outside of its neurons. To claim otherwise is dualist absurdity.
Its rules are (as far as we accept anything in reality) independent of human existence.
Indeed they are. In order to make that claim have anything whatsoever to do with anything, you'll have to pivot to your mind also being independent of your neurons.
Re: (Score:2)
It depends on how you define the term. I tend to consider any choice an act of reasoning (including a simple if test). I know that most people have a different definition, but I can rarely get them to define what they mean by the term. I tend to suspect it's an "I know it when I see it" kind of thing.
Re:Reason (Score:5, Interesting)
It can. Reasoning according to the dictionary is "the action of thinking about something in a logical, sensible way" .. or by my own definition it's "identify a situation and make a decision towards a goal based on that situation."
Either way, here's an example of reasoning in a self-driving car: recognizing an object on the highway as an immovable road hazard, and deciding to drive around it instead of hard-braking, because there are no cars in the adjacent lane.
The car was taught, via simulator or training data, that if you don't check for hazards before switching lanes, an accident will occur. It was also taught that hitting an object = bad. It knows the goal is to get to a destination, and it had to weigh the fact that hard braking would get it there later than going around the object, and would also be uncomfortable for the passengers. Therefore, in a situation where it won't hit another car or pedestrian, it chooses to go around rather than hard-brake. That's "reasoning": projecting into the future and making a decision based on the optimal route.
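To make that projection-and-decision loop concrete, here is a toy sketch. Every rule, name, and branch is invented for illustration; a real planner scores continuous costs over predicted trajectories rather than branching on two booleans.
```python
# Toy sketch of the trade-off described above: weigh a hard brake
# against a lane change around a stationary hazard. All rules and
# names here are invented for illustration only.

def choose_maneuver(hazard_is_static: bool, adjacent_lane_clear: bool) -> str:
    if not hazard_is_static:
        return "track_and_yield"   # moving obstacle: different logic entirely
    if adjacent_lane_clear:
        # Projection: small delay, low discomfort, no collision risk.
        return "change_lane"
    # Projection: large delay and discomfort, but no collision.
    return "hard_brake"

print(choose_maneuver(hazard_is_static=True, adjacent_lane_clear=True))   # change_lane
print(choose_maneuver(hazard_is_static=True, adjacent_lane_clear=False))  # hard_brake
```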
Re: (Score:3)
You should send your ideas to Tesla.
Re: (Score:2)
Tesla does that.
Re: (Score:3)
Tesla's problem is that their computer vision system is crap. They mostly rely on it being able to recognize objects, and if it can't (say because a truck turned over and is showing its underside) the car just ploughs into it at full speed.
They seem to have recently added something that can determine when an object is stationary, even if it can't understand what it is. Unfortunately, only newer models of their cars are getting it; the old ones will continue to kill their occupants.
Re: (Score:2, Troll)
Tesla's problem is that their computer vision system is crap. They mostly rely on it being able to recognize objects, and if it can't (say because a truck turned over and is showing its underside) the car just ploughs into it at full speed.
But that's not only the vision system being crap, that's them being crap. (Mostly it's Elon being crap, since he's the one who keeps insisting they do it his way in order to save a few hundred bucks per car, and also to try to prove his shit idea is genius.) They don't put sensors that every other automaker uses on the vehicles, sensors which would reveal the density of an object being approached and reveal that the vision system doesn't know what's happening.
Re: Reason (Score:2)
The vision system works well. I have a Tesla with FSD, it works well; see my other reply. Also humans navigate the world just fine with vision only.
Re: (Score:2)
Also humans navigate the world just fine with vision only.
First of all, are you new? Have you ever driven? If you think humans are just fine, you must be a shit driver with low fucking standards. Second, as bad as they are, humans have brains and cars don't. They can do things with those brains that the cars can't do. Finally, try not to suck Elon Musk's dick on your way through the parking lot.
Re: Reason (Score:2)
I have a Tesla, and the self-driving is amazing. I use it exclusively, and it has driven me thousands of miles, much of it urban with the various scenarios you'll find in a city (like San Francisco; I don't live there but I am there very often). I know people using it in Boston and NYC and other places. I will go so far as saying that FSD itself is already basically perfected. The only issues are that it sometimes misreads speed limits not meant for it, such as "trucks only", and slows down.
Re: (Score:2)
That's the problem. It lulls you into a false sense of security; then, when you aren't paying attention, it fails and you die.
Re: Reason (Score:2)
The stats say it's 10x safer than a human driver. So how is it different than taking an Uber?
Re: (Score:2)
Tesla's own stats, which I don't believe for a second.
And those stats are for managed "Full Self Driving", where a human has to monitor it all the time. Competitors have had actual self driving cars where there is no full time human monitoring, for years.
Re: (Score:2)
The Tesla robotaxi in Austin has been operating unsupervised for months now and I personally have been using (supervised) FSD, with no critical interventions. You can find many on youtube doing the same. Just try it out. Go to your local Tesla dealer, try it. Then come back and comment. If you're in Austin or the SF bay area then call a Robotaxi.
References:
https://www.youtube.com/shorts... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
It can. Reasoning according to the dictionary is "the action of thinking about something in a logical, sensible way" .. or by my own definition it's "identify a situation and make a decision towards a goal based on that situation."
Compilers and assemblers have been reasoning for nearly 80 years, by that definition.
Re: (Score:2)
An LLM has no such programmed constraints, and while it can be argued that all of the reasoning was done before-hand, the same argument could be made for your brain.
Re: (Score:2)
Do you have a counter-argument to the statement that statistical inference is not the same as logical inference ? I fail to see that an NN/LLM can perform logical inference. My understanding of NN/LLM proof systems is that they detect proof patterns in their corpus, but, having identified candidate proofs, submit them to Lean (or some other GOFAI, e.g., Prolog) for verification. Which suggests to me that NN/LLM cannot reason, which I define as predicate calculus. But I'd like to hear your view.
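For concreteness, this is the shape of the handoff being described: the model proposes a candidate proof, and the checker accepts or rejects it on type-checking alone. A minimal Lean 4 example (standard-library material, not anything model-generated):
```lean
-- The verifier, not whatever proposed the proof term, supplies the
-- guarantee: this either type-checks or it does not.
theorem candidate_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```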
Re: (Score:2)
Do you have a counter-argument to the statement that statistical inference is not the same as logical inference
The statement is true, so I offer no counterpoint.
I will, however, ask you a question.
Have you ever seen the results of a logic test described in a non-statistical way?
Am I then to think that humans are unable to perform logical inference? Some of them can just approximate it?
I fail to see that an NN/LLM can perform logical inference.
I fail to see how it isn't obvious that any kind of logical inference is merely an approximation.
Humans are notoriously poor at logical inference. Through much training, they can become quite good, though.
My understanding of NN/LLM proof systems is that they detect proof patterns in their corpus, but, having identified candidate proofs, submit them to Lean (or some other GOFAI, e.g., Prolog) for verification.
They most definitely submi
Re: (Score:2)
| Have you ever seen the results of a logic test described in a non-statistical way?
Yes! Multiple times in my high school geometry class, and thereafter -- many times to my sorrow. Also in sentential logic tests. The proofs/answers that I submitted were either correct or incorrect, and could be verified by deduction or induction.
| I fail to see how it isn't obvious that any kind of logical inference is merely an approximation.
You might take this up with George Boole, he is far more an authority than I. Snark not meant maliciously, I always learn from your commentary.
Re: (Score:2)
Yes!
No- you misunderstood me.
Any individual test may be deterministically evaluated.
But you may get it wrong, and the person next to you may get it right.
Are you capable of reasoning, and them not?
i.e., we can only describe the aggregate human "ability to reason" statistically. It is not a boolean.
You might take this up with George Boole, he is far more an authority than I. Snark not meant maliciously, I always learn from your commentary.
Let me ask you this:
If every token an LLM produces is statistically inferred (a fair claim), does that mean the entire output is a statistical inference?
Now what if I told you that a neuron in your brain can only be accurately modeled statistically? Is it statistical inference all the way up?
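To spell out what "statistically inferred" means mechanically, a toy sketch (invented numbers, not any real model's vocabulary or scores): the model emits logits, softmax turns them into a distribution, and the next token is literally a random draw from it.
```python
import numpy as np

# Toy illustration of per-token statistical inference: a model emits
# scores (logits) over a vocabulary and the next token is sampled from
# the resulting probability distribution. All numbers are invented.
vocab = ["logic", "statistics", "magic", "neurons"]
logits = np.array([2.1, 1.9, -3.0, 0.4])

probs = np.exp(logits) / np.exp(logits).sum()   # softmax
token = np.random.choice(vocab, p=probs)        # the "inference" is a draw
print(dict(zip(vocab, probs.round(3))), "->", token)
```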
Re: (Score:2)
| If every token an LLM produces is statistically inferred (a fair claim), does that mean the entire output is a statistical inference?
Yes. Is the Pythagorean theorem an approximation ? Within the limits of measurement, yes. In geometric logic, no.
| Now what if I told you that a neuron in your brain can only be accurately modeled statistically? Is it statistical inference all the way up?
| Or do we accept that systems that are inherently based on statistical inference (broad definition, not narrow) can, to a high order, approximate logical inference?
Re: (Score:2)
Yes.
Good- we agree here. We're on the same page, logic wise.
Is the Pythagorean theorem an approximation ?
Is it? No. It is not.
Is your implementation of it an approximation? Yes. By any definition you can conjure.
How long will you argue that this will be true ?
Indefinitely, because it is.
Neurons are fundamentally non-deterministic (at least in the sense that we cannot reasonably predict Brownian motion). Only the group of neurons as a whole provide any semblance of order.
This is how an inherently stochastic system provides a high-order approximation of logical inference.
Re: (Score:2)
Making decisions is not the same thing as reasoning, although it's nice when the former is the result of the latter.
Re: (Score:2)
Reason.
There is no master but Master and QT1 is His prophet.
(if anybody remembers)
Re: (Score:2)
Basic If... then ... else
Nothing more.
Is this reasoning?
It can also lie about its capabilities! (Score:4, Informative)
No, it cannot "reason". Stop making that claim.
Re: (Score:2, Flamebait)
I do know how automated deduction works. That is reasoning. Faking it with an LLM is not.
Re: (Score:3)
I think the question of whether the LLMs perform reasoning is an interesting one, so I asked Gemini and here is the answer:
"Large Language Models (LLMs) do not engage in human-like reasoning but simulate it by generating statistically probable text based on learned patterns. While techniques like Chain-of-Thought (CoT) enable them to break down problems into steps and solve complex tasks, this is often pattern matching or "mimicry" rather than true logical deduction, frequently failing on novel or slightly
Re: (Score:2)
Really the question comes down to philosophy.
If you are looking at it from a functionalist perspective, then yes, it reasons.
If the bridge holds, the math was right. Did it do the math, or did it simulate it?
There are 2 camps. Functionalists, and idiots.
The fact that it regurgitated something it was trained to say blindly without applying its ability to reason is really just an example that it reasons as well as you.
Re: (Score:2)
The LLM produced that answer because it was trained to.
So it's as smart as you!
Re: (Score:2)
FFS. You continue to be my best piece of incontrovertible evidence that humans aren't essentially more intelligent than LLMs.
Re: (Score:2)
All humans are trained to reason.
I want you to repeat this to yourself with a straight face.
So by your argument brains don't reason
No, by my argument, they "reason" as much as an LLM.
which is an argument for which I openly admit you do seem to be an example.
Since your "reasoning" completely missed the point, I'd take a look in the mirror before looking anywhere else.
Re: (Score:2)
You're impressively unintelligent, bud.
Why don't you ask Aristotle where maggots come from.
Re: (Score:2)
Nice try, dumbfuck.
Re: (Score:2)
There are interesting counterpoints to functionalism that aren't just "idiocy". The Chinese Room [wikipedia.org] is the example I'm most familiar with offhand. I believe it was proposed in the context of "consciousness" not "reasoning" but I say it's relevant.
To use your bridge analogy: If I ask my "AI" how to build a bridge that spans X, can carry Y load, needs to handle Z wind shear, (and so on, for all relevant parameters), and it provides an answer from a (mind-bogglingly large) lookup table containing instructions
Re: (Score:2)
It presupposes magical brain function. Searle himself admits this when pushed. If you believe in magical brain function, the intuition pump is compelling for you. If you do not, it's obviously absurd.
I agree with your central point: LLMs can reason. But in my opinion, that conclusion relies on more than that their output looks like it. It comes from the fact that, for example, you might conceivably train an LLM on a bridge-building textbook and ask it to tell you how to build a bridge that wasn't explicitly defined in its training set, and get a correct answer.
Your view is that if the model, faced with a novel set of parameters but a concept it was trained to understand (if you'll suffer the non-anthropocentric reading of that word), does the math correctly, it is... simulating it? Or the opposite?
If the opposite, then we're in agreement.
Re: (Score:2)
Your view is that if the model, faced with a novel set of parameters but a concept it was trained to understand (if you'll suffer the non-anthropocentric reading of that word), does the math correctly, it is... simulating it? Or the opposite? If the opposite, then we're in agreement.
The opposite. I use the example of an LLM correctly solving a problem given a novel set of parameters based on its "understanding" of the concepts as an example of something that goes beyond simulation.
So, yes, I think we're in agreement that LLMs can reason. It seems we may still be in slight disagreement about whether the fact that two systems produce functionally identical outputs is sufficient to say that they both are reasoning agents, if we take for granted that one of them is.
I apologize to anyo
Re: (Score:2)
If a human produces the numbers without reasoning or work to go with them, would you say that it has reasoned?
I'd argue that at best you could say, "well, it may have reasoned..." and the same applies to any LLM that didn't show its work.
Re: (Score:2)
If a human produces the numbers without reasoning or work to go with them, would you say that it has reasoned?
No. I included it as a joke, but that's what my anecdote about the engineer and the mathematician was meant to highlight. Even in the context of human behavior, if you get your answer by looking up the input in a table and responding with whatever's in the other column, I wouldn't consider it reasoning.
This of course raises thorny questions: If we add a physicist to my anecdote who simply memorized "A = pi*r^2", are they doing proper "reasoning" when they successfully give the area of a circle? Maybe. If
Re: (Score:2)
I was hinting that the shown work is part of the functional output.
i.e., if we're including so much as hidden state so as to make it impossible to tell if someone is using a lookup table vs. reasoning, then we who are describing the function have erred in where we drew the lines.
Re: (Score:2)
At this point, you are just digging yourself deeper. Recognize you are following a cult. If you do, you still have a pretty good chance to get out.
Oh, and LLMs _cannot_ do math. There are hard proofs of that. Claiming anything else is a direct lie at this time.
Re: (Score:2)
At this point, you are just digging yourself deeper. Recognize you are following a cult. If you do, you still have a pretty good chance to get out.
Wrong answer. You're part of a subset of people that will contort their minds in any way necessary to believe a technology they feel threatened by.
Oh, and LLMs _cannot_ do math. There are hard proofs of that.
Incorrect.
Claiming anything else is a direct lie at this time.
Incorrect.
Only one thing has been proven- and that's that a statistical neural model solves via heuristics, with an increasing probability of it being correct the more it is trained.
This is perfectly analogous to human neuronal math solving abilities.
A statistical network can only ever offer a high level approximation of formal logic- be it human,
Re: (Score:2)
The LLM produced that answer because it was trained to.
The problem is that it does not do so reliably. Tiny details may throw it off. It has no understanding of context, and no notion of hard facts that cannot be ignored and must be taken as true.
Re: (Score:2)
You seem to be completely ignorant as to what "reasoning" actually means. Stacking unreliable steps is not reasoning, regardless of whether some marketing people make different claims, because inaccuracies and hallucination risks multiply, and hence errors grow exponentially with depth. That makes this approach unsuitable for any reasoning task.
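That compounding is easy to make concrete: if each step succeeds with probability p and failures are independent (a simplifying assumption), a chain that needs all n steps succeeds with probability p^n.
```python
# If each reasoning step succeeds with probability p and failures are
# independent (a simplifying assumption), an n-step chain that needs
# every step to be right succeeds with probability p**n.
for p in (0.99, 0.95, 0.90):
    for n in (10, 20, 100):
        print(f"p={p:.2f}, n={n:3d}: chain success ~ {p**n:.5f}")
# p=0.95 with 20 steps is already down to ~0.36; p=0.90 with 100 steps
# is ~0.00003.
```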
Re: (Score:1)
No, it cannot "reason". Stop making that claim.
Eh, if it can simulate "reasoning" well enough for a task, then it may not matter.
Re: (Score:2)
I would love a functional definition of non-simulated reasoning that doesn't involve hand-waving.
Ultimately, the argument really goes that LLMs "simulate" everything, because "it's all just statistics".
From this comes the indefensible assertion that whatever your brain does is "more".
It always boils down to magical brain functionality.
Re: (Score:2)
"It's hard to define, therefore you can't disprove me and I can say anything."
You're just trolling. You're so stupid you can't even have an argument with people.
Re: (Score:2)
That is because you cannot do it yourself. Sorry, no other conclusion is possible from what you post.
Let me break it down a bit: There is non-general intelligence. You can fake that with tables and other preconfiguration mechanisms, because the domain and the set of conclusions are finite and known in advance.
On the other side, there is general intelligence. That one cannot be done with any precomputation. It requires the elusive quality of "insight", and that is why LLMs can (provably) not do it.
In essence,
Re: (Score:2)
True. But they cannot. A typical reasoning chain for anything real is somewhere between 20 and 100 elementary steps, sometimes longer. The fake needs to be very good to get through that.
Re: (Score:2)
I understand that it doesn't reason the same way that you do- with your magical consciousness quantum field that knows things non-computationally- but reason it does, nonetheless.
Re: (Score:2)
You can continue to push lies. But I will not get on board with that. LLMs cannot reason. We have mathematical proof of that. Not that this is in any way a surprise to people that actually understand the mechanisms they are using. Among actual experts, this is not even a debate. The only thing these people look at is why the fake is so convincing.
Re: (Score:2)
No, it cannot "reason". Stop making that claim.
You not understanding the term "reason" and conflating it with "thinking" is the only problem here. "Reason" is literally what algorithms do. It's literally baked into one of the standard definitions of this word. You don't even need to have LLMs or AI to do it.
Re: (Score:2)
"Algorithms do reasoning"? WTF are you smoking? If you have any CS degree, please hand it in, you just proved yourself completely unworthy of it.
That Demo's A Joke (Score:2)
Dancing is cool (Score:2)
They can put the Jabberwockies out of business I suppose? Big whoop. How about showing some dexterity tasks? Like do sculpting or assemble a Lego set? I mean if it can even build something using Lego Duplo blocks I'd be impressed.
Robotics People are ... Aspirational (Score:5, Interesting)
I recently joined a robotics company, and quickly learned that there's a giant divide between the "aspirational" robotics companies, which promise humanoid (or canine) robots, and the practical real-world companies. The humanoids get all the press, while the practical robots rarely make the news at all.
But if you notice, you will almost never see a humanoid robot demo next to an actual human ... because those things are freaking dangerous! Humanoids are still a decade or more away from being able to safely interact with human beings. But, just from all the overhyped robot demo videos you see, you'd think they're all but ready for production.
Meanwhile, practical robots all look nothing like a living creature: they look like your Roomba! In other words, they have a boring/practical form factor, which almost certainly involves wheels to move around. There's some incredibly cool stuff happening with those kinds of robots, and some are being used in the workplace today ... but they don't make for sexy robot demo videos, so few people outside the industry even know they exist.
Re: (Score:2)
That's what made the recent demo in China so impressive. Okay, it was scripted, but they still had dozens of robots doing complex moves that involved being able to self balance, and interacting with humans on the same stage, all done live.
Elon Musk must have shat himself with rage when he saw that.
just one question ? (Score:2)
Why do they require eyes, human or otherwise, on a gauge anyway? Shouldn't that be automated?
Note: I claim no production knowledge
Re: (Score:2)
Maybe that could have been replaced with a digital gauge that reports to a central system, but if it ain't broke...?
But it is broke.
You could park a small camera in front of it, with some machine vision, and have that talk to the mothership. This robot is just that, except it can move from one gauge to another, with a lot more latency and cost.
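For the fixed-camera version, classical machine vision arguably suffices. A rough sketch using OpenCV follows; every calibration constant here (dial range, end-stop angles, dial center, filename) is a placeholder that would be measured for a real installation, and needle-direction ambiguity is glossed over.
```python
import math
import cv2
import numpy as np

# Rough sketch: read an analog needle gauge from a fixed camera.
# Every constant below is a placeholder to be calibrated per gauge.
GAUGE_MIN, GAUGE_MAX = 0.0, 100.0      # e.g. PSI at the two end stops
ANGLE_MIN, ANGLE_MAX = -135.0, 135.0   # needle angle at min/max reading
CENTER = (320, 240)                    # dial center in pixels

img = cv2.imread("gauge.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=60, maxLineGap=5)
assert lines is not None, "no needle candidate found"

# Treat the detected segment reaching farthest from the dial center as
# the needle (direction ambiguity and perspective are ignored here).
def tip_distance(line):
    x1, y1, x2, y2 = line[0]
    return max(math.hypot(x1 - CENTER[0], y1 - CENTER[1]),
               math.hypot(x2 - CENTER[0], y2 - CENTER[1]))

x1, y1, x2, y2 = max(lines, key=tip_distance)[0]
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
frac = (angle - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
print(f"estimated reading: {GAUGE_MIN + frac * (GAUGE_MAX - GAUGE_MIN):.1f}")
```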
Re: (Score:1)
It obviously makes more sense to add sensors than to spend even more on a robot that runs around looking at gauges, getting in the way of the humans those gauges were actually designed for. Using a robot to go look at something a human can reach is stupid; the plant is already a robot, and you can make IT aware of the reading.
It makes sense to use robots to look for leaks in pipes or what have you, not to read pressures.
Can it talk like a Scottish construction worker? (Score:2)
That would be braw.
Yea but... (Score:1)
robot version (Score:2)
AI 'reasoning' also means you can manipulate it. Uh oh, all gauges are in the red and there are spills everywhere.
Re: (Score:2)
AI 'reasoning' also means you can manipulate it.
If you have access to its command-input interface, either you own the system and are expected to be able to manipulate it, or you've somehow obtained unauthorized access, in which case it has a security problem, and it would be an equally serious problem for a non-AI system.
Re: (Score:2)
Obviously, https://kottke.org/25/12/this-... [kottke.org]
The future (Score:2)
is coming [youtube.com].
Pitbull Weapon System Urban Normalization (Score:2)
Black Mirror (2017) Metalhead (Score:2)