

OpenAI Has Discussed Making a Humanoid Robot, Report Says (theinformation.com) 28
An anonymous reader shares a report: Over the past year, OpenAI has dropped not-so-subtle hints about its revived interest in robotics: investing in startups developing hardware and software for robots such as Figure and Physical Intelligence and rebooting its internal robotics software team, which it had disbanded four years ago.
Now, OpenAI could be taking that interest to the next level. The company has recently considered developing a humanoid robot, according to two people with direct knowledge of the discussions. As a refresher, humanoid robots typically have two arms and two legs, distinguishing them from typical robots in a warehouse or factory that might have a single arm repeatedly performing the same task on an assembly line. Developers of humanoid robots think it will be easier for them to handle tasks in the physical world -- which is tailored to humans -- than it would be to change our physical environments to suit new robots.
Oh I so wish they did that (Score:3)
and one of their creepy valley machines fucks up bad, hurts someone and finally turns society full luddite against that damn job-killing technology.
Hello, I'm Sam Waterston (Score:2)
Re: (Score:2)
Robots have killed and injured people many times.
The same was said about self-driving cars: As soon as someone dies, they'll be banned. Well, SDCs have now killed dozens, and no ban.
We don't expect automation to be perfect.
Re: (Score:2)
Re: (Score:2)
just equip them with firearms and it's covered by the 2nd
Sorta. Robotic weapons are legal as long as a human is in the loop to make the "kill" decision.
Fully autonomous lethal weapons are not legal in America.
Re: (Score:2)
Sorta. Robotic weapons are legal as long as a human is in the loop to make the "kill" decision.
Fully autonomous lethal weapons are not legal in America.
Not legal for you and me, maybe. It's perfectly legal for the US military to use them-- they've openly stated that they are in the process of developing them-- and once they've been widely adopted by the military, LEOs won't be far behind.
PornBots arrive via robotaxis (Score:1)
Customer: "But I like my porn extra rough! Hurts so good, right Mellencamp?"
Re: (Score:2)
Not sure if you're serious about this, but I'm waiting for this to happen. I think some form of modern Luddite-ism is coming. The real question is what form it will take.
I don't think it will look like Dune's "Butlerian Jihad", and I don't even think it will look like the original Luddite movement. There is ultimately no point in trying to smash up the machines you don't like, because destroying a machine is empty symbolism-- there are 8 billion people on the planet, and most of them are not going to do
Re: Oh I so wish they did that (Score:2)
Re: (Score:2)
Anti-intellectualism is certainly a thing that exists in America (there were books written about it in the 1960s), but intellectuals are also a thing, one doesn't exist without the other. And among intellectuals, that's where you will find members of the neo-Luddite communities.
The biggest problem with AI technology isn't just the economic impact (terrible though it is). The biggest problem is that it sets the stage for certain human abilities to wither away, and specifically the abilities that intellectu
Re: (Score:2)
Re: (Score:2)
OpenAI has always done way more than just LLMs.
Re: (Score:1)
Elon?
What does she look like? (Score:2)
Re: (Score:1)
Yo Mama
Re: (Score:2)
Harmless (Score:3)
Don't worry. They won't be able to do any harm. They'll have six twisted fingers on one hand, and another nine on the other foot.
Company with no hardware experience... (Score:3)
Is going to spend a ton of effort, money, and time on something NOBODY ELSE CAN DO RIGHT EITHER.
"AI seems to be petering out. We can't do what we said. I know! Let's claim we're making a robot!"
At absolute best they should buy the hardware from someone else and apply AI/SW on top.
Re: (Score:2)
That's probably the best approach. Training is going to be difficult, though. There's nothing really equivalent to the web's text database.
Perhaps they should start with a robot companion (not a servant). That way it wouldn't be expected to do anything it didn't feel like. But for that to work, the hardware would need to be extremely cheap. I'm thinking of the robot "fur seals" that were used in some hospitals for the aged (in Japan, IIRC). They were already helpful just being cute and active, if they
Re: (Score:2)
That's probably the best approach. Training is going to be difficult, though. There's nothing really equivalent to the web's text database.
By using simulated physical environments and trial and error, with many (many) repetitions.
Google's DeepMind has taken this approach to AI from the beginning. For example, this was already being done back in 2016: https://deepmind.google/discov... [deepmind.google]
After that, they had similar approach with more and more complex games and by 2019 they already mastered StarCraft: https://deepmind.google/discov... [deepmind.google]
Neural networks are SO much more than just LLMs.
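The trial-and-error idea the comment describes is just reinforcement learning in a simulated environment. Here's a minimal sketch using tabular Q-learning on a toy one-dimensional "corridor" world (this is an illustrative example, not DeepMind's or OpenAI's actual setup; the environment, reward values, and hyperparameters are all made up for the demo):

```python
import random

# Toy simulated environment: agent starts at position 0, goal at position 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 only on reaching the goal.
GOAL = 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL  # (next state, reward, episode done?)

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: one row per state, one column per action.
    q = [[0.0, 0.0] for _ in range(GOAL + 1)]
    for _ in range(episodes):  # many (many) repetitions
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Standard Q-learning update toward the bootstrapped target.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

q = train()
# Greedy policy for the non-terminal states: 1 means "move right".
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

After a few hundred simulated episodes the agent learns to always move toward the goal. Real robotics work replaces the Q-table with a neural network and the corridor with a physics simulator, but the learning loop is the same shape.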
Re: (Score:2)
Re: (Score:2)
The problem is that simulations only apply those situations you foresaw. Working out in the physical world will reveal situations you didn't expect. That's why starting simple/cheap and growing from there is better.
The problem with this approach, of course, is "privacy!!". The robot needs to learn from the events that it encounters, and share that learning with the developers in order for this to lead to "next generation improvements". So cheap if you allow recording of what it learns, and otherwise qui
Sam Altman may have used a toilet (Score:2)
WSJ reports it.
SAN FRANCISCO—In the world of technology, where innovation is the currency of success, even the most mundane activities can become a testament to human progress. This narrative unfolded when Sam Altman, CEO of OpenAI, encountered what might be considered the epitome of bathroom technology during a visit to a Silicon Valley startup. On a brisk Thursday afternoon, Altman, known for his visionary approach to artificial intelligence, found himself at the headquarters of a company specializi
jFC (Score:2)
Don't worry... (Score:2)