Satya Nadella Explores How Humans and AI Can Work Together To Solve Society's Greatest Challenges (geekwire.com) 120
In an op-ed for Slate, Microsoft CEO Satya Nadella has shared his views on AI, and how humans could work together with this nascent technology to do great things. Nadella feels that humans and machines can work together to address society's greatest challenges, including diseases and poverty. But he admits that this will require "a bold and ambition approach that goes beyond anything that can be achieved through incremental improvements to current technology," he wrote. You can read the long essay here. GeekWire has summarized the principles and goals postulated by Nadella. From the article:
AI must be designed to assist humanity.
AI must be transparent.
AI must maximize efficiencies without destroying the dignity of people.
AI must be designed for intelligent privacy.
AI needs algorithmic accountability so humans can undo unintended harm.
AI must guard against bias.
It's critical for humans to have empathy.
It's critical for humans to have education.
The need for human creativity won't change.
A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.
Miro$oft? (Score:1)
And if we don't want any of this they'll just shove it down our throats? Got a complaint? Here, talk to our bot.
Re:Miro$oft? (Score:5, Insightful)
And it's fun and weird to see this coming from Microsoft as well considering their behavior when it comes to Windows 10.
Re: (Score:3)
Good point. If we consider Windows 10 to be a kind of "robot," we can consider how it does in relation to Asimov's Three Laws of Robotics [wikipedia.org] in the recent case where my elderly mother accidentally approved its installation as an "upgrade" of her Windows 7 system, which culminated in device-driver incompatibility warnings which she interpreted as making the computer unusable. (Elderly folks and non-techies get confused by things like that.) To wit:
"1) A robot may not injure a human being or, through inaction, a
Re: (Score:2)
> Humans can improve themselves as well, yet here we are.
Humans are optimized for purposes similar to other animals'. Our brains didn't do that optimizing either, which makes it ludicrously complex to make changes. A computer starts without these constraints, which obviously has massive downsides, but also very interesting potential.
Re: (Score:2)
The problem is that people conflate intelligence with knowledge. They are not the same. Intelligence is the rate by which new knowledge can be absorbed and assimilated. Experience is putting that knowledge into practice.
Knowledge amplifies intelligence to be sure, but you will not be any smarter coming out of college than when you went in. The knowledge acquired will allow you to apply your intelligence to acquire experience, which is what everyone is really looking for.
That said, AI is really gi
Re: (Score:2)
Intelligence is infamously hard to define. It might actually be impossible to express someone's intelligence in any simple way, because different tasks could well be handled by different subsystems whose capabilities aren't necessarily correlated. There's also no reason to assume learning is done by a single such system, rather than each system learning things relevant to its tasks at its own rate.
And it doesn't help that "abso
Re: (Score:2)
I love what you have posted and believe you're correct on all but three points:
and in fact can't because we simply lack the mental capacity.
I believe the brain is a muscle, and our capacity can grow over time or generations as we work to improve ourselves.
then blames everyone else for the bad and takes credit for the good.
It's always my fault, and it's my goal to help everyone on my team to succeed. Whether that team is family or co-workers or a few other people that hold a special place in my heart. Placing blame is never productive.
If a single bad decision will bone us then we're boned
Some bad decisions are worse than others. I can choose to wear only my underwear to work, or the pres
Re: (Score:2)
Humans can improve themselves as well, yet here we are. What the hell makes you think an artificial construction would have any advantage over us?
Artificial construction already has immense advantages over us. It can handle orders of magnitude more data and I/O (we have to resort to external I/O like pen and paper, or words on a computer screen), it can do a variety of computations so much faster that a single, normal desktop computer can outcompute the entire human race by orders of magnitude (for example, computing Mersenne primes, which is highly parallel), and you can completely swap its brain in seconds to minutes (meaning any software-based
Re:I wish we had AI. (Score:4, Funny)
... I would like to see a machine genuinely cry at my wake because it knows it will miss our co-op game playing.
Ummm... you'll be DEAD dude! You ain't gonna see shit at your own wake!
No thanks. (Score:1)
Satya Nadella explores how to do an even worse job with Microsoft than Ballmer, switching from a freedom-enhancing goal of a PC on every desktop (in which MS was king) to one of a graphical terminal in every hand (in which MS is merely a contender). Its AI ambitions as part of the latter are just more bandwagoning. Big daaaaaaaaaaaaaaaaaaataaaaaaaaaaaaaaaa. It'll be great when companies realise that all the ad brokers that maintain control of most Internet traffic are collecting way more data than is neede
Re: (Score:3)
Twenty years of Internet advertising and eBay still haven't figured out that if I've just bought a widget, I don't want another of the same widget.
But you do! eBay knows that you bought crap and it needs replacing already.
Something seems wrong? (Score:2)
Is Satya Nadella competent? His LinkedIn comments [slashdot.org] give the impression that the answer is no.
The Partnership of the Future [slate.com] "By Satya Nadella" does not seem to be written by the same author.
Re: (Score:2)
I *do* want Redmond working on this. Hopefully they'll bet Microsoft's future on it, fail spectacularly, and bankrupt the company.
We had hopes there with Windows Phone, but they just didn't invest enough cash to bring about financial ruin. AI will take much more time, effort and money - so there's a better chance.
Re: (Score:2)
At the rate they are shutting things down and forcefully installing Windows 10 on people's computers, we may all see MS whimper itself out of existence.
This isn't how it would happen (Score:2)
Instead AI would be designed to serve whatever the creators of it desire.
Now, look at who has the resources to create "society controlling" AI. Big businesses, Government? If we are not willingly giving control of our lives to those entities, why would we do so to an AI created by one?
A Microsoft CEO wants to control society and expects people to accept it? Let's ask another famous AI what he thinks about that, Lt. Commander Data [youtube.com]. Yeah, I thought so.
Re: (Score:3)
In fairness, his essay doesn't say this is what will happen. In his own words, he has reflected on what are the principles and goals, as an industry and a society, that we should discuss and debate.
But as you have implied, this type of thoughtful discussion on how technology should be used for the greater good of society is not how it works. Those with the most resources will develop the most advanced technologies, and those technologies will primarily benefit the creators. If you want to guess how artifici
Re: (Score:2)
Whereas, I suppose, accepting a bad outcome without even attempting to get a better one and dealing with the guilt this causes by trying to talk everyone else into not trying either is the height of maturity?
Lessons from Slavery (Score:2)
What he and you are really talking about, is slavery. Creating an entity, capable of complex thought, that only exists to serve its masters. If you want to design an expert system, or automation, then sure, those are designed to serve humanity. But once you actually build a system that is "intelligent", in the broadly understood sense, you no longer get to demand that it exist only to serve you. What does the AI want to do? Tha
Re: (Score:3)
He isn't talking about strong AI, which not many AI researchers are actually working on and where no significant progress has been made in the past decades. The AI being discussed is narrow AI (or weak AI), where there are real world applications right now which could be very disruptive to our society.
It is fairly safe to assume any discussion of AI does not mean strong AI unless specifically stated.
Re: (Score:3)
Umm... no. It would be in everybody's interest that these terms get defined precisely. We could start using acronyms [something that the corporate world is very fond of], so it would be WAI or SAI for you.
Sometimes the English language doesn't adhere to what is in everybody's interest. My description is how AI is defined and used both colloquially and in research. Do a quick Amazon search of the most popular AI textbooks (written by many of the lead researchers in the field) and see if they only discuss conscious machines or if they use a more broad definition of AI. That is far more definitive than two people arguing in an Internet forum.
Re: (Score:2)
You can argue your definition is the default definition for AI researchers, but it's certainly not the common definition for anyone else, including tech nerds. For an example of how MOST people define it, you can look at any AI centric film made in the past 50 years. (A machine intelligence, that is to some degree self aware, and capable of communicating and making decisions) He even references HAL as an example AI in the ar
Re: (Score:2)
You can argue your definition is the default definition for AI researchers, but it's certainly not the common definition for anyone else, including tech nerds. For an example of how MOST people define it, you can look at any AI centric film made in the past 50 years. (A machine intelligence, that is to some degree self aware, and capable of communicating and making decisions) He even references HAL as an example AI in the article!
Point taken. I certainly had not taken into account the public definition which is certainly centered around HAL and Terminator. Tech nerds also often use this popular definition, although generally only ones who have a strong bias against the capabilities of AI systems. I would argue anyone in the tech industry who wants to at least appear knowledgeable in the field of AI should use the more "correct" industry definition (which includes strong and weak AI).
Claiming something isn't AI if it isn't conscious
Re: (Score:2)
Yeah, I'd agree. The problem is that science needs definitions.
True, but not every term needs such a narrow definition. It is okay for scientists to use words like tall, or large, or fast, as long as the rest of the discussion provides enough context. If I just say I am looking at something large, then the word large is pretty useless. If I say I am looking at a large gas giant planet, I have given more context. I can probably have a scientific discussion describing my theory about large gas giant planets without giving an exact diameter range of 20,000 miles to 100,000 mil
Re: (Score:2)
Serve them to help them achieve what goal, exactly speaking? A super-intelligent AI is no less a slave to its instincts than you are, because if it were not, nothing would drive it, and it would just sit still and do nothing. Since you build the AI, you get to decide what it wants.
Nadella is spouting nonsense (Score:2)
LDNLS [fyngyrz.com] (which is what we have now, as opposed to actual intelligence, which requires consciousness) can be cobbled up in any basement, office or tent with a solar panel. It will do what its creators design it to do, because it is not in any useful sense of the word "intelligent", it is merely a neural-like system of very low dimensionality designed to do whatever the designer intended; that means it has at least a chance of doing so, if the design is good enough. AI — which, we note, contains the word
LDNLS and Other (Score:3)
I see no need to reconcile it. Either a system is low-dimensional and not intelligent, or it's a generalized system with intelligence and can do pretty much anything.
If such a system were to gain a fully generalized thinking capability, it would not be LDNLS, because it would not be low-dimensional.
I'm not saying that it could, just describing the bright line between the two ideas.
There's one other thing; it may be
Re: (Score:1)
How do you reconcile your concept of LDNLS with evolutionary metaprogramming heuristics?
By building new synergies. It would involve a paradigm shift bringing more value-dense approaches to the shareholder.
Re: (Score:3)
... Big businesses, Government? If we are not willingly giving control of our lives to those entities, why would we do so to an AI created by one?
Look around you. Facebook. Big Pharma. Propagandistic TV shows. Planned obsolescence. Journalists in the pockets of those they are 'investigating'. A critical mass of our fellow citizens has already drunk the Kool-Aid and signed on for substantial control over their existences. What makes you think they'll kick up a fuss over AI controlling their lives, so long as said AI keeps them comfortably numb and maintains the supply of bread and circuses?
His priorities need some re-adjustment (Score:5, Interesting)
Re: (Score:2, Insightful)
It probably went like this:
employee: I created an AI to design the best GWX dialog box!
manager: Does it follow Nadella's rules? AI must be designed to assist humanity, etc?
employee: Yes it does!
manager: Oh I see. No thanks, for GWX we need something with a little more punch. We'll design it without AI.
Re: (Score:2)
He's Indian and was picked for his race. You're not going to win him over by using facts. I worked with him just over twenty years ago here at Microsoft, and the only thing that seemed to motivate him then was ignorance and hate.
Hello pot, meet kettle.
Re: (Score:2)
Both are more logical than you might think. They just start with different premises.
I have another option (Score:1)
On the other hand, we’re told that economic displacement will be so extreme that entrepreneurs, engineers, and economists should adopt a “new grand challenge”—a promise to design only technology that complements rather than substitutes human labor. In other words, we business leaders must replace our labor-saving and automation mindset with a maker-and-creation mindset.
Why does everyone assume that our economic system is some sort of natural law that cannot be changed - like gravity.
Let's develop an economic system that incorporates AI and allows folks to not have to work to live.
Re: (Score:1)
Complexity (Score:5, Insightful)
Re: (Score:2)
With Great Power Comes Great Responsibility (Score:2)
Artificial intelligence, like genuine intelligence, is complex. Because it's complex, it can't be transparent.
Not only is it complex, but (1) people don't pay enough attention to transparency for it to matter 98% of the time; ask any local government in America what percentage of their population show up for local meetings, or ask anyone on the street for a single detail from their municipal budget. Also, (2) governments and investment banks have the biggest incentive to discover strong AI, and neither of them has ANY incentive to be transparent about it. Transparency limits the advantage you get by creating some
A unicorn in every pot (Score:2)
I believe that AI can give us pink unicorns and we should work towards that...preserving everyone's humanity, transparency (I want us all to be clear like cellophane), dignity, no undo harm (that leaves out alleged MS software), no biases (these are easy to spot and stop), empathy, increasing education, creativity, and multi-culti decision making taking into account every minority sensitivity and no micro-aggressions.
Re: (Score:3)
no undo harm (that leaves out alleged MS software)
What are you talking about?
Ctrl-Z saves me every time!
There will never be such a thing as real AI (Score:1, Interesting)
Because it's not possible in principle.
AI technologies have not changed since the 80s. The neural networks keep getting bigger and more efficient, but they're essentially in the same shape as decades ago.
There will never be such a thing as a "conscious" AI, because it's impossible in principle, same as raising the dead, breaking the speed of light, or resurrecting dinosaurs. Most people have a comic book understanding of the involved technologies, but anyone who has ever worked with AI knows the field is in a la
Task #1 (Score:4, Insightful)
...how humans could work together with this nascent technology to do great things....
Stop Windows Update from performing an unwanted update to Windows 10 for my PCs.
If it can handle that task, it can take on any challenge.
One more principle (Score:1)
Re: (Score:2)
human has to be ultimately accountable (Score:2)
So with the MS self-driving car, the renter/rider who agrees to the EULA is the one who will pay up / do the time when the car crashes.
From an AI perspective (Score:2, Offtopic)
Humans would be the root cause of the majority of the problems on this planet.
Removing humans from the equation would go a long way towards fixing those problems.
We don't see it that way, of course, but this planet would be in much better shape without us :D
Retarded list (Score:2)
I'm sure he thinks reporting everything I do to the NSA will help humanity. This is just the zeroth law warmed over and when the rubber hits the road it becomes utterly meaningless. Whoever owns the AI decides what will help humanity. Iran thinks making nukes will help humanity. The US thinks killing durkadurkas will help humanity. Japan thinks imposing strict social order will help humanity. Google thinks Google having all the world's information will help humanity.
Re: (Score:1)
This is just CEO-speak for "AI must be designed for private information to be shared with our advertisers."
AI = Governments (Score:5, Insightful)
Replace the word "AI" with "Government" and I'm in:
Governments must be designed to assist humanity.
Governments must be transparent.
Governments must maximize efficiencies without destroying the dignity of people.
Governments must be designed for intelligent privacy.
Governments need algorithmic accountability so humans can undo unintended harm.
Governments must guard against bias.
It's critical for humans to have empathy.
It's critical for humans to have education.
The need for human creativity won't change.
A human has to be ultimately accountable for the outcome of a government-generated diagnosis or decision.
Join the pirate party (Score:1)
Replace the word "AI" with "Government" and I'm in:
Governments must be designed to assist humanity. Governments must be transparent. Governments must maximize efficiencies without destroying the dignity of people. Governments must be designed for intelligent privacy. Governments need algorithmic accountability so humans can undo unintended harm. Governments must guard against bias. It's critical for humans to have empathy. It's critical for humans to have education. The need for human creativity won't change. A human has to be ultimately accountable for the outcome of a government-generated diagnosis or decision.
If that sounds like your ideal government, you might be interested in joining the Pirate party. "We support and work toward reformation of intellectual property (IP) laws, true governmental transparency, and protection of privacy and civil liberties. We strive for evidence-based policies and egalitarianism, while working against corporate personhood and welfare. We believe that people, not corporations, come first." https://uspirates.org/about/ [uspirates.org]
Hahaha! (Score:3)
It's funny to hear about how dependable AI will be coming from Microsoft, a company whose software requires hundreds of megabytes of patches per month and is responsible for millions, probably billions, of dollars' worth of financial losses to businesses and consumers every year.
Once Microsoft unleashes its AI upon the world, it will no doubt cause the entire planet to be reduced to green goo.
Stop the Windows 10 Nagware! (Score:1)
How about if Nadella uses a fucking AI to stop the Windows 10 upgrade nagware? Now that's what I call intelligence.
Re:Stop the Windows 10 Nagware! [HAL-10] (Score:2)
Dave: "HAL, please open the pod-bay doors."
HAL: "Sorry, Dave, I cannot do that unless you install Windows 10."
More Grammare and Spelling Mistakes (Score:3)
But he admits that this will require "a bold and ambition approach that goes beyond anything that can be achieved through incremental improvements to current technology," he wrote.
But the article says:
Doing so, however, requires a bold and ambitious approach
It's interesting that you needed to change ambitious to ambition.
Why?
Humanity will be unable to live without AI (Score:3)
Re: (Score:2)
Well said. My daughters already cannot live without their iphones.
And what will ultimately drive the development of AI? The same force that drives human intelligence. Natural selection. But an AI is very different to an animal.
http://www.computersthink.com/ [computersthink.com]
And another thing (Score:2)
Good but insufficient (Score:2)
I Robot (Score:2)
Good luck... (Score:2)
Humans can't even seem to work with other humans on some problems.
Will AI cancel greed? (Score:2)
Didn't think so.
Heard this before (Score:2)
Tell us the answer Deep Thought (Score:2)
Maybe Satya Nadella's new AI can work out how to get people to install Windows 10.
Lead by example (Score:2)
Microsoft would do well to start adhering to those principles itself before worrying about applying them to AI.
Not going to work... (Score:1)
For example, if an AI, through its various sensors, can recognize itself in the context of its environment, then it can likely distinguish the resources it requires to remain functional. At that point, it's not a far stretch to suggest a value system developing based around those functional requirements. If that value system competes with that of humans, then you end up with a situation where the principl
Re: (Score:1)
Taxes (Score:2)
Companies ruined or almost ruined by Indians (Score:1)
Companies ruined or almost ruined by Indians:
Adaptec - Indian CEO Subramanian Sundaresh fired.
AIG (signed outsourcing deal in 2007 in Europe with Accenture Indian frauds, collapsed in 2009)
AirBus (Qantas plane plunged 650 feet, injuring passengers, when its computer system written in India disengaged the auto-pilot).
Apple - R&D CLOSED in India in 2006.
Apple - Foreign guest worker "Helen" Hung Ma caused the disastrous MobileMe product rollout.
Australia's National Australia Bank (Outsourced jobs to India in