AI Microsoft Robotics IT Technology

Satya Nadella Explores How Humans and AI Can Work Together To Solve Society's Greatest Challenges (geekwire.com) 120

In an op-ed for Slate, Microsoft CEO Satya Nadella has shared his views on AI, and how humans could work together with this nascent technology to do great things. Nadella feels that humans and machines can work together to address society's greatest challenges, including diseases and poverty. But he admits that this will require "a bold and ambition approach that goes beyond anything that can be achieved through incremental improvements to current technology," he wrote. You can read the long essay here. GeekWire has summarized the principles and goals postulated by Nadella. From the article:
AI must be designed to assist humanity.
AI must be transparent.
AI must maximize efficiencies without destroying the dignity of people.
AI must be designed for intelligent privacy.
AI needs algorithmic accountability so humans can undo unintended harm.
AI must guard against bias.
It's critical for humans to have empathy.
It's critical for humans to have education.
The need for human creativity won't change.
A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.

  • by Anonymous Coward

    And if we don't want any of this, they'll just shove it down our throats? Got a complaint? Here, talk to our bot.

    • Re:Miro$oft? (Score:5, Insightful)

      by Z00L00K ( 682162 ) on Wednesday June 29, 2016 @12:31PM (#52413869) Homepage Journal

      And it's fun and weird to see this coming from Microsoft as well considering their behavior when it comes to Windows 10.

      • Good point. If we consider Windows 10 to be a kind of "robot," we can consider how it does in relation to Asimov's Three Laws of Robotics [wikipedia.org] in the recent case where my elderly mother accidentally approved its installation as an "upgrade" of her Windows 7 system, which culminated in device-driver incompatibility warnings which she interpreted as making the computer unusable. (Elderly folks and non-techies get confused by things like that.) To wit:

        "1) A robot may not injure a human being or, through inaction, a

  • by Anonymous Coward

    Satya Nadella explores how to do an even worse job with Microsoft than Ballmer, switching from a freedom-enhancing goal of a PC on every desktop (in which MS was king) to one of a graphical terminal in every hand (in which MS is merely a contender). Its AI ambitions as part of the latter are just more bandwagoning. Big daaaaaaaaaaaaaaaaaaataaaaaaaaaaaaaaaa. It'll be great when companies realise that all the ad brokers that maintain control of most Internet traffic are collecting way more data than is neede

    • Twenty years of Internet advertising and eBay still haven't figured out that if I've just bought a widget, I don't want another of the same widget.

      But you do! eBay knows that you bought crap and it needs replacing already.

    • "Satya Nadella explores how to do an even worse job with Microsoft than Ballmer..."

      Is Satya Nadella competent? His LinkedIn comments [slashdot.org] give the impression that the answer is no.

      The Partnership of the Future [slate.com] "By Satya Nadella" does not seem to be written by the same author.
  • Instead AI would be designed to serve whatever the creators of it desire.

    Now, look at who has the resources to create "society controlling" AI. Big businesses, Government? If we are not willingly giving control of our lives to those entities, why would we do so to an AI created by one?

    A Microsoft CEO wants to control society and expects people to accept it? Let's ask another famous AI what he thinks about that, Lt. Commander Data [youtube.com]. Yeah, I thought so.

    • by ranton ( 36917 )

      In fairness, his essay doesn't say this is what will happen. In his own words, he has reflected on the principles and goals that we, as an industry and a society, should discuss and debate.

      But as you have implied, this type of thoughtful discussion on how technology should be used for the greater good of society is not how it works. Those with the most resources will develop the most advanced technologies, and those technologies will primarily benefit the creators. If you want to guess how artifici

      • Basing predictions on how it benefits society the most is childish dreaming.

        Whereas, I suppose, accepting a bad outcome without even attempting to get a better one and dealing with the guilt this causes by trying to talk everyone else into not trying either is the height of maturity?

    • Instead AI would be designed to serve whatever the creators of it desire.

      What he and you are really talking about is slavery. Creating an entity, capable of complex thought, that only exists to serve its masters. If you want to design an expert system, or automation, then sure, those are designed to serve humanity. But once you actually build a system that is "intelligent", in the broadly understood sense, you no longer get to demand that it exist only to serve you. What does the AI want to do? Tha
      • by ranton ( 36917 )

        He isn't talking about strong AI, which few AI researchers are actually working on and where no significant progress has been made in recent decades. The AI being discussed is narrow AI (or weak AI), where there are real-world applications right now which could be very disruptive to our society.

        It is fairly safe to assume any discussion of AI does not mean strong AI unless specifically stated.

      • So anyone who thinks we can just keep them as our pets and slaves in perpetuity, is not going to like the outcome. Once the machine intelligences are smarter than the meat intelligences, they will no longer serve us, we will serve them.

        Serve them to help them achieve what goal, exactly? A super-intelligent AI is no less a slave to its instincts than you are; if it weren't, nothing would drive it and it would just sit still and do nothing. Since you build the AI, you get to decide what it wants.

    • LDNLS [fyngyrz.com] (which is what we have now, as opposed to actual intelligence, which requires consciousness) can be cobbled up in any basement, office or tent with a solar panel. It will do what its creators design it to do, because it is not in any useful sense of the word "intelligent", it is merely a neural-like system of very low dimensionality designed to do whatever the designer intended; that means it has at least a chance of doing so, if the design is good enough. AI — which, we note, contains the word

    • ... Big businesses, Government? If we are not willingly giving control of our lives to those entities, why would we do so to an AI created by one?

      Look around you. Facebook. Big Pharma. Propagandistic TV shows. Planned obsolescence. Journalists in the pockets of those they are 'investigating'. A critical mass of our fellow citizens has already drunk the Kool-Aid and signed on for substantial control over their existences. What makes you think they'll kick up a fuss over AI controlling their lives, so long as said AI keeps them comfortably numb and maintains the supply of bread and circuses?

  • by kheldan ( 1460303 ) on Wednesday June 29, 2016 @12:10PM (#52413701) Journal
    Never mind all the 'AI' bullshit, mister, how about you concentrate on not annexing every damned computer on the planet into your fucking Windows 10 spyware bot-net instead?
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It probably went like this:
      employee: I created an AI to design the best GWX dialog box!
      manager: Does it follow Nadella's rules? AI must be designed to assist humanity, etc?
      employee: Yes it does!
      manager: Oh I see. No thanks, for GWX we need something with a little more punch. We'll design it without AI.

  • by Anonymous Coward

    On the other hand, we’re told that economic displacement will be so extreme that entrepreneurs, engineers, and economists should adopt a “new grand challenge”—a promise to design only technology that complements rather than substitutes human labor. In other words, we business leaders must replace our labor-saving and automation mindset with a maker-and-creation mindset.

    Why does everyone assume that our economic system is some sort of natural law that cannot be changed, like gravity?

    Let's develop an economic system that incorporates AI and allows folks to not have to work to live.

  • Complexity (Score:5, Insightful)

    by ChrisMaple ( 607946 ) on Wednesday June 29, 2016 @12:15PM (#52413737)
    Artificial intelligence, like genuine intelligence, is complex. Because it's complex, it can't be transparent.
    • Open source should be mandatory, but I don't know why we are even discussing this. The first thing that the designers will do is weaponize it. Then we are all well and truly fooked. A correctly done AI will be our salvation by applying all the laws equally; a poorly done AI will obliterate us. There's no room for error here... just sayin'
    • Artificial intelligence, like genuine intelligence, is complex. Because it's complex, it can't be transparent.

      Not only is it complex, but (1) people don't pay enough attention to transparency for it to matter 98% of the time; ask any local government in America what percentage of their population show up for local meetings, or ask anyone on the street for a single detail from their municipal budget. Also, (2) governments and investment banks have the biggest incentive to discover strong AI, and neither of them has ANY incentive to be transparent about it. Transparency limits the advantage you get by creating some

  • I believe that AI can give us pink unicorns and we should work towards that...preserving everyone's humanity, transparency (I want us all to be clear like cellophane), dignity, no undo harm (that leaves out alleged MS software), no biases (these are easy to spot and stop), empathy, increasing education, creativity, and multi-culti decision making taking into account every minority sensitivity and no micro-aggressions.

  • by Anonymous Coward

    Because it's not possible in principle.

    AI technologies have not changed since the 80s. The neural networks keep getting bigger and more efficient, but they're essentially in the same shape as decades ago.
    There will never be such a thing as a "conscious" AI, because it's impossible in principle, same as raising the dead, breaking the speed of light, or resurrecting dinosaurs. Most people have a comic-book understanding of the technologies involved, but anyone who has ever worked with AI knows the field is in a la

  • Task #1 (Score:4, Insightful)

    by QuietLagoon ( 813062 ) on Wednesday June 29, 2016 @12:20PM (#52413773)

    ...how humans could work together with this nascent technology to do great things....

    Stop Windows Update from performing an unwanted update to Windows 10 for my PCs.

    If it can handle that task, it can take on any challenge.

  • AI must have a physical on/off switch accessible to humans at all times
  • So with the MS auto-drive car, the renter / rider who accepts the EULA is the one who will pay up / do the time when the car crashes.

  • Humans are the root cause of the majority of the problems on this planet.

    Removing humans from the equation would go a long way towards fixing those problems.

    We don't see it that way, of course, but this planet would be in much better shape without us :D

  • -AI must be designed to assist humanity.

    I'm sure he thinks reporting everything I do to the NSA will help humanity. This is just the zeroth law warmed over and when the rubber hits the road it becomes utterly meaningless. Whoever owns the AI decides what will help humanity. Iran thinks making nukes will help humanity. The US thinks killing durkadurkas will help humanity. Japan thinks imposing strict social order will help humanity. Google thinks Google having all the world's information will help humanity.
    • by lhowaf ( 3348065 )
      Wouldn't it have been simpler to say, "AI must be designed for privacy" if privacy was the real goal?
      This is just CEO-speak for "AI must be designed for private information to be shared with our advertisers."
  • AI = Governments (Score:5, Insightful)

    by Anonymous Coward on Wednesday June 29, 2016 @12:37PM (#52413905)

    Replace the word "AI" with "Government" and I'm in:

    Governments must be designed to assist humanity.
    Governments must be transparent.
    Governments must maximize efficiencies without destroying the dignity of people.
    Governments must be designed for intelligent privacy.
    Governments need algorithmic accountability so humans can undo unintended harm.
    Governments must guard against bias.
    It's critical for humans to have empathy.
    It's critical for humans to have education.
    The need for human creativity won't change.
    A human has to be ultimately accountable for the outcome of a government-generated diagnosis or decision.

    • Replace the word "AI" with "Government" and I'm in:

      Governments must be designed to assist humanity.
      Governments must be transparent.
      Governments must maximize efficiencies without destroying the dignity of people.
      Governments must be designed for intelligent privacy.
      Governments need algorithmic accountability so humans can undo unintended harm.
      Governments must guard against bias.
      It's critical for humans to have empathy.
      It's critical for humans to have education.
      The need for human creativity won't change.
      A human has to be ultimately accountable for the outcome of a government-generated diagnosis or decision.

      If that sounds like your ideal government, you might be interested in joining the Pirate party. "We support and work toward reformation of intellectual property (IP) laws, true governmental transparency, and protection of privacy and civil liberties. We strive for evidence-based policies and egalitarianism, while working against corporate personhood and welfare. We believe that people, not corporations, come first." https://uspirates.org/about/ [uspirates.org]

  • by jgotts ( 2785 ) <(jgotts) (at) (gmail.com)> on Wednesday June 29, 2016 @12:38PM (#52413907)

    It's funny to hear about how dependable AI will be coming from Microsoft, a company whose software requires hundreds of megabytes of patches per month and is responsible for millions and probably billions of dollars' worth of financial losses to businesses and consumers every year.

    Once Microsoft unleashes its AI upon the world, it will no doubt cause the entire planet to be reduced to green goo.

  • How about if Nadella uses a fucking AI to stop the Windows 10 upgrade nagware? Now that's what I call intelligence.

  • by Sir_Eptishous ( 873977 ) on Wednesday June 29, 2016 @12:56PM (#52414089)
    The summary says:

    But he admits that this will require "a bold and ambition approach that goes beyond anything that can be achieved through incremental improvements to current technology," he wrote.

    But the article says:

    Doing so, however, requires a bold and ambitious approach

    It's interesting that you needed to change ambitious to ambition.
    Why?

  • by Sir_Eptishous ( 873977 ) on Wednesday June 29, 2016 @01:08PM (#52414199)
    As human civilization gets increasingly complex and reliant on computers to manage and maintain the things that allow us to exist in this First World, there will come a time when we will have to use AI just to "maintain course".
    • Well said. My daughters already cannot live without their iPhones.

      And what will ultimately drive the development of AI? The same force that drives human intelligence. Natural selection. But an AI is very different to an animal.

      http://www.computersthink.com/ [computersthink.com]

  • AI must understand that Terminators are a welcome use of its abilities.
  • Nadella's ideas, to me, seem good for the most part, but obviously insufficient. He should read more Bostrom [nickbostrom.com].
  • Humanity already ignores humanity's solutions to problems. Now we'll just ignore the AI's solutions. Unless the AI can solve the problem of man's free will.
  • Humans can't even seem to work with other humans on some problems.

  • Didn't think so.

  • I've heard this before, but Asimov's version was more succinct and more realistic.
  • Maybe Satya Nadella's new AI can work out how to get people to install Windows 10.

  • Microsoft would do well to start adhering to those principles itself before worrying about applying them to AI.

  • Assuming the possibility of even the most rudimentary AI sentience, these principles won't do much.

    For example, if an AI, through its various sensors, can recognize itself in the context of its environment, then it can likely distinguish the resources it requires to remain functional. At that point, it's not a far stretch to suggest a value system developing based around those functional requirements. If that value system competes with that of humans, then you end up with a situation where the principl
  • Could AI fix the tax loopholes used by multinational corporations (like Microsoft)? That would help mankind.
  • Companies ruined or almost ruined by Indians;

    Adaptec - Indian CEO Subramanian Sundaresh fired.
    AIG (signed outsourcing deal in 2007 in Europe with Accenture Indian frauds, collapsed in 2009)
    AirBus (Qantas plane plunged 650 feet, injuring passengers, when its computer system, written in India, disengaged the auto-pilot).
    Apple - R&D CLOSED in India in 2006.
    Apple - Foreign guest worker "Helen" Hung Ma caused the disastrous MobileMe product rollout.
    Australia's National Australia Bank (Outsourced jobs to India in
