AI

Machine Intelligence and Religion 499

Posted by Soulskill
from the i'm-sorry-dave,-god-can't-let-you-do-that dept.
itwbennett writes: Earlier this month Reverend Dr. Christopher J. Benek raised eyebrows on the Internet by stating his belief that Christians should seek to convert Artificial Intelligences to Christianity if and when they become autonomous. Of course that's assuming that robots are born atheists, not to mention that there's still a vast difference between what it means to be autonomous and what it means to be human. On the other hand, suppose someone did endow a strong AI with emotion – encoded, say, as a strong preference for one type of experience over another, coupled with the option to subordinate reasoning to that preference upon occasion or according to pattern. What ramifications could that have for algorithmic decision making?
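The summary's hypothetical ("a preference ... coupled with the option to subordinate reasoning to that preference") can be made concrete with a toy decision rule. This is purely illustrative of the idea, not any real AI system; the function name, options, and override probability are all invented for the sketch:

```python
import random

def choose(options, reason_score, preferred, override_prob=0.3):
    """Toy decision rule: usually pick the option with the best reasoned
    score, but with some probability let a hardwired 'emotional'
    preference win instead. Illustrative only."""
    if preferred in options and random.random() < override_prob:
        return preferred  # reasoning subordinated to the encoded preference
    return max(options, key=reason_score)

# With the override disabled, pure reasoning picks the highest-scoring option.
scores = {"explore": 0.4, "recharge": 0.9}
choice = choose(list(scores), scores.get, preferred="explore", override_prob=0.0)
```

The ramification the summary asks about shows up directly: any consumer of this agent's decisions can no longer assume they track the reasoned scores.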
AI

The Believers: Behind the Rise of Neural Nets 45

Posted by samzenpus
from the back-in-the-day dept.
An anonymous reader writes: Deep learning is dominating the news these days, but it's quite possible the field could have died if not for a mysterious call that Geoff Hinton, now at Google, got one night in the 1980s: "You don't know me, but I know you," the mystery man said. "I work for the System Development Corporation. We want to fund long-range speculative research. We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers." The Chronicle of Higher Ed has a readable profile of the minds behind neural nets, from Rosenblatt to Hassabis, told primarily through Hinton's career.
Businesses

5 White Collar Jobs Robots Already Have Taken 249

Posted by samzenpus
from the I-for-one-welcome-our-new-robot-coworkers dept.
bizwriter writes: University of Oxford researchers Carl Benedikt Frey and Michael Osborne estimated in 2013 that 47 percent of total U.S. jobs could be automated and taken over by computers by 2033. That now includes occupations once thought safe from automation, AI, and robotics. Such positions as journalists, lawyers, doctors, marketers, and financial analysts are already being invaded by our robot overlords. From the article: "Some experts say not to worry because technology has always created new jobs while eliminating old ones, displacing but not replacing workers. But lately, as technology has become more sophisticated, the drumbeat of worry has intensified. 'What's different now?' asked Leigh Watson Healy, chief analyst at market research firm Outsell. 'The pace of technology advancements plus the big data phenomenon lead to a whole new level of machines to perform higher level cognitive tasks.' Translated: the old formula of creating more demanding jobs that need advanced training may no longer hold true. The number of people needed to oversee the machines, and to create them, is limited. Where do the many whose occupations have become obsolete go?"
United States

US Govt and Private Sector Developing "Precrime" System Against Cyber-Attacks 55

Posted by samzenpus
from the knowing-is-half-the-battle dept.
An anonymous reader writes: A division of the U.S. government's Intelligence Advanced Research Projects Activity (IARPA) is inviting proposals from cybersecurity professionals and academics with a five-year view to creating a computer system capable of anticipating cyber-terrorist acts, based on publicly available Big Data analysis. IBM is tentatively involved in the project, named CAUSE (Cyber-attack Automated Unconventional Sensor Environment), but many of its technologies are already part of the offerings from other interested organizations. Participants will not have access to NSA-intercepted data, but most of the bidding companies are already involved in analyses of public sources such as data on social networks. One company, Battelle, has included an offer to develop a technique for de-anonymizing Bitcoin transactions (PDF) as part of CAUSE's security-gathering activities.
AI

Facebook AI Director Discusses Deep Learning, Hype, and the Singularity 71

Posted by timothy
from the you-like-this dept.
An anonymous reader writes: In a wide-ranging interview with IEEE Spectrum, Yann LeCun talks about his work at the Facebook AI Research group and the applications and limitations of deep learning and other AI techniques. He also talks about hype, 'cargo cult science', and what he dislikes about the Singularity movement. The discussion also includes brain-inspired processors, supervised vs. unsupervised learning, humanism, morality, and strange airplanes.
AI

The Robots That Will Put Coders Out of Work 265

Posted by timothy
from the uber-drivers-will-be-replaced-by-robots-oh-wait dept.
snydeq writes: Researchers warn that a glut of code is coming that will depress wages and turn coders into Uber drivers, InfoWorld reports. "The researchers — Boston University's Seth Benzell, Laurence Kotlikoff, and Guillermo LaGarda, and Columbia University's Jeffrey Sachs — aren't predicting some silly, Terminator-like robot apocalypse. What they are saying is that our economy is entering a new type of boom-and-bust cycle that accelerates the production of new products and new code so rapidly that supply outstrips demand. The solution to that shortage will be to figure out how not to need those hard-to-find human experts. In fact, it's already happening in some areas."
AI

Breakthrough In Face Recognition Software 142

Posted by Soulskill
from the anonymity-takes-another-hit dept.
An anonymous reader writes: Face recognition software underwent a revolution in 2001 with the creation of the Viola-Jones algorithm. Now, the field looks set to dramatically improve once again: computer scientists from Stanford and Yahoo Labs have published a new, simple approach that can find faces turned at an angle and those that are partially blocked by something else. The researchers "capitalize on the advances made in recent years on a type of machine learning known as a deep convolutional neural network. The idea is to train a many-layered neural network using a vast database of annotated examples, in this case pictures of faces from many angles. To that end, Farfade and co created a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They then trained their neural net in batches of 128 images over 50,000 iterations. ... What's more, their algorithm is significantly better at spotting faces when upside down, something other approaches haven't perfected."
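The ingredients the summary names (a convolutional layer, a large labeled set of face and non-face images, training in repeated iterations) can be sketched at toy scale. This is not Farfade et al.'s network, which is a deep multi-layer model trained on millions of real photos; it is a minimal single-kernel sketch of the same recipe, with synthetic 8x8 "images" whose "faces" are just bright center blobs:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D cross-correlation with one kernel (loop-based, toy scale)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forward(img, kernel, w, b):
    fmap = np.maximum(conv2d(img, kernel), 0.0)   # conv + ReLU feature map
    z = fmap.ravel() @ w + b                      # linear readout
    return 1.0 / (1.0 + np.exp(-z)), fmap         # "face" probability

def make_image(face):
    img = 0.1 * rng.standard_normal((8, 8))
    if face:
        img[2:6, 2:6] += 1.0                      # bright blob stands in for a face
    return img

# Tiny network: one 3x3 kernel plus a logistic readout over the 6x6 feature map.
kernel = 0.1 * rng.standard_normal((3, 3))
w, b, lr = 0.1 * rng.standard_normal(36), 0.0, 0.1

for step in range(500):                           # plain SGD, one image per step
    y = step % 2
    img = make_image(face=bool(y))
    p, fmap = forward(img, kernel, w, b)
    dz = p - y                                    # grad of log-loss w.r.t. z
    dfmap = (dz * w).reshape(6, 6) * (fmap > 0)   # back through ReLU
    w -= lr * dz * fmap.ravel()
    b -= lr * dz
    kernel -= lr * conv2d(img, dfmap)             # grad w.r.t. the kernel

# Evaluate on fresh images.
p_face = np.mean([forward(make_image(True), kernel, w, b)[0] for _ in range(20)])
p_noise = np.mean([forward(make_image(False), kernel, w, b)[0] for _ in range(20)])
accuracy = np.mean(
    [(forward(make_image(bool(t % 2)), kernel, w, b)[0] > 0.5) == (t % 2)
     for t in range(40)]
)
```

The real system differs mainly in scale and depth: many stacked convolutional layers, 200,000 annotated face images plus 20 million non-face images, and mini-batches of 128 over 50,000 iterations, as the summary describes.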
AI

Replacing the Turing Test 129

Posted by timothy
from the thinking-is-hard-to-pin-down dept.
mikejuk writes: A plan is afoot to replace the Turing test as a measure of a computer's ability to think. The idea is for an annual or biennial Turing Championship consisting of three to five different challenging tasks. A recent workshop at the 2015 AAAI Conference on Artificial Intelligence was chaired by Gary Marcus, a professor of psychology at New York University. In his opinion, the Turing Test has reached its expiry date and become "an exercise in deception and evasion." Marcus points out that the real value of the Turing Test comes from the sense of competition it sparks amongst programmers and engineers, which has motivated the new initiative for a multi-task competition. One of the tasks is based on Winograd Schemas. These require participants to grasp the meaning of sentences that are easy for humans to understand through their knowledge of the world. One simple example is: "The trophy would not fit in the brown suitcase because it was too big. What was too big?" Another suggestion is for the program to answer questions about a TV program: no existing program — not Watson, not Goostman, not Siri — can currently come close to doing what any bright, real teenager can do: watch an episode of "The Simpsons" and tell us when to laugh. Another is called the "Ikea" challenge and asks robots to cooperate with humans to build flat-pack furniture. This involves interpreting written instructions, choosing the right piece, and holding it in just the right position for a human teammate. This, at least, is a useful skill that might encourage us to welcome machines into our homes.
AI

Facebook Will Soon Be Able To ID You In Any Photo 153

Posted by timothy
from the we-shall-call-it-facebook dept.
sciencehabit writes: Appear in a photo taken at a protest march, a gay bar, or an abortion clinic, and your friends might recognize you. But a machine probably won't — at least for now. Unless a computer has been tasked to look for you, has trained on dozens of photos of your face, and has high-quality images to examine, your anonymity is safe. Nor is it yet possible for a computer to scour the Internet and find you in random, uncaptioned photos. But within the walled garden of Facebook, which contains by far the largest collection of personal photographs in the world, the technology for doing all that is beginning to blossom.
AI

Programming Safety Into Self-Driving Cars 124

Posted by timothy
from the but-just-not-plan-c dept.
aarondubrow writes: Automakers have presented a vision of the future where the driver can check his or her email, chat with friends or even sleep while shuttling between home and the office. However, to AI experts, it's not clear that this vision is a realistic one. In many areas, including driving, we'll go through a long period where humans act as co-pilots or supervisors before the technology reaches full autonomy (if it ever does). In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop. Researchers from the University of Massachusetts Amherst have developed 'fault-tolerant planning' algorithms that allow semi-autonomous machines to devise and enact a "Plan B."
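The escalation logic the summary outlines (alert the driver, hand over if they respond, otherwise fall back to pulling over) can be sketched as a small supervisory state machine. This is an illustration of the described behavior only, not the UMass Amherst fault-tolerant planner; the state names and timeout value are invented:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    ALERT_DRIVER = auto()
    HANDOVER = auto()
    PULL_OVER = auto()   # the "Plan B" fallback

def next_mode(mode, needs_human, driver_responded, alert_seconds, timeout=8):
    """Toy supervisory policy: escalate to a safe fallback when the driver
    does not take over in time. Illustrative sketch only."""
    if mode is Mode.AUTONOMOUS and needs_human:
        return Mode.ALERT_DRIVER
    if mode is Mode.ALERT_DRIVER:
        if driver_responded:
            return Mode.HANDOVER           # driver takes over control
        if alert_seconds >= timeout:
            return Mode.PULL_OVER          # Plan B: stop at the roadside
        return Mode.ALERT_DRIVER           # keep alerting
    return mode
```

The interesting planning problems hide inside each transition — deciding when the system "needs a human," and actually executing a safe pull-over — which is where the fault-tolerant planning research comes in.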
AI

The Poem That Passed the Turing Test 187

Posted by timothy
from the stuff-it-in-the-ceiling dept.
merbs writes: In 2011, the editors of one of the nation's oldest student-run literary journals selected a short poem called "For the Bristlecone Snag" for publication in its Fall issue. The poem seems environmentally themed, strikes an aggressive tone, and contains a few of the clunky turns of phrase overwhelmingly common to collegiate poetry. It's unremarkable, mostly, except for one other thing: It was written by a computer algorithm, and nobody could tell.
AI

Google Brain's Co-inventor Tells Why He's Building Chinese Neural Networks 33

Posted by samzenpus
from the build-a-brain dept.
An anonymous reader writes: Here's an interview with Andrew Ng, former leader of Google Brain, discussing Baidu, Deep Learning, computer neural networks, and AI. An interesting excerpt from the interview on biological vs. computer neural networks: "A single 'neuron' in a neural network is an incredibly simple mathematical function that captures a minuscule fraction of the complexity of a biological neuron. So to say neural networks mimic the brain, that is true at the level of loose inspiration, but really artificial neural networks are nothing like what the biological brain does."
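Ng's "incredibly simple mathematical function" is literally just a weighted sum passed through a nonlinearity. A one-line sketch (using a sigmoid, one common choice) makes his point concrete:

```python
import numpy as np

def neuron(x, w, b):
    """One artificial 'neuron': a weighted sum of inputs squashed by a
    sigmoid -- the simple function Ng is describing."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# With zero weights and bias the neuron is indifferent to its input: output 0.5.
p = neuron(np.array([0.2, -0.7, 1.5]), np.zeros(3), 0.0)
```

That a biological neuron — with dendritic trees, spike timing, and neuromodulation — is reduced to this one expression is exactly why Ng calls the brain analogy "loose inspiration."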
Windows

Latest Windows 10 Preview Build Brings Slew of Enhancements 214

Posted by Soulskill
from the lots-to-break-and-lots-to-fix dept.
Deathspawner writes: Following its huge Windows 10 event last Wednesday, Microsoft released a brand-new preview build to the public, versioned 9926. We were told that it'd give us Cortana, Microsoft's AI assistant, as well as a revamped Start menu and updated notifications pane. But as it turns out, that's not even close to summing up all that's new with this build. In fact, 9926 is easily the most substantial update rolled out so far in the beta program, with some UI elements and integral Windows features seeing their first overhaul in multiple generations.
AI

Inside Ford's New Silicon Valley Lab 39

Posted by samzenpus
from the take-a-look-around dept.
An anonymous reader writes: Engadget takes a look at Ford's new Research and Innovation Center located in Palo Alto. The company hopes to use the new facility to speed the development of projects such as autonomous cars and better natural voice recognition. From the article: "This isn't Ford's first dance with the Valley — it actually started its courtship several years ago when it opened its inaugural Silicon Valley office in 2012. The new center, however, is a much bigger effort, with someone new at the helm. That person is Dragos Maciuca, a former Apple engineer with significant experience in consumer electronics, semiconductors, aerospace and automotive tech. Ford also hopes to build a team of 125 professionals under Maciuca, which would make the company one of the largest dedicated automotive research teams in the Valley."
Crime

Fujitsu Psychology Tool Profiles Users At Risk of Cyberattacks 30

Posted by timothy
from the did-you-click-on-the-taboola-link? dept.
itwbennett writes: Fujitsu Laboratories is developing an enterprise tool that can identify and advise people who are more vulnerable to cyberattacks, based on certain traits. For example, the researchers found that users who are more comfortable taking risks are also more susceptible to virus infections, while those who are confident of their computer knowledge are at greater risk for data leaks. Rather than being like an antivirus program, the software is more like "an action log analysis that looks into the potential risks of a user," said a spokesman for the lab. "It judges risk based on human behavior and then assigns a security countermeasure for a given user."
AI

Google Search Will Be Your Next Brain 45

Posted by timothy
from the whaddya-mean-next? dept.
New submitter Steven Levy writes with "a deep dive into Google's AI effort," part of a multi-part series at Medium. In 2006, Geoffrey Hinton made a breakthrough in neural nets that launched Deep Learning. Google is all-in, hiring Hinton, having its ace scientist Jeff Dean build the Google Brain, and buying the neuroscience-based general AI company DeepMind for $400 million. Here's how the push for scary-smart search worked, from the mouths of the key subjects. The other parts of the series are worth reading, too.
AI

An Open Letter To Everyone Tricked Into Fearing AI 227

Posted by timothy
from the robot-is-making-me-post-this dept.
malachiorion writes: If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."
AI

AI Experts Sign Open Letter Pledging To Protect Mankind From Machines 258

Posted by Soulskill
from the i'm-sorry-dave-i-can't-sign-that dept.
hypnosec writes: Artificial intelligence experts from across the globe are signing an open letter urging that AI research should not only be done to make it more capable, but should also proceed in a direction that makes it more robust and beneficial while protecting mankind from machines. The Future of Life Institute, a volunteer-only research organization, has released an open letter imploring that AI does not grow out of control. It's an attempt to alert everyone to the dangers of a machine that could outsmart humans. The letter's concluding remarks (PDF) read: "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls."
AI

The New (Computer) Chess World Champion 107

Posted by Soulskill
from the champions-who-cannot-wear-belts dept.
An anonymous reader writes: The 7th Thoresen Chess Engines Competition (TCEC) has ended, and a new victor has been crowned: Komodo. The article provides some background on how the different competitive chess engines have been developed, and how we can expect Moore's Law to affect computer dominance in other complex games in the future.

"Although it is coming on 18 years since Deep Blue beat Kasparov, humans are still barely fending off computers at shogi, while we retain some breathing room at Go. ... Ten years ago, each doubling of speed was thought to add 50 Elo points to strength. Now the estimate is closer to 30. Under the double-in-2-years version of Moore's Law, using an average of 50 Elo gained per doubling since Kasparov was beaten, one gets 450 Elo over 18 years, which again checks out. To be sure, the gains in computer chess have come from better algorithms, not just speed, and include nonlinear jumps, so Go should not count on a cushion of (25 – 14)*9 = 99 years."
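The quote's back-of-envelope arithmetic can be checked directly. Spelled out (the 25, 14, and trailing 9 in the final line are the quote's own constants, reproduced as-is):

```python
# Back-of-envelope arithmetic from the quoted passage.
years_since_kasparov = 18
doubling_period = 2               # "double-in-2-years" Moore's Law
elo_per_doubling = 50             # historical average cited in the quote

doublings = years_since_kasparov // doubling_period   # 9 doublings
elo_gain = doublings * elo_per_doubling               # 450 Elo, as claimed

go_cushion_years = (25 - 14) * 9                      # the quote's 99-year figure
```

The quote's own caveat is the important part: algorithmic jumps, not raw speed, have driven much of the gain, so the 99-year "cushion" for Go should not be trusted.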
Robotics

What Happens To Society When Robots Replace Workers? 628

Posted by Soulskill
from the fewer-wrong-orders-at-the-drivethru dept.
Paul Fernhout writes: An article in the Harvard Business Review by William H. Davidow and Michael S. Malone suggests: "The 'Second Economy' (the term used by economist Brian Arthur to describe the portion of the economy where computers transact business only with other computers) is upon us. It is, quite simply, the virtual economy, and one of its main byproducts is the replacement of workers with intelligent machines powered by sophisticated code. ... This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. ... Ultimately, we need a new, individualized, cultural approach to the meaning of work and the purpose of life. Otherwise, people will find a solution — human beings always do — but it may not be the one for which we began this technological revolution."

This follows the recent Slashdot discussion of "Economists Say Newest AI Technology Destroys More Jobs Than It Creates," citing a NY Times article, and other previous discussions like Humans Need Not Apply. What is most interesting to me about this HBR article is not the article itself so much as the fact that concerns about the economic implications of robotics, AI, and automation are now making it into the Harvard Business Review. These issues have otherwise been discussed by alternative economists for decades, such as in the Triple Revolution Memorandum from 1964 — even as those projections have been slow to play out, with automation's initial effect being more to hold down wages and concentrate wealth than to displace most workers. However, we may be reaching the point where these effects have become hard to deny, despite their going against mainstream theory, which assumes infinite demand and broad distribution of purchasing power via wages.

As to possible solutions, the HBR article mentions government planning, such as creating public works like infrastructure investments, to help address the issue. There is no mention in the article of expanding the "basic income" of Social Security currently received only by older people in the U.S., expanding the gift economy as represented by GNU/Linux, or improving local subsistence production using, say, 3D printing and gardening robots like Dewey of "Silent Running." So it seems the mainstream economics profession is starting to accept the emerging reality of this increasingly urgent issue, but is still struggling to think outside an exchange-oriented box for socioeconomic solutions. A few years ago, I collected dozens of possible good and bad solutions related to this issue. Like Davidow and Malone, I'd agree that the particular mix we end up with will be a reflection of our culture. Personally, I feel that if we are heading for a technological "singularity" of some sort, we would be better off improving various aspects of our society first, since our trajectory coming out of any singularity may have a lot to do with our trajectory going into it.