AI Businesses Robotics The Almighty Buck

Wall Street's Research Jobs Are the Most Likely To Be Upended By AI (qz.com) 66

An anonymous reader quotes a report from Quartz: Research analysts are the most likely employees on Wall Street to find themselves working with -- or being replaced by -- robots, according to a survey by Greenwich Associates. By next year, some 75% of banks and financial firms will either explore or implement artificial intelligence technologies, harnessing a variety of digital services to extract insights from mountains of data. While AI is probably near the peak of its hype cycle, several factors have helped it gain traction in recent years, according to Greenwich. Billions of images and documents are now available online for training computers to spot patterns and other high-level tasks. Advances in graphical processing units, which are adept at the kind of data crunching required by AI, are making sifting through daunting datasets much easier. The cloud has also made it cheaper for researchers and startups to boost their computing power to service sophisticated AI-enabled systems. AI makes sense for financial research, as machines can crunch reams of data more quickly than human analysts and, with the right data, identify obscure correlations and patterns.
  • by Anonymous Coward

    Good

    • Good for who? Good for the bank, surely.

      It may not be good for a subset of the bank's employees, but those guys have made it into "the club" already. Their big fear is that their new job will *only* pay them double what they're worth instead of triple.

      Most relevant is, is it good for society? That is tougher to answer without seeing exactly how AI gets implemented. The devil is in the details, as they say. But if they come to a juncture where the AI could either be tuned for maximal fairness, or maximal
      • by MrL0G1C ( 867445 )

        "Most relevant is, is it good for society?"
        No, this is for complex financial instruments that benefit people who invest billions, this isn't for the proles who are in debt.

  • Eh, maybe (Score:5, Interesting)

    by JBMcB ( 73720 ) on Friday October 27, 2017 @10:50PM (#55448693)

    Problem with using AI in these scenarios is that it's really good at finding correlations in what you tell it to look at. So maybe it finds correlations between interconnected stock prices, or maybe futures and trading volumes, or the consumer price index and stock prices of certain retail stocks, things like that.

    When everyone has AI's doing this, the margins get eaten up pretty quick, since everyone is getting the same results and takes the same positions.

    The areas you make money on are the niche correlations. A nationalistic dictator takes over some African country and shuts down rare-mineral exports, causing a spike in prices. A geothermal plant in Iceland goes down, idling its aluminum smelters, and aluminum prices rise. Those are the things AI sucks at.

    • They don't have to win constantly to turn a profit. The things you mentioned happen rarely, and all the bots need is a 60 to 70% win rate. Winning more than you lose is the only way forward.
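      The arithmetic behind that claim is simple expected value per trade. A quick sketch (the win rate and dollar amounts below are hypothetical, just to illustrate):

      ```python
      # Hypothetical numbers: a strategy that wins 60% of the time with
      # equal-sized wins and losses still has positive expected value.
      def expected_value(win_rate, avg_win, avg_loss):
          """Expected profit per trade: P(win)*gain - P(loss)*loss."""
          return win_rate * avg_win - (1 - win_rate) * avg_loss

      # 60% win rate, $100 won or lost per trade:
      ev = expected_value(0.60, 100.0, 100.0)
      print(ev)  # 20.0 -- $20 expected profit per trade, before costs
      ```

      The bots don't need dramatic one-off wins; a modest edge repeated over many trades compounds, which is exactly why a 60-70% hit rate is enough.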

      • Re: Eh, maybe (Score:5, Interesting)

        by KramberryKoncerto ( 2552046 ) on Friday October 27, 2017 @11:45PM (#55448805)

        There are some once-in-a-lifetime trades where people pour in really high stakes, like the one Soros made on the GBP crash last century. On the other hand, stuff like Brexit and Trump's win were really good bets even if you put their odds at 50:50 - because disproportionately few people bet money on the alternative outcome.

        I agree with GP - a lot of these roles would benefit from the increased productivity aided by good statistical tools, where one equity researcher has to work with or become a so-called data scientist to produce better insights.

      • Re:Eh, maybe (Score:4, Insightful)

        by sheramil ( 921315 ) on Saturday October 28, 2017 @04:19AM (#55449155)

        They don't have to win constantly to turn a profit. The things you mentioned happen rarely, and all the bots need is a 60 to 70% win rate. Winning more than you lose is the only way forward.

        Yeah, and when that level of winning pans out and they get greedier, they pay for the development of AI that understands a little more about the real world. And when THAT pans out and they get even greedier, they'll okay the development of AI that actively interferes in the real world to produce situations that profit can be made from.

        Then it'll be too late.

      • That, and cutting losses faster. Being able to stay positive even when individual trades earn less is what AI excels at, and firms are fine playing the long game.
    • Those are the things AI sucks at

      ...for now. Those are the AI goals the big banks keep in mind, and are working on heavily.

      • They can work on it all they like; it's a fundamental problem with AI that people have been working on for *decades* without getting much closer.

        That problem is context. AI is terrible at it. It only understands what you program it to understand, so it lacks "common sense" knowledge.

        The classic example is Abraham Lincoln giving the Gettysburg Address. You can feed that information into an expert system - that Lincoln gave the Gettysburg address at such and such a time / place / etc.

    • When everyone has AI's doing this, the margins get eaten up pretty quick, since everyone is getting the same results and takes the same positions.

      And that's a problem? I thought this was how perfectly functioning markets are supposed to work.

    • by zifn4b ( 1040588 )

      Problem with using AI in these scenarios is that it's really good at finding correlations in what you tell it to look at. So maybe it finds correlations between interconnected stock prices, or maybe futures and trading volumes, or the consumer price index and stock prices of certain retail stocks, things like that.

      Remember the episode of Star Trek Voyager where Seven of Nine identifies all these correlations and thinks there is a conspiracy theory going on board the ship? Yea...

    • Problem with using AI in these scenarios is that it's really good at finding correlations in what you tell it to look at. So maybe it finds correlations between interconnected stock prices, or maybe futures and trading volumes, or the consumer price index and stock prices of certain retail stocks, things like that.

      When everyone has AI's doing this, the margins get eaten up pretty quick, since everyone is getting the same results and takes the same positions.

      The areas you make money on are the niche correlations. A nationalistic dictator takes over some African country and shuts down rare-mineral exports, causing a spike in prices. A geothermal plant in Iceland goes down, idling its aluminum smelters, and aluminum prices rise. Those are the things AI sucks at.

      Which is why I think AI will be good at identifying where to look, but needs a human to decide what it is really revealing and what else may be useful.

      • It's the opposite. *You* tell the computer what to look at (by feeding it data) and then it calculates an answer (=makes a decision) based on what you feed it. Just like any computer ever. AI is not magic.

        What is new about AI is precisely that it lets computers begin to usurp the decision-making role. If the only thing the AI did was collate information and present it to a human, nobody would be worried about it.
        • It's the opposite. *You* tell the computer what to look at (by feeding it data) and then it calculates an answer (=makes a decision) based on what you feed it. Just like any computer ever. AI is not magic. What is new about AI is precisely that it lets computers begin to usurp the decision-making role. If the only thing the AI did was collate information and present it to a human, nobody would be worried about it.

          That's my point about "where to look." You have given it data, it uses the data to make a recommendation, which you can then use to dig deeper into the solution to determine whether it works. Ideally, it would tell you what logic it used to draw the conclusion so you can verify its correctness. One of the problems with using AI for, say, stock analysis is that if enough people use the same logic it could become a self-fulfilling prophecy, as everyone buys or sells X at the same time, driving the price in the predicted direction.

    • If a bot at a hedge fund gets caught profiting by insider information, what does the SEC do, pull its plug?

      I suppose this also means that upcoming seasons of “Billions” won’t be nearly as good.

    • Right now most of the research documents are produced in India by people who are effectively little better than the AI you speak of; there is little insight and mostly just rote data. (This isn't because of the people doing it being Indian, but because the institutions don't value the data the same way they once did.)

      Basically, every chance they get, big money goes for easier ways to profit. HFT was one of those things, but there is more.

  • p-hacking (Score:5, Insightful)

    by VeryFluffyBunny ( 5037285 ) on Friday October 27, 2017 @11:54PM (#55448823)
    Ah, it looks like the financial sector is going to explore the limitations of automated p-hacking. With p-hacking, the larger the data set, the greater the probability of identifying background noise as significant patterns. Without specific, clearly defined questions you want to answer, you have no idea what kinds of data hold the answers you're looking for, so you end up answering irrelevant questions while thinking the answers are somehow significant.
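    You can see the effect with nothing but a random-number generator (a toy sketch; the day count and candidate count are arbitrary): test enough meaningless "signals" against random "returns" and one of them will look predictive in-sample, purely by chance.

    ```python
    import random

    random.seed(0)

    # 50 days of pure-noise "returns" -- there is nothing to find.
    n_days = 50
    returns = [random.gauss(0, 1) for _ in range(n_days)]

    def correlation(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Screen 1000 equally meaningless candidate "signals" and keep the best.
    best = 0.0
    for _ in range(1000):
        signal = [random.gauss(0, 1) for _ in range(n_days)]
        best = max(best, abs(correlation(signal, returns)))

    print(round(best, 2))  # the "winning" signal correlates strongly -- with noise
    ```

    That best-of-1000 correlation looks like a real discovery if you forget how many candidates were screened, which is exactly the trap a naive AI-driven research pipeline walks into.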
    • "With p-hacking, the larger the data set, the greater the probability of identifying background noise as significant patterns."

      The root cause is, once again, "correlation does not mean causation". And stock traders are very prone to that fallacy: "I got five quarters in a row of profits for my customers, therefore you should trade with me". These systems are very good at finding patterns *in the past*; does that mean those patterns will reproduce *in the future*? Heck, no, and the larger the data set, the more

      • There are correlations that are significant, though. I remember when one of the first social-network mining bots was used to predict stock price moves. We had seen it here on /. for several months at least -- a plug for Corel led to a stock bump. It was great; you could easily double your money in a week.

        (The real meaning though of those findings was that we were in the proverbial bubble where the bellboy is giving stock tips.)

    • Ah, it looks like the financial sector are going to explore the limitations of automated p-hacking.

      This is a common and well-known problem in machine learning, where it goes by the name "overfitting". There are many remedies; the most important is to separate your data into "training data" and "testing (or validation) data".
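      A minimal sketch of why the split matters (toy data, with a deliberately dumb nearest-neighbour "model" that just memorizes): the model is perfect on the data it has seen and no better than a coin flip on held-out data, which only the validation set reveals.

      ```python
      import random

      random.seed(1)

      def make_data(n):
          """n points of pure noise: 3 random features, a random 0/1 label."""
          return [([random.gauss(0, 1) for _ in range(3)], random.choice([0, 1]))
                  for _ in range(n)]

      train, test = make_data(200), make_data(200)

      def predict(x, memory):
          # 1-nearest-neighbour: copy the label of the closest memorized point.
          return min(memory, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

      def accuracy(data, memory):
          return sum(predict(x, memory) == y for x, y in data) / len(data)

      print(accuracy(train, train))           # 1.0 -- it has memorized every training point
      print(round(accuracy(test, train), 2))  # roughly 0.5 -- chance on unseen data
      ```

      The in-sample score alone would look like a perfect strategy; only scoring against data the model never saw exposes that it learned nothing.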

    • I never knew I was p-hacking.

      I thought it was just my imagination seeing lions, mermaids, dogs and cats in the clouds. Once I even spotted Saraswati, Goddess of Learning and Knowledge in the morning toast and I aced the test on that day too. Coincidence? I think not. (BTW I am not Descartes, so I would not vanish)

    • by HiThere ( 15173 )

      While you can't trust those results, they give you good things to look at carefully, and eliminate a lot of things that wouldn't be useful.

      This presumes that while you get a lot of false positives, you don't get many false negatives. You can usually set things up that way. When you do, the results are unreliable, but still useful. You just need to properly understand them.

  • by account_deleted ( 4530225 ) on Saturday October 28, 2017 @02:19AM (#55449009)
    Comment removed based on user account deletion
    • Comment removed based on user account deletion
    • I don't know how the big houses do it, but one of the common tools is multi-dimensional spreadsheets which can let you easily run sensitivity analysis on various permutations. I wanted to get one of the programs to simplify a task I was doing, but they were simply too expensive given the core market they served. Ended up cobbling together a perl script to manage/mangle the data.

      The problem is most of the research today is geared towards "slightly above/below average" companies. The exceptional companies

  • It would be more interesting to hear exactly what analyses they are, since it might give some fun ideas to the /. readership. But since it is a bit of a puff piece with a leading photo of a woman taking a selfie with a toy robot what do you expect? Guessing they have a clear training manual for their research analysts and they are able to automate 80% of that, then increase hiring of people who understand machine learning to improve that?

  • Using AI to recognize patterns is fine if you already know there is a pattern, like the visual cues that guide a driver. Aside from obvious correlations between macro-indicators, it has still not been established that there are actual patterns in the price movement of stocks, indexes, or even currencies. Setting an AI to discover a pattern there might be the same thing as asking it to prove there is a god.
  • ... that it's specialized upper-end jobs that will be among the first to be replaced by AI/ML. Like, for example, ours. A buddy I did webdev with left the industry a few years ago and did an apprenticeship as a plumber for this exact reason. He can pick his employer.

    Software development is being streamlined as we speak, and most of the work left to be done is mucking about with badly and very badly designed legacy systems and trying to migrate the whole shebang to something resembling a feasible concept runnin

  • by JOstrow ( 730908 )

    Please, journalists, stop calling AI robots and robots AI.
