
“And what they’re doing, quite often, is panicking — thoughtfully, while holding eye contact with their own expressionless face in the Zoom window.”


Once upon a time (last year), some really smart people at a big university (Stanford) built a robot (model) that did better than almost all the grown-ups who try to make money by picking stocks. It beat 93% of them over 30 years. The robot was very pleased with itself. The newspapers shouted: “The Machines Have Won!”

And yes, that robot was very clever indeed. It looked at loads of numbers (company earnings, reports, trends) and made smart guesses about which businesses might make more money.

So far, so good. But just like animals at the zoo, not all trades behave. Some are calm, predictable and cuddly. Others throw things and bite.

So what does that mean for trading robots? Perhaps the question we need to ask is this: where are they actually useful, and where do we still need a human who can raise an eyebrow and say, “Pass the wet-wipes. Whatever that monkey just threw at us does not smell good!”

Robots Like Things That Stay the Same

The robot in the Stanford story focused on picking stocks. The kind where you look at numbers, patterns and maybe some nice charts, then decide if something is a “good deal.”

This is robot heaven. Predictable data, repeatable signals and no sudden global events to upset the apple cart. If it looks cheap, smells like growth and hasn’t spooked the regulators, the model says “yes, please”.

That’s why firms like Two Sigma, Citadel and a few others are quietly hoovering up people who can build and deploy these models. Machine learning researchers, data engineers and NLP folk are in high demand. If your idea of a good time is building the pipeline that teaches a machine to read an earnings call and spot the warning signs before the analysts do, congratulations, the robots want you on their team!
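For the curious, the “read an earnings call and spot the warning signs” pipeline can be sketched in a few lines. This is a deliberately silly toy, not anything a real fund ships: the phrase list, the scoring and the sample transcript are all made up for illustration.

```python
# Toy "warning sign" detector for earnings-call text.
# The phrase list and scoring are illustrative assumptions only.

WARNING_PHRASES = [
    "headwinds",
    "one-off charge",
    "revised guidance",
    "challenging environment",
    "restructuring",
]

def warning_score(transcript: str) -> int:
    """Count how many known warning phrases appear in a transcript."""
    text = transcript.lower()
    return sum(text.count(phrase) for phrase in WARNING_PHRASES)

call = (
    "We saw solid topline growth, despite some headwinds in Europe. "
    "Given the challenging environment, we have revised guidance for Q4."
)
print(warning_score(call))  # 3: headwinds, challenging environment, revised guidance
```

The real versions use transformer models rather than a phrase list, of course, which is exactly why the people who can build them are being hoovered up.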


But Not Everything Is Neat and Tidy

Now imagine trying to tell a robot why hedge funds are going big on Japan and India but backing away from China. That’s not a tidy spreadsheet problem, that’s a cocktail of interest rates, geopolitics and vibes.

You can try to model that. People are trying. But it’s still the sort of trade where someone gets a hunch on a Thursday afternoon, mutters something about “capital outflows,” draws three arrows on a whiteboard and suddenly reallocates half a billion dollars.

It’s not Excel. It’s closer to rune stones. Or possibly chicken entrails. Maybe an ancient ritual where someone reads central bank press releases under a blood moon.

This is where human judgment still carries weight. And it’s not just a theoretical point. I spoke to the Head of Ops at a discretionary hedge fund this week, and he said their researchers are all quietly worried the machines are coming for them. Not later. Now. And they might be right — in discretionary shops, there’s growing pressure to automate more of the research process, even while key decisions still rest with humans. The tension is real: automate the inputs, but preserve the judgment. And sometimes, that judgment means ignoring every model, every signal and every spreadsheet — and doing the mad thing that no robot would. (Except maybe HAL 9000).


Mr Risky, Mr Wrong, and Mr Right-Eventually

Some trades don’t make sense on a spreadsheet. Some trades would make even the bravest risk committee sweat through their double-cuffs. But sometimes, those trades work. Gloriously.

Let me introduce you to a few of the more colourful characters:

  • Mr Soros bet that the British pound was too strong and wouldn’t stay in the European money club (ERM). Everyone said he was being naughty. He made a billion pounds in a day.
  • Mr Burry read lots of boring documents about American mortgages, spotted a problem, and bet against the housing market. People laughed. Then the banks stopped laughing. (Yes, the man from The Big Short.)
  • Mr Jones saw trouble brewing in the charts and went all-in before the 1987 crash (Paul Tudor Jones).

Would an AI have made those bets? Probably not. They weren’t based on tidy signals or backtested models. They sounded mad. Or at least mad-adjacent. And yet, they worked.

These are the trades that don’t show up in models. They show up in books, documentaries and the smoking terrace at 5 Hertford Street.

So, What About Jobs?

AI isn’t replacing everyone. It’s replacing tasks. The boring ones. The repetitive ones. The ones that can be turned into code.

That’s creating new demand for:

  • People who can build the models (ML engineers and quant researchers, the kind firms like Two Sigma, G-Research and Citadel have been expanding their headcount for, often to support new research platforms or model deployment at scale)
  • People who can feed them (data specialists, i.e. data engineers and curators, the kind being snapped up by firms like Hudson River Trading and IMC to manage alternative data pipelines and infrastructure)
  • People who can help them read human stuff like text (NLP researchers and tooling engineers, the kind Jane Street, Citadel and Point72 have all been hiring to work on everything from summarising earnings calls to parsing legal disclosures and news feeds)

Meanwhile, if your job mostly involves summarising earnings reports with phrases like “solid topline growth,” you might want to start learning Python.

But let’s be clear: there’s still plenty of space for humans, especially the ones who know when the model is missing something, or when the world is turning in a way the data hasn’t caught up with yet.

The Future: Part Robot, Part Human, Fully Hired

AI is staying. So are people. Most firms are blending both: models that spot patterns and humans who spot problems. Because even in 2025, with all the LLMs and quant stacks and fancy dashboards, sometimes the best trades still start with a raised eyebrow and a scribbled note on the back of a Nobu delivery bag.

And that, dear reader, is why we don’t let the robot run the whole show. Especially not the social committee.