It may not hallucinate yours to zero, or mine, or Frank's, or Mary's, but at some point it will do it to someone. That's the issue I have with these approaches *at scale*.

I'm sure one day we'll get 100.00% reliable outputs such that autonomous agents can do this, but that's not today.
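
One mitigation, just as a sketch (nothing from the article; the function, field names, and tolerance below are made up for illustration): never let the model be the source of truth for a number, and reconcile anything it reports against the raw API value before acting on it.

    # Sketch only: cross-check a model-reported balance against the API
    # before acting on it. All names and the tolerance are assumptions.
    def reconcile(model_reported: float, api_value: float, tol: float = 0.01) -> float:
        """Return the trusted value; refuse to proceed if the model drifted."""
        if abs(model_reported - api_value) > tol:
            raise ValueError(f"hallucination suspected: {model_reported} vs {api_value}")
        return api_value   # the API, not the model, is the source of truth

    reconcile(model_reported=10_432.10, api_value=10_432.10)   # fine
    # reconcile(model_reported=0.0, api_value=10_432.10)       # would raise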



The way this is done at scale is legalized front-running.

You buy order flow from "free" retail brokers like Robinhood (as a market maker) and then literally front-run ALL trades. You then tell everybody you're giving them the best price (also the only price), lol.

In other words: legalized crime.


> legalized crime

Hmm? Either it's legal, and therefore not a crime, or it's illegal, and therefore a crime. What exactly do you mean by "legalized crime"? Things considered legal but unethical for whatever reason?


Here are two oft-cited papers on PFOF so people can think about this. The general conclusion is that there is more potential for actions detrimental to retail traders in options markets:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4609895

https://wifpr.wharton.upenn.edu/wp-content/uploads/2023/10/P...


You need microsecond latency for HFT; that's certainly not enough time to consult an LLM.
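
For a rough sense of scale (the ~1 s LLM figure is an assumption, not a benchmark):

    # Back-of-envelope only; the LLM round-trip time is an assumed figure.
    hft_tick_to_trade_s = 1e-6   # ~1 microsecond tick-to-trade budget
    llm_round_trip_s = 1.0       # assumed ~1 s for a hosted model call
    print(f"~{llm_round_trip_s / hft_tick_to_trade_s:,.0f}x too slow")   # ~1,000,000x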


What does this mean? Sorry, not up on stock lingo. Is this effectively like having the results of a race a minute early or something?


Yea that's a decent analogy. You see the incoming trades before they are actually executed, and you can react accordingly.


If somebody wants to buy an orange for $2 and I sell them one for $1.98, that's "legalized crime"?

I don't think you understand how this works. It's not about front-running them; it's that they have no clue what they're doing, so it's extremely unlikely their trade will be profitable, and thus you gain by being on the other side.

That $0.02 of price improvement is the price you pay for the privilege of trading against clueless people.
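
In numbers (the $2.00 and $1.98 are from this example; the fair value is my own assumed figure):

    # Toy arithmetic for the argument above; fair_value is an assumption.
    buyer_limit = 2.00    # what the buyer was willing to pay
    fill_price = 1.98     # what the market maker fills the order at
    fair_value = 1.95     # assumed price the market maker can hedge at

    price_improvement = buyer_limit - fill_price   # goes back to the buyer
    market_maker_edge = fill_price - fair_value    # kept for taking the other side
    print(f"${price_improvement:.2f} to the buyer, ${market_maker_edge:.2f} to the market maker")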


I think you don't understand how this works.

You go to the market (broker) and say you want to buy an orange for $2. You choose the "free" market (Robinhood), which doesn't have any transaction costs for orange buying. This is your own choice. There are tons of orange sellers at the market selling oranges at different prices, but it doesn't matter, because everybody has to go through the same guy (Citadel, which pays for this privilege). Everybody has to go through Citadel, because everybody wants to trade for "free".

Citadel gets your $2 order for an orange before anybody else at the market. They find someone who is selling an orange for $1.80, buy it from them, sell it to you for $2, and pocket the difference.

In reality, it's even worse than that: they will sell you the $2 orange instantly without owning or buying any orange, and figure out how to get the orange later. Or never. (Since stocks are digital, you can pretend to deliver a stock without actually delivering it much more easily than you can pretend to deliver an orange.)
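
The claim above, as arithmetic (both prices are the ones from the analogy):

    # The orange analogy in a few lines; prices are from the comment above.
    your_limit = 2.00          # what you told the "free" broker you'd pay
    best_ask_at_market = 1.80  # what some seller is actually asking
    middleman_take = your_limit - best_ask_at_market
    print(f"the middleman pockets ${middleman_take:.2f} per orange")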


We have those today; it's just not called AI, it's called things like idempotent APIs.
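
For reference, the pattern that comment is pointing at looks roughly like this (every name here is hypothetical, not any particular broker's API):

    # Sketch of the idempotency-key pattern; all names are hypothetical.
    _seen: dict[str, str] = {}   # idempotency_key -> stored result

    def place_order(idempotency_key: str, ticker: str, qty: int) -> str:
        """Safe to retry: the same key never places the order twice."""
        if idempotency_key in _seen:
            return _seen[idempotency_key]      # replay the stored result
        result = f"accepted {qty} x {ticker}"
        _seen[idempotency_key] = result
        return result

    first = place_order("key-123", "VTI", 1)
    retry = place_order("key-123", "VTI", 1)   # retried call, no duplicate order
    assert first == retry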


Yeah - feels like lunacy to me. I love the way that they get to the end of the article and then write that it doesn't really work!


I remember that John McCarthy used to condemn AI research as being a series of "look mom, no hands" demos, but now we've got to "look mom, I face-planted". This work would be valid if it did things like:

- Describe an intelligent effect rather than "invoked an API" - so something like the system evaluating the portfolio and the market and deciding what should be sold or offering buying opportunities based on some inference (this is not a great example for many reasons).

- Measure and report the system's performance. The write-up says that the LLM fails, ok... how often? (A minimal measurement sketch follows this list.)

- Describe the failure cases, provide some theory as to why some things succeeded and others didn't.

- Provide a way forward. What's next?
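
For the measurement point specifically, even something this small would be more informative than nothing (a sketch; run_agent_once stands in for the real agent and is simulated here):

    # Minimal harness sketch; run_agent_once stands in for the real agent,
    # simulated here with an assumed 20% failure rate.
    import random

    def run_agent_once() -> bool:
        """True if the agent invoked the right API with valid arguments."""
        return random.random() > 0.2

    N = 1_000
    failures = sum(1 for _ in range(N) if not run_agent_once())
    print(f"failure rate: {failures / N:.1%} over {N} trials")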

Without doing these things, this work isn't helpful and is part of the AI/Agent/MCP hype. Basically it's Bored Apes for AI.



