Hacker News: md2020's comments

> I'm trying to understand what the criticism is here

You're correct in your understanding of prediction markets with respect to traders using insider information. There are a couple things going on here. One is the subtext from most news media now that Technology Bad. New technologies are treated as guilty until proven innocent, because that is a more engaging narrative for readers. So in this case, those covering this stuff immediately latch onto the rich get richer, insider trading viewpoint, and that gets reported without any analysis of why that might actually be desirable.

Second, prediction markets, in trying to become broadly accessible to "normal" people and desiring liquidity, need a marketing strategy that is understandable. They can't put out a Robin Hanson article as marketing material. So they market by appealing to something people do already understand, which is gambling. The public has this idea now of prediction markets as a way to make money, not as a tool for learning information. So the default perspective on insider trading is now one of unfairness: somebody used their privileged position to make money. The correct perspective is, in fact, that prediction markets are providing users with value by eliciting information from those insiders, information that the public would not otherwise have. The latter perspective is mostly foreign to degenerate gamblers, and the marketing campaigns of Kalshi and Polymarket aren't helping.


I don't think it's so easy to get true information out of all the noise in the markets, and in any case, I don't see how this helps with the fact that corruption is bad. So what if I learn that a country will be wrongfully invaded? Can I have someone impeached for it?


This comment violates several HN guidelines. Take your anger elsewhere.


So does the one it's in reply to. But you skipped that one to complain about this one.

It's absurd that anyone could pretend to believe that more people having guns is a "deterrent," mild or otherwise, to lethal use of force. In every interview about why American cops shoot and kill orders of magnitude more people than those in most civilized countries, Americans always argue that it's because their citizenry is armed, so the police need to be prepared to make life-or-death decisions in a split second at every moment on the job.


Nobody suggested that more guns were a solution to anything.

Guns have been more accessible and readily available for the entire history of the United States. School shootings are a relatively new development.

Access to and availability of guns have been increasingly restricted over that time, with virtually no impact.

Perhaps the desperation and miserable mental health of our population are bigger factors?

Every country you would point to likely has better access to healthcare, better education, and a much better social safety net than the US, as well as law enforcement and prison systems less focused on retribution and punishment and more focused on education and rehabilitation. Those countries also see less recidivism and lower violent crime rates in general.

All available evidence indicates we should be spending much less time and energy focusing on guns and far more focusing on the failures and motivations of our government.


> They are, at best, a mild deterrent against indiscriminate use of lethal force.

Is a quote from a sibling comment to the one I replied to.

It seems that, at the very least, an extraordinarily loud minority of Americans believe that arming the general population should somehow result in fewer gun deaths. On the big social media platforms, the larger news networks, and right here on HN, I am always surprised that such an obviously incorrect idea can be so pervasive.

> All available evidence indicates we should be spending much less time and energy focusing on guns and far more focusing on the failures and motivations of our government.

No, it doesn't. You can't just assert that because it's what you think. Societal issues do play a part, but just as you need oxygen and fuel for a fire, removing either one stops the flames. So if changing the individual minds and morals of seemingly half your country seems easier than enacting legislation restricting access to guns... well I don't think you should hold your breath.


You're misquoting me. That was in the context of a hostile government, not guns in general for civilian-against-civilian "self-defense".

Also, the "at best" and "mild" are quite important there. I believe that armed civilians might prevent someone like the National Guard from firing on groups of protestors when it gets hairy, out of fear of being shot in response. They aren't suicidal: you don't escalate when you are in a disadvantaged position!


> they are just statistical machine outputing whatever they training set as most probable.

How is this sentiment still the top comment on an article about AI on HN in 2026? It's not true with today's models. They undergo vast amounts of reinforcement learning optimizing an objective that is NOT just predict the most likely next token given the training corpus. I would say even without the RL the "predict the next token" objective doesn't preclude thinking and reasoning, but that's a separate discussion. Generative sequence modeling learns to (approximately) model the process that produced the sequence. When you consider that text sequences are produced by human minds, which most would consider to be thinking and reasoning, well...
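To make the distinction concrete, here's a toy numeric sketch (all distributions and rewards invented for illustration): a next-token cross-entropy objective is minimized by matching the corpus distribution, while a REINFORCE-style RL objective is maximized by whatever the reward signal favors, independent of corpus frequencies.

```python
import math

# Hypothetical 3-token vocabulary. `probs` is the model's current
# distribution, `corpus` the empirical next-token frequencies, and
# `reward` a reward model's preference. All numbers are made up.
probs = {"a": 0.7, "b": 0.2, "c": 0.1}
corpus = {"a": 0.6, "b": 0.3, "c": 0.1}
reward = {"a": 0.0, "b": 1.0, "c": 0.0}

# Pretraining objective: cross-entropy against the corpus distribution.
# It is minimized exactly when probs == corpus.
ce = -sum(corpus[t] * math.log(probs[t]) for t in probs)

# RL objective: expected reward under the model's distribution. It is
# maximized by putting all mass on "b", no matter how rare "b" is in
# the corpus.
expected_reward = sum(probs[t] * reward[t] for t in probs)

print(round(ce, 3))        # 0.927
print(expected_reward)     # 0.2
```

The two objectives pull the model toward different distributions, which is why "just predicts the most probable training-set continuation" stops being an accurate description once RL fine-tuning is in the mix.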


On the other hand, not having kids is also a lot like a religion now. I want kids, and I have many friends who do not; they're nice people, and there's no issue. But there is a certain subset of the population that spends its entire life telling me I'm an awful person making a huge mistake for wanting children, and they are utterly insufferable and not worth associating with.

This is the dominant perspective on the social media that younger generations spend their time on. I’d argue that as a person in their mid 20s today, actually wanting kids is the bizarre position. I often feel alienated for openly stating it among good friends of mine.


Clearly Rich Sutton is a giant in AI for his contributions in RL, but his recent brief talk "AI Succession" (https://www.youtube.com/watch?v=NgHFMolXs3U) made me worry a bit about the sort of perspective he has on what the "good" outcome here looks like. I say this as someone who is generally optimistic about the promise of AGI. I have no love for the machines as a "species", and by Rich's definition here, yeah, I am "specist" in favor of humans. I think we should use technology for our own benefit, and that it's not inevitable that machines "replace" us.

I also think his framing of the counterarguments is not charitable. The serious AI-risk arguments do not argue that a super-intelligent AI will necessarily be evil. They only argue that its motivations will be unaligned with ours, that it will be more competent in achieving goals than us, and that this will be bad for humans as a side effect. I think a good comparison is humans building a highway that incidentally crushes an ant colony. They didn't set out on an evil mission to destroy ants because they hate them, it just happened as a side effect of something the humans wanted. No evil required.


This is a terribly ignorant take and disappointing to see coming from a science fiction author.

> AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.

This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident. "Machine learning doesn't learn" is a crazy take, since "backprop + gradient descent does learn" is close to the most well-supported thing you can say about the past few years of algorithmic progress.

> sophisticated autocomplete

Aside from this being an incredibly reductive sneer that clearly isn't true if you've honestly tried using ChatGPT, etc., his citation for this is a podcast, which I'm positive Doctorow would not accept as sufficient for basically any other technical topic.

I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines. The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?


> This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident.

Without concrete definitions your assertions are just as correct as theirs. But they have the evidence of absurd tech-bro hype of past technologies to draw on.

> I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines.

"I love Ted Chiang's stories because they jive with my preconceived notions, but I like him less when he says things that I don't believe"

> The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?

Plenty. They talked about flying cars and living on the moon. Instead we got stagnant wages and a social-media Skinner box. All of those wonderfully positive predictions didn't pan out.


Can you elaborate on why the increasing accuracy of these measurements indicates that the universe is metastable?


As the measurements of the masses of the Higgs boson and top quark become more accurate, either through better data or through better models and methods to interpret the data, the ovals in the images [0] [1] shown in the margin of the article become narrower. So far that narrowing tends to reduce the ovals' overlap with the green area indicating a stable universe. But it has not ruled out a stable universe, afaik.

For the most part, continued measurements have increased the accuracy of the Higgs boson mass, which is the horizontal axis of the oval, but there have also been improvements in the models and methods that have helped narrow the top quark mass as well.

While there might be better sources now, when I was following it more often, the authors of this paper [2] were the best source of tracking at least the top quark as that seemed to be a special topic of research for them.

[0] https://en.wikipedia.org/wiki/File:Higgs-Mass-MetaStability....

[1] https://en.wikipedia.org/wiki/File:Higgs_FalseVacuum2018.jpg

[2] https://arxiv.org/abs/1207.0980


> malinformation

Please do not normalize the use of a term that is being pushed by the government to justify censoring information they don’t like [1]. I understand that there is such a thing as “true information presented out of context in a manner meant to mislead”, and I even grant that the comment might be guilty of it. But “malinformation” and “misinformation” are just becoming labels for “information that is inconvenient”, and I don’t think normal internet commenters adopting their use is doing us any good.

[1] https://theintercept.com/2022/10/31/social-media-disinformat...


I do want to point out that the word was more frequently used in 2004[0], the furthest back Google Trends will let me go. Our usage today is even substantially lower than that in 2009.

I'm not saying that the government isn't trying to manipulate the topics and use the same words too. After all, that's the entire thing we're discussing. But that doesn't mean that they get to control words either. That's an unwinnable battle because the adversary has such an easy path to victory. All you need to do is take language people are using and incorporate it into your own, but with slightly different (but reasonable) meaning. You're actively engaging in their strategy. Words are defined by how they are used and understood by the public. So if you understand my intention and that I am not using it in this same manipulative manner, then we're pushing back and ensuring they do not take control of our language.

[0] https://trends.google.com/trends/explore?date=all&geo=US&q=m...


I'm not a Latin expert, but "malinformation" seems like an etymologically perfect word to describe a "whataboutism".


Or an accusation of one.


This is why it is important to understand the distinction between misinformation and disinformation: the former comes from an idiot, the latter from a manipulator. But those who want to sow discord want us to conflate the two, which is rather easy. I'd say that's pretty common with any word that becomes a hot button itself.


I agree. But, to be clear, I consider any accusation of "whataboutism" I see to be disinformation by default, until proven otherwise from context, because by far most use of it I saw was as a way to shut down good discussion points. I have a short list of such words/phrases; another notable one is "dog whistle", which I have never seen used in good faith on-line.


My default position is misinformation because I think people being dumb is more likely than them attempting to gaslight others. They get the benefit of the doubt until proven otherwise, but it definitely raises suspicion.

The same goes for dog whistling. It is easy to see dog whistling everywhere because that's its entire purpose: to hide within normalized speech. Covert speech is not covert if the only ones using that speech are manipulators. But that's why fighting it is so hard, because you don't know who's a useful idiot and who's a manipulator. But it is clear that the manipulators hide in a sea of useful idiots and parrots.

So I think good faith is trying to extract the signal from the noise and to differentiate the two rather than assuming maliciousness.


> It is easy to see dog whistling everywhere because that's its entire purpose: to hide within normalized speech.

It is easy to see dog whistling everywhere because that's the phrase's entire purpose: it's a weapon. It's an accusation that's cheap to make, and near-impossible to disprove.

Sure, legitimate cases of dog whistling probably exist. It's impossible to tell for sure, because the "test" has near 100% false-positive rate. Or, put another way, talking about dog whistling being a thing is itself malinformation - technically the phenomenon exists, but bringing attention to it is between confused and malicious.


Whatever happened to "wrong"? As in, "that's wrong". Ideally followed by some reasoning.

All of these fancier words like whataboutism or misinformation are just attempts to assert authority via the word. They don't convey additional meaning.


Misinformation and disinformation are just nuanced versions of wrong. The former means unintentional and the latter means intentional. The distinctions are useful here because the latter is a claim that the person spouting the information is corrupt and knowingly trying to propagate lies. This is more universally considered bad. People are wrong all the time, but most people aren't intentionally spreading lies.

Malinformation doesn't mean something is wrong but that it is misleading. This distinction is important because if you claim malinformation you aren't claiming that the information itself is wrong, but that the way it is presented is. A classic example of this is white nationalists using the phrase "despite being 13% of the population black people <insert something about crime here>". While the statistics may be accurate, they do not incorporate the complexities involved, which end up decoupling race from the statistics. To put it more accurately, this is an aggregation error (more specifically ascertainment bias, a form of collider bias on conditional probabilities). Without the disaggregation, it is natural to assume that the conditioning variable is ethnicity (the intent of the malinformation), but in reality this is misleading because the way the information was presented is not entirely accurate.
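A minimal numeric sketch of the aggregation error described above (all numbers entirely made up): within each stratum of a confounder (here, a hypothetical "neighborhood poverty" variable), the two groups have identical rates, yet the pooled rates differ substantially because the groups are distributed differently across strata. Quoting only the pooled rates is accurate-but-misleading in exactly the sense described.

```python
# (population, events) per stratum for each group; invented numbers.
group_a = {"low_poverty": (900, 9), "high_poverty": (100, 5)}
group_b = {"low_poverty": (100, 1), "high_poverty": (900, 45)}

def pooled_rate(group):
    """Rate you get by ignoring the confounder and pooling everything."""
    pop = sum(n for n, _ in group.values())
    events = sum(e for _, e in group.values())
    return events / pop

# Within each stratum the rates are identical: 1% in low-poverty
# areas, 5% in high-poverty areas, for both groups.
for stratum in ("low_poverty", "high_poverty"):
    (na, ea), (nb, eb) = group_a[stratum], group_b[stratum]
    assert ea / na == eb / nb

print(pooled_rate(group_a))  # 14/1000 = 0.014
print(pooled_rate(group_b))  # 46/1000 = 0.046
```

The pooled numbers are arithmetically correct, but presenting them alone invites the reader to condition on group membership when the real conditioning variable is the confounder.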

Malinformation is specifically difficult to defend against because one can google to confirm accuracy of claims. But in reality it takes expertise to dismantle the claims. It is preying upon people's naivety and inability to process information that they are not intimately familiar with (which is all of us, just in different subjects). So it is important to recognize that this form of manipulation exists. The defense is to maintain skepticism and consult experts. Looking for consensus among experts specifically.

These words do in fact convey a significant amount of additional meaning and the distinctions are important. Especially if we're trying to encourage more meaningful and nuanced discussions.


Just labeling true things that you don't like as "malinformation" does not encourage more meaningful and nuanced discussions.


Who said I was encouraging that? I'm saying you have to use nuance to explain how something is malinformation in the first place. I even did this in my example. We need good faith discussions and I do think you need to step up your game here a bit.


"Misinformation" might (sometimes) be a synonym for "wrong".... but "whataboutism", "disinformation", "malinformation" are not. They mean different things.


Disinformation actually does mean wrong, but it is also an accusation of maliciousness. That's the distinguishing feature between misinformation and disinformation: intent and prior knowledge. With the other two I agree, though. Whataboutism is a non-sequitur deflection; malinformation is cherry-picking data with intent to deceive.


Yeah the same author has a reply to that since the per-capita emissions point is brought up a lot: https://noahpinion.substack.com/p/why-per-capita-emissions-i...


You need to realize that mobs are a real thing and in this case, they ruined a woman’s career for completely constructed, ridiculous reasons. This kind of behavior is disgusting. I don’t know why you describe this article as “whining”. This person had nearly a decade of her life’s work torn apart by people who only see the world through the lens of power and oppression, and whose only goals are to endlessly virtue signal to one another while creating nothing of value themselves.


Her career may yet recover. She is young and talented, and the mob has a short attention span.

