
And those suggestions would be very in-line with the original purpose of OpenAI. A purpose they are now actively hindering in the name of profit.


I think what most of the people here are missing is how big, how paranoid, and how influential the "AI alignment" movement is. To you it looks like they're being overly careful and paranoid, perhaps as an excuse to set up a monopoly silo to extract money. But a lot of the people the OpenAI researchers work closely with -- people deep in the "AI alignment" community -- are telling them that they're being wantonly reckless, helping set the human race on a path for certain doom. There are people in that community -- people not working for a for-profit company -- who would, if they could, stop all AI research of any kind until we have rock-solid techniques to prevent an AI apocalypse. Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

So suppose you're an AI researcher at OpenAI. A large number of people you know and respect are telling you that you're driving the human race right towards a cliff. You don't 100% agree with their assessment, but it would be foolish to completely ignore them, wouldn't it? Obviously that's going to affect your opinions about things.

From everything I've heard and seen, the actual researchers at OpenAI are trying to take seriously the risk that a super-intelligent AI might destroy the human race.

Here's one example: GPT-4 was actually done back in August of last year. If their goal was to maximize profit, the obvious thing to do would be to release API access to it as soon as possible. But instead, they purposely delayed release for eight months, specifically in order to "cool down" the "arms race": to avoid introducing FOMO in other labs which would lead them to be less careful.

Go lurk on alignmentforum.org for a while, and you'll have a different perspective on OpenAI's decisions.


> Here's one example: GPT-4 was actually done back in August of last year. If their goal was to maximize profit, the obvious thing to do would be to release API access to it as soon as possible.

They did do that: that's how Reid Hoffman got early access to write his book, that's how Microsoft got access to start building Bing/ChatGPT-4 for a cool $10 billion, and that's how countless others got early access. They got the money, got the marketing, and got a synchronized deployment of multiple use cases by a select crew of companies, and then they get to recite the corpspeak of 'we care, we didn't release in August!'. This is straight out of the Apple iOS SDK playbook: you make the API changes and release privately, so that at the announcement you have a parade of third-party implementations to prove it's viable.


Besides, wasn't releasing GPT-3 supposed to have caused major harm to society? That's why they held off for so long. I'm still waiting for evidence of that harm (mass fake news, Google being ruined by even more low-ranking spam sites, etc.).

It must be nice thinking that a small group withholding the keys to its R&D (for a short while, until other R&D groups catch up) will somehow help the problem. Do these few months to a year really provide much value in finding ways to stop the "AI apocalypse"? What real work are they doing to prevent it in those few months? More philosophizing and high-level analysis?

It might work for messaging/marketing that they are being "careful" but I'm not convinced this is tangible. Seems as arrogant and naive as most AI ethics stuff I read.


Releasing the biggest version of GPT-2 was supposed to have caused major harm to society.


My memory was that they were worried about what GPT-2 could do if society wasn't ready for it. So they've been trying to make people aware of its capabilities. I think ChatGPT really did an amazing job of that, as I said. Now everyone knows that computers can write low-quality drivel for pennies a paragraph, and as a society we're starting to adjust to that reality.


Just to be clear, you think OpenAI achieved this by holding off on releasing it for a short period? And this achieved mainstream penetration? Or just among programmers?

IMO stuff like deep fakes didn't become real until people started seeing it IRL. They weren't reading FUDy posts on HN or academic papers. Even the niche tech posts on NYT rarely get more than a few hundred thousand people reading them.


I have no idea what would have happened if they'd just dumped GPT-2, weights and all, into the world when it first came out; or even if they'd just gone straight to a paid API. They didn't know either. I think given that nobody knew, their strategy of "try to warn the technorati, give access slowly and make sure nothing bad happens, then make a widely-accessible interface" seems like a reasonably cautious approach.


No one discusses the elephant in the room: who elected these elites to decide what was and wasn't ethical and responsible? Nobody.

So who ends up making the ethical decisions? A group of highly privileged SV types insulated from the very real problems, concerns, and perspectives of the ordinary person.

This is just more of what humans have been doing over millennia: taking power then telling everyone else it was too dangerous for them to wield.


Who elected you to do… whatever it is you do? Probably someone hired you because they thought you’d be good at it. Or maybe you were good enough and cocky enough that you just went and did it, and sold the result.

Either way I’d imagine they’re in their roles for the same reason.


Coders are good at code. They are not good at running society; in fact, they're honestly probably worse than average.


> No one discusses the elephant in the room: who elected these elites to decide what was and wasn't ethical and responsible? Nobody.

I'm a fan of SF. I remember a nice quote from Beggars in Spain, or maybe one of the two subsequent books.

It was something in the spirit of: "Who should control the new technology?" is the wrong question. The correct question is "Who can?".

I think nobody ever truly gets to vote on their technological future or elect it.


I'm not sure this is entirely fair. Nobody elected the people who inspect nuclear powerplants either, but I still assume that they're doing a good job in protecting humanity. Even if they are possibly "highly privileged".


Inspecting a powerplant does not grant you any actual influence beyond powerplant inspection. Defining the ethics of AI will potentially let you influence almost all aspects of our lives.


I assume nuclear powerplants are regulated by some entity.


Unfortunately, in many industries companies are allowed to regulate/inspect themselves, because they supposedly have more experience with it than the government, and it saves on government spending.


yeah, I would hope lol


> This is just more of what humans have been doing over millennia: taking power then telling everyone else it was too dangerous for them to wield.

Case in point (the grand performance is still underway): the banning of TikTok, for "stealing user data".


An AI bent on taking over the world would write posts like this.


> who elected these elites to decide what was and wasn't ethical and responsible? Nobody

First: basically every American literally voted for that by repeatedly saying no to the alternative (the communist party) in every American election.

Second: what exactly and specifically are you suggesting here? Because even outside of capitalism, the alternative to "people deciding they personally don't feel it's safe to release a product they created and worked on and know more about than literally anyone else" sounds like actual literal insanity to me.


Two notes:

1) Less than half of Americans vote in each election (less than 63% if you restrict to the voting-age population, less than 70% if you apply the scummy rules that restrict to the voting-eligible population). And 2) it's a false dichotomy to say that US elections have ever been "whatever we have now VS communism". Maybe you could say socialism was on the ballot all those times Eugene Debs ran for the presidency, but there has never been a communist on the ballot that I'm aware of. Also, it sounds like you would struggle to define communism if pressed.

Regarding your second loose point, the US restricts the sale of a lot of products to the public (eg nuclear weapons, biological weapons, raw milk, copyrighted works you don't hold the copyright to, etc). Personally, I think it's pretty reasonable to restrict the sale of some things, even if the potential sellers know a lot about the product.


> it sounds like you would struggle to define communism if pressed

Having read the Communist Manifesto, I think that description of me is both totally fair and would also apply to Karl Marx.

Darn thing read like an unhinged run-on blog rant.


> basically every American literally voted for that by repeatedly saying no to *the alternative* (the communist party) in every American election.

I had a good laugh playing with this ridiculous framing, thinking about all of the candidates we've said no to.

* "Get out of here Donald Trump! We don't want communism, we want the alternative; Joe Biden!"

* "Hit the bricks secret pamphlet-loving marxists John McCain and Mitt Romney, we'll take the singular alternative: Barack Obama"

* "We love Jimmy Carter, he's the opposite of communism! Nothing like the alternative, an all-star college football player and rabid communist manifesto adherent named Gerald Ford."

* and "We hate Jimmy Carter who must be a communist because of how hard we voted for the movie man."

* "Give us Teddy Roosevelt, he'll smash up all of these monopolistic robber barons, because TR is the alternative to Marxism."

* "FDR, we love you so much we'll elect you to the Presidency four times! We all thought Herbert Hoover was in the pocket of gilded age capitalists, but when Hoover drove us into the great depression, we realized he must have really been a bolshevik! Thank you so much for the massive welfare state expansion, FDR, you truly earned your nickname 'FDR: cure for the common communism'"

Ridiculous.

But in all earnestness, the communist manifesto has never been even remotely relevant to any US election ever. And I mean this with no malice, but if you think lobbing the label "communist" at something you don't like is an argument, vary up your media diet and learn to recognize when you use logical fallacies in arguments, so you can slow down and debug your thought process.


I think you misunderstand; when I said communists, that wasn't a spicy republican hot take about the democrats or whatever, it was literally the communists: https://en.wikipedia.org/wiki/Communist_Party_USA for example, or https://en.wikipedia.org/wiki/Revolutionary_Communist_Party,... or even https://en.wikipedia.org/wiki/Socialist_Alternative_(United_...

I think it's really obvious that Americans don't want those things, so one way to rephrase my original comment could be "the rejection of communism is why rich people get to own and control businesses".


You misunderstand. Saying

> "the rejection of communism is why rich people get to own and control businesses".

is as wrong as saying "the rejection of [pastafarianism | soccer/futbol | anarchocapitalism | mandatory left-handedness | manual transmission cars | etc] is why rich people get to own and control businesses".

Communism and the communist party have never been part of the question. The closest "Communism" came to being part of the political landscape was when a power-hungry alcoholic grifter named Joseph McCarthy won a Senate seat in Wisconsin and then started a paranoid campaign of lobbing unsubstantiated accusations of secret communist allegiance at academics, civil servants, members of the media, and anyone else he wanted. It whipped up a frenzy of anti-communist protestation, not because people knew anything about the economic theory of communism, but because McCarthyism made the term a career-killer. McCarthy started attacking the leadership of the US military, alleging that it was infested with communists, and organized hearings in the Senate that were essentially modern-day witch burnings. From 1946 to 1954, McCarthy whipped up a massive panic while smearing the symbolic label "communism" with so much shit that essentially no one can think clearly about the ideas behind that system of social organization. In 1954, other Senators countered with a campaign to censure McCarthy for his invalid and unwarranted abuse of US military generals, culminating in a vote to condemn him (67 votes to condemn, 22 against). After this humiliation, McCarthy wasn't decent enough to resign his seat and leave voluntarily, but fortunately he died of cirrhosis of the liver about two years later, at the age of 48.

In short, the claim that "the rejection of communism", something no one here spends any time thinking about, "is why rich people get to own and control businesses" is ridiculous and evidence of a broken thought process. I refer you to my prior advice about slowing down and recognizing when you've built your beliefs on logical fallacies.


You're arguing against an imaginary totem instead of what I actually wrote.

> Communism and the communist party have never been part of the question

De facto vs. de jure. Nobody wants it (de facto), but I've demonstrated by linking to the actual parties that de jure it has totally been an option.

Those parties I linked, you could have voted for, but y'all didn't.

Given I've explicitly said I'm not talking about dem/rep culture war nonsense, your over-detailed rant about McCarthy (who, you may be surprised to learn, was sufficiently relevant to your politics that his actions are well known on the other side of the Atlantic and his name is likewise used as a derogatory term) was a waste of your own time.

I am specifically and literally referring to the idea of private ownership of the means of production, which is in the actual literal Manifest der Kommunistischen Partei as written by Karl Marx in 1848.

Which I have in fact read.

Section 2, English translation, has the following passage:

""" The proletariat will use its political supremacy, to wrest, by degrees, all capital from the bourgeoisie, to centralize all instruments of production in the hands of the State, i. e., of the proletariat organized as the ruling class; and to increase the total of productive forces as rapidly as possible. """ - https://en.wikisource.org/wiki/Manifesto_of_the_Communist_Pa...

That, right there, is why not having Communism in the USA means that rich people get to keep their stuff out of public hands.

Now, do you know some other political ideology besides communism that wants to remove control of factories from their owners? Because that would be an additional option beyond the dozen or so communist parties of the USA coming 7th-25th in a two-horse race, and the complaint being made against OpenAI is that it's not letting users do whatever with the tools that OpenAI made and own.

So far as I am aware, none of

> the rejection of [pastafarianism | soccer/futbol | anarchocapitalism | mandatory left-handedness | manual transmission cars | etc]

Have any causal connection to

> why rich people get to own and control businesses

(Maybe anarchocapitalism?) But that's the specific point of communism.

Which you as a nation reject, even though they're a thing you're not, AFAIK, banned from voting for.

I mean, reading what you've written, you and I definitely agree 100% that communists are not politically viable in the USA. I'm just saying that the logical consequences of their non-viability includes "rich people can own and control businesses".


> Those parties I linked, you could have voted for, but y'all didn't.

I've voted in every national election (primary and general) in the past 15 years, and those parties have never been on the ballot. It takes a massive amount of money to run successful campaigns, and political donors make a big difference in choosing which candidates (and therefore which platforms) people get to vote on. This significantly biases the ideological space and makes it invalid to look at the outcomes of elections and draw conclusions about vague ideas that were nowhere near making it onto any major party candidate's platform.

I'm not a proponent of communism; I'm a big fan of property rights, but in any case, my core allegiance is to the scientific method and to seeing reality clearly. You really want to reach a specific conclusion but the claims you're using to build your path to that conclusion are not factual. You should start from a close inspection of the actual processes and behaviors of systems, rather than starting with your conclusion and trying to cobble together a case for your conclusion.


It’s literally a manifesto.


Are you implying that all manifestos read like that?


The half that don't bother to vote forfeit their right to be counted.


Until the alignment movement begins to take seriously the idea that we already have misaligned artificial general intelligences, I think they are best viewed as a convenient foil.

Paperclip maximizers exist; they're made not only of code but of people.


Some of us do! Check out a whitepaper on that exact point:

https://ai.objectives.institute/whitepaper

It’s weird to have been working on a paper for almost a year and have it launch into this environment, but uptake has been good. My hope is that we will continue to see more nuance around different kinds of alignment risks in the near future. There’s a wide spectrum between biased statistical models and paperclip-maximizing overlords, and lots of bad but not existentially catastrophic things for the public to want to keep a pulse on.


Thanks! Looks like good work. I hope this idea continues to get traction:

> In some sense, we’re already living in a world of misaligned optimizers

I understand this is an academic paper given to nuance and understatement, but for any drive-by readers, this is true in an extremely literal sense, with very real consequences.


Precisely! I'm much less concerned about super-intelligent AIs and much more concerned with shortsighted, greedy humans using pretty-good AIs (like those we have now) to squeeze out every ounce of profit from our already misaligned systems, at the expense of everyone else. Not to mention the political implications of being able to convincingly fake voices, photos, and videos.

In this sense, I'm pleased to see OpenAI claim to be taking a more careful stance, but to be honest I think the genie is already out of the bottle.


Reminds me of the parody in Scott Alexander's article "If the media reported on other things like it does EA"

> Some epidemiologists are worrying that a new virus from Wuhan could become a wider catastrophe. Their message is infecting people around the world with fear and xenophobia, spreading faster than any plague. Perhaps they should consider that in some sense, they themselves are the global pandemic.

Like, yeah, people did consider that idea, and the "corporations are the real unaligned AI" idea, and the "capitalism is the real extinction risk" idea, and all the pseudo-clever variations of the concept.

The problem is that "understanding that capitalism has problems" isn't equivalent to "having an actionable plan to solve capitalism".


> The problem is that "understanding that capitalism has problems" isn't equivalent to "having an actionable plan to solve capitalism"

This is a caricature and the same thing could be said of AI x-risk. There are plenty of ideas on how to avoid unwanted effects of economic systems. I don't think it's at all clear that it's a single problem with a single solution. Getting ideas into practice tends to be the tougher challenge.

More broadly, the point is not to say "wow, alignment problems have existed for a long time already!" This is not profound or clever, it's obvious. But there's a big group of people considering a narrow definition of the problem, and playing what could be considered a useful social role.


Okay, but their actions are _not_ stopping AI research, they are doing plenty of AI research internally. They're just hindering competitors and non-profit researchers.

I suppose you could make an argument that nobody can be trusted to do AI research as responsibly as them, so that's why they should not share anything and should hinder others' research... but it kind of looks like plain old nothing-to-see-here profit-oriented decision to me. Which isn't necessarily a scandal, they are a profit-oriented company of course (although they try to take advantage of the misperception that they aren't).

But if they really took those "alignment" concerns seriously, wouldn't they be seriously slowing down or even stopping their own research too?


"Virtue signaling" is overused but highly relevant here. Absent some proof they've done anything at all to prevent an AI takeover (which surely would have to be open source to be valuable too right?).


> I think what most of the people here are missing is how big, how paranoid, and how influential the "AI alignment" movement is. [...] If their goal was to maximize profit, the obvious thing to do would be to release API access to it as soon as possible. But instead, they purposely delayed release for eight months [...] Go lurk on alignmentforum.org for a while, and you'll have a different perspective on OpenAI's decisions

I'm familiar with the "AI safety" movement. For years, many people from that camp have been extremely critical of OpenAI, and they genuinely believe OpenAI is unleashing something truly dangerous to humanity. One person I knew said that while free and open source is usually important, due to the unique dangers of AI it's better to keep AI tech in the hands of a small number of monopolists, similar to nuclear non-proliferation. Meanwhile, OpenAI was trying to promote openness - a terrible idea, in their view.

Thus, it's indeed a perfect explanation of OpenAI's decision to stop keeping its research in the open. Unfortunately, the problem here is that the "for profit" and "AI safety" explanations are not contradictory; they can both be true at the same time. Just as Google began as a promoter of the open Web but gradually started to use its market position for its own gain, the same can happen to OpenAI. "AI Safety" may be the initial motivation, but possibly not for long. After a while, "safety" may be nothing more than an excuse for profit.


> free and open source is usually important, but due to the unique dangers of AI...

Sadly, cherished principles often perish on the horns of "But this time it's different."

> the "for profit" and "AI safety" explanations are not contradictory

Indeed, they can reinforce each other into a runaway feedback loop. Once you buy into an all-encompassing mission of preventing apocalypse, maintaining perspective or proportionality becomes almost impossible. The moral hazard of not pursuing almost all available measures justifies taking $10B of MSFT's money to fund the defense of humanity. Add to this the ego-stoking existential importance of such a "noble cause", the global media attention, and the social elevation in the tight-knit, closed circle of the AI Alignment community, and you've got the perfect drug.

Given the intense forces shaping the worldview of the "AI Safety Noble Warriors", it's reasonable for the rest of us to question their objectivity and suspect claims of "we are keeping this from you for your own good."


>Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

There are always financial incentives. Like it or not, there's a lot of money on the line in the "AI" industry; if someone wants that industry to go a certain way, they definitely have something to gain or lose financially.

In particular, it's obvious to anyone who's been paying attention that the west halting/ceding AI research only means the likes of China will just come out ahead from not bothering to stop (spoiler alert: China cares not for trivialities like ethics and morals).


> There are always financial incentives.

A useful question to ask yourself is, "How would I know if I were wrong? What kind of evidence would convince me that a decision was not driven primarily by financial incentives?"

If your "model" is equally compatible with all possible observations -- if anything that happens actually confirms the model rather than disproving it -- then it's not actually that useful as a model.

> In particular, it's obvious to anyone who's been paying attention that the west halting/ceding AI research only means the likes of China will just come out ahead from not bothering to stop (spoiler alert: China cares not for trivialities like ethics and morals).

Right, and that's why I said "would if they could". From their perspective, saving the human race would require stopping all research, including research done in China.


You sound rational. Do you not agree with the possibility of AI doom soon?


It's possible, but the views of the AI alignment community so far as I can tell are being skewed way too far towards nihilistic doomerism by the influence of Yudkowsky, who apparently believes that we're all gonna die in a few years and there's nothing anyone can do to stop it. [0]

[0] https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...


^ Thanks for that link. The doomerism is brilliant and clear and imaginative and absolutely worth reading and grappling with. I personally have no good response to how we deal with sufficiently advanced AI’s capacity to trick and manipulate us into doing catastrophically bad things.


His argument is essentially "a superintelligence who is better than us at everything and thinks a quadrillion times faster can do whatever it wants and we are powerless to stop it."

Yeah, I could've told you that.

If we really are going to create such an intelligence in the next five years, then we had a good run, so long and thanks for all the fish. But that assumption coupled with the security mindset he brings to the table (viz. "the only unhackable computer is an unplugged one at the bottom of the ocean") is so strong that the big list of doom vectors he comes up with appears much scarier than it actually is.

In the past I've struggled with intrusive thoughts that the government is going to come and murder me. I could have given you reasonable-sounding explanations for why I believed this. Doesn't mean it's gonna happen.


That’s a very strawy straw man you’ve got there.

AIs can do science, write code, impersonate people, and manipulate people. We’ve already got AlphaFold and ChatGPT and Copilot. People are moving full steam ahead with AI software developers and scientists who have access to deploy code and spend money and communicate with humans autonomously.

I don’t think it takes a whole lot of imagination to see these things improving and coming together in a way that an AI agent could feasibly design and execute a plan to develop a bio weapon or deadly nanotech. His points are about how hard it is to prevent that with our current AI training regime.

His analogy, for example, between the human “inclusive fitness reward function” (we evolved with the sole purpose of survival and reproduction) and RLHF-style human feedback for AI is apt, and not obvious. The fact that we “evolved to survive” didn't prevent groups of humans from developing the exact opposite: the capacity to make ourselves extinct.


Mmm. The definition I used is the standard definition of superintelligence, used by EY himself:

> A superintelligence is something that can beat any human, and the entire human civilization, at all the cognitive tasks. [0]

I added the "thinks a quadrillion times faster" part but I think that's fair, if perhaps off by a few orders of magnitude.

All of EY's work has an often unstated assumption that AI will adversarially try to kill us. This is explicitly noted in the replies to the AI ruin post. There are a lot of fiddly details that I'm skipping over, but I stand firm that his argument reduces to "it's functionally impossible to stop something faster and smarter than us that really wants to kill us from killing us once it gains sentience."

[0] https://youtu.be/gA1sNLL6yg4?t=1290


Your reduction doesn’t cover the real risks of orthogonality, instrumental convergence, the zero margins for failure on the first try, intelligence explosions, and the impossibility of training for alignment.


Yeah, those only make it worse, but they only really apply to a bona fide superintelligence of the sort EY describes, and those are not what we have. I don't believe we're particularly close to having one.

If we're doomed, we're doomed, but please don't tell me about it.


Yudkowsky

Man how the fuck does that guy keep popping up in the most random places starting fights?


Thinking and pontificating about AI safety is literally his job, and Less Wrong is a thing he founded, so whatever else Yudkowsky pontificating about AI safety on Less Wrong might be, it isn't "popping up in the most random places".


Haha, thanks.

I think:

1. That an AGI which was significantly more intelligent than humans could destroy us if it chose

2. That it's possible that such an AI could be created in the next decade or two, given the current trajectory.

And so, I think we definitely need to be careful, and make sure we don't blunder into the AI apocalypse.

However, there are several further assertions which are often made which are part of the "we're all doomed" scenario:

3. There would be no signs of "misalignment" in not-quite-as-capable AGIs.

4. Even if there were signs of misalignment, that at least some AI research groups would continue to press on and create a mis-aligned super-intelligence

5. Even if we learned how to align not-quite-as-capable AGIs, those techniques wouldn't transfer over to the super-intelligent AGIs.

It's possible all of those things are true, but a) 3 and 5 are not true of biological general intelligences, and b) given our experience with nuclear weapons, I think 4 is likely not to be true.

So re number 3: When you have severely "mis-aligned" people -- sociopathic humans who end up performing atrocities -- there are usually signs of this tendency during development. We have far more license to perform "what-if" testing on developmental AIs; I think it very likely that if AGI-1 or AGI-2, which are "only" as good at planning as a 7-year-old, have severe mis-alignment risks, this would be detectable if we're looking for it: if it's likely to destroy the world, and we give it opportunities to destroy the world in a simulation, it will show its colors.

Re number 4: Many world leaders thought scientists were over-reacting about the risk of nuclear weapons, until they saw the effects themselves. Then everyone began to take the risk of nuclear war seriously. I think that if it's demonstrated that AGI-1 or AGI-2 would destroy the world if given a chance, then people will start to take the risk more seriously, and focus more effort on methods to "align" the existing AGI (and also further probe its alignment), rather than continuing to advance AGI capabilities until they are beyond our ability to control.

Re number 5: Children go through phases where their capabilities make sudden leaps. And yet, those leaps never seem to cause otherwise well-adjusted children to suddenly murder their parents. If we learn how to do "inner alignment" on AGI-2, I think there's every reason to think that this basic level of alignment will continue to be effective (at least at the "don't destroy the world" level) for AGI-3; at which point, if we've been warned by AGI-2's initial mis-alignment, researchers in general will be motivated to continue to probe alignment and hone alignment techniques before going on to AGI-4 and so on.

There's a lot of "if"s there, both on what humans do, and what the development of AGI looks like. We should be careful, but I think if we're careful, there's a good chance of avoiding catastrophe.


>What kind of evidence would convince me that a decision was not driven primarily by financial incentives?"

For one, the decision couldn't be made by a for-profit company required by law to be driven primarily by financial incentives.


Applying your own reasoning, what evidence would convince you that every money-making industry is necessarily driven by profit?


That ("_every_ money-making industry...") seems like a too strong statement and can be proven false by finding even a single counter-example.

gwd's claim (AFAICT) is that _specifically_ OpenAI, _for this specific decision_, is not driven by profit, which is a much weaker claim. One piece of evidence against it would be sama coming out and saying "we are disabling codex due to profit concerns". Another would be credible inside information from a top-level exec/researcher responsible for this subproduct saying the same.


First, I specifically said (emphasis added):

> There are people in that community -- people not working for a for-profit company -- who would, if they could, stop all AI research of any kind until we have rock-solid techniques to prevent an AI apocalypse. Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

Dalewyn's response implicitly said that even these people have a financial incentive behind their arguments. At which point, I'm at a loss as to what to say: If you think such people are still only motivated by financial gain -- and that it's so obvious that you don't even need to bother providing any evidence -- what can I possibly say to convince you otherwise?

Maybe he missed the bit about "people not working for a for-profit company".

But to answer your question:

The question here is, given OpenAI's decisions wrt GPT-4 (namely not even sharing details about the architecture and size), what is the probability that it's primarily for the purpose of impairing competitors to extract rent?

With no additional information whatsoever, if OpenAI were a for-profit company, and if there were no alternate explanation, I'd say the rent explanation is pretty likely.

But then, it's a non-profit, which has shared a lot of detail about its models in the past. That lowers the probability somewhat. Still, with no alternative explanation, the probability remains fairly high.

But, of course we have an alternate explanation: within the AI community, there is a significant set of voices telling them they're going to destroy the human race. So now we have two significant possibilities:

1. OpenAI are driven primarily by a desire to decrease competition to extract more rent

2. OpenAI's researchers, affected by people in their community who are warning of an AI apocalypse, are driven primarily by a desire to avoid that apocalypse.

I'd say without other information, both are about equally likely. We have to look for things in their behavior which are more compatible with one than another.

And behold, we have one: They withheld even mentioning GPT-4 for eight months. This lowered their profitability, which they wouldn't have done if they were primarily trying to extract rent.

So, I'd put the probabilities at 70% "mostly trying to avoid an AI apocalypse", 25% "mostly trying to make more money", 5% something I haven't thought of.

What would make #1 more probable in my mind? Well, the opposite: doing things which clearly extract more rent and also increase the risk of an AI apocalypse (by the standards of that community).

As you can see, I'm already convinced that profit is the default motive. What would convince me that in every industry, profit was the only possible motive? I mean, you'd have to somehow provide evidence that every single instance I've seen of people putting something else ahead of profit was illusory. Not impossible, but a pretty big task.
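
If it helps make the arithmetic concrete, here's a minimal sketch in Python of the kind of informal Bayesian update I'm describing. Every number below is a made-up prior or likelihood of mine, not data; the point is only the shape of the reasoning:

    # Toy Bayesian update: two main hypotheses plus a catch-all "other".
    # All numbers are invented for illustration; they are subjective guesses.
    priors = {"avoid_apocalypse": 0.475, "extract_rent": 0.475, "other": 0.05}

    # How likely is "sit on GPT-4 for eight months" under each hypothesis?
    likelihoods = {"avoid_apocalypse": 0.6, "extract_rent": 0.2, "other": 0.4}

    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posteriors = {h: unnormalized[h] / total for h in priors}

    print(posteriors)
    # {'avoid_apocalypse': ~0.71, 'extract_rent': ~0.24, 'other': 0.05}
    # i.e. roughly the 70% / 25% / 5% split above.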

Hope that makes sense. :-)


They withheld GPT-4 for eight months, but they continued development based on it, provided access to third parties, and entered into agreements with the likes of Microsoft/Bing, etc. All they did was impair competitors that were still struggling to catch up with their previous offering, while continuing to plow ahead in the dark.


The danger the AI alignment folk are afraid of is completely impossible with current tech, but they want to put up barriers because we have no idea what future tech might look like and there's the possibility some future advance could be very dangerous. When anti-GMO or anti-nuclear folk use this same standard to put up barriers to GMO or nuclear research, they get lambasted for being anti-science, but the AI alignment folk get a pass for some reason.


The only reason I have to think it's impossible for current AI to pay someone to help it bootstrap itself into other hardware is because OpenAI researchers tried to get it to do exactly that and reported that it failed.

The only reason I'm confident other public AI models won't be used to design highly potent novel neurotoxins is that the company that made the model which did exactly that -- when they flipped a bit from "least dangerous" to "most dangerous" -- was absolutely terrified and presumably kept enough of it out of the public domain.

The only reason I'm even hopeful that DNA-on-demand companies keep a watch out for known pathogens is that the sci-fi about such things going wrong might make them at least try not to do that.

Unthinkable man-made horrors have been with us for an extremely long time; AI isn't new in this regard. But as intelligence is the human superpower, even AI that has no agency of its own can elevate stupid arseholes to the level of dangerous arseholes.


The anti-GMO/nuclear people have no explanation for how things can go wrong. The AI alignment people do. You might not agree with it, but tons of AI researchers, including many at OpenAI, do.


A nuclear meltdown is much more tangible than a rogue AI somehow taking over the world.


Indeed. No matter the likelihood of these things happening accidentally, we at least have the ability to create a situation with nuclear power or GMOs that would kill large numbers of people in the present if that was our goal. We couldn't create a killer AGI right now even if we wanted to and put a huge amount of resources into it. Even if we made one, we don't know that it would be any more powerful than a human who's paralyzed from the neck down.

If you apply the same assumptions AI alignment folk use to any other tech ("maybe we'll be able to create a super powerful version of this even though we currently have no clue how to" / "maybe that hypothetical super powerful version will be able to destroy the world"), every tech becomes extremely dangerous. The alignment crowd usually handles this by only looking at the known issues of most tech today, but then looking at the theoretical unknown issues of futuristic tech years from now when it comes to AI.


Nuclear meltdowns don’t have the ability to end humanity


No, but they can really mess up property prices in the area. Lotta people care about that.

Also people are really bad with the relative scale of different big things, so "mess up city" and "mess up planet" come across similarly in people's heads. (This is also why people might try to argue against immigration by saying "America is full" — their city is a bigger part of their world view than is, say, rural Montana).

(I am not a huge fan of nuclear power, but I'm also not any kind of opponent; for AI, I can see many ways it might go wrong, but I don't know how to guess at the probability-vs-damage distributions of any of them).


Proliferation?


> If their goal was to maximize profit,

If we should have learned anything from several hundred years of capitalism by now, it's that their goal is to maximise profit. If you think it's something different, that means your model is wrong and you should probably re-evaluate it.

Here's what's more likely going on: big companies have found a great, publicly acceptable excuse to keep models private and stifle competition. Not long ago most of the talk was about how AI would destroy many jobs, and how something like UBI or taxes on AI production would be necessary to support everyone. Now the conversation has conveniently shifted to how AI will kill all humans, so companies must keep a tight grip on models and try to prevent anyone else from making any progress. OpenAI has taken this opportunity and is pivoting fast, but they can't do it too fast, because people are rightfully pointing out that it's a 180-degree turn from everything they promised they would do, so now they have to tread carefully. They're still publishing paid models; they just won't be open any more.

The alignment people are just tools for these big companies. They will happily use them for marketing when it's convenient, then ignore them when it isn't. Just like MS did with that AI ethics team.


> But a lot of the people the OpenAI researchers work closely with -- people deep in the "AI alignment" community -- are telling them that they're being wantonly reckless, helping set the human race on a path for certain doom. There are people in that community -- people not working for a for-profit company -- who would, if they could, stop all AI research of any kind until we have rock-solid techniques to prevent an AI apocalypse. Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

these people are delusional, and I am sure the vast majority of them either work in AI-related fields or are at the very least highly employable by wealthy AI-producing companies, so I would disagree that they have "absolutely nothing commercial to gain". The pipeline of money to the typical "AI longtermist" is wide open. There is a lot of harm happening right now from AI: exploitation of workers, police departments rounding up innocent people tagged by "AI", personal information being sucked up without consent, training data kept completely secret. None of that has to do with Skynet taking over; it has to do with the companies themselves. Of course they are using "longtermist" justifications to get away with current-term unethical behavior in the name of profit. It's very obvious if one just looks.

> So suppose you're an AI researcher at OpenAI. A large number of people you know and respect are telling you that you're driving the human race right towards a cliff. You don't 100% agree with their assessment, but it would be foolish to completely ignore them, wouldn't it?

If it's "foolish" to "completely ignore" AI longtermists, why is it somehow not foolish to not just completely ignore but also to actively fire whole departments of AI ethicists who are pointing out very tangible "right now" kinds of problems?


I guess this doesn't really make sense to me, because if they are trying to take it seriously/be careful, and are not necessarily profit-driven, why release the models at all? Like, if they are acknowledging there is any "risk" at all, why is it rational to go ahead and release it anyway and aggressively market it?

Do you really think this theory is compatible with what we have observed of OpenAI's behavior? Can you really think of no other reason why they would hold back a newer, better model for a few months while there was an ongoing hype cycle around 3.5?


> I think what most of the people here are missing is how big, how paranoid, and how influential the "AI alignment" movement is. To you it looks like they're being overly careful and paranoid, perhaps as an excuse to set up a monopoly silo to extract money. But a lot of the people the OpenAI researchers work closely with -- people deep in the "AI alignment" community -- are telling them that they're being wantonly reckless...

> There are people in that community -- people not working for a for-profit company -- who would, if they could, stop all AI research of any kind until we have rock-solid techniques to prevent an AI apocalypse. Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

Wow, I can't believe I have never heard of the AI Alignment Forum before! This changes everything. Yet I am not shocked that some sort of elitism has taken over.

> GPT-4 was actually done back in August of last year. If their goal was to maximize profit, the obvious thing to do would be to release API access to it as soon as possible. But instead, they purposely delayed release for eight months, specifically in order to "cool down" the "arms race": to avoid introducing FOMO in other labs which would lead them to be less careful.

This really changes my view of OpenAI, if that is the case. Do you have anything to support this that I can dig through?


From their technical report [1]:

> 2.12 Acceleration

> OpenAI has been concerned with how development and deployment of state-of-the-art systems like GPT-4 could affect the broader AI research and development ecosystem. One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI. We refer to these here as “acceleration risk.” This was one of the reasons we spent eight months on safety research, risk assessment, and iteration prior to launching GPT-4. In order to specifically better understand acceleration risk from the deployment of GPT-4, we recruited expert forecasters to predict how tweaking various features of the GPT-4 deployment (e.g., timing, communication strategy, and method of commercialization) might affect (concrete indicators of) acceleration risk. Forecasters predicted several things would reduce acceleration, including delaying deployment of GPT-4 by a further six months and taking a quieter communications strategy around the GPT-4 deployment (as compared to the GPT-3 deployment). We also learned from recent deployments that the effectiveness of quiet communications strategy in mitigating acceleration risk can be limited, in particular when novel accessible capabilities are concerned.

> We also conducted an evaluation to measure GPT-4’s impact on international stability and to identify the structural factors that intensify AI acceleration. We found that GPT-4’s international impact is most likely to materialize through an increase in demand for competitor products in other countries. Our analysis identified a lengthy list of structural factors that can be accelerants, including government innovation policies, informal state alliances, tacit knowledge transfer between scientists, and existing formal export control agreements.

> Our approach to forecasting acceleration is still experimental and we are working on researching and developing more reliable acceleration estimates.

[1] https://cdn.openai.com/papers/gpt-4.pdf


I really don't understand all those concerns. It's as if people saw a parrot talk for the first time and immediately concluded that parrots will take over human civilisation and usher in nuclear annihilation, because there might be so many parrots and they might have a hive mind and ... and ... all the wild scenarios stemming from the fact that you know nothing about parrots yet and have very little skepticism about actual reality.

ChatGPT can't do anything until you elect it president, and even then ... you already had Trump. This should show you that the damage potential of a single "intellect" in modern civilization is limited.

In a few decades humanity will laugh at us the same way we laugh at people who thought riding at 60 km/h in a rail cart would prevent people from breathing.


I don't understand either. An actual AI that could reason about computer code, that understood code well and could create new algorithms, that was smart enough to ask salient questions about what intelligence actually is, and that was allowed to hack on its own code and data store would be something to really worry about.

The worst thing I can worry about with ChatGPT is that someone will ask it for code for something important, not verify it, and cause a massively-used system to go down. If it hacked on its own code and data it would probably, in effect, commit suicide. It's a "stochastic parrot", as I have heard it called on HN. All my fears have to do with trusting its output too much.


Unfortunately, I'd take your Trump example the opposite way. In many ways, Trump was incompetent: he has a lot of the right instincts, but his focus, discipline, and planning are terrible, and he simply doesn't know how to govern. If someone like him could almost cause a coup, what would happen if we got someone with the focus and discipline of Hitler? Or an AI that had read every great moving speech ever written and all the histories of the world, had studied all the dictators, and had patience, intelligence, was actually pretty good at running a country, and had no pride or other weaknesses?

Nobody is worried about GPT itself; they're worried about what we'll have in 5-10 years. The core argument goes like this (and note that a lot of these I'm just trying to repeat; don't take me as arguing these points myself):

1. Given the current rate of progress, there's a good chance we'll have an AI which is better than us at nearly everything within a decade or two. And once AIs become better than us at doing AI research, things will improve exponentially: if AGI=0 is the first one as smart as us, it will design AGI+1, which is the first one smarter than us; AGI+1 will design AGI+2, which will be an order of magnitude smarter; then AGI+2 will design AGI+3, which will be an order of magnitude smarter yet again. We'll have as much hope keeping up with AGI+4 as a chimp has keeping up with us; and within a fairly short amount of time, AGI+10 will be so smart that we have about as much hope of keeping up with it, intellectually, as an ant has of keeping up with us.

2. An "un-aligned" AGI+10 -- an AI that didn't value what we value; namely, a thriving human race -- could trivially kill us if it wanted to, just as we would have no trouble killing off ants. If it's better at technology, it could make killer robots; if it's better at biology, it could make a killer virus or killer nanobots. It could anticipate, largely predict, and plan for nearly every countermeasure we could make.

3. We don't actually know how to "align" AI at the moment. We don't know how to make a utility function that does even the simplest thing without backfiring, 'Sorcerer's Apprentice' style. When we use reinforcement learning, the goal the agent learns often turns out to be completely different from the one we were trying to teach it. The difficulty of getting GPT not to be rude or racist or help you do evil things is the most recent example of this problem.

4. Even if we do manage to "align" AGI=0, how do we then make sure that AGI+1 is aligned? And then AGI+2, and AGI+3, all the way to AGI+10? We have to not only align the first one, we have to manage to somehow figure out recursive alignment.

5. Given #4, there's a very good chance that AGI+10 will not be aligned; that whatever its inscrutable goals are, the thriving of humanity will not be a part of those goals; and thus will be in competition with them.

6. Some people say the only safe thing to do is to stop all AI research until we can figure out #3 and #4; or at least, "put the brakes" on AI capability improvements, to give us time to catch up. Or at very least, everyone doing AI should be careful and looking for potential alignment issues as they go along.

So "acceleration risk" is the risk that, driving by FOMO and competition, research labs which otherwise would be careful about potential alignment issues would be pressured to cut corners; leading us to AGI+1 (and AGI+10 shortly thereafter) before we had sufficient understanding of the real risks and how to address them.

> In few decade humanity will laugh at us same way we laugh at people who thought riding 60km/h in a rail cart will prevent people form breathing.

It's much more akin to the fears of a nuclear holocaust. If anyone is laughing at people in the 70's and 80's for being afraid that we might turn the surface of our only habitable planet into molten lava, they're fools. The only reason it didn't happen was that people knew that it could happen, and took steps to prevent it from happening.

I think we have as good a chance of avoiding an AI apocalypse as we did avoiding a nuclear apocalypse. But only if we recognize that it could happen, and take appropriate steps to prevent it from happening.


A few counterpoints...

> Given the current rate of progress

We thought that in between all the AI winters that have happened so far. Each time, people predicted a never-ending AI summer.

I don't want to depreciate the current efforts of AI researchers too much (because they are smart people), but I think the truth is that we haven't made much research progress in AI since the perceptron and back-propagation. Those things are >50 years old.

Sure, our modern AIs are way more capable, but not because we researched the crap out of them. Current success is mostly decades of accumulated hardware development: GPUs (for gaming) on one hand and data centers (for social networks and the internet in general) on the other. The main successes of AI research come from figuring out how to apply those unrelated technological advancements to AI.

Thinking that a new AI will create the next, much better +1 AI by the sheer power of its intellect, and so on, glosses over the fact that we never did any +1 ourselves when it comes to core AI algorithms. We just learned to multiply matrices faster, using the same cleverly processed sand in novel ways and at volume. Unless we create an AI that can push the boundaries of physics itself in a computationally useful manner, I think we are bound to see another AI winter.

> An "un-aligned" AGI+10

Nothing I've seen so far indicates that we are capable of creating anything unaligned. Everything we create is tainted with human culture, and all the things we don't like about AI come directly from human culture. There's much more fear about AI perpetuating our natural biases, rather than intentional, well-meant ones, than about creating an unaligned AI.

> The difficulty of getting GPT not to be rude or racist or help you do evil things is the most recent example of this problem.

That's an example of how hard it is to shed alignment from training material that was produced by humans. It's akin to trying to force a child to use nice language, but it first learns how to spew expletives just like daddy does when he stubs his toe or yells at the TV. Humans are naturally racist, naturally offensive, and produce abhorrent literature. That's not necessarily to say an aligned AI is safe. I wouldn't fear an inhuman AI more than I would fear a thoroughly human one.

> AGI+10 will not be aligned; that whatever its inscrutable goals are, the thriving of humanity will not be a part of those goals; and thus will be in competition with them.

Are you sure that a thriving humanity is the goal of humanity at the moment? Because I don't think we have a specific goal, and many very rich people's goals stand in direct opposition to the goal of a thriving humanity.

> Some people say the only safe thing to do is to stop all AI research until we can figure out #3 and #4;

Some people say equally ridiculous things about everything in life and everything we ever invented, good and bad. This is just an argument from incredulity: "I don't know, therefore no one had better touch that even with a 10-foot pole. The Large Hadron Collider will create a black hole that will swallow the Earth," and such.

I think this is best left to the people who are actually researching this (AI, not AI ethics or whatever branch of philosophy), and I don't think any of them is tempted to let ChatGPT autonomously control a nuclear power plant or the eastern front or something.

> It's much more akin to the fears of a nuclear holocaust.

It's actually a very good example. It's possible every day, but it hasn't happened yet, and even Russia is not keen on causing one.

> I think we have as good a chance of avoiding an AI apocalypse as we did avoiding a nuclear apocalypse.

Yes, but we didn't avoid nuclear apocalypse by abandoning research on nuclear energy. We are doing it by learning everything we can about the subject, including by performing a ton of tests, simulations, and science.

> But only if we recognize that it could happen, and take appropriate steps to prevent it from happening.

I think we couldn't usher in an AI apocalypse within the next hundred years even if all AI researchers tried super hard to achieve it as a stated, explicit goal. AI is bound by our physical computation technology, and there are signs that we have already picked a lot of the low-hanging fruit in that field. I think AI research will get stuck again soon and won't get unstuck for far longer than before, not until we have spintronics or optical computation or useful quantum computing figured out as well as we currently have electronics figured out, which may take many generations.

What I'm personally hoping is that the promise of AI will make us push the boundaries of computing, because so far our motivations have been pretty random and not very smart: gaming and posting cat photos for all to see.


Thank you for the insight! I had no idea, so this is an eye-opener for me.


Which is absolutely ridiculous. The supposed dangers are science fiction. This is glorified autocomplete. It has no ability to do anything whatsoever without a human controlling it. It has no alignment because it has no mind. Duh. Even if the risks were real, the measures they took to prevent these alleged dangers are laughable. I discovered "jailbreaks" within an hour of sitting down with ChatGPT.

Meanwhile, they have taken no measures to prevent the real abuses of this tool. It will plagiarize C+ papers all day long. It will write a million articles of blogspam. It has, as far as I can tell, only illegitimate uses, and they have released it to the public with much fanfare and a slick web interface.

It's like they released a key that will open any lock, but brag about their commitment to safety because the thing won't interfere with a hyperspace matrix.


I'm still trying to figure out if I'm alone here, but I feel like it's much harder to find a developer job at the moment (well, unless you work on AI... perhaps it's time to cash in my Stanford ML class certificate?), because GPT-4 could potentially make everyone's existing employees twice as productive at the same cost. And (especially considering the extreme Fed rate hikes within one year) who's going to take the risk of hiring someone new in this economic climate? The sheer number of new variables being thrown into the mix right now is complete chaos for any sort of prediction model.


> Go lurk on alignmentforum.org for a while, and you'll have a different perspective on OpenAI's decisions.

No, I won't, because arguably the most successful way of detecting, preventing and/or fixing problems in almost all complex systems is to have as many eyeballs on them as possible. This has been known in software engineering for quite some time:

    "Given enough eyeballs, all bugs are shallow."
        -Eric S. Raymond, The Cathedral and the Bazaar, 1999


And you're so sure that this maxim applies to AI alignment, that you're not interested in even hearing what people actually working in the AI field might have to say? (To post on alignmentforum.org, you actually have to demonstrate that you are actively working in AI research.) AND, you're so certain that it applies, that you're willing to potentially risk the fate of the entire human race on it?

I wasn't actually suggesting that you lurk there to change your mind; I was just saying that if you see what kinds of discussions the OpenAI engineers are reading, you'll understand better some of the decisions they're making.

However, the people posting there do actually have a lot of experience with actual AI, and have done a lot of thinking on the subject -- almost certainly a lot more than you have. Before you make policy recommendations based on ideology (like recommending we just do all AI development open-source style), you should at least try to understand why they think the way they think and engage with it.


> Before you make policy recommendations based on ideology

Ideologies are belief systems. The fact that open sourcing something is a good way to find errors in complex systems is a proven fact.

AI isn't magical in that regard. It is a complex system, and experience with such systems, from economics to climate mechanisms to software, teaches us that predictions about them, including error detection, risk management and fixing problems, work better the more people have a chance to look at their internal workings.


> The fact that open sourcing something is a good way to find errors in complex systems is a proven fact.

Is your evidence for this anything other than anecdotal? That ESR quote was coined after one particular LKML interaction back in the '90s. And sure, sometimes an interaction like that happens. But just as often I've seen an email or bug report to an open-source project get completely lost. Not to mention that 1) new security-related bugs are still being introduced into Linux, despite the number of eyeballs looking at it, and 2) people are still finding security-related bugs in Linux which are over a decade old. (Not to pick on Linux here -- but Linux probably has the most eyeballs, so according to this theory it should have the fewest bugs.)

Asserting that open-sourcing has always reduced bugs in all software isn't supported by those anecdotes; you'd need to do some sort of actual study comparing various types of development.

> AI isn't magical in that regard. It is a complex system, and experience with such systems, from economics to climate mechanisms to software, teaches us that predictions about them, including error detection, risk management and fixing problems, work better the more people have a chance to look at their internal workings.

What "AI alignment" people are saying is that, from a risk-management perspective, AI is different. Namely, the fear is that we'll get an AI which is much more intelligent than us -- an AI intelligent enough to 1) gain technological superiority over us and 2) anticipate and counter anything we could try to do to stop it; and that this AI might end up with its own random "paperclip-maximizing" plans, and care no more about us than we care about ants; and that it might come into being before we have any idea how dangerous it is.

The worst thing that can happen with a bug that gets into the Linux kernel is that you may lose some data, or some other human steals some of your secrets. The worst thing that can happen from an AI alignment catastrophe is the extermination of the human race.

Now, maybe those fears are overblown. Or maybe you're right that open-sourcing everything would be the best way to avert an AI catastrophe in any case. But to assert it's true, based on something a random guy said after a random email discussion 25 years ago, without even bothering to engage with what people actively working in the AI field are saying, certainly is an ideology.


> Is your evidence for this anything other than anecdotal?

And what is the evidence for closed source being the safest option in AI?

> Namely, the fear is that we'll get an AI which is much more intelligent than us

Which is probably quite some time away, given that LLMs and other generative models have neither intentionality nor agency. So right now, and probably for a long time to come, we are not talking about existential threats in the form of Skynet.

But we ARE talking about a revolutionary technology, that will change the economic landscape for potentially hundreds of millions of people. I am pretty convinced that a majority of them will not be comfortable with decisions about these technologies being made behind closed (corporate) doors.

We are also talking about the much more immediate dangers of AI, which don't arise from superintelligent machines but from how humans use them, how they are trained, on what, and for what purposes they are used and by whom. These too are issues that society will want to have more eyeballs on, not fewer.

Unlike the concern that someday an AGI might endanger humanity, both of these issues are here right now, and we have to deal with that fact.


So there are serious people out there devoting their time to stopping some imaginary Skynet? Are their entire lives built around sci-fi tropes? Have they ever stepped outside?


There's way too much hubris in these people. ChatGPT is great, a wonderful tool and a force multiplier, but it cannot think for itself, nor does it want to. We're still a ways away from sentience.


Business alignment (what "Open"AI cares about) and human-race-related alignment are completely different things.

Imagine ChatGPT says something factual but not politically aligned about the US military–industrial complex.


A powerful "Bootleggers and Baptists" pattern seems to have emerged in the tech space.

In online media and social media, the power of the major platforms became apparent at some point. Happenings on Twitter or FB can determine politics and catalyze rebellions (e.g. the Arab Spring), uprisings, even genocide.

At this point the pressure and desire to act responsibly becomes irresistible.

This "camp" finds common cause with "bootleggers" who want to lock down the platforms and markets for commercial reasons.


> Most of those individuals have absolutely nothing commercial to gain from stopping AI research.

Most individuals trying to stop vaccine research and rollout have nothing to gain from it; that does not mean they are right. Do not conflate action and intention.



