It feels weird to view this case as primarily a Section 230 issue rather than looking at the fact that the DA declined to fully prosecute and instead took a plea deal.
Civil trials have a lower burden of proof, so it's unsurprising the plaintiff is seeking a remedy here, but the core issue seems to be that we are unable to prosecute this case as the criminal matter it clearly is.
They were facing up to 93 years in prison for the rape and obscene-content-distribution charges, and the prosecutor agreed to a plea deal of one year of probation, 100 hours of community service, and an agreement to cease posting content to social media. People have gotten worse sentences for simple marijuana possession. Sometimes the criminal justice system makes absolutely no sense to me.
I presume the issue here is that the DA did not believe they could prove beyond reasonable doubt that this was assault and not a model regretting she did a porn shoot.
It vaguely sounds like they had an OF that did a bunch of these shoots, and presumably the DA didn't manage to get any other girls on record saying they were assaulted (how hard they tried etc is unclear).
Citation needed on this being a criminal matter (in the case against the content provider). I mean, I guess the only real definition of something being a criminal matter is if there's litigation and a court rules in favour of the plaintiff. The second-best definition is if someone opens a criminal case; at least then there would be a contention that it's a criminal matter.
In this case neither happened, so there's no source to back up that claim. I wouldn't be surprised to see a criminal case opened against a publisher/alleged publisher for dissemination of a criminal act, but I do think it would eventually be dismissed.
Should the terms of service of a tech platform apply to all public activity a person conducts?
Would it be fair for TikTok to ban me because I posted nudes on instagram? Or execution videos on Telegram? Or angry comments about my local Starbucks on Facebook?
> Should the terms of service of a tech platform apply to all public activity a person conducts?
I don't see how this could possibly function in law. If I get a job with Facebook that says I'm not allowed to discuss my salary, could I then be contractually forbidden from filing taxes, because I would necessarily have to inform the IRS of how much I made? Or apply for credit, because I would have to tell Experian?
Eventually we'll have to reconcile a few key issues:
- People deserve to consensually communicate between themselves however they see fit, as long as just laws are not being broken.
- The world wide web now represents a dominant form of communication.
- The "web" as the average, non-tech person perceives it, predominantly consists of a mixture of large tech conglomerates with major conflicting interests between what's best for their users and what's best for their shareholders.
- These tech conglomerates have adopted a strange combination of conservative politics, liberal politics and an extremely restrictive and prudish system of rules for what kind of communication may take place. Navigating this system is entirely arbitrary and once you're out, you're out, with no recourse, unless you engage in further TOS-breaking by creating alternate accounts.
- Switching to an open platform is not enough; if none of your peers are on the platform, or it doesn't support open protocols, then you still can't communicate with them.
Without some form of (people-led!!!) government intervention I see no way to reconcile all of these issues. Perhaps it involves tax breaks, or only applies to platforms meeting certain thresholds, such as conglomerates or subsidiaries of them. The details can be worked out, but the fact remains that something needs to be done.
I don't see how you can require a private entity to broadcast communications without getting rid of the first amendment.
That's where all the whinging is, people want to require Facebook or Twitter to show whatever stupid thing they said to lots of others, they aren't mad that Facebook blocked their private messages with their mom.
It's a problem they created themselves, so it's moot. They wanted walled gardens and a larger slice of the ad pie. I have zero sympathy for these tech conglomerates who dare to rule public discourse with an iron fist. They have not earned it. They colonized the world wide web and systematically attacked old ways of communication.
If the choice is between preserving freedom of speech for a individual and for a corporation, it's not a head scratcher. You favor the individual, almost universally.
The "whinging" is completely warranted. Gone are the days when I can easily email a phone number, or use XMPP to talk with Facebook users.
And now these companies have the audacity to lean on "you posted it on our platform, so we reserve every right but copyright" and systematically make the internet opaque to search engines in order to muscle some AI money from other companies. They're openly eviscerating the public forum, after intentionally killing off forums by employing sickening addiction-bolstering algorithms and dark UX patterns.
We are openly engaged in information warfare with the modern, digital equivalents of the East India Company, and again, I have zero sympathy and will not let them hide behind "muh freedom of speech" when they actively undermine multiple universal human rights as recognized by the US Constitution. They undermine your freedom of speech, your right to due process, your right to decline unwarranted search and seizure, and more.
We have had this same exact issue swelling with landlords for decades. As citizens increasingly are forced into lifelong rentership instead of home ownership, they are finding their rights slowly eroding. First, it happened with the poor and disenfranchised. A landlord can enter their home at any time, and force all sorts of rules that inhibit freedom of expression and reduce privacy, increasing government reach. The experiment worked, and now we're seeing it manifest in what's left of the middle class. And I feel zero sympathy for landlords and property management groups over this, as well. None. All of these user-hostile companies, these artifacts of late-stage capitalism, they are causing societal harm that may end up being irreparable if something is not done.
We made phone lines neutral. Mark Zuckerberg can keep his rights just as we can, but Meta The Company is not a human.
I mean, all of these issues basically boil down to the fact that these services and platforms now constitute whatever can, if you squint your eyes enough, vaguely pass as a third place. Though if you analogize it to a park, it's a park where six marketing people follow you around and make a note every time you stop to look at a flower, but that's a different conversation.
People are entitled to express themselves in any way that's not illegal per the first amendment, but because all these companies are entirely, 100% beholden to advertisers to continue to exist, that expression is necessarily limited to suit the tastes of those advertisers. The perfect example is Twitter. It adopted much looser restrictions on content, and as a result the company is worth about a fifth of what it was when Musk purchased it. In that time the ads have shifted significantly: from prominent, respected brands (ubiquitous, but not to an extent that it felt so) to completely unknown companies in virtually every third or fourth post, hawking all manner of unimpressive goods, crypto schemes, dude wipes, and of course weight loss drugs and dick pills. Basically the exact kinds of ads you'd expect to find on crank conspiracy websites circa 2010 (and probably today), and a shitload MORE of them, because they're trying desperately to keep the company financially solvent.
The government intervention would probably be something like these various networks (or a new one) being owned and operated by the public, like a utility.
It's an interesting problem because we do see innovation from private companies (tech, scaling, UX, etc) that I don't think we'd see in a publicly/government-owned platform. But the fact that it comes at the cost of hiring the world's top minds to figure out how to generate more ad clicks at any cost (mind control, teen suicide, etc.) is just unacceptable.
Whatever way out we might choose, I hope that it doesn't impact indie hackers and people who want to try new things. We have to make sure that the root of the problem, advertising, is properly captured in the letter of the law, and that any legislation doesn't get disemboweled by corporate interests on the way out the door.
> It's an interesting problem because we do see innovation from private companies (tech, scaling, UX, etc) that I don't think we'd see in a publicly/government-owned platform.
I don't see any reason to suspect this. Even a blurrier incentive is still incentive.
Secondly, a lot of that innovation has been bad. Consider the disappearance of the chronological timeline, or ads themselves. There's nothing inherently positive about the concept of innovation, but profit motivates every type of it (as it stands). It just tends to happen faster with private enterprise, and without any say from the users.
You're right, I just mean more on the UX and tech side of things. Government software UX is typically atrocious. Some of it is well-founded and just outdated, though.
But there are some nice efforts taking place within the government to modernize and standardize tech and UX. Digital.gov is killing it lately. https://designsystem.digital.gov/
I mean, UX is atrocious across the board really, at least in my experience. I can't remember the last time I used a tech product and didn't have at least one moment of bafflement as to how anyone thought this was how this should be designed.
> It's an interesting problem because we do see innovation from private companies (tech, scaling, UX, etc) that I don't think we'd see in a publicly/government-owned platform.
I mean, would you need half of that scaling if not for the demands of surveillance capitalism? How much of Google's tech stack is built around, for example, delivering email, and how much of Google's tech stack is built around harvesting user data for Adsense? I'm sure gmail runs on something that would be as far from a standard email server as anything I could imagine, but I feel like there's a natural ceiling in terms of the sheer amount of compute you could reasonably bring to bear to accomplish the task of mail transfer, whereas I could easily see the demands of running intensive, page-by-page analytics on a variety of websites and apps all at the same time easily demanding an absolutely stressful amount of server traffic and compute, let alone the delivery of the ads after that work is finished.
And, that also presumes that gmail as a service is made better by being run on the infrastructure of a hyper-scaler. Running it across datacenters worldwide. Is it? Is the experience meaningfully better for the end user in the bargain, or is it everything around that service that requires such vast resources?
Like, genuinely, how much electricity could we save if we just turned off user data collection tomorrow?
"(g) took a 20% share of the profits from all of Romelus’ videos."
and court judgment summary:
"OnlyFans’ revenue share didn’t add to the content illegality."
Couldn't it be argued (and this is a very general question about the proportionality of prices charged and services expected) that by charging 20%, the responsibilities of the provider must be commensurate?
The legal expectations for a free platform like Twitter or Facebook cannot be the same as those for a host that charges, say, $50 per file, as paying to host a file creates an expectation of some services, whether reliability or integrity of the content.
If there were a trial against Facebook over data being lost, the case should be easily dismissed because there was no consideration; the content was hosted for free. But if we go to trial against a host that charges $50 for a small file, then we would have to discover where the money went, and what expenses were used to host the material!
Similarly we cannot expect the responsibilities and warranties of a free service to be the same as those of a paid service.
I view them as completely separate issues. We need 230 immunity in some form or else the whole telecommunications system falls apart (although I am a fan of Trump's plan to make it apply only to sites following the law and removing only illegal content; this censorship nonsense we have been dealing with for the last decade or two implies the platform knows what it is publishing and is editorializing, which should make it liable, just as a newspaper is liable for what it publishes).
What is charged for distribution is more of an antitrust issue to me (i.e. 20% is clearly ridiculous unless there are a bunch of reach/network effects the platform offers due to it having market power; mitigating that market power is almost always a good idea).
"What is charged for distribution is more of an antitrust issue to me (i.e. 20% is clearly ridiculous unless there are a bunch of reach/network effects the platform offers due to it having market power; mitigating that market power is almost always a good idea)."
It's not off topic. The person I was responding to was using that as an indication that they should be responsible for 20% of the liability and that is totally incorrect. I outlined what it actually was and that is not an indication of liability for wrongdoing by the users of the distribution method.
There's a lot of nuance in what we are saying, I feel you are not being quite precise:
"The person I was responding to was using that as an indication that they should be responsible for 20% of the liability and that is totally incorrect. "
I didn't say that charging 20% means you have 20% of the liability, although I don't think it's unreasonable if you posit that it follows naturally from what I said.
I said that charging money for a service creates an expectation of some kind of consideration, that is, something in exchange. So the company would be liable for services of value equal, or close to equal, to the 20% of the price that the customer paid.
You might argue that in the aggregate this might be close to 20% of the liability, but liabilities might be very big or they might be zero. Also, the consideration itself is a liability (an obligation to perform services).
So yeah, not the same thing at all.
And what I say is out of scope is arguing that some price is too high; that gets us into a free-market/price-controls debate that is completely unrelated to damages from non-consensual sex and the distribution of sexual images.
And I have been correctly saying that this is foolish because it will destroy the entire internet communication industry and we get far more social benefit than drawback from said industry. If everyone has to know everything on their service then they can't let users post anything. Further, this has been well litigated already for other telecom services (i.e. for the phone company) and is why there is no liability for the telecom company if you, for example, use their cellular network to set off a bomb in a crowded area.
It doesn't matter what the price is, the price is a signal of a totally separate problem, as I have already said. As I've also already said, this censorship nonsense of the last decade is indicative of editorializing and therefore in that instance it would make sense to have some liability to the platform, as they are exercising editorial control over what is being published above and beyond legality.
"And I have been correctly saying that this is foolish because it will destroy the entire internet communication industry and we get far more social benefit than drawback from said industry. If everyone has to know everything on their service then they can't let users post anything. Further, this has been well litigated already for other telecom services (i.e. for the phone company) and is why there is no liability for the telecom company if you, for example, use their cellular network to set off a bomb in a crowded area."
I get this is the underlying gist of the debate, but it doesn't follow at all from the deep thread, which focuses on a very specific point.
"It doesn't matter what the price is, the price is a signal of a totally separate problem, as I have already said. As I've also already said, this censorship nonsense of the last decade is indicative of editorializing and therefore in that instance it would make sense to have some liability to the platform, as they are exercising editorial control over what is being published above and beyond legality"
I'm sorry, but you have no capacity for keeping a discussion on track. You bring the subject of free speech and politics into what is originally a rape case, with almost no relevance.
Try to separate your free speech concerns into a separate comment wherever appropriate (probably not top level), and try to engage with the specific argument of a thread. Encapsulate: this is a nuanced topic; you can't repeat the base argument in a depth-7 comment.
The person I responded to made the claim that because they are paid they should be liable for some portion, which is wrong for economic and freedom reasons. Reading comprehension is clearly a problem here, but it is not my problem.
I think there are distributors and there are publishers, which traditionally were very distinct roles in the physical industry: a book is published by a publishing house, which prints it, sells it to a book shop, and performs some content review.
As such, the publisher bears some responsibility for the content, and the book shop does not.
So ISPs are undeniably distributors and do not bear liabilities for the content they distribute.
This has been extended to social media like twitter, granted. The claim being that they are a distributor and that users self publish.
Your argument that whenever a platform exercises their right to review and editorialize, they enter a category of publisher and thus become liable as a whole is compelling. You would hold the company in estoppel, either they are a neutral distributor as a whole, or they editorialize as a whole.
Now, how does this relate to the OnlyFans case? I'm personally not convinced this is a distributor; I believe they may be a publisher. I feel this is very similar to other companies like neobanks and Uber and Airbnb, or an employer using a contractor categorization: attempting on paper to be something, but effectively being something else. Legal fictions.
The first element common in these legal fictions is that there's a legal relationship that bears a lot of liabilities, be that an employer relationship with taxes, a bank-client relationship with regulations, or a publisher with content liability. That incentive is present here.
Second, the undesirable liability role is most likely to be assumed wherever the other roles have already been genuinely fulfilled. For example: if Company A contracts worker X from Company B, then the nature of Company A's relationship with worker X is that of a contractor, so there is a strong claim that worker X's relationship with Company B is one of employment, as the contractor role is already served in that chain.
Similarly, the role of neutral content distributor is already filled by the ISP and content neutral hosts. Onlyfans specifically hosts sexual content (de facto, they may allege to be a general social media platform but it is not the case in practice).
So what I'm saying is that content neutral distributors can be immune from the liabilities of the content they distribute, in the sense that they are ISPs or hosts, but these immunities may not apply to some companies like content-specific last-mile distributors which would assume the role of publisher.
I'm not familiar with the specifics of OF, maybe they make their best efforts to avoid being a publisher and have users sign as independent self publishers or whatever. But even if they make all of the right decisions, somebody has to be the publisher, I don't buy this self-publish legal fiction. There's a lot of elements that may constitute the publisher relationship, and charging a fee for transactions per content may be a big one.
Complaint "(f) failed to enforce its alleged corporate policy requiring a signed release form showing consent by any third party (like Plaintiff) who appears in a posted video an"
Court ruling summary:
"OnlyFans’ failure to enforce its policy requiring signed model releases didn’t add to the illegality and runs contrary to Section 230’s negation of the moderator’s dilemma."
So, bottom line, a pornographic video distributor doesn't have the legal responsibility to ask for proof that the subjects of the video consented to being recorded and distributed?
Sounds like an easy bipartisan law to pass, right? I get that it may not be the case for filming people walking on the street, but if there's fucking, there should be a verification system similar to the one I have to go through when I open a bank account or whatever: you know, where they scan your ID and ask you to make a video saying you consent to opening an account, or to put your face closer to or farther from the camera.
"every performer portrayed in a visual depiction of actual sexually explicit conduct—
(1)ascertain, by examination of an identification document containing such information, the performer’s name and date of birth, and require the performer to provide such other indicia of his or her identity as may be prescribed by regulations;
(2)ascertain any name, other than the performer’s present and correct name, ever used by the performer including maiden name, alias, nickname, stage, or professional name; and
(3)record in the records required by subsection (a) the information required by paragraphs (1) and (2) of this subsection and such other identifying information as may be prescribed by regulation."
I mean, this seems like a basic requirement to verify everyone is of age. Did the rapist send IDs and names to OnlyFans or something?
The definitions in the law:
"(2)the term “produces”—
(A)means—
(i)actually filming, videotaping, photographing, creating a picture, digital image, or digitally- or computer-manipulated image of an actual human being;
"
(ii)digitizing an image, of a visual depiction of sexually explicit conduct; or, assembling, manufacturing, publishing, duplicating, reproducing, or reissuing a book, magazine, periodical, film, videotape, digital image, or picture, or other matter intended for commercial distribution, that contains a visual depiction of sexually explicit conduct; or
(iii)inserting on a computer site or service a digital image of, or otherwise managing the sexually explicit content,[1] of a computer site or service that contains a visual depiction of, sexually explicit conduct; and
(B)does not include activities that are limited to—
(i)photo or film processing, including digitization of previously existing visual depictions, as part of a commercial enterprise, with no other commercial interest in the sexually explicit material, printing, and video duplication;
(ii)distribution;
(iii)any activity, other than those activities identified in subparagraph (A), that does not involve the hiring, contracting for, managing, or otherwise arranging for the participation of the depicted performers;
(iv)the provision of a telecommunications service, or of an Internet access service or Internet information location tool (as those terms are defined in section 231 of the Communications Act of 1934 (47 U.S.C. 231)); or
(v)the transmission, storage, retrieval, hosting, formatting, or translation (or any combination thereof) of a communication, without selection or alteration of the content of the communication, except that deletion of a particular communication or material made by another person in a manner consistent with section 230(c) of the Communications Act of 1934 (47 U.S.C. 230(c)) shall not constitute such selection or alteration of the content of the communication; and"
The actual arguments would be lengthy for both parties. OF would want to be categorized as a distributor or ISP, which fall in the "does not include" category.
I would argue, as the plaintiff did, that they are publishers. Commercial intent is mentioned in the statute, though that was dismissed as irrelevant in the case.
As much as I disagree with the concept of OF content, I do believe that people should be free to do what they want, provided it is consensual and isn't harming others. However, OF—and any adult content for that matter—can lead to exploitation. Because of this, I think it is sensible for online platforms hosting adult content to voluntarily require a model release and proof of Government ID as part of their Terms of Service. Since this isn't happening, it now seems sensible to regulate that any and all adult content posted online require a model release. The way I see it, OF was morally complicit in this sordid affair, but couldn't be held to account due to the law. The OF management could have easily protected themselves, and sexual assault victims, from these worst-case scenarios but didn't. The law really needs to change and there needs to be regulation in this area.
I am skeptical that putting a model release requirement in their ToS is sufficient, since the goal here is to have sex offenders be separable from regretful models in court proceedings.
I think this might require a separate law to be meaningfully enforceable since you really want people to be directly liable for the lack of release, rather than having defendants claim "nobody ever reads the ToS or has a release".
At the very least it would probably require very clear messaging from OFs that this is required, such that basically everyone on the platform actually has/is able to produce it.
Stock photography sites have figured out how to manage model releases; there's no reason OF can't do it as well. I think any model appearing in a video released on OF should have an account through which they manage their releases, even if that is a separate account without a normal page for publishing content. If a video is uploaded without each model linked, it is not allowed to be published, that type of thing.
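A minimal sketch of what that publish gate could look like, in Python with made-up names (Video, ModelRelease, can_publish are all hypothetical, not anything OF actually runs):

    from dataclasses import dataclass, field

    @dataclass
    class ModelRelease:
        model_account_id: str  # the performer's own release-management account
        video_id: str
        signed: bool           # release signed and identity verified

    @dataclass
    class Video:
        video_id: str
        uploader_id: str
        performer_ids: list[str] = field(default_factory=list)

    def can_publish(video: Video, releases: list[ModelRelease]) -> bool:
        # Publishable only if every performer appearing in the video
        # has a signed release linked from their own account.
        released = {r.model_account_id
                    for r in releases
                    if r.video_id == video.video_id and r.signed}
        return all(pid in released for pid in video.performer_ids)

The key property is that the release is asserted from the performer's own account, not merely uploaded by whoever is publishing the video.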
I think this is a great point; I don't think it is widely known (it certainly was not to me) that stock photography sites already have a model release process to reference.
I wasn't really thinking about regretful models; I was thinking about rape and sexual assault victims. I don't have any firsthand knowledge about any of this stuff, but I imagine that a signed model release and verified Government ID fall under contract law. I have no idea how the law would deal with a situation where a model rescinds their consent; my guess is that would have to be dealt with in court. However, I do think requiring signed releases and verified Government ID might have prevented the case mentioned in the post... not the rape or sexual assault, but putting the video online, which caused even more pain and suffering on the part of the victim.
Right, the point of releases is to separate those who were assaulted from those who were not. Without releases these two groups can appear the same, making prosecution difficult.
Model releases aren't enough; the feds need to actually back them with legislation, and perhaps ensure all the signing happens at a police station so no accusations about consent can be bandied about. Given the amount of revenue the industry generated for LA and Florida, this should not be controversial. There was the infamous GirlsDoPorn case last year where, even with model releases, the producers were accused over public distribution when the contract said it was for private release, and the models decided to sue for sexual assault based on a contract law breach. The industry needs to be thoroughly regulated with lots of sunshine lists on all players; contract law and civil courts will only lead to tragedy.
The model releases did not protect the offenders in the GirlsDoPorn case and serious prison sentences were handed out, so I'm not sure what you think went wrong with the law here? (My reading here is cursory, so do let me know) It seems like the main offender escaped justice for a while because he fled the country?
Requiring this level of sustained abuse to prosecute people is obviously bad, so I am entirely open to the idea that more should be done here to prevent this abuse, but it seems like these people were convicted beyond a reasonable doubt despite having some sort of releases.
Quite the horrifying read, and absolutely a case that will be used as ammo to tear down S230.
I love S230 in principle. I don’t think platforms should generally be held accountable for the postings of their users, up to a point. On paper, it’s an excellent safeguard that has enabled the modern internet, for better or worse.
As with all “good things”, however, bad actors have found ways to exploit it for their own ends, and an incompetent Congress has failed to reform it. Modern polarization isn’t helping, as the respective camps regarding S230 tend to be “protect it as is” or “repeal it entirely”, neither of which is appealing or worthwhile. S230 is flawed, and it must be amended, updated, or replaced with a better version that addresses common grievances.
In my subjective experience, I would prefer to see S230’s protections “scale” with the size of the platform and its function. For instance (and I know this will be a highly controversial take), I don’t think large, centralized platforms like FB or X should have S230 protections at all, because of the high degree of control, immense profits available to invest in R&D or better moderation, and the repeated failures of those companies to moderate misinformation and harmful content while also claiming to be authoritative and reliable sources of public information. On the flip side, your small (sub-10k users) community should have full S230 protections, because you’re a niche that likely serves a smaller cohort, rather than a centralized monolith of the general public.
Could such restrictions be exploited to create similar harms as the above? Of course, but at least a modest reform like that would serve as a warning shot to both sides that no legislation is ever “done”, and we must continue to adapt it to ensure it meets the needs of modern times.
My take on it is that opaque, individualized content recommendation algorithms should be treated the same as human editors selecting the content in question. Something Facebook-sized that only shows users a reverse chronological feed of things they manually subscribed to wouldn't be nearly so problematic.
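For contrast, the non-editorial version is almost trivially simple. A sketch (hypothetical field names, not any platform's real code):

    def chronological_feed(user, posts):
        # Only posts from accounts the user manually subscribed to,
        # newest first. No opaque, per-user ranking is involved.
        subscribed = set(user.subscriptions)
        feed = [p for p in posts if p.author_id in subscribed]
        return sorted(feed, key=lambda p: p.created_at, reverse=True)

Everything a recommender does beyond this, selecting and ordering content the user never asked for, is the part that starts to resemble editorial judgment.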
As it is opaque, we can't even know if it is human editors behind the scenes making all the decisions instead of algorithms. I heard TikTok uses a lot of human judges to decide what content should go viral; when you do that, why does Section 230 still apply?
That's acting enough like a publisher that 230 might not apply. If they decide to make something illegal viral, it could lead to interesting litigation.
> and it must be amended, updated, or replaced with a better version that addresses common grievances.
That assumes it will be "better" and less flawed. To that point:
> In my subjective experience, I would prefer to see S230’s protections “scale” with the size of the platform and its function. For instance (and I know this will be a highly controversial take), I don’t think large, centralized platforms like FB or X should have S230 protections at all, because
sec230 in its current form is better than every "reform" I hear suggested.
Not really; if you look at the other case law, almost all of these semi-professional "amateur" porn cases come down to he-said, she-said. One party would agree to be filmed and paid for sex, back out halfway through, and then sue over involuntary pornography and lack of consent. Very few are the true voyeur-cam situations you would expect when you first read about these cases. In this case they couldn't prove any of it, so the defendants got a plea deal with no jail time. Suing OnlyFans was just a cash grab, like how lawyers love to go after Pornhub (the MindGeek company mentioned in the other case law). If anything, this is Section 230 working as intended. There is nothing broken to reform.
What the federal government should perhaps do is maintain a federal disclosure and consent form database for all participants in an adult video, listing what they are willing to do, for how long, and when they signed it. That would go a lot further than these wishy-washy, he-said-she-said money-grab lawsuits.
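A sketch of what one record in such a registry might hold (field names are pure guesses for illustration):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ConsentRecord:
        participant_id: str        # identity verified out of band, e.g. government ID
        production_id: str
        consented_acts: list[str]  # what they agreed to, enumerated explicitly
        valid_from: date
        valid_until: date          # consent bounded in time, not open-ended
        signed_on: date            # when the form was signed

With signing dates and enumerated acts on file, "did this person consent, to what, and when" becomes a record lookup instead of a he-said-she-said dispute.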
If you're filming sexual acts to be placed onto a website for monetary gain, then there should be a written form showing that consent. He said/she said shouldn't come within ten miles of these conversations. The consent form should grant either party the ability to withdraw that consent, again in writing.
See my other comment about the GirlsDoPorn case. The models signed release forms with the expectation that their videos would go to private buyers, but instead they were sold online, so they turned around and sued the producers for sexual assault instead. This sort of thing needs sunshine lists and clear regulations. Plus, having to disclose your identity on a public registry would reduce porn participation too, so it's a win-win even for the Republican legislators who are against porn in the first place.
I would go as far as to suggest that full SSN-level KYC ought to be legally required for all actors. From my understanding, the adult film/sex industry currently relies purely on physical signatures and maybe driver's licenses for its releases. If you need to disclose your SSN to sign up for a penny-stock app like Robinhood, then you damn well should be required to do the same for anything sex work related. And if an underage person tries to do that, automatically notify the parents. It's not as if certain segments of the American government are unfamiliar with "don't say trans" tattle-to-the-parents legislation for kids.
> If you need to disclose your SSN to sign up for a penny stock app like Robinhood, then you damn well should be required to do the same for anything sex work related.
Stock trading accounts need your SSN for its original purpose, which is taxes.
And I think that having the government keep a list of people who do things that many consider immoral might worry people who don't trust the government to use such lists responsibly.
> The models signed release forms with the expectation
If the website hosting the content requires consent forms before allowing the video to be published, then these types of situations will be covered. To allow the content to be published, the consent must explicitly grant the host site the right to publish; if not, the site should not allow the content. All content should have explicitly granted permission for its use. This is how the rest of the world works with licensed content. Not just photography: even fonts grant a license for personal desktop use but require additional licensing for use on a website or in printed material, and music grants rights for personal use but again needs additional rights for use in movies, TV, commercials, social media platforms, etc.
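That scope-of-grant model is easy to make mechanical. A sketch under assumed names (Use and use_permitted are illustrations, not any real licensing API):

    from enum import Enum, auto

    class Use(Enum):
        PERSONAL = auto()
        WEB = auto()
        PRINT = auto()
        BROADCAST = auto()

    def use_permitted(granted: set, requested: Use) -> bool:
        # A grant enumerates exactly the uses consented to;
        # anything not listed is not permitted.
        return requested in granted

    desktop_font_license = {Use.PERSONAL}
    assert use_permitted(desktop_font_license, Use.PERSONAL)
    assert not use_permitted(desktop_font_license, Use.WEB)  # needs an additional license

The same shape works for a hosting platform: "publish on this site" is just one more enumerated use that a release either grants or doesn't.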
It's yet another example of a working system being in place for everyone, yet the people either disrupting or flagrantly disregarding those systems cause chaos.
This is in the same ballpark as S230: sounds great on paper, but is terrible in practice. Having such a database would absolutely expose performers to reprisal or persecution by a bad actor government - i.e., LGBTQ+ actors being persecuted by a religious or fascist government - and therefore shouldn’t exist.
However, I do agree with other posters here that any sort of adult production should absolutely have signed, written consent forms outlining safewords/signals for all participants involved, the nature of what will be filmed, and the expected acts of the parties therein, such that there can be no question about these issues in a court of law. Paperwork saves lives.
> One party would agree to be filmed/paid for sex, back out half way through, and then sue for involuntary pornography and consent.
I mean, I understand what you're saying, but you're making it sound like "it's not real rape but more like a contractual disagreement".
If someone backs out halfway through, they absolutely should be allowed to. There might be compensation clauses in their contract, etc., but those are implementation details. The base fact is that just because you said "Yes, I agree to do X for Y money" doesn't mean you can't quit your job after it starts, and your employer cannot force you to keep working until he's satisfied with the result. That applies to pretty much any job, but especially to something like porn.
> your employer cannot force you to keep working until he's satisfied with the result. That applies to pretty much any job, but especially to something like porn
Yes, but there is no equivalent of clear-cut "at will" employment for sex workers, and this needs to change. Legislators refuse to touch this topic with a six-foot pole, and that's why it's all in a grey area. Contract law shouldn't come into play here at all, and given the high stakes, it's unfair for producers to be ipso facto accused of sexual assault every time there's a breach of contract (you can extend this to prostitution too).
IMO, it's pretty simple: freedom must be bundled with responsibility. However, if a company doesn't want responsibility, it can be given immunity, but in return it must give up control over user content to some extent.
In practice, the bigger the platform, the more it should be regulated, and platforms with billions of users should be overseen by elected officials.
Ah, I see. In the context of Section 230, it would be the new categorization of the party that is no longer considered a publisher and no longer subject to the liabilities that come with being one.
Does Trump still want to repeal Section 230? He was in favor in 2020, but that was a long time ago.
Now he’s co-governing with Musk. Both billionaires own social media platforms that benefit from Section 230 protections. What’s the incentive for them to push for a repeal?
Aren't you assuming that laws will be applied uniformly? It's entirely possible that Musk thinks a repealed Section 230 wouldn't harm X because of that co-governing.
There will be very little accountability that the change they desire matches the rhetoric. In his first term, Trump's DOJ and FCC tried weakening 230 -- not repealing it.
The most useful Section 230 might:
- Intimidate fact checkers
- Promote new "outsiders", conveniently approved, with a polished message
- Sow a framework of attribution, especially between loyal states and blue states
- Obscure that some messages are paid, while others just win an algorithmic jumble
- Sow a framework of counter-attribution, where a platform may challenge my message as untruthful. Regardless, if I seem to pass a particular loyalty test, I win the right not just to promote it harder, but also to advertise it as "what they don't want you to know", and they must carry it as such.
Twitter has already changed their TOS so that any legal challenges happen in a district with a Trump-appointed judge. I imagine similar tactics would be used to create a "rules for thee but not for me" situation. There are a lot of Trump-appointed judges now.
Say someone posts things on Twitter calling for violence against some group, and members of that group get harassed or attacked. They sue the poster, and they sue Twitter.
All they need to do is choose plaintiffs who have not agreed to Twitter's TOS.
Anyone who thinks they can say with certainty what Trump will do just isn't credible. That said, my guess is that Trump will change Section 230 to force neutrality and free speech or Section 230 protections will be lost. I don't see how that is incompatible with X or Truth Social.
> “change Section 230 to force neutrality and free speech”
What would that revised law look like? Would courts be tasked with determining whether a defendant in a Section 230 lawsuit had been sufficiently neutral?
The standard for libel is extremely high in America compared to many other countries. By the same logic, the standard for losing Section 230 protections should be similarly high. Otherwise courts become unwilling arbiters of what’s acceptable speech.
Does anyone actually believe Twitter’s algorithm is public? When the company’s mercurial owner wants to amplify some voices and silence others, does he make a pull request on the GitHub repo?
If you don’t think that kind of thing happens on Twitter/X, you haven’t been paying attention.
Anyway, Republicans generally favor less regulation and less enforcement, not more. If there’s a new rule that requires companies to publish something, but it’s not actually enforced, then it doesn’t mean anything. Companies can just publish something vaguely plausible out of some old code snapshot (like Twitter does today).
"b) displayed a check mark next to Romelus’ name showing that he was “verified” and creating the impression he should be trusted;"
Not sure what express warranties OnlyFans makes to its customers or regulators, but a verified checkmark in this context should obviously mean that the subject of the account either is the administrator of the account, or that the administrator and subject are connected by express permission to administer the account in the name of the subject.
That is, that a girl is uploading her own videos, and not that someone else is uploading the videos for them.
I could understand that in this case maybe the man in the video was verified. And it is the account that is verified, so if the girl changes in each video, there would need to be per-video verification, or a policy that treats men and women differently with regard to verification, or some kind of third-party verification.
Not an easy case.
Furthermore, there is the possible interpretation that the material is acted, but requiring verification and consent of third parties in uploads seems like a pretty standard requirement to me.
Not a lawyer, not a judge, but it seems pretty sensible to me that all parties in a video should consent and be verified by the provider, perhaps not on a per-video basis: just add allowed subjects/actors, so the first time may be a hassle and the next time is easier.

There are various types of consent that might be evident in a video: consent to record, consent to distribute, and consent to have sex, obviously. I think it's unreasonable to ask for consent to be part of the video, or to depend on the contents of the video for such consent verification. The begged question is: how many videos exist where the third parties did not consent to the recording or distribution?

Logistically, the mechanisms of consent should be the same for the distributor. There is never a dependency on the contents of the video to verify such consent, nor is there any expectation that the distributor will watch the videos to verify it (although there may be an expectation to view the contents for other purposes, such as verifying age or verifying that the video contains only the allowed actors). Such verification must occur out of band, not merely by analysis of the content of the videos itself.
Caveat: I do think OnlyFans is following regulations and going in the right direction. Fuck free "tube" types of sites, even after regulation. If you consume any type of porn, at least pay for it. Or just do whatever you do, but don't pretend you have the high road over those who pay and fund legal efforts to regulate this shit.
And if I take the focus off the rape, I apologize, but I think that general consent, in this case to record and distribute, is probably the only kind relevant to the defendant, OnlyFans. Which is no small thing: being liable for distributing sexual videos without the consent of the subject. The fact that the subject was raped would be an aggravating factor, but the general technicalities of the case play out the same way as for a consensual sex partner who did not consent to being recorded or distributed.
The court actually responded to this; it was a small part of the complaint, but I believe it holds the most merit.
" (f) failed to enforce its alleged corporate policy requiring a signed release form showing consent by any third party (like Plaintiff) who appears in a posted video and "
The response was:
"OnlyFans’ failure to enforce its policy requiring signed model releases didn’t add to the illegality and runs contrary to Section 230’s negation of the moderator’s dilemma."
I'll say straight out that I don't know what this means or how it plays out, but I will say that, even though the complaints are handled independently, the effort expended on other complaints with less merit possibly took some resources away from this point. I think in the future sounder legal strategies will focus on this point, and case law will develop around vanilla failure to obtain consent to record and publish, rather than lack of consent to sex (rape).
Are you seriously complaining that a blog that focuses on technology law mentioned Trump's prior actions and positions specifically related to the technology law that this blog entry was about without mentioning a bunch of (alleged) Harris positions on subjects that have nothing whatsoever to do with technology law?
Turning that on its head, Trump's political positions are just as fluid as his allegiances.
I don't believe that Trump really cares about section 230, but Musk definitely cares a lot as he has $40B invested in an edgy social media platform.
As long as Musk keeps the money flowing, Trump doesn't have any reason to backstab him. It's possible they may fall out at some point but that hasn't happened yet.