It might be a plausible argument for stuff sold 6 years ago when the future was uncertain. But in 2022 I'm not seeing how it's plausible as an excuse. Considering it's still being sold and marketed as 'Full self driving'.
Clearly even level 4 autonomy is far beyond what any near-future technology is capable of, if only because a very commonly encountered condition, construction work, effectively requires an AGI to navigate.
Which would imply the best achievable self driving is around level 3 autonomy.
Pretend you don’t know anything about the different levels. You’re just a random person looking to buy a car. What does “full self driving” sound like to you?
To me, “full self driving” has always meant the car has a built-in cab driver so I can work while the car drives to my parents for dinner 6 hours away, then sleep while the car drives me home. I’m not sure how the phrase “full self-driving” could mean anything other than that.
Edit: When I pre-ordered a Model 3 Musk’s promise at that time was literally “press a button and your car will work as a taxi to earn money when you’re not using it”.
> I’m not sure how the phrase “full self-driving” could mean anything other than that.
There is some room to argue about the conditions under which the car can drive itself. If the car couldn't handle a high-grade, winding mountain road with an inch of snow on the ground and yet more coming down, I don't think too many people would be fussed if a "full self-driving" car said "uh, no" (especially as a good fraction of drivers couldn't handle that either).
To my mind, if the car can't handle a condition that wouldn't make me go "I think I should stay home," that should disqualify it.
Humans often drive in conditions where it's unsafe -- icy roads, dense fog, heavy rain, strong winds. We take the risk, and most times it's okay, except for the occasional pile-up on the highway.
But I don't think a "full self-driving" car refusing to drive in those conditions would be a deal breaker. I'd prefer it if autonomous vehicles were more conservative and just refused to drive in unsafe conditions.
Right. But then you end up in a spot where the driver barely does any driving 99% of the time, yet for the 1% they do need to drive, they're out of practice for normal driving, let alone hard conditions.
We already had problems when people started coming back from mandatory WFH and the accident rate spiked. This would be similar or worse: a bunch of drivers who might log 10k miles a year but actually drive only 500 of them, with extremely rusty normal driving skills, let alone skills for bad conditions.
It would be essentially like letting your kid drive your car only in blizzards and pouring rain...
I'm not sure that experience with driving in normal conditions actually helps when driving in dense fog or on icy roads. People are overconfident and drive just like they normally would, which is why the pile-ups happen every time we have worse than usual road conditions.
Tesla FSD can barely handle a balmy 70 degree day, so I think you'd be disappointed.
I might be biased because I work on actual self-driving efforts, but nothing at all about Tesla has ever indicated they were serious about solving self driving in any capacity.
The day that Elon proclaimed they already had the sensor suite decided was the day that they openly admitted this was fraud.
My opinion doesn’t matter, but your last point resonates with me because this struck me as totally bizarre too.
I don’t work in sensing let alone using AI with sensing data, but I’m around it a lot and my second-hand knowledge kind of set off some red flags.
In remote sensing for landscape architecture, hydrography, and other similar fields (making sense of static things where no one is at risk of dying immediately, and humans review the data), the care and knowledge that goes into choosing sensor suites and processing the data properly is huge. Yet Tesla comes out saying “we’ve got some mediocre cameras and we’re gonna run with it!”
Seriously? What? I get that it’s a lot of cameras but making sense of 3d space isn’t trivial at all. It’s so bizarre.
Again, I know nothing, they know more than I do. The whole thing just doesn’t sit right.
> Yet Tesla comes out saying “we’ve got some mediocre cameras and we’re gonna run with it!”
To be fair, humans managed to solve self-driving with some mediocre cameras, some crappy microphones, and some unreliable force-feedback and chemical sensors, without (importantly) adding any new sensors to the hardware between start-of-project and successful full driving. So it's not a priori implausible that "use whatever sensors we have now, and just learn to compensate for any deficiencies in software", could work, since our only preexisting examples of full-self-driving (humans) essentially do that.
It's just kind of stupid to hobble yourself that way when you don't have to.
The difference here is that our “cameras” are processed in ways we really don’t understand and at speeds we can’t seem to compete with yet (at least not with the quality of processing we have, so far).
Our sight is also enhanced by our hearing or familiarity with certain driving situations and, in most cases, familiarity with certain places we drive frequently.
If we isolated our eyes from our other senses I have a feeling we might actually be much worse at detecting things around us or making sense of what we’re seeing.
Our senses like road feel can even help us use our eyes in anticipation of dangerous conditions like loose gravel or black ice.
I know machines can do this too, but ours works so well and so quickly; it’s hard to compare with what’s on the market at the moment.
Regardless, I think we agree on the final point. Such a strange way to limit a self driving car.
Unspoken knowledge is also not one of the 5 human senses.
FSD needs the human sense that seeing a stop sign on the freeway (sticking out of the back of a highway maintenance trailer) doesn't mean what a stop sign usually means - but that stopped highway trailer, with a person holding the same sign and looking at you means you should follow it.
Should you run into a pile of gravel to avoid a baby stroller? If a loose garbage can and a baby stroller appear on opposite sides of the road and both block your path, which one do you hit? A soccer ball coming out between parked cars probably has a kid following it...
The people who have died with FSD ran into problems like these. A human would see a transport truck cab, and the wheels on the other side of the road and infer that - even though it blends into the sky - that there's a trailer in between. One of the Tesla accidents was that the side of the trailer appeared the same color and location as the sky, so the Tesla didn't see it and went under the trailer at highway speeds. The driver wasn't paying attention and didn't make it.
Sensing packages will only get you so far, human intuition for new unexpected situations is impossible to get AI to learn - at least at this point.
Agree 100%, but I wouldn't call it intuition so much as human understanding. We don't just see stuff, we understand it. We understand so much context surrounding the things we see. We can predict the weight of an object by observing its motion in response to wind, essentially running a physics simulation, to know that that 4x8 sheet of styrofoam blowing in our lane poses no risk. Or that an empty cardboard box tumbling in the wind is actually empty, based on its motion.
We have an incredible level of understanding.
Jeez I even put myself in other drivers’ shoes/heads so as to predict their behavior. Their subtle cues in lane positioning, distraction, etc. they’re all incredibly telling if you can model human psychology.
Not to take away from your core point, but I wouldn't say the human sensory system is mediocre.
Your eyes are sensitive enough to see a candle flame at 2.6 km away. A rod can actually be stimulated by a single photon, but the brain squelches activation that small. The human ear is sensitive enough to hear a watch tick from about 20 feet away in a quiet room. That's pretty darn good.
Sure they do. They're those two spherical mostly-white-with-colored-circles-on-the-front things embedded in the upper half of their face. (They usually tend to call them "eyes" for some reason.) They're closed-circuit and wired pretty directly into the brain, so it's not like webcams or recording cameras where you can semi-easily get the data off somewhere else, and obviously they're kind of wet and squishy and not-easily-electronics-compatible like most human components, but in terms of actual functionality, they're cameras.
I feel our government should regulate this kind of speech more aggressively, to boost the safety of the public.
The messaging by Tesla has been misleading, yes. There have been many other examples of couching date-slippage and over-promises with phrases like "no promises but Cybertruck|FSD|Model 3|Rockets|AI|Robotics|Neural-Link|End-Of-The-World should be here in two years."
Announcing a capability before it's ready, with Musk, a widely recognized public figure, acting like it's ready and shaming every other fledgling competitor, amounts to gaslighting.
I speculate this is what we should expect from the company owner's notable, controversial personality type.
Immediately upon receiving "Full Self Driving Beta" in the 2010s, some Tesla drivers crashed because they turned on the lane-keep assist and then left the car to do the driving, straight into the back of a stopped truck or a split-open lane divider, and died.
To reiterate, this misleading leadership has resulted in deaths.
I suspect that CEO personality type is a salient factor in whether a company should be allowed to operate. Should we stop this, should we prevent actual companies from operating? Not just a group of hobbyists, but tax-paying, licensed, employing companies. It is OUR government. We can stop this predictable travesty of gaslighting -> good-faith belief -> bad outcomes.
I’m not against regulation but I’m not sure it’s needed here either.
In the case of misleading statements and marketing causing death, that seems like something that a lawsuit would handle, not regulation.
In the case of false marketing misleading investors, we would see investor lawsuits for that if the stock price suffered as a result. Right now, what damages would you show for Musk's lies?
Instead of new regulation, a lot of nonsense would have been avoided had the SEC not given him a free pass after the last round of investor fraud. It was entirely predictable that an egomaniacal child would not change behaviour after a slap on the wrist and a warning.
Regulation often happens after companies have misled people. It's more a chicken-and-egg scenario: do we let disasters happen and then legislate against them, or do we have the foreknowledge to see the disaster coming and stop it?
Advertising claims are in fact regulated. There just isn't a lot of enforcement but, at normal companies, legal teams are very conservative about making claims in advertising and marketing materials. If I claim our product is better than Brand X, you'd better believe legal is going to want to see the evidence and probably qualify the statement.
Indeed, false advertising is a thing and it's illegal (in most of the world that has Teslas, at least, depending on where you live). The government reaching further than going after false advertising is where the slope GP mentioned would come in, I assume, and on that I fully agree.
It is fine (and correct) to be cautious about any situation where the government gets involved and starts setting limits.
But to discard an argument entirely is to bury your head in the sand.
Regulations are applied to many industries and limit the kinds of claims that can be made about products, and those regulations are important to public safety.
Such regulations most often apply to products that involve life or death consequences.
A company marketing vitamin supplements can’t claim that the supplements are proven to cure cancer, and that’s a good thing.
This is not to say that the government can’t or doesn’t get things wrong. But I’m curious how you feel about limitations placed on financial, food, and pharmaceutical companies.
At a minimum, there’s a fraud angle. You shouldn’t be able to claim to sell something that does A when that thing cannot in fact do A.
And when A happens to be critical to not dying, the implications of that claim just raise the stakes even higher.
The involvement of government is at most the courts to redress the kind of issues you're talking about, which is narrow to the particular situation involved, guided by precedent, but not fixed by it. Fraud, injury, death are all covered here.
Heavy-handed regulation I can't back though, in part because it is (increasingly these days) as broad as possible, as vague as possible, ensnaring far more than the initial case merited.
I did read it, actually, but there wasn't much of merit there. The usual spend more, make more rules, enforced by someone with a gun somewhere such that bad thing X can't happen again. Then of course, it does.
Although honestly, it should also be on the regulation proposer to say cite why such a rule is necessary, and more importantly, under which conditions, if it doesn't work out, such a rule might be withdrawn.
I mean:
We've had Medicaid, but still have problems in elderly healthcare.
We have the Patriot Act, but still have terrorists.
We have a Department of Education, but still have low (possibly lower) quality schools.
We have a CDC, that failed in testing and containment of disease.
We have an SEC, but SBF's fraud went unabated (although hopefully punished soon).
We have an FAA, but still have unworthy aircraft (737 Max looking at you).
Perhaps the next round of rules proposed by folks looking to win their next popularity contest will work next time; I just doubt it very much. I don't see much value in these rules: if bad things keep happening, what was the point? Might as well not have them, and make people directly responsible, through a court of actual law, when actual damages have occurred.
It sounds a lot like something Musk promised in 2019: "Next year for sure, we will have over a million robotaxis on the road," "The fleet wakes up with an over-the-air update. That's all it takes."
The most generous interpretation is that Musk is delusional and out of touch with his company and his technology. But I think it is more accurately described as bullshit ("speech intended to persuade without regard for truth") or a lie (making knowingly false statements).
I think the interesting question is whether that legally ends up being fraud in the context of a civil suit. Whatever was going on in Musk's head (deluded, reckless, or lying) it seems like Tesla as an entity should have known that the cars were not actually self driving and that customers could reasonably think they were getting self driving.
> The most generous interpretation is that Musk is delusional and out of touch with his company and his technology.
I'm not sure I'd describe that as a generous interpretation, but there is other evidence that he's just wildly optimistic/over-confident. E.g., he still pours significant investment/effort into his project to colonize Mars, even after describing falcon wing doors as "engineering hubris"
And you can use it like a taxi service and send a non-driving human as a passenger. Or no passenger and just send it to an address to retrieve or deliver a package.
It's probably reasonable to expect that there are some asterisks like paved roads only and can't drive in literally all weather conditions. But, yeah, no human supervision needed.
Technically, some of those asterisks, like non-paved roads, will mean poor and potentially dangerous performance.
If I tell it to drive into a volcano, and getting it to do so is significantly harder than five minutes of repeating variations on "Yes, I know it's a volcano and will likely destroy the car and everything in it; do so anyway," it's defective. (Preferably it also should not be significantly easier, of course, at least not to the point of being able to do so accidentally.)
My ultimate dream is to be able to more or less avoid flying. Hop in my car at 11pm, punch in an address, pull out the lay-flat seat, go to sleep, and wake up 7-8 hours away. If I make it a day trip, I can do the entire thing in reverse without ever having to pay for a hotel.
Living on the coast, I would still have to resort to flying, but there are many places I would enjoy that are reachable in an eight hour drive.
You can do this in practically every EU country, not to mention Asia (I don't know about other parts of the globe), via trains. I don't get why it has to be a car. It's just that the US has zero infrastructure for both intercity and intracity public transport outside of like 4 cities.
The US has one of the most extensive rail networks in the world. What we do differently than Europe is use the rail network almost exclusively for freight. If we followed Europe's lead and did a lot more freight via long haul truck, we could free up a bunch of infrastructure for passengers, but would the end result be a net win? Or would we actually increase emissions by doing that?
There is absolutely no reason to make that tradeoff. If a rail line is hitting maximum capacity, then that means there's plenty of money going around to expand it. Adding more passenger trains does not require decreasing the amount of freight trains. That's not the reason we're bad at doing passenger rail.
Freight rail has significantly different requirements than passenger. People want to move quickly, with frequent daily trips (eg 24 trips per day, moving at 90mph). Freight does not care about train speed so long as it is reliable (eg 1 trip per day moving at 50mph). Logistics networks can plan around reliability.
For one, pull up a satellite view and look at the US anywhere 150 miles away from the west coast.
And we have buses that go between lots of cities (Greyhounds). They just suck compared to driving yourself or flying, which is why you'll pretty much only find poor and desperate people on them.
Well, it's a bit more of a PITA to get yourself to the station with all the baggage, and then possibly change over to another line. But yeah, it's an alternative to the airplane.
This is already happening with regularity, at least in California.
Engineering is about solving part of the problem first, then iterating. If you look at what they've accomplished only in the last year with FSD, I find it highly unlikely that 0-disengagement drives don't become the norm by the end of 2023.
Currently, 90+% of drives have 0 safety-critical disengagements, and 50+% have no human-discomfort-related disengagements.
> Engineering is about solving part of the problem first, then iterating.
And fraud is about selling solutions to parts of the problems you haven’t yet solved. While there are engineering criticisms of Tesla FSD, that’s not the issue here, so presenting definitions of engineering is a non-sequitur.
> If you look at what they've accomplished only in the last year with FSD, I find it highly unlikely that 0-disengagement drives don't become the norm by the end of 2023.
> Currently, 90+% of drives have 0 safety-critical disengagements, and 50+% have no human-discomfort-related disengagements.
No one is going to put their pre-teen child in a car for a 5-hour trip when there is a 10% chance that the car will need a safety-critical intervention.
And yet you think that such a thing will be the norm in 12 months or less?
Are you referring to Tesla's FSD effort, or Waymo?
I hear excuses when I point this out, but lane keeping and adaptive cruise on the Model 3/Y are inferior to the competition now. AP was a selling point when the Model 3 first came out, it has become a bit of a joke now. If you buy a Model 3 today, it comes with an asterisk. If you want cruise control, beware, adaptive cruise has notable bugs and there is no option to turn it off and go old-school.
It means a car that can drive in a reasonably broad set of expected traffic conditions and has no failure mode that is unsafe at any point.
It’s that second part that is the important one. If the car can’t drive in heavy snow or roadless areas or whatever that’s fine, as long as it’s able to recognize and avoid those situations in a graceful way, like refusing to engage with them and parking in a safe area, suggesting the trip be cancelled, that kind of thing.
The current approach, where it works until it doesn't and then says "just kidding, take over for me now" while going 70mph toward the side of a truck, is definitely not that.
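To make the distinction concrete, here is a minimal sketch of the "fail gracefully" policy described above, as a toy decision function. All of the names and conditions here are made up for illustration; no real vendor exposes an API like this:

```python
# Hypothetical sketch of a graceful-fallback policy: when conditions fall
# outside the supported envelope, never hand control back mid-maneuver.
# Instead, refuse the trip up front or pull over safely.
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()     # conditions are within the supported envelope
    REFUSE_TRIP = auto()  # not yet moving: decline to start the trip
    PULL_OVER = auto()    # already moving: find a safe spot and stop

def fallback_policy(trip_started: bool, conditions_supported: bool) -> Action:
    if conditions_supported:
        return Action.CONTINUE
    # The one thing this policy never does is a sudden "take over now":
    return Action.PULL_OVER if trip_started else Action.REFUSE_TRIP

print(fallback_policy(False, False))  # Action.REFUSE_TRIP
print(fallback_policy(True, False))   # Action.PULL_OVER
```

The point of the sketch is that "can't drive in heavy snow" is acceptable only when the unsupported-conditions branch degrades to a safe state, not to an instant handoff.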
Great, so we're back to "unlimited data" doesn't really mean "unlimited," but just a large amount, but not really a large amount, because...
Words have meaning. Full self driving, to anyone who isn't a nerd who likes to pick shit apart, means the freaking car will drive all by itself as if a human were driving it, maybe even better.
I know what I want. I want to tell my car to go to the shop and get serviced, come back, and take me home after work. After a flight, have it pick up my in-laws at the baggage claim exit. Etc...
Musk is a charlatan, but why do people act like it is common to buy $40k+ cars like picking up some gum in the checkout aisle? What the feature's name sounds like is not the only factor here. People don't buy cars with that lack of thought. What the salespeople say, what the marketing material says, what the manual says, what the CEO says in public, and what the feature feels like during a test drive should all matter more. This isn't a defense of Tesla. I think it still leaves an argument for fraud, but we really need a better argument than the same "the name sounds misleading when you first hear it" argument that has been repeated for years now. If that was all it took, we all would have sued cellphone companies for their "unlimited plans".
This isn't a "9 out of 10 dentists recommend using our product" situation, and cellphone companies have been sued for failing to deliver true unlimited plans.
It is being marketed as "Full Self Driving", and the CEO is publicly saying it is "in beta", "coming next year", and "just waiting for regulatory approval". In reality it needs constant driver supervision, is still primitive enough that it is directly linked to multiple fatalities, has been coming "next year" for over 6 years, and is clearly not even remotely finished. Customers have paid thousands of dollars for a feature they are not able to use, half a decade down the line. They may never be able to use it as advertised at all. The product as delivered is substantially different from anything the company says.
It's like saying your car can do 0 to 60 in less than ten seconds and then delivering a car without an engine - while putting a statement in the small writing that the car currently only reaches this in freefall from a mountain top, but pinky promise that they will install the engine in a few years.
Your comment helps prove my point because you didn't stop at just the name. It takes more than "the name is misleading" to be considered fraud. Tesla is guilty of more than just giving the feature a bad name. If you are arguing fraud, mention those other things, because people don't spend $40k+ based on the name of a feature.
I can still go to Verizon’s website today and find “Unlimited” plans so the name clearly wasn’t the problem in whatever lawsuits they might have been involved in.
And to reiterate, my point is not that Tesla is innocent. My point is that the most common argument used to point to their guilt is a stupid argument, especially considering there are better arguments available to argue the same underlying point.
> I'm sorry, it's a scam to sell X "coming next year" and six years later, not to have it.
Where did I state otherwise?
Once again, all these comments are reinforcing my point which is that there are plenty of more concrete complaints to make about this than “Autopilot is a misleading name”. I don’t know how many times I need to say I’m not defending Tesla. I am criticizing a bad argument. But I guess I should just accept my downvotes. I should know better than to try to have a nuanced conversation about Tesla and Musk.
You said: we really need a better argument than the same “the name sounds misleading when you first hear it” argument that has been repeated for years now
We do have that. We have a very thorough argument, and it has been repeated endlessly. So that's why you got downvoted.
People often elaborate moderately beyond the direct point they started off replying to. You are making an incorrect characterization of the problem. Your reply was unclear in context.
In your opinion, exactly how much time should people making purchases put in to investigate the extent to which they're being lied to? Because in my view, the burden should first fall on companies to be clear and honest.
The name itself is not a lie. You can call it misleading, but it isn’t a lie by itself. When the CEO is obviously lying to the public and you focus on the name instead, it weakens the case for fraud in the court of public opinion.
I don't necessarily disagree, but I offer a couple of counterarguments:
A. Most people who bought this knew what it was, that it was unfinished, and they were funding the research (which is a public good to some extent)
B. There should be some degree of burden on the customer to research what they are buying, especially with a claim as extraordinary as "Full Self-Driving"
Maybe we should bring back patent medicines again? New, improved Radithor! If you're unhappy when your jaw falls off, it just means you should have read the fine print.
Maybe the burden should be on the seller to call it “unfinished and unreliable attempt at self driving” instead?
The idea that we don’t let marketers make misleading product claims is very old. As a society our point of view is that no, actually the burden of proof should not be on the customer to figure out if a tube labeled “toothpaste” is actually wet gypsum or something.
The research burden might lie with the customer for "don't take your hands off the wheel just because if you ignore the instructions it sounds like something an 'autopilot' would let you do"
But I don't think it's true for believing the car company CEO when he repeatedly said that the car came equipped with all the necessary hardware for full self driving, and if you purchased the $15000 package the full self driving software would be with you by the end of the year, with most of the remaining obstacles being regulatory...
It's a bit like consumers have a responsibility to read the label on medical devices to use them properly, but they don't have a responsibility to advance their medical understanding to the point they can tell if Elizabeth Holmes is telling the truth about her product actually doing what she said it was doing.
I never did understand how that passed the sniff test. Unleashing a fleet of self-driving robotaxis wouldn't skyrocket the value of anything; it would be the fastest race to the bottom in memory.
That particular aspect passes the sniff test for me. The race would only go so fast with a million cars produced per year, only some of which would be participating.
I think 2016 it was already widely known by the industry it was going to be very difficult compared to what Tesla was saying, especially without the handicaps they imposed on themselves. No one with knowledge took Tesla's timeline seriously
> Clearly even level 4 autonomy is far beyond what any near future technology is capable of
I've always been pessimistic about FSD, but I think it'd be pretty hard to reconcile that quote with actual L4 deployments by Waymo and Cruise.
I don't like how this whole thing has played out, but to be fair, I think fear of missing out over something that would, if successful, be an existential threat to Uber is probably a rational response.
I don't completely disagree. They definitely have to be weighing the likelihood of that outcome, and Musk's statements probably made it seem more likely. That said, there are a lot of existential threats to companies that they should probably not be spending investor money on, such as asteroid defense.
If current Waymo counts as level 4, then we need sub-levels for the levels to have much meaning. They are a long way from supporting something like autonomous driving geofenced to all major US cities in all weather conditions. And even that big step up is still far short of true level 5, which can handle poorly maintained roads and all of the other weirdness one can encounter outside of major first-world cities.
> The Waymo Driver operates at Level 4 autonomy, meaning, Waymo says, that “no human driver is needed in our defined operational conditions.” This, Waymo continues, represents “fully autonomous driving technology,” with the Waymo Driver being “fully independent from a human driver.”
Level 4 means full autonomy over all normal driving conditions. Waymo does not handle complex construction sites like a normal human driver would, a fact verifiable by many thousands of people. It does not have level 4 autonomy.
Looks like the article got backlash too because the editor inserted an editor's note in quite large font right in the middle of the article clarifying it.
> Level 4 means full autonomy over all normal driving conditions.
Aren't you describing Level 5?
From what I see online, Level 4 means
"The car can operate without human input or oversight but only under select conditions defined by factors such as road type or geographic area"
Waymo has been operating geo-fenced self-driving cars without drivers present. It may not handle complex situations, but that isn't expected out of L4.
Level 4 means full autonomy within certain predefined conditions. In Cruise's case, this might be, "Only in certain parts of San Francisco at night." This qualifies for Level 4.
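For readers without the definitions handy, the levels being debated in this thread can be summarized roughly like this (my paraphrase of the SAE J3016 taxonomy, not the official wording):

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
# These summaries are my own wording, not official SAE text.
sae_levels = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed assist (e.g. lane keeping).",
    2: "Partial automation: steering AND speed assist; human must supervise.",
    3: "Conditional automation: car drives, but a human must take over on request.",
    4: "High automation: no human needed, but only within a defined "
       "operational design domain (e.g. a geofenced city, good weather).",
    5: "Full automation: no human needed anywhere a human could drive.",
}

def needs_constant_supervision(level: int) -> bool:
    """Levels 0-2 require constant human supervision; level 3 still
    requires a fallback-ready driver, while 4 and 5 do not."""
    return level <= 2

print(needs_constant_supervision(2))  # True
print(needs_constant_supervision(4))  # False
```

This is why Waymo's geofenced operation can honestly be called level 4 while still being nowhere near level 5: the "defined operational conditions" caveat is built into the level itself.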
Often we Canadians drive on roads where we can't see the lines due to snow/ice. We sort of form our own "lanes" based on where we think the lines SHOULD be.
How would a FSD Tesla handle a situation where everything is white?
> We sort of form our own "lanes" based on where we think the lines SHOULD be.
And that is exactly what FSD does. You can search YouTube, there's plenty of examples if you want to see how it performs in different weather conditions. Snow is challenging but not impossible.
That is not my experience at all. It either says “Full self driving unavailable. Poor weather detected,” or it freaks the fuck out and tries to kill me.
FSD can’t adequately handle rain, let alone snow where the road lines are not visible.
Based on many hours of footage, rain doesn't seem to be a big problem unless it's very heavy, but the experience can vary, I suppose. Snow for sure is still a problem, but there is a path to improving it. The current state of perception suggests it can be solved by deep learning (more training data for these conditions), in the sense that in many situations the system can guess where the lines and lanes are based on other context: tire tracks, positioning of other cars, width of the road and lane metadata, snow protrusions, etc. There's also the problem of when to treat snow as an obstruction to go around versus when to just drive over it. There will always be more uncertainty in these conditions, though, just as there is for us humans; we are often unsure exactly where the lanes are, and one of the challenges is having the system deal with this uncertainty better. And then control has clearly not been worked on for snow and ice yet.
Forget rain and snow, they can't even handle non-California roads yet.
I had the misfortune of taking a rental on narrow mountain roads and it actively attempted to kill us twice. While moving over to pass an oncoming car, the lane keeping feature decided we were too close to the shoulder and started steering us back into the oncoming car.
After pulling over, regaining my composure for a few minutes, and figuring out how to disable it; somehow it activated again on a different day and it happened again.
> somehow it activated again on a different day and it happened again.
I don't have a Tesla, but my car has features like "lane assist", and most of these safety features can be turned off, but the default state is "ON" and this is reset once the car is turned off and on again.
Oddly enough, what it calls "advanced cruise control" defaults to OFF on my car and must be turned on any time you want to use it?
Yeah, I think it was something like that. Once I began checking for it, it seemed inconsistent - possibly related to how long the car was off?
The joys of modern technology - every time I get in my car, I have to spend 15 seconds checking the state of the "Car may decide to murder you setting".
Granted it did fine on wider, more developed roads once we got closer to larger towns, and it would probably do fine in my day-to-day life; but that whole experience has left me really shaken on any self driving tech. The roads it fell apart on weren't even that bad - well paved and maintained, just narrow and without any sort of center stripe.
"lane assist" can defiantly be annoying. While mine doesn't try to kill me by driving off a cliff... It does seem to prefer that i drive dangerously close to those cement "jersey barrier" walls they use in construction sites??
The lane narrows due to construction and it prefers to maintain the lane even thought it is too narrow... I don't get why it cant detect the barrier and realize that driving over the line is safer?
This is my big issue with the whole "FSD" - roads are not uniform and driving conditions are not always ideal.
Let's say FSD is fine for 80% of the time, but what happens if drivers depend on it and it is now in the 20%?
I think giving drivers a false sense of security is very harmful.
Psychologists have shown time and again that as car safety improves, driver aggression only increases to remove the benefits.
ABS - You can now drive faster knowing you won't skid...
"lane assist" has shown that people will depend on it and allow their cars to drive right off the road and into the ditch (youtube has a lot of these). FSB will only be worse.
The rain can be no joke here. I've had to stop driving and pull over for the first time in my life after moving to NC. The rain was too heavy to see more than 10 feet or so.
Rain + city lights can make it hard to see even the road markings in front of you. A self-driving car would theoretically have an advantage here (it can just refer to maps), but in practice it's a super hard problem to solve.
Rain fade affects GPS, does it not? Is the car relying on inertial navigation for a period? Rain like this is not something a camera and an iffy GPS signal can use to keep you on the road. With that intensity and amount of rain, a camera definitely couldn't see.
Quick add: this was at 12PM (noon) on a countryish road. City lights had nothing to do here.
> Considering it's still being sold and marketed as 'Full self driving'.
The problem is that this isn’t what they do. They are extremely careful in their wording.
Go back through the archive.org backups of tesla.com/autopilot and watch how their wording changes.
Here’s what they say today:
> Tesla cars come standard with advanced hardware capable of providing Autopilot features, and full self-driving capabilities—through software updates designed to improve functionality over time.
Despite how you or any reasonable person reads it, Tesla can easily make the case that the cars (as sold today) do not have full-self driving available right now. The hardware in the cars does have the capability of allowing full self-driving at some nebulous point in the future (something that’s going to be very difficult to disprove).
The other thing they do is use Autopilot (capital A) as a branded feature that doesn't mean anything like what autopilot (lowercase a) means to a reasonable person.
I've been on the FSD beta for a long time now, and I have to say it's remarkably improved over the last year. If it continues to improve at this rate for another year, it's going to be pretty useful. I use it to drive around Seattle (which is a difficult place to drive, with randomly followed infrastructure standards, randomly driving drivers, and randomly parked cars), and it has gone from a death-defying experience of gripping the steering wheel and yanking control back from bone-headed moves while everyone looks at me askance, to a relatively chill experience. Yesterday I needed to take it in for some fender work (my wife isn't the best driver, and I got FSD as a donation to self-driving research in the off chance it'll let her someday delegate driving to the car, but since that hasn't happened yet I have periodic body work that needs to be done as she pachinkos through the streets). The Tesla service center is in an industrial area off an insane interchange on I-5, with crazy off-ramps and strange turns. I always get lost. I let FSD drive door to door, and it did it flawlessly. That surprised me, but after each update it's been getting considerably better.
It still has flaws - it widens its position in the lane on every on-ramp, which is annoying, and weird road conditions can confuse it about what to do next. It sometimes doesn't anticipate the turn signals of someone changing lanes, leading to people flipping me off for not giving them space. But all in all, given the state of adaptive cruise control 10 years ago, this is a remarkable achievement to put in a consumer-grade vehicle.
It's very different from the approach taken by Waymo and Cruise. Tesla's FSD uses vision only, and it's available everywhere in the US and Canada. The fact that it can complete most drives successfully (~90% of the time) without intervention in any city in the US or Canada is pretty impressive IMO.
The parent poster is right, I've been watching the progress and it's gotten probably 20 to 50 times more reliable in the last year.
Other solutions are geofenced, not because they could not ostensibly function in other environments, but because it is criminally reckless to do so. Allowing a safety-critical system under test to operate in unvalidated situations is stupid. Allowing a safety-critical system in development that is currently unsafe to operate to be operated by untrained customers in unvalidated environments is criminal.
Waymo had a fully driverless test vehicle in 2015, before Tesla Autopilot was even released. That is a superior system in a specific operational domain. They continued improving it for years with multiple order of magnitude improvements before allowing customers to use it in the same, highly constrained operational domain. Only Tesla is reckless enough to look at a product as good as what Waymo had 7 years ago, decide to ship it, and expand the operational domain to untested situations while they are at it.
This is the thing that always baffles me about self driving - while it may be very cool and impressive from a technology perspective, and while of course no technology can start out as completely bug free, when it comes to driving, anything less than 99.9999% success is a worse driving experience.
That is, until I can take a nap in my car, or read a book, or whatever, what's really the point? If I have to keep my hands on the wheel (or yoke) at all times, and have to pay attention 100% of the time, I'd rather be in a mode where I actually really do need to pay attention 100% of the time, instead of one where I really only need to pay attention 5% of the time (oh yeah, and if I miss that 5% I'm in an accident or dead).
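To put rough numbers on that intuition (the per-mile success rates and annual mileage below are purely illustrative, not measured data for any real system), here's a quick sketch of how per-mile reliability compounds over a year of driving:

```python
# Illustrative sketch: how per-mile reliability compounds over many miles.
# The rates and mileage are made up for illustration, not real FSD stats.

def p_at_least_one_failure(per_mile_success: float, miles: int) -> float:
    """Probability of at least one failure over `miles` independent miles."""
    return 1.0 - per_mile_success ** miles

annual_miles = 12_000  # roughly a typical year of US driving

for rate in (0.999, 0.999999):
    p = p_at_least_one_failure(rate, annual_miles)
    print(f"{rate}: P(at least one failure per year) = {p:.4f}")
```

At "three nines" per mile, a failure within the year is essentially certain, while at "six nines" it drops to roughly 1%, which is why a figure like 99.9999% is about where hands-off driving starts to make sense.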
In stop-and-go traffic in a city, just having to be aware of whether it's about to murder you is a blessing compared to screaming and shaking your fist at everyone on the road. There's a lot of advantage to being able to zone out with highway-driving levels of attention vs active driving. I don't need it to let me sleep; I just need it to do most of the thinking while I monitor its performance - the cognitive load of that is very low vs active driving.
This is what people aren't paying attention to. The rate of progress over only 1 year is astounding.
Driving with FSD supervised by a driver already makes driving far safer and far less stressful. The car has a situational awareness that humans can't match. It's only a matter of time until most drives have 0 disengagements due to human discomfort/awkwardness.
> Driving with FSD supervised by a driver already makes driving far safer
Does it though? Waymo said that their research indicated drivers tend to get complacent and assume the tech will bail them out, thus they pay less attention and make more mistakes.
The fact you’re not surprised that a car can drive itself is telling. And not in the positive way you want to make it seem.
It’s magic man. You’re jaded, and have set the goal posts too far and likely increasingly far.
WRT delivery timing, who amongst us has delivered a complex, R&D-heavy software project on time? I'm not an Elon fanboy, but I respect that he laid out a vision and worked to execute on it. That he didn't hit the mark with 100% fidelity doesn't seem like an issue to me.
> You’re jaded, and have set the goal posts too far and likely increasingly far.
Not the original poster. Tesla has set the goal posts too far, not commenters here. And yet, they are misleading consumers about how far the goal posts are.
People aren't saying Tesla's current features aren't marvelous, but that it's not the magic that was promised.
> WRT delivery timing, who amongst us has delivered a complex, R&D-heavy software project on time?
Probably none of us. However, most of us haven't overpromised on the delivery time when we know that the project will take longer than what we have conveyed.
Why would you be surprised? Google did that with no safety drivers over 7 years ago [1], before Tesla even released Autopilot, after 6 years of development. The gap between that and a proper autonomous vehicle with no geofencing that is safer than even the bottom 10% of human drivers is immense and has not been achieved yet. That Tesla has achieved less in 6 years than what Waymo did in 6 years when they were trailblazing the field over a decade ago is disappointing. That Tesla executives believe their Full Self Driving product is acceptable to sell and ship to untrained customers, when systems that are nearly a decade more advanced and tens to hundreds of times better are deemed unacceptable, is criminal.
No, the question is whether Musk committed fraud by selling _in 2016_ self-driving car software which he is still unable to deliver in 2022, and the answer is YES whether or not you are "jaded".
You are a lawyer? Or is that just your opinion based on what you personally believe fraud should mean (or on how much or little you like Musk)? Thanks for clarifying.
OP is talking about being surprised by steady progress toward a major human achievement, one that could save a million lives for starters, but also one that in its mature state (where robots can visually interpret and navigate 3D space) unlocks even more labor-saving automation than cars, buses, and trains put together.
It has the dark side of military uses, sure, but given how developed our tools for blind destruction already are, robots that can see the world will make a much bigger dent on the positive side.
Progress on this is genuinely exciting and people should let themselves feel that even if they don't like Elon Musk.
Thank you for your thoughtful comment. It is quite sad to see how eager people are to throw stones. It’s always the darkest side of HN that comes out in these threads.
Seattle's road infrastructure is bafflingly bad and confusing for drivers. But it always seems wrong to me to assume that we shouldn't improve the infrastructure and make it safer, versus putting half-baked self-driving out in the real world to overcome it. I understand they want more data to feed into their ML datasets, and the idea is that this will improve things over time, but we are sacrificing some number of lives for this data without knowing if a convergence point will actually be reached. And what you say at the end has led me to actively avoid Teslas when I can; they make the road more unsafe for me while they are learning, not just for their own drivers.
I was in one a couple months ago. My friend likes it but was also clear that you have to pay attention to what it's doing because it makes regular mistakes.
Yep. After the last few updates, I went from a non-believer (you need LIDAR; vision-only autonomy is at least a decade away, and Tesla fucked up going this route) to actually believing that in 2 or 3 years we'll have a very nice and useful level 3 from Tesla, and that vision-only is the right approach (because I want this to be affordable, not some tech only robotaxis have).
And I think by 2027 we'll have level 4 vision only.
It's unlikely that the rate of improvement, however you estimate it, will continue at the same rate. This is a Pareto effect, with marginal gains ever more difficult to obtain.
And the edge cases FSD can handle will be limited by the sensors they do (not) install. While they make a good point that sensor fusion can impair comprehension, the removal of radar makes the most calamitous edge cases harder to avoid.
I actually expect a nice boost of progress as the occupancy network is improved and used more by other parts of the software stack.
And I expect a fairly significant improvement when they release the HW4 hardware upgrade. Better cameras and more processing power are such "free" (as in effort, not money, of course, lol) improvements to their error rates. I'm surprised they got this far with their current hardware. If you've ever done real-time computer vision, you know how hard it is to maintain decent performance (fps and latency) even on fairly beefy GPUs.
Tesla's lawyers state: "mere failure to realize a long-term, aspirational goal is not fraud"
They intentionally neglect to acknowledge Musk's many specific claims that involved specific timelines, as opposed to the nebulous long-term aspirations they allude to.
Claims about what will happen in the future and on what timeline are obviously speculative and best guesses at best. No reasonable person would interpret them any other way.
But that would just be failing to assume good faith from the onset, which is a problem on its own (especially if we’re trying to interpret legal stuff).
I’m not sure what you mean exactly. If you’re suggesting that repeated failure to meet ambitious target deadlines is more indicative of bad faith than a single failure to do so, well… I don’t think it works like that.
The bar for abandoning the presumption of good faith, and the principle of charitable interpretation, is very high.
> I’m not sure what you mean exactly. If you’re suggesting that repeated failure to meet ambitious target deadlines is more indicative of bad faith than a single failure to do so, well… I don’t think it works like that.
Sure it does! Elon Musk isn't 5, right? He's capable of observing the rate of progress and adjusting his pronouncements.
Sounds like a contract with specific terms! If we’re going to look at this as a contract, we’re going to need more than intuition: we’re going to need some contract law, and we’re probably going to need to look at all the disclosures and fine print involved in the purchase of FSD.
Fraud does not need a contract. My understanding is that it has three elements: first, the perpetrator made a false statement of material fact; second, the perpetrator knew the statement was untrue; third, the perpetrator intended to deceive the victim.
I welcome the public's acknowledgement that self-driving vehicles are completely infeasible for even the medium term future. It's just unfortunate that the trigger for this change seems not to be anything about snow, or the inability to replicate human flocking on buried/unmarked roads during winter, etc, but instead more about the current political fallout from Musk's take-over of Twitter.
Musk's Twitter takeover is just an added layer of bullshit on top of the FSD lies. People have paid thousands of dollars for a "feature" that will run them into center dividers, trains, and children, but disengage a fraction of a second before impact so it can't be blamed. Tesla also requires NDAs to be signed to uphold service agreements, and fights tooth and nail to keep FSD's failures out of the news. Meanwhile, Musk is overpromising the feature, saying it'll be done in a few months and that it's the future of the company.
So you'd think if it was so important it would be his primary focus during his supposed 22 hours a day of work.
People can see the problems with FSD and then look over and Musk bought a fucking social media network to go play edgelord troll on. So it's a legitimate topic of discussion because the public is to believe FSD will be ready Real Soon Now™ and is super serial important yet it's obviously not the focus of the huckster that's been continually lying about it.
Self-driving vehicles seem only feasible under one condition: you have isolated roads that allow no other motorized vehicles, no pedestrians, no bicycles, etc. In that case, a robust computerized system with built-in failsafes seems almost entirely plausible (although hacking-related takeover is definitely an issue).
Part of this is just legal issues - i.e. our legal system will hold human drivers responsible for fatal car accidents, via civil suits and/or criminal prosecution, but corporations are largely protected from liability (excluding special cases like Ford Pinto gas tanks etc.).
With self-driving cars, the corporation will always be held directly responsible, even if the relative safety risk is lower. It's similar to airplane disasters for that reason (i.e. airplanes are relatively much safer than personal vehicles, statistically speaking).
This is a weird amount of hubris for someone who either doesn't understand the goals of current work on self-driving cars or is just clueless about the field.
Tesla's attempts at self-driving capture absolutely nothing besides... Tesla's attempts at self-driving.
There are companies with realistic sensor stacks, not artificially imposing restrictions so they can pre-sell their results, actually focused on self-driving.
A true L5 vehicle is an impossibility, because that'd mean handling the frozen tundra as well as it handles rush hour in Mumbai or a wildfire in California. And laypeople tend to really lean into that...
But for a realistic definition of L5, "feasibility" is a low bar already cleared by the major players other than Tesla. So saying "completely infeasible" kind of kills the credibility of everything that follows.
Dismissing "snow" as something that happens only in "the frozen tundra" is fairly provincial. You'd be surprised by how much of the USA (and other industrialized countries) have roads and road edges mostly obscured by snow for weeks at a time multiple times throughout winter. There's a reason almost all autonomous driving tests are done in places like Arizona. And the ones done in winter settings tend to be absolute positioning based (ground penetrating radar maps, etc) or non-human vision modalities which don't actually match how human drivers flock.
> Dismissing "snow" as something that happens only in "the frozen tundra" is fairly provincial.
Who did that? My comment certainly didn't by any reading of it.
> You'd be surprised by how much of the USA (and other industrialized countries) have roads and road edges mostly obscured by snow for weeks at a time multiple times throughout winter
Again, maybe you would, but I'm not.
> There's a reason almost all autonomous driving tests are done in places like Arizona
Yes, because they have good weather. It turns out that compartmentalizing a problem as large as autonomous vehicles is a smart idea.
It's not like you can't work on precipitation just because the local weather is nice. Regardless of climate, we use things like rain and temperature chambers, and there are AVs in places with bad weather, but that's not where service gets launched, because again, compartmentalization is a good thing.
I guess all that is to say, no one in the AV field is unaware that bad weather is a thing, and yet we're still in the problem space.
> non-human vision modalities which don't actually match how human drivers flock
Is this supposed to be a gotcha, or did you not realize that AV companies are not trying to blindly replicate human vision?
You'll find Tesla is the only company silly enough to imply nonsense like that.
>Is this supposed to be a gotcha, or did you not realize that AV companies are not trying to blindly replicate human vision?
Unless the machines fail the same ways human drivers do, and leave the absolute-positioning-based lanes to follow the emergent human-made ones, they'll cause crashes. Sometimes doing the right thing is the wrong thing, because everyone else is doing the wrong thing.
Again you're showing that you're not familiar with the field you're talking about.
Being able to follow vehicles in front of you outside of lane lines is below table stakes in this field; you wouldn't even get into the casino. Mobileye was doing that 11 years ago with nothing more than an OTS radar and a single B&W camera.
I mean, again, do you really think saying "humans don't perfectly follow the rules of the road" could possibly be a novel concept to anyone in the field of AVs?
There's nothing like being in a field to realize how far HN hubris actually goes. Someone reading your first comment would think you're a thought leader with how flippantly you write off an entire industry, yet here you are 2 comments in and slowly re-discovering the ground truths that the industry is already built on.
I'm not talking about following other vehicles in front of you. I'm talking about following the paths that other vehicles, now long gone, have left in the snow/ice/mush.
>do you really think saying "humans don't perfectly follow the rules of the road" could possibly be a novel concept to anyone in the field of AVs?
I do think that driving in cold regions seems to be a novel concept because every time I talk to someone in the know like yourself they latch on to the wrong side of the idea immediately.
Not realizing that we can follow other cars is somewhat understandable. But bringing up following tire tracks in the snow at this stage is so utterly nonsensical that it simply doesn't pass muster under any reasonable parsing of the statement.
Millions upon millions of people who can be served without having to checks notes solve following tire tracks on the ground... but that's your current bastion for why self-driving is "completely infeasible"
No good deed goes unpunished apparently, because here you are now wearing the misguided nature of our charitable interpretations as a badge of honor.
I've owned Teslas since 2016. At no time did I base my purchases on what the cars might be able to do in the future, only what they can do right now. Nor did I see any compelling reason to donate $10k - $15k to their R&D effort. The cars were compelling enough on their own merits and remain so. The FSD fiasco really does a disservice to what is otherwise a revolutionary car company producing excellent automobiles.
Agree with this and I also haven’t paid for FSD. Except at some point Tesla offered FSD along with a computer upgrade for about $1500. I took the deal. But I didn’t pay for FSD when I switched to Model Y. Interestingly, I’ve been able to turn on the FSD even though I didn’t pay for it.
Yeah, excellent aside from the subpar build quality and horrible service. The EV tech is the main part of a Tesla that was revolutionary. Unfortunately, this is all irrelevant to the self-driving fraud perspective and to the multitudes who bought cars, or more importantly stock in the company, based on future prospects that Tesla advertised.
We had social media before Facebook/instagram era too. Heck we had electric cars before gasoline powered cars became a thing.
I think it's obvious at this point that Tesla was the first to bring commercial electric vehicles into production at scale, and the first to make them compete with ICE vehicles for the masses - well beyond the niche who'd been driving electric cars since the Leaf era.
Prior to Tesla, there was not really any mass-market electric car or charging network. That seems revolutionary, since all the other manufacturers are now following Tesla's lead and making their own electric cars.
Unlike the Apollo LRV (Lunar Roving Vehicle), you don't need a spacesuit to drive a Tesla. Plus, I think a Tesla has more range.
As an interesting aside, the LRV had non-rechargeable batteries (two 36-volt silver-zinc potassium hydroxide non-rechargeable batteries with a capacity of 121 amp-hr, according to Nasa).
If only Madoff had tried this trick. He did not commit fraud against investors - he had really hoped to make them that much money, but he failed to do it.
The argument is being taken out of context. Not saying that FSD is flawless. But Tesla isn't calling it a failure. They are saying it's not yet achieved. And it's a perfectly sane legal argument. Again, this does not mean Tesla is right here, just that the title is a misrepresentation of their argument.
FSD was first sold 6 years ago. The FSD hardware has a warranty of 4 years. The average car lasts about 11 years.
Tesla no longer guarantees the functionality of the hardware, yet has failed to deliver the software. They are inching towards failing to deliver during the car's economic lifespan. Legally, would it be acceptable for them to deliver in 2032, when the first cars sold with FSD have long been scrapped?
I love these analogies. Like those people who didn’t buy Amazon stock in 2014 because they remembered WebVan. It’s internet. It’s shopping. They speak of the future. Hence it is exactly the same.
If GPT reasoned like this, you’d snicker about how it “just pretends to be intelligent”, would you not? :)
Who would have thought that a camera-only system would struggle to succeed where systems that leverage $20,000 LIDAR sensors and racks of specialized machine learning hardware have met with only limited success?
This whole thread smacks of this type of cynicism - apparently a lot of tech people now really, really hate Elon Musk.
Andrej Karpathy addressed this point in his recent Lex Fridman interview - more sensors aren't necessarily better, and for each sensor you introduce a whole host of supply chain issues and physical constraints. By sticking to one sensor type that contains all of the information theoretically necessary to make decisions they've simplified the problem.
They bet pretty big on this approach... including by making their own silicon. I don't know how anyone could really believe that they're not earnestly pursuing eventual "Full Self-Driving" capability, even though they may still be far from that goal.
What they've achieved thus far is really impressive. That took a LOT of work.
Andrej Karpathy designed a system that is literally 1000x worse in miles between disengagements than the systems deployed by Waymo and Cruise. It reached that level more slowly than just about every other company in the space and is improving more slowly than other companies that are already tens to hundreds of times further ahead. For crying out loud, they have not even figured out “Do Not Enter” signs, school bus stop signs, and not running into children and strollers yet after literal years of development. His opinion on the subject relative to the real experts, who do use more thorough sensor suites, is worthless.
That Andrej Karpathy admits that they hamstrung themselves by focusing on BoM, development costs, and production logistics when they have received payment for and are currently shipping a defective safety-critical product demonstrates either their total reckless disregard for all established safety process or total ignorance. Neither of which is acceptable when doing regular engineering, let alone safety-critical engineering.
That interview was a load of bullshit to justify the decision to stick with cameras.
The only reason why they picked cameras is because they are cheap, and they can put them in vehicles now, and promise they will be self-driving in the future.
Tesla saw that there is a huge demand for self-driving vehicles, so they decided to sell them, even though they don't have them. I wouldn't call that "earnest".
> Andrej Karpathy addressed this point in his recent Lex Fridman interview - more sensors aren't necessarily better, and for each sensor you introduce a whole host of supply chain issues and physical constraints.
That's a bullshit justification. Going camera-only and trying to overcome that limitation with "AI" was hubris. All of the competitors working on FSD realized cars need more, and more varied, sensors, not fewer. Humans use a variety of senses, from primary perception to proprioception, to drive vehicles.
The biggest problem with camera-only is that the system needs to recognize objects to understand them as road hazards. Humans don't need to recognize a thing in the road to think "don't hit that thing." It doesn't matter if an object in the road is a crumpled newspaper, a cinder block, or a cinder block covered by a newspaper; you avoid it all the same. Vision augmented with radar and LiDAR can say "that's a thing in the way" more reliably than vision alone. Glints from the setting sun or from windows aren't going to make radar think a solid object is no longer solid. A bad angle or odd shadow won't keep LiDAR from pulling an object out of the background.
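One toy way to see the redundancy argument (the miss rates here are invented for illustration, and real sensor failures are only approximately independent): if each modality misses an obstacle independently, the chance that all of them miss is the product of the individual miss rates.

```python
# Toy independence model: invented miss rates, not real sensor specs.
from math import prod

def combined_miss_rate(miss_rates):
    """P(every sensor misses the obstacle), assuming independent failures."""
    return prod(miss_rates)

camera_only = combined_miss_rate([0.01])        # hypothetical camera: 1-in-100 miss
fused = combined_miss_rate([0.01, 0.05, 0.02])  # camera + radar + lidar
print(camera_only, fused)
```

Even with individually mediocre sensors, the fused miss rate is three orders of magnitude lower than the camera alone, which is the crux of the "more modalities" case; to be fair, the independence assumption is exactly what the sensor-fusion critique pushes back on.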
> This whole thread smacks of this type of cynicism
Both Musk stans and Musk haters go wrong when they conflate his companies with his persona. Tesla is obviously a highly innovative company, and will likely be Muskless at some point, and probably better for it. Then they can make more reasonable claims about FSD and not be under multiple investigations, including a criminal probe by the DOJ for misleading customers.
The problem is making decisions based on correct road behavior - they rarely have a problem with actual detection, even in typical/regular rain.
Saying you want to climb Mount Everest and not being able to get all the way to the top is failure, not fraud. The fact that Musk charged people to watch muddies the waters a bit, though.
How about charging $10k for the “Full summit of Everest Experience” when you’re still sitting at base camp? Is that fraud or just failure? Or maybe it’s ok because it’s beta?
Definitely not. I sat down in my wife's new Model 3 to help her configure it to her liking, and I was surprised to see that autosteer is still listed as beta. Clearly 'beta' has lost all meaning.
"Doctored" as a colloquial term covers a lot of things, some of which are massive frauds and some of which aren't. It's misleading for sure to have the car drive around a bunch and release only the best 3 minute segment. But as long as the car really was driving, I don't think it's fraud, any more than it's fraud for a restaurant to post only their most perfectly plated dishes to Instagram.
If you release a video claiming a car was driving itself, and delete the footage where it was failing to drive itself - say, by driving into fences - then that is a fraudulent video. "Best 3 minute segment" does not capture the intentional deception, because the product is incapable of attaining the claimed capability.
Not too far off from "This saline injection will cure headaches" and pick the best 40% of trial results to film an ad
I am not familiar with the referenced video, but I thought Nikola [0] was in hot water (ie real legal trouble) for their video of an electric truck coasting down a hill.
Many people successfully climbed Mount Everest so this is not apples to apples.
Musk said it is economically insane to buy any other car, because in the near future they will update your Tesla with newest self-driving software, and you will be able to make at least $30,000 a year by using your car as a self-driving taxi. With Tesla at price-tag of around $60,000, he was advertising a staggering 50% annual rate-of-return.
If you want to make a similar claim, it would be as if he advertised getting to Mars (something that hasn't been done before) and promised to make it so cheap that you'd be able to sell tickets to other people at a huge discount and profit that way, but only if you buy his non-existent technology now.
I would venture to say that attempting to climb Mount Everest and failing but saying you had indeed climbed Mount Everest would be a fraud. How many times did Elon Musk say he climbed "Mount Everest" or "we have a fully self driving car now" when Tesla didn't?
More like, selling the Everest experience but failing to equip yourself with oxygen, climbing axes, etc., and not arranging for transport anywhere near the mountain. Instead, travelling to an idyllic mountain retreat 500 miles away. And then watching a video on a big screen of someone climbing.
Definitely a contrast between the sales pitch and reality.
That sounds uncharitable. The Tesla ML model is among the best on the market right now, with millions of miles of training video, lots of FLOPs spent on training and a top-tier engineering team, no?
Let’s say the system responds correctly to X% of stimuli; what’s the delta here? We do know that effort and difficulty increase exponentially as we approach 100%, but where’s the fraud? That someone thought they’d hit 99.999999% already but only hit 99.9%?
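To make that delta concrete, here's a back-of-the-envelope sketch; the one-stimulus-per-second rate is my own assumption, purely for illustration:

```python
# Back-of-the-envelope: mean time between wrong responses at a
# given per-stimulus success rate, assuming (hypothetically)
# the system handles one stimulus per second.
def seconds_between_errors(success_rate: float) -> float:
    return 1.0 / (1.0 - success_rate)

for rate in (0.999, 0.99999, 0.99999999):
    hours = seconds_between_errors(rate) / 3600
    print(f"{rate} -> roughly one error every {hours:,.1f} hours")
```

At 99.9% that's an error every ~17 minutes of continuous operation; at 99.999999% it's one error every ~3 years, which is why each extra nine is so much harder to demonstrate, let alone achieve.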
Did Elon advertise a system built with # miles of training video & FLOPs spent on training, or did he advertise a working system? I don't care about how much work went into it (well, I do actually, but that's beside the point); I care about how well it does what they say it does.
If you advertise a problem as already solved and just needing fine tuning, sell stuff based on it, and continue with the lie even after there are clear signals that it's likely failing, then I don't think it's just a failure: it's failure _AND_ fraud.
The fraudulent part is making absurd promises that anyone with an understanding of the topic knows you probably won't be able to keep. It is the part where you continue to push sales of this technology when it has been shown to have problems.
Being optimistic is great, but lying is still lying and ignoring facts is still ignoring facts.
The failure to understand the context of the quote is palpable in both the article and comments.
This is a standard legal argument used as a defence against allegations of fraud, and it’s not a judgment of how successful (or not) the product is. The only thing it means is “notwithstanding whether we’ve achieved our goals, we have been honest and well-intentioned”.
Well, that's also the thing - internal docs and interviews have shown that Musk was well aware that "FSD will be here, this year", year after year, was not ever going to be true.
> “Elon's tweet does not match engineering reality per CJ. Tesla is at Level 2 currently,” the California DMV said in the memo about its March 9th conference call with Tesla representatives, including the director of Autopilot software CJ Moore... In an earnings call in January, Musk told investors that he was “highly confident the car will be able to drive itself with reliability in excess of human this year.”
> “Tesla indicated that Elon is extrapolating on the rates of improvement when speaking about L5 capabilities. Tesla couldn’t say if the rate of improvement would make it to L5 by end of calendar year,” the document showed.
> “The National Highway Traffic Safety Administration has several investigations open, including a probe into why Teslas seem to disproportionately crash into emergency vehicles parked on the roadside. The agency has set no public timeline for a determination.”
I wonder if this is a case of U.S. regulators turning a blind eye to the problem so as not to jeopardize Tesla's standing in the global EV market, much the same way the FAA dragged its feet on grounding the 737 MAX.
I'm still baffled as to how there has been no meaningful punitive action taken against Musk for all the false claims he has made over the years.
I'd guess that the 'free market' should apply punitive action by losing them customers, but that relies on customers having (or even wanting) accurate information. I guess they get round some of the issues by having the autopilot transfer control to the driver just before crashing and can then pin the blame on them.
> “The person in the driver’s seat is only there for legal reasons. He is not driving anything. The car is driving itself.”
> Tesla workers later revealed that the video was fabricated, done in multiple takes, with the driving system’s failures removed, including a crash into a fence. The video remains on Tesla’s website.
Hah, despite all of the Tesla news I somehow missed that this video was a fake and that the car crashed into a fence while filming.
Tesla is a sham of a company. It's not only the self driving stuff; he's also building his company into a giant, faceless tech company on the customer-support front. Last week I was about to accept delivery of a Model S Plaid; I was on hold for an hour and 10 minutes with sales to even reach anyone, after hours of trying to find a number. I walked into the Porsche dealership and they pretty much were ready to bring out babes in bikinis and give me scotch. I know a lot of people dislike Tesla because of their CEO, but the actual company is a giant fuckup and it's like talking to a faceless human.
In the Mercedes dealership? They all drive Mercedes. In the Honda dealership, they drive Hondas. In the Tesla dealership? They drive super cheap, shitty cars, since they're effectively minimum-wage employees.
FSD beta is fairly good, certainly better than the standard Autopilot. It's definitely not perfect, but so far for basic driving from point A to B it seems to work well. I can only see the tech improving as it gathers more data and feedback. It's a ML training data problem at this point, no?
It's shockingly good and definitely worth 20% of the value of a car, for me. Two main reasons:
1. Your body doesn't feel tired after driving, because besides the effort of a hand resting on the bottom of the wheel, your body can be completely relaxed the vast majority of the time.
2. When FSD is on, you can completely stop paying attention to navigation-related decisions. You still have to pay attention to driving and intervene sometimes (most interventions are just a tap on the gas to give the car the confidence to go, because it's biased to caution) but every decision about when and where to turn is completely taken care of for you. This frees up a lot of mental overhead when you're in totally unfamiliar territory, including overhead that, without FSD, would pull your attention away from driving, making you less safe.
I didn't get the magic of #2 until I used it on an unfamiliar drive home in New England backroads with lots of turns and realized I was paying attention to the road and my conversation with my partner, and none at all to where we were going. Then we were home.
The one non-ideal thing about FSD Beta right now is that it's really easy to lose it for an arbitrarily long period of time.
What happens is it's fairly easy to have your hand on the wheel on a straight stretch of highway and still get the nag. And there's a small but non-zero false negative rate when you wiggle the wheel to satisfy the nag but your wiggle is insufficient. And because it becomes so habitual and because the car beeps at you for other reasons too, there's a fairly high rate (at least for me) of wiggling to satisfy the nag and failing to notice that you didn't wiggle enough.
The odds are small but after months of long drives it's pretty easy to accumulate the 5 "strikes" that trigger loss of FSD Beta without violating the intent of the nag, i.e. taking your eyes off the wheel or the road.
I'll also say that all of this value will probably be completely unapparent to someone who tries FSD Beta for the first time or infrequently, because like ChatGPT, FSD Beta is not a human-replacing AI that always gives you the right answer. It's a tool you learn to use, by building up a mental model of it over time.
The first time you use FSD Beta will be a lot more like the first time you drove a car in actual traffic, or the first time you drove with a student driver. You'll feel safe enough to keep doing it but it will be very unsettling.
But over time as your model builds of how it behaves, it becomes a tool you'll want to turn on in a growing number of situations. And you'll miss it.
The value will also be much less apparent to someone who does short (<1 hour) drives on familiar roads.
I am a bit annoyed that people think "resting the hand on the bottom of the wheel" is enough.
Why can't people hold the wheel properly, with two hands, in ten-to-two (or quarter-to-three) position? If something unexpected happens, you can react much faster.
Real FSD, where it is not necessary to have a human ready to take the controls at any time, isn't something you can just get more training data for. It's just that current ML methods aren't up for the task. Basically, it turns out you need to have something that can think, and ML is nowhere near that.
Tesla seems to be taking the big ball of ML approach, and that relies on a model that can practically think, and that in turn means you're essentially waiting on AGI. Which is ridiculous.
Instead you break down the problem that is driving. Throw pieces of ML at specific problems, like traffic light classifiers, but have a much more structured approach to the high level decisions that are made.
The benefit of this is that you gain clear introspection into why your vehicle is making the decisions it makes, and a direct way to influence them other than "throw in more data and hope next time it makes the right decision!"
The latter is how you kill people, and it's what Tesla has been doing so far.
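To illustrate the contrast, here's a toy sketch of my own (not anything Tesla or anyone else actually ships): narrow ML modules handle perception and report confidences, while the high-level decision is an explicit, inspectable policy:

```python
from dataclasses import dataclass
from enum import Enum

class Light(Enum):
    RED = "red"
    GREEN = "green"
    UNKNOWN = "unknown"

@dataclass
class Perception:
    # Outputs of separate, individually testable ML modules
    # (e.g. a traffic-light classifier), plus their confidences.
    light: Light
    light_confidence: float
    clear_ahead: bool

def decide(p: Perception) -> str:
    """Structured high-level policy: every branch states exactly
    why the vehicle behaves the way it does."""
    if p.light is Light.UNKNOWN or p.light_confidence < 0.9:
        return "slow: low-confidence perception"
    if p.light is Light.RED:
        return "stop: red light"
    if not p.clear_ahead:
        return "stop: obstacle ahead"
    return "proceed: green light, path clear"
```

The point isn't the toy rules; it's that when this policy misbehaves, the branch that fired *is* the explanation, whereas with an end-to-end network the only lever is "throw in more data and hope."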
-
Hint for other readers: Please read the comment if you want to write a reply to it.
Literally nothing was said about people wanting a car you can just get into, but of course the first reply didn't make it 2 sentences in before jumping to completely orthogonal conclusions to anything that was even remotely implied in this comment.
It just gets exhausting to have a conversation about a topic like this if you have to repeat every point 10 times over.
Like I said in my edit since I was rate-limited, this is a nonsensical response to what I said.
> Basically, it turns out you need to have something that can think, and ML is nowhere near that.
This is what I was replying to. The comment I replied to didn't say a word about hopping into your car and inputting an address, and my comment repeatedly mentioned ML, so it's really hard to imagine you actually read either and came away with that erroneous a conclusion.
FSD beta shows that Tesla is not working on self-driving.
They've frozen their sensor suite in place because Elon wanted to be able to pre-sell FSD.
The class of mistakes it makes demonstrate that Tesla is *willfully* marching into the local maxima problem: They're taking an approach that improves their actual product, AP, while sacrificing any hope of actually solving FSD.
Full self-driving and what Autopilot actually does take two completely different approaches. AP needs to be able to gracefully hand control to a non-trained layman. Current L3/L4 vehicles that rely on a driver have trained operators ready to take over while the vehicle thinks everything is fine, and they need to aggressively make decisions like no-go, while AP never no-gos.
For people who are in the self-driving field and are familiar with the approaches to perception and decision making (which I hope more of which will chime in here), FSD beta makes mistakes that are kind of like someone saying their robot can now feed you steak, and the robot starts by grasping the sharp end of a knife and cutting the steak with the handle. Sure you might make some progress getting mashed steak into your mouth, but the robot is cutting off its hand to do it. It's not going to result in much steak in your belly unless the robot's approach is completely reworked.
Advertising and selling a feature that you don't have sure seems like fraud to me.
If it's "the feature is nearly ready but just needs some polish", there's an argument to be made that maybe you were just being a bit too hopeful. But "years and lots of fundamental research away, and we're not even sure what sensor systems we're going to rely on"? That's just fraud. You're looking for early-stage investors, not customer preorders, at that point.
>>Tesla’s Full Self-Driving technology may be a failure, Tesla lawyers admit — but it’s not a fraud.
But it IS a fraud when you state or imply that it is usable with capabilities X, Y, and Z, and it actually is not.
Merely calling it "Full Self Driving" is a very clear statement that the damn thing will drive itself, and do so fully.
1st DDG Definition: Full: "2. Complete in every particular."
So, it is supposed to drive itself. Moreover, Tesla & Musk claim this capability is complete in every particular, as in it does it itself, not requiring me to add any capability.
This FSD claim has been going on since like 2014, with "autopilot Hardware 2.0" shipping when Barack Obama was still president [0].
Yet we still have nothing resembling actual "Full Self Driving" - which would be get in the car, tell it where to go, and get there without touching any controls or worrying about errors.
It'd be one thing if it was promised last year and it was not quite here yet.
But this is closer to a decade and FSD is NOWHERE CLOSE to this capability. This is one of the biggest vaporware scams ever.
Tesla used to market this as "Full self driving hardware" on their website. They conveniently did not mention that you would also need the software, which still does not exist.
As a software engineer, I simply can’t trust software to drive my car at 65mph in highway traffic. I have a Model Y, but never bought Autopilot. Even if Tesla paid me $15k per year to use Autopilot, I’d say no: it is not worth the risk of a crash. But it always surprises me when I see so many software engineers here in Silicon Valley driving on freeways using Autopilot while watching videos or napping in the driver’s seat. My coworker even let Autopilot fully drive along the cliff roads in Yosemite. What a big heart…
A large factor in the valuation of Tesla is the notion that Tesla is not "just" a car maker. There used to be a good case for that: Tesla, by necessity, led the way in a more integrated electronics architecture for EV controls and infotainment. Self-driving looked promising. Breaking the dealer model in the US was very attractive to anyone who had a bad experience with car salesmen.
But, in the end, everyone puts their trousers on one leg at a time. Car OEM valuations will probably be higher than in the ICE age, and probably lower than $TSLA is today.
- But what really sets this product apart from others is its long-term, aspirational goals!
- You mean features?
- The problem with features is that they can also turn out to be bugs. Our innovation here was to do away with features.
- To get rid of bugs?
- That's right, to get rid of bugs: that's what sets aspirational goals apart from mere product features.
- But ..
- Product features can also turn out to be downright frauds. "This product features x" when it doesn't. Now that is fraud. But "this product aspires to x" and jeez, it fails to reach its goals. We're all human after all.
- selling something as a commercial product, claiming it can do something it actually cannot, is not failure, it is fraud. Failure is selling something stated to be an experiment: "use at your own risk, the price is low because of that";
- consider our existing kinds of "piloting automation" in the air and at sea, and notice it's FAR EASIER there: it's a fluid environment without constant, very near constraints. A ship travels from A to B; sure, it needs to watch for the occasional other ship, floating obstacle, etc., but on average these are FAR FEWER than "other cars on the road" and far away; no ship normally travels as close to another as one car does to the next. Similarly for planes, where there is even a third dimension of movement.
Long story short:
- ALL modern companies sell experimental cheap crap as if it were a finished, high-quality product, because the private research model IS A FAILURE; we need PUBLIC RESEARCH for public development, where the private sector picks things to implement and sell BUT does not drive the research;
- ALL efforts to mimic something old with new tech are doomed to fail. The computer as a featured typewriter was an idiocy; likewise trying to mimic the physical office desk on a computer (see for instance the old General Magic UI), and so on. People DO NOT WANT TO CHANGE, and that's why keeping the old paradigm in new tech spreads well commercially and ultimately fails, creating big disasters. That's why we need culture, which means PUBLIC schools and universities, open to anyone for free; but we need to change to make evolution happen. We can't get FSD for cars, but we could get it for flying cars; yet we can't get the flying cars we've all dreamed of for decades in modern cities built for cars on roads, and so on. The public does the research for the sake of humanity, not a single cohort's profits, and spreads the new knowledge via public schools and institutions; the private sector pushes that change into society, and the market creates the equilibrium.
This header is misleading. Lawyers are making a legal point, not a prediction on the tech or where it stands and certainly not saying "its a failure". They don't even have the technical insight to make that assertion.
They're also not saying it's "a failure", they're saying "failure to realize a goal is not fraud". There is a strong semantic difference between "we have failed to realize the goal (yet)" and "we have failed at realizing the goal (permanently)".
As a former Tesla owner (as of one year ago), their claims are just ridiculous. How can Tesla talk about FSD when they cannot even get simple cruise control right on a street that is not a highway with a separator? More like FFS than FSD.
Advertising always had a fraudulent side to it, by not telling the whole story or showing an idealized version of products, but Musk is taking it to a whole new level.
I view self-driving cars (along with drones, ubiquitous encryption, and distributed manufacturing) as "post-WW3 technologies". The same way that atomic energy, computers, jet engines, and interstate highways were "post-WW2 technologies". These are things that could be done with current technology, but the current state of infrastructure & societal organization makes them uneconomical, and no single private firm has the market power needed to force through the social changes needed to drive adoption. However, they give a huge advantage to one side in wartime, where you are deliberately trying to kill people rather than avoid killing people. That gives the side who uses them the ability to win the war, which then gives them the political power to force through the societal & infrastructure changes needed for mass adoption. The war also gives the general populace widespread exposure to the new technologies, which makes them less scary to the winning side and drives political support for rearranging society to benefit from them (they still remain scary to the losing side, but they'll be dead).
We're at peace now, which makes self-driving cars kind of a non-starter. But that's not guaranteed to hold forever, even in the near future, nor is it even true in many places of the globe (witness Russia/Ukraine). It also may not be the point - remember that self-driving cars initially came out of a DARPA grant.
A lot of the technical problems with self-driving cars are easier to solve with widespread infrastructure changes. For example, you solve construction by mandating that construction diversions use smart traffic cones with a small transmitter that coordinates with the other cones in the area to signal which lane has priority, and then broadcasts that to oncoming self-driving cars. You solve pedestrians by having a proper street/road split [1], where pedestrians are banned from roads and cars are banned from streets. Heck, you can make the whole problem a lot simpler just by banning non-self-driving cars from roads.
If large infrastructure changes are in the mix, then the self-driving problem is trivially solved using a pair of metal guide rails. They also greatly increase the cargo capacity of the vehicle, a win-win.
There's still a product requirement missing there, which is the ability to get to lots of different places.
I think there's a lot of economic logic in a vehicle that can navigate easily constructed roads - without any smart devices embedded in them - and yet still needs metaphorical guiderails for the exceptions like construction zones and downtown populated areas. That still keeps road construction cheap (just asphalt and paint!) but gets you 100% of the way to where you want to be. If you add rails you need to lay them everywhere that a road is now, and you need switches at each intersection.
Tesla cars are sold on the value as they are now. The potential features of the future are only an interesting possibility. It's not that different from any hardware/software on the market. Sure, Photoshop might have full self-driving AI that magically makes images for you someday (and Adobe does promise some big things), but people buy Photoshop for what it can do right now.
Tesla never used lidar. They did use a low-resolution radar and dropped it because they felt it was holding back their vision-based development. At the time, Musk commented that radar was not useful unless it were higher resolution.
There have been credible rumors of an upcoming addition of high-resolution radar to the Tesla sensor suite, consistent with Musk's earlier remarks.
You know, where Theranos went wrong was not selling some other thing and then tack on their “Full blood test in just one drop” thing as an add-on. Oh and making sure it’s labeled beta. That’s very important.
If you look at the track record of Musk, you can see that he promises something futuristic and delivers something conventional but with improvements that's still desirable.
Had Holmes imitated not Jobs but Musk, today she could have been a success story. She should have still promised all your tests from a single drop of blood, kept claiming it was coming next year, and delivered classical machines but with greatly improved usability, integration with modern hospital management systems, and fart jokes. They even had a prototype that worked some of the time for some of the tests. Sell it at an outrageous price and warn the users that all the tests should be repeated on a traditional machine, because this is a beta device.
Also, she shouldn't have relied on people from the establishment for PR; she should have partnered with popular-science YouTubers. Just send them a machine to play with. They wouldn't know if the machine is any good; they'd be grateful for having access to such a device and would make it the cool way of testing your blood.
I think where we can all agree that Musk is a genius is in seeing where billions of public funds are going to flow, and then positioning himself to be in a position to profit.
That was a key difference between Theranos and Tesla/SolarCity.
Imagine if Elizabeth Holmes had gotten just a bit luckier with timing and had been around when the COVID pandemic occurred. Not only would she have launched a highly-futuristic COVID test, but also received billions of funding with nearly no oversight. She almost certainly would have negotiated to receive proactive immunity from the government, like the vaccine manufacturers did.
Good point. I like to think that people ultimately have good intentions, and maybe even the guy from WeWork could have made it work somehow if they'd had a bit more integrity and actually tried to kick the can down the road until they could deliver something with value. When you can stay alive for long enough, you greatly improve your chances of great timing. Had Holmes survived till 2020, she might have become a success story with testing during the pandemic.
Delivering something useful is important, and Holmes failed at that. She should have delivered something like regular test machines with a design that resembles something from sci-fi culture, and she would have been off the hook for the delays and failures.
Maybe she should have done re-skinned Siemens machines. The original Tesla Roadster was based on Lotus Elise.
Milton from Nikola would be a better example, in my opinion. You had him rolling a truck down a hill to show it works, despite the fact that at the time it was still technology in development (the company is still making progress now). It still shocks me that he is going to prison for that as fraud when it was just a commercial. I don't know if you know, but a classic trick in advertising food is using cigarette smoke instead of hot steam to show hot, fresh food; obviously real steam would cover the camera's lens and render the image useless. Also, most close-up food commercials use food made from rubbery goo rather than actual food, because it shines better and "stays" fresh for longer than an hour when you want to do multiple shots over hours. How come McDonald's management is not facing prison like Milton is? It's because everyone knows a commercial for a product does not equal the exact product you will pick up off the shelf and eat. Nobody at McDonald's is going to jail because they attempted to sell people plastic goo covered with cigarette smoke.
You make it seem like the commercial was the only evidence used to prove that he committed fraud. He made statements, both in the media and in legal documents, that proved in court that he committed fraud.
I suspect that if McDonalds' management were only pretending that they were opening new burger franchises to dupe investors and they'd managed to raise billions in funding by pretending a single mockup restaurant with a rubber burger and some city stock footage was a restaurant chain, they would be quite likely to serve jail time. But McDonalds is, in fact, entirely capable of making and selling burgers. Nikola is not capable of making and selling functioning hydrogen fuel cell powered cars, despite lying about already having that capability seven years ago when their stock soared.
Doesn't matter what consumers know about authenticity of advertising imagery, it matters that McDonalds investors aren't being lied to about the existence of the Big Mac and Nikola One investors were being lied to about the existence of the Nikola One's propulsion system. Which is why "other companies that aren't amongst history's biggest pump and dump scams also get creative with product images" was never going to be a viable legal defence for Milton.
Well fortunately Tesla has a laser-focused CEO that is devoting his time to making sure a feature which he clearly stated was absolutely critical is being responsibly developed and marketed.
Really grateful for that—I’d hate to think this publicly traded company that had a sky-high valuation that was, in part, based off this promise was being led by somebody who is off in the weeds worried about social media or something.
At this point I'd imagine Tesla is better off with Musk being distracted by his shiny new toy.
Though I'd be concerned about the effect he is having on their brand image. Even two years ago I was on the verge of buying a Tesla but ended up cancelling my order, in large part because I just couldn't stand the obnoxiousness of Elon, and he has gotten orders of magnitude worse since then. His new "demographic"/cult also seem to be climate change / green energy skeptics; I'm not convinced they will be gobbling up Teslas at quite the same rate as my demographic would.
My father finds Musk entirely reputable. I am currently subject to at least a 30 minute call a day from him, a retired model S owner, who has nothing left in his life other than the Tesla. The calls are 30 minutes of him reeling off Tesla PR bollocks. He chased everyone he knew away, speaks of nothing else and cannot see reason or error in this or any of Musk's things. He thinks the Twitter thing is a freedom grab, he literally thinks SpaceX are building rockets to take the good humans off and build a new world order on Mars. To get the model S in the first place he cashed in his annuity pension and sold his house and is now on a downward spiral of financial oblivion.
This is exactly cult behaviour.
I'm at a loss of what to do other than refer him to mental health services here in the UK.
I couldn't even possibly consider owning something associated with someone so utterly socially damaging on every front.
Edit: I'm actually fucking crying while I'm writing this. Posted anon as my other account has my GH and stuff on it.
Ugh, sorry to hear you're going through this. Unwinding an entrenchment is hard; I wonder if there's a support group or something for those deep in techno-optimism.
For me, the frame breaks have come from deep thought from both an engineering perspective and a cultural-scaffolding perspective, paired with real travels to places where technology was limited (either by practical circumstance, like going to a rural village, or intentionally, like a silent meditation retreat).
A graphic I still come back to is the good life project ( https://goodlife.leeds.ac.uk/ ) with inquiry around how many planets of stuff even the alternatives require from a planetary boundaries perspective. There's more technical works like https://escholarship.org/uc/item/9js5291m .
This is extremely sad and I hope you and your father can try therapy together or separately, but the fundamental issue is not Tesla or the car. Those are the symptoms. Best of luck.
It might never go away but that doesn’t mean it has to be filled with religion or something equally as bad. There is such a thing as community and public service that have some actual meaning and benefit to society, they just don’t get as much airtime as corporate ad spends.
Literally. People thought the death of God would bring enlightenment to society. Instead, God was replaced by stupid idols. You only have to look at the resurgence of astrology among young women.
We find ourselves living in a vetocracy, where very little groundbreaking work actually gets done. Critics rule the roost at this current cultural moment. So there's a lot of pent-up energy waiting to reward anyone able to articulate a concrete positive vision for the future. Of course people are going to bow down and worship the one guy actually making progress against all odds. The only other person might be Zuck with his metaverse bet, but although he has the concrete positive vision, unlike Musk he hasn't delivered (yet). Anyone have other examples?
I think we need to look at the assumptions made. Has Musk and Zuck actually improved the world or merely reorganised it to gain market for their ideological purposes?
Delivery or not doesn't imply improvement. Progress for the sake of progress is not necessarily an improvement either.
Yes and no. Yes, progress for the sake of progress isn't strictly an improvement.
On the other hand, conditions are ever changing, and our knowledge of the world changes our understanding of those conditions. To me these things nearly always imply that we should act in some direction to improve our lot. I don't see many people willing to put in the work. I see a lot of people willing to sit on the sideline and talk about how the people on the field are dumb, or how what they're doing obviously won't work, and how they're evil scheming villains who will surely change the world for the worse for their own benefit.
Would you kindly present an argument as to why one should believe we are living in a vetocracy? I see people pursuing all manner of projects all the time, I don't really buy that the world has been narrowed by criticism. What I see limiting people's ability to pursue projects they're passionate about is a lack of funding and opportunity.
People are, as are companies, but public works are increasingly bogged down by polarization, litigation, and bureaucracy. I think this creates a sense of unreality because campaign promises and outcomes end up feeling so disconnected from each other. 25 years ago it was more common to see public officials pop up in the news and say 'we're gonna start doing X' and then see construction crews or the equivalent at work the following Monday. It was also easier to keep track of 'progress bars' because the media landscape wasn't fragmented into 10,000 tiny snowglobes depicting a particular narrow view of the world.
The internet is part of the issue, but only part. Long-term political strategizing is another part, and a third factor is 9-11 which seems to have permanently traumatized the USA. As a simple example, information displays on subway platforms in my area are still putting up announcements about how public bathrooms in the subway station are closed for security reasons. It's been 21 years since 9-11 and major cities in the US are too freaked out to figure out basic things like public bathrooms and instead live in a permanent state of mental emergency.
A billionaire tells his critics they should build stuff instead of criticizing him, as if their opportunities to do so are equivalent, and as if criticism and building stuff are mutually exclusive? That doesn't hold a drop of water.
We didn't have enough respirators because of critics? My recollection is that we failed to invoke the Defense Production Act and impress companies into building them. This was based on the values of the Trump administration (who saw this as a violation of free market principles) and not an overabundance of criticism; as an executive order, it would have been immune to veto.
The piece goes on to argue the federal government is a vetocracy, and I can agree that Congress is in a state of gridlock like 90% of the time. But that isn't because of critics, that's because of party politics and deficiencies in the structure of Congress. This is a cynical power play to sabotage one's opponents, not criticism run amok.
I can agree that there's "vetocracy" in the sense of gridlock, but when you say "critics rule the roost," I remain unconvinced. Perhaps the answer is to be found in the book you recommend, I have not taken a look at that yet.
I think you're on to something. At every other point in history the wealthy were expected to use their wealth. Victorian wealthy funded folly and public libraries and other stuff. Roman aristocrats paid for entire armies.
Our billionaires subscribe to greed-is-good capitalism and hoard their wealth though. Musk bucks this trend and therefore gets a ton of my sympathy despite his erratic nature. We need more Elon Musks, not fewer.
Please, be aware that if you call the mental health services on someone it can be extremely difficult to unwind. Once a giant bureaucracy tags you as a risk, sane decision making can go out the window. No-one will want to say you're not a risk even if you are sane, because their reputation suffers if they are wrong. Only a trained psychiatrist can make such a judgement legally and they may have an hour with you once every 3 months to make it. The psychiatrist will be acting in their forensic capacity, and will be unlikely to agree that you are sane unless you agree that you have a personality disorder and need medication. Being in that position can 100% drive a sane person mad.
I am so sorry. I don't know what to advise other than talking to a family lawyer. Regrettably, the highly networked society we have collectively created has spawned its own sort of mass media and cultishness has become widespread because it is easy and profitable to engineer.
I don't know what to suggest for your situation - maybe talk to a family lawyer? I have the impression that the NHS is probably not up to providing the kind of mental health support needed to deal with a broad spectrum social problem.
This is not even the only case of Elon Musk related psychosis that I've heard of.
Sometimes I legitimately believe that the SCP Foundation or something like it is real, because it feels like elites who are total asses have reality distortion fields around them.
> It sounds like you are afflicted with the same affliction as your dad, just in the opposite direction (especially if you actually cried over typing this post).
I don't think it's delusional or obsessive to be sad that your dad is now financially unstable and fixated on a single thing to the point it has alienated the people around them.
You can be well adjusted and sad that someone close to you is on a bad path.
You can even resent people you blame for enabling that path.
You literally described it as an affliction. If you prefer the criticism to be, "don't pathologize someone based on a few sentences," fine, don't pathologize someone based on a few sentences.
If your stated belief is real, then I'd encourage you to reflect on why you were willing to violate this principle, and why, after being challenged on it, you deflected.
I'm sorry, English is not my first language, but I don't think "affliction" means "disease". I used it as a "cause of persistent pain or distress", which I still think the OP has, given that he/she admitted to crying while writing a blog post (you can read his/her other posts to see that it's a bit of an obsession with Musk).
Well I'm sorry, I didn't realize English isn't your first language, but I should have been cognizant of this possibility. (In my language community, it's more commonly used to mean "a cause of mental or bodily pain, as sickness, loss, calamity, or persecution." I think even this is broader than I generally hear it, it's rare that I don't hear affliction refer to a disease.) You also said they were "hysterical", which, unless you're using a quite different usage, generally refers to someone's mental health & their inability to manage their emotions; not something we should say about someone because they took the risk to share a painful experience in public.
It is common to dismiss someone discussing something they care about, on the grounds that they care about it; they created this account to talk about Musk, so we shouldn't be surprised that's what they're doing. They related something painful, so we shouldn't be surprised it upset them. None of this should be held against them or discredit them.
If you were in their shoes, and assuming everything they said was true - is there really a different way that you would have expressed it?
Seriously true. I bought my 3 in 2019, and my biggest hesitancy at the time was Musk being (insane) and tanking the company, preventing me from keeping the car on the road long enough or preserving its resale value.
Today? I do <3 my car, but I'd likely not touch a new one with a 10 foot pole; especially as a new customer, now that there are actually other EV options available. It's distressing because Tesla does (mostly) what I want in a daily driver well.
> At this point I'd imagine Tesla is better off with Musk being distracted by his shiny new toy.
I think that's probably correct. One question I find interesting: What is Musk actually good at?
He's clearly good at hype-driven PR and painting himself as a Tony Stark character. That has been great for Tesla; he's been able to raise gobs of money cheaply. Money that apparently kept Tesla from going bankrupt. [1] But that genius for hype also makes it harder to figure out other skills.
Looking at his track record, he got fired from Zip2 and PayPal. The jury's still out on Tesla; it recently got into the black, but it has always had its troubles, and its first-mover advantage is eroding. As is its stock price, down by half recently, something surely not helped by Musk's delusional promises (e.g., 1 million Tesla robotaxis by the end of 2020). The Boring Company and Neuralink both look troubled. [2] [3] And of course Twitter is an extremely public clusterfuck.
That only leaves SpaceX as possible evidence that he's good at anything other than hype and raising money. But recently a former SpaceX intern posted about his experience there [4], saying "Elon was basically a child king. He was an important figurehead who provided the company with the money, power, and PR, but he didn’t have the knowledge or (frankly) maturity to handle day-to-day decision making and everyone knew that. He was surrounded by people whose job was, essentially, to manipulate him into making good decisions."
So I think what we're seeing with Twitter is the real Elon. And yeah, if I were at Tesla trying to pivot from hype-driven startup to solid company able to compete with everybody from GM to Toyota to BMW, I would much rather Musk stayed at Twitter.
I am not an Elon fan or anything, but I don't think you can say the jury is out on Tesla. Tesla is the first successful new car company manufacturing in the US in more than fifty years. It changed people's perception of electric cars and changed the car market. Even if it has a ton of troubles going forward it will be a very valuable company. I think you have to give him his due for that.
It is perfectly possible that Tesla will be marginal or out of business within a decade.
Tesla did a good job hyping electric cars and selling them to an early-adopter market when they had no real competition. But now that the market is proven and the discussion has shifted, every major car company and a bunch of other players, possibly including Apple, are going after the much bigger mainstream market. If you look now at Consumer Reports and their recommended BEV cars, Tesla only has 1 of the 5 models listed, and its score is a middling 78, behind the 91 for the Kia EV6 and the 84 for the Genesis GV60. Once the rest of the competition has a few years to iterate, the picture could be significantly worse for them.
It's possible that Tesla could be the Google of electric cars, where first-mover advantage leads to decades of dominance. But it's also possible that they could be a Groupon or a Pebble: companies that were initially leaders and darlings, but that couldn't keep up over the long term.
> Looking at his track record, he got fired from Zip2 and PayPal
Technically, no, he got fired as CEO from X.com twice, once before it had PayPal as a product, once after it bought Confinity which had PayPal as a product. X.com became PayPal right after the second time he was fired, so he never actually got fired from (or worked for) PayPal.
He was also fired from Zip2, which was a separate company. You are technically correct that PayPal at the time was called X.com, but it's still the same company.
It really feels like he used to have advisors who would push back on his more outlandish ideas and that provided a good hype / pr vehicle for Tesla and SpaceX. That seems gone now.
He seemed to go steadily off the rails since the Thailand rescue debacle but that might have just been the first gaffe that I noticed as being an obvious misstep.
You're lucky. My family (father, sister, etc) have pretty much fully outfitted themselves in Teslas, but hate Musk. They could stand him enough before, but now they are ready to move on to Lucid or anyone else really. I'm glad I never pulled the trigger -- Musk just always came across as extra creepy to me.
What is with this Cult of Elon? I really don't care who the CEO is, I love my Model 3. It costs me literally one sixth the cost per kilometer than my previous Nissan was costing, when comparing the cost of electricity and gas. And that's before considering that I haven't had to change the oil or do any other scheduled maintenance in the 35000 or so KM I've put on it.
And it's not just the price. My kids who previously couldn't stand long drives now want to go places, at first because of the glass roof but now because they love when I squeeze them into the seatback with the throttle. It's car number 15 for me, and the best of any of them by far. Who cares about the CEO? I don't even know who the CEO of Nissan was when I had that, or Ford when I had that, or Renault when I had that.
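The "one sixth the cost per kilometer" claim is easy to sanity-check with back-of-envelope arithmetic. All the prices and consumption figures below are illustrative assumptions (not the commenter's actual numbers); the ratio swings a lot with local electricity and fuel prices:

```python
# Rough per-km running-cost comparison: EV vs. gasoline car.
# Every constant here is an assumed, illustrative value.

ELECTRICITY_PRICE = 0.13   # $ per kWh (assumed home charging rate)
EV_CONSUMPTION = 0.15      # kWh per km (ballpark for a compact EV)

GAS_PRICE = 1.50           # $ per litre (assumed)
ICE_CONSUMPTION = 0.08     # litres per km, i.e. ~8 L/100 km (assumed)

ev_cost_per_km = ELECTRICITY_PRICE * EV_CONSUMPTION   # $ per km
ice_cost_per_km = GAS_PRICE * ICE_CONSUMPTION         # $ per km

print(f"EV:  ${ev_cost_per_km:.4f}/km")
print(f"Gas: ${ice_cost_per_km:.4f}/km")
print(f"Gas costs {ice_cost_per_km / ev_cost_per_km:.1f}x more per km")
```

With these assumed inputs the gasoline car comes out roughly 6x more expensive per km, consistent with the comment; cheap off-peak charging or expensive fuel pushes the ratio higher, and the comparison ignores maintenance entirely.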
> The CEOs of those other car companies were not raging egotists
Actually, from the stories of 1950's and 1960's Detroit, in fact they were. I don't know about today.
> backed up by a crowd of howler monkeys chasing a parasocial dopamine hit.
_This_ is the problem. Who cares if Elon eats kittens or promises Mars rocks in the glovebox? It's the Twitter Shitters magnifying every musing into predictions and three-page blog posts that is the problem. Just don't listen to them.
So instead you bought a car from a company run by people who are afraid to ever travel to the United States because they know they may be subject to personal prosecution for Dieselgate, which they so far evaded.
I mean I agree he’s super obnoxious, but if making car purchase decisions based on the virtue of company leadership, I would have called it a tie and bought the better car instead.
I had the same exact response and cancelled mine around the time of the tweets about the Pelosi attack he made. It’s a shame, we loved our last Tesla but I can’t stand the thought of being associated with him. We were supposed to pick it up the week he made those tweets.
I was his former prime target demographic. I feel very strongly about protecting the environment, I hate gasoline engines after working on them most of my life, I loved how economical it was to drive an electric car, I love technology, and I could afford a Tesla. I don’t know what his new demographic is but I don’t think they even can buy a Tesla or will. I wish him lots of luck. Based on my experiences in a small town, his new demographic buys dually trucks listening to alt right propaganda and lives check to check working in the oilfield. Or is on a very fixed income and is retired and stays at home afraid, watching Fox News, with their last car in the driveway, a 1997 Oldsmobile or Buick.
From what I've seen there are quite a lot of Tesla drivers in major southern suburbs.
Southern suburbs tend to be fairly conservative. Maybe not quite as Trump loving as rural areas. But conservative nonetheless and likely not turned off by Elon's recent emergence as a right wing political figure.
Also, if you read enough /r/elonmusk you'll see that a lot of people have bought into Elon's schtick that he's actually just a centrist (apparently centrists love DeSantis)
You know, my decisions on things are mostly based on the product...
But if I know I'm going to enrich someone that I find completely obnoxious, it gives me pause. (Of course, dealing with an organization helmed by someone obnoxious tends to have its own costs: they tend not to be super laser-focused on the customer experience).
Ordinary political disagreement isn't enough to do this for me. But seemingly systemic wanton disregard for others gets me to the point that it's a lot harder for me to justify buying that product.
There's a pizza place in my town co-owned by a notorious jerk. They have pretty good pizza. I still don't go there anymore (even though he is generally not there and the risk of running into him is low).
It seems that 5% or more of the country views everything through a binary lens of "Is this something my media and political group will be upset about or not" so if you are a Republican/Fox News person you feel that you should buy a "MyPillow" when buying a pillow.
And if you are part of the Democrat/NYT group, buying a "MyPillow" is a bad thing.
To your point, this is crazy because both sides probably buy the exact same Tyson Farms chicken and have spent zero time learning about Tyson's business and labor practices and Tyson executives political beliefs. They are just thinking about which businesses Fox News / NYT has told them to care about.
Ever since Madison Avenue taught the postwar consumerist middle class to consider purchases as buying into aspirational lifestyles, tying it into personal identity, the same process has shifted from simply patronizing businesses all of the way to political choices. Thus, charismatic and attention-seeking CEOs no longer sell their businesses simply on the basis of the quality of their goods or services, but on buying into entire constructs of (Zizek voice) ideology. Thus, perhaps paradoxically, consumer individuals, craving belonging, are buying their way into tribes of like-minded individuals with similar convictions.
In short, don't blame the consumers themselves; these businesses and their management are pushing politicization as a tactic to sell, sell, sell. As we in tech know, everything is ads.
I've not studied this specifically but I do have a general interest in the complex control systems now in cars, and I try to avoid them.
I think that, from watching the likes of Rich Rebuilds, who ostensibly operates without any approval from Tesla, the cars will run without the internet, but I guess they'd become "feature complete" and when modules die it will be on the car hacking community to pick up the mantle.
Traditional cars aren't in a much better position as somewhat superfluous modules (think a radio head unit) can take down one of the many CANbus networks in your car and leave it a brick.
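The "one bad module bricks the bus" failure mode comes from CAN's error-confinement rules: every node tracks a Transmit Error Counter (TEC) that rises by 8 on each failed transmission and falls by 1 on each success, going "error passive" past 127 and "bus off" past 255. Here is a toy simulation of just that counter (it ignores the receive counter and bus-recovery rules, so it's a sketch of the mechanism, not a full model):

```python
# Toy model of CAN error confinement (per the ISO 11898 TEC rules):
# +8 per failed transmit, -1 per successful one, floor at 0.
# A chronically faulty module (bad head unit, corroded connector)
# marches straight to bus-off, disrupting the shared bus on the way.

def simulate_node(results):
    """results: iterable of booleans (True = successful transmit).
    Returns (final_tec, state)."""
    tec = 0
    for ok in results:
        if ok:
            tec = max(0, tec - 1)
        else:
            tec += 8
        if tec > 255:
            return tec, "bus-off"   # node must stop transmitting
    if tec > 127:
        return tec, "error-passive"
    return tec, "error-active"

# Healthy node: one glitch among many good frames, recovers fully.
print(simulate_node([True] * 50 + [False] + [True] * 10))
# Faulty module: repeated transmit errors force it off the bus.
print(simulate_node([False] * 40))
```

In a real car a bus-off node is supposed to go quiet so the rest of the network recovers, but a module that keeps resetting and rejoining, or one whose transceiver holds the bus dominant, can keep the whole segment unusable, which is the "brick" scenario the comment describes.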
Many (most?) auto techs are not well versed in identifying these types of low level problems and will fall back to firing the parts cannon at the car rather than identifying the issue definitively. Just like in Battleship you can get a hit by random guessing, but this is a waste of money and resources.
The random superfluous modules also become unavailable and if they're critical to making the car start then your car might be out of commission for the foreseeable future. GM and Ford just don't have as much of a social media halo around them.
You do see a few headlines around particularly egregious money grabs, like one company making remote start a subscription feature. That module, and the car it's in, is not built to last in my view.
> him exposing all the censorship that was going on at twitter.
What you mean is that Elon Musk has made a series of self-contradictory claims with no evidence about what was going on at Twitter before he was there, and you believed his empty claims unquestioningly.
I don't generally accept claims without proof, because no rational person should. When we're talking about a serial liar like Musk, the bar for evidence should be even higher.
> Amazing how much a group of people have gone from loving him to hating him because he is allowing free speech.
Nope, there's no proof he's doing that, either, and plenty of serious researchers on the left who have been thrown off the platform with no reason given.
---
You should learn to demand proof for people's claims before repeating them unquestioningly. It will help the quality of your belief systems.
It is totally fine if you don't like Elon, but in the real world there isn't an EV that is cheap and available and has a good charging network.
The ID4 is decent but that company was started by actual Hitler and Nazis, so if you are interested in not buying from companies that have a history of evil leadership that one isn't a good option.
And gas cars are not good for CO2 emissions, so sometimes you have to accept buying products made by companies that have executives you don't like. And realistically I find it very unlikely that every product you buy on a regular basis today is run by people who have much better morals and politics than Elon.
Interestingly I did end up buying an ID4 and it's been great for a full year now.
It's not perfect but I like that it has more of a "traditional" interior compared to Tesla, which felt kind of like a giant tablet with a car-shaped case around it. That design was already making me hesitant, and Elon with his endless stream of nonsense was the straw that broke the camel's back, so to speak.
> The ID4 is decent but that company was started by actual Hitler and Nazis, so if you are interested in not buying from companies that have a history of evil leadership that one isn't a good option.
This is a wild comment. The current leadership is slightly more important than the leadership 80 years ago. It is especially strange considering you could have made moral arguments against that company based on much more recent misdeeds.
Yep, there was 'diesel-gate' back in the early-mid 2010s and even today the way the company overlooks the behavior of the Chinese government is also very unsavory (a problem Elon has as well)
Good thing that this publicly traded company hasn’t loaned out employees to other unrelated businesses which happen to be owned by their CEO. If that happened, I would worry about the distraction being more widespread than just the CEO.
Employees at Musk companies have the freedom to moonlight, or even transfer entirely.
Tesla benefits from SpaceX materials engineering, for example. Tesla now effectively has access to a massive comms/PR engine, with far more reach than traditional advertising, without paying gatekeepers.
Ford spends $2B a year on advertising. Elon can reply directly to Tesla owners.
The issue is the mixing of company and personal funds.
If you're an investor in Tesla and Musk pulls 50 engineers from the Autopilot team to come work at Twitter [1], how exactly are you benefiting from that? Does Autopilot development just pause while the team goes to work on blue check implementations?
I am genuinely convinced Tesla is better off when Musk is focusing on something else instead of managing it.
Tesla needs Musk's public charisma, his money, his connections, and his bullying of everyone who stands in the way. Sure, his ability to sell the public on dreams especially is a key asset. But his management style and the way he runs things seem to be something the company needs to protect itself from.
People have attributed a lot of overinflated image to Musk and his effects and skills over the years, regarding Tesla, SpaceX, and other companies of his.
They still are doing so about this Twitter thing. It’s all a nothing burger.