• Hart@beehaw.org · 2 years ago

    Engineers, raise your hand if you’ve tried to do good work despite your management’s ‘support.’ Oh, look at all the hands going up!

    • [email protected]@beehaw.org · 2 years ago

      Management ALWAYS knows what’s best! Obviously!

      Hence why they constantly come running for us to fix it when shit goes as we say it will.

    • kitonthenet@kbin.social · 2 years ago

      This is true, but when safety is on the line it actually goes further than that. As an engineer you have an ethical duty to say no to making a product unsafe for end users or the general public.

      It doesn’t matter if you get fired, if your boss goes to the media to bitch about you, or if your boss threatens to sue you: as an engineer you hold a position of public trust to keep the people that use your product safe. If you don’t respect that and take it seriously, well, we see where OceanGate ended up.

      • dark_stang@beehaw.org · 2 years ago

        The number of times I’ve rejected something because of security flaws (usually database injection), only to see other engineers later approve and merge the pull request is infuriating. There seems to always be an engineer who is willing to make an unsafe product.
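
        For anyone who hasn’t seen it, the difference between the rejected version and the safe version is usually this small (a generic sqlite3 sketch, not from any real codebase):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def get_user_unsafe(name: str):
    # Vulnerable: user input is spliced directly into the SQL string,
    # so  name = "' OR '1'='1"  returns every row (or worse).
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def get_user_safe(name: str):
    # Parameterized: the driver passes the value separately from the SQL,
    # so it can never be interpreted as SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(get_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(get_user_safe("' OR '1'='1"))    # returns nothing
```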

      • EthicalAI@beehaw.org · 2 years ago

        Yeah, my boss has been going back and forth with me on this for months, wanting to release unsecured products to the general public. I’m getting exhausted with him. I hold the keys, and frequently I’ve told him no and threatened to quit. Each time they just retreat and hold a meeting about how it will “stay on dev for now”. The features aren’t even feasible to release in the near future, but I know they will force the issue. My resignation letter is on the table.

          • sparkl_motion@beehaw.org · 2 years ago

            “Sir, with all due respect, I don’t believe turning a commercial diesel filling station into a quadcopter is feasible.”

            • kent_eh@lemmy.ca · 1 year ago

              “Sir, with all due respect, I don’t believe turning a commercial diesel filling station into a quadcopter is feasible.”

              You just need to think outside the box. like these lads did: https://youtu.be/ReAa2WFm8Vc?t=16

      • chrisn@beehaw.org · 2 years ago

        That value is instilled in many types of engineering, but not as much in software engineering.

    • bfg9k@kbin.social · 2 years ago

      Tale as old as time.

      Engineers: “This is possible but we will need to equip every car with an expensive sensor suite”

      Management: “So you’re saying we can just remove the sensors and figure it out with your engineering magic. You guys are really good at that; you got my iPhone connected to iCloud, so you must be reeeally good with technology.”

      Engineers: “…”

      Management: “Also, anyone not up to this task is fired.”

  • SirEDCaLot@lemmy.fmhy.ml · 2 years ago

    I’m not sure what kind of serious trouble they are actually in. I have spent most of today being driven around by my Tesla, and aside from the occasional badly handled intersection and unnecessary slowdown it’s doing fucking great. So I would tell anyone who says Tesla is in serious trouble: just go drive the car. Actually use the FSD beta before you say that it’s useless, because it’s not. It is already far better than anyone expected vision-only driving to be, and every release brings more improvements. I’m not saying that as a Tesla fanboy. I’m saying that as a person who actually drives the car.

    • exscape@kbin.social · 2 years ago

      This kind of serious trouble (from the article):

      The Department of Justice is currently investigating Tesla for a series of accidents — some fatal — that occurred while their autonomous software was in use. In the DoJ’s eyes, Tesla’s marketing and communication departments sold their software as a fully autonomous system, which is far from the truth. As a result, some consumers used it as such, resulting in tragedy. The dates of many of these accidents transpired after Tesla went visual-only, meaning these cars were using the allegedly less capable software.

      Consequently, Tesla faces severe ramifications if the DoJ finds them guilty.

      And of course:

      The report even found that Musk rushed the release of FSD (Full Self-Driving) before it was ready and that, according to former Tesla employees, even today, the software isn’t safe for public road use. In fact, a former test operator went on record saying that the company is “nowhere close” to having a finished product.

      So even though it seems to work for you, the people who created it don’t seem to think it’s safe enough to use.

      • CmdrShepard@lemmy.one · 1 year ago

        I think you and the author are drawing conclusions that aren’t supported by the quote.

        The engineers stated it’s “nowhere close” to being a finished product which is evident by the fact that it’s only L2 and in beta.

        The DOJ is investigating, but we know some of these crashes were from people disregarding the safety requirements (like keeping your hands on the wheel and eyes on the road) when they crashed, so what comes of the investigation is still up in the air. I think a lot of the motivation is driven by publicity from articles such as this, and not necessarily because the system is unsafe to use at all.

        The truth is that nobody has achieved full automation so we don’t know what a full automation suite should look like in terms of hardware and software. The Mercedes system is a joke in that it can only be used on the highway below 40MPH. I dunno what speed limits are where you’re located but in my area all the highways are 55+MPH.

        Furthermore, the robotaxis are being used in places like Vegas where they’re geofenced to premapped city streets in areas where the weather is perfect all year round. The entire industry has a long way to go before anyone reaches a finished product.

        • exscape@kbin.social · 1 year ago

          I dunno, I think “even today, the software isn’t safe for public road use” is pretty clear-cut and has nothing to do with the level of automation.

          I’m not suggesting anyone else is way ahead though. But I do think that removing all non-visual sensors is an obvious step back, especially in poor weather where visibility may be near zero but other sensor types could be relatively unimpeded.

          • CmdrShepard@lemmy.one · 1 year ago

            I think “even today, the software isn’t safe for public road use” is pretty clear-cut and has nothing to do with the level of automation.

            Keep in mind this isn’t even a quote, and it was attributed to someone who doesn’t even work on this tech for the company. What has you convinced it’s unsafe for use now? A few car accidents? What about all the accidents that have been prevented using this same system? You might suggest a ban, but if the crash rate or fatality rate increases as a result, haven’t you made conditions less safe on the road?

            The problem with articles like this is they focus on things like “Tesla has experienced 50 crashes in the last 5 years!!” but they don’t include context like the fact that cars without these systems have crash rates 10x higher or more. These systems can still be a net benefit even if they don’t work 100% of the time or prevent 100% of crashes.
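
            To put rough numbers on that (these rates are made-up placeholders, not real crash statistics):

```python
# Hypothetical fleet-level comparison: even a driver-assist system that
# still has crashes can be a net win if its crash *rate* is lower.
miles_driven = 1_000_000_000          # miles driven by each hypothetical fleet

human_crash_rate = 1 / 500_000        # crashes per mile, placeholder
assisted_crash_rate = 1 / 5_000_000   # 10x lower, placeholder

human_crashes = miles_driven * human_crash_rate
assisted_crashes = miles_driven * assisted_crash_rate

print(f"Human-only fleet: {human_crashes:,.0f} crashes")
print(f"Assisted fleet:   {assisted_crashes:,.0f} crashes")
print(f"Crashes avoided:  {human_crashes - assisted_crashes:,.0f}")
```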

    • CamilleMellom@mander.xyz · 2 years ago

      The thing is, working well enough most of the time is not enough. I haven’t driven a Tesla so I’m not speaking for their cars, but I work in SLAM, and while cameras are great for it, cameras on a fast car need to process quickly and get good images. That’s a difficult requirement for a camera-only system, so you will not be able to guarantee safety the way other sensors would. In most scenarios the situation is simple, e.g. a highway where you can track lines and cars and everything is predictable. The problem is the outliers, when it’s suddenly not predictable: a lack of features in crowded environments, or a recognition pipeline that fails because the model detects something that isn’t there or fails to detect something that is… then you have no safeguards.
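
      To give an idea of why speed matters for camera-only perception, here is a rough order-of-magnitude calculation (the frame rate and exposure are assumptions, not Tesla’s actual numbers):

```python
# How far the car travels per camera frame and per exposure at highway speed.
speed_kmh = 110                      # assumed highway speed
speed_ms = speed_kmh / 3.6           # ~30.6 m/s

fps = 36                             # assumed camera frame rate
exposure_s = 0.010                   # assumed 10 ms exposure

metres_per_frame = speed_ms / fps
blur_per_exposure = speed_ms * exposure_s

print(f"Distance covered between frames: {metres_per_frame:.2f} m")
print(f"Ego-motion during one exposure:  {blur_per_exposure:.2f} m")
# Roughly 0.85 m between frames and ~0.3 m of motion during one exposure:
# small far-away features smear across pixels, which is exactly the
# "good images, fast" problem mentioned above.
```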

      Camera-only isn’t authorized in most logistics operations in factories; I’m not sure what changes for a car.

      It’s OK to build a system that is good « most of the time » if you don’t advertise it as a fully autonomous system, so people stay focused.

      • SirEDCaLot@lemmy.fmhy.ml · 2 years ago

        My point stands- drive the car.
        You’re 100% right with everything you say. It has to work 100% of the time. Good enough most of the time won’t get to L3-5 self driving.

        Camera-only isn’t authorized in most logistics operations in factories; I’m not sure what changes for a car.

        The question is not the camera, it’s what you do with the data that comes off the camera.
        The first few versions of camera-based Autopilot sucked. They were notably inferior to their radar-based equivalents - that’s because they ran neural-network image recognition on each camera separately. It’d take a picture from one camera, say ‘that looks like a car and it looks like it’s about 20’ away’, and repeat this for each frame from each camera. That sorta worked okay most of the time, but it got confused a lot. It would also ignore any image it couldn’t classify, which of course was no good because lots of ‘odd’ things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.

        What they do now is called occupancy networks. That is, video from ALL cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it then generates a 3d model of the world around the car and identifies objects in it like what is road and what is curb and sidewalk and other vehicles and pedestrians (and where they are moving and likely to move to), and that data is fed to a planner AI that decides things like where the car should accelerate/brake/turn.
        Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
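
        To illustrate the general idea (this is only a hand-written toy with made-up camera parameters, not Tesla’s learned occupancy network): take per-camera depth estimates, back-project them through known camera poses, and mark which cells of one shared voxel grid are occupied.

```python
import numpy as np

# Toy multi-camera occupancy grid: fuse depth estimates from several cameras
# into one shared 3D voxel grid around the car (the "world model" idea).
GRID_XY, GRID_Z = 40, 16      # 40x40x16 voxels
RES = 0.5                     # metres per voxel
grid = np.zeros((GRID_XY, GRID_XY, GRID_Z), dtype=bool)

def backproject(depth, fx, fy, cx, cy, cam_to_world):
    """Turn an HxW depth map into 3D points in the shared car/world frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    return (pts @ cam_to_world.T)[:, :3]

def mark_occupied(grid, pts):
    """Mark every voxel that contains at least one back-projected point."""
    idx = np.floor(pts / RES).astype(int) + np.array([GRID_XY // 2, GRID_XY // 2, 0])
    ok = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    grid[tuple(idx[ok].T)] = True

# Two fake cameras: one at the origin, one mounted 1 m to the side,
# both seeing a flat wall 6 m ahead (a stand-in for real depth estimates).
depth = np.full((8, 12), 6.0)
front_cam = np.eye(4)
side_cam = np.eye(4); side_cam[0, 3] = 1.0

for pose in (front_cam, side_cam):
    mark_occupied(grid, backproject(depth, fx=10, fy=10, cx=6, cy=4, cam_to_world=pose))

print("occupied voxels:", int(grid.sum()))
```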

        I drive a Tesla. And I’m telling you from experience - it DOES work. The latest betas of the Full Self-Driving software are very, very good. On the highway, the computer is a better driver than me in most situations. And on local roads it navigates near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

        I’d say it’s easily at L3+ state for highway driving. Not there yet for local roads. But it gets better with every release.

        • CamilleMellom@mander.xyz · 1 year ago

          It’s an interesting discussion, thanks!

          I know that it can be done :). It’s my direct field of research (localization and mapping of autonomous robots, with a focus on building 3D models from camera images, e.g. NeRF-related methods). What I was trying to say is that you cannot have high safety using just cameras. But I think we agree there :)

          I’ll be curious to know how they handle environments with a clear lack of depth information (highway roads), how they optimized the processing power (estimating depth is one thing, but building a continuous 3D model is different), and the image blur when moving at high speed :). Sensor fusion between visual SLAM and LiDAR is not complex (since the LiDAR provides what you estimate with your neural occupancy grid anyway; what you get is a more accurate measurement), so on the technological side they don’t really gain much by dropping it - it’s mainly a gain in cost.
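
          What I mean by the fusion being simple: if the cameras already give you a depth estimate per cell, a LiDAR return for the same cell is just a second, more accurate measurement of the same quantity, so a basic inverse-variance update is enough. A toy sketch (the uncertainty values are made up):

```python
import numpy as np

def fuse(depth_cam, var_cam, depth_lidar, var_lidar):
    """Inverse-variance weighted fusion of two depth estimates per cell.

    Where LiDAR has no return (NaN), the camera estimate is kept as-is."""
    fused = depth_cam.copy()
    var = var_cam.copy()
    has_lidar = ~np.isnan(depth_lidar)
    w_cam = 1.0 / var_cam[has_lidar]
    w_lid = 1.0 / var_lidar[has_lidar]
    fused[has_lidar] = (w_cam * depth_cam[has_lidar] + w_lid * depth_lidar[has_lidar]) / (w_cam + w_lid)
    var[has_lidar] = 1.0 / (w_cam + w_lid)
    return fused, var

# Camera thinks a cell is at 10 m with ~1 m^2 variance; LiDAR says 9.2 m,
# far more precisely. The second cell has no LiDAR return at all.
cam = np.array([10.0, 25.0])
cam_var = np.array([1.0, 4.0])
lidar = np.array([9.2, np.nan])
lidar_var = np.array([0.0025, np.nan])

print(fuse(cam, cam_var, lidar, lidar_var))
```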

          My guess is that they probably still do a lot of feature detection (lines and stuff) in the background, and a lot of what you experience when you drive is improvement in depth estimation and feature detection on RGB images? But maybe not - I’ll be really interested to read more about it :). Do you have the research paper that the Tesla algo relies on?

          Just to be clear, I have no doubt it works :). I have used similar systems for mobile robots and I don’t see why it would not. But I’m also worried that it will lull people into a false sense of safety when the driver should stay alert.

          • SirEDCaLot@lemmy.fmhy.ml · 1 year ago

            Don’t have the paper, my info comes mainly from various interviews with people involved in the thing. Elon of course, Andrej Karpathy is the other (he was in charge of their AI program for some time).

            They apparently used to use feature detection and object recognition on RGB images, then gave up on that (as generating coherent RGB images just adds latency and object recognition was too inflexible), and they’re now just going by raw photon-count data from the sensor, fed directly into the neural nets that generate the 3D model. Once trained, this apparently can do some insane stuff like pull edge data out from below the noise floor.

            This may be of interest– This is also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…

            • CamilleMellom@mander.xyz · 1 year ago

              I Googled it because I thought maybe they were using event cameras, but no: they use 10-bit instead of the classic 8-bit, but they are not literally counting photons (which would not be useful). It’s interesting that it improved the precision and recall of their « object detection model ». Guess the image is of better quality then.

              The link from 2 years ago is not particularly impressive: https://arxiv.org/abs/1406.2283 is an equivalent paper, I think, from 2014.

              • SirEDCaLot@lemmy.fmhy.ml · 1 year ago

                Not sure of the exact details - I heard they were sampling 10 bits per pixel, but a bunch of their release notes talked about photon-count detection back when they switched to that system.
                Given that the HW3 cameras started out being used just to generate RGB images, I suspect the current iteration works by pulling RAW-format frames and interpreting them as a photon-count grid, and from there detecting edges and geometry with the occupancy network.
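
                A rough illustration of what “pulling edges out from below the noise floor” can mean (pure toy numbers, nothing to do with Tesla’s actual pipeline): a brightness step smaller than the per-frame shot noise is invisible in one frame, but becomes detectable once you accumulate over many frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'raw' sensor: left half of the scene averages ~100 photons/pixel per
# frame, right half ~103. Per-pixel shot noise is ~sqrt(100) = 10 photons,
# so the 3-photon edge sits well below the single-frame noise floor.
H, W, FRAMES = 64, 64, 64
truth = np.full((H, W), 100.0)
truth[:, W // 2:] += 3.0

frames = rng.poisson(np.broadcast_to(truth, (FRAMES, H, W))).astype(float)
single = frames[0]
averaged = frames.mean(axis=0)     # per-pixel noise shrinks by sqrt(FRAMES)

def edge_snr(img):
    """Edge step divided by per-pixel noise (bigger = easier to detect)."""
    step = img[:, W // 2:].mean() - img[:, :W // 2].mean()
    noise = img[:, :W // 2].std()
    return step / noise

print(f"single frame edge SNR:        {edge_snr(single):.2f}")
print(f"after averaging {FRAMES} frames: {edge_snr(averaged):.2f}")
```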

                I’ve not seen much of anything published by Tesla on the subject. I suspect most of their research they are keeping hush hush to get a leg up on the competition. They share everything regarding EV tech because they want to push the industry in that direction, but I think they see FSD as their secret sauce that they might sell hardware kits but not let others too far under the hood.

        • tony@l.bxy.sh · 1 year ago

          Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.

          That’s my problem: it is approximating LiDAR, but it isn’t the same. I would say multiple sensor types are necessary for exactly the reason you suggested they aren’t - to get multiple forms of input and reach consensus, or, failing consensus, fail safe.

          I don’t doubt Tesla autopilot works well and it certainly seems to be an impressive feat of engineering, but can it be better?

          In our town we had a Tesla shoot through red traffic lights near our local school barely missing a child crossing the road. The driver was looking at their lap (presumably their phone). I looked online and apparently autopilot doesn’t work with traffic lights, but FSD does?

          It’s not specific to Tesla, but leaving people unaware of the limitations of Level 2, particularly when brands like Tesla give the impression that the car “drives itself”, is unethical.

          My opinion is that if that Tesla had extra sensors, even with the car only in Level 2 mode, it should be able to pick up that something is there and slow or stop. I want the extra sensors to cover the edge cases and give more confidence in the system.

          Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

          IMHO these are not systems that we should be compromising to cut costs or because the CEO is too stubborn. If we can put extra sensors in and it objectively makes it safer why don’t we? Self driving cars are a luxury.

          Crazy hypothetical: I wonder how Tesla would cope with someone/something covered in Vantablack?

          • SirEDCaLot@lemmy.fmhy.ml · 1 year ago

            In our town we had a Tesla shoot through red traffic lights near our local school barely missing a child crossing the road. The driver was looking at their lap (presumably their phone). I looked online and apparently autopilot doesn’t work with traffic lights, but FSD does?

            There are a few versions of this and several generations with different capabilities. The early Tesla Autopilot had no recognition of stop signs; it was literally just ‘cruise control that keeps you in your lane’. FSD for sure does recognize stop signs, traffic lights, etc. and reacts correctly to them. I BELIEVE that the current iteration of Traffic Aware Cruise Control (what you get if you don’t pay extra for FSD or Enhanced Autopilot) will stop for traffic lights, but I could be wrong on that. I know it detects pedestrians, but its detection isn’t nearly as advanced as FSD’s.

            I will give you that in theory, the time-of-flight data from a LiDAR pulse will give you a more reliable point cloud than anything you’d get from cameras. But I also know Tesla is doing things with cameras that border on black magic. They gave up on getting images out of the cameras and are now just using the raw photon count data from the sensor, and with the AI trained it can apparently detect edges with only a few photons of difference between pixels (below the noise floor). And I can say from experience that a few times I’ve been in blackout rainstorms where even with full wipers I can barely see anything, and the FSD visualization doesn’t skip a beat and it sees other cars before I do.

            Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

            As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

            But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

            If we can put extra sensors in and it objectively makes it safer why don’t we? Self driving cars are a luxury.

            For the reason above with the loved one. If you can use cameras and make a system that costs the manufacturer $3000/car, and it’s 50 times safer than a human, or use LiDAR and cost the manufacturer $10,000/car, and it’s 100 times safer than a human, which is safer?
            The answer is the cameras, because it will be on more cars, thus deliver more overall safety.
            I understand the thinking that ‘Elon cheaped out, Tesla FSD is a hack system on shitty hardware that uses clever programming to work around a cut-rate sensor suite’. But I’d also argue- if they can get similar performance out of a camera, and put it on more cars, doesn’t that do more to overall improve safety?
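
            Putting made-up numbers on that argument (the safety factors and adoption rates are purely illustrative, not real figures):

```python
# Hypothetical: which option prevents more deaths across a whole fleet?
baseline_fatalities_per_year = 1000     # made-up human-driven baseline

# Cheaper camera-only system: 50x safer, affordable on 80% of new cars.
# Pricier LiDAR system: 100x safer, but cost limits it to 20% of new cars.
options = {
    "cameras ($3k, 50x safer, 80% of cars)":  (50, 0.80),
    "lidar   ($10k, 100x safer, 20% of cars)": (100, 0.20),
}

for name, (safety_factor, adoption) in options.items():
    equipped = baseline_fatalities_per_year * adoption / safety_factor
    unequipped = baseline_fatalities_per_year * (1 - adoption)
    print(f"{name}: {equipped + unequipped:.0f} fatalities/year")
```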

            In the example above, if the car didn’t have the self driving package because the guy couldn’t afford it, wouldn’t you prefer that a decent but better than human self driving system was on the car?

            • tony@l.bxy.sh · 1 year ago

              There are a few versions of this and several generations with different capabilities. […]

              This raises its own issues, but that is the nature of the “move fast and break things” ethos of tech today. While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

              As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

              But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

              I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

              I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

              Like cruise control having minimum speeds that generally prevent it being used in town, I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving. This doesn’t prevent people bypassing it, but it means someone doing so was trying to do something they shouldn’t.

              With a connected vehicle, limiting L2 use outside of motorways should be straightforward.
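
              Something as simple as this would be enough to gate the feature (a completely hypothetical sketch, not any manufacturer’s actual logic):

```python
from dataclasses import dataclass

@dataclass
class RoadInfo:
    road_class: str       # e.g. "motorway", "residential" (from the nav map)
    speed_limit_kmh: int

def l2_allowed(road: RoadInfo, min_limit_kmh: int = 80) -> bool:
    """Hypothetical gate: only enable L2 assist on motorways with a
    sufficiently high speed limit; everywhere else the driver gets a refusal."""
    return road.road_class == "motorway" and road.speed_limit_kmh >= min_limit_kmh

print(l2_allowed(RoadInfo("motorway", 110)))     # True
print(l2_allowed(RoadInfo("residential", 30)))   # False: school street, no L2
```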

              Then it becomes akin to disabling traction control or adaptive cruise control and having an accident that could have been prevented. The tools are there, the default is on, and a driver deliberately disabled it. The manufacturer did as much as they reasonably could.

              In the example above, if the car didn’t have the self driving package because the guy couldn’t afford it, wouldn’t you prefer that a decent but better than human self driving system was on the car?

              I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly. I feel this is where accidents are most likely - in the stage between fully manual and fully autonomous.

              If Tesla offer a half-way for less money would you not expect the consumer to take the cheapest option? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

              If you can use cameras and make a system that costs the manufacturer $3000/car, and it’s 50 times safer than a human, or use LiDAR and cost the manufacturer $10,000/car, and it’s 100 times safer than a human, which is safer?
              The answer is the cameras, because it will be on more cars, thus deliver more overall safety.

              I agree an improvement is better than none, but I’m not sure your conclusion can be made so easily? Tesla is the only company I know steadfastly refusing to use any other sensor types and the only reason I see is price.

              Thinking about it, drum brakes are cheaper than disc brakes… (said with tongue-firmly-in-cheek)

              Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving. People mightn’t know there is a difference between Tesla and other manufacturer’s autonomous driving ability.

              For many people Tesla is self-driving cars, so if a Tesla has an accident in L2, even though this is the driver’s fault, the headlines will be “Tesla autopilot hits school child”, not “Driver inappropriately uses limited motorway assistance mode of car in small town, hitting school child”.

              What about the impact on the industry? If Tesla is much cheaper than LIDAR-equipped vehicles will this kill a better/safer product a-la betamax?

              IMHO safety shouldn’t take a lower priority to price/CEO demands. Consumers often don’t know and frankly shouldn’t need to know the details of these systems.

              Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards?

              As has been seen recently with a certain submarine, safety measures should not be taken lightly.

              • SirEDCaLot@lemmy.fmhy.ml · 1 year ago

                While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

                Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

                I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

                Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.
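
                Conceptually it’s just a guard like this (an invented sketch, not Tesla firmware):

```python
def throttle_command(accel_pedal: float, obstacle_distance_m: float,
                     speed_ms: float, min_gap_s: float = 1.0) -> float:
    """Hypothetical 'obstacle-aware acceleration' guard.

    If the driver floors the accelerator while an obstacle is closer than
    the distance covered in `min_gap_s`, ignore the pedal and command zero
    throttle (separate AEB logic would handle braking)."""
    stopping_gap = max(speed_ms, 1.0) * min_gap_s
    if accel_pedal > 0.8 and obstacle_distance_m < stopping_gap:
        return 0.0
    return accel_pedal

print(throttle_command(accel_pedal=1.0, obstacle_distance_m=2.0, speed_ms=3.0))   # 0.0
print(throttle_command(accel_pedal=0.3, obstacle_distance_m=50.0, speed_ms=3.0))  # 0.3
```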

                I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the drivers fault.

                I’m saying- imagine if the car has L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have.
                Most of the conversation around FSD type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses- rather than ‘why did the human let the machine do something bad’ it becomes ‘why did the machine let the human do something bad’.

                I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

                Why? Tesla’s FSD beta L2 is great. It’s not perfect, but it does a very good job for most parts of driving on surface streets.

                I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly.

                This is valid. I think the name ‘full self driving’ is problematic somewhat. I think it will get to the point of actually being fully self driving, and I think it will get there soon (next year or two). But they’ve been using that term for several years now and especially the first few versions of ‘FSD’ were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

                If Tesla offer a half-way for less money would you not expect the consumer to take the cheapest option? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

                That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
                What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

                Tesla is the only company I know steadfastly refusing to use any other sensor types and the only reason I see is price.

                Price, yes, and the difficulty of integrating different data sets. On their higher-end cars they’ve re-introduced a high-resolution radar unit. Haven’t seen much on how that’s being used though.
                The basic answer, from Tesla’s point of view, is that they can get where they need to be with cameras alone because their software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

                Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving.

                This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

                If Tesla is much cheaper than LIDAR-equipped vehicles will this kill a better/safer product a-la betamax?

                Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have a self driving system, but it only works on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

                Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

                I agree standards should apply, that’s why Tesla isn’t L3+ certified even though on the highway I really think it’s ready for it.

                • tony@l.bxy.sh · 1 year ago

                  Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

                  Totally agree, that’s why I say it is a double-edged sword. The theory is that it is more acceptable to ship bugs because they can be rectified much more quickly.

                  Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.

                  Thanks for clarifying that, not something I was aware of. Sounds very pragmatic.

                  I’m saying- imagine if the car has L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have. Most of the conversation around FSD type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses- rather than ‘why did the human let the machine do something bad’ it becomes ‘why did the machine let the human do something bad’.

                  I misunderstood the original scenario, and while it sounds like it shouldn’t currently be possible (given the auto-braking you mentioned above), I understand the meaning. I agree with you here; I don’t think the human is better, and my issue isn’t that I think a human would necessarily react better (and certainly in L2 the problem is that a human almost never will).

                  My main concern was about an accident with camera-only that could have been avoided with additional sensors. I had heard additional sensors had been suggested at Tesla, but vetoed. I knew that Musk was confident cameras can do it all and had said as much. My concern was that his bullishness was the reason for this policy; however, hearing that Tesla is investigating other sensors dispels that theory.

                  This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

                  Agreed. I don’t follow self-driving cars or Tesla/Musk closely, so I’m just as ill-informed. The original concern was that if Tesla’s policy of using only cameras reduces their self-driving capability compared to the non-camera-only competition, even while performing well above a human, it could affect the perception of self-driving vehicles.

                  Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have a self driving system, but it only works on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

                  Yes, I agree. Aside from Waymo, which doesn’t look to be coming to consumers any time soon, I’m not sure who else is close to Tesla on that problem. I would have expected to hear more from the major manufacturers, but it seems that while some have been certified L3, it is only under certain conditions and in certain locations.