• redsunrise@programming.dev · 6 days ago

    It’s almost as if removing LiDAR sensors in favor of traditional cameras “because they’re stupid” was the most idiotic ketamine-fueled decision a company could ever make. Why anyone believes Teslas can drive themselves in uncertain situations is beyond me.

    • Ptsf@lemmy.world · 5 days ago

      The even crazier part is that, with all of their advancements in software, they’d probably have FSD legitimately launched and running well by now, given how much driving data they can ingest at will, if they’d just included a few hundred dollars’ worth of low-resolution LiDAR and ultrasonics. IIRC they stated they were having issues with the chain of trust among the sensors, but I’m not sure I believe that.
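
      For a sense of what that sensor cross-checking could look like, here’s a hedged sketch (the function, threshold, and behavior below are my own illustrative assumptions, not anything Tesla or another vendor has published): fuse a camera range estimate with a cheap LiDAR return, and fall back to the more conservative reading when they disagree.

      ```python
      # Hypothetical cross-check between two range estimates for the obstacle ahead.
      # Not any real vendor's fusion stack; just a sketch of why a second, physical modality helps.
      def fused_range(camera_m: float, lidar_m: float, disagreement_m: float = 2.0) -> float:
          """Return a range estimate in meters for the obstacle ahead.

          camera_m: depth inferred from vision, which can be badly wrong
                    (glare, fog, or a surface that merely looks like open road).
          lidar_m:  direct time-of-flight measurement, coarse but physical.
          """
          if abs(camera_m - lidar_m) <= disagreement_m:
              return (camera_m + lidar_m) / 2   # sensors agree: average them
          return min(camera_m, lidar_m)         # sensors disagree: assume the nearer obstacle

      print(fused_range(30.0, 29.0))    # 29.5 -> estimates agree, normal driving
      print(fused_range(200.0, 12.0))   # 12.0 -> vision sees open road, LiDAR sees something solid
      ```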

      • TheYang@lemmy.world · 5 days ago

        Well, do we know what the blockers are for Tesla?
        I feel like when I watch videos of FSD on cars, the representation of the world on the screen is rather good.
        Now, given this data point of me watching maybe 30 minutes of video in total, is the issue in:
        a) estimating the distance to obstacles in the surroundings from the cameras, or in:
        b) reading street signs, road markings, stop lights, etc., or in:
        c) doing the right thing, given a correct set of data about the surroundings?

        LiDAR / radar / sonar would only help with a).
        Or is it a combination of all of them, and the (relatively) cheap sensors would at least eliminate a), so one could focus on b) and c)?

        • Ptsf@lemmy.world · 5 days ago

          The blocker for Tesla is that it’s processing 2D input in order to navigate 3D space. They use some AI trickery to create virtual anchor points from image stills across points in time to work around this and get back to a 3D space, but the auto industry at large (not me) has collectively agreed this cannot overcome numerous serious challenges in realistic applications. The one people may be most familiar with is Mark Rober’s test, where the Tesla just drives right into a wall painted to look like the road, Wile E. Coyote style, but this has real-world analogs such as complex weather.

          LiDAR and ultrasonics integrated into the chain of trust can mitigate a significant portion of the risk this issue causes, and already do for most ADAS systems; Volvo has shown that even low-resolution “cheap” LiDAR sensors without 360-degree coverage can offer most of these benefits.

          To be honest, I’m not certain that the addition would fix everything; perhaps the engineering obstacles really were insurmountable. But from what I hear from the industry at large, from my friends in the space, and from my own common sense, I don’t see how a wholly 2D implementation relying only on camera input can be anything but an insurmountable engineering challenge when it comes to producing the final minimum viable product. From my understanding, it’d be like being told you have to use water and only water as your hydraulic fluid, or that you can only use a heat lamp to cook for your restaurant. It’s just legitimately unsuitable for the purpose despite giving off the guise of doing the same work.
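
          To make that 2D-to-3D workaround concrete, here’s a rough sketch of the classical version of the technique (two-view triangulation with made-up numbers; Tesla’s actual pipeline is a learned network, so treat this only as an illustration of the principle): depth isn’t measured directly the way LiDAR measures it, it has to be reconstructed by combining views of the same point from different camera positions over time.

          ```python
          import numpy as np

          def project(P, X):
              """Project a 3D point X through a 3x4 camera matrix P to pixel coords."""
              x = P @ np.append(X, 1.0)
              return x[:2] / x[2]

          def triangulate(P1, P2, x1, x2):
              """Linear (DLT) triangulation: recover one 3D point from two 2D views."""
              A = np.array([
                  x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1],
              ])
              _, _, Vt = np.linalg.svd(A)
              X = Vt[-1]
              return X[:3] / X[3]  # back to inhomogeneous coordinates

          # Toy setup: unit intrinsics, and the car has moved 1 m sideways between frames.
          P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
          obstacle = np.array([0.5, 0.2, 4.0])                    # 4 m ahead
          x1, x2 = project(P1, obstacle), project(P2, obstacle)   # what each frame "sees"
          print(triangulate(P1, P2, x1, x2))                      # ~[0.5, 0.2, 4.0]
          ```

          The catch is that the whole reconstruction depends on matching the same visual feature across frames; if the texture lies (a painted wall, glare, heavy rain), the recovered geometry lies with it, which is exactly the failure mode a direct-ranging sensor sidesteps.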

          • Ptsf@lemmy.world · 5 days ago

            Also, I’d forgotten to mention: what you see in the on-screen representation is entirely divorced from the actual stack doing your driving. They’re basically running a small video game using the virtual world map they build and rendering in assets and such from there. It’s meant to give you a reasonable look into what the car sees and might do, but they’ve confirmed that it is in no way tied to the underlying neural decision network.

            • TheYang@lemmy.world · 4 days ago

              But that’s exactly the point.
              If the virtual map they’re building from cameras is complete, correct, and stable (and presumably meets some other criteria I can’t think of off the top of my head), then the cameras would be sufficient.
              The underlying neural decision network can still fuck things up from a correct virtual world map.

              Now, how good is the virtual world map in real-world conditions?

              • Ptsf@lemmy.world · 4 days ago

                You’re maybe missing the point, though? You can’t take data, run it through what is essentially lossy compression, and then get the same data back out. The best you can do is a facsimile of it that suffers in some regard.
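
                A toy illustration of that loss (numbers invented for the example): a single camera projection throws away depth, so two very different scenes can produce identical pixels, and no amount of post-processing of that one image can tell them apart.

                ```python
                import numpy as np

                def project(point_3d, focal=1.0):
                    """Pinhole projection: (x, y, z) in meters -> image coords (u, v); depth z is discarded."""
                    x, y, z = point_3d
                    return np.array([focal * x / z, focal * y / z])

                near_obstacle = np.array([1.0, 0.5, 5.0])    # 5 m away
                far_obstacle  = np.array([4.0, 2.0, 20.0])   # 20 m away, scaled up 4x

                print(project(near_obstacle))   # [0.2 0.1]
                print(project(far_obstacle))    # [0.2 0.1] -- same pixels, very different world
                ```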

    • moseschrute@lemmy.ml · 5 days ago

      To be fair, Elon probably wasn’t taking ketamine back when he made that decision. He made that decision all by himself :)

    • sabreW4K3@lazysoci.al (OP) · 6 days ago

      I actually think this is kinda unfair. Up until recently, the only self-driving system actually worth a shit was Tesla’s, due to the way it collected street data and upcycled it. The problem is that the system they implemented has a ceiling, but it was head and shoulders above all the competition for a while. Waymo is benefiting from not being Musk-related, but as things stand, they’re using more tele-operators than Tesla did for normal Teslas. Compared to the Robotaxis, who knows; that will come out in a few years.