• Possibly linux@lemmy.zip · 1 month ago

    What’s the problem with btrfs really?

    It is nice but it also feels like it is perpetually unfinished. Is there some major flaw in the design?

    • enumerator4829@sh.itjust.works · 1 month ago

      I’ve seen ZFS in production use on pools with hundreds of TBs, clustered together into systems of many PBs. The people running that don’t even think about BTRFS, and certainly won’t actively consider it for anything.

      • BTRFS once had data corruption bugs. ZFS has also had them, but only in very specific edge cases, and those were taken very seriously. Basically, ZFS has a reputation for not fucking up your bits that BTRFS doesn’t come close to.
      • ZFS was built and tested for use on large pools from the beginning, by Sun engineers from back when Sun was fucking amazing and not a pile of Oracle-managed garbage. BTRFS still doesn’t have stable RAID5/6.
      • ZFS send/recv is amazing for remote replication of large filesystems.
      • DRAID is kind of the only sane thing to do with today’s disk sizes, speeds and pool sizes.
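      For reference, the send/recv replication flow looks roughly like this (the pool, dataset, and host names here are made up for illustration):

```shell
# Snapshot a dataset, then replicate it to a remote machine.
# "tank/data" and "backup-host" are hypothetical names.
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup-host zfs recv backup/data

# Later, send only the delta between two snapshots (incremental).
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backup-host zfs recv backup/data
```

      Because send streams the snapshot as a serialized delta, the remote side ends up with a block-identical copy without rsync-style file walking.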

      But those are ancillary reasons. I’ll quote the big reason from the archwiki:

      The RAID 5 and RAID 6 modes of Btrfs are fatally flawed, and should not be used for “anything but testing with throw-away data”.

      For economic reasons, you need erasure coding for bigger pools, either classic RAID5/6 or DRAID. BTRFS will either melt your data in RAID5/6 or not support DRAID at all.
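      The economics argument is just arithmetic: for the same usable capacity, parity RAID burns far fewer disks than mirroring. A toy comparison (the disk counts are made-up examples):

```python
def usable_fraction(data_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity that holds actual data."""
    return data_disks / (data_disks + parity_disks)

# A 12-disk RAID6-style stripe: 10 data disks + 2 parity disks.
raid6 = usable_fraction(10, 2)

# Mirroring (RAID1/10): every byte is stored twice.
mirror = usable_fraction(1, 1)

print(f"RAID6:  {raid6:.0%} usable")   # 83% usable
print(f"Mirror: {mirror:.0%} usable")  # 50% usable
```

      At petabyte scale that gap is the difference between buying 1.2x and 2x your usable capacity in raw disks.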

    • swab148@lemm.ee · 1 month ago

      Mostly just the RAID5 and 6 instability; it’s fantastic otherwise. But I’m kinda excited to try out bcachefs pretty soon, as well.

  • brucethemoose@lemmy.world · edited · 1 month ago

    IDK what they mean by better SSD I/O performance; btrfs was the worst FS I tested for some heavy SSD workloads (like writing thousands of little PNGs in a short time, file searches, merging huge weights with some paging)…

    The features are fantastic, especially for HDDs, but it’s an inherently high overhead FS.

    ext4 was also bad. F2FS and XFS are great, and I’ve stuck with F2FS for now.

  • circuitfarmer@lemmy.sdf.org · edited · 1 month ago

    The CoW nature of Btrfs means it’s often slower than ext4 for common tasks, right? It also means more writes to your SSDs.

    I’ve stuck to ext4 so far, as someone who doesn’t really have a need for snapshotting.

    Edit: I’m not an expert on file systems in the least, so do chime in if these assumptions are incorrect.

      • circuitfarmer@lemmy.sdf.org · 1 month ago

        But if the file system needs extra writes anyway for CoW, and the SSD needs its own CoW, then wouldn’t that end up being exponential writes? Or is there some mechanism which mitigates that?

        • InverseParallax@lemmy.world · 1 month ago

          The FS does CoW, then releases the old block if appropriate.

          The SSD has a tracking map for all blocks; its CoW relies on a block being overwritten to free the old block.

          Basically it works out the same either way.
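          One way to see why the two CoW layers don’t compound into “exponential” writes: each layer adds a roughly constant factor per logical write, so the total amplification is a product of two small constants, linear in the number of writes. A toy model (the factor values are invented for illustration):

```python
# Hypothetical per-layer write amplification factors.
FS_COW_FACTOR = 2.0   # e.g. new data block + CoW metadata bookkeeping
SSD_FTL_FACTOR = 1.5  # e.g. flash translation layer / garbage collection

def physical_writes(logical_writes: float) -> float:
    # The factors compose multiplicatively but stay constant per
    # write -- they do not compound as more writes accumulate.
    return logical_writes * FS_COW_FACTOR * SSD_FTL_FACTOR

print(physical_writes(1000))  # 3000.0 physical writes for 1000 logical
```

          Doubling the logical writes doubles the physical writes, which is exactly the “works out the same either way” point above.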

  • lunarul@lemmy.world · edited · 1 month ago

    I zoomed in to read what they’re saying on the bottom right and was disappointed.