There’s been some Friday night kernel drama on the Linux kernel mailing list… Linus Torvalds has expressed regret over merging the Bcachefs file-system, prompting a back-and-forth with the file-system’s maintainer.

  • NeoNachtwaechter@lemmy.world · 3 months ago

    two files in the same folder, one of them stored compressed on an array of HDDs in RAID10 and the other one stored on a different array […]

    Now that’s what I call serious over-engineering.

    Who in the world wants to use that?

    And does that developer maybe have some spare time? /s

    • apt_install_coffee@lemmy.ml · 3 months ago

      This is actually a feature that enterprise SAN solutions have had for a while; being able to choose your level of redundancy & performance at the file level is extremely useful for minimising downtime and for not replicating ephemeral data.

      Most filesystem features are not for the average user who has their data replicated in a cloud service; they’re for businesses where this flexibility saves a lot of money.

      • apt_install_coffee@lemmy.ml · 3 months ago

        I’ll also tack on that when you use cloud storage, what do you think your stuff is stored on at the end of the day? Sure as shit not Bcachefs yet, but it’s more likely than not on some NetApp appliance, for the same features that Bcachefs is developing.

    • Max-P@lemmy.max-p.me · 3 months ago

      Simple example: my Steam library could be RAID0 and unencrypted, but my backups I definitely want RAID1, compressed, and encrypted for security. The media library doesn’t need encryption, but I might want it in RAID1 because ripping movies takes forever. I may also want the games on NVMe while I’m playing them and moved to the HDDs when I’m not, and my VMs on the SATA SSD array as a performance middle ground.
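
      Something like the sketch below is what that per-directory tuning could look like. Treat it as a hedged illustration only: the pool path is made up, and the `bcachefs setattr` subcommand and option spellings are assumptions based on the bcachefs manual, not tested commands (note also that encryption in bcachefs is currently a whole-filesystem option, not per-file):

      ```sh
      # Hypothetical pool mounted at /mnt/pool; per-directory options are
      # inherited by new files created underneath.

      # Steam library: single copy, no compression -- cheap to re-download anyway.
      bcachefs setattr --data_replicas=1 /mnt/pool/games

      # Backups: two replicas plus compression.
      bcachefs setattr --data_replicas=2 --compression=zstd /mnt/pool/backups

      # Media rips: two replicas, no compression (video is already compressed).
      bcachefs setattr --data_replicas=2 /mnt/pool/media
      ```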

    • Semperverus@lemmy.world · 3 months ago

      This probably meets some extreme corporate use case where they are serving millions of customers.

      • DaPorkchop_@lemmy.ml · 3 months ago (edited)

        It’s not that obscure - I had a use case a while back where I had multiple rocksdb instances running on the same machine and wanted each of them to store its WAL only on SSD storage, with compression, while the main tables were stored uncompressed on an HDD array with a write-through SSD cache (ideally using the same set of SSDs, for cost). I eventually did it, but it required partitioning the SSDs in half, using one half for a bcache (not bcachefs) in front of the HDDs, and then using the other half of the SSDs to create a compressed filesystem, on which I created subdirectories and bind-mounted each one into the corresponding rocksdb database.

        Yes, it works, but it’s also ugly as sin, and the SSD allocation between the cache and the WAL storage is fixed (I’d like to use as much space as possible for caching). With bcachefs this would be just a few simple commands, and it would also be completely transparent once configured (no messing around with dozens of fstab entries or bind mounts).
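
        For illustration, the bcachefs equivalent might look roughly like the sketch below. Device paths and labels are hypothetical, and the option names (`--label`, the `*_target` options, the `setattr` subcommand) are assumptions based on the bcachefs manual rather than a tested setup:

        ```sh
        # One filesystem spanning SSDs and HDDs; the SSDs get their own label
        # group so they can act as the foreground/promote (cache) tier.
        bcachefs format \
            --label=ssd.ssd1 /dev/nvme0n1 \
            --label=ssd.ssd2 /dev/nvme1n1 \
            --label=hdd.hdd1 /dev/sda \
            --label=hdd.hdd2 /dev/sdb \
            --foreground_target=ssd \
            --promote_target=ssd \
            --background_target=hdd
        mount -t bcachefs /dev/nvme0n1:/dev/nvme1n1:/dev/sda:/dev/sdb /srv/db

        # Pin each database's WAL directory to the SSDs and compress it; the
        # main tables keep the default policy (HDD-backed, SSD-cached).
        bcachefs setattr --foreground_target=ssd --background_target=ssd \
            --compression=lz4 /srv/db/instance1/wal
        ```

        The whole SSD capacity stays in one shared pool, so the cache/WAL split is no longer fixed at partitioning time.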

        • MrSpArkle@lemmy.ca · 3 months ago

          Is there a reason for bind mounting and not just configuring the db to point at a different path?