To be clear, I don't blame the poster of this comment at all for the content of their post – this is accepted as "common knowledge" by a lot of Linux sysadmins and is probably one of the most likely things you will hear if you ask one to talk about swap. Unfortunately, it is also a misunderstanding of the purpose and use of swap, especially on modern systems.

  • Balinares

    Great article, thanks for posting! Worth noting that swap also backs tmpfs filesystems, meaning that if you don't have any swap, temporary files in /tmp will sit in your actual physical RAM. That's probably not what you want.

  • 4wd@programming.dev

    I’ve never understood why GNU/Linux actually needs swap. Okay, I created a 4G partition for it, having 32G of RAM. I never used all that RAM, but even so, stuff regularly ends up in swap. Why does the OS waste write cycles on my SSD if it doesn’t have to?

    However, if I artificially fill up all 32G of RAM, the system almost completely freezes before swap ever kicks in as a "lifeline". And it only comes back to life when the OOM killer finally remembers it exists and kills some f__ing important process.

    • DefederateLemmyMl@feddit.nl

      I’ve never understood why GNU/Linux actually needs swap

      It doesn’t. It’s just good to have in most circumstances.

      Also, side note: "GNU" doesn't apply here. Swapping is purely kernel business; there's no GNU involvement.

      Okay, I created a 4G partition for it, having 32G of RAM. I never used all that RAM, but even so, stuff regularly ends up in swap. Why does the OS waste write cycles on my SSD if it doesn’t have to?

      Physical memory does not just contain program data; it also contains the filesystem cache, which is likewise important for performance and responsiveness. The idea is that some of the least recently used memory pages are sometimes evicted to swap in favor of more file caching.

      You can tweak this behavior by setting the vm.swappiness kernel parameter with sysctl. Basically, higher values mean a stronger preference for keeping file-backed pages in memory (so anonymous program memory gets swapped out more readily), while lower values mean a stronger preference for keeping regular anonymous memory pages in memory.

      By default vm.swappiness = 60. If you have an abundance of memory, like a desktop system with 32G, it can be advantageous to lower the value of this parameter. If you set it to something low like 10 or 1, you will rarely see any of this paradoxical swap usage, but the system will still swap if absolutely necessary. I remember reading somewhere that it’s not a good idea to set it to 0, but I don’t remember the reason for that.

      Alternatively, there is no rule that says you can’t disable swap entirely. I’ve run a 32G desktop system without any swap for years. The downside is that if your 32G does run out, there will be no warning signs and the OOM killer will unceremoniously kill whatever is using the most memory.
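
      For reference, turning swap off on an already-running system is simple too. Roughly, assuming the swap is configured through /etc/fstab:

      swapoff -a   # deactivate all active swap areas right now (as root)
      # then comment out or remove the swap entry in /etc/fstab so it stays off after a reboot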

      tl;dr just do this:

      sysctl vm.swappiness=10                                     # apply immediately (run as root)
      echo "vm.swappiness=10" > /etc/sysctl.d/99-swappiness.conf  # make the setting persist across reboots
      
  • recursive_recursion [they/them]@programming.dev

    definitely agreed on avoiding swap as memory as much as you can, especially since it can needlessly wear out your SSD (an M.2 drive on most if not all modern systems)

    allocating swap is still necessary as it provides features such as:

    • mitigating OOM scenarios (but doesn’t prevent them completely)
    • enabling system hibernation (suspend-to-disk); see the quick check below
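
    a quick way to sanity-check both points (a rough sketch; as a rule of thumb, hibernation wants a swap area at least as large as your RAM):

    free -h          # compare the "Mem:" total against the "Swap:" total
    swapon --show    # list active swap devices/files, their sizes and current usage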

    On my journey to learning Arch Linux I’ve personally tried:

    • 0 swap: which was pretty awful with constant unexpected system freezes/crashes
    • swap file: finicky but doable
    • swap partition: typical default/ol’reliable

    the last two don't guarantee zero problems, since user errors such as allocating too much system memory to VMs can still cause issues for the host system, but they do tend to mitigate the majority of issues that hurt the perception of system stability in comparison to Windows or Macs

    Resources:

      • recursive_recursion [they/them]@programming.dev

        after a quick preliminary search,
        a couple of things I’ve found out:

        this might not necessarily be adopted by mainstream desktop users, due to the nature of zram compressing data in addition to it disallowing hibernation

        Hibernating to swap on zram is not supported, even when zram is configured with a backing device on permanent storage. logind will protect against trying to hibernate to a swap space on zram.

        • data compression by nature incurs data loss, so there are both pro and con tradeoffs which should be communicated to the user, otherwise they might end up with an unnecessarily worse user experience:
          • “why is my computer so buggy?”
            • (the system is working as it's coded but not working in the way the user expects it to)
              • this would be unfortunate, as they might go back to Windows or MacOS
          • I could be completely wrong about this, if so please comment or disregard this section entirely

        to me, atm zram seems great for server-based systems rather than for desktop users' systems
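
        for context, this is roughly what a manual zram swap setup looks like (the device, size and algorithm are just examples; most distros ship a zram-generator service that automates this):

        modprobe zram                                   # load the zram module (creates /dev/zram0)
        zramctl --algorithm zstd --size 4G /dev/zram0   # back the device with 4G of compressed RAM
        mkswap /dev/zram0
        swapon --priority 100 /dev/zram0                # higher priority than disk swap, so it gets used first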


        one other method for mainstream zram adoption is to encourage an eccentric setup that I'm currently using, which is to have the host system contain only the minimal packages needed to run Virtual Machines:

        • zram compression might be fine as the main services/programs are run within their relevant and/or separated VM containers
          • this simultaneously achieves:
            • Wikipedia - Separation of Concerns
            • enhanced security as the host is further firewalled/bubblewrapped from the guest like Flatpaking/containerization (and malware is less likely to activate due to malicious users being unhelpful in letting their programs run in a VM 😢🤗)
        • hibernation can still be achieved by save-stating VMs (restoring a VM at will basically achieves hibernation, so it doesn't really matter if the host is shut down so long as all VMs are saved by the user before host shutdown); a sketch of this is shown below
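
        a rough sketch of the save-state idea, assuming libvirt-managed VMs (the domain name "workvm" is made up):

        virsh managedsave workvm   # save the VM's memory state to disk and stop it
        virsh start workvm         # later: the VM resumes exactly where it left off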

        sorry for the long comment!
        stuff like this interests me the most, so I've spent a lot of time learning and tinkering around 🤗

    • DefederateLemmyMl@feddit.nl

      0 swap: which was pretty awful with constant unexpected system freezes/crashes

      I’ve run Arch without swap for many years without issues. The key of course is that you need enough RAM for what you are trying to do with your computer.

      There’s no reason why a 32GB RAM + 0GB swap system should have more problems than a 16GB RAM + 16GB swap system with the same workload. If anything, the former is going to run much better.

      swap file: finicky but doable

      What is finicky about a swap file?

      It’s just this:

      mkswap -U clear --size 4G --file /swapfile
      swapon /swapfile
      

      Done

      If anything it’s way easier to create a file in your filesystem than having to (re-)partition your drive to have a swap partition. Much more flexible too if you want to change your swap configuration in the future.
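
      One detail worth adding: to have the swapfile come back after a reboot, it also needs an fstab entry, e.g.:

      echo '/swapfile none swap defaults 0 0' >> /etc/fstab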

      • recursive_recursion [they/them]@programming.dev

        I’ve run Arch without swap for many years without issues. The key of course is that you need enough RAM for what you are trying to do with your computer.

        There’s no reason why a 32GB RAM + 0GB swap system should have more problems than a 16GB RAM + 16GB swap system with the same workload. If anything, the former is going to run much better.

        I run a lot of VMs; I typically run 2 at the same time in addition to running other programs in the background. My use case is more eccentric than most users' in the Linux space, which is already pretty niche

        What is finicky about a swap file?

        It’s just this:

        mkswap -U clear --size 4G --file /swapfile
        swapon /swapfile
        

        Done

        I'm using BTRFS with LUKS-based Full Disk Encryption. The last time I used swapfiles with BTRFS with FDE was in 2019, and it was painful to say the least; I remember spending several weeks scouring Stack and the Arch forums to get it to work properly.
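
        For reference, the BTRFS-specific pain is that a swapfile must not be copy-on-write, compressed, or in a snapshotted subvolume, so the classic manual dance looks roughly like this (newer btrfs-progs also ship a "btrfs filesystem mkswapfile" helper):

        truncate -s 0 /swapfile    # create an empty file
        chattr +C /swapfile        # disable copy-on-write (only possible while the file is empty)
        dd if=/dev/zero of=/swapfile bs=1M count=4096   # fully allocate 4G; swap files must not have holes
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile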

        • use cases and preferences will differ from user to user, so I wouldn't advise assuming a one-size-fits-all solution

        If anything it’s way easier to create a file in your filesystem than having to (re-)partition your drive to have a swap partition. Much more flexible too if you want to change your swap configuration in the future.

        swapfiles are great in that you can resize them at will; however, after trying all 3 options I found myself questioning when would I even want to resize swap for a single system, so at a certain point I felt that swapfiles were more of a gimmick, because again you don't really want to depend on swap if you can avoid it.

        • if you're consistently reaching your physical memory limits, upgrading your RAM will most likely be the best bet for both system stability and hardware durability
        • DefederateLemmyMl@feddit.nl

          I run a lot of VMs; I typically run 2 at the same time in addition to running other programs in the background. My use case is more eccentric than most users' in the Linux space, which is already pretty niche

          If what you’re doing involves using close to all of your system memory, it does make sense to add swap. So your use case is a good example actually.

          I also have an old Arch PC that I use to run various VMs on (currently 6 VMs in use). It does have a swapfile, but the most swap I’ve ever seen in use is about 1GB.

          I'm using BTRFS with LUKS-based Full Disk Encryption. The last time I used swapfiles with BTRFS with FDE was in 2019, and it was painful to say the least; I remember spending several weeks scouring Stack and the Arch forums to get it to work properly.

          Weird. Sounds like you may have painted yourself a bit into a corner by using BTRFS then. I use trusty old ext4 on top of LUKS FDE; no issues with swapfiles whatsoever.

          That brings me to another downside of swap partitions: encryption. You can leak sensitive data through your swap partition, so it should be encrypted. If you use a plain partition, without LUKS in between, information in your swap is exposed. So you need to do more configuration to set up LUKS on your swap partition.
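
          For the curious, that extra configuration typically boils down to two lines, sketched here with plain dm-crypt and a throwaway random key so swap contents are unrecoverable after every reboot (the partition name is just an example):

          echo 'cryptswap /dev/sda2 /dev/urandom swap,cipher=aes-xts-plain64,size=256' >> /etc/crypttab
          echo '/dev/mapper/cryptswap none swap defaults 0 0' >> /etc/fstab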

          If you use a swapfile on an already encrypted filesystem though, you don’t have to worry about it.

          when would I even want to resize swap for a single system

          Maybe your requirements change (e.g. “I want to be able to hibernate”), maybe your memory configuration changes, maybe you’ve underestimated or overestimated how much swap you need.

          Case in point: the Arch PC I mentioned above only uses up to 1GB of swap, but it has a 16GB swapfile. This discussion has brought to my attention that perhaps I should downsize the swapfile a bit and free up disk space.

          you don't really want to depend on swap if you can avoid it

          That is my position too. It’s always better to have a properly sized system, or limit what you push on an existing system. High swap usage rarely results in a good experience.

  • lurch (he/him)@sh.itjust.works

    I've been running Linux without swap on my workstations and gaming PCs for 20 years now. If you don't hibernate and have enough RAM, swap is useless.

    My memory doesn’t need to be managed. I have 20GB in my current setup and it was never full. If anything gets swapped in this situation it means it needlessly slows me down.

    I even mount tmpfs ramdisks for my shader cache dirs, because they get recreated every time anyway, and why would I want anything temporary on disk if I have 20 GB of RAM.
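
    For illustration, such a tmpfs mount is a one-liner (the path and size here are just examples):

    mount -t tmpfs -o size=2G,mode=0700 tmpfs ~/.cache/shaders   # as root, or via an fstab entry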

    • DefederateLemmyMl@feddit.nl

      I have 20GB in my current setup and it was never full. If anything gets swapped in this situation it means it needlessly slows me down.

      Not necessarily. Your memory also contains file backed pages (i.e. “file system cache”). These pages are typically not counted when determining “memory usage”, because they can always be discarded.

      It is often advantageous to keep frequently used files in cache in favor of infrequently used memory pages.
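
      You can see the distinction directly in /proc/meminfo, for example:

      grep -E '^(MemFree|MemAvailable|Cached):' /proc/meminfo   # MemAvailable far exceeds MemFree because the cache is reclaimable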

      • lurch (he/him)@sh.itjust.works

        so you think it's faster to keep the cache for files on a disk, almost like where the files already are, instead of in the 14 GB of actually free RAM that the "free" command shows? if that's your opinion, okay, but i don't agree at all. (btw, that command also shows cache and i think that's included.)

        • DefederateLemmyMl@feddit.nl

          You are misunderstanding.

          The file cache is never written out to the swapfile, because files are already on disk, like you say. The file cache is kept in memory, and the kernel may decide it's more advantageous to swap out unused anonymous memory pages to disk than to flush a file from the cache. You can use the vm.swappiness parameter to fine-tune this behavior to your liking, btw. Lower values favor keeping more anonymous memory pages in memory, higher values favor file-backed pages.

          To give an extreme example of where this is useful: I have a use case where I process a number of large video files (each 2-10 GiB in size). The job I'm doing involves several sequential passes over the same file. You can bet your ass that caching them in memory speeds things up dramatically: the first pass, where it has to read the file from disk, runs at 200x speed (relative to the video's duration); the second pass at 15000x speed.
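
          The effect is easy to reproduce yourself; a rough sketch (the filename is made up):

          time cat big_video.mkv > /dev/null   # first run: reads from disk
          time cat big_video.mkv > /dev/null   # second run: served from the page cache, typically far faster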

          Even in less extreme circumstances it also helps by keeping frequently accessed files in your home directory in memory, for example your browser profile. Your browser and desktop environment would be much more sluggish if it had to reach out to disk every time for every file it touched.

          And you are free to disagree, but it's not my opinion but the opinion of the kernel developers of just about every operating system built in the past 4 decades. So I'd say: take up the argument with them and see how far you get.

          • lurch (he/him)@sh.itjust.works

            oh, i see. i have never done anything like your example. i have converted lots of videos, but not in a way that goes back over the file. yeah, i can see how you would want to slow down everything else on the system by swapping it out, to get your video processed a bit faster. it's just nothing i would do. however, if i wanted to, i could just truncate a 6GB file, add it as swap and delete it afterwards.
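
            something like this, roughly (the path is just an example; note the file needs real allocated blocks, so fallocate rather than a sparse truncate):

            fallocate -l 6G /var/tmp/tmpswap && chmod 600 /var/tmp/tmpswap
            mkswap /var/tmp/tmpswap && swapon /var/tmp/tmpswap
            # ... run the memory-hungry job ...
            swapoff /var/tmp/tmpswap && rm /var/tmp/tmpswap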

  • nixfreak@sopuli.xyz

    This is a great article, and I'd actually rather use swap than no swap. My systems run so much better with swap.

  • zero_spelled_with_an_ecks@programming.dev

    Without swap: The OOM killer is triggered more quickly as anonymous pages are locked into memory and cannot be reclaimed.

    This is why I prefer no swap on servers. I want the OOM killer to kill things quickly; otherwise everything slows to a crawl, including the ssh and terminal sessions that I'd use to troubleshoot and kill things myself anyway. I have redundant servers, so a down system is much less disruptive than a terminally slow system.

  • boredsquirrel@slrpnk.net

    Swap is only needed for emergencies: if you run into a bad OOM situation, or if you have low RAM.

    If you have like 1-4GB of RAM, ZRAM may be better than swap.