• Ubuntu tmpfs /run filling up to 100%

    From gcubebuddy@21:4/129 to All on Wednesday, May 26, 2021 13:55:01
    Hi all, I have been noticing with Ubuntu 20.04 LTS that if I leave Mystic running (under the non-privileged mystic account), the tmpfs /run directory, which is 380 megs, fills up over time to 100% full...
    has anyone else had this issue? I have been trying to track down online how
    to fix it. I did find one command to extend the storage area, but it also fills up over time. The only thing so far that seems to fix it is rebooting the server once a week to clear out the tmpfs mount.

    From what I have read up on it, apparently it is used when hosting a service as a local temporary area: when Linux hosts services out over a
    network, or the cloud, it uses the system memory to create a virtual disk to host the files locally on the running system so it can perform needed tasks
    for the service. Here is an example of what it looks like...


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    osadmin@laptop1:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            7.7G     0  7.7G   0% /dev
    tmpfs           389M  389M     0 100% /run
    ...
    tmpfs           389M  389M     0 100% /run/user/1000
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    So my question is: has anyone here running Mystic on Ubuntu run into this issue? If so, what was the fix or workaround you used?
    I have been looking everywhere online for a fix for this...
    Apparently it is supposed to use about a third of the memory to run this mount
    as a cache / tmp dir. It used to be hosted in /tmp or /var/tmp, but the
    problem with those dirs is that they had universal write privileges, so anyone or any
    account could modify any of the files located there...
    Any advice would help.

    Thanks
    - Gamecube Buddy

    telnet --<{bbs.hive32.com:23333}>--

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Hive32 (21:4/129)
  • From acn@21:3/127.1 to gcubebuddy on Wednesday, May 26, 2021 16:37:00
    Am 26.05.21 schrieb gcubebuddy@21:4/129 in FSX_GEN:

    Hallo gcubebuddy,

    Hi all, i have been noticing with ubuntu 20.04 LTS, that if i leave Mystic running (under the non-privilaged mystic account), that the tmpfs /run directory, which is 380 megs, fills up over time to 100% full...

    Did you look into /run to see what files have been created there?

    Try
    # ls -l /run

    Or, if it contains some subdirectories, you can use
    # du -sh /run/* | sort -h
    to see which directory takes how much space (and this output is sorted)

    This could give some more information :)

    Without knowing what is taking up all the space, we'd have to look into
    our crystal ball...

    Regards,
    Anna

    --- OpenXP 5.0.49
    * Origin: Imzadi Box Point (21:3/127.1)
  • From Spectre@21:3/101 to gcubebuddy on Thursday, May 27, 2021 07:41:00
    Hi all, i have been noticing with ubuntu 20.04 LTS, that if i leave Mystic running (under the non-privilaged mystic account), that the tmpfs /run directory, which is 380 megs, fills up over time to 100% full... has

    Chuckle, I had a similar problem with 'buntu 18 LTS I think it was, but it wasn't /run I had the issue with, it was /var/log. I presume you've tried to delete the obvious candidates while the system is running and, finding no resolution, it still looks full?

    You'll need to identify what is actually writing all the extra data. Rather than "deleting" the file you'll need to truncate it. If you delete an open file the filesystem doesn't actually release the space. The file might be gone, but a df will show no extra space until reboot. Without any significant data in the logs I just truncated them nightly.
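That deleted-but-still-open behaviour is easy to reproduce in a few lines of shell; a sketch, with a temp file standing in for the real log:

```shell
#!/bin/sh
# A deleted-but-still-open file keeps its blocks allocated; only closing
# the last descriptor (or truncating the file in place before deleting
# it) releases them back to the filesystem.
tmp=$(mktemp -d)

# Simulate a daemon holding a log open on fd 3 and writing 1 MiB to it.
exec 3>"$tmp/app.log"
head -c 1048576 /dev/zero >&3

rm "$tmp/app.log"        # the name is gone, but fd 3 still pins the inode;
                         # df will not show the space back yet
readlink /proc/$$/fd/3   # on Linux the link target ends in " (deleted)"

exec 3>&-                # close the last reference: now the space is freed
rm -r "$tmp"
```

This is why a plain `rm` on a busy log appears to do nothing in `df`.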

    Spec


    *** THE READER V4.50 [freeware]
    --- SuperBBS v1.17-3 (Eval)
    * Origin: We know where you live, we're coming round to get you (21:3/101)
  • From gcubebuddy@21:4/129 to Spectre on Friday, May 28, 2021 14:15:36
    You'll need to identify what is actually writing all the extra data. Rather than "deleting" the file you'll need to truncate it. If you
    delete an open file the filesystem doesn't actually release the space.
    but a df will show no extra space until reboot. Without any significant data in
    the logs I just truncated them nightly.
    Spec

    Hey! Awesome, thanks for responding.
    By chance, do you know what command you are running to do the truncation?
    The only thing that seems to release the space is just rebooting. I have not tried remounting it yet, though... so I don't know if that would help. I did log out of all accounts associated with the /run/user/1000 dir. It also seems
    that /run is a separate mount as well.
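For reference, the inspection and temporary-enlarge commands usually look like this; a sketch, where the 512M figure is purely illustrative and a remount does not survive a reboot:

```shell
#!/bin/sh
# Show tmpfs mounts with their size limits and current usage.
df -h -t tmpfs
findmnt -t tmpfs -o TARGET,SIZE,USED,OPTIONS

# Temporary workaround, needs root (size value is illustrative):
# sudo mount -o remount,size=512M /run
```

The remount only buys headroom; it does not address whatever is writing the files, so the mount will fill again.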

    I wonder what is causing these files to stick like this and not be cleared
    out when the system is done with them... I would imagine that this should be causing a lot of issues for people... I have been trying to look through my Ubuntu 20.04 Unleashed book, but can't seem to locate anything there, and there doesn't seem to be a lot of info on this issue or fixes listed in the Ubuntu forums. Thanks for helping with this.

    Thanks
    - Gamecube Buddy

    telnet --<{bbs.hive32.com:23333}>--

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Hive32 (21:4/129)
  • From tenser@21:1/101 to gcubebuddy on Saturday, May 29, 2021 10:04:17
    On 28 May 2021 at 02:15p, gcubebuddy pondered and said...

    You'll need to identify what is actually writing all the extra data. Rather than "deleting" the file you'll need to truncate it. If you delete an open file the filesystem doesn't actually release the space but a df will show no extra space until reboot. Without any significant data in
    the logs I just truncated them nightly.
    Spec

    Hey! awesome thanks for responding.
    by chance, do you know what command you are running to do the
    trunication? the only thing that seems to release the space is just rebooting. i have not tried remounting it yet though... so i dont know
    if that would help. i did a logout of all accounts associated with /run/user/1000 dir. it also seems that /run is also a seperate mount as well.

    Truncating a file like that can be done using the `truncate(2)`
    system call. On most Linux distributions, there's a program
    called `truncate` that will likely do what you want.

    A couple of caveats: the actual details of freeing blocks and
    so forth are implemented by the filesystems; it's possible there
    exist filesystems where one can `ftruncate()` a file descriptor
    or `truncate()` a file with open references and the allocated
    block situation doesn't change; I don't know of any that do that,
    though. It's a bit of a weird way to approach the world.

    The second caveat is that this is likely a good way to create
    what we call "sparse" files. Truncating a file that has open
    file descriptors against it doesn't modify the offset of that
    descriptor in the file. So if you've got a process that has
    an open file descriptor for some file, and has been steadily
    writing data to it and now has a write offset of, say, a
    megabyte, then if you truncate it, the allocated blocks associated
    with the file will be freed, but the next write that process makes
    will be at the megabyte offset. The result is a file that appears
    to be more than a megabyte long, but the beginning of it will now
    all be zeros; most modern OSes won't actually allocate blocks
    for those unless you lseek() somewhere in the sparse region and
    write something there; reads in that region will be synthetically
    filled with zeros. But, again, how that's _actually_ implemented
    depends on the filesystem's particular semantics.
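The write-offset scenario described above can be reproduced in shell; a sketch assuming Linux with coreutils `truncate`, with a temp file standing in for the log:

```shell
#!/bin/sh
# Truncate a file while a writer still holds it open WITHOUT O_APPEND;
# the writer's next write lands at its old offset, leaving a hole.
tmp=$(mktemp -d)
f="$tmp/app.log"

exec 3>"$f"                    # plain open (no append): offset is preserved
head -c 1048576 /dev/zero >&3  # fd 3's write offset is now 1 MiB

truncate -s 0 "$f"             # blocks freed; fd 3's offset is still 1 MiB
printf 'next entry\n' >&3      # lands at 1 MiB: the file is now sparse

ls -l "$f"    # apparent length: a bit over 1 MiB
du -k "$f"    # allocated blocks: far fewer, the hole reads back as zeros

exec 3>&-
rm -r "$tmp"
```

Had fd 3 been opened with `>>` (O_APPEND) instead, the write would land at the new end of file, and no hole would appear.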

    I wonder what is causing these files to stick like this and not be
    cleared out when the system is done with them... i would imagine that
    this should be causing alot of issues for people... i have been trying
    to look through my ubuntu 20.04 unleashed book, but can seem to locate anything there. and there doesnt seem to be a lot of info on this issue
    or fixes listed in the ubuntu forums. thanks for helping with this.

    It's probably a bug. If a file is deleted, but there is still an
    open file descriptor that references that file, then the resources
    associated with it won't be freed until the last reference to the
    file disappears (that is, the last file descriptor referring to it
    is close()'d).

    You don't have to reboot, necessarily; if you were to, say,
    kill the process holding the file open, that should be enough.

    Lots of Unix daemons that open log files and things like that will
    close() and re-open() them in response to a signal, often SIGHUP
    (which, since it's generated by a terminal and daemons are never
    supposed to be associated with a terminal, was considered "safe"
    to send to a random daemon).

    Tools like `lsof` can help you locate what process has what files
    open.
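In practice that hunt looks something like this; a sketch assuming `lsof` is available (the /proc walk below works without it):

```shell
#!/bin/sh
# List open files under /run whose link count is 0, i.e. deleted but
# still held open by some process. Guarded in case lsof is missing;
# lsof exits nonzero when it finds nothing, hence the || true.
if command -v lsof >/dev/null 2>&1; then
    lsof +L1 /run || true
fi

# The same information straight from /proc: fd symlinks whose target
# ends in " (deleted)".
find /proc/[0-9]*/fd -lname '*(deleted)*' 2>/dev/null
```

Each /proc hit has the form /proc/PID/fd/N, so the path itself identifies the offending process.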

    --- Mystic BBS v1.12 A46 2020/08/26 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From Spectre@21:3/101 to gcubebuddy on Saturday, May 29, 2021 11:02:00
    Hey! awesome thanks for responding. by chance, do you know what
    command you are running to do the trunication? the only thing that

    Yeah the nova-trunions are truncated with something like...

    echo :>/yourfilenamehere

    Spec


    *** THE READER V4.50 [freeware]
    --- SuperBBS v1.17-3 (Eval)
    * Origin: We know where you live, we're coming round to get you (21:3/101)
  • From Spectre@21:3/101 to tenser on Saturday, May 29, 2021 11:13:00
    It took me a while, but it was logging from VirtualBox that was causing my problems originally. In the end I was able to disable logging or cut it right down, I don't recall which now.

    A couple of caveats: the actual details of freeing blocks and so
    forth are implemented by the filesystems; it's possible there exist filesystems where one can `ftruncate()` a file descriptor or
    `truncate()` a file with open references and the allocated block
    situation doesn't change; I don't know of any that do that, though.
    It's a bit of a weird way to approach the world.

    I don't know what files he'd have sitting in /run that are continually expanding, but logging at least didn't have sparse files or a write offset; both appear to just hold the file open and keep appending data. If you delete the open file then, regardless of what user space sees, the filesystem retains the data until the owning application is closed or closes the file itself.

    I found it somewhat odd that an echo :>/filename was sufficient to truncate it and have the space released, presumably because it doesn't require a close to change the file. Everything was then happy to just pick up where it left off.

    In my initial hunt for a solution I didn't come across truncate, the boffins just went straight to echoing over the existing file.

    Spec


    *** THE READER V4.50 [freeware]
    --- SuperBBS v1.17-3 (Eval)
    * Origin: We know where you live, we're coming round to get you (21:3/101)
  • From gcubebuddy@21:4/129 to tenser on Wednesday, June 02, 2021 11:58:26
    Truncating a file like that can be done using the `truncate(2)`
    system call. On most Linux distributions, there's a program
    called `truncate` that will likely do what you want.

    Awesome, thanks for the info. I am going to have to do some more research
    into this once I have some free time.

    Thanks
    - Gamecube Buddy

    telnet --<{bbs.hive32.com:23333}>--

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Hive32 (21:4/129)
  • From tenser@21:1/121 to Spectre on Wednesday, June 02, 2021 10:14:59

    On Saturday, May 29th Spectre was heard saying...
    A couple of caveats: the actual details of freeing blocks and so
    forth are implemented by the filesystems; it's possible there exist filesystems where one can `ftruncate()` a file descriptor or `truncate()` a file with open references and the allocated block situation doesn't change; I don't know of any that do that, though. It's a bit of a weird way to approach the world.

    I don't know what files he'd have sitting in /run that are continually expanding, but logging at least didn't have sparse files or a write offset. Both appear to just hold the file open and keep appending data. If you delete the open file then regardless of the user space, the filesystem retains the data until the owning application is closed or closes the file itself.

    Hmm; something else is at play, then. If a process has some
    open file descriptor associated with some file, then that file
    descriptor has read and write pointers into the file that are
    set at some point. Let us assume we're talking about a logging
    process, such as syslog. In this case, let us further assume
    that those pointers are non-zero.

    Now, suppose that some other process comes along and truncates
    the file to length 0. All of the blocks associated with the
    file will (presumably) be released back to the underlying
    filesystem; at this point, the length of the file is zero.

    Now assume the logging process wakes up and writes some data
    to the file. It will do so at whatever its write pointer is
    set to; as above, it is not zero, and truncation does not
    affect read/write pointers in open file descriptors referring
    to the truncated file.

    The effect would be the same as open()'ing the file, lseek()'ing
    to some offset, and then write()'ing some data at that offset.
    That's basically creating a sparse file; the "file length" will
    now be whatever the write offset was, plus the length of the write
    (assuming it completed successfully).

    If you truncated a file and it appeared to grow from zero again,
    that's a strong indicator that whatever is doing the logging is
    open()'ing the file before every write and close()'ing it after,
    or that it opened the file with O_APPEND, which always writes at
    the current end of the file. Given all the shenanigans that have
    occurred around rolling logs over the years, that's frankly not
    unreasonable.

    I found it somewhat odd that an echo :>/filename was sufficient to truncate it, and have the space released presumably because it doesn't require a close to change the file. Everything was then happy to just pick up where it left off.

    In my iniital hunt for a solution I didn't come across trunc, the boffins just went straight to echo over the existing file.

    Interesting. That's shell dependent, however, in how the shell
    implements '>' redirection (it would be reasonable to unlink() and
    then creat()/open(..., O_CREAT|..., ...) the file, for example:
    POSIX merely says the output file has to be truncated to zero length,
    but doesn't specify how. I think it would be a bit weird not to
    just open it with O_TRUNC, though).
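For completeness, both idioms zero the file in place without replacing the inode, which is exactly why a process holding it open keeps working; a quick sketch with an illustrative temp file:

```shell
#!/bin/sh
# Zero a file in place two ways; the inode number proves the file is
# truncated, not unlinked and recreated.
f=$(mktemp)
printf 'some log data\n' > "$f"
ino_before=$(stat -c %i "$f")

: > "$f"            # shell redirection; opens the existing file with O_TRUNC
truncate -s 0 "$f"  # coreutils truncate(1), a wrapper around truncate(2)

ino_after=$(stat -c %i "$f")
wc -c < "$f"                      # 0 bytes
[ "$ino_before" = "$ino_after" ]  # same inode: nothing was unlinked
rm "$f"
```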

    --- ENiGMA 1/2 v0.0.12-beta (linux; x64; 14.15.4)
    * Origin: Xibalba -+- xibalba.l33t.codes:44510 (21:1/121)