• tiramichu@sh.itjust.works · 7 days ago

    I’ve encountered a pretty annoying thing on Linux several times.

    When copying a large file to a flash drive, the transfer appears to complete very quickly. Yet if you then eject and remove the drive (which the graphical file manager will happily let you do without complaint), you’ll discover on taking the flash drive where it needs to go that your file is frustratingly corrupt.

    This happens because the write to the disk is cached in RAM by the kernel (the page cache), and the file manager apparently reports the copy as complete without waiting for that cache to be flushed to the actual device.
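
    You can actually see this cache at work: on most distros the kernel exposes its write-back counters in /proc/meminfo, and after a big copy you can watch them drain back toward zero as the data is really written out:

        # watch the kernel's write-back cache drain after a big copy;
        # the file is only truly on the stick once these fall to ~0 kB
        watch -n1 "grep -E '^(Dirty|Writeback):' /proc/meminfo"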

    You can avoid this by opening a terminal and running the command ‘sync’, which ensures all cached file writes, for all disks, are fully written out. When the command exits (which may be immediately, or may take some minutes) the USB write is definitely done, and you can safely unmount the drive.
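
    For example (the paths here are purely illustrative, assuming the stick is mounted at /media/usb):

        cp big-video.mkv /media/usb/   # returns as soon as the data is in the page cache
        sync                           # blocks until every cached write, for all disks, reaches hardware
        # GNU coreutils 8.24+ can flush just the one filesystem instead:
        sync -f /media/usb

    Unmounting from a terminal with ‘umount’ achieves the same thing, since it won’t return until pending writes to that filesystem have been flushed.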

    Not sure if this behaviour is distro-dependent in any way, or if other file managers deal with it better, but it’s definitely one of only a few times in modern Linux where I’ve had such an unintuitive experience, and I was super disappointed it didn’t do better. Normally if I shoot myself in the foot it was at least clearly my fault!

    From the user perspective, if I copy the file and then ‘safely’ eject the disk and it lets me, I’ve done everything properly, right?

    Non-technical users must get caught by this all the time, with the difference being that they can’t figure out why it is happening, or what they should do to prevent it.

  • Treczoks@lemmy.world · 7 days ago

    Reminds me of the days when our company had two locations connected by a slow link (way before fast internet everywhere). You could copy even large files from server A to server B in seconds, but closing the file could then hang for hours while the cached data was actually transferred.