eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9779 Location: almost Mile High in the USA
Posted: Tue Jun 23, 2020 4:46 pm    Post subject: Create a big file and allocate it, fast?
This I still haven't seen a solution to over the years. Creating a file is always O(n), because the filesystem must write something for every block of the file (discounting sparse files). That's slow, and it also generates unneeded writes to SSDs.
Theoretically one should be able to create a file in O(1) to O(log n) time: just create the file entry and mark off the extents for it. This is clearly a security hole, since the unallocated space may contain bits of deleted files, but only root can cat /dev/sda anyway, so letting root do this adds no new hazard.
Sparse files don't actually allocate the space, so I can't use them for temporary swap files, which is the main use here.
This is a fairly specific use model, but can it be done, and how?
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
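For reference, on filesystems that support the fallocate(2) syscall (ext4, XFS, btrfs), the util-linux fallocate tool does roughly what's being asked for, except that the reserved extents read back as zeroes rather than old disk contents. A minimal sketch contrasting it with a sparse file (file names are illustrative):

```shell
# truncate only sets the file size (no blocks reserved);
# fallocate reserves extents up front without writing the data itself.
truncate  -s 64M sparse.img     # size 64M, (almost) no blocks on disk
fallocate -l 64M alloc.img      # size 64M, blocks actually reserved

# Compare apparent size vs. blocks actually allocated.
stat -c '%n: size=%s blocks=%b' sparse.img alloc.img
```

On ext4 the preallocated extents are marked "unwritten", so reads return zeroes without the data ever having been written, which is how the security hole described above is avoided.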
szatox Advocate
Joined: 27 Aug 2013 Posts: 3348
Posted: Tue Jun 23, 2020 5:59 pm    Post subject:
I think some torrent clients do that, but the feature depends on the FS (they fall back to a zero-filled file when it's unavailable). So it is possible, but not in a generic way.
ct85711 Veteran
Joined: 27 Sep 2005 Posts: 1791
Ant P. Watchman
Joined: 18 Apr 2009 Posts: 6920
Posted: Tue Jun 23, 2020 9:17 pm    Post subject:
You shouldn't do this, and there's no point micro-optimising writes when you're going to put a swap file on an SSD anyway, but use debugfs's fallocate subcommand.
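debugfs's fallocate request can be tried out safely against a scratch image rather than a live filesystem. A hedged sketch, assuming a reasonably recent e2fsprogs (the image name, file name, and block range are arbitrary):

```shell
# Build a throwaway ext4 image, then allocate blocks to a file in it
# without writing any data. debugfs operates on the unmounted image
# directly, bypassing the kernel, which is why it can hand out
# uninitialized blocks.
truncate -s 16M scratch.img
mke2fs -q -F -t ext4 scratch.img

debugfs -w -R 'write /dev/null bigfile'  scratch.img  # create an empty inode
debugfs -w -R 'fallocate bigfile 0 255'  scratch.img  # map blocks 0..255
debugfs    -R 'stat bigfile'             scratch.img  # inspect the result
```

Running debugfs -w against a mounted filesystem is unsafe; the scratch image avoids that entirely.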
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9779 Location: almost Mile High in the USA
Posted: Tue Jun 23, 2020 9:26 pm    Post subject:
Oh yeah, it sure would be fs-dependent. I'd consider switching to such an FS ...
I was playing with fallocate a long time ago, and back then it too seemed to zero all the blocks; I don't know about now. It still appears not to do quite what I want. fallocate seems not to work at all on one of my machines with ext3, so it might be attempting the right thing there, though I really would like to see the old garbage in the file as proof that it did what I wanted (my ext4 machine returns zeroed files).
I also don't agree this is micro-optimization: with huge files, zeroing the file costs one full erase cycle on that much of the media, and none of that zeroed space matters if it's actually zeroed. Temporary empty filesystem images would also benefit from this.
(Incidentally, perhaps the reason it didn't do what I expected in terms of execution time in the past is that --posix may somehow have been used as a default when the filesystem doesn't support fallocate. That might be why I didn't accept it as a proper solution back then.)
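One way to tell which behaviour you actually got is to look at the extent flags. A sketch, assuming util-linux fallocate and e2fsprogs' filefrag (file name illustrative):

```shell
# fallocate(1) without --posix uses the fallocate(2) syscall and fails
# (EOPNOTSUPP) on filesystems such as ext3 that can't preallocate;
# only with --posix does it fall back to posix_fallocate(), which
# writes zeroes block by block.
fallocate -l 64M testfile

# On ext4, genuinely preallocated extents are flagged "unwritten" in the
# extent map, showing that no data (zeroes included) was ever written.
filefrag -v testfile
```

So a quick error on ext3 plus "unwritten" extents on ext4 would be the proof asked for above, without needing to see actual garbage in the file.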