Triggered by the restore-data-from-a-deleted-file thread, I was thinking about what improvements have been made in file systems in the last 50 or so years. What was not present in Multics or ITS or early Unix?
I'm not talking about capacity limits. At the birth of the PC, one could have built a FAT that would have worked on today's disks - a million times bigger - but what would have been the point?
I can think of three candidates:
1. Sparse files
2. Deduplication
3. Copy-on-write
Sparse files are definitely a niche. Copy-on-write can promote fragmentation, which was an issue back in the day, so I can understand why it didn't happen. ("You want me to do what?") Deduplication is a good idea when it is, and a very, very bad idea at other times.
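For anyone who hasn't run into sparse files: the trick is that seeking past the end of a file and writing creates a "hole" that most modern filesystems (ext4, XFS, APFS, NTFS) never allocate blocks for. A minimal sketch in Python, assuming a filesystem with sparse support (the path is just an illustration):

```python
import os
import tempfile

# Create a file with a 10 MiB hole followed by a single real byte.
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024)  # jump 10 MiB past the start without writing
    f.write(b"\0")            # one actual byte at the end

st = os.stat(path)
logical = st.st_size           # apparent size: 10 MiB + 1
physical = st.st_blocks * 512  # disk space actually allocated

# On a sparse-aware filesystem, physical is far smaller than logical.
print(f"logical={logical}, physical={physical}")
```

On a filesystem without sparse support (original FAT, say), the hole would be backed by real zero-filled blocks and the two numbers would roughly match, which is part of why this stays a niche feature.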
But I can't think of anything else. It's as if all the progress in CS kind of sidestepped the file systems.