First run on a big folder (1.1Go)
=================================

The program was run for the first time against a fairly big folder: my Desktop folder. It is a composite folder that contains a lot of different things, among which:

- a lot of downloaded PDF files
- a lot of downloaded image files
- a massive (715.3Mio) package of binary updates
- a TV show episode (85Mio)
- some other fat binary executables (108Mio)
- some compressed packages (53Mio)

_I am beginning to understand the poor compression performance..._

## Resources

- CPU: 90% of 4 cores
- RAM: 0.7% of 16Go

## Time

`16:00:04` → `16:12:33` = `00:12:29`

## Size

### Original

```
1050129 kio
```

### Repo

```
     18 kio test_1/00000/files
1017586 kio test_1/00000/chunks
   5908 kio test_1/00000/recipe
1023515 kio test_1/00000
1023519 kio test_1
```

saved:

```
26610 kio
```

## Notes

Not really impressed by the savings, even though there are a lot of deduplicated files. Many chunks end up larger than the uncompressed ones (8208o instead of 8192o). This is probably because of the added Zlib header and the fact that no compression was achieved (see the first sketch at the end of these notes).

## Conclusions

1. I should probably test with a folder that contains less binary and compressed data.
2. Maybe we could store chunks uncompressed when we detect that this uses less space than the compressed version (see the second sketch at the end of these notes).

Second run on source code folder (1.0Go)
========================================

## Resources

Similar to the first run.

## Time

`17:09:01` → `17:13:51` = `00:04:50`

## Size

### Original

```
925515 kio
```

### Repo

```
  6433 kio test_2/00000/files
272052 kio test_2/00000/chunks
 17468 kio test_2/00000/recipe
295956 kio test_2/00000
295960 kio test_2
```

saved:

```
629555 kio
```
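
To illustrate the Notes of the first run: a zlib stream wraps the deflate data in a 2-byte header, a 4-byte Adler-32 checksum and per-block framing, so a chunk that deflate cannot shrink comes out slightly larger than it went in. This is a minimal sketch, assuming Go's standard `compress/zlib` as a stand-in for whatever zlib binding the program actually uses; the 8 KiB random buffer plays the role of an already-compressed chunk.

```go
package main

import (
	"bytes"
	"compress/zlib"
	"crypto/rand"
	"fmt"
	"log"
)

func main() {
	// 8 KiB of random bytes stands in for an already-compressed chunk:
	// there is no redundancy left for deflate to exploit.
	chunk := make([]byte, 8192)
	if _, err := rand.Read(chunk); err != nil {
		log.Fatal(err)
	}

	var buf bytes.Buffer
	w := zlib.NewWriter(&buf)
	if _, err := w.Write(chunk); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// The zlib header, Adler-32 checksum and deflate block framing make
	// the output a bit larger than the 8192-byte input.
	fmt.Printf("input: %d bytes, output: %d bytes\n", len(chunk), buf.Len())
}
```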
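
And a sketch of conclusion 2: compress each chunk, then keep whichever representation is smaller. Everything here is hypothetical illustration rather than the repository's actual format: the `encodeChunk` helper, the one-byte flag, and the choice of Go are all assumptions.

```go
package main

import (
	"bytes"
	"compress/zlib"
	"crypto/rand"
	"fmt"
)

// Hypothetical one-byte flag prepended to each stored chunk; the real
// on-disk format may differ.
const (
	chunkRaw  byte = 0 // payload stored as-is
	chunkZlib byte = 1 // payload is a zlib stream
)

// encodeChunk compresses the chunk and keeps whichever representation is
// smaller, so an incompressible chunk costs one flag byte instead of the
// zlib framing overhead.
func encodeChunk(data []byte) []byte {
	var buf bytes.Buffer
	w := zlib.NewWriter(&buf)
	w.Write(data) // writes to a bytes.Buffer cannot fail
	w.Close()

	if buf.Len() < len(data) {
		return append([]byte{chunkZlib}, buf.Bytes()...)
	}
	return append([]byte{chunkRaw}, data...)
}

func main() {
	// A repetitive chunk compresses well, a random one does not.
	text := bytes.Repeat([]byte("source code compresses well. "), 300)
	random := make([]byte, 8192)
	rand.Read(random)

	for _, c := range []struct {
		name string
		data []byte
	}{{"text", text}, {"random", random}} {
		enc := encodeChunk(c.data)
		fmt.Printf("%-6s in=%5d out=%5d flag=%d\n",
			c.name, len(c.data), len(enc), enc[0])
	}
}
```

The flag costs one byte per chunk, against the 16o of overhead (8208o − 8192o) currently paid on every incompressible chunk of the first run.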