From 7f65509f10704a0fbe2ad7e227eee1d0babc9c93 Mon Sep 17 00:00:00 2001
From: n-peugnet
Date: Mon, 13 Sep 2021 14:36:15 +0200
Subject: fix english typos

---
 .gitignore |  6 +++---
 README.md  |  2 +-
 TODO.md    | 25 +++++++++++++------------
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/.gitignore b/.gitignore
index 2c790a9..b80751f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,5 @@
-## Binaries
+## Project generated files
 dna-backup
-## Test generated
-test/repo
+## IDE files
+.vscode

diff --git a/README.md b/README.md
index b7808cc..654e7e0 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # DNA Backup
 
-_Deduplicated versionned backups for DNA._
+_Deduplicated versioned backups for DNA._
 
 ## Requirements
 
diff --git a/TODO.md b/TODO.md
index 43d684f..9df11d2 100644
--- a/TODO.md
+++ b/TODO.md
@@ -1,9 +1,9 @@
 priority 1
 ----------
-- [x] add deltaEncode chunks function
+- [x] add `deltaEncode` chunks function
 - [x] do not merge consecutive smaller chunks as these could be stored as
-      chunks if no similar chunk is found. Thus it will need to be of
-      `chunkSize` or less. Otherwise it could not be possibly used for
+      chunks if no similar chunk is found. Thus, it will need to be of
+      `chunkSize` or less. Otherwise, it could not be possibly used for
       deduplication.
       ```
       for each new chunk:
@@ -20,30 +20,31 @@
 - [x] load recipe
 - [x] read chunks in-order into a stream
 - [ ] read individual files
-- [ ] properly store informations to be DNA encoded
+- [ ] properly store information to be DNA encoded
 - [ ] tar source to keep files metadata ?
 - [x] store chunks compressed
   - [x] compress before storing
-  - [x] uncompress before loading
-  - [ ] store compressed chunks into tracks of trackSize (1024o)
+  - [x] decompress before loading
+  - [ ] store compressed chunks into tracks of `trackSize` (1024o)
 - [x] add chunk cache... what was it for again ??
 - [x] better tests for `(*Repo).Commit`
 
 priority 2
 ----------
-- [ ] use more the `Reader` API (which is analoguous to the `IOStream` in Java)
-- [ ] refactor matchStream as right now it is quite complex
+- [ ] use more the `Reader` API (which is analogous to the `IOStream` in Java)
+- [ ] refactor `matchStream` as right now it is quite complex
 - [x] better test for `(*Repo).matchStream`
 - [ ] compress partial chunks (`TempChunks` for now)
-- [ ] tail packing of PartialChunks (this Struct does not exist yet as it is in
+- [ ] tail packing of `PartialChunks` (this Struct does not exist yet as it is in
   fact just `TempChunks` for now)
 - [ ] option to commit without deltas to save new base chunks
-- [ ] custom binary marshall and unmarshal for chunks
+- [ ] custom binary marshal and unmarshal for chunks
 - [ ] use `loadChunkContent` in `loadChunks`
+- [ ] store hashes for faster maps rebuild
 
-réunion 7/09
+reunion 7/09
 ------------
 - [ ] save recipe consecutive chunks as extents
 - [ ] store recipe and files incrementally
 - [ ] compress recipe
-- [ ] make size comparision between recipe and chunks with some datasets
+- [ ] make size comparison between recipe and chunks with some datasets
-- 
cgit v1.2.3