priority 1
----------
- [x] add a `deltaEncode` function for chunks
    - [x] do not merge consecutive smaller chunks, as these could still be stored as plain chunks if no similar chunk is found; each must therefore be of `chunkSize` or less, otherwise it could not possibly be used for deduplication (see the matching-flow sketch after this list)
    ```
    for each new chunk:
        find similar in sketchMap
        if exists:
            delta encode
        else:
            calculate fingerprint
            store in fingerprintMap
            store in sketchMap
    ```
- [ ] read from repo
    - [x] store recipe
    - [x] load recipe
    - [ ] read chunks in-order into a stream
    - [ ] read individual files
- [ ] properly store information to be DNA-encoded
    - [ ] tar the source to keep file metadata?
    - [ ] store chunks compressed (see the compression sketch after this list)
        - [ ] compress before storing
        - [ ] decompress before loading
    - [ ] store compressed chunks into tracks of `trackSize` (1024 bytes)
- [ ] add a chunk cache that could look like this:
    ```go
    type ChunkCache map[ChunkId][]byte // Do we really want to only keep the chunk content?

    type Cache interface {
        Get(id ChunkId) Chunk
        Set(id ChunkId, chunk Chunk)
    }
    ```
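
A minimal in-memory implementation of the `Cache` interface above could look like the following; this is only a sketch, and the `mapCache` name and design are hypothetical (a bounded cache, e.g. LRU, would likely be preferable in practice):

```go
// mapCache is a minimal in-memory implementation of the Cache
// interface above; hypothetical sketch, not the repo's actual code.
type mapCache struct {
    chunks map[ChunkId]Chunk
}

func newMapCache() *mapCache {
    return &mapCache{chunks: make(map[ChunkId]Chunk)}
}

func (c *mapCache) Get(id ChunkId) Chunk {
    return c.chunks[id] // zero-value Chunk when absent
}

func (c *mapCache) Set(id ChunkId, chunk Chunk) {
    c.chunks[id] = chunk
}
```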
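
For the `deltaEncode` item at the top of this list, a minimal Go sketch of the matching flow described in its pseudocode; the hash helpers and `matchChunks` are placeholders, not the repo's actual sketching and fingerprinting:

```go
import "hash/fnv"

// Placeholder hashes: the repo's real similarity sketch and identity
// fingerprint are more involved than these stand-ins.
func sketchOf(c []byte) uint64 { return uint64(len(c)) }

func fingerprintOf(c []byte) uint64 {
    h := fnv.New64a()
    h.Write(c)
    return h.Sum64()
}

// deltaEncode stands in for the real delta encoder.
func deltaEncode(base, target []byte) { /* elided */ }

// matchChunks delta-encodes each new chunk against a similar known
// chunk when one exists, and indexes it as a new base otherwise.
func matchChunks(chunks [][]byte, sketches, fingerprints map[uint64][]byte) {
    for _, c := range chunks {
        if base, ok := sketches[sketchOf(c)]; ok {
            deltaEncode(base, c) // only the delta would be stored
        } else {
            fingerprints[fingerprintOf(c)] = c
            sketches[sketchOf(c)] = c
        }
    }
}
```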
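
For the compress-before-storing item, a sketch of the round-trip using the standard `compress/zlib` package; the actual codec choice is still open, zlib only illustrates the shape:

```go
import (
    "bytes"
    "compress/zlib"
    "io"
)

// compressChunk returns the zlib-compressed content of a chunk.
func compressChunk(data []byte) ([]byte, error) {
    var buf bytes.Buffer
    w := zlib.NewWriter(&buf)
    if _, err := w.Write(data); err != nil {
        return nil, err
    }
    if err := w.Close(); err != nil { // Close flushes pending output
        return nil, err
    }
    return buf.Bytes(), nil
}

// decompressChunk reverses compressChunk when loading.
func decompressChunk(data []byte) ([]byte, error) {
    r, err := zlib.NewReader(bytes.NewReader(data))
    if err != nil {
        return nil, err
    }
    defer r.Close()
    return io.ReadAll(r)
}
```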

priority 2
----------
- [ ] make more use of the `Reader` API (analogous to Java's `InputStream`); see the streaming sketch after this list
- [ ] refactor `matchStream`, as it is currently quite complex
- [ ] better test for `Repo.matchStream`
- [ ] tail packing of `PartialChunks` (this struct does not exist yet; for now it is in fact just `TempChunks`)
- [ ] option to commit without deltas to save new base chunks
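
A sketch of how the `Reader` API could also serve the priority 1 item "read chunks in-order into a stream", assuming a hypothetical `loadChunk` method on `Repo` (not the repo's actual API):

```go
import "io"

// chunkReader exposes the chunks of a recipe, in order, as a single
// io.Reader, so callers can stream a file without buffering it all.
func (r *Repo) chunkReader(recipe []ChunkId) io.Reader {
    pr, pw := io.Pipe()
    go func() {
        for _, id := range recipe {
            data, err := r.loadChunk(id) // hypothetical chunk loader
            if err != nil {
                pw.CloseWithError(err)
                return
            }
            if _, err := pw.Write(data); err != nil {
                return // the reader side was closed
            }
        }
        pw.Close()
    }()
    return pr
}
```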

meeting 7/09
------------
- [ ] save runs of consecutive chunks in the recipe as extents (see the sketch below)
- [ ] store recipe and files incrementally
- [ ] compress recipe
- [ ] compare the sizes of the recipe and of the chunks on some datasets
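
A possible extent representation for the first item above, assuming an integer-like `ChunkId` where "consecutive" means `id+1`; the field and function names are guesses, not the repo's actual types:

```go
// Extent stands for a run of consecutive chunk ids, so long runs in
// the recipe take constant space instead of one id per chunk.
type Extent struct {
    Start ChunkId // first chunk id of the run
    Count int     // number of consecutive chunks
}

// extentsOf folds an ordered recipe into extents.
func extentsOf(recipe []ChunkId) []Extent {
    var extents []Extent
    for _, id := range recipe {
        if n := len(extents); n > 0 && id == extents[n-1].Start+ChunkId(extents[n-1].Count) {
            extents[n-1].Count++
        } else {
            extents = append(extents, Extent{Start: id, Count: 1})
        }
    }
    return extents
}
```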