Commits


attempt to speed up deltification for big files

The current hash table performs poorly on big files because its small resize step pushes the table to its limits continuously. Instead, to get both a better-performing hash table and low memory consumption, save the blocks in an array and use the hash table as an index into it. Then use a more generous resizing scheme that preserves the good properties of the hash table. To avoid having to rebuild the table when the array is resized, store indices in the table, and to further reduce memory consumption use 32-bit indices. On amd64 this means each slot takes 4 bytes instead of 8 for a pointer or 24 for a struct got_deltify_block.

ok stsp@
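
A minimal sketch of the scheme this commit describes, with hypothetical names (got's actual structures differ): blocks live in a dense array, and each hash table slot holds a 32-bit index into that array, so a slot costs 4 bytes and growing the block array never invalidates the table.

```c
#include <sys/types.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NO_INDEX UINT32_MAX	/* marks an empty hash table slot */

struct block {
	uint32_t hash;
	off_t offset;
	off_t len;
};

struct delta_table {
	struct block *blocks;	/* dense array of blocks */
	uint32_t nblocks;
	uint32_t nalloc;	/* allocated size of blocks[] */
	uint32_t *offs;		/* hash table slots: indices into blocks[] */
	uint32_t nslots;	/* power of two */
};

/* Rebuild the slot array at a larger size; blocks[] itself is untouched. */
static int
table_resize(struct delta_table *dt, uint32_t nslots)
{
	uint32_t *offs, i, j;

	offs = malloc(nslots * sizeof(*offs));
	if (offs == NULL)
		return -1;
	memset(offs, 0xff, nslots * sizeof(*offs)); /* all NO_INDEX */

	for (j = 0; j < dt->nblocks; j++) {
		i = dt->blocks[j].hash & (nslots - 1);
		while (offs[i] != NO_INDEX)
			i = (i + 1) & (nslots - 1); /* linear probing */
		offs[i] = j;
	}
	free(dt->offs);
	dt->offs = offs;
	dt->nslots = nslots;
	return 0;
}

/* Append a block and index it; a generous resize keeps the load low. */
static int
table_insert(struct delta_table *dt, uint32_t hash, off_t offset, off_t len)
{
	uint32_t i;

	/* Keep the table at most half full so probe chains stay short. */
	if (dt->nblocks + 1 > dt->nslots / 2 &&
	    table_resize(dt, dt->nslots ? dt->nslots * 4 : 1024) == -1)
		return -1;

	if (dt->nblocks == dt->nalloc) {
		/* Growing blocks[] is safe: slots store indices, not
		 * pointers, so nothing needs to be rehashed here. */
		uint32_t nalloc = dt->nalloc ? dt->nalloc * 2 : 512;
		struct block *b = reallocarray(dt->blocks, nalloc,
		    sizeof(*b)); /* reallocarray(3) as on OpenBSD */
		if (b == NULL)
			return -1;
		dt->blocks = b;
		dt->nalloc = nalloc;
	}

	dt->blocks[dt->nblocks] = (struct block){ hash, offset, len };

	i = hash & (dt->nslots - 1);
	while (dt->offs[i] != NO_INDEX)
		i = (i + 1) & (dt->nslots - 1);
	dt->offs[i] = dt->nblocks++;
	return 0;
}
```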


use random seeds for murmurhash2

Change the three hardcoded seeds to fresh ones generated on demand via arc4random. Suggested/fixed by and ok stsp@
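
A minimal sketch of the idea, assuming the murmurhash2() prototype shown; hash_block() is a hypothetical helper. The seed is drawn from arc4random(3) the first time it is needed instead of being a compile-time constant:

```c
#include <stdint.h>
#include <stdlib.h>	/* arc4random() lives here on OpenBSD */

uint32_t murmurhash2(const unsigned char *, int, uint32_t);

static uint32_t
hash_block(const unsigned char *buf, int len)
{
	static uint32_t seed;
	static int seeded;

	if (!seeded) {
		seed = arc4random();	/* fresh seed, once per process */
		seeded = 1;
	}
	return murmurhash2(buf, len, seed);
}
```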


consistently match the size of hash variables to that returned by murmurhash; ok millert stsp
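
For illustration, a hypothetical comparison helper where both hash variables are declared uint32_t, matching murmurhash2()'s return type, rather than a wider type that could disagree with values stored elsewhere as 32 bits:

```c
#include <stdint.h>

uint32_t murmurhash2(const unsigned char *, int, uint32_t);

static int
same_hash(const unsigned char *a, const unsigned char *b, int len,
    uint32_t seed)
{
	/* uint32_t throughout: exactly the type murmurhash2() returns */
	uint32_t ha = murmurhash2(a, len, seed);
	uint32_t hb = murmurhash2(b, len, seed);

	return ha == hb;
}
```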


reduce minimum deltification chunk size to 32; suggested by ori


map raw object files into memory while packing if possible
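
A minimal sketch of the approach, with an illustrative helper name: map the raw object file read-only with mmap(2), and return NULL so the caller can fall back to the stdio path when mapping is not possible.

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t *
map_raw_object(int fd, size_t *lenp)
{
	struct stat sb;
	void *p;

	if (fstat(fd, &sb) == -1 || sb.st_size == 0)
		return NULL;	/* caller falls back to reading the file */

	p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return NULL;	/* fall back rather than fail hard */

	*lenp = (size_t)sb.st_size;
	return p;
}
```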


Allow for skipping the base object header in got_deltify().
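
A hypothetical sketch of what skipping the base object header enables; the prototype below is illustrative, not got_deltify()'s exact signature. The caller passes the offset at which the base object's data begins, so the header is excluded from deltification:

```c
#include <stdio.h>
#include <sys/types.h>

/*
 * Illustrative prototype only. basefile_offset0 is the position where
 * the base object's data begins, i.e. just past the object header.
 */
int deltify(FILE *f, off_t fileoffset, off_t filesize,
    FILE *basefile, off_t basefile_offset0, off_t basefile_size);

static int
deltify_skip_base_header(FILE *f, off_t filesize,
    FILE *basefile, off_t base_hdrlen, off_t base_datasize)
{
	/* Start reading the base object at base_hdrlen so its header
	 * (e.g. "<type> <size>\0" in a loose object) is not deltified. */
	return deltify(f, 0, filesize, basefile, base_hdrlen,
	    base_datasize);
}
```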


substantial rewrite of deltify.c; operate on plain files only