pstore: Adjust buffer size for compression for smaller registered buffers
When backends (e.g. efivars) register smaller buffers, big_oops_buf is too big for them: the smaller the captured text, the fewer repeated occurrences it contains, so it compresses less well. What happens is that pstore takes too big a bite of the dmesg log and then finds it cannot compress it enough to fit the backend block size. This patch adjusts the buffer size based on the registered buffer size. The cmpr values were arrived at by experimenting with plain text for buffers of 1k - 4k (the smaller the buffer, the fewer the repeated occurrences) and with a sample crash log for buffers of 4k - 10k.

Reported-by: Seiji Aguchi <seiji.aguchi@hds.com>
Tested-by: Seiji Aguchi <seiji.aguchi@hds.com>
Signed-off-by: Aruna Balakrishnaiah <aruna@linux.vnet.ibm.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
parent e831cbfc1a
commit 7de8fe2fa8
@@ -195,8 +195,29 @@ static int pstore_decompress(void *in, void *out, size_t inlen, size_t outlen)
 static void allocate_buf_for_compression(void)
 {
 	size_t size;
+	size_t cmpr;
 
-	big_oops_buf_sz = (psinfo->bufsize * 100) / 45;
+	switch (psinfo->bufsize) {
+	/* buffer range for efivars */
+	case 1000 ... 2000:
+		cmpr = 56;
+		break;
+	case 2001 ... 3000:
+		cmpr = 54;
+		break;
+	case 3001 ... 3999:
+		cmpr = 52;
+		break;
+	/* buffer range for nvram, erst */
+	case 4000 ... 10000:
+		cmpr = 45;
+		break;
+	default:
+		cmpr = 60;
+		break;
+	}
+
+	big_oops_buf_sz = (psinfo->bufsize * 100) / cmpr;
 	big_oops_buf = kmalloc(big_oops_buf_sz, GFP_KERNEL);
 	if (big_oops_buf) {
 		size = max(zlib_deflate_workspacesize(WINDOW_BITS, MEM_LEVEL),
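For illustration only (not part of the patch), here is a minimal user-space sketch of the same sizing arithmetic, assuming the cmpr table shown in the diff; oops_buf_size() is a hypothetical helper name used just for this example:

/*
 * Minimal user-space sketch (not kernel code): mirrors the cmpr table
 * above to show how much dmesg text pstore would grab for a given
 * backend record size. oops_buf_size() is a hypothetical helper name.
 */
#include <stdio.h>
#include <stddef.h>

static size_t oops_buf_size(size_t bufsize)
{
	size_t cmpr;

	if (bufsize >= 1000 && bufsize <= 2000)
		cmpr = 56;	/* efivars-sized records: fewer repeats, worse ratio */
	else if (bufsize >= 2001 && bufsize <= 3000)
		cmpr = 54;
	else if (bufsize >= 3001 && bufsize <= 3999)
		cmpr = 52;
	else if (bufsize >= 4000 && bufsize <= 10000)
		cmpr = 45;	/* nvram/erst-sized records */
	else
		cmpr = 60;

	return (bufsize * 100) / cmpr;
}

int main(void)
{
	/*
	 * A 1024-byte efivars record now gets ~1828 bytes of dmesg
	 * (100/56) instead of ~2275 bytes under the old fixed 100/45,
	 * so the compressed result is far more likely to fit.
	 */
	printf("bufsize 1024 -> big_oops_buf_sz %zu\n", oops_buf_size(1024));
	printf("bufsize 8192 -> big_oops_buf_sz %zu\n", oops_buf_size(8192));
	return 0;
}

Note that the kernel code uses GCC's case-range extension (case 1000 ... 2000:); the sketch uses plain if/else chains so it builds with any C compiler.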