cpuidle: menu: Avoid overflows when computing variance
The variance computation in get_typical_interval() may overflow if the square of the value of diff exceeds the maximum of the int64_t data type, which basically is the case when diff is of the order of UINT_MAX.

However, data points so far in the future don't matter for idle state selection anyway, so change the initial threshold value in get_typical_interval() to INT_MAX, which will cause more "outlying" data points to be discarded without affecting the selection result.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
commit 814b8797f9
parent ef8006846a
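To make the magnitudes concrete, here is a small standalone C sketch (an editorial illustration, not part of the patch or the kernel source) of why a diff on the order of UINT_MAX cannot be squared safely in int64_t, while a diff bounded by INT_MAX can. The products are computed in double purely to display the sizes without triggering the overflow itself.

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A sample admitted by the old threshold can be as large as UINT_MAX. */
	double old_worst = (double)UINT_MAX * (double)UINT_MAX;	/* ~1.8e19 */
	/* With thresh = INT_MAX, the largest possible diff is INT_MAX. */
	double new_worst = (double)INT_MAX * (double)INT_MAX;	/* ~4.6e18 */

	printf("INT64_MAX                      = %.3e\n", (double)INT64_MAX);	/* ~9.2e18 */
	printf("worst diff^2 with thresh=UINT_MAX = %.3e (exceeds INT64_MAX)\n", old_worst);
	printf("worst diff^2 with thresh=INT_MAX  = %.3e (fits in int64_t)\n", new_worst);
	return 0;
}

The point is simply that INT_MAX squared (about 4.6e18) stays below INT64_MAX (about 9.2e18), whereas UINT_MAX squared (about 1.8e19) does not, so capping the outlier threshold keeps every squared difference representable.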
drivers/cpuidle/governors/menu.c

@@ -186,7 +186,7 @@ static unsigned int get_typical_interval(struct menu_device *data,
 	unsigned int min, max, thresh, avg;
 	uint64_t sum, variance;
 
-	thresh = UINT_MAX; /* Discard outliers above this value */
+	thresh = INT_MAX; /* Discard outliers above this value */
 
 again:
 
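For context, the following is a simplified userspace paraphrase of how get_typical_interval() uses thresh (structure and names are approximate; min/max tracking and the goto-again outlier re-run are omitted, so this is an editorial sketch rather than the kernel code). Because samples above thresh are discarded before the variance pass, thresh is effectively an upper bound on the magnitude of diff, which is what makes the INT_MAX cap sufficient.

#include <stdint.h>

#define INTERVALS 8	/* the menu governor tracks 8 recent idle intervals */

static uint64_t variance_of_recent_intervals(const unsigned int intervals[INTERVALS],
					     unsigned int thresh)
{
	uint64_t sum = 0, variance = 0;
	unsigned int divisor = 0, avg;
	int i;

	/* First pass: average of the samples that survive the threshold. */
	for (i = 0; i < INTERVALS; i++) {
		if (intervals[i] <= thresh) {	/* discard outliers */
			sum += intervals[i];
			divisor++;
		}
	}
	if (!divisor)
		return 0;
	avg = sum / divisor;

	/* Second pass: sum of squared deviations over the same samples. */
	for (i = 0; i < INTERVALS; i++) {
		if (intervals[i] <= thresh) {
			int64_t diff = (int64_t)intervals[i] - avg;

			/*
			 * |diff| is bounded by thresh, so with thresh = INT_MAX
			 * each diff * diff stays within int64_t.
			 */
			variance += diff * diff;
		}
	}
	return variance / divisor;
}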