More power management updates for 5.8-rc1

  - Add support for interconnect bandwidth to the OPP core (Georgi
    Djakov, Saravana Kannan, Sibi Sankar, Viresh Kumar).
 
  - Add support for regulator enable/disable to the OPP core (Kamil
    Konieczny).
 
  - Add boost support to the CPPC cpufreq driver (Xiongfeng Wang).
 
  - Make the tegra186 cpufreq driver set the
    CPUFREQ_NEED_INITIAL_FREQ_CHECK flag (Mian Yousaf Kaukab).
 
  - Prevent the ACPI power management from using power resources
    with devices where the list of power resources for power state
    D0 (full power) is missing (Rafael Wysocki).
 
  - Annotate a hibernation-related function with __init (Christophe
    JAILLET).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl7g/zESHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxZzgP+wTBW/WVLVkrlk2tQhcbbj+y9TNNOfU1
 FZ9C56bR+5VJhrrxTxVHeQP7PDCNxqVM57M8Bcnl3I0LFi+OHNAqkN/xW323N7ZA
 8OGkFgeqSxgG21691rTEwVnwwhdvQsNw47Fqjbu10PiNFYm6W8YNI5JMQRxfTVHb
 H8Nt7xcJ5i7wnMRnAyrotnTUYmS3nZ7IwpHFEoM2SwWCqlYr0h9rDqKz3MvbiE59
 m0G+4tFUv8egyshzwMD78PeFG+7iZP9s4uovsKujj4UGskmAVn9BGP0vI5AJiS4b
 9KdrDNdX5NAEBFn5eDVzZSMzYhRI3pebd306oWUS+A4/rDA+BtC4ECkaWKE6IlX6
 pmJuY5w8mZU0geH+W2xJQp6t/f60XJymEYKu88opm69ujgJL8X2PglWNq8tal6iX
 BfaPNMla+0mNt9L+GzIb6v5f/nbNBQ8qe6vXlQndqcxxerIBPktLTP+j18FzN9N4
 4XOyFJYoauvfMidK8+fjGlSGi54GnlseopNcVTD6IRjRkXhYYJE62oTfgoFe1DIs
 7pFyEtnTgA0Ma9h1CMs2/UwaL1Xule4HMZzdeAnkelAAivdJI181XU0k3Y6vuNwA
 4IXkpMph8utvWX/Dp+OH0a5YGvpEAuihwe8a2Ld/VtOGVINh4hVQucU/yrGN6EIJ
 Od7S+Zua3bR6
 =p5eu
 -----END PGP SIGNATURE-----

Merge tag 'pm-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
 "These are operating performance points (OPP) framework updates mostly,
  including support for interconnect bandwidth in the OPP core, plus a
  few cpufreq changes, including boost support in the CPPC cpufreq
  driver, an ACPI device power management fix and a hibernation code
  cleanup.

  Specifics:

   - Add support for interconnect bandwidth to the OPP core (Georgi
     Djakov, Saravana Kannan, Sibi Sankar, Viresh Kumar).

   - Add support for regulator enable/disable to the OPP core (Kamil
     Konieczny).

   - Add boost support to the CPPC cpufreq driver (Xiongfeng Wang).

   - Make the tegra186 cpufreq driver set the
     CPUFREQ_NEED_INITIAL_FREQ_CHECK flag (Mian Yousaf Kaukab).

   - Prevent the ACPI power management from using power resources with
     devices where the list of power resources for power state D0 (full
     power) is missing (Rafael Wysocki).

   - Annotate a hibernation-related function with __init (Christophe
     JAILLET)"

* tag 'pm-5.8-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI: PM: Avoid using power resources if there are none for D0
  cpufreq: CPPC: add SW BOOST support
  cpufreq: change '.set_boost' to act on one policy
  PM: hibernate: Add __init annotation to swsusp_header_init()
  opp: Don't parse icc paths unnecessarily
  opp: Remove bandwidth votes when target_freq is zero
  opp: core: add regulators enable and disable
  opp: Reorder the code for !target_freq case
  opp: Expose bandwidth information via debugfs
  cpufreq: dt: Add support for interconnect bandwidth scaling
  opp: Update the bandwidth on OPP frequency changes
  opp: Add sanity checks in _read_opp_key()
  opp: Add support for parsing interconnect bandwidth
  cpufreq: tegra186: add CPUFREQ_NEED_INITIAL_FREQ_CHECK flag
  OPP: Add helpers for reading the binding properties
  dt-bindings: opp: Introduce opp-peak-kBps and opp-avg-kBps bindings
Linus Torvalds 2020-06-10 14:04:39 -07:00
commit 0c67f6b297
18 changed files with 511 additions and 79 deletions


@ -83,9 +83,14 @@ properties.
Required properties:
- opp-hz: Frequency in Hz, expressed as a 64-bit big-endian integer. This is a
required property for all device nodes but devices like power domains. The
power domain nodes must have another (implementation dependent) property which
uniquely identifies the OPP nodes.
required property for all device nodes, unless another "required" property to
uniquely identify the OPP nodes exists. Devices like power domains must have
another (implementation dependent) property.
- opp-peak-kBps: Peak bandwidth in kilobytes per second, expressed as an array
of 32-bit big-endian integers. Each element of the array represents the
peak bandwidth value of each interconnect path. The number of elements should
match the number of interconnect paths.
Optional properties:
- opp-microvolt: voltage in micro Volts.
@ -132,6 +137,12 @@ Optional properties:
- opp-level: A value representing the performance level of the device,
expressed as a 32-bit integer.
- opp-avg-kBps: Average bandwidth in kilobytes per second, expressed as an array
of 32-bit big-endian integers. Each element of the array represents the
average bandwidth value of each interconnect path. The number of elements
should match the number of interconnect paths. This property is only
meaningful in OPP tables where opp-peak-kBps is present.
- clock-latency-ns: Specifies the maximum possible transition latency (in
nanoseconds) for switching to this OPP from any other OPP.
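
For illustration, a minimal sketch of an OPP entry using the new bandwidth properties, assuming a hypothetical consumer device with two interconnect paths (node names and values below are illustrative, not taken from the patch):

    /* hypothetical device with two interconnect paths */
    opp_table: opp-table {
            compatible = "operating-points-v2";

            opp-800000000 {
                    opp-hz = /bits/ 64 <800000000>;
                    /* one element per interconnect path, in path order */
                    opp-peak-kBps = <3200000 1600000>;
                    opp-avg-kBps = <1600000 0>;
            };
    };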


@ -41,3 +41,7 @@ Temperature
Pressure
----------------------------------------
-kpascal : kilopascal
Throughput
----------------------------------------
-kBps : kilobytes per second


@ -186,7 +186,7 @@ int acpi_device_set_power(struct acpi_device *device, int state)
* possibly drop references to the power resources in use.
*/
state = ACPI_STATE_D3_HOT;
/* If _PR3 is not available, use D3hot as the target state. */
/* If D3cold is not supported, use D3hot as the target state. */
if (!device->power.states[ACPI_STATE_D3_COLD].flags.valid)
target_state = state;
} else if (!device->power.states[state].flags.valid) {


@ -918,12 +918,9 @@ static void acpi_bus_init_power_state(struct acpi_device *device, int state)
if (buffer.length && package
&& package->type == ACPI_TYPE_PACKAGE
&& package->package.count) {
int err = acpi_extract_power_resources(package, 0,
&ps->resources);
if (!err)
device->power.flags.power_resources = 1;
}
&& package->package.count)
acpi_extract_power_resources(package, 0, &ps->resources);
ACPI_FREE(buffer.pointer);
}
@ -970,14 +967,27 @@ static void acpi_bus_get_power_flags(struct acpi_device *device)
acpi_bus_init_power_state(device, i);
INIT_LIST_HEAD(&device->power.states[ACPI_STATE_D3_COLD].resources);
if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
/* Set defaults for D0 and D3hot states (always valid) */
/* Set the defaults for D0 and D3hot (always supported). */
device->power.states[ACPI_STATE_D0].flags.valid = 1;
device->power.states[ACPI_STATE_D0].power = 100;
device->power.states[ACPI_STATE_D3_HOT].flags.valid = 1;
/*
* Use power resources only if the D0 list of them is populated, because
* some platforms may provide _PR3 only to indicate D3cold support and
* in those cases the power resources list returned by it may be bogus.
*/
if (!list_empty(&device->power.states[ACPI_STATE_D0].resources)) {
device->power.flags.power_resources = 1;
/*
* D3cold is supported if the D3hot list of power resources is
* not empty.
*/
if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
}
if (acpi_bus_init_power(device))
device->flags.power_manageable = 0;
}


@ -126,12 +126,12 @@ static void boost_set_msr_each(void *p_en)
boost_set_msr(enable);
}
static int set_boost(int val)
static int set_boost(struct cpufreq_policy *policy, int val)
{
get_online_cpus();
on_each_cpu(boost_set_msr_each, (void *)(long)val, 1);
put_online_cpus();
pr_debug("Core Boosting %sabled.\n", val ? "en" : "dis");
on_each_cpu_mask(policy->cpus, boost_set_msr_each,
(void *)(long)val, 1);
pr_debug("CPU %*pbl: Core Boosting %sabled.\n",
cpumask_pr_args(policy->cpus), val ? "en" : "dis");
return 0;
}
@ -162,7 +162,9 @@ static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
if (ret || val > 1)
return -EINVAL;
set_boost(val);
get_online_cpus();
set_boost(policy, val);
put_online_cpus();
return count;
}


@ -37,6 +37,7 @@
* requested etc.
*/
static struct cppc_cpudata **all_cpu_data;
static bool boost_supported;
struct cppc_workaround_oem_info {
char oem_id[ACPI_OEM_ID_SIZE + 1];
@ -310,7 +311,7 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
* Section 8.4.7.1.1.5 of ACPI 6.1 spec)
*/
policy->min = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_nonlinear_perf);
policy->max = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.highest_perf);
policy->max = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
/*
* Set cpuinfo.min_freq to Lowest to make the full range of performance
@ -318,7 +319,7 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
* nonlinear perf
*/
policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_perf);
policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.highest_perf);
policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num);
policy->shared_type = cpu->shared_type;
@ -343,6 +344,13 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
cpu->cur_policy = policy;
/*
* If 'highest_perf' is greater than 'nominal_perf', we assume CPU Boost
* is supported.
*/
if (cpu->perf_caps.highest_perf > cpu->perf_caps.nominal_perf)
boost_supported = true;
/* Set policy->cur to max now. The governors will adjust later. */
policy->cur = cppc_cpufreq_perf_to_khz(cpu,
cpu->perf_caps.highest_perf);
@ -410,6 +418,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpunum)
return cppc_get_rate_from_fbctrs(cpu, fb_ctrs_t0, fb_ctrs_t1);
}
static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
{
struct cppc_cpudata *cpudata;
int ret;
if (!boost_supported) {
pr_err("BOOST not supported by CPU or firmware\n");
return -EINVAL;
}
cpudata = all_cpu_data[policy->cpu];
if (state)
policy->max = cppc_cpufreq_perf_to_khz(cpudata,
cpudata->perf_caps.highest_perf);
else
policy->max = cppc_cpufreq_perf_to_khz(cpudata,
cpudata->perf_caps.nominal_perf);
policy->cpuinfo.max_freq = policy->max;
ret = freq_qos_update_request(policy->max_freq_req, policy->max);
if (ret < 0)
return ret;
return 0;
}
static struct cpufreq_driver cppc_cpufreq_driver = {
.flags = CPUFREQ_CONST_LOOPS,
.verify = cppc_verify_policy,
@ -417,6 +451,7 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
.get = cppc_cpufreq_get_rate,
.init = cppc_cpufreq_cpu_init,
.stop_cpu = cppc_cpufreq_stop_cpu,
.set_boost = cppc_cpufreq_set_boost,
.name = "cppc_cpufreq",
};


@ -121,6 +121,10 @@ static int resources_available(void)
clk_put(cpu_clk);
ret = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
if (ret)
return ret;
name = find_supply_name(cpu_dev);
/* Platform doesn't require regulator */
if (!name)


@ -2532,34 +2532,29 @@ EXPORT_SYMBOL_GPL(cpufreq_update_limits);
/*********************************************************************
* BOOST *
*********************************************************************/
static int cpufreq_boost_set_sw(int state)
static int cpufreq_boost_set_sw(struct cpufreq_policy *policy, int state)
{
struct cpufreq_policy *policy;
int ret;
for_each_active_policy(policy) {
int ret;
if (!policy->freq_table)
return -ENXIO;
if (!policy->freq_table)
return -ENXIO;
ret = cpufreq_frequency_table_cpuinfo(policy,
policy->freq_table);
if (ret) {
pr_err("%s: Policy frequency update failed\n",
__func__);
return ret;
}
ret = freq_qos_update_request(policy->max_freq_req, policy->max);
if (ret < 0)
return ret;
ret = cpufreq_frequency_table_cpuinfo(policy, policy->freq_table);
if (ret) {
pr_err("%s: Policy frequency update failed\n", __func__);
return ret;
}
ret = freq_qos_update_request(policy->max_freq_req, policy->max);
if (ret < 0)
return ret;
return 0;
}
int cpufreq_boost_trigger_state(int state)
{
struct cpufreq_policy *policy;
unsigned long flags;
int ret = 0;
@ -2570,15 +2565,25 @@ int cpufreq_boost_trigger_state(int state)
cpufreq_driver->boost_enabled = state;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
ret = cpufreq_driver->set_boost(state);
if (ret) {
write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver->boost_enabled = !state;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
pr_err("%s: Cannot %s BOOST\n",
__func__, state ? "enable" : "disable");
get_online_cpus();
for_each_active_policy(policy) {
ret = cpufreq_driver->set_boost(policy, state);
if (ret)
goto err_reset_state;
}
put_online_cpus();
return 0;
err_reset_state:
put_online_cpus();
write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver->boost_enabled = !state;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
pr_err("%s: Cannot %s BOOST\n",
__func__, state ? "enable" : "disable");
return ret;
}


@ -93,7 +93,8 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
static struct cpufreq_driver tegra186_cpufreq_driver = {
.name = "tegra186",
.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = tegra186_cpufreq_set_target,
.init = tegra186_cpufreq_init,


@ -543,6 +543,24 @@ void icc_set_tag(struct icc_path *path, u32 tag)
}
EXPORT_SYMBOL_GPL(icc_set_tag);
/**
* icc_get_name() - Get name of the icc path
* @path: reference to the path returned by icc_get()
*
* This function is used by an interconnect consumer to get the name of the icc
* path.
*
* Returns a valid pointer on success, or NULL otherwise.
*/
const char *icc_get_name(struct icc_path *path)
{
if (!path)
return NULL;
return path->name;
}
EXPORT_SYMBOL_GPL(icc_get_name);
/**
* icc_set_bw() - set bandwidth constraints on an interconnect path
* @path: reference to the path returned by icc_get()
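
For illustration, a hedged consumer-side sketch of the new helper (the function and variable names are hypothetical): it simply labels a debug message with the path name, which is also how the OPP debugfs code further below uses it.

    #include <linux/device.h>
    #include <linux/interconnect.h>

    /* "path" is assumed to come from of_icc_get()/of_icc_get_by_index() */
    static void example_log_icc_path(struct device *dev, struct icc_path *path)
    {
            const char *name = icc_get_name(path); /* NULL if path is NULL */

            dev_dbg(dev, "scaling bandwidth on %s\n", name ? name : "unknown");
    }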


@ -664,7 +664,7 @@ static inline int _generic_set_opp_clk_only(struct device *dev, struct clk *clk,
return ret;
}
static int _generic_set_opp_regulator(const struct opp_table *opp_table,
static int _generic_set_opp_regulator(struct opp_table *opp_table,
struct device *dev,
unsigned long old_freq,
unsigned long freq,
@ -699,6 +699,18 @@ static int _generic_set_opp_regulator(const struct opp_table *opp_table,
goto restore_freq;
}
/*
* Enable the regulator after setting its voltages, otherwise it breaks
* some boot-enabled regulators.
*/
if (unlikely(!opp_table->regulator_enabled)) {
ret = regulator_enable(reg);
if (ret < 0)
dev_warn(dev, "Failed to enable regulator: %d", ret);
else
opp_table->regulator_enabled = true;
}
return 0;
restore_freq:
@ -713,6 +725,34 @@ static int _generic_set_opp_regulator(const struct opp_table *opp_table,
return ret;
}
static int _set_opp_bw(const struct opp_table *opp_table,
struct dev_pm_opp *opp, struct device *dev, bool remove)
{
u32 avg, peak;
int i, ret;
if (!opp_table->paths)
return 0;
for (i = 0; i < opp_table->path_count; i++) {
if (remove) {
avg = 0;
peak = 0;
} else {
avg = opp->bandwidth[i].avg;
peak = opp->bandwidth[i].peak;
}
ret = icc_set_bw(opp_table->paths[i], avg, peak);
if (ret) {
dev_err(dev, "Failed to %s bandwidth[%d]: %d\n",
remove ? "remove" : "set", i, ret);
return ret;
}
}
return 0;
}
static int _set_opp_custom(const struct opp_table *opp_table,
struct device *dev, unsigned long old_freq,
unsigned long freq,
@ -817,15 +857,31 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
}
if (unlikely(!target_freq)) {
if (opp_table->required_opp_tables) {
ret = _set_required_opps(dev, opp_table, NULL);
} else if (!_get_opp_count(opp_table)) {
/*
* Some drivers need to support cases where some platforms may
* have OPP table for the device, while others don't and
* opp_set_rate() just needs to behave like clk_set_rate().
*/
if (!_get_opp_count(opp_table))
return 0;
} else {
if (!opp_table->required_opp_tables && !opp_table->regulators &&
!opp_table->paths) {
dev_err(dev, "target frequency can't be 0\n");
ret = -EINVAL;
goto put_opp_table;
}
ret = _set_opp_bw(opp_table, NULL, dev, true);
if (ret)
return ret;
if (opp_table->regulator_enabled) {
regulator_disable(opp_table->regulators[0]);
opp_table->regulator_enabled = false;
}
ret = _set_required_opps(dev, opp_table, NULL);
goto put_opp_table;
}
@ -909,6 +965,9 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
dev_err(dev, "Failed to set required opps: %d\n", ret);
}
if (!ret)
ret = _set_opp_bw(opp_table, opp, dev, false);
put_opp:
dev_pm_opp_put(opp);
put_old_opp:
@ -999,6 +1058,12 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
ret);
}
/* Find interconnect path(s) for the device */
ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
if (ret)
dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
__func__, ret);
BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
INIT_LIST_HEAD(&opp_table->opp_list);
kref_init(&opp_table->kref);
@ -1057,6 +1122,7 @@ static void _opp_table_kref_release(struct kref *kref)
{
struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
struct opp_device *opp_dev, *temp;
int i;
_of_clear_opp_table(opp_table);
@ -1064,6 +1130,12 @@ static void _opp_table_kref_release(struct kref *kref)
if (!IS_ERR(opp_table->clk))
clk_put(opp_table->clk);
if (opp_table->paths) {
for (i = 0; i < opp_table->path_count; i++)
icc_put(opp_table->paths[i]);
kfree(opp_table->paths);
}
WARN_ON(!list_empty(&opp_table->opp_list));
list_for_each_entry_safe(opp_dev, temp, &opp_table->dev_list, node) {
@ -1243,19 +1315,23 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic);
struct dev_pm_opp *_opp_allocate(struct opp_table *table)
{
struct dev_pm_opp *opp;
int count, supply_size;
int supply_count, supply_size, icc_size;
/* Allocate space for at least one supply */
count = table->regulator_count > 0 ? table->regulator_count : 1;
supply_size = sizeof(*opp->supplies) * count;
supply_count = table->regulator_count > 0 ? table->regulator_count : 1;
supply_size = sizeof(*opp->supplies) * supply_count;
icc_size = sizeof(*opp->bandwidth) * table->path_count;
/* allocate new OPP node and supplies structures */
opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);
opp = kzalloc(sizeof(*opp) + supply_size + icc_size, GFP_KERNEL);
if (!opp)
return NULL;
/* Put the supplies at the end of the OPP structure as an empty array */
opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
if (icc_size)
opp->bandwidth = (struct dev_pm_opp_icc_bw *)(opp->supplies + supply_count);
INIT_LIST_HEAD(&opp->node);
return opp;
@ -1286,11 +1362,24 @@ static bool _opp_supported_by_regulators(struct dev_pm_opp *opp,
return true;
}
int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2)
{
if (opp1->rate != opp2->rate)
return opp1->rate < opp2->rate ? -1 : 1;
if (opp1->bandwidth && opp2->bandwidth &&
opp1->bandwidth[0].peak != opp2->bandwidth[0].peak)
return opp1->bandwidth[0].peak < opp2->bandwidth[0].peak ? -1 : 1;
if (opp1->level != opp2->level)
return opp1->level < opp2->level ? -1 : 1;
return 0;
}
static int _opp_is_duplicate(struct device *dev, struct dev_pm_opp *new_opp,
struct opp_table *opp_table,
struct list_head **head)
{
struct dev_pm_opp *opp;
int opp_cmp;
/*
* Insert new OPP in order of increasing frequency and discard if
@ -1301,12 +1390,13 @@ static int _opp_is_duplicate(struct device *dev, struct dev_pm_opp *new_opp,
* loop.
*/
list_for_each_entry(opp, &opp_table->opp_list, node) {
if (new_opp->rate > opp->rate) {
opp_cmp = _opp_compare_key(new_opp, opp);
if (opp_cmp > 0) {
*head = &opp->node;
continue;
}
if (new_opp->rate < opp->rate)
if (opp_cmp < 0)
return 0;
/* Duplicate OPPs */
@ -1670,6 +1760,13 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
if (opp_table->regulator_enabled) {
for (i = opp_table->regulator_count - 1; i >= 0; i--)
regulator_disable(opp_table->regulators[i]);
opp_table->regulator_enabled = false;
}
for (i = opp_table->regulator_count - 1; i >= 0; i--)
regulator_put(opp_table->regulators[i]);
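
For illustration, a minimal consumer sketch (function names are hypothetical): with these changes dev_pm_opp_set_rate() also votes interconnect bandwidth and enables the regulator for the selected OPP, while a zero target frequency drops the bandwidth votes and disables the regulator that the OPP core had enabled.

    #include <linux/pm_opp.h>

    static int example_set_perf(struct device *dev, unsigned long target_hz)
    {
            /* also programs regulator state and interconnect bandwidth */
            return dev_pm_opp_set_rate(dev, target_hz);
    }

    static void example_drop_perf(struct device *dev)
    {
            /* drops bandwidth votes and disables the OPP regulator */
            dev_pm_opp_set_rate(dev, 0);
    }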


@ -32,6 +32,47 @@ void opp_debug_remove_one(struct dev_pm_opp *opp)
debugfs_remove_recursive(opp->dentry);
}
static ssize_t bw_name_read(struct file *fp, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct icc_path *path = fp->private_data;
char buf[64];
int i;
i = scnprintf(buf, sizeof(buf), "%.62s\n", icc_get_name(path));
return simple_read_from_buffer(userbuf, count, ppos, buf, i);
}
static const struct file_operations bw_name_fops = {
.open = simple_open,
.read = bw_name_read,
.llseek = default_llseek,
};
static void opp_debug_create_bw(struct dev_pm_opp *opp,
struct opp_table *opp_table,
struct dentry *pdentry)
{
struct dentry *d;
char name[11];
int i;
for (i = 0; i < opp_table->path_count; i++) {
snprintf(name, sizeof(name), "icc-path-%.1d", i);
/* Create per-path directory */
d = debugfs_create_dir(name, pdentry);
debugfs_create_file("name", S_IRUGO, d, opp_table->paths[i],
&bw_name_fops);
debugfs_create_u32("peak_bw", S_IRUGO, d,
&opp->bandwidth[i].peak);
debugfs_create_u32("avg_bw", S_IRUGO, d,
&opp->bandwidth[i].avg);
}
}
static void opp_debug_create_supplies(struct dev_pm_opp *opp,
struct opp_table *opp_table,
struct dentry *pdentry)
@ -94,6 +135,7 @@ void opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
&opp->clock_latency_ns);
opp_debug_create_supplies(opp, opp_table, d);
opp_debug_create_bw(opp, opp_table, d);
opp->dentry = d;
}


@ -332,6 +332,105 @@ static int _of_opp_alloc_required_opps(struct opp_table *opp_table,
return ret;
}
static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table)
{
struct device_node *np, *opp_np;
struct property *prop;
if (!opp_table) {
np = of_node_get(dev->of_node);
if (!np)
return -ENODEV;
opp_np = _opp_of_get_opp_desc_node(np, 0);
of_node_put(np);
} else {
opp_np = of_node_get(opp_table->np);
}
/* Lets not fail in case we are parsing opp-v1 bindings */
if (!opp_np)
return 0;
/* Checking only first OPP is sufficient */
np = of_get_next_available_child(opp_np, NULL);
if (!np) {
dev_err(dev, "OPP table empty\n");
return -EINVAL;
}
of_node_put(opp_np);
prop = of_find_property(np, "opp-peak-kBps", NULL);
of_node_put(np);
if (!prop || !prop->length)
return 0;
return 1;
}
int dev_pm_opp_of_find_icc_paths(struct device *dev,
struct opp_table *opp_table)
{
struct device_node *np;
int ret, i, count, num_paths;
struct icc_path **paths;
ret = _bandwidth_supported(dev, opp_table);
if (ret <= 0)
return ret;
ret = 0;
np = of_node_get(dev->of_node);
if (!np)
return 0;
count = of_count_phandle_with_args(np, "interconnects",
"#interconnect-cells");
of_node_put(np);
if (count < 0)
return 0;
/* two phandles when #interconnect-cells = <1> */
if (count % 2) {
dev_err(dev, "%s: Invalid interconnects values\n", __func__);
return -EINVAL;
}
num_paths = count / 2;
paths = kcalloc(num_paths, sizeof(*paths), GFP_KERNEL);
if (!paths)
return -ENOMEM;
for (i = 0; i < num_paths; i++) {
paths[i] = of_icc_get_by_index(dev, i);
if (IS_ERR(paths[i])) {
ret = PTR_ERR(paths[i]);
if (ret != -EPROBE_DEFER) {
dev_err(dev, "%s: Unable to get path%d: %d\n",
__func__, i, ret);
}
goto err;
}
}
if (opp_table) {
opp_table->paths = paths;
opp_table->path_count = num_paths;
return 0;
}
err:
while (i--)
icc_put(paths[i]);
kfree(paths);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_find_icc_paths);
static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
struct device_node *np)
{
@ -521,6 +620,90 @@ void dev_pm_opp_of_remove_table(struct device *dev)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
static int _read_bw(struct dev_pm_opp *new_opp, struct opp_table *table,
struct device_node *np, bool peak)
{
const char *name = peak ? "opp-peak-kBps" : "opp-avg-kBps";
struct property *prop;
int i, count, ret;
u32 *bw;
prop = of_find_property(np, name, NULL);
if (!prop)
return -ENODEV;
count = prop->length / sizeof(u32);
if (table->path_count != count) {
pr_err("%s: Mismatch between %s and paths (%d %d)\n",
__func__, name, count, table->path_count);
return -EINVAL;
}
bw = kmalloc_array(count, sizeof(*bw), GFP_KERNEL);
if (!bw)
return -ENOMEM;
ret = of_property_read_u32_array(np, name, bw, count);
if (ret) {
pr_err("%s: Error parsing %s: %d\n", __func__, name, ret);
goto out;
}
for (i = 0; i < count; i++) {
if (peak)
new_opp->bandwidth[i].peak = kBps_to_icc(bw[i]);
else
new_opp->bandwidth[i].avg = kBps_to_icc(bw[i]);
}
out:
kfree(bw);
return ret;
}
static int _read_opp_key(struct dev_pm_opp *new_opp, struct opp_table *table,
struct device_node *np, bool *rate_not_available)
{
bool found = false;
u64 rate;
int ret;
ret = of_property_read_u64(np, "opp-hz", &rate);
if (!ret) {
/*
* Rate is defined as an unsigned long in clk API, and so
* casting explicitly to its type. Must be fixed once rate is 64
* bit guaranteed in clk API.
*/
new_opp->rate = (unsigned long)rate;
found = true;
}
*rate_not_available = !!ret;
/*
* Bandwidth consists of peak and average (optional) values:
* opp-peak-kBps = <path1_value path2_value>;
* opp-avg-kBps = <path1_value path2_value>;
*/
ret = _read_bw(new_opp, table, np, true);
if (!ret) {
found = true;
ret = _read_bw(new_opp, table, np, false);
}
/* The properties were found but we failed to parse them */
if (ret && ret != -ENODEV)
return ret;
if (!of_property_read_u32(np, "opp-level", &new_opp->level))
found = true;
if (found)
return 0;
return ret;
}
/**
* _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
* @opp_table: OPP table
@ -558,26 +741,12 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
if (!new_opp)
return ERR_PTR(-ENOMEM);
ret = of_property_read_u64(np, "opp-hz", &rate);
if (ret < 0) {
/* "opp-hz" is optional for devices like power domains. */
if (!opp_table->is_genpd) {
dev_err(dev, "%s: opp-hz not found\n", __func__);
goto free_opp;
}
rate_not_available = true;
} else {
/*
* Rate is defined as an unsigned long in clk API, and so
* casting explicitly to its type. Must be fixed once rate is 64
* bit guaranteed in clk API.
*/
new_opp->rate = (unsigned long)rate;
ret = _read_opp_key(new_opp, opp_table, np, &rate_not_available);
if (ret < 0 && !opp_table->is_genpd) {
dev_err(dev, "%s: opp key field not found\n", __func__);
goto free_opp;
}
of_property_read_u32(np, "opp-level", &new_opp->level);
/* Check if the OPP supports hardware's hierarchy of versions or not */
if (!_opp_is_supported(dev, opp_table, np)) {
dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
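
Since _read_opp_key() now accepts opp-peak-kBps (or opp-level) as the OPP key when opp-hz is absent, a bandwidth-only table becomes valid. A hedged sketch, with hypothetical node names and values:

    /* hypothetical interconnect-bandwidth-only OPP table (no opp-hz) */
    ddr_opp_table: opp-table {
            compatible = "operating-points-v2";

            opp-low {
                    opp-peak-kBps = <1200000>;
                    opp-avg-kBps = <600000>;
            };

            opp-high {
                    opp-peak-kBps = <7200000>;
                    opp-avg-kBps = <3600000>;
            };
    };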


@ -12,6 +12,7 @@
#define __DRIVER_OPP_H__
#include <linux/device.h>
#include <linux/interconnect.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
@ -59,6 +60,7 @@ extern struct list_head opp_tables;
* @rate: Frequency in hertz
* @level: Performance level
* @supplies: Power supplies voltage/current values
* @bandwidth: Interconnect bandwidth values
* @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
* frequency from any other OPP's frequency.
* @required_opps: List of OPPs that are required by this OPP.
@ -81,6 +83,7 @@ struct dev_pm_opp {
unsigned int level;
struct dev_pm_opp_supply *supplies;
struct dev_pm_opp_icc_bw *bandwidth;
unsigned long clock_latency_ns;
@ -144,8 +147,11 @@ enum opp_table_access {
* @clk: Device's clock handle
* @regulators: Supply regulators
* @regulator_count: Number of power supply regulators. Its value can be -1
* @regulator_enabled: Set to true if regulators were previously enabled.
* (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
* property).
* @paths: Interconnect path handles
* @path_count: Number of interconnect paths
* @genpd_performance_state: Device's power domain support performance state.
* @is_genpd: Marks if the OPP table belongs to a genpd.
* @set_opp: Platform specific set_opp callback
@ -189,6 +195,9 @@ struct opp_table {
struct clk *clk;
struct regulator **regulators;
int regulator_count;
bool regulator_enabled;
struct icc_path **paths;
unsigned int path_count;
bool genpd_performance_state;
bool is_genpd;
@ -211,6 +220,7 @@ struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_
void _dev_pm_opp_find_and_remove_table(struct device *dev);
struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
void _opp_free(struct dev_pm_opp *opp);
int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2);
int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table, bool rate_not_available);
int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, int last_cpu);


@ -367,7 +367,7 @@ struct cpufreq_driver {
/* platform specific boost support code */
bool boost_enabled;
int (*set_boost)(int state);
int (*set_boost)(struct cpufreq_policy *policy, int state);
};
/* flags */


@ -35,6 +35,7 @@ int icc_enable(struct icc_path *path);
int icc_disable(struct icc_path *path);
int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
void icc_set_tag(struct icc_path *path, u32 tag);
const char *icc_get_name(struct icc_path *path);
#else
@ -84,6 +85,11 @@ static inline void icc_set_tag(struct icc_path *path, u32 tag)
{
}
static inline const char *icc_get_name(struct icc_path *path)
{
return NULL;
}
#endif /* CONFIG_INTERCONNECT */
#endif /* __LINUX_INTERCONNECT_H */


@ -41,6 +41,18 @@ struct dev_pm_opp_supply {
unsigned long u_amp;
};
/**
* struct dev_pm_opp_icc_bw - Interconnect bandwidth values
* @avg: Average bandwidth corresponding to this OPP (in icc units)
* @peak: Peak bandwidth corresponding to this OPP (in icc units)
*
* This structure stores the bandwidth values for a single interconnect path.
*/
struct dev_pm_opp_icc_bw {
u32 avg;
u32 peak;
};
/**
* struct dev_pm_opp_info - OPP freq/voltage/current values
* @rate: Target clk rate in hz
@ -360,6 +372,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpuma
struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp);
int of_get_required_opp_performance_state(struct device_node *np, int index);
int dev_pm_opp_of_find_icc_paths(struct device *dev, struct opp_table *opp_table);
void dev_pm_opp_of_register_em(struct cpumask *cpus);
#else
static inline int dev_pm_opp_of_add_table(struct device *dev)
@ -408,6 +421,11 @@ static inline int of_get_required_opp_performance_state(struct device_node *np,
{
return -ENOTSUPP;
}
static inline int dev_pm_opp_of_find_icc_paths(struct device *dev, struct opp_table *opp_table)
{
return -ENOTSUPP;
}
#endif
#endif /* __LINUX_OPP_H__ */


@ -1590,7 +1590,7 @@ int swsusp_unmark(void)
}
#endif
static int swsusp_header_init(void)
static int __init swsusp_header_init(void)
{
swsusp_header = (struct swsusp_header*) __get_free_page(GFP_KERNEL);
if (!swsusp_header)