4126c0197b
On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the cpus cannot be independently powered down, either due to sequencing restrictions (on Tegra 2, cpu 0 must be the last to power down), or due to HW bugs (on OMAP4460, a cpu powering up will corrupt the gic state unless the other cpu runs a workaround). Each cpu has a power state that it can enter without coordinating with the other cpu (usually Wait For Interrupt, or WFI), and one or more "coupled" power states that affect blocks shared between the cpus (L2 cache, interrupt controller, and sometimes the whole SoC). Entering a coupled power state must be tightly controlled on both cpus.

The easiest solution to implementing coupled cpu power states is to hotplug all but one cpu whenever possible, usually using a cpufreq governor that looks at cpu load to determine when to enable the secondary cpus. This causes problems, as hotplug is an expensive operation, so the number of hotplug transitions must be minimized, leading to very slow response to loads, often on the order of seconds.

This file implements an alternative solution, where each cpu will wait in the WFI state until all cpus are ready to enter a coupled state, at which point the coupled state function will be called on all cpus at approximately the same time.

Once all cpus are ready to enter idle, they are woken by an smp cross call. At this point, there is a chance that one of the cpus will find work to do, and choose not to enter idle. A final pass is needed to guarantee that all cpus will call the power state enter function at the same time. During this pass, each cpu will increment the ready counter, and continue once the ready counter matches the number of online coupled cpus. If any cpu exits idle, the other cpus will decrement their counter and retry.
To use coupled cpuidle states, a cpuidle driver must:

- Set struct cpuidle_device.coupled_cpus to the mask of all coupled cpus, usually the same as cpu_possible_mask if all cpus are part of the same cluster. The coupled_cpus mask must be set in the struct cpuidle_device for each cpu.
- Set struct cpuidle_device.safe_state to a state that is not a coupled state. This is usually WFI.
- Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each state that affects multiple cpus.
- Provide a struct cpuidle_state.enter function for each state that affects multiple cpus. This function is guaranteed to be called on all cpus at approximately the same time. The driver should ensure that the cpus all abort together if any cpu tries to abort once the function is called.

update1: cpuidle: coupled: fix count of online cpus

online_count was never incremented on boot, and was also counting cpus that were not part of the coupled set. Fix both issues by introducing a new function that counts online coupled cpus, and call it from register as well as the hotplug notifier.

update2: cpuidle: coupled: fix decrementing ready count

cpuidle_coupled_set_not_ready sometimes refuses to decrement the ready count in order to prevent a race condition. This makes it unsuitable for use when finished with idle. Add a new function cpuidle_coupled_set_done that decrements both the ready count and waiting count, and call it after idle is complete.

Cc: Amit Kucheria <amit.kucheria@linaro.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Len Brown <len.brown@intel.com>
/*
 * cpuidle.h - The internal header file
 */

#ifndef __DRIVER_CPUIDLE_H
#define __DRIVER_CPUIDLE_H

#include <linux/device.h>

/* For internal use only */
extern struct cpuidle_governor *cpuidle_curr_governor;
extern struct list_head cpuidle_governors;
extern struct list_head cpuidle_detected_devices;
extern struct mutex cpuidle_lock;
extern spinlock_t cpuidle_driver_lock;
extern int cpuidle_disabled(void);
extern int cpuidle_enter_state(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int next_state);

/* idle loop */
extern void cpuidle_install_idle_handler(void);
extern void cpuidle_uninstall_idle_handler(void);

/* governors */
extern int cpuidle_switch_governor(struct cpuidle_governor *gov);

/* sysfs */
extern int cpuidle_add_interface(struct device *dev);
extern void cpuidle_remove_interface(struct device *dev);
extern int cpuidle_add_state_sysfs(struct cpuidle_device *device);
extern void cpuidle_remove_state_sysfs(struct cpuidle_device *device);
extern int cpuidle_add_sysfs(struct device *dev);
extern void cpuidle_remove_sysfs(struct device *dev);

#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int state);
int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int next_state);
int cpuidle_coupled_register_device(struct cpuidle_device *dev);
void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
#else
static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int state)
{
	return false;
}

static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int next_state)
{
	return -1;
}

static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
{
	return 0;
}

static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
{
}
#endif

#endif /* __DRIVER_CPUIDLE_H */