When we attempt to reclaim an inode, the first thing we do is take
the inode lock. This is blocking right now, so if the inode is being
accessed by something else (e.g. being flushed to the cluster
buffer) we will block here.
Change this to a trylock so that we do not block inode reclaim
unnecessarily here.
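The shape of the change is roughly this (a sketch, not the exact
diff; xfs_ilock_nowait() is the existing XFS trylock helper):

        /*
         * Sketch: try to take the ILOCK; if the inode is busy (e.g.
         * being flushed to the cluster buffer), skip it rather than
         * blocking on the lock.
         */
        if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
                return false;   /* busy - revisit on a later pass */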
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Inode reclaim still throttles direct reclaim on the per-ag
reclaim locks. This is no longer necessary as reclaim can run
non-blocking now. Hence we can remove these locks so that we don't
arbitrarily block reclaimers just because there are more direct
reclaimers than there are AGs.
This can result in multiple reclaimers working on the same range of
an AG, but this doesn't cause any apparent issues. Optimising the
spread of concurrent reclaimers for best efficiency can be done in a
future patchset.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
We no longer need to issue IO from shrinker based inode reclaim to
prevent spurious OOM killer invocation. This leaves only the global
filesystem management operations such as unmount needing to
write back dirty inodes and reclaim them.
Instead of using the reclaim pass to write dirty inodes before
reclaiming them, use the AIL to push all the dirty inodes before we
try to reclaim them. This allows us to remove all the conditional
SYNC_WAIT locking and the writeback code from xfs_reclaim_inode()
and greatly simplify the checks we need to do to reclaim an inode.
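In outline, the blocking reclaim path (e.g. unmount) then becomes a
loop like the following sketch (helper names follow the existing XFS
code, but treat this as illustrative rather than the exact diff):

        void xfs_reclaim_inodes(struct xfs_mount *mp)
        {
                int nr_to_scan = INT_MAX;

                while (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) {
                        /* push the AIL to clean all the dirty inodes first */
                        xfs_ail_push_all_sync(mp->m_ail);
                        xfs_reclaim_inodes_ag(mp, &nr_to_scan);
                }
        }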
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Now that dirty inode writeback doesn't cause read-modify-write
cycles on the inode cluster buffer under memory pressure, the need
to throttle memory reclaim to the rate at which we can clean dirty
inodes goes away: we no longer thrash inode cluster buffers under
memory pressure to clean dirty inodes.
This means inode writeback no longer stalls on memory allocation
or read IO, and hence can be done asynchronously without generating
memory pressure. As a result, blocking inode writeback in reclaim is
no longer necessary to prevent reclaim priority windup as cleaning
dirty inodes is no longer dependent on having memory reserves
available for the filesystem to make progress reclaiming inodes.
Hence we can convert inode reclaim to be non-blocking for shrinker
callouts, both for direct reclaim and kswapd.
On a vanilla kernel, running a 16-way fsmark create workload on a
4 node/16p/16GB RAM machine, I can reliably pin 14.75GB of RAM via
userspace mlock(). The OOM killer gets invoked at 15GB of
pinned RAM.
Without the inode cluster pinning, this non-blocking reclaim patch
triggers premature OOM killer invocation with the same memory
pinning, sometimes with as much as 45% of RAM being free. It's
trivially easy to trigger the OOM killer when reclaim does not
block.
With pinning inode clusters in RAM and then adding this patch, I can
reliably pin 14.5GB of RAM and still have the fsmark workload run to
completion. The OOM killer gets invoked at 14.75GB of pinned RAM,
which is only a small amount of memory less than on the vanilla
kernel, and it is much more reliable than with async reclaim alone.
simoops shows that allocation stalls go away when async reclaim is
used. Vanilla kernel:
Run time: 1924 seconds
Read latency (p50: 3,305,472) (p95: 3,723,264) (p99: 4,001,792)
Write latency (p50: 184,064) (p95: 553,984) (p99: 807,936)
Allocation latency (p50: 2,641,920) (p95: 3,911,680) (p99: 4,464,640)
work rate = 13.45/sec (avg 13.44/sec) (p50: 13.46) (p95: 13.58) (p99: 13.70)
alloc stall rate = 3.80/sec (avg: 2.59) (p50: 2.54) (p95: 2.96) (p99: 3.02)
With inode cluster pinning and async reclaim:
Run time: 1924 seconds
Read latency (p50: 3,305,472) (p95: 3,715,072) (p99: 3,977,216)
Write latency (p50: 187,648) (p95: 553,984) (p99: 789,504)
Allocation latency (p50: 2,748,416) (p95: 3,919,872) (p99: 4,448,256)
work rate = 13.28/sec (avg 13.32/sec) (p50: 13.26) (p95: 13.34) (p99: 13.34)
alloc stall rate = 0.02/sec (avg: 0.02) (p50: 0.01) (p95: 0.03) (p99: 0.03)
Latencies don't really change much, nor does the work rate. However,
allocation almost never stalls with these changes, whilst the
vanilla kernel is sometimes reporting 20 stalls/s over a 60s sample
period. This difference is due to inode reclaim being largely
non-blocking now.
IOWs, once we have pinned inode cluster buffers, we can make inode
reclaim non-blocking without a major risk of premature and/or
spurious OOM killer invocation, and without any changes to memory
reclaim infrastructure.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
When we dirty an inode, we are going to have to write it to disk at
some point in the near future. This requires the inode cluster
backing buffer to be present in memory. Unfortunately, under severe
memory pressure we can reclaim the inode backing buffer while the
inode is dirty in memory, resulting in stalling the AIL pushing
because it has to do a read-modify-write cycle on the cluster
buffer.
When we have no memory available, the read of the cluster buffer
blocks the AIL pushing process, and this causes all sorts of issues
for memory reclaim as it requires inode writeback to make forwards
progress. Allocating a cluster buffer causes more memory pressure,
which results in more cluster buffers being reclaimed, which causes
more RMW cycles in the AIL context, and everything then backs up on
AIL progress. Only the synchronous inode cluster writeback in the
inode reclaim code provides some level of
forwards progress guarantees that prevent OOM-killer rampages in
this situation.
Fix this by pinning the inode backing buffer to the inode log item
when the inode is first dirtied (i.e. in xfs_trans_log_inode()).
This may mean the first modification of an inode that has been held
in cache for a long time may block on a cluster buffer read, but
we can do that in transaction context and block safely until the
buffer has been allocated and read.
Once we have the cluster buffer, the inode log item takes a
reference to it, pinning it in memory, and attaches it to the log
item for future reference. This means we can always grab the cluster
buffer from the inode log item when we need it.
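In outline, the first-dirtying path looks roughly like this (a
sketch; read_inode_cluster_buf() is a hypothetical stand-in for the
buffer read, and error handling is elided):

        void xfs_trans_log_inode(struct xfs_trans *tp, struct xfs_inode *ip, uint flags)
        {
                struct xfs_inode_log_item *iip = ip->i_itemp;

                if (!iip->ili_item.li_buf) {
                        struct xfs_buf *bp;

                        /* may block on buffer allocation and read,
                         * which is safe in transaction context */
                        bp = read_inode_cluster_buf(tp, ip);    /* hypothetical helper */
                        xfs_buf_hold(bp);               /* pin the buffer in memory */
                        iip->ili_item.li_buf = bp;      /* attach for future flushing */
                }
                /* ... existing inode dirtying logic ... */
        }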
When the inode is finally cleaned and removed from the AIL, we can
drop the reference the inode log item holds on the cluster buffer.
Once all inodes on the cluster buffer are clean, the cluster buffer
will be unpinned and it will be available for memory reclaim to
reclaim again.
This avoids the issues with needing to do RMW cycles in the AIL
pushing context, and hence allows complete non-blocking inode
flushing to be performed by the AIL pushing context.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
xfs_ail_delete_one() is called directly from dquot and inode IO
completion, as well as from the generic xfs_trans_ail_delete()
function. Inodes are about to have their own failure handling, and
dquots will in future, too. Pull the clearing of the LI_FAILED flag
up into the callers so we can customise the code appropriately.
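For example, inode IO completion might then do something like this
sketch (xfs_clear_li_failed() and xfs_ail_delete_one() are the
existing helpers):

        /* the caller now clears LI_FAILED itself before AIL removal */
        xfs_clear_li_failed(lip);
        xfs_ail_delete_one(ailp, lip);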
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
When a buffer IO error occurs, we want to mark all
the log items attached to the buffer as failed. Open code
the error handling loop so that we can modify the flagging for the
different types of objects directly and independently of each other.
This also allows us to remove the ->iop_error method from the log
item operations.
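The open-coded loop is essentially the following sketch, which the
per-type flagging then grows out of:

        struct xfs_log_item *lip;

        /* walk the buffer's attached log items, marking each as failed */
        list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
                xfs_set_li_failed(lip, bp);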
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Currently, when a buffer with attached log items has an IO error,
it calls ->iop_error for each attached log item. These all call
xfs_set_li_failed() to handle the error, but we are about to change
the way log items manage buffers. Hence we first need to remove the
per-item dependency on buffer handling done by xfs_set_li_failed().
We already have specific buffer type IO completion routines, so move
the log item error handling out of the generic error handling and
into the log item specific functions so we can implement per-type
error handling easily.
This requires a more complex return value from the error handling
code so that we can take the correct action the failure handling
requires. This results in some repeated boilerplate in the
functions, but that can be cleaned up later once all the changes
cascade through this code.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
They are not used anymore, so remove them from the log item and the
buffer iodone attachment interfaces.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Now that we've sorted inode and dquot buffers, we can apply the same
cleanups to dirty buffers with buffer log items. They only have one
callback, too, so we don't need the log item callback. Collapse the
iodone functions and remove all the now unnecessary infrastructure
around callback processing.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
The energy counters of certain models seem to be reporting
inconsistent values. Hence, match for the supported models.
Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com>
Fixes: 8abee9566b ("hwmon: Add amd_energy driver to report energy counters")
Link: https://lore.kernel.org/r/20200706171715.124993-1-nchatrad@amd.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
[BUG]
The following small test script can trigger ASSERT() at unmount time:
  mkfs.btrfs -f $dev
  mount $dev $mnt
  mount -o remount,discard=async $mnt
  umount $mnt
The call trace:
assertion failed: atomic_read(&block_group->count) == 1, in fs/btrfs/block-group.c:3431
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.h:3204!
invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
CPU: 4 PID: 10389 Comm: umount Tainted: G O 5.8.0-rc3-custom+ #68
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Call Trace:
btrfs_free_block_groups.cold+0x22/0x55 [btrfs]
close_ctree+0x2cb/0x323 [btrfs]
btrfs_put_super+0x15/0x17 [btrfs]
generic_shutdown_super+0x72/0x110
kill_anon_super+0x18/0x30
btrfs_kill_super+0x17/0x30 [btrfs]
deactivate_locked_super+0x3b/0xa0
deactivate_super+0x40/0x50
cleanup_mnt+0x135/0x190
__cleanup_mnt+0x12/0x20
task_work_run+0x64/0xb0
__prepare_exit_to_usermode+0x1bc/0x1c0
__syscall_return_slowpath+0x47/0x230
do_syscall_64+0x64/0xb0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
The code:
  ASSERT(atomic_read(&block_group->count) == 1);
  btrfs_put_block_group(block_group);
[CAUSE]
Obviously some btrfs_get_block_group() call is not getting its
matching put call.
The offending btrfs_get_block_group() happens here:
void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
{
        struct btrfs_fs_info *fs_info = bg->fs_info;

        if (list_empty(&bg->bg_list)) {
                btrfs_get_block_group(bg);
                list_add_tail(&bg->bg_list, &fs_info->unused_bgs);
        }
}
So every call site removing the block group from the unused_bgs list
should drop the reference count of that block group.
However, the async discard code didn't follow that convention:
void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
{
        struct btrfs_block_group *block_group, *next;

        list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
                                 bg_list) {
                list_del_init(&block_group->bg_list);
                btrfs_discard_queue_work(&fs_info->discard_ctl, block_group);
        }
}
And in btrfs_discard_queue_work(), it doesn't call
btrfs_put_block_group() either.
[FIX]
Fix the problem by dropping the reference count when we grab the
block group from the unused_bgs list.
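A sketch of the fix (the exact placement of the put relative to
queueing the discard work is illustrative):

        list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
                                 bg_list) {
                list_del_init(&block_group->bg_list);
                /* drop the reference taken by btrfs_mark_bg_unused() */
                btrfs_put_block_group(block_group);
                btrfs_discard_queue_work(&fs_info->discard_ctl, block_group);
        }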
Reported-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Fixes: 6e80d4f8c4 ("btrfs: handle empty block_group removal for async discard")
CC: stable@vger.kernel.org # 5.6+
Tested-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The generic netlink protocol is implemented, but the different
notification functions are not yet connected to the core code.
These changes add the notification calls at the corresponding
places in the core code.
Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20200706105538.2159-4-daniel.lezcano@linaro.org
Initially the thermal framework had a very simple notification
mechanism to send generic netlink messages to the userspace.
The notification function was never called from anywhere and the
corresponding dead code was removed. It was probably a first attempt
to introduce the netlink notification.
At LPC2018, the presentation "Linux thermal: User kernel interface"
proposed creating the notifications to userspace via a kfifo.
The advantage of the kfifo is the performance. It is usually used from
a 1:1 communication channel where a driver captures data and sends it
as fast as possible to a userspace process.
The drawback is that a single process uses the notification channel
exclusively; no other process is allowed to use the channel to get
temperatures or notifications.
This patch defines a generic netlink API to discover the current
thermal setup and adds event notifications as well as temperature
sampling. Like any genetlink protocol, it can evolve, and the
versioning allows backward compatibility to be kept.
In order to prevent the user from getting flooded with data on a
single channel, there are two multicast channels, one for the
temperature sampling when the thermal zone is updated and another one
for the events, so the user can get the events only without the
thermal zone temperature sampling.
Also, a list of commands to discover the thermal setup is added and
can be extended when needed.
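The two channels map naturally onto two genetlink multicast groups,
roughly like this sketch (the group names are an assumption):

        /* separate multicast groups for sampling and for events */
        static const struct genl_multicast_group thermal_genl_mcgrps[] = {
                { .name = "sampling", },        /* tz temperature sampling */
                { .name = "event", },           /* trip points, tz/cdev changes */
        };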
Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20200706105538.2159-3-daniel.lezcano@linaro.org
The next patch will introduce the generic netlink protocol to handle
events, sampling and commands from the thermal framework. In order to
deal with a thermal zone, it uses the zone's unique identifier to
characterize it in the message. Passing an integer is more efficient
than passing an entire string.
This change provides a function that returns the thermal zone pointer
corresponding to the identifier passed as a parameter.
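A sketch of such a lookup helper, assuming the usual thermal zone
list and lock in thermal_core.c:

        struct thermal_zone_device *thermal_zone_get_by_id(int id)
        {
                struct thermal_zone_device *tz, *match = NULL;

                mutex_lock(&thermal_list_lock);
                list_for_each_entry(tz, &thermal_tz_list, node) {
                        if (tz->id == id) {
                                match = tz;
                                break;
                        }
                }
                mutex_unlock(&thermal_list_lock);

                return match;
        }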
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20200706105538.2159-2-daniel.lezcano@linaro.org
The cdev, tz and governor lists, as well as their respective locks,
are statically defined in the thermal_core.c file.
In order to give sane access to these lists, like browsing all the
thermal zones or all the cooling devices, let's define a set of
helpers where we pass a callback as a parameter to be called for each
thermal entity.
We keep the self-encapsulation and ensure the locks are correctly
taken when looking at the list.
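For the thermal zone list, such a helper could look like this
sketch:

        int for_each_thermal_zone(int (*cb)(struct thermal_zone_device *, void *),
                                  void *data)
        {
                struct thermal_zone_device *tz;
                int ret = 0;

                mutex_lock(&thermal_list_lock);
                list_for_each_entry(tz, &thermal_tz_list, node) {
                        ret = cb(tz, data);
                        if (ret)
                                break;
                }
                mutex_unlock(&thermal_list_lock);

                return ret;
        }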
Acked-by: Zhang Rui <rui.zhang@intel.com>
Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20200706105538.2159-1-daniel.lezcano@linaro.org
The recent GCC compiler is very picky with the VD_H_START() and
AFBC_DEC_PIXEL_BGN_H() macros, triggering a compile-time assert error
such as:
In function 'meson_overlay_setup_scaler_params',
inlined from 'meson_overlay_atomic_update' at
drivers/gpu/drm/meson/meson_overlay.c:542:2:
./include/linux/compiler.h:392:38: error: call to
'__compiletime_assert_341' declared with attribute error: FIELD_PREP:
value too large for the field
drivers/gpu/drm/meson/meson_overlay.c:413:4: note: in expansion of macro
'AFBC_DEC_PIXEL_BGN_H'
413 | AFBC_DEC_PIXEL_BGN_H(hd_start_lines - afbc_left) |
| ^~~~~~~~~~~~~~~~~~~~
./include/linux/compiler.h:392:38: error: call to
'__compiletime_assert_401' declared with attribute error: FIELD_PREP:
value too large for the field
These fields are not expected to overflow, but the compiler did
find a case where they can.
We can safely ignore this, so mask the value with the field width.
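The shape of the fix is the following sketch (the field widths shown
are illustrative):

        /* mask the value to the field width so FIELD_PREP cannot see
         * an over-wide value; the parentheses around (value) matter */
        #define VD_H_START(value) \
                FIELD_PREP(GENMASK(28, 16), (value) & GENMASK(12, 0))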
Fixes: e860785d57 ("drm/meson: overlay: setup overlay for Amlogic FBC")
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[narmstrong: moved to (value) to avoid precedence issues]
Link: https://patchwork.freedesktop.org/patch/msgid/20200707135009.32474-1-narmstrong@baylibre.com
This series tries to reduce a whole bunch of overhead in each SPI
transfer. Much of this overhead is new with the recent interconnect
changes, but even without those changes we still had some overhead
that we could avoid. Let's avoid all of it.
These changes are atop the Qualcomm tree to avoid merge conflicts. If
they look good, the most expedient way to land them is probably to get
Ack's from Mark and land them via the Qualcomm tree.
Most testing was done on the Chrome OS 5.4 tree, but sanity check was
done on mainline.
Douglas Anderson (3):
spi: spi-geni-qcom: Avoid clock setting if not needed
spi: spi-geni-qcom: Set an autosuspend delay of 250 ms
spi: spi-geni-qcom: Get rid of most overhead in prepare_message()
drivers/spi/spi-geni-qcom.c | 67 ++++++++++++++++++-------------------
1 file changed, 32 insertions(+), 35 deletions(-)
Hello,
this series first fixes the calculation of the clock rate. The driver
rounds up to the nearest clock rate instead of rounding down, resulting
in SPI devices being accessed with too high an SPI clock.
The remaining patches improve the performance of the driver. The changes range
from micro-optimizations like reducing MMIO writes to the controller to
reducing the number of needed interrupts in some use cases.
regards,
Marc
changes since v1:
- added Maxime Ripard's Acked-by to the existing patches
- 06/10: (was 05/10 in v1)
  "spi: spi-sun6i: sun6i_spi_drain_fifo(): introduce sun6i_spi_get_rx_fifo_count() and make use of it"
  use FIELD_GET instead of open coding it
  (tnx: Maxime Ripard)
- 05/10: "spi: spi-sun6i: sun6i_spi_get_tx_fifo_count: Convert manual shift+mask to FIELD_GET()"
  new patch
In commit 0e3b8a81f5 ("spi: spi-geni-qcom: Add interconnect
support") the spi_geni_runtime_suspend() and spi_geni_runtime_resume()
became a bit slower. Measuring on my hardware I see numbers in the
hundreds of microseconds now.
Let's use autosuspend to help avoid some of the overhead. Now if
we're doing a bunch of transfers we won't need to be constantly
churning.
The number 250 ms for the autosuspend delay was picked a bit
arbitrarily, so if someone has measurements showing a better value we
could easily change this.
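This boils down to the standard runtime-PM autosuspend setup at
probe time, sketched here with the 250 ms delay discussed above:

        /* suspend only after 250 ms of inactivity */
        pm_runtime_set_autosuspend_delay(&pdev->dev, 250);
        pm_runtime_use_autosuspend(&pdev->dev);
        pm_runtime_enable(&pdev->dev);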
Fixes: 0e3b8a81f5 ("spi: spi-geni-qcom: Add interconnect support")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Akash Asthana <akashast@codeaurora.org>
Link: https://lore.kernel.org/r/20200701174506.2.I9b8f6bb1e7e6d8847e2ed2cf854ec55678db427f@changeid
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch adds support for COMPILE_TEST while fixing a warning when
there is no device tree support.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christoph Fritz <chf.fritz@googlemail.com>
Link: https://lore.kernel.org/r/1c437154873ace65ff738a0ebca511308f1cecc1.camel@googlemail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
This change drops the override in `amd_gpio_irq_set_type()` that
ignores the IRQ trigger type settings from the caller. The device
driver (caller) is in a better position to identify the right trigger
type for the device based on the usage as well as the information
exposed by the BIOS. There are instances where the device driver might
want to configure the trigger type differently in different modes. An
example of this is gpio-keys driver which configures IRQ type as
trigger on both edges (to identify assert and deassert events) when in
S0 and reconfigures the trigger type using the information provided by
the BIOS when going into suspend to ensure that the wake happens on
the required edge.
This override in `amd_gpio_irq_set_type()` prevents the caller from
being able to reconfigure trigger type once it is set either based on
ACPI information or the type used by the first caller for IRQ on a
given GPIO line.
Without this change, the pen-insert gpio key (used by the garaged
stylus on a Chromebook) works fine in S0 (i.e. insert and eject
events are
correctly identified), however, BIOS configuration for wake on only
pen eject i.e. only-rising edge or only-falling edge is not honored.
With this change, it was verified that pen-insert gpio key behavior is
correct in both S0 and for wakeup from S3.
Signed-off-by: Furquan Shaikh <furquan@google.com>
Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Link: https://lore.kernel.org/r/20200626211026.513520-1-furquan@google.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Replace the custom code to parse GPIO offsets and/or GPIO offset ranges
by a call to bitmap_parselist(), and an iteration over the returned bit
mask.
This should have no impact on the format of the configuration parameters
written to the "new_device" virtual file in sysfs.
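The parsing pattern is sketched below (add_gpio_line() is a
hypothetical consumer):

        const char *offsets = "0,2-4";  /* example offsets string */
        DECLARE_BITMAP(mask, 64);
        unsigned int i;

        /* bitmap_parselist() returns 0 on success */
        if (!bitmap_parselist(offsets, mask, 64))
                for_each_set_bit(i, mask, 64)
                        add_gpio_line(i);       /* hypothetical consumer */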
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200701114212.8520-3-geert+renesas@glider.be
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
In get_arg(), the variable start is pre-initialized, but overwritten
again in the first statement. Rework the assignment to not rely on
pre-initialization, to make the code easier to read.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200701114212.8520-2-geert+renesas@glider.be
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The PAT1 register contains information about the IRQ type (edge/level)
for input GPIOs with IRQ enabled, and the direction for non-IRQ GPIOs.
So it makes sense to read it only if the GPIO has no interrupt
configured, otherwise input GPIOs configured for level IRQs are
misdetected as output GPIOs.
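In pseudocode, the direction callback becomes roughly the following
sketch (the register query helper is assumed):

        /* pins with an interrupt configured are inputs by definition,
         * so only consult PAT1 when no IRQ is configured */
        if (read_reg_bit(jzgc, GPIO_INT, offset))       /* assumed helper */
                return GPIO_LINE_DIRECTION_IN;
        return read_reg_bit(jzgc, GPIO_PAT1, offset) ?
                GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT;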
Fixes: ebd6651418 ("pinctrl: ingenic: Implement .get_direction for GPIO chips")
Reported-by: João Henrique <johnnyonflame@hotmail.com>
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200622214548.265417-2-paul@crapouillou.net
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Ingenic SoCs don't natively support registering an interrupt for both
rising and falling edges. This has to be emulated in software.
Until now, this was emulated by switching back and forth between
IRQ_TYPE_EDGE_RISING and IRQ_TYPE_EDGE_FALLING according to the level of
the GPIO. While this worked most of the time, when used with GPIOs that
need debouncing, some events would be lost. For instance, between the
time a falling-edge interrupt happens and the interrupt handler
configures the hardware for rising-edge, the level of the pin may have
already risen, and the rising-edge event is lost.
To address that issue, instead of switching back and forth between
IRQ_TYPE_EDGE_RISING and IRQ_TYPE_EDGE_FALLING, we now switch back and
forth between IRQ_TYPE_LEVEL_LOW and IRQ_TYPE_LEVEL_HIGH. Since we
always switch in the interrupt handler, this actually permits us to
detect level changes. In the example above, if the pin level rises
before switching the IRQ type from IRQ_TYPE_LEVEL_LOW to
IRQ_TYPE_LEVEL_HIGH, a new interrupt will be raised as soon as the
handler exits, and the rising-edge event will be properly detected.
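The re-arming step in the interrupt handler is then conceptually
(helper names assumed):

        /* emulate both-edge triggering by always arming the level
         * opposite to the pin's current state */
        bool high = gpio_get_value(jzgc, irq);          /* assumed helper */

        set_irq_type(jzgc, irq, high ? IRQ_TYPE_LEVEL_LOW
                                     : IRQ_TYPE_LEVEL_HIGH);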
Fixes: e72394e2ea ("pinctrl: ingenic: Merge GPIO functionality")
Reported-by: João Henrique <johnnyonflame@hotmail.com>
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Tested-by: João Henrique <johnnyonflame@hotmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200622214548.265417-1-paul@crapouillou.net
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Convert the ingenic,pinctrl.txt doc file to ingenic,pinctrl.yaml.
In the process, some compatible strings now require a fallback, as the
corresponding SoCs are pin-compatible with their fallback variant.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20200622113740.46450-1-paul@crapouillou.net
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
As per 'struct mfd_cell ab8500_devs[]' there are not 1, but 3 PWM
devices on the AB8500. Until now, each of them has referenced
the same Device Tree node. This change ensures each device has
its own.
Due to recent `dtc` checks [0], nodes cannot share the same node
name, so we are forced to rename the affected nodes by appending
their associated numeric 'bank ID'.
[0] ste-ab8500.dtsi:210.16-214.7: ERROR (duplicate_node_names):
/soc/prcmu@80157000/ab8500/ab8500-pwm: Duplicate node name
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Link: https://lore.kernel.org/r/20200622083432.1491715-1-lee.jones@linaro.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Adds support for the back and menu keys on golden.
Signed-off-by: Nick Reitemeyer <nick.reitemeyer@web.de>
Tested-by: Stephan Gerhold <stephan@gerhold.net>
Reviewed-by: Stephan Gerhold <stephan@gerhold.net>
Link: https://lore.kernel.org/r/20200621193822.133683-2-nick.reitemeyer@web.de
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
This adds a default clock/PLL configuration to the driver for use
with generic drivers like simple-card together with a fixed-rate
clock.
Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Reviewed-by: Adam Thomson <Adam.Thomson.Opensource@diasemi.com>
Link: https://lore.kernel.org/r/20200626164623.87894-1-sebastian.reichel@collabora.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Add support for pm660(l) SPMI GPIOs. The PMICs feature
13 and 12 GPIOs respectively, though with a lot of
holes in between.
Signed-off-by: Konrad Dybcio <konradybcio@gmail.com>
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Link: https://lore.kernel.org/r/20200622192558.152828-2-konradybcio@gmail.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
This adds support for Sparx5 pinctrl, using the ocelot driver as a
basis. It adds pinconf support as well, as supported by the
platform.
Signed-off-by: Lars Povlsen <lars.povlsen@microchip.com>
Reviewed-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Link: https://lore.kernel.org/r/20200615133242.24911-6-lars.povlsen@microchip.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
If a GPIO bank has greater than 16 pins, PAD_DS_REG is split into two
or more registers. However, when register and bit were calculated, the
first register defined in the bank was used, and the bit was calculated
based on the first pin. This causes problems in setting the driving
strength.
The following method was used to solve this problem:
A bit is calculated first using predefined strides. Then, if the bit is
32 or more, the register is changed by the quotient of the bit divided
by 32. And the bit is set to the remainder.
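In code, the calculation reads roughly as follows (variable names
are illustrative):

        /* stride bits of drive-strength field per pin */
        u32 bit = pin_offset * drv_stride;

        /* spill into the following 32-bit registers, 4 bytes apart */
        reg += 4 * (bit / 32);
        bit %= 32;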
Signed-off-by: Hyeonki Hong <hhk7734@gmail.com>
Link: https://lore.kernel.org/r/20200618025916.GA19368@home-desktop
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The commit 3ad796cbc3 ("ALSA: pcm: Use SG-buffer only when direct
DMA is available") introduced a check of the DMA type and this caused
a build error on m68k (and possibly some others) due to the lack of
dma_is_direct() definition. Since the check is needed only for
CONFIG_SND_DMA_SGBUF enablement (i.e. solely x86), use #ifdef instead
of IS_ENABLED() for avoiding such a build error.
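The shape of the fix is sketched below (the fallback action shown is
an assumption; dma_is_direct() is the helper the original commit
used):

        /* compile the direct-DMA check only where SG-buffers exist (x86) */
        #ifdef CONFIG_SND_DMA_SGBUF
                if (!dma_is_direct(get_dma_ops(dev)))
                        type = SNDRV_DMA_TYPE_DEV;      /* assumed fallback */
        #endif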
Fixes: 3ad796cbc3 ("ALSA: pcm: Use SG-buffer only when direct DMA is available")
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20200707111225.26826-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Use the correct function name in the documentation for
"pcs_parse_one_pinctrl_entry()".
"smux_parse_one_pinctrl_entry()" appears to be an artifact from the
development of a prior patch series ("simple pinmux driver") which
transformed into pinctrl-single.
Fixes: 8b8b091bf0 ("pinctrl: Add one-register-per-pin type device tree based pinctrl driver")
Signed-off-by: Drew Fustini <drew@beagleboard.org>
Link: https://lore.kernel.org/r/20200617180543.GA4186054@x1
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Increase #pinctrl-cells to 2 so that mux and conf can be kept separate.
This requires the AM33XX_PADCONF macro in omap.h to also be modified to
keep pin
conf and pin mux values separate.
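The new two-cell form of the macro is conceptually (a sketch; conf
and mux were previously OR'ed into a single cell):

        #define AM33XX_PADCONF(pa, conf, mux) \
                OMAP_IOPAD_OFFSET((pa), 0x0800) (conf) (mux)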
Signed-off-by: Drew Fustini <drew@beagleboard.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Link: https://lore.kernel.org/r/20200701013320.130441-3-drew@beagleboard.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>