Commit Graph

1169205 Commits (887185649c7ee8a9cc2d4e94de92bbbae6cd3747)

Rahul Rameshbabu e12ebbf0cc net/mlx5: Document previously implemented mlx5 tracepoints
Tracepoints were previously implemented but not documented until this
patch series.

Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:03 -08:00
Rahul Rameshbabu a12ba19269 net/mlx5: Update Kconfig parameter documentation
Provide documentation for mlx5 driver Kconfig flags that were defined but
not documented until this patch series.

Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:03 -08:00
Rahul Rameshbabu f2d51e5793 net/mlx5: Separate mlx5 driver documentation into multiple pages
The mlx5 device driver documentation page has grown in size and should be
split into multiple subpages. This change also contains a table of contents
for these new subpages.

Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
Roi Dayan 199abf33f4 net/mlx5: Lag, Move mpesw related definitions to mpesw.h
mpesw definitions should be in mpesw.h and not lag.h.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
Mark Bloch 6a80313d24 net/mlx5: Lag, Use flag to check for shared FDB mode
It's redundant and incorrect to also check whether the lag is in SR-IOV mode.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
Roi Dayan b399b066e2 net/mlx5: Lag, Remove redundant bool allocation on the stack
There is no need to allocate the bool variable on the stack; just return the value directly.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
Roi Dayan 9a49a64ea7 net/mlx5: Lag, Use mlx5_lag_dev() instead of dereferencing pointers
Use the existing wrapper mlx5_lag_dev() to access the lag object from
dev for better maintainability and consistent code.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
Roi Dayan 2afcfae77a net/mlx5: Lag, Update multiport eswitch check to log an error
Update the function to log an error to the user if it fails to offload
the rule and, while at it, add the correct prefix to the function name.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-02-04 02:07:02 -08:00
David S. Miller 042b7858d5 Merge branch 'net-smc-parallelism'
D. Wythe says:

====================
net/smc: optimize the parallelism of SMC-R connections

This patch set attempts to optimize the parallelism of SMC-R connections,
mainly to reduce unnecessary blocking on locks, and to fix exceptions that
occur after those optimizations.

According to the off-CPU graph, the SMC workers' off-CPU time breaks down as follows:

smc_close_passive_work                  (1.09%)
        smcr_buf_unuse                  (1.08%)
                smc_llc_flow_initiate   (1.02%)

smc_listen_work                         (48.17%)
        __mutex_lock.isra.11            (47.96%)

An ideal SMC-R connection process should only block on the network's IO
events, but it's quite clear that the SMC-R connection currently spends
most of its time queued on locks.

The goal of this patchset is to achieve that ideal situation, where the
connection blocks on network IO events for the majority of its lifetime.

There are three big locks here:

1. smc_client_lgr_pending & smc_server_lgr_pending

2. llc_conf_mutex

3. rmbs_lock & sndbufs_lock

And an implementation issue:

1. confirm/delete rkey messages can't be sent concurrently, even though
the protocol does allow it.

Unfortunately, the above problems together limit the parallelism of
SMC-R connections. If any of them is left unsolved, our goal cannot
be achieved.

After this patch set, we get a much more ideal off-CPU graph, as
follows:

smc_close_passive_work                                  (41.58%)
        smcr_buf_unuse                                  (41.57%)
                smc_llc_do_delete_rkey                  (41.57%)

smc_listen_work                                         (39.10%)
        smc_clc_wait_msg                                (13.18%)
		tcp_recvmsg_locked                      (13.18%)
        smc_listen_find_device                          (25.87%)
                smcr_lgr_reg_rmbs                       (25.87%)
                        smc_llc_do_confirm_rkey         (25.87%)

We can see that most of the waiting time is now spent waiting for network
IO events. This also brings a certain performance improvement in our
short-lived connection wrk/nginx benchmark test:

+--------------+------+------+-------+--------+------+--------+
|conns/qps     |c4    | c8   |  c16  |  c32   | c64  |  c200  |
+--------------+------+------+-------+--------+------+--------+
|SMC-R before  |9.7k  | 10k  |  10k  |  9.9k  | 9.1k |  8.9k  |
+--------------+------+------+-------+--------+------+--------+
|SMC-R now     |13k   | 19k  |  18k  |  16k   | 15k  |  12k   |
+--------------+------+------+-------+--------+------+--------+
|TCP           |15k   | 35k  |  51k  |  80k   | 100k |  162k  |
+--------------+------+------+-------+--------+------+--------+

The reason the benefit is less obvious once the number of connections
increases is the workqueue. If we change the workqueue to UNBOUND,
we can obtain at least a 4-5x performance improvement, reaching up to half
of TCP. However, this is not an elegant solution, and optimizing it
will be much more complicated. In any case, we will submit the relevant
optimization patches as soon as possible.

Please note that the premise here is that the lock-related problems
must be solved first; otherwise, no matter how we optimize the workqueue,
there won't be much improvement.

Because there are a lot of related changes to the code, if you have
any questions or suggestions, please let me know.

Thanks
D. Wythe

v1 -> v2:

1. Fix panic in SMC-D scenario
2. Fix lnkc-related hashfn calculation exception caused by operator
precedence
3. Only wake up one connection if the lnk is not active
4. Delete obsolete unlock logic in smc_listen_work()
5. PATCH format, do Reverse Christmas tree
6. PATCH format, change all xxx_lnk_xxx function to xxx_link_xxx
7. PATCH format, add correct Fixes tags to the bugfix patches
8. PATCH format, fix some spelling errors
9. PATCH format, rename slow to do_slow

v2 -> v3:

1. Add SMC-D support; remove the concept of link cluster since SMC-D has
no link at all. Replace it with an lgr decision maker, which provides
suggestions to SMC-D and SMC-R on whether to create a new link group.

2. Fix the corruption problem described by PATCH 'fix application
data exception' on SMC-D.

v3 -> v4:

1. Fix panic caused by an uninitialized map.

v4 -> v5:

1. Make SMC-D buf creation serial to avoid a potential error
2. Add a flag to synchronize the success of the first contact
with the readiness of the link group, including SMC-D and SMC-R.
3. Fix a possible reference count leak in smc_llc_flow_start().
4. Reorder the patches so that the bugfix patches come first.

v5 -> v6:

1. Separate the bugfix patches to make them independent.
2. Merge patch 'fix SMC_CLC_DECL_ERR_REGRMB without smc_server_lgr_pending'
with patch 'remove locks smc_client_lgr_pending and smc_server_lgr_pending'
3. Format code styles, including alignment and reverse Christmas tree
style.
4. Fix a possible memory leak in smc_llc_rmt_delete_rkey()
and smc_llc_rmt_conf_rkey().

v6 -> v7:

1. Discard the patch attempting to remove the global locks
2. Discard the patch attempting to make the confirm/delete rkey process
concurrent
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-04 09:48:19 +00:00
D. Wythe aff7bfed90 net/smc: replace mutex rmbs_lock and sndbufs_lock with rw_semaphore
It's clear that rmbs_lock and sndbufs_lock aim to protect the rmbs list
and the sndbufs list.

During connection establishment, smc_buf_get_slot() is always invoked,
and it only performs read semantics on the rmbs list and sndbufs list.

Based on the above considerations, we replace the mutexes with
rw_semaphores. Only smc_buf_get_slot() uses down_read(), to allow
smc_buf_get_slot() to run concurrently; all other parts use down_write()
to keep exclusive semantics.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-04 09:48:19 +00:00
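
A minimal sketch of the locking split described in the commit above, using
hypothetical names rather than the actual SMC structures: the get_slot-style
lookup takes the semaphore for reading (claiming a slot atomically), while
list modification keeps the exclusive write side.

	/* Hedged illustration of the mutex -> rw_semaphore conversion. */
	#include <linux/rwsem.h>
	#include <linux/list.h>
	#include <linux/atomic.h>

	struct buf_desc {			/* hypothetical buffer descriptor */
		struct list_head list;
		int size;
		int used;
	};

	static DECLARE_RWSEM(bufs_lock);	/* was: static DEFINE_MUTEX(bufs_lock); */

	/* Lookup path: read lock only, so lookups may run in parallel;
	 * claiming the slot itself is done atomically.
	 */
	static struct buf_desc *buf_get_slot(struct list_head *buf_list, int size)
	{
		struct buf_desc *buf;

		down_read(&bufs_lock);		/* was: mutex_lock() */
		list_for_each_entry(buf, buf_list, list) {
			if (buf->size == size && cmpxchg(&buf->used, 0, 1) == 0) {
				up_read(&bufs_lock);
				return buf;
			}
		}
		up_read(&bufs_lock);		/* was: mutex_unlock() */
		return NULL;
	}

	/* List modification keeps exclusive (write) semantics. */
	static void buf_add(struct list_head *buf_list, struct buf_desc *buf)
	{
		down_write(&bufs_lock);
		list_add(&buf->list, buf_list);
		up_write(&bufs_lock);
	}
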
D. Wythe 4da687448d net/smc: reduce unnecessary blocking in smcr_lgr_reg_rmbs()
Unlike smc_buf_create() and smcr_buf_unuse(), smcr_lgr_reg_rmbs() is
exclusive when the assigned rmb_desc was not registered, although it can
be executed in parallel when the assigned rmb_desc was registered already
and only performs read semantics on it. Hence, we cannot simply replace
it with a read semaphore.

The idea here is that if the assigned rmb_desc was registered already,
use the read semaphore to protect the critical section; if the assigned
rmb_desc was not registered, keep using the write semaphore to keep its
exclusivity.

Thanks to the reusable nature of rmb_desc, this allows us to execute
in parallel in most cases.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-04 09:48:19 +00:00
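
A rough sketch of the conditional locking idea from the commit above, with
hypothetical names and helpers (not the actual SMC implementation): take the
read side when the descriptor is already registered, and fall back to the
write side only for the first registration.

	#include <linux/rwsem.h>
	#include <linux/types.h>

	struct reg_desc {			/* hypothetical descriptor */
		bool is_registered;
	};

	int do_confirm_key(struct reg_desc *desc);	/* hypothetical helpers */
	int do_register_buf(struct reg_desc *desc);

	static int lgr_reg_buf(struct rw_semaphore *conf_lock, struct reg_desc *desc)
	{
		int rc = 0;

		if (desc->is_registered) {
			/* Already registered: read-only work, may run in parallel. */
			down_read(conf_lock);
			rc = do_confirm_key(desc);
			up_read(conf_lock);
		} else {
			/* First registration: keep exclusive (write) semantics. */
			down_write(conf_lock);
			if (!desc->is_registered) {	/* re-check under the lock */
				rc = do_register_buf(desc);
				if (!rc)
					desc->is_registered = true;
			}
			up_write(conf_lock);
		}
		return rc;
	}
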
D. Wythe f6421014e8 net/smc: use read semaphores to reduce unnecessary blocking in smc_buf_create() & smcr_buf_unuse()
The following is part of the off-CPU graph during frequent SMC-R
short-lived connection processing:

process_one_work				(51.19%)
smc_close_passive_work			(28.36%)
	smcr_buf_unuse				(28.34%)
	rwsem_down_write_slowpath		(28.22%)

smc_listen_work				(22.83%)
	smc_clc_wait_msg			(1.84%)
	smc_buf_create				(20.45%)
		smcr_buf_map_usable_links
		rwsem_down_write_slowpath	(20.43%)
	smcr_lgr_reg_rmbs			(0.53%)
		rwsem_down_write_slowpath	(0.43%)
		smc_llc_do_confirm_rkey		(0.08%)

We can clearly see that during connection establishment, the connections'
waiting time is spent not on IO but on llc_conf_mutex.

More importantly, the core critical areas (smcr_buf_unuse() &
smc_buf_create()) only perform read semantics on the links, so we can
easily replace the lock with a read semaphore.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-04 09:48:19 +00:00
D. Wythe b5dd4d6981 net/smc: llc_conf_mutex refactor, replace it with rw_semaphore
llc_conf_mutex was used to protect links and link-related configurations
in the same link group, for example, adding or deleting links. However,
in most cases, the protected critical area only has read semantics, with
no write semantics at all, such as obtaining a usable link or an
available rmb_desc.

This patch does a simple code refactoring: replace the mutex with an
rw_semaphore, replace mutex_lock with down_write, and replace mutex_unlock
with up_write.

Theoretically, this replacement is equivalent, but after this patch,
we can distinguish lock granularity according to different semantics
of critical areas.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-02-04 09:48:19 +00:00
Mark Brown 69218b59be kselftest/alsa: Run PCM tests for multiple cards in parallel
With each test taking 4 seconds the runtime of pcm-test can add up. Since
generally each card in the system is physically independent and will be
unaffected by what's going on with other cards we can mitigate this by
testing each card in parallel. Make a list of cards as we enumerate the
system and then start a thread for each, then join the threads to ensure
they have all finished. The threads each run the same tests we currently
run for each PCM on the card before exiting.

The list of PCMs is kept global since it helps with global operations
like working out our planned number of tests and identifying missing PCMs
and it seemed neater to check for PCMs on the right card in the card
thread than make every PCM loop iterate over cards as well.

We don't run per-PCM tests in parallel since in embedded systems it can
be the case that resources are shared between the PCMs and operations on
one PCM on a card may constrain what can be done on another PCM on the same
card leading to potentially unstable results.

We use a mutex to ensure that the reporting of results is serialised and we
don't have issues with anything like the current test number. We could do
this in the kselftest framework, but it seems like that might cause problems
for other tests that are doing lower level testing and building in
constrained environments such as nolibc, so this seems more sensible.

Note that the ordering of the tests can't be guaranteed as things stand;
this does not seem like a major problem, since the numbering of tests often
changes as test programs are changed, so results parsers are expected to
rely on the test name rather than the test numbers. We also now prefix the
machine-generated test name when printing the description of the test, since
this is logged before streaming starts.

On my two card desktop system this reduces the overall runtime by a
third.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230203-alsa-pcm-test-card-thread-v1-1-59941640ebba@kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2023-02-04 09:36:35 +01:00
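
A condensed, hedged sketch of the per-card threading pattern described above;
names and output format are illustrative, not the actual pcm-test code. One
thread runs each card's tests, reporting is serialised with a mutex, and the
main thread joins all card threads at the end.

	#include <pthread.h>
	#include <stdio.h>

	#define MAX_CARDS 32

	static pthread_mutex_t results_lock = PTHREAD_MUTEX_INITIALIZER;

	struct card_data {
		int card;		/* ALSA card number */
	};

	static void report_result(int card, const char *name, int pass)
	{
		/* Serialise reporting so test numbering stays consistent. */
		pthread_mutex_lock(&results_lock);
		printf("%s - card%d: %s\n", pass ? "ok" : "not ok", card, name);
		pthread_mutex_unlock(&results_lock);
	}

	static void *card_thread(void *arg)
	{
		struct card_data *data = arg;

		/* Per-PCM tests for this card run sequentially; only
		 * different cards run in parallel.
		 */
		report_result(data->card, "playback", 1);	/* placeholder test */
		return NULL;
	}

	int main(void)
	{
		struct card_data cards[MAX_CARDS];
		pthread_t threads[MAX_CARDS];
		int i, num_cards = 2;	/* would come from card enumeration */

		for (i = 0; i < num_cards; i++) {
			cards[i].card = i;
			pthread_create(&threads[i], NULL, card_thread, &cards[i]);
		}
		for (i = 0; i < num_cards; i++)
			pthread_join(threads[i], NULL);
		return 0;
	}
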
Takashi Sakamoto 0d9eb7ed95 ALSA: fireface: add field for the number of messages copied to user space
The current structure includes no field to express the number of messages
copied to user space, thus the user space application needs information
from outside the structure to parse its content.

This commit adds a field to express the number of messages copied to user
space, since a self-contained structure is preferable.

Kees Cook proposed the idea of annotating the bounds of flexible arrays
as part of his future improvements for flexible-length arrays in the
kernel. The additional field for the message count fits that idea as well.

Reference: https://people.kernel.org/kees/bounded-flexible-arrays-in-c
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Link: https://lore.kernel.org/r/20230202133708.163936-1-o-takashi@sakamocchi.jp
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2023-02-04 09:35:17 +01:00
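
A hedged sketch of the self-contained layout the commit describes; the struct
and field names are illustrative, not the real ALSA fireface UAPI. The count
field bounds the flexible array, so user space needs no out-of-band
information to parse it.

	#include <linux/types.h>

	struct ff_msg_payload {			/* illustrative message entry */
		__u32 message;
		__u32 tstamp;
	};

	struct ff_msg_batch {
		__u32 count;			/* number of valid entries below */
		struct ff_msg_payload msgs[];	/* flexible array, bounded by count */
	};

	/* User space can then iterate safely:
	 * for (i = 0; i < batch->count; i++) handle(&batch->msgs[i]);
	 */
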
Ard Biesheuvel cf1d2ffcc6 efi: Discover BTI support in runtime services regions
Add the generic plumbing to detect whether or not the runtime code
regions were constructed with BTI/IBT landing pads by the firmware,
permitting the OS to enable enforcement when mapping these regions into
the OS's address space.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
2023-02-04 09:19:02 +01:00
Florian Lehner 17c9b4e1a7 bpf: fix typo in header for bpf_perf_prog_read_value
Fix a simple typo in the documentation for bpf_perf_prog_read_value.

Signed-off-by: Florian Lehner <dev@der-flo.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/r/20230203121439.25884-1-dev@der-flo.net
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-02-03 22:11:21 -08:00
Jakub Kicinski 88c940cccb Merge branch 'updates-to-enetc-txq-management'
Vladimir Oltean says:

====================
Updates to ENETC TXQ management

The set ensures that the number of TXQs given by enetc to the network
stack (mqprio or TX hashing) + the number of TXQs given to XDP never
exceeds the number of available TXQs.

These are the first 4 patches of series "[v5,net-next,00/17] ENETC
mqprio/taprio cleanup" from here:
https://patchwork.kernel.org/project/netdevbpf/cover/20230202003621.2679603-1-vladimir.oltean@nxp.com/

There is no change in this version compared to there. I split them off
because this contains a fix for net-next and it would be good if it
could go in quickly. I also did it to reduce the patch count of that
other series, if I need to respin it again.
====================

Link: https://lore.kernel.org/r/20230203001116.3814809-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 20:05:59 -08:00
Vladimir Oltean 800db2d125 net: enetc: ensure we always have a minimum number of TXQs for stack
Currently it can happen that an mqprio qdisc is installed with num_tc 8,
and this will reserve 8 (out of 8) TXQs for the network stack. Then we
can attach an XDP program, and this will crop 2 TXQs, leaving just 6 for
mqprio. That's not what the user requested, and we should fail it.

On the other hand, if mqprio isn't requested, we still give the 8 TXQs
to the network stack (with hashing among a single traffic class), but
then, cropping 2 TXQs for XDP is fine, because the user didn't
explicitly ask for any number of TXQs, so no expectations are violated.

Simply put, the logic that mqprio should impose a minimum number of TXQs
for the network stack never existed. Let's say (more or less arbitrarily) that
without mqprio, the driver expects a minimum number of TXQs equal to the
number of CPUs (on NXP LS1028A, that is either 1, or 2). And with mqprio,
mqprio gives the minimum required number of TXQs.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 20:05:57 -08:00
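
A hedged sketch of the kind of budget check described above (names are
illustrative and the actual enetc code differs): refuse a configuration that
would leave the stack with fewer TXQs than it was promised.

	#include <linux/errno.h>

	/* stack_min_txqs is num_tc when mqprio is installed, or the number of
	 * CPUs (1 or 2 on LS1028A) when it is not.
	 */
	static int check_txq_budget(int total_txqs, int stack_min_txqs, int xdp_txqs)
	{
		if (stack_min_txqs + xdp_txqs > total_txqs)
			return -EBUSY;	/* would break what was promised to the stack */
		return 0;
	}
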
Vladimir Oltean 4ea1dd743e net: enetc: recalculate num_real_tx_queues when XDP program attaches
Since the blamed net-next commit, enetc_setup_xdp_prog() no longer goes
through enetc_open(), and therefore, the function which was supposed to
detect whether a BPF program exists (in order to crop some TX queues
from network stack usage), enetc_num_stack_tx_queues(), no longer gets
called.

We can move the netif_set_real_num_rx_queues() call to enetc_alloc_msix()
(probe time), since it is a runtime invariant. We can do the same thing
with netif_set_real_num_tx_queues(), and let enetc_reconfigure_xdp_cb()
explicitly recalculate and change the number of stack TX queues.

Fixes: c33bfaf91c ("net: enetc: set up XDP program under enetc_reconfigure()")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 20:05:57 -08:00
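
A hedged sketch of where the recalculation now happens, i.e. in the XDP
reconfigure callback; the private structure and its fields are hypothetical,
while netif_set_real_num_tx_queues() is the real netdev API.

	#include <linux/netdevice.h>
	#include <linux/bpf.h>

	struct my_priv {			/* hypothetical private data */
		struct bpf_prog *xdp_prog;
		int num_tx_rings;
		int num_rx_rings;
	};

	static int xdp_reconfig_cb(struct net_device *ndev, struct bpf_prog *prog)
	{
		struct my_priv *priv = netdev_priv(ndev);
		int stack_txqs;

		priv->xdp_prog = prog;
		/* TXQs left for the stack once XDP has cropped its share. */
		stack_txqs = priv->num_tx_rings - (prog ? priv->num_rx_rings : 0);

		return netif_set_real_num_tx_queues(ndev, stack_txqs);
	}
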
Vladimir Oltean 46a0ecf93b net: enetc: allow the enetc_reconfigure() callback to fail
enetc_reconfigure() was modified in commit c33bfaf91c ("net: enetc:
set up XDP program under enetc_reconfigure()") to take an optional
callback that runs while the netdev is down, but this callback currently
cannot fail.

Code up the error handling so that the interface is restarted with the
old resources if the callback fails.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 20:05:57 -08:00
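
A rough sketch of the error-handling shape described above; all helpers here
are hypothetical and this is not the actual enetc_reconfigure(). If the
callback fails while the interface is down, restart with the old resources
and propagate the error.

	#include <linux/netdevice.h>

	void my_stop(struct net_device *ndev);			/* hypothetical */
	void my_start(struct net_device *ndev);			/* hypothetical */
	int my_swap_resources(struct net_device *ndev);		/* hypothetical */

	static int reconfigure(struct net_device *ndev,
			       int (*cb)(struct net_device *ndev, void *ctx),
			       void *ctx)
	{
		int err = 0;

		my_stop(ndev);			/* interface goes down */

		if (cb) {
			err = cb(ndev, ctx);
			if (err)
				goto out_restart;	/* keep the old resources */
		}

		err = my_swap_resources(ndev);		/* install the new rings */

	out_restart:
		my_start(ndev);				/* back up either way */
		return err;
	}
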
Vladimir Oltean 1c81a9b3aa net: enetc: simplify enetc_num_stack_tx_queues()
We keep a pointer to the xdp_prog in the private netdev structure as
well; what's replicated per RX ring is done so just for more convenient
access from the NAPI poll procedure.

Simplify enetc_num_stack_tx_queues() by looking at priv->xdp_prog rather
than iterating through the information replicated per RX ring.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 20:05:57 -08:00
Jakub Kicinski a05e7a6798 Merge branch 'ionic-code-maintenance'
Shannon Nelson says:

====================
ionic: code maintenance

These are a few fixes for a hardware bug, a couple of sw bugs,
and a little code cleanup.
====================

Link: https://lore.kernel.org/r/20230202215537.69756-1-shannon.nelson@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:58:57 -08:00
Allen Hubbe b69585bfce ionic: missed doorbell workaround
In one version of the HW there is a remote possibility that it
will miss the doorbell ring.  This adds a bit of protection to
be sure we don't stall a queue from a missed doorbell.

Fixes: 0f3154e6bc ("ionic: Add Tx and Rx handling")
Signed-off-by: Allen Hubbe <allen.hubbe@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:58:53 -08:00
Shannon Nelson 1fffb02541 ionic: clear up notifyq alloc commentary
Make sure the q+cq alloc for NotifyQ is clearly documented
and don't bother with unnecessary local variables.

Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:58:53 -08:00
Neel Patel e8797a0584 ionic: clean interrupt before enabling queue to avoid credit race
Clear the interrupt credits before enabling the queue rather
than after to be sure that the enabled queue starts at 0 and
that we don't wipe away possible credits after enabling the
queue.

Fixes: 0f3154e6bc ("ionic: Add Tx and Rx handling")
Signed-off-by: Neel Patel <neel.patel@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:58:53 -08:00
Jakub Kicinski 8788260e8f Merge branch 'raw-add-drop-reasons-and-use-another-hash-function'
Eric Dumazet says:

====================
raw: add drop reasons and use another hash function

The first two patches add drop reasons to raw input processing.

Last patch spreads RAW sockets in the shared hash tables
to avoid long hash buckets in some cases.
====================

Link: https://lore.kernel.org/r/20230202094100.3083177-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:56:26 -08:00
Eric Dumazet 6579f5bacc raw: use net_hash_mix() in hash function
Some applications seem to rely on RAW sockets.

If they use private netns, we can avoid piling all RAW
sockets bound to a given protocol into a single bucket.

Also place (struct raw_hashinfo).lock into its own
cache line to limit false sharing.

An alternative would be to have per-netns hashtables,
but this seems too expensive for most netns
where RAW sockets are not used.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:56:23 -08:00
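
One plausible, hedged form of the hash described above (the exact in-tree
function may differ): mix the netns hash into the bucket selection so RAW
sockets from different private netns tend to land in different buckets.

	#include <linux/hash.h>
	#include <net/net_namespace.h>
	#include <net/netns/hash.h>		/* net_hash_mix() */

	#define RAW_HTABLE_LOG	8		/* illustrative: 256 buckets */

	static unsigned int raw_bucket(const struct net *net, int protocol)
	{
		/* Different private netns now map to different buckets. */
		return hash_32(net_hash_mix(net) + protocol, RAW_HTABLE_LOG);
	}
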
Eric Dumazet 42186e6c00 ipv4: raw: add drop reasons
Use existing helpers and drop reason codes for RAW input path.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:56:23 -08:00
Eric Dumazet 8d8ebd77f5 ipv6: raw: add drop reasons
Use existing helpers and drop reason codes for RAW input path.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:56:23 -08:00
Heiner Kallweit 69ff53e4a4 net: phy: meson-gxl: use MMD access dummy stubs for GXL, internal PHY
Jerome provided the information that the GXL internal PHY also doesn't
support MMD register access and EEE. MMD reads return 0xffff, which
results in e.g. completely wrong ethtool --show-eee output.
Therefore use the MMD dummy stubs.

Fixes: d853d145ea ("net: phy: add an option to disable EEE advertisement")
Suggested-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/84432fe4-0be4-bc82-4e5c-557206b40f56@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:55:36 -08:00
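
A hedged sketch of what wiring up the MMD dummy stubs looks like in a
phy_driver entry; genphy_read_mmd_unsupported()/genphy_write_mmd_unsupported()
are real phylib helpers, and the rest of the table is abbreviated.

	#include <linux/phy.h>

	static struct phy_driver gxl_phy_sketch[] = {
		{
			.name		= "Meson GXL Internal PHY (sketch)",
			/* No usable MMD space on this PHY: route MMD accesses
			 * to the dummy stubs so EEE reads report "unsupported"
			 * instead of bogus 0xffff-derived data.
			 */
			.read_mmd	= genphy_read_mmd_unsupported,
			.write_mmd	= genphy_write_mmd_unsupported,
		},
	};
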
Radhey Shyam Pandey c9011b028e net: macb: Perform zynqmp dynamic configuration only for SGMII interface
On zynqmp platforms where the firmware supports dynamic SGMII
configuration but there are other non-SGMII ethernet devices, those
devices fail, with no packets received at the RX interface.

To fix this behaviour, perform the SGMII dynamic configuration only
for the SGMII phy interface.

Fixes: 32cee78181 ("net: macb: Add zynqmp SGMII dynamic configuration support")
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reported-by: Michal Simek <michal.simek@amd.com>
Tested-by: Michal Simek <michal.simek@amd.com>
Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Link: https://lore.kernel.org/r/1675340779-27499-1-git-send-email-radhey.shyam.pandey@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:34:55 -08:00
Jakub Kicinski dfefcb0c04 Merge branch 'devlink-move-devlink-dev-code-to-a-separate-file'
Moshe Shemesh says:

====================
devlink: Move devlink dev code to a separate file

This patchset moves code from the file leftover.c to the new file dev.c.
About 1.3K lines are moved, covering most of the devlink dev object
callbacks and functionality: reload, eswitch, info, flash and selftest.
====================

Link: https://lore.kernel.org/r/1675349226-284034-1-git-send-email-moshe@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:28 -08:00
Moshe Shemesh 7c976c7cfc devlink: Move devlink dev selftest code to dev
Move devlink dev selftest callbacks and related code from leftover.c to
file dev.c. No functional change in this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:26 -08:00
Moshe Shemesh ec4a0ce92e devlink: Move devlink_info_req struct to be local
As all users of the struct devlink_info_req are already in dev.c, move
this struct from devl_internal.c to be local in dev.c.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:26 -08:00
Moshe Shemesh a13aab66cb devlink: Move devlink dev flash code to dev
Move devlink dev flash callbacks, helpers and other related code from
leftover.c to dev.c. No functional change in this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:26 -08:00
Moshe Shemesh d60191c46e devlink: Move devlink dev info code to dev
Move devlink dev info callbacks, related drivers helpers functions and
other related code from leftover.c to dev.c. No functional change in
this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:26 -08:00
Moshe Shemesh af2f8c1f82 devlink: Move devlink dev eswitch code to dev
Move devlink dev eswitch callbacks and related code from leftover.c to
file dev.c. No functional change in this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:25 -08:00
Moshe Shemesh c6ed7d6ef9 devlink: Move devlink dev reload code to dev
Move devlink dev reload callback and related code from leftover.c to
file dev.c. No functional change in this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:25 -08:00
Moshe Shemesh dbeeca81bd devlink: Split out dev get and dump code
Move devlink dev get and dump callbacks and related dev code to new file
dev.c. This file shall include all callbacks that are specific on
devlink dev object.

No functional change in this patch.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:25:25 -08:00
Vladimir Oltean d795527d50 net: dsa: use NL_SET_ERR_MSG_WEAK_MOD() more consistently
Now that commit 028fb19c6b ("netlink: provide an ability to set
default extack message") provides a weak function that doesn't override
an existing extack message provided by the driver, it makes sense to use
it also for LAG and HSR offloading, not just for bridge offloading.

Also consistently put the message string on a separate line, to reduce
line length from 92 to 84 characters.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20230202140354.3158129-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:23:32 -08:00
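
A brief, hedged sketch of the usage pattern: NL_SET_ERR_MSG_WEAK_MOD() is the
real macro, while the driver hook below is hypothetical. The weak variant only
installs the default message when the driver has not already set a more
specific one in extack.

	#include <linux/netlink.h>
	#include <net/dsa.h>

	int driver_lag_join(struct dsa_port *dp,		/* hypothetical hook */
			    struct netlink_ext_ack *extack);

	static int port_lag_join(struct dsa_port *dp, struct netlink_ext_ack *extack)
	{
		int err = driver_lag_join(dp, extack);

		if (err) {
			/* Used only if the driver did not already set a message. */
			NL_SET_ERR_MSG_WEAK_MOD(extack,
						"Offloading not supported");
			return err;
		}
		return 0;
	}
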
Qi Zheng cbe83191d4 bonding: fix error checking in bond_debug_reregister()
Since commit ff9fb72bc0 ("debugfs: return error values,
not NULL") changed the return value of debugfs_rename() in
error cases from %NULL to %ERR_PTR(-ERROR), we should
also check for error values instead of NULL.

Fixes: ff9fb72bc0 ("debugfs: return error values, not NULL")
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Acked-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Link: https://lore.kernel.org/r/20230202093256.32458-1-zhengqi.arch@bytedance.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-03 19:22:20 -08:00
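
A hedged sketch of the corrected check (the bonding specifics are simplified):
after the debugfs change, failure is reported as an ERR_PTR, so test with
IS_ERR() rather than comparing against NULL.

	#include <linux/debugfs.h>
	#include <linux/err.h>
	#include <linux/printk.h>

	/* Simplified: rename a debugfs dir and check the new error semantics. */
	static void debug_reregister_sketch(struct dentry *parent,
					    struct dentry *old, const char *newname)
	{
		struct dentry *d;

		d = debugfs_rename(parent, old, parent, newname);
		if (IS_ERR(d))			/* was: if (d == NULL) */
			pr_warn("failed to rename debugfs dir to %s\n", newname);
	}
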
Andrew Morton a5b21d8d79 revert "squashfs: harden sanity check in squashfs_read_xattr_id_table"
This fix was nacked by Phillip, for reasons identified in the email linked
below.

Link: https://lkml.kernel.org/r/68f15d67-8945-2728-1f17-5b53a80ec52d@squashfs.org.uk
Fixes: 72e544b1b2 ("squashfs: harden sanity check in squashfs_read_xattr_id_table")
Cc: Alexey Khoroshilov <khoroshilov@ispras.ru>
Cc: Fedor Pchelkin <pchelkin@ispras.ru>
Cc: Phillip Lougher <phillip@squashfs.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:25 -08:00
Shiyang Ruan 388bc034d9 fsdax: dax_unshare_iter() should return a valid length
copy_mc_to_kernel() will return 0 if it executed successfully. In that
case, the return value should be set to the length it copied.

[akpm@linux-foundation.org: don't mess up `ret', per Matthew]
Link: https://lkml.kernel.org/r/1675341227-14-1-git-send-email-ruansy.fnst@fujitsu.com
Fixes: d984648e42 ("fsdax,xfs: port unshare to fsdax")
Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:24 -08:00
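
A minimal, hedged fragment of the corrected return handling (not the actual
dax_unshare_iter() body): copy_mc_to_kernel() returns the number of bytes not
copied, so 0 means success and the copied length should be reported.

	#include <linux/uaccess.h>	/* copy_mc_to_kernel() */
	#include <linux/errno.h>

	static long unshare_copy(void *dst, const void *src, size_t length)
	{
		/* Non-zero means a machine check aborted the copy. */
		if (copy_mc_to_kernel(dst, src, length))
			return -EIO;

		return length;		/* success: report the copied length */
	}
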
Kuan-Ying Lee aa1e6a932c mm/gup: add folio to list when folio_isolate_lru() succeed
If we call folio_isolate_lru() successfully, we will get a return value
of 0. We then need to add this folio to the movable_pages_list.

Link: https://lkml.kernel.org/r/20230131063206.28820-1-Kuan-Ying.Lee@mediatek.com
Fixes: 67e139b02d ("mm/gup.c: refactor check_and_migrate_movable_pages()")
Signed-off-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Andrew Yang <andrew.yang@mediatek.com>
Cc: Chinwen Chang <chinwen.chang@mediatek.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:24 -08:00
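
A hedged fragment of the pattern the fix restores, based on the commit
message's description that folio_isolate_lru() returns 0 on success; the
surrounding GUP loop and exact list name are illustrative.

	#include <linux/mm.h>
	#include <linux/swap.h>
	#include <linux/list.h>

	static void maybe_collect_movable(struct folio *folio,
					  struct list_head *movable_pages_list)
	{
		/* 0 == successfully isolated; only then queue for migration. */
		if (folio_isolate_lru(folio) == 0)
			list_add_tail(&folio->lru, movable_pages_list);
	}
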
Seth Jenkins 81e9d6f864 aio: fix mremap after fork null-deref
Commit e4a0d3e720 ("aio: Make it possible to remap aio ring") introduced
a null-deref if mremap is called on an old aio mapping after fork as
mm->ioctx_table will be set to NULL.

[jmoyer@redhat.com: fix 80 column issue]
Link: https://lkml.kernel.org/r/x49sffq4nvg.fsf@segfault.boston.devel.redhat.com
Fixes: e4a0d3e720 ("aio: Make it possible to remap aio ring")
Signed-off-by: Seth Jenkins <sethjenkins@google.com>
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Jann Horn <jannh@google.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:24 -08:00
Alexander Mikhalitsyn ca2b1a5cd1 mailmap: add entry for Alexander Mikhalitsyn
My old email <alexander.mikhalitsyn@virtuozzo.com> isn't working
anymore.

Link: https://lkml.kernel.org/r/20230131123456.192657-1-aleksandr.mikhalitsyn@canonical.com
Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:24 -08:00
Arnd Bergmann 3770e52fd4 mm: extend max struct page size for kmsan
After x86 enabled support for KMSAN, it has become possible to have larger
'struct page' than was expected when commit 5470dea49f ("mm: use
mm_zero_struct_page from SPARC on all 64b architectures") was merged:

include/linux/mm.h:156:10: warning: no case matching constant switch condition '96'
        switch (sizeof(struct page)) {

Extend the maximum accordingly.

Link: https://lkml.kernel.org/r/20230130130739.563628-1-arnd@kernel.org
Fixes: 5470dea49f ("mm: use mm_zero_struct_page from SPARC on all 64b architectures")
Fixes: 4ca8cc8d1b ("x86: kmsan: enable KMSAN builds for x86")
Fixes: f80be4571b ("kmsan: add KMSAN runtime core")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-03 17:52:24 -08:00
Jeff LaBundy 9e69e845ae dt-bindings: input: iqs626a: Redefine trackpad property types
Following a recent refactor of the driver to properly drop unused
device nodes, the driver expects trackpad channel touch thresholds
and ATI base values to be specified under single trackpad channel
child nodes.

This enhancement moves both properties to scalar values as opposed
to arrays, making their types consistent across bindings.

Fixes: a8f1f0dc86 ("dt-bindings: input: Add bindings for Azoteq IQS626A")
Signed-off-by: Jeff LaBundy <jeff@labundy.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/Y9RQcddToBr1rrnJ@nixie71
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2023-02-03 17:21:04 -08:00
Jeff LaBundy 4d3d2694e1 Input: iqs626a - drop unused device node references
Each call to device/fwnode_get_named_child_node() must be matched
with a call to fwnode_handle_put() once the corresponding node is
no longer in use. This ensures a reference count remains balanced
in the case of dynamic device tree support.

Currently, the driver never calls fwnode_handle_put(); this patch
adds the missing calls. Because fwnode_handle_put() does not take
a const *fwnode_handle, the const qualifier is removed across all
corresponding *fwnode_handle instances.

As part of this change, trackpad channel touch thresholds and ATI
base values are now specified under single trackpad channel child
nodes. This enhancement moves both properties to scalar values as
opposed to arrays, making their types consistent across bindings.

Fixes: f1d2809de9 ("Input: Add support for Azoteq IQS626A")
Signed-off-by: Jeff LaBundy <jeff@labundy.com>
Link: https://lore.kernel.org/r/Y9RQVe/V1Hnw1oly@nixie71
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2023-02-03 17:21:04 -08:00