Commit Graph

948892 Commits (94dea151bf3651c01acb12a38ca75ba9d26ea4da)

Author SHA1 Message Date
Yonghong Song 3f9969f2c0 bpf: Fix pos computation for bpf_iter seq_ops->start()
Currently, the pos pointer in bpf iterator map/task/task_file
seq_ops->start() is always incremented.
This is incorrect. It should be incremented only if
*pos is 0 (for SEQ_START_TOKEN), since these start()
functions actually return the first real object.
If *pos is not 0, start() merely re-finds the object
based on the state in seq->private and does not really
advance *pos. This patch fixes the issue
by incrementing *pos only when it is 0.

Note that the old *pos calculation, although not
correct, does not affect the correctness of bpf_iter,
as bpf_iter's seq_file->read() does not support llseek.

This patch also renamed "mid" in bpf_map iterator
seq_file private data to "map_id" for better clarity.
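
A minimal sketch of the corrected start() pattern (a hypothetical
simplification of the map-iterator code, not the literal diff):

	static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
	{
		struct bpf_iter_seq_map_info *info = seq->private;
		struct bpf_map *map;

		map = bpf_map_get_curr_or_next(&info->map_id);
		if (!map)
			return NULL;

		if (*pos == 0)	/* returning the first real object: */
			++*pos;	/* step past SEQ_START_TOKEN exactly once */
		return map;
	}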

Fixes: 6086d29def ("bpf: Add bpf_map iterator")
Fixes: eaaacd2391 ("bpf: Add task and task/file iterator targets")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200722195156.4029817-1-yhs@fb.com
2020-07-25 20:16:32 -07:00
Yonghong Song afbf21dce6 bpf: Support readonly/readwrite buffers in verifier
Readonly and readwrite buffer register states
are introduced. Four states in total are supported:
PTR_TO_RDONLY_BUF[_OR_NULL] and PTR_TO_RDWR_BUF[_OR_NULL].
As their names suggest, PTR_TO_RDONLY_BUF[_OR_NULL] are for
readonly buffers and PTR_TO_RDWR_BUF[_OR_NULL]
for read/write buffers.

These new register states will be used
by the later bpf map element iterator.

The new register states share some similarity with
PTR_TO_TP_BUFFER, as the verifier calculates the accessed
buffer size during verification. The accessed buffer size
is later compared to other metrics at attach/link_create time.

Similar to reg_state PTR_TO_BTF_ID_OR_NULL in bpf
iterator programs, PTR_TO_RDONLY_BUF_OR_NULL or
PTR_TO_RDWR_BUF_OR_NULL reg_types can be set at
prog->aux->bpf_ctx_arg_aux, and bpf verifier will
retrieve the values during btf_ctx_access().
The later bpf map element iterator implementation
will show how such information is assigned
at target registration time.

The verifier is also enhanced such that PTR_TO_RDONLY_BUF
can be passed to ARG_PTR_TO_MEM[_OR_NULL] helper argument, and
PTR_TO_RDWR_BUF can be passed to ARG_PTR_TO_MEM[_OR_NULL] or
ARG_PTR_TO_UNINIT_MEM.
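
The helper-argument rule can be condensed as below (a sketch of the
compatibility logic, not a literal excerpt from the verifier):

	static bool buf_arg_compatible(enum bpf_reg_type rtype,
				       enum bpf_arg_type atype)
	{
		if (rtype == PTR_TO_RDWR_BUF)	/* read/write buffer */
			return atype == ARG_PTR_TO_MEM ||
			       atype == ARG_PTR_TO_MEM_OR_NULL ||
			       atype == ARG_PTR_TO_UNINIT_MEM;
		if (rtype == PTR_TO_RDONLY_BUF)	/* no uninit (write) args */
			return atype == ARG_PTR_TO_MEM ||
			       atype == ARG_PTR_TO_MEM_OR_NULL;
		return false;
	}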

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200723184111.590274-1-yhs@fb.com
2020-07-25 20:16:32 -07:00
Jakub Sitnicki 86176a1821 selftests/bpf: Test BPF socket lookup and reuseport with connections
Cover the case when BPF socket lookup returns a socket that belongs to a
reuseport group, and the reuseport group contains connected UDP sockets.

Ensure that the presence of connected UDP sockets in a reuseport group
does not affect the socket lookup result. The socket selected by
reuseport should always be used as the result in that case.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Link: https://lore.kernel.org/bpf/20200722161720.940831-3-jakub@cloudflare.com
2020-07-25 20:16:32 -07:00
Yonghong Song f9c7927295 bpf: Refactor to provide aux info to bpf_iter_init_seq_priv_t
This patch refactors the target bpf_iter_init_seq_priv_t callback
function to accept additional information. This will be needed
in later patches for map element targets, since a particular
map must be passed in so that the elements of that particular
map can be traversed. In the future, other information may be
passed to the target as well, e.g., pid, cgroup id, etc., to
customize the iterator.
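
The refactored callback signature then carries an aux argument (a
sketch based on the description above):

	typedef int (*bpf_iter_init_seq_priv_t)(void *priv_data,
						struct bpf_iter_aux_info *aux);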

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200723184110.590156-1-yhs@fb.com
2020-07-25 20:16:32 -07:00
Yonghong Song 14fc6bd6b7 bpf: Refactor bpf_iter_reg to have separate seq_info member
There is no functionality change for this patch.
Struct bpf_iter_reg is used to register a bpf_iter target,
which includes information for both prog_load, link_create
and seq_file creation.

This patch puts the fields related to seq_file creation into
a separate structure. This will be useful for the map
element iterator, where one iterator covers different
map types, and different map types may have different
seq_ops, init/fini private_data functions and
private_data sizes.
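
Roughly, the split looks like this (a sketch; field lists abridged):

	struct bpf_iter_seq_info {
		const struct seq_operations *seq_ops;
		bpf_iter_init_seq_priv_t init_seq_private;
		bpf_iter_fini_seq_priv_t fini_seq_private;
		u32 seq_priv_size;
	};

	struct bpf_iter_reg {
		const char *target;
		/* prog_load/link_create related fields elided */
		const struct bpf_iter_seq_info *seq_info;
	};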

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200723184109.590030-1-yhs@fb.com
2020-07-25 20:16:32 -07:00
Jakub Sitnicki c8a2983c4d udp: Don't discard reuseport selection when group has connections
When BPF socket lookup prog selects a socket that belongs to a reuseport
group, and the reuseport group has connected sockets in it, the socket
selected by reuseport will be discarded, and the socket returned by BPF
socket lookup will be used instead.

Modify this behavior so that the socket selected by reuseport running after
BPF socket lookup always gets used. Ignore the fact that the reuseport
group might have connections because it is only relevant when scoring
sockets during regular hashtable-based lookup.
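
A before/after sketch of the affected lookup step (condensed from the
description above, not the literal diff):

	reuse_sk = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
	/* before: if (reuse_sk && !reuseport_has_conns(sk, false)) */
	if (reuse_sk)		/* after: reuseport selection always wins */
		sk = reuse_sk;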

Fixes: 72f7e9440e ("udp: Run SK_LOOKUP BPF program on socket lookup")
Fixes: 6d4201b138 ("udp6: Run SK_LOOKUP BPF program on socket lookup")
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Link: https://lore.kernel.org/bpf/20200722161720.940831-2-jakub@cloudflare.com
2020-07-25 20:16:01 -07:00
Andrii Nakryiko f3c93a93b5 tools/bpftool: Strip BPF .o files before skeleton generation
Strip away DWARF info from .bpf.o files, before generating BPF skeletons.
This reduces bpftool binary size from 3.43MB to 2.58MB.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200722043804.2373298-1-andriin@fb.com
2020-07-25 20:15:32 -07:00
Dan Williams 60d360acdd driver-core: Introduce DEVICE_ATTR_ADMIN_{RO,RW}
A common reason for using plain DEVICE_ATTR() instead of
DEVICE_ATTR_RO() and DEVICE_ATTR_RW() is to limit read access
to root only. I.e. many users of DEVICE_ATTR() are specifying
0400 or 0600 for permissions.

Given the expectation that CAP_SYS_ADMIN is needed to access these
sensitive attributes, add explicit helpers with the _ADMIN_ identifier:
DEVICE_ATTR_ADMIN_{RO,RW}.
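
A sketch of what the new helpers might look like, assuming
__ATTR_RO_MODE()/__ATTR_RW_MODE() style macros are available:

	/* root-only 0400/0600 instead of the usual 0444/0644 */
	#define DEVICE_ATTR_ADMIN_RO(_name) \
		struct device_attribute dev_attr_##_name = \
			__ATTR_RO_MODE(_name, 0400)
	#define DEVICE_ATTR_ADMIN_RW(_name) \
		struct device_attribute dev_attr_##_name = \
			__ATTR_RW_MODE(_name, 0600)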

Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:48 -06:00
Dan Williams 916566ae78 tools/testing/nvdimm: Emulate firmware activation commands
Augment the existing firmware update emulation to track activations and
validate proper update vs activate sequencing.

The DIMM firmware activate capability has a concept of a maximum amount
of time platform firmware will quiesce the system relative to how many
DIMMs are being activated in parallel. Simulate that DIMM activation
happens serially, 1 second per-DIMM, and limit the max at 3 seconds. The
nfit_test0 bus emulates 5 DIMMs so it will take 2 activations to update
all DIMMs.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Reported-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:48 -06:00
Dan Williams abfd4d9c82 tools/testing/nvdimm: Prepare nfit_ctl_test() for ND_CMD_CALL emulation
In preparation for adding a mocked implementation of the
firmware-activate bus-info command, rework nfit_ctl_test() to operate on
a local command payload wrapped in a 'struct nd_cmd_pkg'.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
Dan Williams 0d47c4dfe5 tools/testing/nvdimm: Add command debug messages
Arrange for the nfit_test_ctl() path to dump command payloads similarly
to the acpi_nfit_ctl() path. This is useful for comparing the
sequence of command events between an emulated ACPI-NFIT platform and a
real one.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
Dan Williams 24770658dc tools/testing/nvdimm: Cleanup dimm index passing
The ND_CMD_CALL path only applies to the nfit_test0 emulated DIMMs.
Clean up occurrences of (i - t->dcr_idx), since that offset fixup only
applies to cases where nfit_test1 needs a bus-local index.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
Dan Williams 6450ddbd5d ACPI: NFIT: Define runtime firmware activation commands
Platform reboots are expensive. Towards reducing downtime to apply
firmware updates the Intel NVDIMM command definition is growing support
for applying live firmware updates that only require temporarily
suspending memory traffic instead of a full reboot.

Follow-on commits add support for triggering firmware activation; this
patch only defines the commands, adds probe support, and validates that
they are blocked via the ioctl path. The ioctl-path block ensures that
the OS is in charge since these commands have side effects only the OS
can handle. Specifically firmware activation may cause the memory
controller to be quiesced on the order of 100s of milliseconds. In that
case Linux ensures the activation only takes place while the OS is in a
suspend state.

Link: https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
Dan Williams d46e6a2176 ACPI: NFIT: Move bus_dsm_mask out of generic nvdimm_bus_descriptor
DSMs are strictly an ACPI mechanism; evict the bus_dsm_mask concept from
the generic 'struct nvdimm_bus_descriptor' object.

As a side effect the test facility ->bus_nfit_cmd_force_en is no longer
necessary. The test infrastructure can communicate that information
directly in ->bus_dsm_mask.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
Dan Williams 92fe2aa859 libnvdimm: Validate command family indices
The ND_CMD_CALL format allows for a general passthrough of passlisted
commands targeting a given command set. However there is no validation
of the family index relative to what the bus supports.

- Update the NFIT bus implementation (the only one that supports
  ND_CMD_CALL passthrough) to also passlist the valid set of command
  family indices.

- Update the generic __nd_ioctl() path to validate that field on behalf
  of all implementations.
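
A sketch of the generic check described above (names approximate;
assuming the valid families are tracked as a bitmap in the bus
descriptor):

	if (cmd == ND_CMD_CALL) {
		struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *)buf;

		/* reject families the bus did not passlist */
		if (!test_bit(pkg->nd_family, &nd_desc->dimm_family_mask))
			return -EINVAL;
	}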

Fixes: 31eca76ba2 ("nfit, libnvdimm: limited/whitelisted dimm command marshaling mechanism")
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
2020-07-25 19:34:47 -06:00
David S. Miller a57066b1a0 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
The UDP reuseport conflict was a little bit tricky.

The net-next code, via bpf-next, extracted the reuseport handling
into a helper so that the BPF sk lookup code could invoke it.

At the same time, the logic for reuseport handling of unconnected
sockets changed via commit efc6b6f6c3,
which carries the reuseport result into the
rest of the lookup loop if we do not return immediately.

This requires moving the reuseport_has_conns() logic into the callers.

While we are here, get rid of inline directives as they do not belong
in foo.c files.

The other changes were cases of more straightforward overlapping
modifications.

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-25 17:49:04 -07:00
Linus Torvalds 04300d66f0 RISC-V Fixes for 5.8-rc7
I have a few more fixes this week:
 
 * A fix to avoid using SBI calls during kasan initialization, as the SBI calls
   themselves have not been probed yet.
 * Three fixes related to systems with multiple memory regions.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEKzw3R0RoQ7JKlDp6LhMZ81+7GIkFAl8cebwTHHBhbG1lckBk
 YWJiZWx0LmNvbQAKCRAuExnzX7sYiV4UEACunm3LQGjjZUbHX/1/X2iAdNBcnbiZ
 SFSXaKuAiDzSgT0//UMloo33eywo3afGCTOCysi3zrwNU+Hbj5l9byTMnvXy9hGv
 QCFf2aEma9Ebk9ImdLDjkHQy2e1SiQRHQiKQcGTtP4LN470rrysh0a8qLLVAGucJ
 j5fDPcy9smRw4x0lGHqLXDmVIm8nQNJAj6iKVQYzA2mSJC2+RoKcPcunnXvoOUwl
 q/lcp41BavZyYsDIoAblRCA1/sfFL9BfocL4fUBFh4HUp/qcqwsGPv+U5QoEzkQ8
 9LLLuXnHXgU5BTv4zVCUqXjMza6fgZj+qMpxXLQig0DRBetuQY2gpCdt2JAkcXTj
 XvkHavxkA6206myl9xY6OJueC/25I6dp0eDENBpUNET8fqbeFBhIUCuNtAzT+98J
 OcMImyFQgoesGU+6dkdHr6tSJHDcI3PzXlVUNH3hkIQq5Ny10g+jjAInZo6iXfpW
 yBu+AzvippbD6C8/L+ntP94y9F/Me5rZKWMBhT2gTLm6JLrtC/Wl+ol2fZF8gmIU
 wdT6TUykDY6u6bdqPSRKISzy0HfuREQNJU3qqoI496BMWQ1DaQjHB1482fdtdELI
 KUFkM6u9iudWu5Dap45rSCjeZu01cLqnQqO5M3LYUrCy0D02j5SiGfUwRYo+eD0S
 me68CtmbUPsH7Q==
 =FG/F
 -----END PGP SIGNATURE-----

Merge tag 'riscv-for-linus-5.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux into master

Pull RISC-V fixes from Palmer Dabbelt:
 "A few more fixes this week:

   - A fix to avoid using SBI calls during kasan initialization, as the
     SBI calls themselves have not been probed yet.

   - Three fixes related to systems with multiple memory regions"

* tag 'riscv-for-linus-5.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: Parse all memory blocks to remove unusable memory
  RISC-V: Do not rely on initrd_start/end computed during early dt parsing
  RISC-V: Set maximum number of mapped pages correctly
  riscv: kasan: use local_tlb_flush_all() to avoid uninitialized __sbi_rfence
2020-07-25 14:42:11 -07:00
Linus Torvalds fbe0d451bc Misc fixes:
  - Fix a section end page alignment assumption that was causing crashes
 
  - Fix ORC unwinding on freshly forked tasks which haven't executed yet
    and which have empty user task stacks
 
  - Fix the debug.exception-trace=1 sysctl dumping of user stacks, which
    was broken by recent maccess changes
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl8cGlwRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1hVGxAAhW5LoNLv6BsMtKrFhBJRzPhoO8YcB8ep
 Z/MaaiEcS8kcSVx1nQrNNhqzwIGBlexHTyoddYQr/LeM3ahOoXc4+jZI7Es6WKdc
 Rhofalp0pryJrqj52orAVAUeHikmiqmadANKKmUNtXcjWWoxexgab28gLwJPPA55
 SKw1/tqQ6JTf/M6tLnalib/VcigN7F/r797LgqLan9pSkjMGj6f1zMFmqTsNDi05
 Si5gqRv2GlAraelRQU7emMZGu1bXhfJvGix7BB1/u6nRimY15vIFtccgt4akJmFN
 tBUwNHFeLrYsoX1txjMTtGzASBqtjaD6A1K2geqmuZ1lRL1YuGnUl7n4ccgAvQdJ
 eY6iiLiXDcjZLZXYe5tfdV/4p3HZ3MYm/IRkaw2KfiGAIv0Gii/U0mShO+lX1PAZ
 BdelgJv2fcOxDiNe0RM4EniBXQUcqDIOpQ1fPryYsFLeeA1LCK2/b8WLOPHuK8YG
 oxAgU9vV7OA7nfBdxIPszvX1jXOiQPglsW2gfxlKbGAA9j3uKDGhGnmFFYatYUMF
 OEOf/5iuw3MZE/+gAG685ksk4ibgW4Kv6ahAEdzs5hh37uM8SiC3oWVWQf2YWGHG
 ITswqKY2oUqghWa3ftXEtPLKstsYbRG7KCRfBXh6AN2ay7JrDJg6sItCL4UI5BjK
 dkDJ4pKLXug=
 =oamN
 -----END PGP SIGNATURE-----

Merge tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master

Pull x86 fixes from Ingo Molnar:
 "Misc fixes:

   - Fix a section end page alignment assumption that was causing
     crashes

   - Fix ORC unwinding on freshly forked tasks which haven't executed
     yet and which have empty user task stacks

   - Fix the debug.exception-trace=1 sysctl dumping of user stacks,
     which was broken by recent maccess changes"

* tag 'x86-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/dumpstack: Dump user space code correctly again
  x86/stacktrace: Fix reliable check for empty user task stacks
  x86/unwind/orc: Fix ORC for newly forked tasks
  x86, vmlinux.lds: Page-align end of ..page_aligned sections
2020-07-25 14:25:47 -07:00
Linus Torvalds 78b1afe22d Fix an interaction/regression between uprobes based shared library tracing & GDB.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl8cHqwRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1iyTxAAghbrqeeT6V4lj8xfBif5BOvexEPDaur5
 2OPXRMLg4YWYCCC8Wwdj43Nnz3IuG5A75acyXkxmqLgvIu4QE7EAm8CYuyy0LWZa
 NK7A8v5fbDOKJJL/WdSZ/xoFP27KLgE4u0gaeBBOp9Chib/bWo6h63jxsOpbQybE
 u5u6x4uAm90RKRocYnfHaMXxI8kDVnRRdfwmVjAcy3dbu3uth02vdfqlmXrmPh1A
 kpqH/EnIlw9DrJCGOdDJPCmCTjcj5xYyy3qA7C6DkY3X+oe4AYtMWnfCPOgV0k7v
 xGq0j9jIa0kynHQsZxjX28CObuumMrh6ZbI7/UfE8W8yccpUQKkzTWKhqzb/9Wc7
 E5SivCiykAhUPCbhcwZIihW6nwjWSF54Z1UCxi/PA1RMRTe89e96881Xfgh/huW7
 TCHnK9lRyWAP+ceJcj72BEobRElfsHyeTbd0xwS/a+LBCftxp/r8Fm1DHrrBaGtL
 WE/lHXxgRR2b/ieAVhlEUNwd7tn4YBjcad8WstZTYIZHMxtmPvUKjaDhMuWNclIH
 QtjRYy1vA0gAf2Lbs3EP52sTZXyxlksftkwlca0z8VF5SezDER0R4V4EgQqIa8zC
 mLr7jmnKqEeOp6vkdhTSnDHYNHuVw4OpGCY+28vh17b/z6MFNokAq5Bd/fV07oKg
 nassUOSteTw=
 =eCSv
 -----END PGP SIGNATURE-----

Merge tag 'perf-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master

Pull uprobe fix from Ingo Molnar:
 "Fix an interaction/regression between uprobes based shared library
  tracing & GDB"

* tag 'perf-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  uprobes: Change handle_swbp() to send SIGTRAP with si_code=SI_KERNEL, to fix GDB regression
2020-07-25 13:55:38 -07:00
Linus Torvalds a7b36c2b13 Fix a suspend/resume regression (crash) on TI AM3/AM4 SoC's.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl8cDeIRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1gVeA/8Cs67mJ9j3oto917z1GLElBVy7A+od/Pn
 TqWEtwmonJA/ZM/TvuvfFxHFg6kgPlDx0x5pHYUows/8DGsftdveDQw4bGhrCN0i
 Xghcn4gB+jdOaykpLQacEukYt/nPU0IOf79XqySiu774lWHvzwQI+2MIb7xBHBMO
 npUXFzgeSlgjFkMm1lVa9QVPwVc1zQ6gKPJdLpDlrgW3cCrenM7QCAg3NG3Gx5TT
 OXK4QZEvIuym+HWwbZRA2mOUSjm8U1Z1o95jiWUro7o5Xe+Agqi5zsLJc7lxX57+
 LYwrXNx/zPZ+oBlzZhZZrAxn1bJP7DVS13ZCEL9Miy0Uslr7QBPyoGHiV4P8tL3v
 cOn7kY+P7YT96Fx3ahkK3pJN+nUZIAsRrwZbgOP4bZHKdVi3zqUEilSdZNcXBZ9y
 nj9mY/2pX9SOMIvoaosku2hkKs5EKIkh3qzhvbfA8fXsQC3lttrPGRW7hgPmxm7W
 Fa+6liOi+t+d0rGgmi1fhHSn/L2gzmsLy5R1l6XyK15nDSALd4TrdtDrlJ8vbbl5
 oRAN9mt9qPqNCQQKjcnAIXuPeJp2Pt6KphlPApHVa3J/eVzug1dFWyjia/e+js8W
 Gejz1TV6SDdU137TmC4E6suAZ5nRSEBwiE+MiNnnG0DIvz7cp5FSIWCXOgt5WPl5
 BXS4CSwnF24=
 =7P/h
 -----END PGP SIGNATURE-----

Merge tag 'timers-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master

Pull timer fix from Ingo Molnar:
 "Fix a suspend/resume regression (crash) on TI AM3/AM4 SoC's"

* tag 'timers-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource/drivers/timer-ti-dm: Fix suspend and resume for am3 and am4
2020-07-25 13:27:12 -07:00
Linus Torvalds 3077805eeb Fix a race introduced by the recent loadavg race fix, plus add a
debug check for a hard to debug case of bogus wakeup function flags.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl8cDFkRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1h2Hw//WW67DawJjUVoz7Hwjgf02ECgY1XspmoW
 5oJjtXFU6pvOThL4U8qcb1HGOtWV/MqFajv25RX4lIyaxBo8AD5mazVQCwOc3s2C
 nR9ZM2kYRgSKbrSp2EOoGYCTw76do9y8beahZ5btvSkjNzD1YgDJwwcuGeMZwp3w
 cFg8GeeUbLRFcsVVMx+apirg2KR9rUp1jqjVdjsln8C/VngjWPYFdktgAglcXfry
 5Op2SOkYUw4gw/vX8jm6YyyxtqZHbH5lsDN7zk12W3Xwqh9tH2ounk+ehzFlT+cY
 HD8PvB39cT4UYtF7R0R7W6cYuLJkbF6XHpRhqBvwd4M7u6yzABp5S2W9ZfZPjmQR
 nNTV2MBK9LUWTtMhXby7w7ooooSFqRydEUpEfTrmD8xYmMGYMmNkjpgErXKtIyk+
 a124ouvP8pwmI0WxA9w8rPKAJnuN0TN60RvtEE4RpxvaL4Os79QcTObhwuxdarxm
 8nJvQRqts0O3+i9dWF8hS9ZFWkQZfX9GfiEK6zR2qaqjxtVwvagc58XLBcRBd9pn
 CI8JvJ1pZ8AFC6dy/szZ/6awrEJzynfeInYXnVX8UQ7Cn+bBQdTAlgXopLY58ybY
 RKTIq4/Ov9D7g3RggoK+aEjryxJflN0xaz9Dus8fvveIZuFHtsra0u7FM3ZHz38x
 rmP1A6Dc3Yc=
 =XPX3
 -----END PGP SIGNATURE-----

Merge tag 'sched-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master

Pull scheduler fixes from Ingo Molnar:
 "Fix a race introduced by the recent loadavg race fix, plus add a debug
  check for a hard to debug case of bogus wakeup function flags"

* tag 'sched-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Warn if garbage is passed to default_wake_function()
  sched: Fix race against ptrace_freeze_trace()
2020-07-25 13:24:40 -07:00
Linus Torvalds 17baa44286 Various EFI fixes:
   - Fix the layering violation in the use of the EFI runtime services
     availability mask in users of the 'efivars' abstraction
 
   - Revert build fix for GCC v4.8 which is no longer supported
 
   - Clean up some x86 EFI stub details, some of which are borderline bugs
     that copy around garbage into padding fields - let's fix these
     out of caution.
 
   - Fix build issues while working on RISC-V support
 
   - Avoid --whole-archive when linking the stub on arm64
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl8cCIkRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1hGsxAAsPwL5ghYykViUQId0OjNmI7eRgISKRso
 3n3BaFmgYMp9eq1gndzPp7A5Ty0NPsr8FiwuZ7setk9YoLuTUT1MHVtAzd6xkxlR
 838CwTDvW5HvB69uxPQDHA/1mcH1smH4Iew/J7QXP+o6Zrg+BWOjKNtTiKFwawNC
 m4Tkdvvq52wzykqbeuRhXxLetKFOH//R1V0s4M6nNySuY6gQGJQ2LPyaiMN6eq1V
 LE8+wGcNlIRUOeC8RkEA7CE9g92jkGZ07uJDA09OP0J5WBNWLcxMJM2mBZ1Ho6uc
 sNNOeTy76sjuXQvUBWCnjBr3/qqXnVGSuIq8NS1hS4L7HagdQ/peqFJRzvWBHT90
 b9wUZv09ioFm9lb6/P6NL16sn/WPknCD7umxpfp5HKrlL2p7puvsvDXtuWgyhhPG
 M+X9ZX1iaA54cA9bU6cXFzrNMw/DnYjHFsECF915EXjItNZToGs0rd1lf7ArgH2P
 +3HgvLods73ufObKH2pUfY7EU2Ly1oJsNpK3RmpoOUzehW+++S80KdytkaAB10kT
 dKp9LlTj6gC+lnC45J9NtpHlCodz/Rc0lBHQpxIqlk/p9grUuH4zn714Ii/FfNQg
 i29GX9cCdgv+4KzmJCNTqyZvGFZ5m3K8f41x7iD+/ygqPLsP9PfV5RGRGau+FfL8
 ezInhKdITqM=
 =n3nF
 -----END PGP SIGNATURE-----

Merge tag 'efi-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master

Pull EFI fixes from Ingo Molnar:
 "Various EFI fixes:

   - Fix the layering violation in the use of the EFI runtime services
     availability mask in users of the 'efivars' abstraction

   - Revert build fix for GCC v4.8 which is no longer supported

   - Clean up some x86 EFI stub details, some of which are borderline
     bugs that copy around garbage into padding fields - let's fix these
     out of caution.

   - Fix build issues while working on RISC-V support

   - Avoid --whole-archive when linking the stub on arm64"

* tag 'efi-urgent-2020-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  efi: Revert "efi/x86: Fix build with gcc 4"
  efi/efivars: Expose RT service availability via efivars abstraction
  efi/libstub: Move the function prototypes to header file
  efi/libstub: Fix gcc error around __umoddi3 for 32 bit builds
  efi/libstub/arm64: link stub lib.a conditionally
  efi/x86: Only copy upto the end of setup_header
  efi/x86: Remove unused variables
2020-07-25 13:18:42 -07:00
Linus Torvalds 7cb3a5c5f6 Fix for a recently discovered regression in rename to older servers caused by a recent patch
-----BEGIN PGP SIGNATURE-----
 
 iQGzBAABCgAdFiEE6fsu8pdIjtWE/DpLiiy9cAdyT1EFAl8bmiQACgkQiiy9cAdy
 T1EfFwv+IeD4kNshoBF5NT//+7j+9n+R8z74v5bJUEmoabZMuCaWuYl/uUcBqIBH
 l29Xd60aF8V2jd3xIt/SEPNFOAvvBSQIjn8X52Lkquo+QzgiOElSUL8aNZe17T9L
 N1S+nhwk2udlveBtWZRxHT50WNdKQCGKEwUDn9ouUbuKxLrcwWc1gRCxkegmJXey
 NRaynzdCRYFx7uLdctvesTDB4P9MJn+5vX7L1BdI7MxXzsSEI1yHpM5OBL9e13ki
 swZ7P16qyCA6TK8EYuYLCeXKEK0X+wGeu/y8JZ7RjO9ozqrvECvguumNbcUCs+zf
 vLWQkaeC2M1L5L4WV71sY5Pi7CUGw9WGaU/FBJUK65+hYqUWETt351BwbIRwVlrx
 mX7C7Dh50AM84/54u169Sk1p/S8vjdvkmx8YyRWK+t7kmZx6TP7XKXvYN1vEHS12
 Y5ceRKmNHLdRqS3nY6FK7TKdgVWK2hgNODDdVbUZ+iPoJztSmH7j2RkTzxJz6qBi
 vzBAPJiG
 =Cvv4
 -----END PGP SIGNATURE-----

Merge tag '5.8-rc6-cifs-fix' of git://git.samba.org/sfrench/cifs-2.6 into master

Pull cifs fix from Steve French:
 "A fix for a recently discovered regression in rename to older servers
  caused by a recent patch"

* tag '5.8-rc6-cifs-fix' of git://git.samba.org/sfrench/cifs-2.6:
  Revert "cifs: Fix the target file was deleted when rename failed."
2020-07-25 12:53:46 -07:00
Ingo Molnar c84d53051f Linux 5.8-rc6
-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAl8UzA4eHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGQ7cH/3v+Gv+SmHJCvaT2
 CSu0+7okVnYbY3UTb3hykk7/aOqb6284KjxR03r0CWFzsEsZVhC5pvvruASSiMQg
 Pi04sLqv6CsGLHd1n+pl4AUYEaxq6k4KS3uU3HHSWxrahDDApQoRUx2F8lpOxyj8
 RiwnoO60IMPA7IFJqzcZuFqsgdxqiiYvnzT461KX8Mrw6fyMXeR2KAj2NwMX8dZN
 At21Sf8+LSoh6q2HnugfiUd/jR10XbfxIIx2lXgIinb15GXgWydEQVrDJ7cUV7ix
 Jd0S+dtOtp+lWtFHDoyjjqqsMV7+G8i/rFNZoxSkyZqsUTaKzaR6JD3moSyoYZgG
 0+eXO4A=
 =9EpR
 -----END PGP SIGNATURE-----

Merge tag 'v5.8-rc6' into locking/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-25 21:49:36 +02:00
Chris Wilson a7ef9b28aa locking/lockdep: Fix overflow in presentation of average lock-time
Though the number of lock-acquisitions is tracked as unsigned long, this
is passed as the divisor to div_s64(), which interprets it as an s32,
giving nonsense values with more than 2 billion acquisitions. E.g.

  acquisitions   holdtime-min   holdtime-max holdtime-total   holdtime-avg
  -------------------------------------------------------------------------
    2350439395           0.07         353.38   649647067.36          0.-32
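
The truncation is plain integer arithmetic and can be reproduced in
userspace (an illustration using the numbers above, expressed in
0.01us units; not kernel code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t acquisitions = 2350439395ULL;	/* > INT32_MAX */
		int64_t total = 64964706736LL;	/* holdtime, 0.01us units */

		/* div_s64() takes an s32 divisor, so the count truncates: */
		int32_t div = (int32_t)acquisitions;	/* -1944527901 */
		printf("bogus avg:   %lld\n", (long long)(total / div));

		/* dividing by the full 64-bit count gives a sane answer: */
		printf("correct avg: %lld\n",
		       (long long)(total / (int64_t)acquisitions));
		return 0;
	}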

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20200725185110.11588-1-chris@chris-wilson.co.uk
2020-07-25 21:47:42 +02:00
Linus Torvalds 1b64b2e244 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net into master
Pull networking fixes from David Miller:

 1) Fix RCU locking in iwlwifi, from Johannes Berg.

 2) mt76 can access uninitialized NAPI struct, from Felix Fietkau.

 3) Fix race in updating pause settings in bnxt_en, from Vasundhara
    Volam.

 4) Propagate error return properly during unbind failures in ax88172a,
    from George Kennedy.

 5) Fix memleak in adf7242_probe, from Liu Jian.

 6) smc_drv_probe() can leak, from Wang Hai.

 7) Don't muck with the carrier state if register_netdevice() fails in
    the bonding driver, from Taehee Yoo.

 8) Fix memleak in dpaa_eth_probe, from Liu Jian.

 9) Need to check skb_put_padto() return value in hsr_fill_tag(), from
    Murali Karicheri.

10) Don't lose ionic RSS hash settings across FW update, from Shannon
    Nelson.

11) Fix clobbered SKB control block in act_ct, from Wen Xu.

12) Missing newline in "tx_timeout" sysfs output, from Xiongfeng Wang.

13) An IS_UDPLITE cleanup a long time ago incorrectly handled
    transformations involving UDPLITE_RECV_CC. From Miaohe Lin.

14) Unbalanced locking in netdevsim, from Taehee Yoo.

15) Suppress false-positive error messages in qed driver, from Alexander
    Lobakin.

16) Out of bounds read in ax25_connect and ax25_sendmsg, from Peilin Ye.

17) Missing SKB release in cxgb4's uld_send(), from Navid Emamdoost.

18) Uninitialized value in geneve_changelink(), from Cong Wang.

19) Fix deadlock in xen-netfront, from Andrea Righi.

20) flush_backlog() frees skbs with IRQs disabled, so should use
    dev_kfree_skb_irq() instead of kfree_skb(). From Subash Abhinov
    Kasiviswanathan.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (111 commits)
  drivers/net/wan: lapb: Corrected the usage of skb_cow
  dev: Defer free of skbs in flush_backlog
  qrtr: orphan socket in qrtr_release()
  xen-netfront: fix potential deadlock in xennet_remove()
  flow_offload: Move rhashtable inclusion to the source file
  geneve: fix an uninitialized value in geneve_changelink()
  bonding: check return value of register_netdevice() in bond_newlink()
  tcp: allow at most one TLP probe per flight
  AX.25: Prevent integer overflows in connect and sendmsg
  cxgb4: add missing release on skb in uld_send()
  net: atlantic: fix PTP on AQC10X
  AX.25: Prevent out-of-bounds read in ax25_sendmsg()
  sctp: shrink stream outq when fails to do addstream reconf
  sctp: shrink stream outq only when new outcnt < old outcnt
  AX.25: Fix out-of-bounds read in ax25_connect()
  enetc: Remove the mdio bus on PF probe bailout
  net: ethernet: ti: add NETIF_F_HW_TC hw feature flag for taprio offload
  net: ethernet: ave: Fix error returns in ave_init
  drivers/net/wan/x25_asy: Fix to make it work
  ipvs: fix the connection sync failed in some cases
  ...
2020-07-25 11:50:59 -07:00
Pavel Begunkov b089ed390b io-wq: update hash bits
Linked requests are hashed; remove a comment stating otherwise. Also
move the hash-bits assignment to emphasise that we don't carry it across
loop iterations but set it every time.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 09:47:44 -06:00
Pavel Begunkov f063c5477e io_uring: fix missing io_queue_linked_timeout()
Whoever called io_prep_linked_timeout() should also do
io_queue_linked_timeout(). __io_queue_sqe() doesn't follow that for the
punting path leaving linked timeouts prepared but never queued.

Fixes: 6df1db6b54 ("io_uring: fix mis-refcounting linked timeouts")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 09:47:44 -06:00
Pavel Begunkov b65e0dd6a2 io_uring: mark ->work uninitialised after cleanup
Remove REQ_F_WORK_INITIALIZED after io_req_clean_work(). That's a cold
path but is safer for those using io_req_clean_work() out of
*dismantle_req()/*io_free(). And for the same reason, zero work.fs.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 09:47:44 -06:00
Hayato Ohhashi 898ec52d2b x86/xen/time: Set the X86_FEATURE_TSC_KNOWN_FREQ flag in xen_tsc_khz()
If the TSC frequency is known from the pvclock page,
it does not need to be recalibrated.

We can avoid recalibration by setting X86_FEATURE_TSC_KNOWN_FREQ.
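
The gist of the change (a sketch; surrounding code simplified):

	static unsigned long xen_tsc_khz(void)
	{
		struct pvclock_vcpu_time_info *info =
			&HYPERVISOR_shared_info->vcpu_info[0].time;

		/* pvclock supplies the frequency: skip recalibration */
		setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
		return pvclock_tsc_khz(info);
	}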

Signed-off-by: Hayato Ohhashi <o8@vmm.dev>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20200721161231.6019-1-o8@vmm.dev
2020-07-25 15:49:41 +02:00
Coly Li a2f32ee8fd bcache: fix bio_{start,end}_io_acct with proper device
Commit 85750aeb74 ("bcache: use bio_{start,end}_io_acct") moves the
I/O accounting code to the location after bio_set_dev(bio, dc->bdev) in
cached_dev_make_request(). As a result the accounting is performed
incorrectly on the backing device; the I/O should instead be counted
against the bcache device, e.g. /dev/bcache0.

With the mistaken I/O accounting, iostat does not display I/O counts for
the bcache device and all the numbers go to the backing device. In
writeback mode, the hard drive may show 340K+ IOPS, which is impossible
and clearly wrong for a spinning disk.

This patch introduces bch_bio_start_io_acct() and bch_bio_end_io_acct(),
which switch bio->bi_disk to the bcache device before calling
bio_start_io_acct() or bio_end_io_acct(). Now the I/Os are counted
against the bcache device, and the bcache device, cache device and
backing device all have their correct I/O count information back.
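
One plausible shape for the start helper, per the description above
(the save/restore of the original disk is an assumption of this
sketch, not confirmed by the changelog):

	static unsigned long bch_bio_start_io_acct(struct gendisk *disk,
						   struct bio *bio)
	{
		struct gendisk *orig = bio->bi_disk;
		unsigned long start_time;

		bio->bi_disk = disk;	/* account against /dev/bcacheN */
		start_time = bio_start_io_acct(bio);
		bio->bi_disk = orig;	/* restore for actual submission */

		return start_time;
	}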

Fixes: 85750aeb74 ("bcache: use bio_{start,end}_io_acct")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:21 -06:00
Coly Li 4e4d4e0962 bcache: avoid extra memory consumption in struct bbio for large bucket size
Bcache uses struct bbio to do I/Os for meta data pages like uuids,
disk_buckets, prio_buckets, and btree nodes.

For example, when writing a btree node onto the cache device, the
process is:
- Allocate a struct bbio from mempool c->bio_meta.
- Inside struct bbio a struct bio is embedded; initialize bi_inline_vecs
  for this embedded bio.
- Call bch_bio_map() to map each meta data page to each bv from the
  inline bi_io_vec table.
- Call bch_submit_bbio() to submit the bio into the underlying block
  layer.
- When the I/O completes, only the struct bbio is released; the
  reference counters of the meta data pages are not touched.

The struct bbio is defined as,
738 struct bbio {
739     unsigned int            submit_time_us;
	[snipped]
748     struct bio              bio;
749 };

Because struct bio is embedded at the end of struct bbio, the
actual size of struct bbio is sizeof(struct bio) plus the size of
the embedded bio->bi_inline_vecs.

Now all the meta data buckets are limited to meta_bucket_pages(); if
the bucket size is larger than meta_bucket_pages()*PAGE_SECTORS, the
rest of the space in the bucket is unused. Therefore the most space
ever used in a meta bucket is (1<<MAX_ORDER) pages, or
(1<<CONFIG_FORCE_MAX_ZONEORDER) pages if that is configured.

Therefore for large bucket size, it is unnecessary to calculate the
allocation size of mempool c->bio_meta as,
	mempool_init_kmalloc_pool(&c->bio_meta, 2,
			sizeof(struct bbio) +
			sizeof(struct bio_vec) * bucket_pages(c))
That is too large: the Linux buddy allocator cannot allocate that many
continuous pages, and the extra allocated pages would simply be wasted.

This patch replaces bucket_pages() with meta_bucket_pages() in two
places:
- In bch_cache_set_alloc(), when initializing mempool c->bio_meta, use
  sizeof(struct bbio) + sizeof(struct bio_vec) * meta_bucket_pages(c)
  to set the allocation object size.
- In bch_bbio_alloc(), when calling bio_init() to set the inline bvec
  table bi_inline_vecs, use meta_bucket_pages() to indicate the number
  of inline bio vecs.

Now the maximum size of embedded bio inside struct bbio exactly matches
the limit of meta_bucket_pages(), no extra page wasted.
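
With the fix, the mempool sizing in bch_cache_set_alloc() then looks
roughly like this (a sketch; the exact argument form of
meta_bucket_pages() is assumed):

	mempool_init_kmalloc_pool(&c->bio_meta, 2,
			sizeof(struct bbio) +
			sizeof(struct bio_vec) * meta_bucket_pages(&c->sb))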

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:21 -06:00
Coly Li 6907dc498f bcache: avoid extra memory allocation from mempool c->fill_iter
Mempool c->fill_iter is used to allocate memory for struct btree_iter in
bch_btree_node_read_done() to iterate all keys of a read-in btree node.

The allocation size is defined in bch_cache_set_alloc() by,
  mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
where iter_size is defined by a calculation,
  (sb->bucket_size / sb->block_size + 1) * sizeof(struct btree_iter_set)

For a 16bit-wide bucket_size the calculation is OK, but now the bucket
size is extended to 32bit and can be as large as 2GB. By the above
calculation, iter_size can be 2048 pages (order 11 is still accepted by
the buddy allocator).

But the actual size that holds the bkeys in a meta data bucket is
already limited to meta_bucket_pages(), which is 16MB. By the above
calculation, if sb->bucket_size is replaced by meta_bucket_pages() *
PAGE_SECTORS, the result is 16 pages. This size is large enough for
the mempool allocation of struct btree_iter.

Therefore, in the worst case, every time mempool c->fill_iter
allocates, up to 4080 pages are wasted and never used. This patch uses
meta_bucket_pages() * PAGE_SECTORS to calculate the iter size in
bch_cache_set_alloc(), to avoid extra memory allocation from mempool
c->fill_iter.
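
The iter_size calculation then becomes roughly (a sketch under the
description above):

	iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) /
		     sb->block_size + 1) * sizeof(struct btree_iter_set);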

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:21 -06:00
Coly Li 092bd54d69 bcache: add sysfs file to display feature sets information of cache set
The following three sysfs files are created by this patch to display
the feature set information of the cache set:
	/sys/fs/bcache/<cache set UUID>/internal/feature_compat
	/sys/fs/bcache/<cache set UUID>/internal/feature_ro_compat
	/sys/fs/bcache/<cache set UUID>/internal/feature_incompat

Currently only one incompat feature, 'large_bucket', exists in bcache,
so the sysfs file content is:
        [large_bucket]
The string large_bucket means the running bcache driver supports the
incompat feature 'large_bucket'; the wrapping [] means the feature is
currently enabled on this cache set.

This patch is also ready to display compat and ro_compat features; in
the future, once bcache implements such feature sets, the corresponding
feature strings will be displayed in their sysfs files too.

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:21 -06:00
Coly Li ffa4703275 bcache: add bucket_size_hi into struct cache_sb_disk for large bucket
The large bucket feature is to extend bucket_size from 16bit to 32bit.

When creating a cache device on a zoned device (e.g. a zoned NVMe SSD),
making a single bucket cover one or more zones of the zoned device is
the simplest way for bcache to support a zoned device as cache.

But the current maximum bucket size is 16MB while a typical zone size
of a zoned device is 256MB; this is the major motivation to extend the
bucket size to a larger bit width.

This patch is the basic and first change to support large bucket sizes;
the major changes it makes are:
- Add BCH_FEATURE_INCOMPAT_LARGE_BUCKET for the large bucket feature,
  INCOMPAT means it introduces incompatible on-disk format change.
- Add BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET) routines.
- Add __le16 bucket_size_hi into struct cache_sb_disk at offset 0x8d0
  for the on-disk super block format.
- For the in-memory super block struct cache_sb, member bucket_size is
  extended from __u16 to __u32.
- Add get_bucket_size() to combine the bucket_size and bucket_size_hi
  from struct cache_sb_disk into an unsigned int value.

Since we already have the large bucket size helpers meta_bucket_pages(),
meta_bucket_bytes() and alloc_meta_bucket_pages(), they make sure that
when the bucket size > 8MB, the memory allocation for bcache meta data
buckets won't fail no matter how far the bucket size is extended. So
these meta data buckets are handled properly when the bucket size width
increases from 16bit to 32bit; we don't need to worry about them.
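
The combining helper described above might look like this (a sketch;
the feature-check names are assumed from the feature-set patches):

	static inline unsigned int get_bucket_size(struct cache_sb *sb,
						   struct cache_sb_disk *s)
	{
		unsigned int size = le16_to_cpu(s->bucket_size);

		if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES &&
		    bch_has_feature_large_bucket(sb))
			size |= le16_to_cpu(s->bucket_size_hi) << 16;

		return size;
	}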

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:21 -06:00
Coly Li f9c32a5a90 bcache: handle btree node memory allocation properly for bucket size > 8MB
Currently the bcache internal btree node occupies a whole bucket. When
loading the btree node from cache device into memory, mca_data_alloc()
will call bch_btree_keys_alloc() to allocate memory for the whole bucket
size, ilog2(b->c->btree_pages) is sent to bch_btree_keys_alloc() as the
parameter 'page_order'.

c->btree_pages is set as bucket_pages() in bch_cache_set_alloc(), for
bucket size > 8MB, ilog2(b->c->btree_pages) is 12 for 4KB page size. By
default the maximum page order __get_free_pages() accepts is MAX_ORDER
(11), in this condition bch_btree_keys_alloc() will always fail.

Because other over-page-order allocation failures already fail the
cache device registration, such btree node allocation failure wasn't
observed at runtime. But once the other blocking page allocation
failures for bucket size > 8MB are fixed, this btree node allocation
issue may trigger a potential risk, e.g. an infinite dead-loop retrying
btree node allocation after failure.

This patch fixes the potential problem by setting c->btree_pages to
meta_bucket_pages() in bch_cache_set_alloc(). In the condition that
bucket size > 8MB, meta_bucket_pages() will always return a number which
won't exceed the maximum page order of the buddy allocator.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li bf6af17065 bcache: handle cache set verify_ondisk properly for bucket size > 8MB
In bch_btree_cache_alloc(), when CONFIG_BCACHE_DEBUG is configured,
allocating memory for c->verify_ondisk may fail if the bucket size >
8MB, which requires __get_free_pages() to allocate continuous pages
with order > 11 (the default MAX_ORDER of the Linux buddy allocator).
Such an oversized allocation will fail and cause two problems:
- When CONFIG_BCACHE_DEBUG is configured, bch_btree_verify() does not
  work, because c->verify_ondisk is NULL and bch_btree_verify() returns
  immediately.
- bch_btree_cache_alloc() will fail due to the failed c->verify_ondisk
  allocation, and then the whole cache device registration fails. And
  because of this failure, the first problem of bch_btree_verify() has
  no chance to be triggered.

This patch fixes the above problem in two ways:
1) If page allocation for c->verify_ondisk fails, set it to NULL and
   return -ENOMEM from bch_btree_cache_alloc().
2) When calling __get_free_pages() to allocate c->verify_ondisk pages,
   use ilog2(meta_bucket_pages(&c->sb)) to make sure ilog2() will always
   generate a page order <= MAX_ORDER (or CONFIG_FORCE_MAX_ZONEORDER).
   Then the buddy system won't directly reject the allocation request.
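
The allocation then looks roughly like this (a sketch; error handling
abbreviated):

	c->verify_ondisk = (void *)
		__get_free_pages(GFP_KERNEL|__GFP_ZERO,
				 ilog2(meta_bucket_pages(&c->sb)));
	if (!c->verify_ondisk)
		goto err;	/* bch_btree_cache_alloc() returns -ENOMEM */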

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li c954ac8d66 bcache: handle cache prio_buckets and disk_buckets properly for bucket size > 8MB
Similar to c->uuids, struct cache's prio_buckets and disk_buckets are
also subject to potential memory allocation failure during cache
registration if the bucket size > 8MB.

ca->prio_buckets can be stored on the cache device in multiple buckets;
its in-memory space is allocated via the kzalloc() interface, which
normally falls back to alloc_pages() because the size >
KMALLOC_MAX_CACHE_SIZE.

So allocation of ca->prio_buckets has the MAX_ORDER restriction too. If
the bucket size > 8MB, by default the page allocator will fail because
the page order > 11 (default MAX_ORDER value). ca->prio_buckets should
also use meta_bucket_bytes(), meta_bucket_pages() to decide its memory
size and use alloc_meta_bucket_pages() to allocate pages, to avoid the
allocation failure during cache set registration when bucket size > 8MB.

ca->disk_buckets is a single-bucket-sized memory buffer, used to
iterate each bucket of ca->prio_buckets and compose the bio based on
the memory of ca->disk_buckets, then write the ca->disk_buckets memory
to the cache disk one-by-one for each bucket of ca->prio_buckets.
ca->disk_buckets should have an in-memory size exactly matching
meta_bucket_pages(); this is the size that ca->prio_buckets will be
stored into for each on-disk bucket.

This patch fixes the above issues and handles the cache's prio_buckets
and disk_buckets properly for bucket sizes larger than 8MB.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 21e478ddb2 bcache: handle c->uuids properly for bucket size > 8MB
Bcache allocates a whole bucket to store c->uuids on the cache device,
and allocates continuous pages to store it in memory. When the bucket
size exceeds the maximum allocatable span of continuous pages,
bch_cache_set_alloc() will fail and cache device registration will
fail.

This patch allocates c->uuids with alloc_meta_bucket_pages(), and uses
ilog2(meta_bucket_pages(c)) to indicate the order of the c->uuids pages
when freeing them. When writing c->uuids to the cache device, its size
is decided by meta_bucket_pages(c) * PAGE_SECTORS. Now c->uuids is
properly handled for bucket sizes > 8MB.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li de1fafab64 bcache: introduce meta_bucket_pages() related helper routines
Currently the in-memory meta data like c->uuids or c->disk_buckets
are allocated by alloc_bucket_pages(). The macro alloc_bucket_pages()
calls __get_free_pages() to allocate continuous pages with order
indicated by ilog2(bucket_pages(c)),
 #define alloc_bucket_pages(gfp, c)                      \
     ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))

The maximum order is defined as MAX_ORDER; the default value is 11 (and
can be overridden by CONFIG_FORCE_MAX_ZONEORDER). In bcache code the
maximum bucket size width is 16bits; this is restricted both by the
KEY_SIZE size and the bucket_size size from struct cache_sb_disk. The
maximum 16bit, power-of-2 value is (1<<15) in units of sectors
(512 bytes). This means the maximum bucket size is (1<<24) bytes,
a.k.a. 4096 pages.

When the bucket size is set to the maximum permitted value, ilog2(4096)
is 12, which exceeds the default maximum order __get_free_pages() can
accept; the failed page allocation will fail the cache set registration
procedure and print a kernel oops message for the exceeded page order.

This patch introduces the meta_bucket_pages(), meta_bucket_bytes(), and
alloc_meta_bucket_pages() helper routines. meta_bucket_pages() indicates
the maximum number of pages that can be allocated for a meta data
bucket, meta_bucket_bytes() indicates the corresponding maximum bytes,
and alloc_meta_bucket_pages() does the page allocation for a meta
bucket. Because meta_bucket_pages() chooses the smaller value between
the bucket size and MAX_ORDER_NR_PAGES, it still works when MAX_ORDER
is overridden by CONFIG_FORCE_MAX_ZONEORDER.

Following patches will use these helper routines to decide the maximum
number of pages that can be allocated for different meta data buckets.
If the bucket size is larger than meta_bucket_bytes(), the bcache
registration can still succeed; only the space beyond
meta_bucket_bytes() inside the bucket is wasted. Compared to bcache
failing for large bucket sizes, wasting some space for meta data
buckets is acceptable at this moment.
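
A sketch of the helpers (capping logic per the description above; the
exact form may differ):

	static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
	{
		return min_t(unsigned int, bucket_pages(sb),
			     MAX_ORDER_NR_PAGES);
	}

	static inline unsigned int meta_bucket_bytes(struct cache_sb *sb)
	{
		return meta_bucket_pages(sb) << PAGE_SHIFT;
	}

	#define alloc_meta_bucket_pages(gfp, sb)		\
		((void *) __get_free_pages(__GFP_ZERO|gfp,	\
				ilog2(meta_bucket_pages(sb))))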

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 4c1ccd0896 bcache: struct cache_sb is only for in-memory super block now
We already have struct cache_sb_disk for the on-disk super block; it is
unnecessary to keep the in-memory super block format exactly mapping
to the on-disk struct layout.

This patch adds code comments to note that struct cache_sb is not
an exact mapping of cache_sb_disk, and removes the useless members csum
and pad[5].

Although struct cache_sb does not belong to uapi, some on-disk format
related macros still reference it, and it is unnecessary to get rid of
that dependency now. So struct cache_sb will continue to stay in
include/uapi/linux/bcache.h for now.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 198efa35c5 bcache: move bucket related code into read_super_common()
Setting sb->first_bucket and checking sb->keys are indeed only for the
cache device; it does not make sense to do them in read_super() for the
backing device too.

This patch moves the related code piece into read_super_common()
explicitly for cache device and avoid the confusion.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li d721a43ff6 bcache: increase super block version for cache device and backing device
The newly added super block versions
BCACHE_SB_VERSION_BDEV_WITH_FEATURES (value 5) and
BCACHE_SB_VERSION_CDEV_WITH_FEATURES (value 6) are for the feature set
bits.

Devices with a super block version equal to the new versions will have
three new members for feature set bits in the on-disk super block,
        __le64                  feature_compat;
        __le64                  feature_incompat;
        __le64                  feature_ro_compat;

They are used for further new features which may introduce on-disk
format changes, and avoid unnecessary super block version increases.

The very basic features handling code skeleton is also initialized in
this patch.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 117f636ea6 bcache: fix super block seq numbers comparison in register_cache_set()
In register_cache_set(), c is a pointer to struct cache_set, and ca is
a pointer to struct cache. If ca->sb.seq > c->sb.seq, it means the
registering cache has an up-to-date version and other members, so the
in-memory version and other members should be updated to the newer
values.

But the current implementation allows a cache set to have only a single
cache device, so the above assumption works well except for one special
case: when a cache device is newly created, both ca->sb.seq and
c->sb.seq are 0, because the super block has never been flushed out
yet. At the location of the following if() check,
2156         if (ca->sb.seq > c->sb.seq) {
2157                 c->sb.version           = ca->sb.version;
2158                 memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
2159                 c->sb.flags             = ca->sb.flags;
2160                 c->sb.seq               = ca->sb.seq;
2161                 pr_debug("set version = %llu\n", c->sb.version);
2162         }
c->sb.version is not initialized yet and has value 0. When ca->sb.seq
is 0, the if() check will fail (because both values are 0), and the
cache set version, set_uuid, flags and seq won't be updated.

The above problem is hidden in the current code, because the bucket
size is compatible among the different super block versions. And the
next time the cache set runs again, ca->sb.seq will be larger than 0
and the cache set super block version will be updated properly.

But if the large bucket feature is enabled, sb->bucket_size is the low
16 bits of the bucket size. For a power-of-2 value, when the actual
bucket size exceeds the 16bit width, sb->bucket_size will always be 0.
Then read_super_common() will fail because the
is_power_of_2(sb->bucket_size) check is false. This is how the
long-hidden bug is triggered.

This patch modifies the if() check to the following way,
2156         if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
Then the cache set's version, set_uuid, flags and seq will always be
updated correctly, including for a newly created cache device.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li a42d3c642c bcache: disassemble the big if() checks in bch_cache_set_alloc()
In bch_cache_set_alloc() there is a big if() check combining 11
conditions together. When this big if() statement fails, it is
difficult to tell exactly which condition fails.

This patch disassembles the big if() check into 11 single if() checks,
which makes debugging much easier.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li c557a5f7bb bcache: add more accurate error information in read_super_common()
An improperly set bucket or block size will trigger an error in
read_super_common(). For large bucket sizes, a more accurate error
message for an invalid bucket or block size is necessary.

This patch disassembles the combined if() checks into multiple single
if() checks, and provides a more accurate error message for each check
failure condition.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 5b21403c7f bcache: add read_super_common() to read major part of super block
Later patches will introduce feature set bits to the on-disk super
block and increase the super block version. The current code in
read_super(), which reads the common part of the super block for
versions BCACHE_SB_VERSION_CDEV and BCACHE_SB_VERSION_CDEV_WITH_UUID,
will be shared with the new version.

Therefore this patch moves the reusable part into read_super_common();
this preparation will make later patches simpler, letting them focus
only on the new feature set bits.

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 7a14812679 bcache: fix overflow in offset_to_stripe()
offset_to_stripe() returns the stripe number (in type unsigned int) from
an offset (in type uint64_t) by the following calculation,
	do_div(offset, d->stripe_size);
For a large-capacity backing device (e.g. 18TB) with a small stripe
size (e.g. 4KB), the result is 4831838208 and exceeds UINT_MAX. The
actual returned value which the caller receives is 536870912, due to
the overflow.

Indeed in bcache_device_init(), bcache_device->nr_stripes is limited in
range [1, INT_MAX]. Therefore all valid stripe numbers in bcache are
in range [0, bcache_dev->nr_stripes - 1].

This patch adds an upper limit check in offset_to_stripe(): the max
valid stripe number should be less than bcache_device->nr_stripes. If
the stripe number calculated by do_div() is equal to or larger than
bcache_device->nr_stripes, -EINVAL will be returned. (Normally
nr_stripes is less than INT_MAX; exceeding the upper limit doesn't mean
overflow, therefore -EOVERFLOW is not used as the error code.)

This patch also changes nr_stripes' type of struct bcache_device from
'unsigned int' to 'int', and return value type of offset_to_stripe()
from 'unsigned int' to 'int', to match their exact data ranges.

All locations where bcache_device->nr_stripes and offset_to_stripe() are
referenced also get updated for the above type change.
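
A sketch of the bounds-checked helper (error reporting simplified from
whatever the actual patch does):

	static inline int offset_to_stripe(struct bcache_device *d,
					   uint64_t offset)
	{
		do_div(offset, d->stripe_size);

		/* -EINVAL for out-of-range, per the reasoning above */
		if (WARN_ON_ONCE(offset >= d->nr_stripes))
			return -EINVAL;

		return offset;
	}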

Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
Signed-off-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Coly Li 65f0f017e7 bcache: avoid nr_stripes overflow in bcache_device_init()
For some block devices with large capacity (e.g. 8TB) but a small
io_opt size (e.g. 8 sectors), the stripe number calculated in
bcache_device_init() by,
	DIV_ROUND_UP_ULL(sectors, d->stripe_size);
might overflow the unsigned int bcache_device->nr_stripes.

This patch uses a uint64_t variable to store the DIV_ROUND_UP_ULL()
result and, after the value is checked to fit in the unsigned int
range, sets it to bcache_device->nr_stripes. The overflow is thus
avoided.
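
A userspace illustration of the truncation, using the 18TB example from
the companion offset_to_stripe() fix (not kernel code):

	#include <stdio.h>
	#include <stdint.h>
	#include <limits.h>

	#define DIV_ROUND_UP_ULL(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		uint64_t sectors = 18000000000000ULL / 512; /* 18TB device */
		uint64_t stripe_size = 8;	/* 4KB io_opt, in sectors */
		uint64_t n = DIV_ROUND_UP_ULL(sectors, stripe_size);

		unsigned int nr = n;	/* old code: silent truncation */
		printf("real=%llu truncated=%u\n", (unsigned long long)n, nr);

		if (!n || n > INT_MAX)	/* new code: validate first */
			fprintf(stderr, "nr_stripes out of range\n");
		return 0;
	}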

Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
Signed-off-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00
Gustavo A. R. Silva 29f1d5cace bcache: Use struct_size() in kzalloc()
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.

This code was detected with the help of Coccinelle, and audited and
fixed manually.
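
As a userspace illustration of what the helper folds together (struct
io and the macro body here are simplified stand-ins; the kernel's
struct_size() additionally saturates on overflow):

	#include <stdio.h>
	#include <stdlib.h>

	struct io {			/* hypothetical flexible-array struct */
		int nr;
		struct { int a, b; } vec[];
	};

	#define struct_size(p, member, count) \
		(sizeof(*(p)) + sizeof((p)->member[0]) * (size_t)(count))

	int main(void)
	{
		struct io *p = NULL;
		size_t n = 16;

		/* open-coded sizing the patch replaces */
		size_t open_coded = sizeof(struct io) + n * sizeof(p->vec[0]);
		/* helper-based sizing */
		size_t helper = struct_size(p, vec, n);

		printf("open-coded=%zu helper=%zu\n", open_coded, helper);

		p = calloc(1, helper);	/* kzalloc() stand-in */
		free(p);
		return 0;
	}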

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-25 07:38:20 -06:00