fscrypt updates for 6.17

Simplify how fscrypt uses the crypto API, resulting in some
 significant performance improvements:
 
  - Drop the incomplete and problematic support for asynchronous
    algorithms. These drivers are bug-prone, and it turns out they are
    actually much slower than the CPU-based code as well.
 
  - Allocate crypto requests on the stack instead of the heap. This
    improves encryption and decryption performance, especially for
    filenames. It also eliminates a point of failure during I/O.
 -----BEGIN PGP SIGNATURE-----
 
 iIoEABYIADIWIQSacvsUNc7UX4ntmEPzXCl4vpKOKwUCaIZ+fhQcZWJpZ2dlcnNA
 a2VybmVsLm9yZwAKCRDzXCl4vpKOK+dkAQDrHUTj9dGZI/cQ/TjP0kmOv9XfYAfj
 HOQDRikTX+Ip4QEA6L8FS8lJYf9EMznTvTPOkP7hXpwqzuf00vJWr+ySmQs=
 =N9vo
 -----END PGP SIGNATURE-----

Merge tag 'fscrypt-for-linus' of git://git.kernel.org/pub/scm/fs/fscrypt/linux

Pull fscrypt updates from Eric Biggers:
 "Simplify how fscrypt uses the crypto API, resulting in some
  significant performance improvements:

   - Drop the incomplete and problematic support for asynchronous
     algorithms. These drivers are bug-prone, and it turns out they are
     actually much slower than the CPU-based code as well.

   - Allocate crypto requests on the stack instead of the heap. This
     improves encryption and decryption performance, especially for
     filenames. This also eliminates a point of failure during I/O"

* tag 'fscrypt-for-linus' of git://git.kernel.org/pub/scm/fs/fscrypt/linux:
  ceph: Remove gfp_t argument from ceph_fscrypt_encrypt_*()
  fscrypt: Remove gfp_t argument from fscrypt_encrypt_block_inplace()
  fscrypt: Remove gfp_t argument from fscrypt_crypt_data_unit()
  fscrypt: Switch to sync_skcipher and on-stack requests
  fscrypt: Drop FORBID_WEAK_KEYS flag for AES-ECB
  fscrypt: Don't use asynchronous CryptoAPI algorithms
  fscrypt: Don't use problematic non-inline crypto engines
  fscrypt: Drop obsolete recommendation to enable optimized SHA-512
  fscrypt: Explicitly include <linux/export.h>
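
The core conversion in this series replaces a heap-allocated skcipher request plus an async completion wait with an on-stack request on a sync_skcipher. A condensed before/after sketch of that pattern (simplified, not the literal patch; error handling, IV, and scatterlist setup are elided — see the fscrypt_crypt_data_unit() hunk below for the real change):

```c
/* Before: allocate the request on the heap and wait for possibly-async
 * completion.  The allocation can fail in the middle of I/O, and the
 * wait machinery exists only for async engines fscrypt no longer uses. */
struct skcipher_request *req = skcipher_request_alloc(tfm, gfp_flags);
DECLARE_CRYPTO_WAIT(wait);
if (!req)
	return -ENOMEM;
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
			      crypto_req_done, &wait);
err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);

/* After: a sync_skcipher guarantees a small, fixed request size, so the
 * request can live on the stack: no allocation and no completion wait. */
SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
err = crypto_skcipher_encrypt(req);
```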
Linus Torvalds 2025-07-28 18:07:38 -07:00
commit 283564a433
18 changed files with 145 additions and 181 deletions

View File

@@ -147,9 +147,8 @@ However, these ioctls have some limitations:
   were wiped. To partially solve this, you can add init_on_free=1 to
   your kernel command line. However, this has a performance cost.
 
-- Secret keys might still exist in CPU registers, in crypto
-  accelerator hardware (if used by the crypto API to implement any of
-  the algorithms), or in other places not explicitly considered here.
+- Secret keys might still exist in CPU registers or in other places
+  not explicitly considered here.
 
 Full system compromise
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -406,9 +405,12 @@ the work is done by XChaCha12, which is much faster than AES when AES
 acceleration is unavailable. For more information about Adiantum, see
 `the Adiantum paper <https://eprint.iacr.org/2018/720.pdf>`_.
 
-The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair exists only to support
-systems whose only form of AES acceleration is an off-CPU crypto
-accelerator such as CAAM or CESA that does not support XTS.
+The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair was added to try to
+provide a more efficient option for systems that lack AES instructions
+in the CPU but do have a non-inline crypto engine such as CAAM or CESA
+that supports AES-CBC (and not AES-XTS). This is deprecated. It has
+been shown that just doing AES on the CPU is actually faster.
+Moreover, Adiantum is faster still and is recommended on such systems.
 
 The remaining mode pairs are the "national pride ciphers":
@@ -468,14 +470,6 @@ API, but the filenames mode still does.
   - Recommended:
     - AES-CBC acceleration
 
-fscrypt also uses HMAC-SHA512 for key derivation, so enabling SHA-512
-acceleration is recommended:
-
-- SHA-512
-  - Recommended:
-    - arm64: CONFIG_CRYPTO_SHA512_ARM64_CE
-    - x86: CONFIG_CRYPTO_SHA512_SSSE3
-
 Contents encryption
 -------------------
@@ -1326,22 +1320,13 @@ this by validating all top-level encryption policies prior to access.
 Inline encryption support
 =========================
 
-By default, fscrypt uses the kernel crypto API for all cryptographic
-operations (other than HKDF, which fscrypt partially implements
-itself). The kernel crypto API supports hardware crypto accelerators,
-but only ones that work in the traditional way where all inputs and
-outputs (e.g. plaintexts and ciphertexts) are in memory. fscrypt can
-take advantage of such hardware, but the traditional acceleration
-model isn't particularly efficient and fscrypt hasn't been optimized
-for it.
-
-Instead, many newer systems (especially mobile SoCs) have *inline
-encryption hardware* that can encrypt/decrypt data while it is on its
-way to/from the storage device. Linux supports inline encryption
-through a set of extensions to the block layer called *blk-crypto*.
-blk-crypto allows filesystems to attach encryption contexts to bios
-(I/O requests) to specify how the data will be encrypted or decrypted
-in-line. For more information about blk-crypto, see
+Many newer systems (especially mobile SoCs) have *inline encryption
+hardware* that can encrypt/decrypt data while it is on its way to/from
+the storage device. Linux supports inline encryption through a set of
+extensions to the block layer called *blk-crypto*. blk-crypto allows
+filesystems to attach encryption contexts to bios (I/O requests) to
+specify how the data will be encrypted or decrypted in-line. For more
+information about blk-crypto, see
 :ref:`Documentation/block/inline-encryption.rst <inline_encryption>`.
 
 On supported filesystems (currently ext4 and f2fs), fscrypt can use

View File

@@ -488,15 +488,13 @@ int ceph_fscrypt_decrypt_block_inplace(const struct inode *inode,
 int ceph_fscrypt_encrypt_block_inplace(const struct inode *inode,
 				       struct page *page, unsigned int len,
-				       unsigned int offs, u64 lblk_num,
-				       gfp_t gfp_flags)
+				       unsigned int offs, u64 lblk_num)
 {
 	struct ceph_client *cl = ceph_inode_to_client(inode);
 
 	doutc(cl, "%p %llx.%llx len %u offs %u blk %llu\n", inode,
 	      ceph_vinop(inode), len, offs, lblk_num);
-	return fscrypt_encrypt_block_inplace(inode, page, len, offs, lblk_num,
-					     gfp_flags);
+	return fscrypt_encrypt_block_inplace(inode, page, len, offs, lblk_num);
 }
 
 /**
@@ -614,9 +612,8 @@ int ceph_fscrypt_decrypt_extents(struct inode *inode, struct page **page,
  * @page: pointer to page array
  * @off: offset into the file that the data starts
  * @len: max length to encrypt
- * @gfp: gfp flags to use for allocation
  *
- * Decrypt an array of cleartext pages and return the amount of
+ * Encrypt an array of cleartext pages and return the amount of
  * data encrypted. Any data in the page prior to the start of the
  * first complete block in the read is ignored. Any incomplete
  * crypto blocks at the end of the array are ignored.
@@ -624,7 +621,7 @@ int ceph_fscrypt_decrypt_extents(struct inode *inode, struct page **page,
  * Returns the length of the encrypted data or a negative errno.
  */
 int ceph_fscrypt_encrypt_pages(struct inode *inode, struct page **page, u64 off,
-			       int len, gfp_t gfp)
+			       int len)
 {
 	int i, num_blocks;
 	u64 baseblk = off >> CEPH_FSCRYPT_BLOCK_SHIFT;
@@ -645,7 +642,7 @@ int ceph_fscrypt_encrypt_pages(struct inode *inode, struct page **page, u64 off,
 		fret = ceph_fscrypt_encrypt_block_inplace(inode, page[pgidx],
 						CEPH_FSCRYPT_BLOCK_SIZE, pgoffs,
-						baseblk + i, gfp);
+						baseblk + i);
 		if (fret < 0) {
 			if (ret == 0)
 				ret = fret;

View File

@@ -152,15 +152,14 @@ int ceph_fscrypt_decrypt_block_inplace(const struct inode *inode,
 				       unsigned int offs, u64 lblk_num);
 int ceph_fscrypt_encrypt_block_inplace(const struct inode *inode,
 				       struct page *page, unsigned int len,
-				       unsigned int offs, u64 lblk_num,
-				       gfp_t gfp_flags);
+				       unsigned int offs, u64 lblk_num);
 
 int ceph_fscrypt_decrypt_pages(struct inode *inode, struct page **page,
 			       u64 off, int len);
 int ceph_fscrypt_decrypt_extents(struct inode *inode, struct page **page,
 				 u64 off, struct ceph_sparse_extent *map,
 				 u32 ext_cnt);
 int ceph_fscrypt_encrypt_pages(struct inode *inode, struct page **page, u64 off,
-			       int len, gfp_t gfp);
+			       int len);
 
 static inline struct page *ceph_fscrypt_pagecache_page(struct page *page)
 {
@@ -236,8 +235,7 @@ static inline int ceph_fscrypt_decrypt_block_inplace(const struct inode *inode,
 static inline int ceph_fscrypt_encrypt_block_inplace(const struct inode *inode,
 					struct page *page, unsigned int len,
-					unsigned int offs, u64 lblk_num,
-					gfp_t gfp_flags)
+					unsigned int offs, u64 lblk_num)
 {
 	return 0;
 }
@@ -259,7 +257,7 @@ static inline int ceph_fscrypt_decrypt_extents(struct inode *inode,
 static inline int ceph_fscrypt_encrypt_pages(struct inode *inode,
 					     struct page **page, u64 off,
-					     int len, gfp_t gfp)
+					     int len)
 {
 	return 0;
 }

View File

@@ -1992,8 +1992,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
 		if (IS_ENCRYPTED(inode)) {
 			ret = ceph_fscrypt_encrypt_pages(inode, pages,
-							 write_pos, write_len,
-							 GFP_KERNEL);
+							 write_pos, write_len);
 			if (ret < 0) {
 				doutc(cl, "encryption failed with %d\n", ret);
 				ceph_release_page_vector(pages, num_pages);

View File

@@ -2436,8 +2436,7 @@ static int fill_fscrypt_truncate(struct inode *inode,
 		/* encrypt the last block */
 		ret = ceph_fscrypt_encrypt_block_inplace(inode, page,
 						CEPH_FSCRYPT_BLOCK_SIZE,
-						0, block,
-						GFP_KERNEL);
+						0, block);
 		if (ret)
 			goto out;
 	}

View File

@@ -7,10 +7,12 @@
  * Copyright (C) 2015, Motorola Mobility
  */
 
-#include <linux/pagemap.h>
-#include <linux/module.h>
 #include <linux/bio.h>
+#include <linux/export.h>
+#include <linux/module.h>
 #include <linux/namei.h>
+#include <linux/pagemap.h>
+
 #include "fscrypt_private.h"
 
 /**
@@ -165,8 +167,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	do {
 		err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, du_index,
 					      ZERO_PAGE(0), pages[i],
-					      du_size, offset,
-					      GFP_NOFS);
+					      du_size, offset);
 		if (err)
 			goto out;
 		du_index++;

View File

@@ -20,12 +20,14 @@
  * Special Publication 800-38E and IEEE P1619/D16.
  */
 
-#include <linux/pagemap.h>
+#include <crypto/skcipher.h>
+
+#include <linux/export.h>
 #include <linux/mempool.h>
 #include <linux/module.h>
-#include <linux/scatterlist.h>
+#include <linux/pagemap.h>
 #include <linux/ratelimit.h>
-#include <crypto/skcipher.h>
+#include <linux/scatterlist.h>
 
 #include "fscrypt_private.h"
 
 static unsigned int num_prealloc_crypto_pages = 32;
@@ -108,15 +110,13 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 index,
 int fscrypt_crypt_data_unit(const struct fscrypt_inode_info *ci,
 			    fscrypt_direction_t rw, u64 index,
 			    struct page *src_page, struct page *dest_page,
-			    unsigned int len, unsigned int offs,
-			    gfp_t gfp_flags)
+			    unsigned int len, unsigned int offs)
 {
+	struct crypto_sync_skcipher *tfm = ci->ci_enc_key.tfm;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 	union fscrypt_iv iv;
-	struct skcipher_request *req = NULL;
-	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist dst, src;
-	struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
-	int res = 0;
+	int err;
 
 	if (WARN_ON_ONCE(len <= 0))
 		return -EINVAL;
@@ -125,31 +125,23 @@ int fscrypt_crypt_data_unit(const struct fscrypt_inode_info *ci,
 
 	fscrypt_generate_iv(&iv, index, ci);
 
-	req = skcipher_request_alloc(tfm, gfp_flags);
-	if (!req)
-		return -ENOMEM;
-
 	skcipher_request_set_callback(
 		req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-		crypto_req_done, &wait);
-
+		NULL, NULL);
 	sg_init_table(&dst, 1);
 	sg_set_page(&dst, dest_page, len, offs);
 	sg_init_table(&src, 1);
 	sg_set_page(&src, src_page, len, offs);
 	skcipher_request_set_crypt(req, &src, &dst, len, &iv);
 	if (rw == FS_DECRYPT)
-		res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
+		err = crypto_skcipher_decrypt(req);
 	else
-		res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
-	skcipher_request_free(req);
-	if (res) {
+		err = crypto_skcipher_encrypt(req);
+	if (err)
 		fscrypt_err(ci->ci_inode,
 			    "%scryption failed for data unit %llu: %d",
-			    (rw == FS_DECRYPT ? "De" : "En"), index, res);
-		return res;
-	}
-	return 0;
+			    (rw == FS_DECRYPT ? "De" : "En"), index, err);
+	return err;
 }
 
 /**
@@ -204,7 +196,7 @@ struct page *fscrypt_encrypt_pagecache_blocks(struct folio *folio,
 	for (i = offs; i < offs + len; i += du_size, index++) {
 		err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, index,
 					      &folio->page, ciphertext_page,
-					      du_size, i, gfp_flags);
+					      du_size, i);
 		if (err) {
 			fscrypt_free_bounce_page(ciphertext_page);
 			return ERR_PTR(err);
@@ -225,7 +217,6 @@ EXPORT_SYMBOL(fscrypt_encrypt_pagecache_blocks);
  * @offs: Byte offset within @page at which the block to encrypt begins
  * @lblk_num: Filesystem logical block number of the block, i.e. the 0-based
  *	      number of the block within the file
- * @gfp_flags: Memory allocation flags
  *
  * Encrypt a possibly-compressed filesystem block that is located in an
  * arbitrary page, not necessarily in the original pagecache page. The @inode
@@ -237,13 +228,12 @@ EXPORT_SYMBOL(fscrypt_encrypt_pagecache_blocks);
  */
 int fscrypt_encrypt_block_inplace(const struct inode *inode, struct page *page,
 				  unsigned int len, unsigned int offs,
-				  u64 lblk_num, gfp_t gfp_flags)
+				  u64 lblk_num)
 {
 	if (WARN_ON_ONCE(inode->i_sb->s_cop->supports_subblock_data_units))
 		return -EOPNOTSUPP;
 	return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_ENCRYPT,
-				       lblk_num, page, page, len, offs,
-				       gfp_flags);
+				       lblk_num, page, page, len, offs);
 }
 EXPORT_SYMBOL(fscrypt_encrypt_block_inplace);
@@ -283,8 +273,7 @@ int fscrypt_decrypt_pagecache_blocks(struct folio *folio, size_t len,
 		struct page *page = folio_page(folio, i >> PAGE_SHIFT);
 
 		err = fscrypt_crypt_data_unit(ci, FS_DECRYPT, index, page,
-					      page, du_size, i & ~PAGE_MASK,
-					      GFP_NOFS);
+					      page, du_size, i & ~PAGE_MASK);
 		if (err)
 			return err;
 	}
@@ -317,8 +306,7 @@ int fscrypt_decrypt_block_inplace(const struct inode *inode, struct page *page,
 	if (WARN_ON_ONCE(inode->i_sb->s_cop->supports_subblock_data_units))
 		return -EOPNOTSUPP;
 	return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_DECRYPT,
-				       lblk_num, page, page, len, offs,
-				       GFP_NOFS);
+				       lblk_num, page, page, len, offs);
 }
 EXPORT_SYMBOL(fscrypt_decrypt_block_inplace);

View File

@@ -11,11 +11,13 @@
  * This has not yet undergone a rigorous security audit.
  */
 
-#include <linux/namei.h>
-#include <linux/scatterlist.h>
 #include <crypto/hash.h>
 #include <crypto/sha2.h>
 #include <crypto/skcipher.h>
+
+#include <linux/export.h>
+#include <linux/namei.h>
+#include <linux/scatterlist.h>
+
 #include "fscrypt_private.h"
 
 /*
@@ -92,13 +94,12 @@ static inline bool fscrypt_is_dot_dotdot(const struct qstr *str)
 int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname,
 			  u8 *out, unsigned int olen)
 {
-	struct skcipher_request *req = NULL;
-	DECLARE_CRYPTO_WAIT(wait);
 	const struct fscrypt_inode_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
+	struct crypto_sync_skcipher *tfm = ci->ci_enc_key.tfm;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 	union fscrypt_iv iv;
 	struct scatterlist sg;
-	int res;
+	int err;
 
 	/*
 	 * Copy the filename to the output buffer for encrypting in-place and
@@ -109,28 +110,17 @@ int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname,
 	memcpy(out, iname->name, iname->len);
 	memset(out + iname->len, 0, olen - iname->len);
 
-	/* Initialize the IV */
 	fscrypt_generate_iv(&iv, 0, ci);
 
-	/* Set up the encryption request */
-	req = skcipher_request_alloc(tfm, GFP_NOFS);
-	if (!req)
-		return -ENOMEM;
-	skcipher_request_set_callback(req,
-			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-			crypto_req_done, &wait);
+	skcipher_request_set_callback(
+		req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+		NULL, NULL);
 	sg_init_one(&sg, out, olen);
 	skcipher_request_set_crypt(req, &sg, &sg, olen, &iv);
-
-	/* Do the encryption */
-	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
-	skcipher_request_free(req);
-	if (res < 0) {
-		fscrypt_err(inode, "Filename encryption failed: %d", res);
-		return res;
-	}
-
-	return 0;
+	err = crypto_skcipher_encrypt(req);
+	if (err)
+		fscrypt_err(inode, "Filename encryption failed: %d", err);
+	return err;
 }
 EXPORT_SYMBOL_GPL(fscrypt_fname_encrypt);
@@ -148,34 +138,25 @@ static int fname_decrypt(const struct inode *inode,
 			 const struct fscrypt_str *iname,
 			 struct fscrypt_str *oname)
 {
-	struct skcipher_request *req = NULL;
-	DECLARE_CRYPTO_WAIT(wait);
-	struct scatterlist src_sg, dst_sg;
 	const struct fscrypt_inode_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
+	struct crypto_sync_skcipher *tfm = ci->ci_enc_key.tfm;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 	union fscrypt_iv iv;
-	int res;
-
-	/* Allocate request */
-	req = skcipher_request_alloc(tfm, GFP_NOFS);
-	if (!req)
-		return -ENOMEM;
-	skcipher_request_set_callback(req,
-		CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-		crypto_req_done, &wait);
+	struct scatterlist src_sg, dst_sg;
+	int err;
 
-	/* Initialize IV */
 	fscrypt_generate_iv(&iv, 0, ci);
 
-	/* Create decryption request */
+	skcipher_request_set_callback(
+		req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+		NULL, NULL);
 	sg_init_one(&src_sg, iname->name, iname->len);
 	sg_init_one(&dst_sg, oname->name, oname->len);
 	skcipher_request_set_crypt(req, &src_sg, &dst_sg, iname->len, &iv);
-	res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
-	skcipher_request_free(req);
-	if (res < 0) {
-		fscrypt_err(inode, "Filename decryption failed: %d", res);
-		return res;
+	err = crypto_skcipher_decrypt(req);
+	if (err) {
+		fscrypt_err(inode, "Filename decryption failed: %d", err);
+		return err;
 	}
 
 	oname->len = strnlen(oname->name, iname->len);

View File

@@ -45,6 +45,24 @@
  */
 #undef FSCRYPT_MAX_KEY_SIZE
 
+/*
+ * This mask is passed as the third argument to the crypto_alloc_*() functions
+ * to prevent fscrypt from using the Crypto API drivers for non-inline crypto
+ * engines.  Those drivers have been problematic for fscrypt.  fscrypt users
+ * have reported hangs and even incorrect en/decryption with these drivers.
+ * Since going to the driver, off CPU, and back again is really slow, such
+ * drivers can be over 50 times slower than the CPU-based code for fscrypt's
+ * workload.  Even on platforms that lack AES instructions on the CPU, using the
+ * offloads has been shown to be slower, even staying with AES.  (Of course,
+ * Adiantum is faster still, and is the recommended option on such platforms...)
+ *
+ * Note that fscrypt also supports inline crypto engines.  Those don't use the
+ * Crypto API and work much better than the old-style (non-inline) engines.
+ */
+#define FSCRYPT_CRYPTOAPI_MASK \
+	(CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY | \
+	 CRYPTO_ALG_KERN_DRIVER_ONLY)
+
 #define FSCRYPT_CONTEXT_V1	1
 #define FSCRYPT_CONTEXT_V2	2
@@ -221,7 +239,7 @@ struct fscrypt_symlink_data {
  * Normally only one of the fields will be non-NULL.
  */
 struct fscrypt_prepared_key {
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 #ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
 	struct blk_crypto_key *blk_key;
 #endif
@@ -319,8 +337,7 @@ int fscrypt_initialize(struct super_block *sb);
 int fscrypt_crypt_data_unit(const struct fscrypt_inode_info *ci,
 			    fscrypt_direction_t rw, u64 index,
 			    struct page *src_page, struct page *dest_page,
-			    unsigned int len, unsigned int offs,
-			    gfp_t gfp_flags);
+			    unsigned int len, unsigned int offs);
 struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags);
 void __printf(3, 4) __cold

View File

@@ -8,8 +8,8 @@
  */
 
 #include <crypto/hash.h>
-#include <crypto/sha2.h>
 #include <crypto/hkdf.h>
+#include <crypto/sha2.h>
 
 #include "fscrypt_private.h"
@@ -58,7 +58,7 @@ int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
 	u8 prk[HKDF_HASHLEN];
 	int err;
 
-	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
+	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, FSCRYPT_CRYPTOAPI_MASK);
 	if (IS_ERR(hmac_tfm)) {
 		fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
 			    PTR_ERR(hmac_tfm));

View File

@@ -5,6 +5,8 @@
  * Encryption hooks for higher-level filesystem operations.
  */
 
+#include <linux/export.h>
+
 #include "fscrypt_private.h"
 
 /**

View File

@@ -15,6 +15,7 @@
 #include <linux/blk-crypto.h>
 #include <linux/blkdev.h>
 #include <linux/buffer_head.h>
+#include <linux/export.h>
 #include <linux/sched/mm.h>
 #include <linux/slab.h>
 #include <linux/uio.h>

View File

@@ -18,12 +18,13 @@
  * information about these ioctls.
  */
 
-#include <linux/unaligned.h>
 #include <crypto/skcipher.h>
+#include <linux/export.h>
 #include <linux/key-type.h>
-#include <linux/random.h>
 #include <linux/once.h>
+#include <linux/random.h>
 #include <linux/seq_file.h>
+#include <linux/unaligned.h>
 
 #include "fscrypt_private.h"

View File

@@ -9,6 +9,7 @@
  */
 
 #include <crypto/skcipher.h>
+#include <linux/export.h>
 #include <linux/random.h>
 
 #include "fscrypt_private.h"
@@ -96,14 +97,15 @@ select_encryption_mode(const union fscrypt_policy *policy,
 }
 
 /* Create a symmetric cipher object for the given encryption mode and key */
-static struct crypto_skcipher *
+static struct crypto_sync_skcipher *
 fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
 			  const struct inode *inode)
 {
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 	int err;
 
-	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
+	tfm = crypto_alloc_sync_skcipher(mode->cipher_str, 0,
+					 FSCRYPT_CRYPTOAPI_MASK);
 	if (IS_ERR(tfm)) {
 		if (PTR_ERR(tfm) == -ENOENT) {
 			fscrypt_warn(inode,
@@ -123,21 +125,22 @@ fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
 		 * first time a mode is used.
 		 */
 		pr_info("fscrypt: %s using implementation \"%s\"\n",
-			mode->friendly_name, crypto_skcipher_driver_name(tfm));
+			mode->friendly_name,
+			crypto_skcipher_driver_name(&tfm->base));
 	}
-	if (WARN_ON_ONCE(crypto_skcipher_ivsize(tfm) != mode->ivsize)) {
+	if (WARN_ON_ONCE(crypto_sync_skcipher_ivsize(tfm) != mode->ivsize)) {
 		err = -EINVAL;
 		goto err_free_tfm;
 	}
-	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-	err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
+	crypto_sync_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	err = crypto_sync_skcipher_setkey(tfm, raw_key, mode->keysize);
 	if (err)
 		goto err_free_tfm;
 
 	return tfm;
 
 err_free_tfm:
-	crypto_free_skcipher(tfm);
+	crypto_free_sync_skcipher(tfm);
 	return ERR_PTR(err);
 }
@@ -150,7 +153,7 @@ err_free_tfm:
 int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
 			const u8 *raw_key, const struct fscrypt_inode_info *ci)
 {
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 
 	if (fscrypt_using_inline_encryption(ci))
 		return fscrypt_prepare_inline_crypt_key(prep_key, raw_key,
@@ -174,7 +177,7 @@ int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
 void fscrypt_destroy_prepared_key(struct super_block *sb,
 				  struct fscrypt_prepared_key *prep_key)
 {
-	crypto_free_skcipher(prep_key->tfm);
+	crypto_free_sync_skcipher(prep_key->tfm);
 	fscrypt_destroy_inline_crypt_key(sb, prep_key);
 	memzero_explicit(prep_key, sizeof(*prep_key));
 }

View File

@@ -48,39 +48,30 @@ static int derive_key_aes(const u8 *master_key,
 			  const u8 nonce[FSCRYPT_FILE_NONCE_SIZE],
 			  u8 *derived_key, unsigned int derived_keysize)
 {
-	int res = 0;
-	struct skcipher_request *req = NULL;
-	DECLARE_CRYPTO_WAIT(wait);
-	struct scatterlist src_sg, dst_sg;
-	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
+	struct crypto_sync_skcipher *tfm;
+	int err;
 
-	if (IS_ERR(tfm)) {
-		res = PTR_ERR(tfm);
-		tfm = NULL;
-		goto out;
-	}
-	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-	req = skcipher_request_alloc(tfm, GFP_KERNEL);
-	if (!req) {
-		res = -ENOMEM;
-		goto out;
-	}
-	skcipher_request_set_callback(req,
-			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-			crypto_req_done, &wait);
-	res = crypto_skcipher_setkey(tfm, nonce, FSCRYPT_FILE_NONCE_SIZE);
-	if (res < 0)
-		goto out;
+	tfm = crypto_alloc_sync_skcipher("ecb(aes)", 0, FSCRYPT_CRYPTOAPI_MASK);
+	if (IS_ERR(tfm))
+		return PTR_ERR(tfm);
 
-	sg_init_one(&src_sg, master_key, derived_keysize);
-	sg_init_one(&dst_sg, derived_key, derived_keysize);
-	skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
-				   NULL);
-	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
-out:
-	skcipher_request_free(req);
-	crypto_free_skcipher(tfm);
-	return res;
+	err = crypto_sync_skcipher_setkey(tfm, nonce, FSCRYPT_FILE_NONCE_SIZE);
+	if (err == 0) {
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
+		struct scatterlist src_sg, dst_sg;
+
+		skcipher_request_set_callback(req,
+					      CRYPTO_TFM_REQ_MAY_BACKLOG |
+					      CRYPTO_TFM_REQ_MAY_SLEEP,
+					      NULL, NULL);
+		sg_init_one(&src_sg, master_key, derived_keysize);
+		sg_init_one(&dst_sg, derived_key, derived_keysize);
+		skcipher_request_set_crypt(req, &src_sg, &dst_sg,
+					   derived_keysize, NULL);
+		err = crypto_skcipher_encrypt(req);
+	}
+	crypto_free_sync_skcipher(tfm);
+	return err;
 }
 
 /*
/* /*

View File

@@ -10,11 +10,13 @@
  * Modified by Eric Biggers, 2019 for v2 policy support.
  */
 
+#include <linux/export.h>
 #include <linux/fs_context.h>
+#include <linux/mount.h>
 #include <linux/random.h>
 #include <linux/seq_file.h>
 #include <linux/string.h>
-#include <linux/mount.h>
+
 #include "fscrypt_private.h"
/** /**

View File

@@ -51,7 +51,7 @@ int ubifs_encrypt(const struct inode *inode, struct ubifs_data_node *dn,
 	memset(p + in_len, 0, pad_len - in_len);
 
 	err = fscrypt_encrypt_block_inplace(inode, virt_to_page(p), pad_len,
-					    offset_in_page(p), block, GFP_NOFS);
+					    offset_in_page(p), block);
 	if (err) {
 		ubifs_err(c, "fscrypt_encrypt_block_inplace() failed: %d", err);
 		return err;

View File

@@ -314,7 +314,7 @@ struct page *fscrypt_encrypt_pagecache_blocks(struct folio *folio,
 					      size_t len, size_t offs, gfp_t gfp_flags);
 int fscrypt_encrypt_block_inplace(const struct inode *inode, struct page *page,
 				  unsigned int len, unsigned int offs,
-				  u64 lblk_num, gfp_t gfp_flags);
+				  u64 lblk_num);
 
 int fscrypt_decrypt_pagecache_blocks(struct folio *folio, size_t len,
 				     size_t offs);
@@ -487,8 +487,7 @@ static inline struct page *fscrypt_encrypt_pagecache_blocks(struct folio *folio,
 static inline int fscrypt_encrypt_block_inplace(const struct inode *inode,
 						struct page *page,
 						unsigned int len,
-						unsigned int offs, u64 lblk_num,
-						gfp_t gfp_flags)
+						unsigned int offs, u64 lblk_num)
 {
 	return -EOPNOTSUPP;
 }