mm: add folio_expected_ref_count() for reference count calculation

Patch series " JFS: Implement migrate_folio for jfs_metapage_aops" v5.

This patchset addresses a warning that occurs during memory compaction due
to JFS's missing migrate_folio operation.  The warning was introduced by
commit 7ee3647243 ("migrate: Remove call to ->writepage"), which added
explicit warnings when filesystems don't implement migrate_folio.

syzbot reported the following [1]:
  jfs_metapage_aops does not implement migrate_folio
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
  WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
  Modules linked in:
  CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) 
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
  RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
  RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007

To fix this issue, this series implements metapage_migrate_folio() for JFS,
which handles both single and multiple metapages-per-page configurations.

While most filesystems leverage existing migration implementations like
filemap_migrate_folio(), buffer_migrate_folio_norefs() or
buffer_migrate_folio() (which internally use folio_expected_refs()),
JFS's metapage architecture requires special handling of its private data
during migration.  To support this, this series introduces
folio_expected_ref_count(), which calculates external references to a
folio from page/swap cache, private data, and page table mappings.

This standardized implementation replaces the previous ad-hoc
folio_expected_refs() function and enables JFS to accurately determine
whether a folio has unexpected references before attempting migration.

Implement folio_expected_ref_count() to calculate expected folio reference
counts from:
- Page/swap cache (1 per page)
- Private data (1)
- Page table mappings (1 per map)
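
For example, an order-2 folio (4 pages) in the pagecache, with private data
attached and mapped by three page tables, would have an expected reference
count of 4 + 1 + 3 = 8.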

While originally needed for page migration operations, this improved
implementation standardizes reference counting by consolidating all
refcount contributors into a single, reusable function that can benefit
any subsystem needing to detect unexpected references to folios.

folio_expected_ref_count() returns the sum of these external
references without including any reference the caller itself might hold.
Callers comparing against the actual folio_ref_count() must account for
their own references separately.
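
For illustration, a caller holding one reference of its own (e.g., taken via
folio_try_get()) could detect unexpected references as in the sketch below;
the helper name is hypothetical and not part of this series:

  static bool folio_has_unexpected_refs(struct folio *folio)
  {
          /* Add 1 for the reference the caller itself holds. */
          return folio_ref_count(folio) != folio_expected_ref_count(folio) + 1;
  }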

Link: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299 [1]
Link: https://lkml.kernel.org/r/20250430100150.279751-1-shivankg@amd.com
Link: https://lkml.kernel.org/r/20250430100150.279751-2-shivankg@amd.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Co-developed-by: David Hildenbrand <david@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Dave Kleikamp <shaggy@kernel.org>
Cc: Donet Tom <donettom@linux.ibm.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -2114,6 +2114,61 @@ static inline bool folio_maybe_mapped_shared(struct folio *folio)
	return test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids);
}

/**
* folio_expected_ref_count - calculate the expected folio refcount
* @folio: the folio
*
* Calculate the expected folio refcount, taking references from the pagecache,
* swapcache, PG_private and page table mappings into account. Useful in
* combination with folio_ref_count() to detect unexpected references (e.g.,
* GUP or other temporary references).
*
* Does currently not consider references from the LRU cache. If the folio
* was isolated from the LRU (which is the case during migration or split),
* the LRU cache does not apply.
*
* Calling this function on an unmapped folio -- !folio_mapped() -- that is
* locked will return a stable result.
*
* Calling this function on a mapped folio will not result in a stable result,
* because nothing stops additional page table mappings from coming (e.g.,
* fork()) or going (e.g., munmap()).
*
* Calling this function without the folio lock will also not result in a
* stable result: for example, the folio might get dropped from the swapcache
* concurrently.
*
* However, even when called without the folio lock or on a mapped folio,
* this function can be used to detect unexpected references early (for example,
* if it makes sense to even lock the folio and unmap it).
*
* The caller must add any reference (e.g., from folio_try_get()) it might be
* holding itself to the result.
*
* Returns the expected folio refcount.
*/
static inline int folio_expected_ref_count(const struct folio *folio)
{
	const int order = folio_order(folio);
	int ref_count = 0;

	if (WARN_ON_ONCE(folio_test_slab(folio)))
		return 0;

	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}

	/* One reference per page table mapping. */
	return ref_count + folio_mapcount(folio);
}

#ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
static inline int arch_make_folio_accessible(struct folio *folio)
{


@@ -445,20 +445,6 @@ unlock:
}
#endif

static int folio_expected_refs(struct address_space *mapping,
		struct folio *folio)
{
	int refs = 1;

	if (!mapping)
		return refs;

	refs += folio_nr_pages(folio);
	if (folio_test_private(folio))
		refs++;

	return refs;
}

/*
* Replace the folio in the mapping.
*
@@ -601,7 +587,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
int folio_migrate_mapping(struct address_space *mapping,
		struct folio *newfolio, struct folio *folio, int extra_count)
{
-	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
+	int expected_count = folio_expected_ref_count(folio) + extra_count + 1;

	if (folio_ref_count(folio) != expected_count)
		return -EAGAIN;
@@ -618,7 +604,7 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
		struct folio *dst, struct folio *src)
{
	XA_STATE(xas, &mapping->i_pages, folio_index(src));
-	int rc, expected_count = folio_expected_refs(mapping, src);
+	int rc, expected_count = folio_expected_ref_count(src) + 1;

	if (folio_ref_count(src) != expected_count)
		return -EAGAIN;
@@ -749,7 +735,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
		struct folio *src, void *src_private,
		enum migrate_mode mode)
{
-	int rc, expected_count = folio_expected_refs(mapping, src);
+	int rc, expected_count = folio_expected_ref_count(src) + 1;

	/* Check whether src does not have extra refs before we do more work */
	if (folio_ref_count(src) != expected_count)
@@ -837,7 +823,7 @@ static int __buffer_migrate_folio(struct address_space *mapping,
		return migrate_folio(mapping, dst, src, mode);

	/* Check whether page does not have extra refs before we do more work */
-	expected_count = folio_expected_refs(mapping, src);
+	expected_count = folio_expected_ref_count(src) + 1;
	if (folio_ref_count(src) != expected_count)
		return -EAGAIN;