Hi Fuad,
On 5/14/25 2:34 AM, Fuad Tabba wrote:
Guest memory can be backed by guest_memfd or by anonymous memory. Rename
vma_shift to page_shift and vma_pagesize to page_size to ease
readability in subsequent patches.
Suggested-by: James Houghton <jthoughton@xxxxxxxxxx>
Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 54 ++++++++++++++++++++++----------------------
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9865ada04a81..d756c2b5913f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1479,13 +1479,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
struct vm_area_struct *vma;
- short vma_shift;
+ short page_shift;
void *memcache;
gfn_t gfn;
kvm_pfn_t pfn;
bool logging_active = memslot_is_logging(memslot);
bool force_pte = logging_active || is_protected_kvm_enabled();
- long vma_pagesize, fault_granule;
+ long page_size, fault_granule;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt;
struct page *page;
[...]
@@ -1600,9 +1600,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
/*
* ensure we find the right PFN and lay down the mapping in the right
* place.
*/
- if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
- fault_ipa &= ~(vma_pagesize - 1);
- ipa &= ~(vma_pagesize - 1);
+ if (page_size == PMD_SIZE || page_size == PUD_SIZE) {
+ fault_ipa &= ~(page_size - 1);
+ ipa &= ~(page_size - 1);
}
nit: since we're touching these lines for readability anyway, ALIGN_DOWN() could be used:
fault_ipa = ALIGN_DOWN(fault_ipa, page_size);
ipa = ALIGN_DOWN(ipa, page_size);
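For what it's worth, a quick sanity check (my own userspace sketch, not part of the patch): for power-of-two sizes such as PMD_SIZE and PUD_SIZE, ALIGN_DOWN() from include/linux/align.h gives exactly the same result as the open-coded mask, so the suggestion is purely cosmetic. Assuming a 2 MiB PMD_SIZE for illustration:

    /*
     * Userspace approximation of the kernel's ALIGN_DOWN() macro, used only
     * to show it matches the open-coded "x & ~(size - 1)" form for
     * power-of-two sizes. Not kernel code.
     */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define __ALIGN_MASK(x, mask)   (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a)             __ALIGN_MASK((x), (uint64_t)(a) - 1)
    #define ALIGN_DOWN(x, a)        ALIGN((x) - ((a) - 1), (a))

    int main(void)
    {
            uint64_t fault_ipa = 0x40212345ULL;
            uint64_t page_size = 0x200000ULL;   /* 2 MiB, a typical PMD_SIZE */

            /* Open-coded mask and ALIGN_DOWN() agree for power-of-two sizes. */
            assert((fault_ipa & ~(page_size - 1)) ==
                   ALIGN_DOWN(fault_ipa, page_size));
            printf("aligned: 0x%llx\n",
                   (unsigned long long)ALIGN_DOWN(fault_ipa, page_size));
            return 0;
    }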
Thanks,
Gavin