ASoC: Fixes for v7.0
Another smallish batch of fixes and quirks; these days it's AMD that is getting all the DMI entries added. We've got one core fix for a missing list initialisation with auxiliary devices; otherwise it's all fairly small things.

Merge tag 'asoc-fix-v7.0-rc6' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
This commit is contained in:

commit b477ab8893

.mailmap | 2
@@ -316,6 +316,7 @@ Hans Verkuil <hverkuil@kernel.org> <hverkuil-cisco@xs4all.nl>
Hans Verkuil <hverkuil@kernel.org> <hansverk@cisco.com>
Hao Ge <hao.ge@linux.dev> <gehao@kylinos.cn>
Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com>
Harry Yoo <harry@kernel.org> <harry.yoo@oracle.com>
Heiko Carstens <hca@linux.ibm.com> <h.carstens@de.ibm.com>
Heiko Carstens <hca@linux.ibm.com> <heiko.carstens@de.ibm.com>
Heiko Stuebner <heiko@sntech.de> <heiko.stuebner@bqreaders.com>

@@ -587,6 +588,7 @@ Morten Welinder <terra@gnome.org>
Morten Welinder <welinder@anemone.rentec.com>
Morten Welinder <welinder@darter.rentec.com>
Morten Welinder <welinder@troll.com>
Muhammad Usama Anjum <usama.anjum@arm.com> <usama.anjum@collabora.com>
Mukesh Ojha <quic_mojha@quicinc.com> <mojha@codeaurora.org>
Muna Sinada <quic_msinada@quicinc.com> <msinada@codeaurora.org>
Murali Nalajala <quic_mnalajal@quicinc.com> <mnalajal@codeaurora.org>
@@ -85,6 +85,16 @@ In the example, 'Requester ID' means the ID of the device that sent
the error message to the Root Port. Please refer to PCIe specs for other
fields.

The 'TLP Header' is the prefix/header of the TLP that caused the error
in raw hex format. To decode the TLP Header into human-readable form
one may use tlp-tool:

  https://github.com/mmpg-x86/tlp-tool

Example usage::

  curl -L https://git.kernel.org/linus/2ca1c94ce0b6 | tlp-tool --aer

AER Ratelimits
--------------
@@ -149,11 +149,33 @@ For architectures that require cache flushing for DMA coherence
DMA_ATTR_MMIO will not perform any cache flushing. The address
provided must never be mapped cacheable into the CPU.

DMA_ATTR_CPU_CACHE_CLEAN
------------------------
DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
------------------------------------

This attribute indicates the CPU will not dirty any cacheline overlapping this
DMA_FROM_DEVICE/DMA_BIDIRECTIONAL buffer while it is mapped. This allows
multiple small buffers to safely share a cacheline without risk of data
corruption, suppressing DMA debug warnings about overlapping mappings.
All mappings sharing a cacheline should have this attribute.
This attribute indicates that CPU cache lines may overlap for buffers mapped
with DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.

Such overlap may occur when callers map multiple small buffers that reside
within the same cache line. In this case, callers must guarantee that the CPU
will not dirty these cache lines after the mappings are established. When this
condition is met, multiple buffers can safely share a cache line without risking
data corruption.

All mappings that share a cache line must set this attribute to suppress DMA
debug warnings about overlapping mappings.
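The "buffers sharing a cache line" condition above can be modelled in plain C. The helper below is a hypothetical userspace sketch of the overlap check a DMA-debug layer would perform; the 64-byte line size is an assumption (real kernels query the architecture), and nothing here is kernel API:

```c
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64  /* assumed line size, not queried from hardware */

/* Do two buffers touch at least one common cache line?  Computed by
 * comparing the ranges of line indices each buffer spans. */
static int buffers_share_cacheline(uintptr_t a, size_t a_len,
                                   uintptr_t b, size_t b_len)
{
    uintptr_t a_first = a / CACHELINE_SIZE, a_last = (a + a_len - 1) / CACHELINE_SIZE;
    uintptr_t b_first = b / CACHELINE_SIZE, b_last = (b + b_len - 1) / CACHELINE_SIZE;

    return a_first <= b_last && b_first <= a_last;
}
```

Two 32-byte buffers packed into one 64-byte line overlap and would all need the attribute; line-aligned, line-sized buffers never do.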

DMA_ATTR_REQUIRE_COHERENT
-------------------------

DMA mapping requests with the DMA_ATTR_REQUIRE_COHERENT attribute fail on any
system where SWIOTLB or cache management is required. This should only
be used to support uAPI designs that require continuous HW DMA
coherence with userspace processes, for example RDMA and DRM. At a
minimum the memory being mapped must be userspace memory from
pin_user_pages() or similar.

Drivers should consider using dma_mmap_pages() instead of this
interface when building their uAPIs, when possible.

It must never be used in an in-kernel driver that only works with
kernel memory.
@@ -783,6 +783,56 @@ controlled by the "uuid" mount option, which supports these values:
mounted with "uuid=on".


Durability and copy up
----------------------

The fsync(2) system call ensures that the data and metadata of a file
are safely written to the backing storage, which is expected to
guarantee the existence of the information post system crash.

Without an fsync(2) call, there is no guarantee that the data observed
after a system crash will be either the old or the new data; in
practice, the observed data after a crash is often the old data, the
new data, or a mix of both.

When an overlayfs file is modified for the first time, copy up will
create a copy of the lower file and its parent directories in the upper
layer. Since the Linux filesystem API does not enforce any particular
ordering on storing changes without explicit fsync(2) calls, in case
of a system crash, the upper file could end up with no data at all
(i.e. zeros), which would be an unexpected outcome. To avoid this
outcome, overlayfs calls fsync(2) on the upper file before completing
data copy up with rename(2) or link(2) to make the copy up "atomic".
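The "fsync(2) then rename(2)" ordering described above can be sketched in userspace C. The helper name and paths are illustrative, not overlayfs internals; the point is the ordering that makes the publication atomic with respect to a crash:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch of an atomic copy up: the staging file is made durable with
 * fsync() before rename() publishes it, so after a crash a reader sees
 * either no destination file or a fully written one - never zeros. */
static int atomic_copy_up(const char *staging, const char *dst,
                          const char *data)
{
    int fd = open(staging, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0)
        return -1;
    if (write(fd, data, strlen(data)) != (ssize_t)strlen(data) ||
        fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return rename(staging, dst);  /* atomic publication of the durable copy */
}
```

If the crash happens before the rename, only the staging file is affected; if after, the destination is complete.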

By default, overlayfs does not explicitly call fsync(2) on copied up
directories or on metadata-only copy up, so it provides no guarantee to
persist the user's modification unless the user calls fsync(2).
The fsync during copy up only guarantees that if a copy up is observed
after a crash, the observed data is not zeroes or intermediate values
from the copy up staging area.

On traditional local filesystems with a single journal (e.g. ext4, xfs),
fsync on a file also persists the parent directory changes, because they
are usually modified in the same transaction, so metadata durability during
data copy up effectively comes for free. Overlayfs further limits risk by
disallowing network filesystems as upper layer.

Overlayfs can be tuned to prefer performance or durability when storing
to the underlying upper layer. This is controlled by the "fsync" mount
option, which supports these values:

- "auto": (default)
  Call fsync(2) on upper file before completion of data copy up.
  No explicit fsync(2) on directory or metadata-only copy up.
- "strict":
  Call fsync(2) on upper file and directories before completion of any
  copy up.
- "volatile": [*]
  Prefer performance over durability (see `Volatile mount`_)

[*] The mount option "volatile" is an alias to "fsync=volatile".


Volatile mount
--------------
@@ -27,10 +27,10 @@ for details.
Sysfs entries
-------------

The following attributes are supported. Current maxim attribute
The following attributes are supported. Current maximum attribute
is read-write, all other attributes are read-only.

in0_input        Measured voltage in microvolts.
in0_input        Measured voltage in millivolts.

curr1_input      Measured current in microamperes.
curr1_max_alarm  Overcurrent alarm in microamperes.
curr1_input      Measured current in milliamperes.
curr1_max        Overcurrent shutdown threshold in milliamperes.
@@ -51,8 +51,9 @@ temp1_max        Provides thermal control temperature of the CPU package
temp1_crit       Provides shutdown temperature of the CPU package which
                 is also known as the maximum processor junction
                 temperature, Tjmax or Tprochot.
temp1_crit_hyst  Provides the hysteresis value from Tcontrol to Tjmax of
                 the CPU package.
temp1_crit_hyst  Provides the hysteresis temperature of the CPU
                 package. Returns Tcontrol, the temperature at which
                 the critical condition clears.

temp2_label      "DTS"
temp2_input      Provides current temperature of the CPU package scaled

@@ -62,8 +63,9 @@ temp2_max        Provides thermal control temperature of the CPU package
temp2_crit       Provides shutdown temperature of the CPU package which
                 is also known as the maximum processor junction
                 temperature, Tjmax or Tprochot.
temp2_crit_hyst  Provides the hysteresis value from Tcontrol to Tjmax of
                 the CPU package.
temp2_crit_hyst  Provides the hysteresis temperature of the CPU
                 package. Returns Tcontrol, the temperature at which
                 the critical condition clears.

temp3_label      "Tcontrol"
temp3_input      Provides current Tcontrol temperature of the CPU
@@ -8,7 +8,7 @@ Landlock: unprivileged access control
=====================================

:Author: Mickaël Salaün
:Date: January 2026
:Date: March 2026

The goal of Landlock is to enable restriction of ambient rights (e.g. global
filesystem or network access) for a set of processes. Because Landlock
@@ -197,12 +197,27 @@ similar backwards compatibility check is needed for the restrict flags

.. code-block:: c

    __u32 restrict_flags = LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON;
    if (abi < 7) {
        /* Clear logging flags unsupported before ABI 7. */
    __u32 restrict_flags =
        LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON |
        LANDLOCK_RESTRICT_SELF_TSYNC;
    switch (abi) {
    case 1 ... 6:
        /* Removes logging flags for ABI < 7 */
        restrict_flags &= ~(LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF |
                            LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON |
                            LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF);
        __attribute__((fallthrough));
    case 7:
        /*
         * Removes multithreaded enforcement flag for ABI < 8
         *
         * WARNING: Without this flag, calling landlock_restrict_self(2) is
         * only equivalent if the calling process is single-threaded. Below
         * ABI v8 (and as of ABI v8, when not using this flag), a Landlock
         * policy would only be enforced for the calling thread and its
         * children (and not for all threads, including parents and siblings).
         */
        restrict_flags &= ~LANDLOCK_RESTRICT_SELF_TSYNC;
    }
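The fallthrough switch above can be modelled as a pure function to check the masking logic. The flag values below are illustrative placeholders, not the real Landlock UAPI constants; the case-range syntax and the fallthrough mirror the snippet:

```c
#include <stdint.h>

/* Placeholder bits standing in for the LANDLOCK_RESTRICT_SELF_* flags
 * (hypothetical values, for demonstration only). */
#define DEMO_LOG_SAME_EXEC_OFF  (1u << 0)
#define DEMO_LOG_NEW_EXEC_ON    (1u << 1)
#define DEMO_LOG_SUBDOMAINS_OFF (1u << 2)
#define DEMO_TSYNC              (1u << 3)

/* Start from the full flag set and strip whatever the probed ABI
 * does not support, exactly as the documented switch does. */
static uint32_t compat_restrict_flags(int abi)
{
    uint32_t flags = DEMO_LOG_NEW_EXEC_ON | DEMO_TSYNC;

    switch (abi) {
    case 1 ... 6:  /* GCC/Clang case-range extension, as in the original */
        /* Logging flags appeared in ABI 7. */
        flags &= ~(DEMO_LOG_SAME_EXEC_OFF | DEMO_LOG_NEW_EXEC_ON |
                   DEMO_LOG_SUBDOMAINS_OFF);
        __attribute__((fallthrough));
    case 7:
        /* Multithreaded enforcement appeared in ABI 8. */
        flags &= ~DEMO_TSYNC;
    }
    return flags;
}
```

For ABI 8 and later no flags are stripped, which is why the function simply leaves the full set untouched in the default case.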

The next step is to restrict the current thread from gaining more privileges

MAINTAINERS | 23
@@ -3986,7 +3986,7 @@ F: drivers/hwmon/asus-ec-sensors.c
ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
M:	Corentin Chary <corentin.chary@gmail.com>
M:	Luke D. Jones <luke@ljones.dev>
M:	Denis Benato <benato.denis96@gmail.com>
M:	Denis Benato <denis.benato@linux.dev>
L:	platform-driver-x86@vger.kernel.org
S:	Maintained
W:	https://asus-linux.org/

@@ -8628,8 +8628,14 @@ F: drivers/gpu/drm/lima/
F:	include/uapi/drm/lima_drm.h

DRM DRIVERS FOR LOONGSON
M:	Jianmin Lv <lvjianmin@loongson.cn>
M:	Qianhai Wu <wuqianhai@loongson.cn>
R:	Huacai Chen <chenhuacai@kernel.org>
R:	Mingcong Bai <jeffbai@aosc.io>
R:	Xi Ruoyao <xry111@xry111.site>
R:	Icenowy Zheng <zhengxingda@iscas.ac.cn>
L:	dri-devel@lists.freedesktop.org
S:	Orphan
S:	Maintained
T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
F:	drivers/gpu/drm/loongson/

@@ -9613,7 +9619,12 @@ F: include/linux/ext2*

EXT4 FILE SYSTEM
M:	"Theodore Ts'o" <tytso@mit.edu>
M:	Andreas Dilger <adilger.kernel@dilger.ca>
R:	Andreas Dilger <adilger.kernel@dilger.ca>
R:	Baokun Li <libaokun@linux.alibaba.com>
R:	Jan Kara <jack@suse.cz>
R:	Ojaswin Mujoo <ojaswin@linux.ibm.com>
R:	Ritesh Harjani (IBM) <ritesh.list@gmail.com>
R:	Zhang Yi <yi.zhang@huawei.com>
L:	linux-ext4@vger.kernel.org
S:	Maintained
W:	http://ext4.wiki.kernel.org

@@ -12009,7 +12020,6 @@ I2C SUBSYSTEM
M:	Wolfram Sang <wsa+renesas@sang-engineering.com>
L:	linux-i2c@vger.kernel.org
S:	Maintained
W:	https://i2c.wiki.kernel.org/
Q:	https://patchwork.ozlabs.org/project/linux-i2c/list/
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux.git
F:	Documentation/i2c/

@@ -12035,7 +12045,6 @@ I2C SUBSYSTEM HOST DRIVERS
M:	Andi Shyti <andi.shyti@kernel.org>
L:	linux-i2c@vger.kernel.org
S:	Maintained
W:	https://i2c.wiki.kernel.org/
Q:	https://patchwork.ozlabs.org/project/linux-i2c/list/
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/andi.shyti/linux.git
F:	Documentation/devicetree/bindings/i2c/

@@ -16877,7 +16886,7 @@ M: Lorenzo Stoakes <ljs@kernel.org>
R:	Rik van Riel <riel@surriel.com>
R:	Liam R. Howlett <Liam.Howlett@oracle.com>
R:	Vlastimil Babka <vbabka@kernel.org>
R:	Harry Yoo <harry.yoo@oracle.com>
R:	Harry Yoo <harry@kernel.org>
R:	Jann Horn <jannh@google.com>
L:	linux-mm@kvack.org
S:	Maintained

@@ -24343,7 +24352,7 @@ F: drivers/nvmem/layouts/sl28vpd.c

SLAB ALLOCATOR
M:	Vlastimil Babka <vbabka@kernel.org>
M:	Harry Yoo <harry.yoo@oracle.com>
M:	Harry Yoo <harry@kernel.org>
M:	Andrew Morton <akpm@linux-foundation.org>
R:	Hao Li <hao.li@linux.dev>
R:	Christoph Lameter <cl@gentwo.org>
Makefile | 4

@@ -2,7 +2,7 @@
VERSION = 7
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Baby Opossum Posse

# *DOCUMENTATION*

@@ -1654,7 +1654,7 @@ CLEAN_FILES += vmlinux.symvers modules-only.symvers \
	       modules.builtin.ranges vmlinux.o.map vmlinux.unstripped \
	       compile_commands.json rust/test \
	       rust-project.json .vmlinux.objs .vmlinux.export.c \
	       .builtin-dtbs-list .builtin-dtb.S
	       .builtin-dtbs-list .builtin-dtbs.S

# Directories & files removed with 'make mrproper'
MRPROPER_FILES += include/config include/generated \
@@ -1753,7 +1753,7 @@ int __kvm_at_swap_desc(struct kvm *kvm, gpa_t ipa, u64 old, u64 new)
	if (!writable)
		return -EPERM;

	ptep = (u64 __user *)hva + offset;
	ptep = (void __user *)hva + offset;
	if (cpus_have_final_cap(ARM64_HAS_LSE_ATOMICS))
		r = __lse_swap_desc(ptep, old, new);
	else
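The one-line fix above is the classic pointer-arithmetic scaling bug: adding a byte offset to a `u64 *` advances the pointer by `offset * sizeof(u64)` bytes, while byte-granular arithmetic (the kernel uses `void *`, a GNU C extension; `char *` shown here for portability) advances by single bytes. A minimal userspace illustration that only computes addresses, never dereferencing them:

```c
#include <stdint.h>

/* Reproduce both address computations from the diff, as integers. */
static uintptr_t scaled_add(uintptr_t hva, uintptr_t offset)
{
    return (uintptr_t)((uint64_t *)hva + offset);  /* wrong: scales by 8 */
}

static uintptr_t byte_add(uintptr_t hva, uintptr_t offset)
{
    return (uintptr_t)((char *)hva + offset);      /* right: byte-granular */
}
```

With `hva = 0x1000` and `offset = 8`, the scaled version lands 64 bytes in instead of 8, which is exactly the kind of off-target guest-descriptor access the fix prevents.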
|
|||
kvm_vcpu_set_be(vcpu);
|
||||
|
||||
*vcpu_pc(vcpu) = target_pc;
|
||||
|
||||
/*
|
||||
* We may come from a state where either a PC update was
|
||||
* pending (SMC call resulting in PC being increpented to
|
||||
* skip the SMC) or a pending exception. Make sure we get
|
||||
* rid of all that, as this cannot be valid out of reset.
|
||||
*
|
||||
* Note that clearing the exception mask also clears PC
|
||||
* updates, but that's an implementation detail, and we
|
||||
* really want to make it explicit.
|
||||
*/
|
||||
vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
|
||||
vcpu_clear_flag(vcpu, EXCEPT_MASK);
|
||||
vcpu_clear_flag(vcpu, INCREMENT_PC);
|
||||
vcpu_set_reg(vcpu, 0, reset_state.r0);
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@@ -41,4 +41,40 @@
	.cfi_endproc; \
	SYM_END(name, SYM_T_NONE)

/*
 * This is for the signal handler trampoline, which is used as the return
 * address of the signal handlers in userspace instead of being called
 * normally. The long-standing libgcc bug https://gcc.gnu.org/PR124050
 * requires a nop between .cfi_startproc and the actual address of the
 * trampoline, so we cannot simply use SYM_FUNC_START.
 *
 * This wrapper also contains all the .cfi_* directives for recovering
 * the content of the GPRs and the "return address" (where the rt_sigreturn
 * syscall will jump to), assuming there is a struct rt_sigframe (whose
 * struct sigcontext contains the information we need to recover) at
 * $sp. The "DWARF for the LoongArch(TM) Architecture" manual states that
 * column 0 is for $zero, but it does not make much sense to save/restore
 * the hardware zero register. Repurpose this column here for the return
 * address (as it is not the content of $ra, we cannot use the default
 * column 3).
 */
#define SYM_SIGFUNC_START(name) \
	.cfi_startproc; \
	.cfi_signal_frame; \
	.cfi_def_cfa 3, RT_SIGFRAME_SC; \
	.cfi_return_column 0; \
	.cfi_offset 0, SC_PC; \
	\
	.irp num, 1, 2, 3, 4, 5, 6, 7, 8, \
		  9, 10, 11, 12, 13, 14, 15, 16, \
		  17, 18, 19, 20, 21, 22, 23, 24, \
		  25, 26, 27, 28, 29, 30, 31; \
	.cfi_offset \num, SC_REGS + \num * SZREG; \
	.endr; \
	\
	nop; \
	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)

#define SYM_SIGFUNC_END(name) SYM_FUNC_END(name)

#endif
@@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0+ */

#include <asm/siginfo.h>
#include <asm/ucontext.h>

struct rt_sigframe {
	struct siginfo rs_info;
	struct ucontext rs_uctx;
};
@@ -16,6 +16,7 @@
#include <asm/ptrace.h>
#include <asm/processor.h>
#include <asm/ftrace.h>
#include <asm/sigframe.h>
#include <vdso/datapage.h>

static void __used output_ptreg_defines(void)

@@ -220,6 +221,7 @@ static void __used output_sc_defines(void)
	COMMENT("Linux sigcontext offsets.");
	OFFSET(SC_REGS, sigcontext, sc_regs);
	OFFSET(SC_PC, sigcontext, sc_pc);
	OFFSET(RT_SIGFRAME_SC, rt_sigframe, rs_uctx.uc_mcontext);
	BLANK();
}
@@ -42,16 +42,15 @@ static int __init init_cpu_fullname(void)
	int cpu, ret;
	char *cpuname;
	const char *model;
	struct device_node *root;

	/* Parsing cpuname from DTS model property */
	root = of_find_node_by_path("/");
	ret = of_property_read_string(root, "model", &model);
	ret = of_property_read_string(of_root, "model", &model);
	if (ret == 0) {
		cpuname = kstrdup(model, GFP_KERNEL);
		if (!cpuname)
			return -ENOMEM;
		loongson_sysconf.cpuname = strsep(&cpuname, " ");
	}
	of_node_put(root);

	if (loongson_sysconf.cpuname && !strncmp(loongson_sysconf.cpuname, "Loongson", 8)) {
		for (cpu = 0; cpu < NR_CPUS; cpu++)
|
|||
#include <asm/cpu-features.h>
|
||||
#include <asm/fpu.h>
|
||||
#include <asm/lbt.h>
|
||||
#include <asm/sigframe.h>
|
||||
#include <asm/ucontext.h>
|
||||
#include <asm/vdso.h>
|
||||
|
||||
|
|
@ -51,11 +52,6 @@
|
|||
#define lock_lbt_owner() ({ preempt_disable(); pagefault_disable(); })
|
||||
#define unlock_lbt_owner() ({ pagefault_enable(); preempt_enable(); })
|
||||
|
||||
struct rt_sigframe {
|
||||
struct siginfo rs_info;
|
||||
struct ucontext rs_uctx;
|
||||
};
|
||||
|
||||
struct _ctx_layout {
|
||||
struct sctx_info *addr;
|
||||
unsigned int size;
|
||||
|
|
|
|||
|
|
@ -83,7 +83,7 @@ static inline void eiointc_update_sw_coremap(struct loongarch_eiointc *s,
|
|||
|
||||
if (!(s->status & BIT(EIOINTC_ENABLE_CPU_ENCODE))) {
|
||||
cpuid = ffs(cpuid) - 1;
|
||||
cpuid = (cpuid >= 4) ? 0 : cpuid;
|
||||
cpuid = ((cpuid < 0) || (cpuid >= 4)) ? 0 : cpuid;
|
||||
}
|
||||
|
||||
vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);
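The extra `cpuid < 0` check matters because `ffs(0)` returns 0, so `ffs(cpuid) - 1` yields -1 when the encoded coremap is empty; the old clamp only caught the upper bound and let the negative index through. A userspace model of the fixed clamp, using POSIX `ffs()` and an assumed 4-CPU encode range:

```c
#include <strings.h>

#define ENCODE_CPUS 4  /* assumed range of the encoded cpuid, as in the fix */

/* Map a one-hot-ish coremap value to a CPU index, falling back to 0
 * for both the empty map (ffs(0) == 0, giving -1) and out-of-range bits. */
static int clamp_encoded_cpuid(unsigned int coremap)
{
    int cpuid = ffs(coremap) - 1;  /* -1 when no bit is set */

    return ((cpuid < 0) || (cpuid >= ENCODE_CPUS)) ? 0 : cpuid;
}
```

An empty coremap and a bit above the encode range both collapse to CPU 0 instead of producing a negative table index.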

@@ -472,34 +472,34 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
	switch (addr) {
	case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
		offset = (addr - EIOINTC_NODETYPE_START) / 4;
		p = s->nodetype + offset * 4;
		p = (void *)s->nodetype + offset * 4;
		break;
	case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
		offset = (addr - EIOINTC_IPMAP_START) / 4;
		p = &s->ipmap + offset * 4;
		p = (void *)&s->ipmap + offset * 4;
		break;
	case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
		offset = (addr - EIOINTC_ENABLE_START) / 4;
		p = s->enable + offset * 4;
		p = (void *)s->enable + offset * 4;
		break;
	case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
		offset = (addr - EIOINTC_BOUNCE_START) / 4;
		p = s->bounce + offset * 4;
		p = (void *)s->bounce + offset * 4;
		break;
	case EIOINTC_ISR_START ... EIOINTC_ISR_END:
		offset = (addr - EIOINTC_ISR_START) / 4;
		p = s->isr + offset * 4;
		p = (void *)s->isr + offset * 4;
		break;
	case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
		if (cpu >= s->num_cpu)
			return -EINVAL;

		offset = (addr - EIOINTC_COREISR_START) / 4;
		p = s->coreisr[cpu] + offset * 4;
		p = (void *)s->coreisr[cpu] + offset * 4;
		break;
	case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
		offset = (addr - EIOINTC_COREMAP_START) / 4;
		p = s->coremap + offset * 4;
		p = (void *)s->coremap + offset * 4;
		break;
	default:
		kvm_err("%s: unknown eiointc register, addr = %d\n", __func__, addr);
@@ -588,6 +588,9 @@ struct kvm_vcpu *kvm_get_vcpu_by_cpuid(struct kvm *kvm, int cpuid)
{
	struct kvm_phyid_map *map;

	if (cpuid < 0)
		return NULL;

	if (cpuid >= KVM_MAX_PHYID)
		return NULL;
@@ -5,9 +5,11 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/delay.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/vgaarb.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <asm/cacheflush.h>
#include <asm/loongson.h>


@@ -15,6 +17,9 @@
#define PCI_DEVICE_ID_LOONGSON_DC1	0x7a06
#define PCI_DEVICE_ID_LOONGSON_DC2	0x7a36
#define PCI_DEVICE_ID_LOONGSON_DC3	0x7a46
#define PCI_DEVICE_ID_LOONGSON_GPU1	0x7a15
#define PCI_DEVICE_ID_LOONGSON_GPU2	0x7a25
#define PCI_DEVICE_ID_LOONGSON_GPU3	0x7a35

int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)

@@ -99,3 +104,78 @@ static void pci_fixup_vgadev(struct pci_dev *pdev)
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_DC1, pci_fixup_vgadev);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_DC2, pci_fixup_vgadev);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_DC3, pci_fixup_vgadev);

#define CRTC_NUM_MAX		2
#define CRTC_OUTPUT_ENABLE	0x100

static void loongson_gpu_fixup_dma_hang(struct pci_dev *pdev, bool on)
{
	u32 i, val, count, crtc_offset, device;
	void __iomem *crtc_reg, *base, *regbase;
	static u32 crtc_status[CRTC_NUM_MAX] = { 0 };

	base = pdev->bus->ops->map_bus(pdev->bus, pdev->devfn + 1, 0);
	device = readw(base + PCI_DEVICE_ID);

	regbase = ioremap(readq(base + PCI_BASE_ADDRESS_0) & ~0xffull, SZ_64K);
	if (!regbase) {
		pci_err(pdev, "Failed to ioremap()\n");
		return;
	}

	switch (device) {
	case PCI_DEVICE_ID_LOONGSON_DC2:
		crtc_reg = regbase + 0x1240;
		crtc_offset = 0x10;
		break;
	case PCI_DEVICE_ID_LOONGSON_DC3:
		crtc_reg = regbase;
		crtc_offset = 0x400;
		break;
	}

	for (i = 0; i < CRTC_NUM_MAX; i++, crtc_reg += crtc_offset) {
		val = readl(crtc_reg);

		if (!on)
			crtc_status[i] = val;

		/* No need to fixup if the status is off at startup. */
		if (!(crtc_status[i] & CRTC_OUTPUT_ENABLE))
			continue;

		if (on)
			val |= CRTC_OUTPUT_ENABLE;
		else
			val &= ~CRTC_OUTPUT_ENABLE;

		mb();
		writel(val, crtc_reg);

		for (count = 0; count < 40; count++) {
			val = readl(crtc_reg) & CRTC_OUTPUT_ENABLE;
			if ((on && val) || (!on && !val))
				break;
			udelay(1000);
		}

		pci_info(pdev, "DMA hang fixup at reg[0x%lx]: 0x%x\n",
			 (unsigned long)crtc_reg & 0xffff, readl(crtc_reg));
	}

	iounmap(regbase);
}
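The bounded poll at the end of the fixup is a common MMIO pattern: write, then wait for the bit to settle instead of sleeping a fixed time. A generic userspace sketch where the callback stands in for `readl()` (names are illustrative, and the kernel's `udelay(1000)` between reads is omitted):

```c
#include <stdbool.h>

/* Poll read_reg() up to `tries` times until the masked bits match the
 * requested on/off state; returns false if the state never settles. */
static bool wait_for_output_state(unsigned int (*read_reg)(void),
                                  unsigned int mask, bool on, int tries)
{
    for (int i = 0; i < tries; i++) {
        unsigned int val = read_reg() & mask;

        if ((on && val) || (!on && !val))
            return true;
        /* the kernel version delays here before re-reading */
    }
    return false;
}

/* Tiny fake registers for demonstration only. */
static unsigned int fake_readl_on(void)  { return 0x100; }
static unsigned int fake_readl_off(void) { return 0x000; }
```

Bounding the loop (40 tries in the fixup above) keeps a wedged CRTC from hanging boot while still giving the hardware time to react.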

static void pci_fixup_dma_hang_early(struct pci_dev *pdev)
{
	loongson_gpu_fixup_dma_hang(pdev, false);
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_GPU2, pci_fixup_dma_hang_early);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_GPU3, pci_fixup_dma_hang_early);

static void pci_fixup_dma_hang_final(struct pci_dev *pdev)
{
	loongson_gpu_fixup_dma_hang(pdev, true);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_GPU2, pci_fixup_dma_hang_final);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, PCI_DEVICE_ID_LOONGSON_GPU3, pci_fixup_dma_hang_final);
@@ -26,7 +26,7 @@ cflags-vdso := $(ccflags-vdso) \
	$(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
	-std=gnu11 -fms-extensions -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
	$(call cc-option, -fno-asynchronous-unwind-tables) \
	$(call cc-option, -fasynchronous-unwind-tables) \
	$(call cc-option, -fno-stack-protector)
aflags-vdso := $(ccflags-vdso) \
	-D__ASSEMBLY__ -Wa,-gdwarf-2

@@ -41,7 +41,7 @@ endif

# VDSO linker flags.
ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
	$(filter -E%,$(KBUILD_CFLAGS)) -shared --build-id -T
	$(filter -E%,$(KBUILD_CFLAGS)) -shared --build-id --eh-frame-hdr -T

#
# Shared build commands.
|
|||
|
||||
#include <asm/regdef.h>
|
||||
#include <asm/asm.h>
|
||||
#include <asm/asm-offsets.h>
|
||||
|
||||
.section .text
|
||||
.cfi_sections .debug_frame
|
||||
|
||||
SYM_FUNC_START(__vdso_rt_sigreturn)
|
||||
SYM_SIGFUNC_START(__vdso_rt_sigreturn)
|
||||
|
||||
li.w a7, __NR_rt_sigreturn
|
||||
syscall 0
|
||||
|
||||
SYM_FUNC_END(__vdso_rt_sigreturn)
|
||||
SYM_SIGFUNC_END(__vdso_rt_sigreturn)
|
||||
|
|
|
|||
|
|
@@ -62,8 +62,8 @@ do { \
 * @size: number of elements in array
 */
#define array_index_mask_nospec array_index_mask_nospec
static inline unsigned long array_index_mask_nospec(unsigned long index,
						    unsigned long size)
static __always_inline unsigned long array_index_mask_nospec(unsigned long index,
							     unsigned long size)
{
	unsigned long mask;
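The LoongArch version computes its mask in inline asm; the same idea in portable C (a userspace sketch of the kernel's generic fallback) returns all-ones when `index < size` and zero otherwise, without a branch that could be mispredicted:

```c
#include <stdint.h>

/* Branchless bounds mask: size - 1 - index goes negative exactly when
 * index >= size, which sets the sign bit; complementing and arithmetic
 * right-shifting then smears the in-bounds/out-of-bounds verdict across
 * the whole word. */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
    return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}
```

Callers AND the index with the mask after the bounds check, so even a speculated out-of-bounds index collapses to 0 before it reaches an array load.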
@@ -710,6 +710,9 @@ void kvm_arch_crypto_clear_masks(struct kvm *kvm);
void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
			       unsigned long *aqm, unsigned long *adm);

#define SIE64_RETURN_NORMAL	0
#define SIE64_RETURN_MCCK	1

int __sie64a(phys_addr_t sie_block_phys, struct kvm_s390_sie_block *sie_block, u64 *rsa,
	     unsigned long gasce);
@@ -62,7 +62,7 @@ struct stack_frame {
	struct {
		unsigned long sie_control_block;
		unsigned long sie_savearea;
		unsigned long sie_reason;
		unsigned long sie_return;
		unsigned long sie_flags;
		unsigned long sie_control_block_phys;
		unsigned long sie_guest_asce;
@@ -63,7 +63,7 @@ int main(void)
	OFFSET(__SF_EMPTY, stack_frame, empty[0]);
	OFFSET(__SF_SIE_CONTROL, stack_frame, sie_control_block);
	OFFSET(__SF_SIE_SAVEAREA, stack_frame, sie_savearea);
	OFFSET(__SF_SIE_REASON, stack_frame, sie_reason);
	OFFSET(__SF_SIE_RETURN, stack_frame, sie_return);
	OFFSET(__SF_SIE_FLAGS, stack_frame, sie_flags);
	OFFSET(__SF_SIE_CONTROL_PHYS, stack_frame, sie_control_block_phys);
	OFFSET(__SF_SIE_GUEST_ASCE, stack_frame, sie_guest_asce);
@@ -200,7 +200,7 @@ SYM_FUNC_START(__sie64a)
	stg	%r3,__SF_SIE_CONTROL(%r15)	# ...and virtual addresses
	stg	%r4,__SF_SIE_SAVEAREA(%r15)	# save guest register save area
	stg	%r5,__SF_SIE_GUEST_ASCE(%r15)	# save guest asce
	xc	__SF_SIE_REASON(8,%r15),__SF_SIE_REASON(%r15)	# reason code = 0
	xc	__SF_SIE_RETURN(8,%r15),__SF_SIE_RETURN(%r15)	# return code = 0
	mvc	__SF_SIE_FLAGS(8,%r15),__TI_flags(%r14)	# copy thread flags
	lmg	%r0,%r13,0(%r4)			# load guest gprs 0-13
	mvi	__TI_sie(%r14),1

@@ -237,7 +237,7 @@ SYM_INNER_LABEL(sie_exit, SYM_L_GLOBAL)
	xgr	%r4,%r4
	xgr	%r5,%r5
	lmg	%r6,%r14,__SF_GPRS(%r15)	# restore kernel registers
	lg	%r2,__SF_SIE_REASON(%r15)	# return exit reason code
	lg	%r2,__SF_SIE_RETURN(%r15)	# return sie return code
	BR_EX	%r14
SYM_FUNC_END(__sie64a)
EXPORT_SYMBOL(__sie64a)

@@ -271,6 +271,7 @@ SYM_CODE_START(system_call)
	xgr	%r9,%r9
	xgr	%r10,%r10
	xgr	%r11,%r11
	xgr	%r12,%r12
	la	%r2,STACK_FRAME_OVERHEAD(%r15)	# pointer to pt_regs
	mvc	__PT_R8(64,%r2),__LC_SAVE_AREA(%r13)
	MBEAR	%r2,%r13

@@ -407,6 +408,7 @@ SYM_CODE_START(\name)
	xgr	%r6,%r6
	xgr	%r7,%r7
	xgr	%r10,%r10
	xgr	%r12,%r12
	xc	__PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
	mvc	__PT_R8(64,%r11),__LC_SAVE_AREA(%r13)
	MBEAR	%r11,%r13

@@ -496,6 +498,7 @@ SYM_CODE_START(mcck_int_handler)
	xgr	%r6,%r6
	xgr	%r7,%r7
	xgr	%r10,%r10
	xgr	%r12,%r12
	stmg	%r8,%r9,__PT_PSW(%r11)
	xc	__PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
@@ -487,8 +487,8 @@ void notrace s390_do_machine_check(struct pt_regs *regs)
	mcck_dam_code = (mci.val & MCIC_SUBCLASS_MASK);
	if (test_cpu_flag(CIF_MCCK_GUEST) &&
	    (mcck_dam_code & MCCK_CODE_NO_GUEST) != mcck_dam_code) {
		/* Set exit reason code for host's later handling */
		*((long *)(regs->gprs[15] + __SF_SIE_REASON)) = -EINTR;
		/* Set sie return code for host's later handling */
		((struct stack_frame *)regs->gprs[15])->sie_return = SIE64_RETURN_MCCK;
	}
	clear_cpu_flag(CIF_MCCK_GUEST);
@@ -13,6 +13,7 @@
 */

#include <linux/cpufeature.h>
#include <linux/nospec.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/mm.h>

@@ -131,8 +132,10 @@ void noinstr __do_syscall(struct pt_regs *regs, int per_trap)
	if (unlikely(test_and_clear_pt_regs_flag(regs, PIF_SYSCALL_RET_SET)))
		goto out;
	regs->gprs[2] = -ENOSYS;
	if (likely(nr < NR_syscalls))
	if (likely(nr < NR_syscalls)) {
		nr = array_index_nospec(nr, NR_syscalls);
		regs->gprs[2] = sys_call_table[nr](regs);
	}
out:
	syscall_exit_to_user_mode(regs);
}
|
||||
|
|
|
|||
|
|
@@ -134,32 +134,6 @@ int dat_set_asce_limit(struct kvm_s390_mmu_cache *mc, union asce *asce, int newt
	return 0;
}

/**
 * dat_crstep_xchg() - Exchange a gmap CRSTE with another.
 * @crstep: Pointer to the CRST entry
 * @new: Replacement entry.
 * @gfn: The affected guest address.
 * @asce: The ASCE of the address space.
 *
 * Context: This function is assumed to be called with kvm->mmu_lock held.
 */
void dat_crstep_xchg(union crste *crstep, union crste new, gfn_t gfn, union asce asce)
{
	if (crstep->h.i) {
		WRITE_ONCE(*crstep, new);
		return;
	} else if (cpu_has_edat2()) {
		crdte_crste(crstep, *crstep, new, gfn, asce);
		return;
	}

	if (machine_has_tlb_guest())
		idte_crste(crstep, gfn, IDTE_GUEST_ASCE, asce, IDTE_GLOBAL);
	else
		idte_crste(crstep, gfn, 0, NULL_ASCE, IDTE_GLOBAL);
	WRITE_ONCE(*crstep, new);
}

/**
 * dat_crstep_xchg_atomic() - Atomically exchange a gmap CRSTE with another.
 * @crstep: Pointer to the CRST entry.

@@ -175,8 +149,8 @@ void dat_crstep_xchg(union crste *crstep, union crste new, gfn_t gfn, union asce
 *
 * Return: %true if the exchange was successful.
 */
bool dat_crstep_xchg_atomic(union crste *crstep, union crste old, union crste new, gfn_t gfn,
			    union asce asce)
bool __must_check dat_crstep_xchg_atomic(union crste *crstep, union crste old, union crste new,
					 gfn_t gfn, union asce asce)
{
	if (old.h.i)
		return arch_try_cmpxchg((long *)crstep, &old.val, new.val);

@@ -292,6 +266,7 @@ static int dat_split_ste(struct kvm_s390_mmu_cache *mc, union pmd *pmdp, gfn_t g
			pt->ptes[i].val = init.val | i * PAGE_SIZE;
		/* No need to take locks as the page table is not installed yet. */
		pgste_init.prefix_notif = old.s.fc1.prefix_notif;
		pgste_init.vsie_notif = old.s.fc1.vsie_notif;
		pgste_init.pcl = uses_skeys && init.h.i;
		dat_init_pgstes(pt, pgste_init.val);
	} else {

@@ -893,7 +868,8 @@ static long _dat_slot_crste(union crste *crstep, gfn_t gfn, gfn_t next, struct d

	/* This table entry needs to be updated. */
	if (walk->start <= gfn && walk->end >= next) {
		dat_crstep_xchg_atomic(crstep, crste, new_crste, gfn, walk->asce);
		if (!dat_crstep_xchg_atomic(crstep, crste, new_crste, gfn, walk->asce))
			return -EINVAL;
		/* A lower level table was present, needs to be freed. */
		if (!crste.h.fc && !crste.h.i) {
			if (is_pmd(crste))

@@ -1021,67 +997,21 @@ bool dat_test_age_gfn(union asce asce, gfn_t start, gfn_t end)
	return _dat_walk_gfn_range(start, end, asce, &test_age_ops, 0, NULL) > 0;
}

int dat_link(struct kvm_s390_mmu_cache *mc, union asce asce, int level,
	     bool uses_skeys, struct guest_fault *f)
{
	union crste oldval, newval;
	union pte newpte, oldpte;
	union pgste pgste;
	int rc = 0;

	rc = dat_entry_walk(mc, f->gfn, asce, DAT_WALK_ALLOC_CONTINUE, level, &f->crstep, &f->ptep);
	if (rc == -EINVAL || rc == -ENOMEM)
		return rc;
	if (rc)
		return -EAGAIN;

	if (WARN_ON_ONCE(unlikely(get_level(f->crstep, f->ptep) > level)))
		return -EINVAL;

	if (f->ptep) {
		pgste = pgste_get_lock(f->ptep);
		oldpte = *f->ptep;
		newpte = _pte(f->pfn, f->writable, f->write_attempt | oldpte.s.d, !f->page);
		newpte.s.sd = oldpte.s.sd;
		oldpte.s.sd = 0;
		if (oldpte.val == _PTE_EMPTY.val || oldpte.h.pfra == f->pfn) {
			pgste = __dat_ptep_xchg(f->ptep, pgste, newpte, f->gfn, asce, uses_skeys);
			if (f->callback)
				f->callback(f);
		} else {
			rc = -EAGAIN;
		}
		pgste_set_unlock(f->ptep, pgste);
	} else {
		oldval = READ_ONCE(*f->crstep);
		newval = _crste_fc1(f->pfn, oldval.h.tt, f->writable,
				    f->write_attempt | oldval.s.fc1.d);
		newval.s.fc1.sd = oldval.s.fc1.sd;
		if (oldval.val != _CRSTE_EMPTY(oldval.h.tt).val &&
		    crste_origin_large(oldval) != crste_origin_large(newval))
			return -EAGAIN;
		if (!dat_crstep_xchg_atomic(f->crstep, oldval, newval, f->gfn, asce))
			return -EAGAIN;
		if (f->callback)
			f->callback(f);
	}

	return rc;
}

static long dat_set_pn_crste(union crste *crstep, gfn_t gfn, gfn_t next, struct dat_walk *walk)
{
	union crste crste = READ_ONCE(*crstep);
	union crste newcrste, oldcrste;
	int *n = walk->priv;

	if (!crste.h.fc || crste.h.i || crste.h.p)
		return 0;

	do {
		oldcrste = READ_ONCE(*crstep);
		if (!oldcrste.h.fc || oldcrste.h.i || oldcrste.h.p)
			return 0;
		if (oldcrste.s.fc1.prefix_notif)
			break;
		newcrste = oldcrste;
		newcrste.s.fc1.prefix_notif = 1;
	} while (!dat_crstep_xchg_atomic(crstep, oldcrste, newcrste, gfn, walk->asce));
	*n = 2;
	if (crste.s.fc1.prefix_notif)
		return 0;
	crste.s.fc1.prefix_notif = 1;
	dat_crstep_xchg(crstep, crste, gfn, walk->asce);
	return 0;
}

@@ -160,14 +160,14 @@ union pmd {
		unsigned long :44;	/* HW */
		unsigned long : 3;	/* Unused */
		unsigned long : 1;	/* HW */
		unsigned long s : 1;	/* Special */
		unsigned long w : 1;	/* Writable soft-bit */
		unsigned long r : 1;	/* Readable soft-bit */
		unsigned long d : 1;	/* Dirty */
		unsigned long y : 1;	/* Young */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long : 3;	/* HW */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long vsie_notif : 1;	/* Referenced in a shadow table */
		unsigned long : 1;	/* Unused */
		unsigned long : 4;	/* HW */
		unsigned long sd : 1;	/* Soft-Dirty */
		unsigned long pr : 1;	/* Present */

@@ -183,14 +183,14 @@ union pud {
		unsigned long :33;	/* HW */
		unsigned long :14;	/* Unused */
		unsigned long : 1;	/* HW */
		unsigned long s : 1;	/* Special */
		unsigned long w : 1;	/* Writable soft-bit */
		unsigned long r : 1;	/* Readable soft-bit */
		unsigned long d : 1;	/* Dirty */
		unsigned long y : 1;	/* Young */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long : 3;	/* HW */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long vsie_notif : 1;	/* Referenced in a shadow table */
		unsigned long : 1;	/* Unused */
		unsigned long : 4;	/* HW */
		unsigned long sd : 1;	/* Soft-Dirty */
		unsigned long pr : 1;	/* Present */

@@ -254,14 +254,14 @@ union crste {
	struct {
		unsigned long :47;
		unsigned long : 1;	/* HW (should be 0) */
		unsigned long s : 1;	/* Special */
		unsigned long w : 1;	/* Writable */
		unsigned long r : 1;	/* Readable */
		unsigned long d : 1;	/* Dirty */
		unsigned long y : 1;	/* Young */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long : 3;	/* HW */
		unsigned long prefix_notif : 1;	/* Guest prefix invalidation notification */
		unsigned long vsie_notif : 1;	/* Referenced in a shadow table */
		unsigned long : 1;
		unsigned long : 4;	/* HW */
		unsigned long sd : 1;	/* Soft-Dirty */
		unsigned long pr : 1;	/* Present */

@@ -540,8 +540,6 @@ int dat_set_slot(struct kvm_s390_mmu_cache *mc, union asce asce, gfn_t start, gf
		 u16 type, u16 param);
int dat_set_prefix_notif_bit(union asce asce, gfn_t gfn);
bool dat_test_age_gfn(union asce asce, gfn_t start, gfn_t end);
int dat_link(struct kvm_s390_mmu_cache *mc, union asce asce, int level,
	     bool uses_skeys, struct guest_fault *f);

int dat_perform_essa(union asce asce, gfn_t gfn, int orc, union essa_state *state, bool *dirty);
long dat_reset_cmma(union asce asce, gfn_t start_gfn);

@@ -938,11 +936,14 @@ static inline bool dat_pudp_xchg_atomic(union pud *pudp, union pud old, union pu
	return dat_crstep_xchg_atomic(_CRSTEP(pudp), _CRSTE(old), _CRSTE(new), gfn, asce);
}

static inline void dat_crstep_clear(union crste *crstep, gfn_t gfn, union asce asce)
static inline union crste dat_crstep_clear_atomic(union crste *crstep, gfn_t gfn, union asce asce)
{
	union crste newcrste = _CRSTE_EMPTY(crstep->h.tt);
	union crste oldcrste, empty = _CRSTE_EMPTY(crstep->h.tt);

	dat_crstep_xchg(crstep, newcrste, gfn, asce);
	do {
		oldcrste = READ_ONCE(*crstep);
	} while (!dat_crstep_xchg_atomic(crstep, oldcrste, empty, gfn, asce));
	return oldcrste;
}

static inline int get_level(union crste *crstep, union pte *ptep)

@@ -1434,17 +1434,27 @@ static int _do_shadow_pte(struct gmap *sg, gpa_t raddr, union pte *ptep_h, union
	if (rc)
		return rc;

	pgste = pgste_get_lock(ptep_h);
	newpte = _pte(f->pfn, f->writable, !p, 0);
	newpte.s.d |= ptep->s.d;
	newpte.s.sd |= ptep->s.sd;
	newpte.h.p &= ptep->h.p;
	pgste = _gmap_ptep_xchg(sg->parent, ptep_h, newpte, pgste, f->gfn, false);
	pgste.vsie_notif = 1;
	if (!pgste_get_trylock(ptep_h, &pgste))
		return -EAGAIN;
	newpte = _pte(f->pfn, f->writable, !p, ptep_h->s.s);
	newpte.s.d |= ptep_h->s.d;
	newpte.s.sd |= ptep_h->s.sd;
	newpte.h.p &= ptep_h->h.p;
	if (!newpte.h.p && !f->writable) {
		rc = -EOPNOTSUPP;
	} else {
		pgste = _gmap_ptep_xchg(sg->parent, ptep_h, newpte, pgste, f->gfn, false);
		pgste.vsie_notif = 1;
	}
	pgste_set_unlock(ptep_h, pgste);
	if (rc)
		return rc;
	if (!sg->parent)
		return -EAGAIN;

	newpte = _pte(f->pfn, 0, !p, 0);
	pgste = pgste_get_lock(ptep);
	if (!pgste_get_trylock(ptep, &pgste))
		return -EAGAIN;
	pgste = __dat_ptep_xchg(ptep, pgste, newpte, gpa_to_gfn(raddr), sg->asce, uses_skeys(sg));
	pgste_set_unlock(ptep, pgste);

@@ -1454,7 +1464,7 @@ static int _do_shadow_pte(struct gmap *sg, gpa_t raddr, union pte *ptep_h, union
static int _do_shadow_crste(struct gmap *sg, gpa_t raddr, union crste *host, union crste *table,
			    struct guest_fault *f, bool p)
{
	union crste newcrste;
	union crste newcrste, oldcrste;
	gfn_t gfn;
	int rc;

@@ -1467,16 +1477,28 @@ static int _do_shadow_crste(struct gmap *sg, gpa_t raddr, union crste *host, uni
	if (rc)
		return rc;

	newcrste = _crste_fc1(f->pfn, host->h.tt, f->writable, !p);
	newcrste.s.fc1.d |= host->s.fc1.d;
	newcrste.s.fc1.sd |= host->s.fc1.sd;
	newcrste.h.p &= host->h.p;
	newcrste.s.fc1.vsie_notif = 1;
	newcrste.s.fc1.prefix_notif = host->s.fc1.prefix_notif;
	_gmap_crstep_xchg(sg->parent, host, newcrste, f->gfn, false);
	do {
		/* _gmap_crstep_xchg_atomic() could have unshadowed this shadow gmap */
		if (!sg->parent)
			return -EAGAIN;
		oldcrste = READ_ONCE(*host);
		newcrste = _crste_fc1(f->pfn, oldcrste.h.tt, f->writable, !p);
		newcrste.s.fc1.d |= oldcrste.s.fc1.d;
		newcrste.s.fc1.sd |= oldcrste.s.fc1.sd;
		newcrste.h.p &= oldcrste.h.p;
		newcrste.s.fc1.vsie_notif = 1;
		newcrste.s.fc1.prefix_notif = oldcrste.s.fc1.prefix_notif;
		newcrste.s.fc1.s = oldcrste.s.fc1.s;
		if (!newcrste.h.p && !f->writable)
			return -EOPNOTSUPP;
	} while (!_gmap_crstep_xchg_atomic(sg->parent, host, oldcrste, newcrste, f->gfn, false));
	if (!sg->parent)
		return -EAGAIN;

	newcrste = _crste_fc1(f->pfn, host->h.tt, 0, !p);
	dat_crstep_xchg(table, newcrste, gpa_to_gfn(raddr), sg->asce);
	newcrste = _crste_fc1(f->pfn, oldcrste.h.tt, 0, !p);
	gfn = gpa_to_gfn(raddr);
	while (!dat_crstep_xchg_atomic(table, READ_ONCE(*table), newcrste, gfn, sg->asce))
		;
	return 0;
}

@@ -1500,21 +1522,31 @@ static int _gaccess_do_shadow(struct kvm_s390_mmu_cache *mc, struct gmap *sg,
	if (rc)
		return rc;

	/* A race occourred. The shadow mapping is already valid, nothing to do */
	if ((ptep && !ptep->h.i) || (!ptep && crste_leaf(*table)))
	/* A race occurred. The shadow mapping is already valid, nothing to do */
	if ((ptep && !ptep->h.i && ptep->h.p == w->p) ||
	    (!ptep && crste_leaf(*table) && !table->h.i && table->h.p == w->p))
		return 0;

	gl = get_level(table, ptep);

	/* In case of a real address space */
	if (w->level <= LEVEL_MEM) {
		l = TABLE_TYPE_PAGE_TABLE;
		hl = TABLE_TYPE_REGION1;
		goto real_address_space;
	}

	/*
	 * Skip levels that are already protected. For each level, protect
	 * only the page containing the entry, not the whole table.
	 */
	for (i = gl ; i >= w->level; i--) {
		rc = gmap_protect_rmap(mc, sg, entries[i - 1].gfn, gpa_to_gfn(saddr),
				       entries[i - 1].pfn, i, entries[i - 1].writable);
		rc = gmap_protect_rmap(mc, sg, entries[i].gfn, gpa_to_gfn(saddr),
				       entries[i].pfn, i + 1, entries[i].writable);
		if (rc)
			return rc;
		if (!sg->parent)
			return -EAGAIN;
	}

	rc = dat_entry_walk(NULL, entries[LEVEL_MEM].gfn, sg->parent->asce, DAT_WALK_LEAF,

@@ -1526,6 +1558,7 @@ static int _gaccess_do_shadow(struct kvm_s390_mmu_cache *mc, struct gmap *sg,
	/* Get the smallest granularity */
	l = min3(gl, hl, w->level);

real_address_space:
	flags = DAT_WALK_SPLIT_ALLOC | (uses_skeys(sg->parent) ? DAT_WALK_USES_SKEYS : 0);
	/* If necessary, create the shadow mapping */
	if (l < gl) {

@@ -313,13 +313,16 @@ static long gmap_clear_young_crste(union crste *crstep, gfn_t gfn, gfn_t end, st
	struct clear_young_pte_priv *priv = walk->priv;
	union crste crste, new;

	crste = READ_ONCE(*crstep);
	do {
		crste = READ_ONCE(*crstep);

	if (!crste.h.fc)
		return 0;
	if (!crste.s.fc1.y && crste.h.i)
		return 0;
		if (crste_prefix(crste) && !gmap_mkold_prefix(priv->gmap, gfn, end))
			break;

		if (!crste.h.fc)
			return 0;
		if (!crste.s.fc1.y && crste.h.i)
			return 0;
		if (!crste_prefix(crste) || gmap_mkold_prefix(priv->gmap, gfn, end)) {
			new = crste;
			new.h.i = 1;
			new.s.fc1.y = 0;

@@ -328,8 +331,8 @@ static long gmap_clear_young_crste(union crste *crstep, gfn_t gfn, gfn_t end, st
			folio_set_dirty(phys_to_folio(crste_origin_large(crste)));
			new.s.fc1.d = 0;
			new.h.p = 1;
			dat_crstep_xchg(crstep, new, gfn, walk->asce);
		}
	} while (!dat_crstep_xchg_atomic(crstep, crste, new, gfn, walk->asce));

	priv->young = 1;
	return 0;
}

@@ -391,14 +394,18 @@ static long _gmap_unmap_crste(union crste *crstep, gfn_t gfn, gfn_t next, struct
{
	struct gmap_unmap_priv *priv = walk->priv;
	struct folio *folio = NULL;
	union crste old = *crstep;

	if (crstep->h.fc) {
		if (crstep->s.fc1.pr && test_bit(GMAP_FLAG_EXPORT_ON_UNMAP, &priv->gmap->flags))
			folio = phys_to_folio(crste_origin_large(*crstep));
		gmap_crstep_xchg(priv->gmap, crstep, _CRSTE_EMPTY(crstep->h.tt), gfn);
		if (folio)
			uv_convert_from_secure_folio(folio);
	}
	if (!old.h.fc)
		return 0;

	if (old.s.fc1.pr && test_bit(GMAP_FLAG_EXPORT_ON_UNMAP, &priv->gmap->flags))
		folio = phys_to_folio(crste_origin_large(old));
	/* No races should happen because kvm->mmu_lock is held in write mode */
	KVM_BUG_ON(!gmap_crstep_xchg_atomic(priv->gmap, crstep, old, _CRSTE_EMPTY(old.h.tt), gfn),
		   priv->gmap->kvm);
	if (folio)
		uv_convert_from_secure_folio(folio);

	return 0;
}

@@ -474,23 +481,24 @@ static long _crste_test_and_clear_softdirty(union crste *table, gfn_t gfn, gfn_t

	if (fatal_signal_pending(current))
		return 1;
	crste = READ_ONCE(*table);
	if (!crste.h.fc)
		return 0;
	if (crste.h.p && !crste.s.fc1.sd)
		return 0;
	do {
		crste = READ_ONCE(*table);
		if (!crste.h.fc)
			return 0;
		if (crste.h.p && !crste.s.fc1.sd)
			return 0;

	/*
	 * If this large page contains one or more prefixes of vCPUs that are
	 * currently running, do not reset the protection, leave it marked as
	 * dirty.
	 */
	if (!crste.s.fc1.prefix_notif || gmap_mkold_prefix(gmap, gfn, end)) {
		/*
		 * If this large page contains one or more prefixes of vCPUs that are
		 * currently running, do not reset the protection, leave it marked as
		 * dirty.
		 */
		if (crste.s.fc1.prefix_notif && !gmap_mkold_prefix(gmap, gfn, end))
			break;
		new = crste;
		new.h.p = 1;
		new.s.fc1.sd = 0;
		gmap_crstep_xchg(gmap, table, new, gfn);
	}
	} while (!gmap_crstep_xchg_atomic(gmap, table, crste, new, gfn));

	for ( ; gfn < end; gfn++)
		mark_page_dirty(gmap->kvm, gfn);

@@ -511,7 +519,7 @@ void gmap_sync_dirty_log(struct gmap *gmap, gfn_t start, gfn_t end)
	_dat_walk_gfn_range(start, end, gmap->asce, &walk_ops, 0, gmap);
}

static int gmap_handle_minor_crste_fault(union asce asce, struct guest_fault *f)
static int gmap_handle_minor_crste_fault(struct gmap *gmap, struct guest_fault *f)
{
	union crste newcrste, oldcrste = READ_ONCE(*f->crstep);

@@ -536,10 +544,8 @@ static int gmap_handle_minor_crste_fault(union asce asce, struct guest_fault *f)
			newcrste.s.fc1.d = 1;
			newcrste.s.fc1.sd = 1;
		}
		if (!oldcrste.s.fc1.d && newcrste.s.fc1.d)
			SetPageDirty(phys_to_page(crste_origin_large(newcrste)));
		/* In case of races, let the slow path deal with it. */
		return !dat_crstep_xchg_atomic(f->crstep, oldcrste, newcrste, f->gfn, asce);
		return !gmap_crstep_xchg_atomic(gmap, f->crstep, oldcrste, newcrste, f->gfn);
	}
	/* Trying to write on a read-only page, let the slow path deal with it. */
	return 1;

@@ -568,8 +574,6 @@ static int _gmap_handle_minor_pte_fault(struct gmap *gmap, union pgste *pgste,
		newpte.s.d = 1;
		newpte.s.sd = 1;
	}
	if (!oldpte.s.d && newpte.s.d)
		SetPageDirty(pfn_to_page(newpte.h.pfra));
	*pgste = gmap_ptep_xchg(gmap, f->ptep, newpte, *pgste, f->gfn);

	return 0;

@@ -606,7 +610,7 @@ int gmap_try_fixup_minor(struct gmap *gmap, struct guest_fault *fault)
			fault->callback(fault);
		pgste_set_unlock(fault->ptep, pgste);
	} else {
		rc = gmap_handle_minor_crste_fault(gmap->asce, fault);
		rc = gmap_handle_minor_crste_fault(gmap, fault);
		if (!rc && fault->callback)
			fault->callback(fault);
	}

@@ -623,10 +627,61 @@ static inline bool gmap_1m_allowed(struct gmap *gmap, gfn_t gfn)
	return test_bit(GMAP_FLAG_ALLOW_HPAGE_1M, &gmap->flags);
}

static int _gmap_link(struct kvm_s390_mmu_cache *mc, struct gmap *gmap, int level,
		      struct guest_fault *f)
{
	union crste oldval, newval;
	union pte newpte, oldpte;
	union pgste pgste;
	int rc = 0;

	rc = dat_entry_walk(mc, f->gfn, gmap->asce, DAT_WALK_ALLOC_CONTINUE, level,
			    &f->crstep, &f->ptep);
	if (rc == -ENOMEM)
		return rc;
	if (KVM_BUG_ON(rc == -EINVAL, gmap->kvm))
		return rc;
	if (rc)
		return -EAGAIN;
	if (KVM_BUG_ON(get_level(f->crstep, f->ptep) > level, gmap->kvm))
		return -EINVAL;

	if (f->ptep) {
		pgste = pgste_get_lock(f->ptep);
		oldpte = *f->ptep;
		newpte = _pte(f->pfn, f->writable, f->write_attempt | oldpte.s.d, !f->page);
		newpte.s.sd = oldpte.s.sd;
		oldpte.s.sd = 0;
		if (oldpte.val == _PTE_EMPTY.val || oldpte.h.pfra == f->pfn) {
			pgste = gmap_ptep_xchg(gmap, f->ptep, newpte, pgste, f->gfn);
			if (f->callback)
				f->callback(f);
		} else {
			rc = -EAGAIN;
		}
		pgste_set_unlock(f->ptep, pgste);
	} else {
		do {
			oldval = READ_ONCE(*f->crstep);
			newval = _crste_fc1(f->pfn, oldval.h.tt, f->writable,
					    f->write_attempt | oldval.s.fc1.d);
			newval.s.fc1.s = !f->page;
			newval.s.fc1.sd = oldval.s.fc1.sd;
			if (oldval.val != _CRSTE_EMPTY(oldval.h.tt).val &&
			    crste_origin_large(oldval) != crste_origin_large(newval))
				return -EAGAIN;
		} while (!gmap_crstep_xchg_atomic(gmap, f->crstep, oldval, newval, f->gfn));
		if (f->callback)
			f->callback(f);
	}

	return rc;
}

int gmap_link(struct kvm_s390_mmu_cache *mc, struct gmap *gmap, struct guest_fault *f)
{
	unsigned int order;
	int rc, level;
	int level;

	lockdep_assert_held(&gmap->kvm->mmu_lock);

@@ -638,16 +693,14 @@ int gmap_link(struct kvm_s390_mmu_cache *mc, struct gmap *gmap, struct guest_fau
	else if (order >= get_order(_SEGMENT_SIZE) && gmap_1m_allowed(gmap, f->gfn))
		level = TABLE_TYPE_SEGMENT;
	}
	rc = dat_link(mc, gmap->asce, level, uses_skeys(gmap), f);
	KVM_BUG_ON(rc == -EINVAL, gmap->kvm);
	return rc;
	return _gmap_link(mc, gmap, level, f);
}

static int gmap_ucas_map_one(struct kvm_s390_mmu_cache *mc, struct gmap *gmap,
			     gfn_t p_gfn, gfn_t c_gfn, bool force_alloc)
{
	union crste newcrste, oldcrste;
	struct page_table *pt;
	union crste newcrste;
	union crste *crstep;
	union pte *ptep;
	int rc;

@@ -673,7 +726,11 @@ static int gmap_ucas_map_one(struct kvm_s390_mmu_cache *mc, struct gmap *gmap,
			     &crstep, &ptep);
	if (rc)
		return rc;
	dat_crstep_xchg(crstep, newcrste, c_gfn, gmap->asce);
	do {
		oldcrste = READ_ONCE(*crstep);
		if (oldcrste.val == newcrste.val)
			break;
	} while (!dat_crstep_xchg_atomic(crstep, oldcrste, newcrste, c_gfn, gmap->asce));
	return 0;
}

@@ -777,8 +834,10 @@ static void gmap_ucas_unmap_one(struct gmap *gmap, gfn_t c_gfn)
	int rc;

	rc = dat_entry_walk(NULL, c_gfn, gmap->asce, 0, TABLE_TYPE_SEGMENT, &crstep, &ptep);
	if (!rc)
		dat_crstep_xchg(crstep, _PMD_EMPTY, c_gfn, gmap->asce);
	if (rc)
		return;
	while (!dat_crstep_xchg_atomic(crstep, READ_ONCE(*crstep), _PMD_EMPTY, c_gfn, gmap->asce))
		;
}

void gmap_ucas_unmap(struct gmap *gmap, gfn_t c_gfn, unsigned long count)

@@ -1017,8 +1076,8 @@ static void gmap_unshadow_level(struct gmap *sg, gfn_t r_gfn, int level)
		dat_ptep_xchg(ptep, _PTE_EMPTY, r_gfn, sg->asce, uses_skeys(sg));
		return;
	}
	crste = READ_ONCE(*crstep);
	dat_crstep_clear(crstep, r_gfn, sg->asce);

	crste = dat_crstep_clear_atomic(crstep, r_gfn, sg->asce);
	if (crste_leaf(crste) || crste.h.i)
		return;
	if (is_pmd(crste))

@@ -1101,6 +1160,7 @@ struct gmap_protect_asce_top_level {
static inline int __gmap_protect_asce_top_level(struct kvm_s390_mmu_cache *mc, struct gmap *sg,
						struct gmap_protect_asce_top_level *context)
{
	struct gmap *parent;
	int rc, i;

	guard(write_lock)(&sg->kvm->mmu_lock);

@@ -1108,7 +1168,12 @@ static inline int __gmap_protect_asce_top_level(struct kvm_s390_mmu_cache *mc, s
	if (kvm_s390_array_needs_retry_safe(sg->kvm, context->seq, context->f))
		return -EAGAIN;

	scoped_guard(spinlock, &sg->parent->children_lock) {
	parent = READ_ONCE(sg->parent);
	if (!parent)
		return -EAGAIN;
	scoped_guard(spinlock, &parent->children_lock) {
		if (READ_ONCE(sg->parent) != parent)
			return -EAGAIN;
		for (i = 0; i < CRST_TABLE_PAGES; i++) {
			if (!context->f[i].valid)
				continue;

@@ -1191,6 +1256,9 @@ struct gmap *gmap_create_shadow(struct kvm_s390_mmu_cache *mc, struct gmap *pare
	struct gmap *sg, *new;
	int rc;

	if (WARN_ON(!parent))
		return ERR_PTR(-EINVAL);

	scoped_guard(spinlock, &parent->children_lock) {
		sg = gmap_find_shadow(parent, asce, edat_level);
		if (sg) {

@@ -185,6 +185,8 @@ static inline union pgste _gmap_ptep_xchg(struct gmap *gmap, union pte *ptep, un
		else
			_gmap_handle_vsie_unshadow_event(gmap, gfn);
	}
	if (!ptep->s.d && newpte.s.d && !newpte.s.s)
		SetPageDirty(pfn_to_page(newpte.h.pfra));
	return __dat_ptep_xchg(ptep, pgste, newpte, gfn, gmap->asce, uses_skeys(gmap));
}

@@ -194,35 +196,42 @@ static inline union pgste gmap_ptep_xchg(struct gmap *gmap, union pte *ptep, uni
	return _gmap_ptep_xchg(gmap, ptep, newpte, pgste, gfn, true);
}

static inline void _gmap_crstep_xchg(struct gmap *gmap, union crste *crstep, union crste ne,
				     gfn_t gfn, bool needs_lock)
static inline bool __must_check _gmap_crstep_xchg_atomic(struct gmap *gmap, union crste *crstep,
							 union crste oldcrste, union crste newcrste,
							 gfn_t gfn, bool needs_lock)
{
	unsigned long align = 8 + (is_pmd(*crstep) ? 0 : 11);
	unsigned long align = is_pmd(newcrste) ? _PAGE_ENTRIES : _PAGE_ENTRIES * _CRST_ENTRIES;

	if (KVM_BUG_ON(crstep->h.tt != oldcrste.h.tt || newcrste.h.tt != oldcrste.h.tt, gmap->kvm))
		return true;

	lockdep_assert_held(&gmap->kvm->mmu_lock);
	if (!needs_lock)
		lockdep_assert_held(&gmap->children_lock);

	gfn = ALIGN_DOWN(gfn, align);
	if (crste_prefix(*crstep) && (ne.h.p || ne.h.i || !crste_prefix(ne))) {
		ne.s.fc1.prefix_notif = 0;
	if (crste_prefix(oldcrste) && (newcrste.h.p || newcrste.h.i || !crste_prefix(newcrste))) {
		newcrste.s.fc1.prefix_notif = 0;
		gmap_unmap_prefix(gmap, gfn, gfn + align);
	}
	if (crste_leaf(*crstep) && crstep->s.fc1.vsie_notif &&
	    (ne.h.p || ne.h.i || !ne.s.fc1.vsie_notif)) {
		ne.s.fc1.vsie_notif = 0;
	if (crste_leaf(oldcrste) && oldcrste.s.fc1.vsie_notif &&
	    (newcrste.h.p || newcrste.h.i || !newcrste.s.fc1.vsie_notif)) {
		newcrste.s.fc1.vsie_notif = 0;
		if (needs_lock)
			gmap_handle_vsie_unshadow_event(gmap, gfn);
		else
			_gmap_handle_vsie_unshadow_event(gmap, gfn);
	}
	dat_crstep_xchg(crstep, ne, gfn, gmap->asce);
	if (!oldcrste.s.fc1.d && newcrste.s.fc1.d && !newcrste.s.fc1.s)
		SetPageDirty(phys_to_page(crste_origin_large(newcrste)));
	return dat_crstep_xchg_atomic(crstep, oldcrste, newcrste, gfn, gmap->asce);
}

static inline void gmap_crstep_xchg(struct gmap *gmap, union crste *crstep, union crste ne,
				    gfn_t gfn)
static inline bool __must_check gmap_crstep_xchg_atomic(struct gmap *gmap, union crste *crstep,
							union crste oldcrste, union crste newcrste,
							gfn_t gfn)
{
	return _gmap_crstep_xchg(gmap, crstep, ne, gfn, true);
	return _gmap_crstep_xchg_atomic(gmap, crstep, oldcrste, newcrste, gfn, true);
}

/**

@@ -2724,6 +2724,9 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)

	bit = bit_nr + (addr % PAGE_SIZE) * 8;

	/* kvm_set_routing_entry() should never allow this to happen */
	WARN_ON_ONCE(bit > (PAGE_SIZE * BITS_PER_BYTE - 1));

	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
}

@@ -2824,6 +2827,12 @@ void kvm_s390_reinject_machine_check(struct kvm_vcpu *vcpu,
	int rc;

	mci.val = mcck_info->mcic;

	/* log machine checks being reinjected on all debugs */
	VCPU_EVENT(vcpu, 2, "guest machine check %lx", mci.val);
	KVM_EVENT(2, "guest machine check %lx", mci.val);
	pr_info("guest machine check pid %d: %lx", current->pid, mci.val);

	if (mci.sr)
		cr14 |= CR14_RECOVERY_SUBMASK;
	if (mci.dg)

@@ -2852,6 +2861,7 @@ int kvm_set_routing_entry(struct kvm *kvm,
			  struct kvm_kernel_irq_routing_entry *e,
			  const struct kvm_irq_routing_entry *ue)
{
	const struct kvm_irq_routing_s390_adapter *adapter;
	u64 uaddr_s, uaddr_i;
	int idx;

@@ -2862,6 +2872,14 @@ int kvm_set_routing_entry(struct kvm *kvm,
		return -EINVAL;
	e->set = set_adapter_int;

	adapter = &ue->u.adapter;
	if (adapter->summary_addr + (adapter->summary_offset / 8) >=
	    (adapter->summary_addr & PAGE_MASK) + PAGE_SIZE)
		return -EINVAL;
	if (adapter->ind_addr + (adapter->ind_offset / 8) >=
	    (adapter->ind_addr & PAGE_MASK) + PAGE_SIZE)
		return -EINVAL;

	idx = srcu_read_lock(&kvm->srcu);
	uaddr_s = gpa_to_hva(kvm, ue->u.adapter.summary_addr);
	uaddr_i = gpa_to_hva(kvm, ue->u.adapter.ind_addr);

@@ -4617,7 +4617,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
	return 0;
}

static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
static int vcpu_post_run(struct kvm_vcpu *vcpu, int sie_return)
{
	struct mcck_volatile_info *mcck_info;
	struct sie_page *sie_page;

@@ -4633,14 +4633,14 @@ static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
	vcpu->run->s.regs.gprs[14] = vcpu->arch.sie_block->gg14;
	vcpu->run->s.regs.gprs[15] = vcpu->arch.sie_block->gg15;

	if (exit_reason == -EINTR) {
		VCPU_EVENT(vcpu, 3, "%s", "machine check");
	if (sie_return == SIE64_RETURN_MCCK) {
		sie_page = container_of(vcpu->arch.sie_block,
					struct sie_page, sie_block);
		mcck_info = &sie_page->mcck_info;
		kvm_s390_reinject_machine_check(vcpu, mcck_info);
		return 0;
	}
	WARN_ON_ONCE(sie_return != SIE64_RETURN_NORMAL);

	if (vcpu->arch.sie_block->icptcode > 0) {
		rc = kvm_handle_sie_intercept(vcpu);

@@ -4679,7 +4679,7 @@ int noinstr kvm_s390_enter_exit_sie(struct kvm_s390_sie_block *scb,
#define PSW_INT_MASK (PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_MCHECK)
static int __vcpu_run(struct kvm_vcpu *vcpu)
{
	int rc, exit_reason;
	int rc, sie_return;
	struct sie_page *sie_page = (struct sie_page *)vcpu->arch.sie_block;

	/*

@@ -4719,9 +4719,9 @@ xfer_to_guest_mode_check:
		guest_timing_enter_irqoff();
		__disable_cpu_timer_accounting(vcpu);

		exit_reason = kvm_s390_enter_exit_sie(vcpu->arch.sie_block,
						      vcpu->run->s.regs.gprs,
						      vcpu->arch.gmap->asce.val);
		sie_return = kvm_s390_enter_exit_sie(vcpu->arch.sie_block,
						     vcpu->run->s.regs.gprs,
						     vcpu->arch.gmap->asce.val);

		__enable_cpu_timer_accounting(vcpu);
		guest_timing_exit_irqoff();

@@ -4744,7 +4744,7 @@ xfer_to_guest_mode_check:
		}
		kvm_vcpu_srcu_read_lock(vcpu);

		rc = vcpu_post_run(vcpu, exit_reason);
		rc = vcpu_post_run(vcpu, sie_return);
		if (rc || guestdbg_exit_pending(vcpu)) {
			kvm_vcpu_srcu_read_unlock(vcpu);
			break;

@@ -5520,9 +5520,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
	}
#endif
	case KVM_S390_VCPU_FAULT: {
		idx = srcu_read_lock(&vcpu->kvm->srcu);
		r = vcpu_dat_fault_handler(vcpu, arg, 0);
		srcu_read_unlock(&vcpu->kvm->srcu, idx);
		gpa_t gaddr = arg;

		scoped_guard(srcu, &vcpu->kvm->srcu) {
			r = vcpu_ucontrol_translate(vcpu, &gaddr);
			if (r)
				break;

			r = kvm_s390_faultin_gfn_simple(vcpu, NULL, gpa_to_gfn(gaddr), false);
			if (r == PGM_ADDRESSING)
				r = -EFAULT;
			if (r <= 0)
				break;
			r = -EIO;
			KVM_BUG_ON(r, vcpu->kvm);
		}
		break;
	}
	case KVM_ENABLE_CAP:

@@ -1122,6 +1122,7 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, struc
{
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
unsigned long sie_return = SIE64_RETURN_NORMAL;
int guest_bp_isolation;
int rc = 0;

@@ -1163,7 +1164,7 @@ xfer_to_guest_mode_check:
goto xfer_to_guest_mode_check;
}
guest_timing_enter_irqoff();
rc = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
sie_return = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
guest_timing_exit_irqoff();
local_irq_enable();
}

@@ -1178,12 +1179,13 @@ skip_sie:

kvm_vcpu_srcu_read_lock(vcpu);

if (rc == -EINTR) {
VCPU_EVENT(vcpu, 3, "%s", "machine check");
if (sie_return == SIE64_RETURN_MCCK) {
kvm_s390_reinject_machine_check(vcpu, &vsie_page->mcck_info);
return 0;
}

WARN_ON_ONCE(sie_return != SIE64_RETURN_NORMAL);

if (rc > 0)
rc = 0; /* we could still have an icpt */
else if (current->thread.gmap_int_code)

@@ -1326,7 +1328,7 @@ static void unregister_shadow_scb(struct kvm_vcpu *vcpu)
static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
{
struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
struct gmap *sg;
struct gmap *sg = NULL;
int rc = 0;

while (1) {

@@ -1366,6 +1368,8 @@ static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
sg = gmap_put(sg);
cond_resched();
}
if (sg)
sg = gmap_put(sg);

if (rc == -EFAULT) {
/*
@@ -441,10 +441,17 @@ void do_secure_storage_access(struct pt_regs *regs)
folio = phys_to_folio(addr);
if (unlikely(!folio_try_get(folio)))
return;
rc = arch_make_folio_accessible(folio);
rc = uv_convert_from_secure(folio_to_phys(folio));
if (!rc)
clear_bit(PG_arch_1, &folio->flags.f);
folio_put(folio);
/*
* There are some valid fixup types for kernel
* accesses to donated secure memory. zeropad is one
* of them.
*/
if (rc)
BUG();
return handle_fault_error_nolock(regs, 0);
} else {
if (faulthandler_disabled())
return handle_fault_error_nolock(regs, 0);
@@ -121,6 +121,9 @@ noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state)

WARN_ON(!irqs_disabled());

if (!sev_cfg.ghcbs_initialized)
return boot_ghcb;

data = this_cpu_read(runtime_data);
ghcb = &data->ghcb_page;

@@ -164,6 +167,9 @@ noinstr void __sev_put_ghcb(struct ghcb_state *state)

WARN_ON(!irqs_disabled());

if (!sev_cfg.ghcbs_initialized)
return;

data = this_cpu_read(runtime_data);
ghcb = &data->ghcb_page;
@@ -177,6 +177,16 @@ static noinstr void fred_extint(struct pt_regs *regs)
}
}

#ifdef CONFIG_AMD_MEM_ENCRYPT
noinstr void exc_vmm_communication(struct pt_regs *regs, unsigned long error_code)
{
if (user_mode(regs))
return user_exc_vmm_communication(regs, error_code);
else
return kernel_exc_vmm_communication(regs, error_code);
}
#endif

static noinstr void fred_hwexc(struct pt_regs *regs, unsigned long error_code)
{
/* Optimize for #PF. That's the only exception which matters performance wise */

@@ -207,6 +217,10 @@ static noinstr void fred_hwexc(struct pt_regs *regs, unsigned long error_code)
#ifdef CONFIG_X86_CET
case X86_TRAP_CP: return exc_control_protection(regs, error_code);
#endif
#ifdef CONFIG_AMD_MEM_ENCRYPT
case X86_TRAP_VC: return exc_vmm_communication(regs, error_code);
#endif

default: return fred_bad_type(regs, error_code);
}
@@ -433,7 +433,20 @@ static __always_inline void setup_lass(struct cpuinfo_x86 *c)

/* These bits should not change their value after CPU init is finished. */
static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
X86_CR4_FSGSBASE | X86_CR4_CET | X86_CR4_FRED;
X86_CR4_FSGSBASE | X86_CR4_CET;

/*
* The CR pinning protects against ROP on the 'mov %reg, %CRn' instruction(s).
* Since you can ROP directly to these instructions (barring shadow stack),
* any protection must follow immediately and unconditionally after that.
*
* Specifically, the CR[04] write functions below will have the value
* validation controlled by the @cr_pinning static_branch which is
* __ro_after_init, just like the cr4_pinned_bits value.
*
* Once set, an attacker will have to defeat page-tables to get around these
* restrictions. Which is a much bigger ask than 'simple' ROP.
*/
static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
static unsigned long cr4_pinned_bits __ro_after_init;

@@ -2050,12 +2063,6 @@ static void identify_cpu(struct cpuinfo_x86 *c)
setup_umip(c);
setup_lass(c);

/* Enable FSGSBASE instructions if available. */
if (cpu_has(c, X86_FEATURE_FSGSBASE)) {
cr4_set_bits(X86_CR4_FSGSBASE);
elf_hwcap2 |= HWCAP2_FSGSBASE;
}

/*
* The vendor-specific functions might have changed features.
* Now we do "generic changes."

@@ -2416,6 +2423,18 @@ void cpu_init_exception_handling(bool boot_cpu)
/* GHCB needs to be setup to handle #VC. */
setup_ghcb();

/*
* On CPUs with FSGSBASE support, paranoid_entry() uses
* ALTERNATIVE-patched RDGSBASE/WRGSBASE instructions. Secondary CPUs
* boot after alternatives are patched globally, so early exceptions
* execute patched code that depends on FSGSBASE. Enable the feature
* before any exceptions occur.
*/
if (cpu_feature_enabled(X86_FEATURE_FSGSBASE)) {
cr4_set_bits(X86_CR4_FSGSBASE);
elf_hwcap2 |= HWCAP2_FSGSBASE;
}

if (cpu_feature_enabled(X86_FEATURE_FRED)) {
/* The boot CPU has enabled FRED during early boot */
if (!boot_cpu)
@@ -3044,12 +3044,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
bool prefetch = !fault || fault->prefetch;
bool write_fault = fault && fault->write;

if (unlikely(is_noslot_pfn(pfn))) {
vcpu->stat.pf_mmio_spte_created++;
mark_mmio_spte(vcpu, sptep, gfn, pte_access);
return RET_PF_EMULATE;
}

if (is_shadow_present_pte(*sptep)) {
if (prefetch && is_last_spte(*sptep, level) &&
pfn == spte_to_pfn(*sptep))

@@ -3066,13 +3060,22 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
child = spte_to_child_sp(pte);
drop_parent_pte(vcpu->kvm, child, sptep);
flush = true;
} else if (WARN_ON_ONCE(pfn != spte_to_pfn(*sptep))) {
} else if (pfn != spte_to_pfn(*sptep)) {
WARN_ON_ONCE(vcpu->arch.mmu->root_role.direct);
drop_spte(vcpu->kvm, sptep);
flush = true;
} else
was_rmapped = 1;
}

if (unlikely(is_noslot_pfn(pfn))) {
vcpu->stat.pf_mmio_spte_created++;
mark_mmio_spte(vcpu, sptep, gfn, pte_access);
if (flush)
kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
return RET_PF_EMULATE;
}

wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
false, host_writable, &spte);
@@ -424,7 +424,7 @@ void __init efi_unmap_boot_services(void)
if (efi_enabled(EFI_DBG))
return;

sz = sizeof(*ranges_to_free) * efi.memmap.nr_map + 1;
sz = sizeof(*ranges_to_free) * (efi.memmap.nr_map + 1);
ranges_to_free = kzalloc(sz, GFP_KERNEL);
if (!ranges_to_free) {
pr_err("Failed to allocate storage for freeable EFI regions\n");
@@ -35,6 +35,7 @@
#define IVPU_HW_IP_60XX 60

#define IVPU_HW_IP_REV_LNL_B0 4
#define IVPU_HW_IP_REV_NVL_A0 0

#define IVPU_HW_BTRS_MTL 1
#define IVPU_HW_BTRS_LNL 2
@@ -70,8 +70,10 @@ static void wa_init(struct ivpu_device *vdev)
if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
vdev->wa.interrupt_clear_with_0 = ivpu_hw_btrs_irqs_clear_with_0_mtl(vdev);

if (ivpu_device_id(vdev) == PCI_DEVICE_ID_LNL &&
ivpu_revision(vdev) < IVPU_HW_IP_REV_LNL_B0)
if ((ivpu_device_id(vdev) == PCI_DEVICE_ID_LNL &&
ivpu_revision(vdev) < IVPU_HW_IP_REV_LNL_B0) ||
(ivpu_device_id(vdev) == PCI_DEVICE_ID_NVL &&
ivpu_revision(vdev) == IVPU_HW_IP_REV_NVL_A0))
vdev->wa.disable_clock_relinquish = true;

if (ivpu_test_mode & IVPU_TEST_MODE_CLK_RELINQ_ENABLE)
@@ -1656,6 +1656,8 @@ static int acpi_ec_setup(struct acpi_ec *ec, struct acpi_device *device, bool ca

ret = ec_install_handlers(ec, device, call_reg);
if (ret) {
ec_remove_handlers(ec);

if (ec == first_ec)
first_ec = NULL;
@@ -1545,6 +1545,7 @@ static int _regmap_select_page(struct regmap *map, unsigned int *reg,
unsigned int val_num)
{
void *orig_work_buf;
unsigned int selector_reg;
unsigned int win_offset;
unsigned int win_page;
bool page_chg;

@@ -1563,10 +1564,31 @@ static int _regmap_select_page(struct regmap *map, unsigned int *reg,
return -EINVAL;
}

/* It is possible to have selector register inside data window.
In that case, selector register is located on every page and
it needs no page switching, when accessed alone. */
/*
* Calculate the address of the selector register in the corresponding
* data window if it is located on every page.
*/
page_chg = in_range(range->selector_reg, range->window_start, range->window_len);
if (page_chg)
selector_reg = range->range_min + win_page * range->window_len +
range->selector_reg - range->window_start;

/*
* It is possible to have selector register inside data window.
* In that case, selector register is located on every page and it
* needs no page switching, when accessed alone.
*
* Nevertheless we should synchronize the cache values for it.
* This can't be properly achieved if the selector register is
* the first and the only one to be read inside the data window.
* That's why we update it in that case as well.
*
* However, we specifically avoid updating it for the default page,
* when it's overlapped with the real data window, to prevent from
* infinite looping.
*/
if (val_num > 1 ||
(page_chg && selector_reg != range->selector_reg) ||
range->window_start + win_offset != range->selector_reg) {
/* Use separate work_buf during page switching */
orig_work_buf = map->work_buf;

@@ -1575,7 +1597,7 @@ static int _regmap_select_page(struct regmap *map, unsigned int *reg,
ret = _regmap_update_bits(map, range->selector_reg,
range->selector_mask,
win_page << range->selector_shift,
&page_chg, false);
NULL, false);

map->work_buf = orig_work_buf;
@@ -917,9 +917,8 @@ static void zram_account_writeback_submit(struct zram *zram)

static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
{
u32 size, index = req->pps->index;
int err, prio;
bool huge;
u32 index = req->pps->index;
int err;

err = blk_status_to_errno(req->bio.bi_status);
if (err) {

@@ -946,28 +945,13 @@ static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
goto out;
}

if (zram->compressed_wb) {
/*
* ZRAM_WB slots get freed, we need to preserve data required
* for read decompression.
*/
size = get_slot_size(zram, index);
prio = get_slot_comp_priority(zram, index);
huge = test_slot_flag(zram, index, ZRAM_HUGE);
}

slot_free(zram, index);
set_slot_flag(zram, index, ZRAM_WB);
clear_slot_flag(zram, index, ZRAM_IDLE);
if (test_slot_flag(zram, index, ZRAM_HUGE))
atomic64_dec(&zram->stats.huge_pages);
atomic64_sub(get_slot_size(zram, index), &zram->stats.compr_data_size);
zs_free(zram->mem_pool, get_slot_handle(zram, index));
set_slot_handle(zram, index, req->blk_idx);

if (zram->compressed_wb) {
if (huge)
set_slot_flag(zram, index, ZRAM_HUGE);
set_slot_size(zram, index, size);
set_slot_comp_priority(zram, index, prio);
}

atomic64_inc(&zram->stats.pages_stored);
set_slot_flag(zram, index, ZRAM_WB);

out:
slot_unlock(zram, index);

@@ -2010,8 +1994,13 @@ static void slot_free(struct zram *zram, u32 index)
set_slot_comp_priority(zram, index, 0);

if (test_slot_flag(zram, index, ZRAM_HUGE)) {
/*
* Writeback completion decrements ->huge_pages but keeps
* ZRAM_HUGE flag for deferred decompression path.
*/
if (!test_slot_flag(zram, index, ZRAM_WB))
atomic64_dec(&zram->stats.huge_pages);
clear_slot_flag(zram, index, ZRAM_HUGE);
atomic64_dec(&zram->stats.huge_pages);
}

if (test_slot_flag(zram, index, ZRAM_WB)) {
@@ -251,11 +251,13 @@ void btintel_hw_error(struct hci_dev *hdev, u8 code)

bt_dev_err(hdev, "Hardware error 0x%2.2x", code);

hci_req_sync_lock(hdev);

skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
bt_dev_err(hdev, "Reset after hardware error failed (%ld)",
PTR_ERR(skb));
return;
goto unlock;
}
kfree_skb(skb);

@@ -263,18 +265,21 @@ void btintel_hw_error(struct hci_dev *hdev, u8 code)
if (IS_ERR(skb)) {
bt_dev_err(hdev, "Retrieving Intel exception info failed (%ld)",
PTR_ERR(skb));
return;
goto unlock;
}

if (skb->len != 13) {
bt_dev_err(hdev, "Exception info size mismatch");
kfree_skb(skb);
return;
goto unlock;
}

bt_dev_err(hdev, "Exception info %s", (char *)(skb->data + 1));

kfree_skb(skb);

unlock:
hci_req_sync_unlock(hdev);
}
EXPORT_SYMBOL_GPL(btintel_hw_error);
|||
|
|
@ -2376,8 +2376,11 @@ static void btusb_work(struct work_struct *work)
|
|||
if (data->air_mode == HCI_NOTIFY_ENABLE_SCO_CVSD) {
|
||||
if (hdev->voice_setting & 0x0020) {
|
||||
static const int alts[3] = { 2, 4, 5 };
|
||||
unsigned int sco_idx;
|
||||
|
||||
new_alts = alts[data->sco_num - 1];
|
||||
sco_idx = min_t(unsigned int, data->sco_num - 1,
|
||||
ARRAY_SIZE(alts) - 1);
|
||||
new_alts = alts[sco_idx];
|
||||
} else {
|
||||
new_alts = data->sco_num;
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -541,6 +541,8 @@ static int download_firmware(struct ll_device *lldev)
if (err || !fw->data || !fw->size) {
bt_dev_err(lldev->hu.hdev, "request_firmware failed(errno %d) for %s",
err, bts_scr_name);
if (!err)
release_firmware(fw);
return -EINVAL;
}
ptr = (void *)fw->data;
@@ -1427,12 +1427,9 @@ static int cpufreq_policy_online(struct cpufreq_policy *policy,
* If there is a problem with its frequency table, take it
* offline and drop it.
*/
if (policy->freq_table_sorted != CPUFREQ_TABLE_SORTED_ASCENDING &&
policy->freq_table_sorted != CPUFREQ_TABLE_SORTED_DESCENDING) {
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_offline_policy;
}
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_offline_policy;

/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);
@@ -313,6 +313,17 @@ static void cs_start(struct cpufreq_policy *policy)
dbs_info->requested_freq = policy->cur;
}

static void cs_limits(struct cpufreq_policy *policy)
{
struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data);

/*
* The limits have changed, so may have the current frequency. Reset
* requested_freq to avoid any unintended outcomes due to the mismatch.
*/
dbs_info->requested_freq = policy->cur;
}

static struct dbs_governor cs_governor = {
.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("conservative"),
.kobj_type = { .default_groups = cs_groups },

@@ -322,6 +333,7 @@ static struct dbs_governor cs_governor = {
.init = cs_init,
.exit = cs_exit,
.start = cs_start,
.limits = cs_limits,
};

#define CPU_FREQ_GOV_CONSERVATIVE (cs_governor.gov)
@@ -563,6 +563,7 @@ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop);

void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
{
struct dbs_governor *gov = dbs_governor_of(policy);
struct policy_dbs_info *policy_dbs;

/* Protect gov->gdbs_data against cpufreq_dbs_governor_exit() */

@@ -574,6 +575,8 @@ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
mutex_lock(&policy_dbs->update_mutex);
cpufreq_policy_apply_limits(policy);
gov_update_sample_delay(policy_dbs, 0);
if (gov->limits)
gov->limits(policy);
mutex_unlock(&policy_dbs->update_mutex);

out:
@@ -138,6 +138,7 @@ struct dbs_governor {
int (*init)(struct dbs_data *dbs_data);
void (*exit)(struct dbs_data *dbs_data);
void (*start)(struct cpufreq_policy *policy);
void (*limits)(struct cpufreq_policy *policy);
};

static inline struct dbs_governor *dbs_governor_of(struct cpufreq_policy *policy)
|||
|
|
@ -360,6 +360,10 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
|
|||
if (policy_has_boost_freq(policy))
|
||||
policy->boost_supported = true;
|
||||
|
||||
if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING ||
|
||||
policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_DESCENDING)
|
||||
return 0;
|
||||
|
||||
return set_freq_table_sorted(policy);
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@@ -59,6 +59,7 @@ config CXL_ACPI
tristate "CXL ACPI: Platform Support"
depends on ACPI
depends on ACPI_NUMA
depends on CXL_PMEM || !CXL_PMEM
default CXL_BUS
select ACPI_TABLE_LIB
select ACPI_HMAT
@@ -94,7 +94,6 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
struct cxl_hdm *cxlhdm;
void __iomem *hdm;
u32 ctrl;
int i;

if (!info)
return false;

@@ -113,22 +112,16 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
return false;

/*
* If any decoders are committed already, there should not be any
* emulated DVSEC decoders.
* If HDM decoders are globally enabled, do not fall back to DVSEC
* range emulation. Zeroed decoder registers after region teardown
* do not imply absence of HDM capability.
*
* Falling back to DVSEC here would treat the decoder as AUTO and
* may incorrectly latch default interleave settings.
*/
for (i = 0; i < cxlhdm->decoder_count; i++) {
ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i));
dev_dbg(&info->port->dev,
"decoder%d.%d: committed: %ld base: %#x_%.8x size: %#x_%.8x\n",
info->port->id, i,
FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl),
readl(hdm + CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i)),
readl(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(i)),
readl(hdm + CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i)),
readl(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i)));
if (FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl))
return false;
}
ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
if (ctrl & CXL_HDM_DECODER_ENABLE)
return false;

return true;
}
@@ -1301,7 +1301,7 @@ int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd)
* Require an endpoint to be safe otherwise the driver can not
* be sure that the device is unmapped.
*/
if (endpoint && cxl_num_decoders_committed(endpoint) == 0)
if (cxlmd->dev.driver && cxl_num_decoders_committed(endpoint) == 0)
return __cxl_mem_sanitize(mds, cmd);

return -EBUSY;
@@ -552,10 +552,13 @@ static void cxl_port_release(struct device *dev)
xa_destroy(&port->dports);
xa_destroy(&port->regions);
ida_free(&cxl_port_ida, port->id);
if (is_cxl_root(port))

if (is_cxl_root(port)) {
kfree(to_cxl_root(port));
else
} else {
put_device(dev->parent);
kfree(port);
}
}

static ssize_t decoders_committed_show(struct device *dev,

@@ -707,6 +710,7 @@ static struct cxl_port *cxl_port_alloc(struct device *uport_dev,
struct cxl_port *iter;

dev->parent = &parent_port->dev;
get_device(dev->parent);
port->depth = parent_port->depth + 1;
port->parent_dport = parent_dport;
@@ -3854,8 +3854,10 @@ static int __construct_region(struct cxl_region *cxlr,
}

rc = sysfs_update_group(&cxlr->dev.kobj, &cxl_region_group);
if (rc)
if (rc) {
kfree(res);
return rc;
}

rc = insert_resource(cxlrd->res, res);
if (rc) {
@@ -554,7 +554,7 @@ static __exit void cxl_pmem_exit(void)

MODULE_DESCRIPTION("CXL PMEM: Persistent Memory Support");
MODULE_LICENSE("GPL v2");
module_init(cxl_pmem_init);
subsys_initcall(cxl_pmem_init);
module_exit(cxl_pmem_exit);
MODULE_IMPORT_NS("CXL");
MODULE_ALIAS_CXL(CXL_DEVICE_NVDIMM_BRIDGE);
@@ -844,6 +844,7 @@ static int dw_edma_irq_request(struct dw_edma *dw,
{
struct dw_edma_chip *chip = dw->chip;
struct device *dev = dw->chip->dev;
struct msi_desc *msi_desc;
u32 wr_mask = 1;
u32 rd_mask = 1;
int i, err = 0;

@@ -895,9 +896,12 @@ static int dw_edma_irq_request(struct dw_edma *dw,
&dw->irq[i]);
if (err)
goto err_irq_free;

if (irq_get_msi_desc(irq))
msi_desc = irq_get_msi_desc(irq);
if (msi_desc) {
get_cached_msi_msg(irq, &dw->irq[i].msi);
if (!msi_desc->pci.msi_attrib.is_msix)
dw->irq[i].msi.data = dw->irq[0].msi.data + i;
}
}

dw->nr_irqs = i;
@@ -252,10 +252,10 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
lower_32_bits(chunk->ll_region.paddr));
SET_CH_32(dw, chan->dir, chan->id, llp.msb,
upper_32_bits(chunk->ll_region.paddr));
/* Set consumer cycle */
SET_CH_32(dw, chan->dir, chan->id, cycle_sync,
HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT);
}
/* Set consumer cycle */
SET_CH_32(dw, chan->dir, chan->id, cycle_sync,
HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT);

dw_hdma_v0_sync_ll_data(chunk);
@@ -317,10 +317,8 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
return NULL;
i = fsl_chan - fsl_edma->chans;

fsl_chan->priority = dma_spec->args[1];
fsl_chan->is_rxchan = dma_spec->args[2] & FSL_EDMA_RX;
fsl_chan->is_remote = dma_spec->args[2] & FSL_EDMA_REMOTE;
fsl_chan->is_multi_fifo = dma_spec->args[2] & FSL_EDMA_MULTI_FIFO;
if (!b_chmux && i != dma_spec->args[0])
continue;

if ((dma_spec->args[2] & FSL_EDMA_EVEN_CH) && (i & 0x1))
continue;

@@ -328,17 +326,15 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
if ((dma_spec->args[2] & FSL_EDMA_ODD_CH) && !(i & 0x1))
continue;

if (!b_chmux && i == dma_spec->args[0]) {
chan = dma_get_slave_channel(chan);
chan->device->privatecnt++;
return chan;
} else if (b_chmux && !fsl_chan->srcid) {
/* if controller support channel mux, choose a free channel */
chan = dma_get_slave_channel(chan);
chan->device->privatecnt++;
fsl_chan->srcid = dma_spec->args[0];
return chan;
}
fsl_chan->srcid = dma_spec->args[0];
fsl_chan->priority = dma_spec->args[1];
fsl_chan->is_rxchan = dma_spec->args[2] & FSL_EDMA_RX;
fsl_chan->is_remote = dma_spec->args[2] & FSL_EDMA_REMOTE;
fsl_chan->is_multi_fifo = dma_spec->args[2] & FSL_EDMA_MULTI_FIFO;

chan = dma_get_slave_channel(chan);
chan->device->privatecnt++;
return chan;
}
return NULL;
}
@@ -158,11 +158,7 @@ static const struct device_type idxd_cdev_file_type = {
static void idxd_cdev_dev_release(struct device *dev)
{
struct idxd_cdev *idxd_cdev = dev_to_cdev(dev);
struct idxd_cdev_context *cdev_ctx;
struct idxd_wq *wq = idxd_cdev->wq;

cdev_ctx = &ictx[wq->idxd->data->type];
ida_free(&cdev_ctx->minor_ida, idxd_cdev->minor);
kfree(idxd_cdev);
}

@@ -582,11 +578,15 @@ int idxd_wq_add_cdev(struct idxd_wq *wq)

void idxd_wq_del_cdev(struct idxd_wq *wq)
{
struct idxd_cdev_context *cdev_ctx;
struct idxd_cdev *idxd_cdev;

idxd_cdev = wq->idxd_cdev;
wq->idxd_cdev = NULL;
cdev_device_del(&idxd_cdev->cdev, cdev_dev(idxd_cdev));

cdev_ctx = &ictx[wq->idxd->data->type];
ida_free(&cdev_ctx->minor_ida, idxd_cdev->minor);
put_device(cdev_dev(idxd_cdev));
}
@@ -175,6 +175,7 @@ void idxd_wq_free_resources(struct idxd_wq *wq)
free_descs(wq);
dma_free_coherent(dev, wq->compls_size, wq->compls, wq->compls_addr);
sbitmap_queue_free(&wq->sbq);
wq->type = IDXD_WQT_NONE;
}
EXPORT_SYMBOL_NS_GPL(idxd_wq_free_resources, "IDXD");

@@ -382,7 +383,6 @@ static void idxd_wq_disable_cleanup(struct idxd_wq *wq)
lockdep_assert_held(&wq->wq_lock);
wq->state = IDXD_WQ_DISABLED;
memset(wq->wqcfg, 0, idxd->wqcfg_size);
wq->type = IDXD_WQT_NONE;
wq->threshold = 0;
wq->priority = 0;
wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;

@@ -831,8 +831,7 @@ static void idxd_device_evl_free(struct idxd_device *idxd)
struct device *dev = &idxd->pdev->dev;
struct idxd_evl *evl = idxd->evl;

gencfg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
if (!gencfg.evl_en)
if (!evl)
return;

mutex_lock(&evl->lock);

@@ -1125,7 +1124,11 @@ int idxd_device_config(struct idxd_device *idxd)
{
int rc;

lockdep_assert_held(&idxd->dev_lock);
guard(spinlock)(&idxd->dev_lock);

if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return 0;

rc = idxd_wqs_setup(idxd);
if (rc < 0)
return rc;

@@ -1332,6 +1335,11 @@ void idxd_wq_free_irq(struct idxd_wq *wq)

free_irq(ie->vector, ie);
idxd_flush_pending_descs(ie);

/* The interrupt might have been already released by FLR */
if (ie->int_handle == INVALID_INT_HANDLE)
return;

if (idxd->request_int_handles)
idxd_device_release_int_handle(idxd, ie->int_handle, IDXD_IRQ_MSIX);
idxd_device_clear_perm_entry(idxd, ie);

@@ -1340,6 +1348,23 @@ void idxd_wq_free_irq(struct idxd_wq *wq)
ie->pasid = IOMMU_PASID_INVALID;
}

void idxd_wq_flush_descs(struct idxd_wq *wq)
{
struct idxd_irq_entry *ie = &wq->ie;
struct idxd_device *idxd = wq->idxd;

guard(mutex)(&wq->wq_lock);

if (wq->state != IDXD_WQ_ENABLED || wq->type != IDXD_WQT_KERNEL)
return;

idxd_flush_pending_descs(ie);
if (idxd->request_int_handles)
idxd_device_release_int_handle(idxd, ie->int_handle, IDXD_IRQ_MSIX);
idxd_device_clear_perm_entry(idxd, ie);
ie->int_handle = INVALID_INT_HANDLE;
}

int idxd_wq_request_irq(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;

@@ -1454,11 +1479,7 @@ int idxd_drv_enable_wq(struct idxd_wq *wq)
}
}

rc = 0;
spin_lock(&idxd->dev_lock);
if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
rc = idxd_device_config(idxd);
spin_unlock(&idxd->dev_lock);
rc = idxd_device_config(idxd);
if (rc < 0) {
dev_dbg(dev, "Writing wq %d config failed: %d\n", wq->id, rc);
goto err;

@@ -1533,7 +1554,6 @@ void idxd_drv_disable_wq(struct idxd_wq *wq)
idxd_wq_reset(wq);
idxd_wq_free_resources(wq);
percpu_ref_exit(&wq->wq_active);
wq->type = IDXD_WQT_NONE;
wq->client_count = 0;
}
EXPORT_SYMBOL_NS_GPL(idxd_drv_disable_wq, "IDXD");

@@ -1554,10 +1574,7 @@ int idxd_device_drv_probe(struct idxd_dev *idxd_dev)
}

/* Device configuration */
spin_lock(&idxd->dev_lock);
if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
rc = idxd_device_config(idxd);
spin_unlock(&idxd->dev_lock);
rc = idxd_device_config(idxd);
if (rc < 0)
return -ENXIO;
@@ -194,6 +194,22 @@ static void idxd_dma_release(struct dma_device *device)
kfree(idxd_dma);
}

static int idxd_dma_terminate_all(struct dma_chan *c)
{
struct idxd_wq *wq = to_idxd_wq(c);

idxd_wq_flush_descs(wq);

return 0;
}

static void idxd_dma_synchronize(struct dma_chan *c)
{
struct idxd_wq *wq = to_idxd_wq(c);

idxd_wq_drain(wq);
}

int idxd_register_dma_device(struct idxd_device *idxd)
{
struct idxd_dma_dev *idxd_dma;

@@ -224,6 +240,8 @@ int idxd_register_dma_device(struct idxd_device *idxd)
dma->device_issue_pending = idxd_dma_issue_pending;
dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
dma->device_free_chan_resources = idxd_dma_free_chan_resources;
dma->device_terminate_all = idxd_dma_terminate_all;
dma->device_synchronize = idxd_dma_synchronize;

rc = dma_async_device_register(dma);
if (rc < 0) {
@@ -803,6 +803,7 @@ void idxd_wq_quiesce(struct idxd_wq *wq);
 int idxd_wq_init_percpu_ref(struct idxd_wq *wq);
 void idxd_wq_free_irq(struct idxd_wq *wq);
 int idxd_wq_request_irq(struct idxd_wq *wq);
+void idxd_wq_flush_descs(struct idxd_wq *wq);
 
 /* submission */
 int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc);
@@ -973,7 +973,8 @@ static void idxd_device_config_restore(struct idxd_device *idxd,
 
 	idxd->rdbuf_limit = idxd_saved->saved_idxd.rdbuf_limit;
 
-	idxd->evl->size = saved_evl->size;
+	if (idxd->evl)
+		idxd->evl->size = saved_evl->size;
 
 	for (i = 0; i < idxd->max_groups; i++) {
 		struct idxd_group *saved_group, *group;

@@ -1104,12 +1105,10 @@ static void idxd_reset_done(struct pci_dev *pdev)
 	idxd_device_config_restore(idxd, idxd->idxd_saved);
 
 	/* Re-configure IDXD device if allowed. */
-	if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) {
-		rc = idxd_device_config(idxd);
-		if (rc < 0) {
-			dev_err(dev, "HALT: %s config fails\n", idxd_name);
-			goto out;
-		}
+	rc = idxd_device_config(idxd);
+	if (rc < 0) {
+		dev_err(dev, "HALT: %s config fails\n", idxd_name);
+		goto out;
 	}
 
 	/* Bind IDXD device to driver. */

@@ -1147,6 +1146,7 @@ static void idxd_reset_done(struct pci_dev *pdev)
 	}
 out:
 	kfree(idxd->idxd_saved);
 	idxd->idxd_saved = NULL;
 }
 
 static const struct pci_error_handlers idxd_error_handler = {
@@ -397,6 +397,17 @@ static void idxd_device_flr(struct work_struct *work)
 		dev_err(&idxd->pdev->dev, "FLR failed\n");
 }
 
+static void idxd_wqs_flush_descs(struct idxd_device *idxd)
+{
+	int i;
+
+	for (i = 0; i < idxd->max_wqs; i++) {
+		struct idxd_wq *wq = idxd->wqs[i];
+
+		idxd_wq_flush_descs(wq);
+	}
+}
+
 static irqreturn_t idxd_halt(struct idxd_device *idxd)
 {
 	union gensts_reg gensts;

@@ -415,6 +426,11 @@ static irqreturn_t idxd_halt(struct idxd_device *idxd)
 	} else if (gensts.reset_type == IDXD_DEVICE_RESET_FLR) {
 		idxd->state = IDXD_DEV_HALTED;
 		idxd_mask_error_interrupts(idxd);
+		/* Flush all pending descriptors, and disable
+		 * interrupts, they will be re-enabled when FLR
+		 * concludes.
+		 */
+		idxd_wqs_flush_descs(idxd);
 		dev_dbg(&idxd->pdev->dev,
 			"idxd halted, doing FLR. After FLR, configs are restored\n");
 		INIT_WORK(&idxd->work, idxd_device_flr);
@@ -138,7 +138,7 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
 	 */
 	list_for_each_entry_safe(d, t, &flist, list) {
 		list_del_init(&d->list);
-		idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, true,
+		idxd_dma_complete_txd(d, IDXD_COMPLETE_ABORT, true,
 				      NULL, NULL);
 	}
 }
@@ -1836,6 +1836,7 @@ static void idxd_conf_device_release(struct device *dev)
 {
 	struct idxd_device *idxd = confdev_to_idxd(dev);
 
+	destroy_workqueue(idxd->wq);
 	kfree(idxd->groups);
 	bitmap_free(idxd->wq_enable_map);
 	kfree(idxd->wqs);
@@ -10,6 +10,7 @@
  */
 
 #include <linux/bitfield.h>
+#include <linux/cleanup.h>
 #include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
 #include <linux/interrupt.h>

@@ -296,13 +297,10 @@ static void rz_dmac_disable_hw(struct rz_dmac_chan *channel)
 {
 	struct dma_chan *chan = &channel->vc.chan;
 	struct rz_dmac *dmac = to_rz_dmac(chan->device);
-	unsigned long flags;
 
 	dev_dbg(dmac->dev, "%s channel %d\n", __func__, channel->index);
 
-	local_irq_save(flags);
 	rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
-	local_irq_restore(flags);
 }
 
 static void rz_dmac_set_dmars_register(struct rz_dmac *dmac, int nr, u32 dmars)

@@ -447,6 +445,7 @@ static int rz_dmac_alloc_chan_resources(struct dma_chan *chan)
 		if (!desc)
 			break;
 
+		/* No need to lock. This is called only for the 1st client. */
 		list_add_tail(&desc->node, &channel->ld_free);
 		channel->descs_allocated++;
 	}

@@ -502,18 +501,21 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 	dev_dbg(dmac->dev, "%s channel: %d src=0x%pad dst=0x%pad len=%zu\n",
 		__func__, channel->index, &src, &dest, len);
 
-	if (list_empty(&channel->ld_free))
-		return NULL;
-
-	desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
-	desc->type = RZ_DMAC_DESC_MEMCPY;
-	desc->src = src;
-	desc->dest = dest;
-	desc->len = len;
-	desc->direction = DMA_MEM_TO_MEM;
-
-	list_move_tail(channel->ld_free.next, &channel->ld_queue);
+	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+		if (list_empty(&channel->ld_free))
+			return NULL;
+
+		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
+
+		desc->type = RZ_DMAC_DESC_MEMCPY;
+		desc->src = src;
+		desc->dest = dest;
+		desc->len = len;
+		desc->direction = DMA_MEM_TO_MEM;
+
+		list_move_tail(channel->ld_free.next, &channel->ld_queue);
+	}
 
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }

@@ -529,27 +531,29 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	int dma_length = 0;
 	int i = 0;
 
-	if (list_empty(&channel->ld_free))
-		return NULL;
-
-	desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
-	for_each_sg(sgl, sg, sg_len, i) {
-		dma_length += sg_dma_len(sg);
-	}
-
-	desc->type = RZ_DMAC_DESC_SLAVE_SG;
-	desc->sg = sgl;
-	desc->sgcount = sg_len;
-	desc->len = dma_length;
-	desc->direction = direction;
-
-	if (direction == DMA_DEV_TO_MEM)
-		desc->src = channel->src_per_address;
-	else
-		desc->dest = channel->dst_per_address;
-
-	list_move_tail(channel->ld_free.next, &channel->ld_queue);
+	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+		if (list_empty(&channel->ld_free))
+			return NULL;
+
+		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
+
+		for_each_sg(sgl, sg, sg_len, i)
+			dma_length += sg_dma_len(sg);
+
+		desc->type = RZ_DMAC_DESC_SLAVE_SG;
+		desc->sg = sgl;
+		desc->sgcount = sg_len;
+		desc->len = dma_length;
+		desc->direction = direction;
+
+		if (direction == DMA_DEV_TO_MEM)
+			desc->src = channel->src_per_address;
+		else
+			desc->dest = channel->dst_per_address;
+
+		list_move_tail(channel->ld_free.next, &channel->ld_queue);
+	}
 
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }

@@ -561,8 +565,8 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 	unsigned int i;
 	LIST_HEAD(head);
 
-	rz_dmac_disable_hw(channel);
 	spin_lock_irqsave(&channel->vc.lock, flags);
+	rz_dmac_disable_hw(channel);
 	for (i = 0; i < DMAC_NR_LMDESC; i++)
 		lmdesc[i].header = 0;

@@ -699,7 +703,9 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
 	if (chstat & CHSTAT_ER) {
 		dev_err(dmac->dev, "DMAC err CHSTAT_%d = %08X\n",
 			channel->index, chstat);
-		rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
+
+		scoped_guard(spinlock_irqsave, &channel->vc.lock)
+			rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
 		goto done;
 	}
@@ -1234,8 +1234,8 @@ static int xdma_probe(struct platform_device *pdev)
 
 	xdev->rmap = devm_regmap_init_mmio(&pdev->dev, reg_base,
 					   &xdma_regmap_config);
-	if (!xdev->rmap) {
-		xdma_err(xdev, "config regmap failed: %d", ret);
+	if (IS_ERR(xdev->rmap)) {
+		xdma_err(xdev, "config regmap failed: %pe", xdev->rmap);
 		goto failed;
 	}
 	INIT_LIST_HEAD(&xdev->dma_dev.channels);
@@ -997,16 +997,16 @@ static u32 xilinx_dma_get_residue(struct xilinx_dma_chan *chan,
 					struct xilinx_cdma_tx_segment,
 					node);
 			cdma_hw = &cdma_seg->hw;
-			residue += (cdma_hw->control - cdma_hw->status) &
-				   chan->xdev->max_buffer_len;
+			residue += (cdma_hw->control & chan->xdev->max_buffer_len) -
+				   (cdma_hw->status & chan->xdev->max_buffer_len);
 		} else if (chan->xdev->dma_config->dmatype ==
 			   XDMA_TYPE_AXIDMA) {
 			axidma_seg = list_entry(entry,
 						struct xilinx_axidma_tx_segment,
 						node);
 			axidma_hw = &axidma_seg->hw;
-			residue += (axidma_hw->control - axidma_hw->status) &
-				   chan->xdev->max_buffer_len;
+			residue += (axidma_hw->control & chan->xdev->max_buffer_len) -
+				   (axidma_hw->status & chan->xdev->max_buffer_len);
 		} else {
 			aximcdma_seg =
 				list_entry(entry,

@@ -1014,8 +1014,8 @@ static u32 xilinx_dma_get_residue(struct xilinx_dma_chan *chan,
 					   node);
 			aximcdma_hw = &aximcdma_seg->hw;
 			residue +=
-				(aximcdma_hw->control - aximcdma_hw->status) &
-				chan->xdev->max_buffer_len;
+				(aximcdma_hw->control & chan->xdev->max_buffer_len) -
+				(aximcdma_hw->status & chan->xdev->max_buffer_len);
 		}
 	}

@@ -1235,14 +1235,6 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
 	dma_cookie_init(dchan);
 
-	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		/* For AXI DMA resetting once channel will reset the
-		 * other channel as well so enable the interrupts here.
-		 */
-		dma_ctrl_set(chan, XILINX_DMA_REG_DMACR,
-			     XILINX_DMA_DMAXR_ALL_IRQ_MASK);
-	}
-
 	if ((chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) && chan->has_sg)
 		dma_ctrl_set(chan, XILINX_DMA_REG_DMACR,
 			     XILINX_CDMA_CR_SGMODE);

@@ -1564,8 +1556,29 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 	if (chan->err)
 		return;
 
-	if (list_empty(&chan->pending_list))
+	if (list_empty(&chan->pending_list)) {
+		if (chan->cyclic) {
+			struct xilinx_dma_tx_descriptor *desc;
+			struct list_head *entry;
+
+			desc = list_last_entry(&chan->done_list,
+					       struct xilinx_dma_tx_descriptor, node);
+			list_for_each(entry, &desc->segments) {
+				struct xilinx_axidma_tx_segment *axidma_seg;
+				struct xilinx_axidma_desc_hw *axidma_hw;
+
+				axidma_seg = list_entry(entry,
+							struct xilinx_axidma_tx_segment,
+							node);
+				axidma_hw = &axidma_seg->hw;
+				axidma_hw->status = 0;
+			}
+
+			list_splice_tail_init(&chan->done_list, &chan->active_list);
+			chan->desc_pendingcount = 0;
+			chan->idle = false;
+		}
 		return;
+	}
 
 	if (!chan->idle)
 		return;

@@ -1591,6 +1604,7 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 			  head_desc->async_tx.phys);
 	reg &= ~XILINX_DMA_CR_DELAY_MAX;
 	reg |= chan->irq_delay << XILINX_DMA_CR_DELAY_SHIFT;
+	reg |= XILINX_DMA_DMAXR_ALL_IRQ_MASK;
 	dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
 
 	xilinx_dma_start(chan);

@@ -3024,7 +3038,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 		return -EINVAL;
 	}
 
-	xdev->common.directions |= chan->direction;
+	xdev->common.directions |= BIT(chan->direction);
 
 	/* Request the interrupt */
 	chan->irq = of_irq_get(node, chan->tdest);
@@ -692,9 +692,9 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev,
 		goto err_ib_sched;
 	}
 
-	/* Drop the initial kref_init count (see drm_sched_main as example) */
-	dma_fence_put(f);
 	ret = dma_fence_wait(f, false);
+	/* Drop the returned fence reference after the wait completes */
+	dma_fence_put(f);
 
 err_ib_sched:
 	amdgpu_job_free(job);
@@ -4207,7 +4207,8 @@ fail:
 
 static int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
 {
-	char *input = amdgpu_lockup_timeout;
+	char buf[AMDGPU_MAX_TIMEOUT_PARAM_LENGTH];
+	char *input = buf;
 	char *timeout_setting = NULL;
 	int index = 0;
 	long timeout;

@@ -4217,9 +4218,17 @@ static int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
 	adev->gfx_timeout = adev->compute_timeout = adev->sdma_timeout =
 		adev->video_timeout = msecs_to_jiffies(2000);
 
-	if (!strnlen(input, AMDGPU_MAX_TIMEOUT_PARAM_LENGTH))
+	if (!strnlen(amdgpu_lockup_timeout, AMDGPU_MAX_TIMEOUT_PARAM_LENGTH))
 		return 0;
 
+	/*
+	 * strsep() destructively modifies its input by replacing delimiters
+	 * with '\0'. Use a stack copy so the global module parameter buffer
+	 * remains intact for multi-GPU systems where this function is called
+	 * once per device.
+	 */
+	strscpy(buf, amdgpu_lockup_timeout, sizeof(buf));
+
 	while ((timeout_setting = strsep(&input, ",")) &&
 	       strnlen(timeout_setting, AMDGPU_MAX_TIMEOUT_PARAM_LENGTH)) {
 		ret = kstrtol(timeout_setting, 0, &timeout);
@@ -35,10 +35,13 @@
 * PASIDs are global address space identifiers that can be shared
 * between the GPU, an IOMMU and the driver. VMs on different devices
 * may use the same PASID if they share the same address
- * space. Therefore PASIDs are allocated using a global IDA. VMs are
- * looked up from the PASID per amdgpu_device.
+ * space. Therefore PASIDs are allocated using IDR cyclic allocator
+ * (similar to kernel PID allocation) which naturally delays reuse.
+ * VMs are looked up from the PASID per amdgpu_device.
 */
-static DEFINE_IDA(amdgpu_pasid_ida);
+
+static DEFINE_IDR(amdgpu_pasid_idr);
+static DEFINE_SPINLOCK(amdgpu_pasid_idr_lock);
 
 /* Helper to free pasid from a fence callback */
 struct amdgpu_pasid_cb {

@@ -50,8 +53,8 @@ struct amdgpu_pasid_cb {
 * amdgpu_pasid_alloc - Allocate a PASID
 * @bits: Maximum width of the PASID in bits, must be at least 1
 *
- * Allocates a PASID of the given width while keeping smaller PASIDs
- * available if possible.
+ * Uses kernel's IDR cyclic allocator (same as PID allocation).
+ * Allocates sequentially with automatic wrap-around.
 *
 * Returns a positive integer on success. Returns %-EINVAL if bits==0.
 * Returns %-ENOSPC if no PASID was available. Returns %-ENOMEM on

@@ -59,14 +62,15 @@ struct amdgpu_pasid_cb {
 */
 int amdgpu_pasid_alloc(unsigned int bits)
 {
-	int pasid = -EINVAL;
+	int pasid;
 
-	for (bits = min(bits, 31U); bits > 0; bits--) {
-		pasid = ida_alloc_range(&amdgpu_pasid_ida, 1U << (bits - 1),
-					(1U << bits) - 1, GFP_KERNEL);
-		if (pasid != -ENOSPC)
-			break;
-	}
+	if (bits == 0)
+		return -EINVAL;
+
+	spin_lock(&amdgpu_pasid_idr_lock);
+	pasid = idr_alloc_cyclic(&amdgpu_pasid_idr, NULL, 1,
+				 1U << bits, GFP_KERNEL);
+	spin_unlock(&amdgpu_pasid_idr_lock);
 
 	if (pasid >= 0)
 		trace_amdgpu_pasid_allocated(pasid);

@@ -81,7 +85,10 @@ int amdgpu_pasid_alloc(unsigned int bits)
 void amdgpu_pasid_free(u32 pasid)
 {
 	trace_amdgpu_pasid_freed(pasid);
-	ida_free(&amdgpu_pasid_ida, pasid);
+
+	spin_lock(&amdgpu_pasid_idr_lock);
+	idr_remove(&amdgpu_pasid_idr, pasid);
+	spin_unlock(&amdgpu_pasid_idr_lock);
 }
 
 static void amdgpu_pasid_free_cb(struct dma_fence *fence,

@@ -616,3 +623,15 @@ void amdgpu_vmid_mgr_fini(struct amdgpu_device *adev)
 		}
 	}
 }
+
+/**
+ * amdgpu_pasid_mgr_cleanup - cleanup PASID manager
+ *
+ * Cleanup the IDR allocator.
+ */
+void amdgpu_pasid_mgr_cleanup(void)
+{
+	spin_lock(&amdgpu_pasid_idr_lock);
+	idr_destroy(&amdgpu_pasid_idr);
+	spin_unlock(&amdgpu_pasid_idr_lock);
+}

@@ -74,6 +74,7 @@ int amdgpu_pasid_alloc(unsigned int bits);
 void amdgpu_pasid_free(u32 pasid);
 void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 			       u32 pasid);
+void amdgpu_pasid_mgr_cleanup(void);
 
 bool amdgpu_vmid_had_gpu_reset(struct amdgpu_device *adev,
 			       struct amdgpu_vmid *id);
@@ -2898,6 +2898,7 @@ void amdgpu_vm_manager_fini(struct amdgpu_device *adev)
 	xa_destroy(&adev->vm_manager.pasids);
 
 	amdgpu_vmid_mgr_fini(adev);
+	amdgpu_pasid_mgr_cleanup();
 }
 
 /**

@@ -2973,14 +2974,14 @@ bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, u32 pasid,
 	if (!root)
 		return false;
 
-	addr /= AMDGPU_GPU_PAGE_SIZE;
-
 	if (is_compute_context && !svm_range_restore_pages(adev, pasid, vmid,
-	    node_id, addr, ts, write_fault)) {
+	    node_id, addr >> PAGE_SHIFT, ts, write_fault)) {
 		amdgpu_bo_unref(&root);
 		return true;
 	}
 
+	addr /= AMDGPU_GPU_PAGE_SIZE;
+
 	r = amdgpu_bo_reserve(root, true);
 	if (r)
 		goto error_unref;
@@ -3170,11 +3170,11 @@ static int kfd_ioctl_create_process(struct file *filep, struct kfd_process *p, v
 	struct kfd_process *process;
 	int ret;
 
-	/* Each FD owns only one kfd_process */
-	if (p->context_id != KFD_CONTEXT_ID_PRIMARY)
+	if (!filep->private_data || !p)
 		return -EINVAL;
 
-	if (!filep->private_data || !p)
+	/* Each FD owns only one kfd_process */
+	if (p->context_id != KFD_CONTEXT_ID_PRIMARY)
 		return -EINVAL;
 
 	mutex_lock(&kfd_processes_mutex);
@@ -3909,8 +3909,9 @@ void amdgpu_dm_update_connector_after_detect(
 
 	aconnector->dc_sink = sink;
 	dc_sink_retain(aconnector->dc_sink);
+	drm_edid_free(aconnector->drm_edid);
+	aconnector->drm_edid = NULL;
 	if (sink->dc_edid.length == 0) {
-		aconnector->drm_edid = NULL;
 		hdmi_cec_unset_edid(aconnector);
 		if (aconnector->dc_link->aux_mode) {
 			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);

@@ -5422,7 +5423,7 @@ static void setup_backlight_device(struct amdgpu_display_manager *dm,
 	caps = &dm->backlight_caps[aconnector->bl_idx];
 
 	/* Only offer ABM property when non-OLED and user didn't turn off by module parameter */
-	if (!caps->ext_caps->bits.oled && amdgpu_dm_abm_level < 0)
+	if (caps->ext_caps && !caps->ext_caps->bits.oled && amdgpu_dm_abm_level < 0)
 		drm_object_attach_property(&aconnector->base.base,
 					   dm->adev->mode_info.abm_level_property,
 					   ABM_SYSFS_CONTROL);

@@ -12523,6 +12524,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 	}
 
 	if (dc_resource_is_dsc_encoding_supported(dc)) {
+		for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+			dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+			dm_new_crtc_state->mode_changed_independent_from_dsc = new_crtc_state->mode_changed;
+		}
+
 		for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 			if (drm_atomic_crtc_needs_modeset(new_crtc_state)) {
 				ret = add_affected_mst_dsc_crtcs(state, crtc);
@@ -984,6 +984,7 @@ struct dm_crtc_state {
 
 	bool freesync_vrr_info_changed;
 
+	bool mode_changed_independent_from_dsc;
 	bool dsc_force_changed;
 	bool vrr_supported;
 	struct mod_freesync_config freesync_config;
@@ -1744,9 +1744,11 @@ int pre_validate_dsc(struct drm_atomic_state *state,
 			int ind = find_crtc_index_in_state_by_stream(state, stream);
 
 			if (ind >= 0) {
+				struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(state->crtcs[ind].new_state);
+
 				DRM_INFO_ONCE("%s:%d MST_DSC no mode changed for stream 0x%p\n",
 					      __func__, __LINE__, stream);
-				state->crtcs[ind].new_state->mode_changed = 0;
+				dm_new_crtc_state->base.mode_changed = dm_new_crtc_state->mode_changed_independent_from_dsc;
 			}
 		}
 	}
@@ -650,9 +650,6 @@ static struct link_encoder *dce100_link_encoder_create(
 		return &enc110->base;
 	}
 
-	if (enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
-		return NULL;
-
 	link_regs_id =
 		map_transmitter_id_to_phy_instance(enc_init_data->transmitter);
 

@@ -661,7 +658,8 @@ static struct link_encoder *dce100_link_encoder_create(
 		&link_enc_feature,
 		&link_enc_regs[link_regs_id],
 		&link_enc_aux_regs[enc_init_data->channel - 1],
-		&link_enc_hpd_regs[enc_init_data->hpd_source]);
+		enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+		NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 	return &enc110->base;
 }
 
@@ -671,7 +671,7 @@ static struct link_encoder *dce110_link_encoder_create(
 		kzalloc_obj(struct dce110_link_encoder);
 	int link_regs_id;
 
-	if (!enc110 || enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
+	if (!enc110)
 		return NULL;
 
 	link_regs_id =

@@ -682,7 +682,8 @@ static struct link_encoder *dce110_link_encoder_create(
 		&link_enc_feature,
 		&link_enc_regs[link_regs_id],
 		&link_enc_aux_regs[enc_init_data->channel - 1],
-		&link_enc_hpd_regs[enc_init_data->hpd_source]);
+		enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+		NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 	return &enc110->base;
 }
 
@@ -632,7 +632,7 @@ static struct link_encoder *dce112_link_encoder_create(
 		kzalloc_obj(struct dce110_link_encoder);
 	int link_regs_id;
 
-	if (!enc110 || enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
+	if (!enc110)
 		return NULL;
 
 	link_regs_id =

@@ -643,7 +643,8 @@ static struct link_encoder *dce112_link_encoder_create(
 		&link_enc_feature,
 		&link_enc_regs[link_regs_id],
 		&link_enc_aux_regs[enc_init_data->channel - 1],
-		&link_enc_hpd_regs[enc_init_data->hpd_source]);
+		enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+		NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 	return &enc110->base;
 }
 
@@ -716,7 +716,7 @@ static struct link_encoder *dce120_link_encoder_create(
 		kzalloc_obj(struct dce110_link_encoder);
 	int link_regs_id;
 
-	if (!enc110 || enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
+	if (!enc110)
 		return NULL;
 
 	link_regs_id =

@@ -727,7 +727,8 @@ static struct link_encoder *dce120_link_encoder_create(
 		&link_enc_feature,
 		&link_enc_regs[link_regs_id],
 		&link_enc_aux_regs[enc_init_data->channel - 1],
-		&link_enc_hpd_regs[enc_init_data->hpd_source]);
+		enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+		NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 
 	return &enc110->base;
 }
@@ -746,18 +746,16 @@ static struct link_encoder *dce60_link_encoder_create(
 		return &enc110->base;
 	}
 
-	if (enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
-		return NULL;
-
 	link_regs_id =
 		map_transmitter_id_to_phy_instance(enc_init_data->transmitter);
 
 	dce60_link_encoder_construct(enc110,
-			enc_init_data,
-			&link_enc_feature,
-			&link_enc_regs[link_regs_id],
-			&link_enc_aux_regs[enc_init_data->channel - 1],
-			&link_enc_hpd_regs[enc_init_data->hpd_source]);
+				     enc_init_data,
+				     &link_enc_feature,
+				     &link_enc_regs[link_regs_id],
+				     &link_enc_aux_regs[enc_init_data->channel - 1],
+				     enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+				     NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 	return &enc110->base;
 }
 
@@ -752,9 +752,6 @@ static struct link_encoder *dce80_link_encoder_create(
 		return &enc110->base;
 	}
 
-	if (enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
-		return NULL;
-
 	link_regs_id =
 		map_transmitter_id_to_phy_instance(enc_init_data->transmitter);
 

@@ -763,7 +760,8 @@ static struct link_encoder *dce80_link_encoder_create(
 		&link_enc_feature,
 		&link_enc_regs[link_regs_id],
 		&link_enc_aux_regs[enc_init_data->channel - 1],
-		&link_enc_hpd_regs[enc_init_data->hpd_source]);
+		enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
+		NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
 	return &enc110->base;
 }
 
|
|||
|
||||
#define to_amdgpu_device(x) (container_of(x, struct amdgpu_device, pm.smu_i2c))
|
||||
|
||||
static void smu_v13_0_0_get_od_setting_limits(struct smu_context *smu,
|
||||
int od_feature_bit,
|
||||
int32_t *min, int32_t *max);
|
||||
|
||||
static const struct smu_feature_bits smu_v13_0_0_dpm_features = {
|
||||
.bits = {
|
||||
SMU_FEATURE_BIT_INIT(FEATURE_DPM_GFXCLK_BIT),
|
||||
|
|
@ -1043,8 +1047,35 @@ static bool smu_v13_0_0_is_od_feature_supported(struct smu_context *smu,
|
|||
PPTable_t *pptable = smu->smu_table.driver_pptable;
|
||||
const OverDriveLimits_t * const overdrive_upperlimits =
|
||||
&pptable->SkuTable.OverDriveLimitsBasicMax;
|
||||
int32_t min_value, max_value;
|
||||
bool feature_enabled;
|
||||
|
||||
return overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit);
|
||||
switch (od_feature_bit) {
|
||||
case PP_OD_FEATURE_FAN_CURVE_BIT:
|
||||
feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
|
||||
if (feature_enabled) {
|
||||
smu_v13_0_0_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_TEMP,
|
||||
&min_value, &max_value);
|
||||
if (!min_value && !max_value) {
|
||||
feature_enabled = false;
|
||||
goto out;
|
||||
}
|
||||
|
||||
smu_v13_0_0_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_PWM,
|
||||
&min_value, &max_value);
|
||||
if (!min_value && !max_value) {
|
||||
feature_enabled = false;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
break;
|
||||
default:
|
||||
feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
|
||||
break;
|
||||
}
|
||||
|
||||
out:
|
||||
return feature_enabled;
|
||||
}
|
||||
|
||||
static void smu_v13_0_0_get_od_setting_limits(struct smu_context *smu,
|
||||
|
|
|
|||
|
|
@@ -1391,7 +1391,7 @@ static int smu_v13_0_6_emit_clk_levels(struct smu_context *smu,
 		break;
 	case SMU_OD_MCLK:
 		if (!smu_v13_0_6_cap_supported(smu, SMU_CAP(SET_UCLK_MAX)))
-			return 0;
+			return -EOPNOTSUPP;
 
 		size += sysfs_emit_at(buf, size, "%s:\n", "OD_MCLK");
 		size += sysfs_emit_at(buf, size, "0: %uMhz\n1: %uMhz\n",

@@ -2122,6 +2122,7 @@ static int smu_v13_0_6_usr_edit_dpm_table(struct smu_context *smu,
 {
 	struct smu_dpm_context *smu_dpm = &(smu->smu_dpm);
 	struct smu_13_0_dpm_context *dpm_context = smu_dpm->dpm_context;
+	struct smu_dpm_table *uclk_table = &dpm_context->dpm_tables.uclk_table;
 	struct smu_umd_pstate_table *pstate_table = &smu->pstate_table;
 	uint32_t min_clk;
 	uint32_t max_clk;

@@ -2221,14 +2222,16 @@ static int smu_v13_0_6_usr_edit_dpm_table(struct smu_context *smu,
 			if (ret)
 				return ret;
 
-			min_clk = SMU_DPM_TABLE_MIN(
-					&dpm_context->dpm_tables.uclk_table);
-			max_clk = SMU_DPM_TABLE_MAX(
-					&dpm_context->dpm_tables.uclk_table);
-			ret = smu_v13_0_6_set_soft_freq_limited_range(
-					smu, SMU_UCLK, min_clk, max_clk, false);
-			if (ret)
-				return ret;
+			if (SMU_DPM_TABLE_MAX(uclk_table) !=
+			    pstate_table->uclk_pstate.curr.max) {
+				min_clk = SMU_DPM_TABLE_MIN(&dpm_context->dpm_tables.uclk_table);
+				max_clk = SMU_DPM_TABLE_MAX(&dpm_context->dpm_tables.uclk_table);
+				ret = smu_v13_0_6_set_soft_freq_limited_range(smu,
+									      SMU_UCLK, min_clk,
+									      max_clk, false);
+				if (ret)
+					return ret;
+			}
 			smu_v13_0_reset_custom_level(smu);
 		}
 		break;
@@ -59,6 +59,10 @@
 
 #define to_amdgpu_device(x) (container_of(x, struct amdgpu_device, pm.smu_i2c))
 
+static void smu_v13_0_7_get_od_setting_limits(struct smu_context *smu,
+					      int od_feature_bit,
+					      int32_t *min, int32_t *max);
+
 static const struct smu_feature_bits smu_v13_0_7_dpm_features = {
 	.bits = {
 		SMU_FEATURE_BIT_INIT(FEATURE_DPM_GFXCLK_BIT),

@@ -1053,8 +1057,35 @@ static bool smu_v13_0_7_is_od_feature_supported(struct smu_context *smu,
 	PPTable_t *pptable = smu->smu_table.driver_pptable;
 	const OverDriveLimits_t * const overdrive_upperlimits =
 				&pptable->SkuTable.OverDriveLimitsBasicMax;
+	int32_t min_value, max_value;
+	bool feature_enabled;
 
-	return overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit);
+	switch (od_feature_bit) {
+	case PP_OD_FEATURE_FAN_CURVE_BIT:
+		feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
+		if (feature_enabled) {
+			smu_v13_0_7_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_TEMP,
+							  &min_value, &max_value);
+			if (!min_value && !max_value) {
+				feature_enabled = false;
+				goto out;
+			}
+
+			smu_v13_0_7_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_PWM,
+							  &min_value, &max_value);
+			if (!min_value && !max_value) {
+				feature_enabled = false;
+				goto out;
+			}
+		}
+		break;
+	default:
+		feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
+		break;
+	}
+
+out:
+	return feature_enabled;
 }
 
 static void smu_v13_0_7_get_od_setting_limits(struct smu_context *smu,
--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
@@ -56,6 +56,10 @@
 #define to_amdgpu_device(x) (container_of(x, struct amdgpu_device, pm.smu_i2c))
 
+static void smu_v14_0_2_get_od_setting_limits(struct smu_context *smu,
+					      int od_feature_bit,
+					      int32_t *min, int32_t *max);
+
 static const struct smu_feature_bits smu_v14_0_2_dpm_features = {
 	.bits = { SMU_FEATURE_BIT_INIT(FEATURE_DPM_GFXCLK_BIT),
 		  SMU_FEATURE_BIT_INIT(FEATURE_DPM_UCLK_BIT),
@@ -922,8 +926,35 @@ static bool smu_v14_0_2_is_od_feature_supported(struct smu_context *smu,
 	PPTable_t *pptable = smu->smu_table.driver_pptable;
 	const OverDriveLimits_t * const overdrive_upperlimits =
 		&pptable->SkuTable.OverDriveLimitsBasicMax;
+	int32_t min_value, max_value;
+	bool feature_enabled;
 
-	return overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit);
+	switch (od_feature_bit) {
+	case PP_OD_FEATURE_FAN_CURVE_BIT:
+		feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
+		if (feature_enabled) {
+			smu_v14_0_2_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_TEMP,
+							  &min_value, &max_value);
+			if (!min_value && !max_value) {
+				feature_enabled = false;
+				goto out;
+			}
+
+			smu_v14_0_2_get_od_setting_limits(smu, PP_OD_FEATURE_FAN_CURVE_PWM,
+							  &min_value, &max_value);
+			if (!min_value && !max_value) {
+				feature_enabled = false;
+				goto out;
+			}
+		}
+		break;
+	default:
+		feature_enabled = !!(overdrive_upperlimits->FeatureCtrlMask & (1U << od_feature_bit));
+		break;
+	}
+
+out:
+	return feature_enabled;
 }
 
 static void smu_v14_0_2_get_od_setting_limits(struct smu_context *smu,
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,27 +550,27 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-				      struct page *page)
+static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
+				 unsigned long pfn)
 {
+	if (!order) {
+		return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
-	unsigned long pfn = page_to_pfn(page);
-	unsigned long paddr = pfn << PAGE_SHIFT;
-	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
+	} else if (order == PMD_ORDER) {
+		unsigned long paddr = pfn << PAGE_SHIFT;
+		bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
 
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
-		pfn &= PMD_MASK >> PAGE_SHIFT;
-		if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
-			return true;
-	}
+		if (aligned &&
+		    folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+			pfn &= PMD_MASK >> PAGE_SHIFT;
+			return vmf_insert_pfn_pmd(vmf, pfn, false);
+		}
 #endif
+	}
 
-	return false;
-}
+	return VM_FAULT_FALLBACK;
+}
 
-static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
+static vm_fault_t drm_gem_shmem_any_fault(struct vm_fault *vmf, unsigned int order)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
@@ -581,6 +581,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	pgoff_t page_offset;
 	unsigned long pfn;
 
+	if (order && order != PMD_ORDER)
+		return VM_FAULT_FALLBACK;
+
 	/* Offset to faulty address in the VMA. */
 	page_offset = vmf->pgoff - vma->vm_pgoff;
@@ -593,13 +596,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		goto out;
 	}
 
-	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
-		ret = VM_FAULT_NOPAGE;
-		goto out;
-	}
-
 	pfn = page_to_pfn(pages[page_offset]);
-	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+	ret = try_insert_pfn(vmf, order, pfn);
 
 out:
 	dma_resv_unlock(shmem->base.resv);
@@ -607,6 +605,11 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
+{
+	return drm_gem_shmem_any_fault(vmf, 0);
+}
+
 static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
@@ -643,6 +646,9 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 
 const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 	.fault = drm_gem_shmem_fault,
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+	.huge_fault = drm_gem_shmem_any_fault,
+#endif
 	.open = drm_gem_shmem_vm_open,
 	.close = drm_gem_shmem_vm_close,
 };
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -602,7 +602,7 @@ int drm_syncobj_get_handle(struct drm_file *file_private,
 	drm_syncobj_get(syncobj);
 
 	ret = xa_alloc(&file_private->syncobj_xa, handle, syncobj, xa_limit_32b,
-		       GFP_NOWAIT);
+		       GFP_KERNEL);
 	if (ret)
 		drm_syncobj_put(syncobj);
@@ -716,7 +716,7 @@ static int drm_syncobj_fd_to_handle(struct drm_file *file_private,
 	drm_syncobj_get(syncobj);
 
 	ret = xa_alloc(&file_private->syncobj_xa, handle, syncobj, xa_limit_32b,
-		       GFP_NOWAIT);
+		       GFP_KERNEL);
 	if (ret)
 		drm_syncobj_put(syncobj);
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -4602,6 +4602,7 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
 	struct intel_crtc_state *crtc_state =
 		intel_atomic_get_new_crtc_state(state, crtc);
 	struct intel_crtc_state *saved_state;
+	int err;
 
 	saved_state = intel_crtc_state_alloc(crtc);
 	if (!saved_state)
@@ -4610,7 +4611,12 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
 	/* free the old crtc_state->hw members */
 	intel_crtc_free_hw_state(crtc_state);
 
-	intel_dp_tunnel_atomic_clear_stream_bw(state, crtc_state);
+	err = intel_dp_tunnel_atomic_clear_stream_bw(state, crtc_state);
+	if (err) {
+		kfree(saved_state);
+
+		return err;
+	}
 
 	/* FIXME: before the switch to atomic started, a new pipe_config was
 	 * kzalloc'd. Code that depends on any field being zero should be
--- a/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c
@@ -621,19 +621,27 @@ int intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
  *
  * Clear any DP tunnel stream BW requirement set by
  * intel_dp_tunnel_atomic_compute_stream_bw().
+ *
+ * Returns 0 in case of success, a negative error code otherwise.
  */
-void intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
-					    struct intel_crtc_state *crtc_state)
+int intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
+					   struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	int err;
 
 	if (!crtc_state->dp_tunnel_ref.tunnel)
-		return;
+		return 0;
 
-	drm_dp_tunnel_atomic_set_stream_bw(&state->base,
-					   crtc_state->dp_tunnel_ref.tunnel,
-					   crtc->pipe, 0);
+	err = drm_dp_tunnel_atomic_set_stream_bw(&state->base,
+						 crtc_state->dp_tunnel_ref.tunnel,
+						 crtc->pipe, 0);
+	if (err)
+		return err;
+
 	drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref);
+
+	return 0;
 }
 
 /**
--- a/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
+++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h
@@ -40,8 +40,8 @@ int intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
 					     struct intel_dp *intel_dp,
 					     const struct intel_connector *connector,
 					     struct intel_crtc_state *crtc_state);
-void intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
-					    struct intel_crtc_state *crtc_state);
+int intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
+					   struct intel_crtc_state *crtc_state);
 
 int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
 					      struct intel_crtc *crtc);
@@ -88,9 +88,12 @@ intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
 	return 0;
 }
 
-static inline void
+static inline int
 intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
-				       struct intel_crtc_state *crtc_state) {}
+				       struct intel_crtc_state *crtc_state)
+{
+	return 0;
+}
 
 static inline int
 intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
--- a/drivers/gpu/drm/i915/display/intel_gmbus.c
+++ b/drivers/gpu/drm/i915/display/intel_gmbus.c
@@ -496,8 +496,10 @@ gmbus_xfer_read_chunk(struct intel_display *display,
 
 		val = intel_de_read_fw(display, GMBUS3(display));
 		do {
-			if (extra_byte_added && len == 1)
+			if (extra_byte_added && len == 1) {
+				len--;
 				break;
+			}
 
 			*buf++ = val & 0xff;
 			val >>= 8;