Merge android-4.19.73 (8ca5759) into msm-4.19
* refs/heads/tmp-8ca5759:
  BACKPORT: make 'user_access_begin()' do 'access_ok()'
  ABI update for 4.19.72
  ANDROID: first pass cuttlefish GKI modularization
  ANDROID: GKI: enable CONFIG_TIPC for x86
  ANDROID: GKI: enable CONFIG_SPI for x86
  ANDROID: update abi for 4.19.69
  ANDROID: update ABI dump
  UPSTREAM: lib/test_meminit.c: use GFP_ATOMIC in RCU critical section
  UPSTREAM: mm: slub: Fix slab walking for init_on_free
  UPSTREAM: lib/test_meminit.c: minor test fixes
  UPSTREAM: lib/test_meminit.c: fix -Wmaybe-uninitialized false positive
  UPSTREAM: lib: introduce test_meminit module
  UPSTREAM: mm: init: report memory auto-initialization features at boot time
  UPSTREAM: mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options
  UPSTREAM: arm64: move jump_label_init() before parse_early_param()
  ANDROID: update ABI dump
  ANDROID: gki_defconfig: enable CONFIG_QCOM_{COMMAND_DB,RPMH,PDC}
  ANDROID: cuttlefish: overlayfs: regression
  ANDROID: gki_defconfig enable CONFIG_SPARSEMEM_VMEMMAP
  ANDROID: update ABI for EFI, SCHED_TUNE
  ANDROID: gki_defconfig: Enable SCHED_TUNE
  ANDROID: gki_defconfig: Minimally enable EFI
  ANDROID: Add a tracepoint for mapping inode to full path
  ANDROID: update ABI for CONFIG_NR_CPUS=32
  ANDROID: gki_defconfig: set CONFIG_NR_CPUS=32
  ANDROID: gki_defconfig: set CONFIG_NR_CPUS=32 (x86_64)
  ANDROID: update ABI for CONFIG_TIPC
  ANDROID: gki_defconfig: enable CONFIG_TIPC
  BACKPORT: arch: add pidfd and io_uring syscalls everywhere
  ANDROID: update ABI dump
  UPSTREAM: dma-buf: add show_fdinfo handler
  UPSTREAM: dma-buf: add DMA_BUF_SET_NAME ioctls
  UPSTREAM: dma-buf: give each buffer a full-fledged inode
  ANDROID: Update the expected ABI
  UPSTREAM: drm/virtio: Fix cache entry creation race.
  UPSTREAM: drm/virtio: Wake up all waiters when capset response comes in.
  UPSTREAM: drm/virtio: Ensure cached capset entries are valid before copying.
  UPSTREAM: drm/virtio: use u64_to_user_ptr macro
  UPSTREAM: drm/virtio: remove irrelevant DRM_UNLOCKED flag
  UPSTREAM: drm/virtio: Remove redundant return type
  UPSTREAM: drm/virtio: allocate fences with GFP_KERNEL
  UPSTREAM: drm/virtio: add trace events for commands
  UPSTREAM: drm/virtio: trace drm_fence_emit
  BACKPORT: drm/virtio: set seqno for dma-fence
  UPSTREAM: drm/virtio: move drm_connector_update_edid_property() call
  UPSTREAM: drm/virtio: add missing drm_atomic_helper_shutdown() call.
  UPSTREAM: drm/virtio: rework resource creation workflow.
  UPSTREAM: drm/virtio: params struct for virtio_gpu_cmd_create_resource_3d()
  UPSTREAM: drm/virtio: params struct for virtio_gpu_cmd_create_resource()
  UPSTREAM: drm/virtio: use struct to pass params to virtio_gpu_object_create()
  UPSTREAM: drm/virtio: move virtio_gpu_object_{attach, detach} calls.
  UPSTREAM: drm/virtio: add virtio-gpu-features debugfs file.
  UPSTREAM: drm/virtio: remove set but not used variable 'vgdev'
  BACKPORT: drm/virtio: implement prime export
  UPSTREAM: drm/virtio: remove prime pin/unpin callbacks.
  UPSTREAM: drm/virtio: implement prime mmap
  BACKPORT: Revert "drm/virtio: drop prime import/export callbacks"
  UPSTREAM: drm/virtio: drop prime import/export callbacks
  UPSTREAM: drm/virtio: do NOT reuse resource ids
  UPSTREAM: drm/virtio: drop virtio_gpu_fence_cleanup()
  UPSTREAM: drm/virtio: fix pageflip flush
  UPSTREAM: drm/virtio: log error responses
  UPSTREAM: drm/virtio: Add missing virtqueue reset
  UPSTREAM: drm/virtio: Remove incorrect kfree()
  UPSTREAM: drm/virtio: switch to generic fbdev emulation
  UPSTREAM: drm/virtio: virtio_gpu_cmd_resource_create_3d: drop unused fence arg
  UPSTREAM: drm/virtio: fence: pass plain pointer
  UPSTREAM: drm/virtio: add edid support
  UPSTREAM: virtio-gpu: add VIRTIO_GPU_F_EDID feature
  UPSTREAM: drm/virtio: fix memory leak of vfpriv on error return path
  UPSTREAM: drm/virtio: bump driver version after explicit synchronization addition
  UPSTREAM: drm/virtio: add in/out fence support for explicit synchronization
  UPSTREAM: drm/virtio: add uapi for in and out explicit fences
  UPSTREAM: drm/virtio: add virtio_gpu_alloc_fence()
  UPSTREAM: drm/virtio: Use IDAs more efficiently
  UPSTREAM: drm/virtio: Handle error from virtio_gpu_resource_id_get
  UPSTREAM: gpu/drm/virtio/virtgpu_vq.c: Use kmem_cache_zalloc
  UPSTREAM: drm/virtio: Handle context ID allocation errors
  UPSTREAM: drm/virtio: Replace IDRs with IDAs
  UPSTREAM: drm/virtio: fix resource id handling
  UPSTREAM: drm/virtio: drop resource_id argument.
  UPSTREAM: drm/virtio: use virtio_gpu_object->hw_res_handle in virtio_gpu_resource_create_ioctl()
  UPSTREAM: drm/virtio: use virtio_gpu_object->hw_res_handle in virtio_gpu_mode_dumb_create()
  UPSTREAM: drm/virtio: use virtio_gpu_object->hw_res_handle in virtio_gpufb_create()
  BACKPORT: drm/virtio: track created object state
  UPSTREAM: drm/virtio: document drm_dev_set_unique workaround
  UPSTREAM: virtio: Support prime objects vmap/vunmap
  BACKPORT: virtio: Rework virtio_gpu_object_kmap()
  UPSTREAM: drm/virtio: pass virtio_gpu_object to virtio_gpu_cmd_transfer_to_host_{2d, 3d}
  UPSTREAM: drm/virtio: add dma sync for dma mapped virtio gpu framebuffer pages
  UPSTREAM: drm/virtio: Remove set but not used variable 'bo'
  UPSTREAM: drm/virtio: add iommu support.
  UPSTREAM: drm/virtio: add virtio_gpu_object_detach() function
  UPSTREAM: drm/virtio: track virtual output state
  UPSTREAM: drm/virtio: fix bounds check in virtio_gpu_cmd_get_capset()
  UPSTREAM: drm/virtio: Replace ttm_bo_unref with ttm_bo_put
  UPSTREAM: drm/virtio: Replace ttm_bo_reference with ttm_bo_get
  UPSTREAM: drm/virtio: Replace drm_dev_unref with drm_dev_put
  UPSTREAM: gpu: drm: virtio: code cleanup
  UPSTREAM: drm: byteorder: add DRM_FORMAT_HOST_*
  UPSTREAM: drm: add drm_connector_attach_edid_property()
  UPSTREAM: drm/prime: Add drm_gem_prime_mmap()
  ANDROID: Remove unused cuttlefish build infra
  f2fs: fix build error on android tracepoints
  ANDROID: sched/fair: Cap transient util in stune
  ANDROID: update ABI for 4.19.66
  Adding GKI Ramdisk to gki config
  ANDROID: Removed unnecessary modules from cuttlefish.
  UPSTREAM: pidfd: fix a poll race when setting exit_state
  BACKPORT: arch: wire-up pidfd_open()
  UPSTREAM: pid: add pidfd_open()
  UPSTREAM: pidfd: add polling support
  UPSTREAM: signal: improve comments
  UPSTREAM: fork: do not release lock that wasn't taken
  UPSTREAM: signal: support CLONE_PIDFD with pidfd_send_signal
  UPSTREAM: clone: add CLONE_PIDFD
  UPSTREAM: Make anon_inodes unconditional
  UPSTREAM: signal: use fdget() since we don't allow O_PATH
  UPSTREAM: signal: don't silently convert SI_USER signals to non-current pidfd
  BACKPORT: signal: add pidfd_send_signal() syscall

Conflicts:
	arch/arm64/configs/cuttlefish_defconfig
	arch/x86/configs/x86_64_cuttlefish_defconfig
	arch/x86/entry/syscalls/syscall_64.tbl
	build.config.cuttlefish.aarch64
	build.config.cuttlefish.x86_64
	drivers/dma-buf/dma-buf.c
	fs/userfaultfd.c
	include/linux/dma-buf.h
	kernel/sched/fair.c

Change-Id: I65d7949be7c228000f94ad9118f2d80a8fa45a1b
Signed-off-by: Ivaylo Georgiev <irgeorgiev@codeaurora.org>
commit 44bb576a7a
@@ -1638,6 +1638,15 @@
 	initrd=		[BOOT] Specify the location of the initial ramdisk

+	init_on_alloc=	[MM] Fill newly allocated pages and heap objects with
+			zeroes.
+			Format: 0 | 1
+			Default set by CONFIG_INIT_ON_ALLOC_DEFAULT_ON.
+
+	init_on_free=	[MM] Fill freed pages and heap objects with zeroes.
+			Format: 0 | 1
+			Default set by CONFIG_INIT_ON_FREE_DEFAULT_ON.
+
 	init_pkru=	[x86] Specify the default memory protection keys rights
 			register contents for all processes. 0x55555554 by
 			default (disallow access to all but pkey 0). Can
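Note: both parameters documented above are plain boot-time switches; a kernel built with this series could be booted with, for example (illustrative command line, the other arguments are placeholders):

	console=ttyS0 init_on_alloc=1 init_on_free=1

Either option can also default to on via the CONFIG_INIT_ON_ALLOC_DEFAULT_ON / CONFIG_INIT_ON_FREE_DEFAULT_ON Kconfig options named in the text.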
abi_gki_aarch64.xml: file diff suppressed because it is too large (106655 lines changed).
@@ -22,7 +22,6 @@ config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	depends on MMU && OF
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select ARM_GIC
 	select ARM_GIC_V3
 	select ARM_GIC_V3_ITS
@@ -414,3 +414,5 @@
 397	common	statx			sys_statx
 398	common	rseq			sys_rseq
 399	common	io_pgetevents		sys_io_pgetevents
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+434	common	pidfd_open		sys_pidfd_open
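For reference, the numbers wired up here (424 and 434 across all of the syscall tables in this merge) can be exercised from userspace before libc grows wrappers. A minimal sketch in C, assuming only what the tables state; kill_via_pidfd is a hypothetical helper name:

	#include <signal.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Numbers taken from the syscall tables in this merge. */
	#ifndef __NR_pidfd_open
	#define __NR_pidfd_open 434
	#endif
	#ifndef __NR_pidfd_send_signal
	#define __NR_pidfd_send_signal 424
	#endif

	static int kill_via_pidfd(pid_t pid)
	{
		/* pidfd_open(pid, flags): flags must currently be 0 */
		int pidfd = syscall(__NR_pidfd_open, pid, 0);

		if (pidfd < 0)
			return -1;
		/* NULL siginfo makes this behave like kill(2) */
		return syscall(__NR_pidfd_send_signal, pidfd, SIGTERM, NULL, 0);
	}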
@@ -1,455 +0,0 @@
-CONFIG_AUDIT=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_PREEMPT=y
-CONFIG_TASKSTATS=y
-CONFIG_TASK_DELAY_ACCT=y
-CONFIG_TASK_XACCT=y
-CONFIG_TASK_IO_ACCOUNTING=y
-CONFIG_PSI=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_MEMCG=y
-CONFIG_MEMCG_SWAP=y
-CONFIG_RT_GROUP_SCHED=y
-CONFIG_CGROUP_FREEZER=y
-CONFIG_CPUSETS=y
-# CONFIG_PROC_PID_CPUSET is not set
-CONFIG_CGROUP_CPUACCT=y
-CONFIG_CGROUP_BPF=y
-CONFIG_SCHED_AUTOGROUP=y
-CONFIG_SCHED_TUNE=y
-CONFIG_BLK_DEV_INITRD=y
-# CONFIG_RD_BZIP2 is not set
-# CONFIG_RD_LZMA is not set
-# CONFIG_RD_XZ is not set
-# CONFIG_RD_LZO is not set
-# CONFIG_RD_LZ4 is not set
-CONFIG_SGETMASK_SYSCALL=y
-# CONFIG_SYSFS_SYSCALL is not set
-# CONFIG_FHANDLE is not set
-CONFIG_KALLSYMS_ALL=y
-CONFIG_BPF_SYSCALL=y
-CONFIG_BPF_JIT_ALWAYS_ON=y
-# CONFIG_RSEQ is not set
-CONFIG_EMBEDDED=y
-# CONFIG_VM_EVENT_COUNTERS is not set
-# CONFIG_COMPAT_BRK is not set
-# CONFIG_SLAB_MERGE_DEFAULT is not set
-CONFIG_PROFILING=y
-CONFIG_PCI=y
-CONFIG_PCI_HOST_GENERIC=y
-CONFIG_HZ_100=y
-CONFIG_SECCOMP=y
-CONFIG_PARAVIRT=y
-CONFIG_ARMV8_DEPRECATED=y
-CONFIG_SWP_EMULATION=y
-CONFIG_CP15_BARRIER_EMULATION=y
-CONFIG_SETEND_EMULATION=y
-CONFIG_ARM64_SW_TTBR0_PAN=y
-CONFIG_RANDOMIZE_BASE=y
-# CONFIG_EFI is not set
-CONFIG_COMPAT=y
-CONFIG_PM_WAKELOCKS=y
-CONFIG_PM_WAKELOCKS_LIMIT=0
-# CONFIG_PM_WAKELOCKS_GC is not set
-CONFIG_PM_DEBUG=y
-CONFIG_ENERGY_MODEL=y
-CONFIG_CPU_IDLE=y
-CONFIG_ARM_CPUIDLE=y
-CONFIG_CPU_FREQ=y
-CONFIG_CPU_FREQ_TIMES=y
-CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y
-CONFIG_CPU_FREQ_GOV_POWERSAVE=y
-CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
-CONFIG_CPUFREQ_DT=y
-CONFIG_ARM_BIG_LITTLE_CPUFREQ=y
-CONFIG_ARM_DT_BL_CPUFREQ=y
-CONFIG_ARM_SCPI_CPUFREQ=y
-CONFIG_ARM_SCMI_CPUFREQ=y
-CONFIG_ARM_SCMI_PROTOCOL=y
-# CONFIG_ARM_SCMI_POWER_DOMAIN is not set
-CONFIG_ARM_SCPI_PROTOCOL=y
-# CONFIG_ARM_SCPI_POWER_DOMAIN is not set
-CONFIG_KPROBES=y
-CONFIG_LTO_CLANG=y
-CONFIG_CFI_CLANG=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-# CONFIG_SPARSEMEM_VMEMMAP is not set
-CONFIG_KSM=y
-CONFIG_ZSMALLOC=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_XFRM_INTERFACE=y
-CONFIG_XFRM_STATISTICS=y
-CONFIG_NET_KEY=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_NET_IPGRE_DEMUX=y
-CONFIG_NET_IPVTI=y
-CONFIG_INET_ESP=y
-# CONFIG_INET_XFRM_MODE_BEET is not set
-CONFIG_INET_UDP_DIAG=y
-CONFIG_INET_DIAG_DESTROY=y
-CONFIG_TCP_CONG_ADVANCED=y
-# CONFIG_TCP_CONG_BIC is not set
-# CONFIG_TCP_CONG_WESTWOOD is not set
-# CONFIG_TCP_CONG_HTCP is not set
-CONFIG_IPV6_ROUTER_PREF=y
-CONFIG_IPV6_ROUTE_INFO=y
-CONFIG_IPV6_OPTIMISTIC_DAD=y
-CONFIG_INET6_ESP=y
-CONFIG_INET6_IPCOMP=y
-CONFIG_IPV6_MIP6=y
-CONFIG_IPV6_VTI=y
-CONFIG_IPV6_MULTIPLE_TABLES=y
-CONFIG_NETFILTER=y
-CONFIG_NF_CONNTRACK=y
-CONFIG_NF_CONNTRACK_SECMARK=y
-CONFIG_NF_CONNTRACK_EVENTS=y
-CONFIG_NF_CONNTRACK_AMANDA=y
-CONFIG_NF_CONNTRACK_FTP=y
-CONFIG_NF_CONNTRACK_H323=y
-CONFIG_NF_CONNTRACK_IRC=y
-CONFIG_NF_CONNTRACK_NETBIOS_NS=y
-CONFIG_NF_CONNTRACK_PPTP=y
-CONFIG_NF_CONNTRACK_SANE=y
-CONFIG_NF_CONNTRACK_TFTP=y
-CONFIG_NF_CT_NETLINK=y
-CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
-CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
-CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
-CONFIG_NETFILTER_XT_TARGET_CT=y
-CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
-CONFIG_NETFILTER_XT_TARGET_MARK=y
-CONFIG_NETFILTER_XT_TARGET_NFLOG=y
-CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
-CONFIG_NETFILTER_XT_TARGET_TPROXY=y
-CONFIG_NETFILTER_XT_TARGET_TRACE=y
-CONFIG_NETFILTER_XT_TARGET_SECMARK=y
-CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
-CONFIG_NETFILTER_XT_MATCH_BPF=y
-CONFIG_NETFILTER_XT_MATCH_COMMENT=y
-CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
-CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
-CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
-CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
-CONFIG_NETFILTER_XT_MATCH_HELPER=y
-CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
-CONFIG_NETFILTER_XT_MATCH_LENGTH=y
-CONFIG_NETFILTER_XT_MATCH_LIMIT=y
-CONFIG_NETFILTER_XT_MATCH_MAC=y
-CONFIG_NETFILTER_XT_MATCH_MARK=y
-CONFIG_NETFILTER_XT_MATCH_OWNER=y
-CONFIG_NETFILTER_XT_MATCH_POLICY=y
-CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
-CONFIG_NETFILTER_XT_MATCH_QUOTA=y
-CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
-CONFIG_NETFILTER_XT_MATCH_SOCKET=y
-CONFIG_NETFILTER_XT_MATCH_STATE=y
-CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
-CONFIG_NETFILTER_XT_MATCH_STRING=y
-CONFIG_NETFILTER_XT_MATCH_TIME=y
-CONFIG_NETFILTER_XT_MATCH_U32=y
-CONFIG_IP_NF_IPTABLES=y
-CONFIG_IP_NF_MATCH_ECN=y
-CONFIG_IP_NF_MATCH_TTL=y
-CONFIG_IP_NF_FILTER=y
-CONFIG_IP_NF_TARGET_REJECT=y
-CONFIG_IP_NF_NAT=y
-CONFIG_IP_NF_TARGET_MASQUERADE=y
-CONFIG_IP_NF_TARGET_NETMAP=y
-CONFIG_IP_NF_TARGET_REDIRECT=y
-CONFIG_IP_NF_MANGLE=y
-CONFIG_IP_NF_RAW=y
-CONFIG_IP_NF_SECURITY=y
-CONFIG_IP_NF_ARPTABLES=y
-CONFIG_IP_NF_ARPFILTER=y
-CONFIG_IP_NF_ARP_MANGLE=y
-CONFIG_IP6_NF_IPTABLES=y
-CONFIG_IP6_NF_MATCH_RPFILTER=y
-CONFIG_IP6_NF_FILTER=y
-CONFIG_IP6_NF_TARGET_REJECT=y
-CONFIG_IP6_NF_MANGLE=y
-CONFIG_IP6_NF_RAW=y
-CONFIG_L2TP=y
-CONFIG_NET_SCHED=y
-CONFIG_NET_SCH_HTB=y
-CONFIG_NET_SCH_NETEM=y
-CONFIG_NET_SCH_INGRESS=y
-CONFIG_NET_CLS_U32=y
-CONFIG_NET_CLS_BPF=y
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_U32=y
-CONFIG_NET_CLS_ACT=y
-CONFIG_VSOCKETS=y
-CONFIG_VIRTIO_VSOCKETS=y
-CONFIG_BPF_JIT=y
-CONFIG_CFG80211=y
-# CONFIG_CFG80211_DEFAULT_PS is not set
-# CONFIG_CFG80211_CRDA_SUPPORT is not set
-CONFIG_MAC80211=y
-# CONFIG_MAC80211_RC_MINSTREL is not set
-CONFIG_RFKILL=y
-# CONFIG_UEVENT_HELPER is not set
-# CONFIG_ALLOW_DEV_COREDUMP is not set
-CONFIG_DEBUG_DEVRES=y
-CONFIG_OF_UNITTEST=y
-CONFIG_ZRAM=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_VIRTIO_BLK=y
-CONFIG_UID_SYS_STATS=y
-CONFIG_SCSI=y
-# CONFIG_SCSI_MQ_DEFAULT is not set
-# CONFIG_SCSI_PROC_FS is not set
-CONFIG_BLK_DEV_SD=y
-CONFIG_SCSI_VIRTIO=y
-CONFIG_MD=y
-CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
-CONFIG_DM_UEVENT=y
-CONFIG_DM_VERITY=y
-CONFIG_DM_VERITY_AVB=y
-CONFIG_DM_VERITY_FEC=y
-CONFIG_DM_BOW=y
-CONFIG_NETDEVICES=y
-CONFIG_NETCONSOLE=y
-CONFIG_NETCONSOLE_DYNAMIC=y
-CONFIG_TUN=y
-CONFIG_VIRTIO_NET=y
-# CONFIG_ETHERNET is not set
-CONFIG_PHYLIB=y
-CONFIG_PPP=y
-CONFIG_PPP_BSDCOMP=y
-CONFIG_PPP_DEFLATE=y
-CONFIG_PPP_MPPE=y
-CONFIG_PPTP=y
-CONFIG_PPPOL2TP=y
-CONFIG_USB_RTL8152=y
-CONFIG_USB_USBNET=y
-# CONFIG_USB_NET_AX8817X is not set
-# CONFIG_USB_NET_AX88179_178A is not set
-# CONFIG_USB_NET_CDCETHER is not set
-# CONFIG_USB_NET_CDC_NCM is not set
-# CONFIG_USB_NET_NET1080 is not set
-# CONFIG_USB_NET_CDC_SUBSET is not set
-# CONFIG_USB_NET_ZAURUS is not set
-# CONFIG_WLAN_VENDOR_ADMTEK is not set
-# CONFIG_WLAN_VENDOR_ATH is not set
-# CONFIG_WLAN_VENDOR_ATMEL is not set
-# CONFIG_WLAN_VENDOR_BROADCOM is not set
-# CONFIG_WLAN_VENDOR_CISCO is not set
-# CONFIG_WLAN_VENDOR_INTEL is not set
-# CONFIG_WLAN_VENDOR_INTERSIL is not set
-# CONFIG_WLAN_VENDOR_MARVELL is not set
-# CONFIG_WLAN_VENDOR_MEDIATEK is not set
-# CONFIG_WLAN_VENDOR_RALINK is not set
-# CONFIG_WLAN_VENDOR_REALTEK is not set
-# CONFIG_WLAN_VENDOR_RSI is not set
-# CONFIG_WLAN_VENDOR_ST is not set
-# CONFIG_WLAN_VENDOR_TI is not set
-# CONFIG_WLAN_VENDOR_ZYDAS is not set
-# CONFIG_WLAN_VENDOR_QUANTENNA is not set
-CONFIG_VIRT_WIFI=y
-CONFIG_INPUT_EVDEV=y
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_INPUT_JOYSTICK=y
-CONFIG_JOYSTICK_XPAD=y
-CONFIG_JOYSTICK_XPAD_FF=y
-CONFIG_JOYSTICK_XPAD_LEDS=y
-CONFIG_INPUT_TABLET=y
-CONFIG_TABLET_USB_ACECAD=y
-CONFIG_TABLET_USB_AIPTEK=y
-CONFIG_TABLET_USB_GTCO=y
-CONFIG_TABLET_USB_HANWANG=y
-CONFIG_TABLET_USB_KBTAB=y
-CONFIG_INPUT_MISC=y
-CONFIG_INPUT_UINPUT=y
-# CONFIG_VT is not set
-# CONFIG_LEGACY_PTYS is not set
-# CONFIG_DEVMEM is not set
-CONFIG_SERIAL_8250=y
-# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
-CONFIG_SERIAL_8250_CONSOLE=y
-# CONFIG_SERIAL_8250_EXAR is not set
-CONFIG_SERIAL_8250_NR_UARTS=48
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-CONFIG_SERIAL_OF_PLATFORM=y
-CONFIG_SERIAL_AMBA_PL011=y
-CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
-CONFIG_VIRTIO_CONSOLE=y
-CONFIG_HW_RANDOM=y
-CONFIG_HW_RANDOM_VIRTIO=y
-# CONFIG_HW_RANDOM_CAVIUM is not set
-# CONFIG_DEVPORT is not set
-# CONFIG_I2C_COMPAT is not set
-# CONFIG_I2C_HELPER_AUTO is not set
-# CONFIG_HWMON is not set
-CONFIG_THERMAL=y
-CONFIG_CPU_THERMAL=y
-CONFIG_MEDIA_SUPPORT=y
-# CONFIG_VGA_ARB is not set
-CONFIG_DRM=y
-# CONFIG_DRM_FBDEV_EMULATION is not set
-CONFIG_DRM_VIRTIO_GPU=y
-CONFIG_SOUND=y
-CONFIG_SND=y
-CONFIG_SND_HRTIMER=y
-# CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_VERBOSE_PROCFS is not set
-# CONFIG_SND_DRIVERS is not set
-CONFIG_SND_INTEL8X0=y
-# CONFIG_SND_USB is not set
-CONFIG_HIDRAW=y
-CONFIG_UHID=y
-CONFIG_HID_A4TECH=y
-CONFIG_HID_ACRUX=y
-CONFIG_HID_ACRUX_FF=y
-CONFIG_HID_APPLE=y
-CONFIG_HID_BELKIN=y
-CONFIG_HID_CHERRY=y
-CONFIG_HID_CHICONY=y
-CONFIG_HID_PRODIKEYS=y
-CONFIG_HID_CYPRESS=y
-CONFIG_HID_DRAGONRISE=y
-CONFIG_DRAGONRISE_FF=y
-CONFIG_HID_EMS_FF=y
-CONFIG_HID_ELECOM=y
-CONFIG_HID_EZKEY=y
-CONFIG_HID_HOLTEK=y
-CONFIG_HID_KEYTOUCH=y
-CONFIG_HID_KYE=y
-CONFIG_HID_UCLOGIC=y
-CONFIG_HID_WALTOP=y
-CONFIG_HID_GYRATION=y
-CONFIG_HID_TWINHAN=y
-CONFIG_HID_KENSINGTON=y
-CONFIG_HID_LCPOWER=y
-CONFIG_HID_LOGITECH=y
-CONFIG_HID_LOGITECH_DJ=y
-CONFIG_LOGITECH_FF=y
-CONFIG_LOGIRUMBLEPAD2_FF=y
-CONFIG_LOGIG940_FF=y
-CONFIG_HID_MAGICMOUSE=y
-CONFIG_HID_MICROSOFT=y
-CONFIG_HID_MONTEREY=y
-CONFIG_HID_MULTITOUCH=y
-CONFIG_HID_NTRIG=y
-CONFIG_HID_ORTEK=y
-CONFIG_HID_PANTHERLORD=y
-CONFIG_PANTHERLORD_FF=y
-CONFIG_HID_PETALYNX=y
-CONFIG_HID_PICOLCD=y
-CONFIG_HID_PRIMAX=y
-CONFIG_HID_ROCCAT=y
-CONFIG_HID_SAITEK=y
-CONFIG_HID_SAMSUNG=y
-CONFIG_HID_SONY=y
-CONFIG_HID_SPEEDLINK=y
-CONFIG_HID_SUNPLUS=y
-CONFIG_HID_GREENASIA=y
-CONFIG_GREENASIA_FF=y
-CONFIG_HID_SMARTJOYPLUS=y
-CONFIG_SMARTJOYPLUS_FF=y
-CONFIG_HID_TIVO=y
-CONFIG_HID_TOPSEED=y
-CONFIG_HID_THRUSTMASTER=y
-CONFIG_HID_WACOM=y
-CONFIG_HID_WIIMOTE=y
-CONFIG_HID_ZEROPLUS=y
-CONFIG_HID_ZYDACRON=y
-CONFIG_USB_HIDDEV=y
-CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_GADGET=y
-CONFIG_USB_CONFIGFS=y
-CONFIG_USB_CONFIGFS_UEVENT=y
-CONFIG_USB_CONFIGFS_F_FS=y
-CONFIG_USB_CONFIGFS_F_ACC=y
-CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
-CONFIG_USB_CONFIGFS_F_MIDI=y
-CONFIG_MMC=y
-# CONFIG_PWRSEQ_EMMC is not set
-# CONFIG_PWRSEQ_SIMPLE is not set
-# CONFIG_MMC_BLOCK is not set
-CONFIG_RTC_CLASS=y
-# CONFIG_RTC_SYSTOHC is not set
-CONFIG_RTC_DRV_PL030=y
-CONFIG_RTC_DRV_PL031=y
-CONFIG_VIRTIO_PCI=y
-# CONFIG_VIRTIO_PCI_LEGACY is not set
-CONFIG_VIRTIO_BALLOON=y
-CONFIG_VIRTIO_INPUT=y
-CONFIG_VIRTIO_MMIO=y
-CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
-CONFIG_STAGING=y
-CONFIG_ASHMEM=y
-CONFIG_ANDROID_VSOC=y
-CONFIG_ION=y
-CONFIG_ION_SYSTEM_HEAP=y
-CONFIG_COMMON_CLK_SCPI=y
-# CONFIG_COMMON_CLK_XGENE is not set
-CONFIG_MAILBOX=y
-# CONFIG_IOMMU_SUPPORT is not set
-CONFIG_ANDROID=y
-CONFIG_ANDROID_BINDER_IPC=y
-CONFIG_LEGACY_ENERGY_MODEL_DT=y
-CONFIG_EXT4_FS=y
-CONFIG_EXT4_FS_SECURITY=y
-CONFIG_EXT4_ENCRYPTION=y
-CONFIG_F2FS_FS=y
-CONFIG_F2FS_FS_SECURITY=y
-CONFIG_F2FS_FS_ENCRYPTION=y
-# CONFIG_DNOTIFY is not set
-CONFIG_QUOTA=y
-CONFIG_QFMT_V2=y
-CONFIG_FUSE_FS=y
-CONFIG_OVERLAY_FS=y
-CONFIG_MSDOS_FS=y
-CONFIG_VFAT_FS=y
-CONFIG_TMPFS=y
-CONFIG_TMPFS_POSIX_ACL=y
-CONFIG_SDCARD_FS=y
-CONFIG_PSTORE=y
-CONFIG_PSTORE_CONSOLE=y
-CONFIG_PSTORE_RAM=y
-CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
-CONFIG_SECURITY=y
-CONFIG_SECURITY_NETWORK=y
-CONFIG_LSM_MMAP_MIN_ADDR=65536
-CONFIG_HARDENED_USERCOPY=y
-CONFIG_SECURITY_SELINUX=y
-CONFIG_CRYPTO_ADIANTUM=y
-CONFIG_CRYPTO_SHA512=y
-CONFIG_CRYPTO_LZ4=y
-CONFIG_CRYPTO_ZSTD=y
-CONFIG_CRYPTO_ANSI_CPRNG=y
-CONFIG_CRYPTO_DEV_VIRTIO=y
-CONFIG_XZ_DEC=y
-CONFIG_PRINTK_TIME=y
-CONFIG_DEBUG_INFO=y
-# CONFIG_ENABLE_MUST_CHECK is not set
-CONFIG_FRAME_WARN=1024
-# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_STACK_USAGE=y
-CONFIG_DEBUG_MEMORY_INIT=y
-CONFIG_SOFTLOCKUP_DETECTOR=y
-# CONFIG_DETECT_HUNG_TASK is not set
-CONFIG_PANIC_TIMEOUT=5
-CONFIG_SCHEDSTATS=y
-CONFIG_RCU_CPU_STALL_TIMEOUT=60
-CONFIG_ENABLE_DEFAULT_TRACERS=y
-# CONFIG_RUNTIME_TESTING_MENU is not set
@@ -17,6 +17,7 @@ CONFIG_CPUSETS=y
 CONFIG_CGROUP_CPUACCT=y
 CONFIG_CGROUP_BPF=y
 CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
 CONFIG_BLK_DEV_INITRD=y
 # CONFIG_RD_BZIP2 is not set
 # CONFIG_RD_LZMA is not set
@@ -35,10 +36,11 @@ CONFIG_EMBEDDED=y
 CONFIG_SLAB_FREELIST_RANDOM=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_PROFILING=y
+CONFIG_ARCH_QCOM=y
 CONFIG_PCI=y
 CONFIG_PCI_HOST_GENERIC=y
 CONFIG_SCHED_MC=y
-CONFIG_NR_CPUS=256
+CONFIG_NR_CPUS=32
 CONFIG_SECCOMP=y
 CONFIG_PARAVIRT=y
 CONFIG_ARMV8_DEPRECATED=y
@@ -46,7 +48,7 @@ CONFIG_SWP_EMULATION=y
 CONFIG_CP15_BARRIER_EMULATION=y
 CONFIG_SETEND_EMULATION=y
 CONFIG_RANDOMIZE_BASE=y
-# CONFIG_EFI is not set
+# CONFIG_DMI is not set
 CONFIG_COMPAT=y
 CONFIG_PM_WAKELOCKS=y
 CONFIG_PM_WAKELOCKS_LIMIT=0
@@ -65,6 +67,7 @@ CONFIG_ARM_SCMI_PROTOCOL=y
 # CONFIG_ARM_SCMI_POWER_DOMAIN is not set
 CONFIG_ARM_SCPI_PROTOCOL=y
 # CONFIG_ARM_SCPI_POWER_DOMAIN is not set
+# CONFIG_EFI_ARMSTUB_DTB_LOADER is not set
 CONFIG_ARM64_CRYPTO=y
 CONFIG_CRYPTO_AES_ARM64=y
 CONFIG_KPROBES=y
@@ -72,7 +75,6 @@ CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-# CONFIG_SPARSEMEM_VMEMMAP is not set
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA=y
 CONFIG_CMA_AREAS=16
@@ -171,6 +173,7 @@ CONFIG_IP6_NF_FILTER=y
 CONFIG_IP6_NF_TARGET_REJECT=y
 CONFIG_IP6_NF_MANGLE=y
 CONFIG_IP6_NF_RAW=y
+CONFIG_TIPC=y
 CONFIG_L2TP=y
 CONFIG_BRIDGE=y
 CONFIG_NET_SCHED=y
@@ -279,7 +282,6 @@ CONFIG_HW_RANDOM_VIRTIO=y
 # CONFIG_I2C_HELPER_AUTO is not set
 CONFIG_SPI=y
 CONFIG_SPMI=y
 CONFIG_PINCTRL=y
-CONFIG_PINCTRL_AMD=y
 CONFIG_POWER_AVS=y
 # CONFIG_HWMON is not set
@@ -290,7 +292,6 @@ CONFIG_DEVFREQ_THERMAL=y
 CONFIG_WATCHDOG=y
-CONFIG_MFD_ACT8945A=y
 CONFIG_MFD_SYSCON=y
 CONFIG_REGULATOR=y
 CONFIG_MEDIA_SUPPORT=y
 CONFIG_MEDIA_CAMERA_SUPPORT=y
 CONFIG_MEDIA_CONTROLLER=y
@@ -354,12 +355,15 @@ CONFIG_COMMON_CLK_SCPI=y
 CONFIG_HWSPINLOCK=y
 CONFIG_MAILBOX=y
 CONFIG_ARM_SMMU=y
+CONFIG_QCOM_COMMAND_DB=y
+CONFIG_QCOM_RPMH=y
 CONFIG_DEVFREQ_GOV_PERFORMANCE=y
 CONFIG_DEVFREQ_GOV_POWERSAVE=y
 CONFIG_DEVFREQ_GOV_USERSPACE=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
 CONFIG_EXTCON=y
 CONFIG_PWM=y
+CONFIG_QCOM_PDC=y
 CONFIG_GENERIC_PHY=y
 CONFIG_RAS=y
 CONFIG_ANDROID=y
@@ -376,8 +380,8 @@ CONFIG_FUSE_FS=y
 CONFIG_OVERLAY_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
+# CONFIG_EFIVAR_FS is not set
 CONFIG_SDCARD_FS=y
 CONFIG_PSTORE=y
 CONFIG_PSTORE_CONSOLE=y
@@ -44,7 +44,7 @@
 #define __ARM_NR_compat_set_tls		(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)

-#define __NR_compat_syscalls		399
+#define __NR_compat_syscalls		435
 #endif

 #define __ARCH_WANT_SYS_CLONE
@@ -819,6 +819,10 @@ __SYSCALL(__NR_pkey_free, sys_pkey_free)
 __SYSCALL(__NR_statx, sys_statx)
 #define __NR_rseq 398
 __SYSCALL(__NR_rseq, sys_rseq)
+#define __NR_pidfd_send_signal 424
+__SYSCALL(__NR_pidfd_send_signal, sys_pidfd_send_signal)
+#define __NR_pidfd_open 434
+__SYSCALL(__NR_pidfd_open, sys_pidfd_open)

 /*
  * Please add new compat syscalls above this comment and update
@@ -301,6 +301,11 @@ void __init setup_arch(char **cmdline_p)

 	setup_machine_fdt(__fdt_pointer);

+	/*
+	 * Initialise the static keys early as they may be enabled by the
+	 * cpufeature code and early parameters.
+	 */
+	jump_label_init();
 	parse_early_param();

 	/*
@@ -420,11 +420,6 @@ void __init smp_cpus_done(unsigned int max_cpus)
 void __init smp_prepare_boot_cpu(void)
 {
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
-	/*
-	 * Initialise the static keys early as they may be enabled by the
-	 * cpufeature code.
-	 */
-	jump_label_init();
 	cpuinfo_store_boot_cpu();
 }

@@ -23,7 +23,6 @@ config KVM
 	depends on OF
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
@@ -20,7 +20,6 @@ config KVM
 	depends on HAVE_KVM
 	select EXPORT_UASM
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select HAVE_KVM_VCPU_ASYNC_IOCTL
 	select KVM_MMIO
@@ -20,7 +20,6 @@ if VIRTUALIZATION
 config KVM
 	bool
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_VCPU_ASYNC_IOCTL
 	select SRCU
@@ -391,3 +391,5 @@
 381	common	kexec_file_load		sys_kexec_file_load		compat_sys_kexec_file_load
 382	common	io_pgetevents		sys_io_pgetevents		compat_sys_io_pgetevents
 383	common	rseq			sys_rseq			compat_sys_rseq
+424	common	pidfd_send_signal	sys_pidfd_send_signal		sys_pidfd_send_signal
+434	common	pidfd_open		sys_pidfd_open			sys_pidfd_open
@@ -21,7 +21,6 @@ config KVM
 	prompt "Kernel-based Virtual Machine (KVM) support"
 	depends on HAVE_KVM
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_VCPU_ASYNC_IOCTL
 	select HAVE_KVM_EVENTFD
@@ -46,7 +46,6 @@ config X86
 	#
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
-	select ANON_INODES
 	select ARCH_CLOCKSOURCE_DATA
 	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
@@ -15,6 +15,7 @@ CONFIG_CGROUP_FREEZER=y
 CONFIG_CGROUP_CPUACCT=y
 CONFIG_CGROUP_BPF=y
 CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
 CONFIG_BLK_DEV_INITRD=y
 # CONFIG_RD_BZIP2 is not set
 # CONFIG_RD_LZMA is not set
@@ -31,10 +32,12 @@ CONFIG_EMBEDDED=y
 # CONFIG_COMPAT_BRK is not set
 # CONFIG_SLAB_MERGE_DEFAULT is not set
 CONFIG_PROFILING=y
 CONFIG_SMP=y
+CONFIG_NR_CPUS=32
+CONFIG_EFI=y
 CONFIG_PM_WAKELOCKS=y
 CONFIG_PM_WAKELOCKS_LIMIT=0
 # CONFIG_PM_WAKELOCKS_GC is not set
 CONFIG_CPU_FREQ=y
 CONFIG_CPU_FREQ_TIMES=y
 CONFIG_CPU_FREQ_GOV_POWERSAVE=y
 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
@@ -44,7 +47,6 @@ CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-# CONFIG_SPARSEMEM_VMEMMAP is not set
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_ZSMALLOC=y
 CONFIG_NET=y
@@ -141,6 +143,7 @@ CONFIG_IP6_NF_FILTER=y
 CONFIG_IP6_NF_TARGET_REJECT=y
 CONFIG_IP6_NF_MANGLE=y
 CONFIG_IP6_NF_RAW=y
+CONFIG_TIPC=y
 CONFIG_L2TP=y
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_HTB=y
@@ -150,8 +153,12 @@ CONFIG_NET_CLS_BPF=y
 CONFIG_NET_EMATCH=y
 CONFIG_NET_EMATCH_U32=y
 CONFIG_NET_CLS_ACT=y
-CONFIG_VSOCKETS=y
-CONFIG_VIRTIO_VSOCKETS=y
+CONFIG_VSOCKETS=m
+CONFIG_VIRTIO_VSOCKETS=m
+CONFIG_CAN=m
+# CONFIG_CAN_BCM is not set
+# CONFIG_CAN_GW is not set
+CONFIG_CAN_VCAN=m
 CONFIG_CFG80211=y
 # CONFIG_CFG80211_DEFAULT_PS is not set
 # CONFIG_CFG80211_CRDA_SUPPORT is not set
@@ -165,13 +172,12 @@ CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_VIRTIO_BLK=y
+CONFIG_VIRTIO_BLK=m
 CONFIG_UID_SYS_STATS=y
 CONFIG_SCSI=y
 # CONFIG_SCSI_MQ_DEFAULT is not set
 # CONFIG_SCSI_PROC_FS is not set
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_VIRTIO=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -182,7 +188,7 @@ CONFIG_DM_VERITY_FEC=y
 CONFIG_DM_BOW=y
 CONFIG_NETDEVICES=y
 CONFIG_TUN=y
-CONFIG_VIRTIO_NET=y
+CONFIG_VIRTIO_NET=m
 # CONFIG_ETHERNET is not set
 CONFIG_PHYLIB=y
 CONFIG_PPP=y
@@ -216,7 +222,7 @@ CONFIG_USB_USBNET=y
 # CONFIG_WLAN_VENDOR_TI is not set
 # CONFIG_WLAN_VENDOR_ZYDAS is not set
 # CONFIG_WLAN_VENDOR_QUANTENNA is not set
-CONFIG_VIRT_WIFI=y
+CONFIG_VIRT_WIFI=m
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
@@ -234,22 +240,24 @@ CONFIG_SERIAL_8250_NR_UARTS=48
 CONFIG_SERIAL_8250_EXTENDED=y
 CONFIG_SERIAL_8250_MANY_PORTS=y
 CONFIG_SERIAL_8250_SHARE_IRQ=y
 CONFIG_VIRTIO_CONSOLE=y
 CONFIG_HW_RANDOM=y
-CONFIG_HW_RANDOM_VIRTIO=y
+CONFIG_HW_RANDOM_VIRTIO=m
 # CONFIG_DEVPORT is not set
 # CONFIG_I2C_COMPAT is not set
 # CONFIG_I2C_HELPER_AUTO is not set
+CONFIG_SPI=y
 CONFIG_GPIOLIB=y
 # CONFIG_HWMON is not set
 CONFIG_DEVFREQ_THERMAL=y
 # CONFIG_X86_PKG_TEMP_THERMAL is not set
 CONFIG_MEDIA_SUPPORT=y
 CONFIG_MEDIA_CAMERA_SUPPORT=y
 # CONFIG_VGA_ARB is not set
 CONFIG_DRM=y
 # CONFIG_DRM_FBDEV_EMULATION is not set
-CONFIG_DRM_VIRTIO_GPU=y
+CONFIG_DRM_VIRTIO_GPU=m
 CONFIG_BACKLIGHT_LCD_SUPPORT=y
 # CONFIG_LCD_CLASS_DEVICE is not set
 CONFIG_BACKLIGHT_CLASS_DEVICE=y
 CONFIG_SOUND=y
 CONFIG_SND=y
@@ -258,7 +266,7 @@ CONFIG_SND_DYNAMIC_MINORS=y
 # CONFIG_SND_SUPPORT_OLD_API is not set
 # CONFIG_SND_VERBOSE_PROCFS is not set
 # CONFIG_SND_DRIVERS is not set
-CONFIG_SND_INTEL8X0=y
+CONFIG_SND_INTEL8X0=m
 # CONFIG_SND_USB is not set
 CONFIG_HIDRAW=y
 CONFIG_UHID=y
@@ -284,16 +292,16 @@ CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_RTC_CLASS=y
 # CONFIG_RTC_SYSTOHC is not set
-CONFIG_VIRTIO_PCI=y
+CONFIG_RTC_DRV_TEST=m
+CONFIG_VIRTIO_PCI=m
 # CONFIG_VIRTIO_PCI_LEGACY is not set
-CONFIG_VIRTIO_BALLOON=y
-CONFIG_VIRTIO_INPUT=y
-CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_BALLOON=m
+CONFIG_VIRTIO_INPUT=m
+CONFIG_VIRTIO_MMIO=m
 CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
 CONFIG_STAGING=y
 CONFIG_ASHMEM=y
 CONFIG_ION=y
 CONFIG_MAILBOX=y
 CONFIG_PM_DEVFREQ=y
 CONFIG_ANDROID=y
 CONFIG_ANDROID_BINDER_IPC=y
@@ -307,10 +315,12 @@ CONFIG_F2FS_FS_ENCRYPTION=y
 CONFIG_QUOTA=y
 CONFIG_QFMT_V2=y
 CONFIG_FUSE_FS=y
 CONFIG_OVERLAY_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
+# CONFIG_EFIVAR_FS is not set
 CONFIG_SDCARD_FS=y
 CONFIG_PSTORE=y
 CONFIG_PSTORE_CONSOLE=y
@@ -325,7 +335,6 @@ CONFIG_CRYPTO_SHA512=y
 CONFIG_CRYPTO_LZ4=y
 CONFIG_CRYPTO_ZSTD=y
 CONFIG_CRYPTO_ANSI_CPRNG=y
 CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_CRC8=y
 CONFIG_XZ_DEC=y
 CONFIG_PRINTK_TIME=y
@@ -1,485 +0,0 @@
-CONFIG_POSIX_MQUEUE=y
-# CONFIG_USELIB is not set
-CONFIG_AUDIT=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_PREEMPT=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_TASKSTATS=y
-CONFIG_TASK_DELAY_ACCT=y
-CONFIG_TASK_XACCT=y
-CONFIG_TASK_IO_ACCOUNTING=y
-CONFIG_PSI=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_CGROUPS=y
-CONFIG_MEMCG=y
-CONFIG_MEMCG_SWAP=y
-CONFIG_CGROUP_SCHED=y
-CONFIG_RT_GROUP_SCHED=y
-CONFIG_CGROUP_FREEZER=y
-CONFIG_CPUSETS=y
-# CONFIG_PROC_PID_CPUSET is not set
-CONFIG_CGROUP_CPUACCT=y
-CONFIG_CGROUP_BPF=y
-CONFIG_NAMESPACES=y
-CONFIG_SCHED_TUNE=y
-CONFIG_BLK_DEV_INITRD=y
-# CONFIG_RD_LZ4 is not set
-# CONFIG_FHANDLE is not set
-# CONFIG_PCSPKR_PLATFORM is not set
-CONFIG_KALLSYMS_ALL=y
-CONFIG_BPF_SYSCALL=y
-CONFIG_BPF_JIT_ALWAYS_ON=y
-CONFIG_EMBEDDED=y
-# CONFIG_COMPAT_BRK is not set
-CONFIG_PROFILING=y
-CONFIG_SMP=y
-CONFIG_HYPERVISOR_GUEST=y
-CONFIG_PARAVIRT=y
-CONFIG_PARAVIRT_SPINLOCKS=y
-CONFIG_MCORE2=y
-CONFIG_PROCESSOR_SELECT=y
-# CONFIG_CPU_SUP_CENTAUR is not set
-CONFIG_NR_CPUS=8
-# CONFIG_MICROCODE is not set
-CONFIG_X86_MSR=y
-CONFIG_X86_CPUID=y
-# CONFIG_MTRR is not set
-CONFIG_HZ_100=y
-CONFIG_KEXEC=y
-CONFIG_CRASH_DUMP=y
-CONFIG_PHYSICAL_START=0x200000
-CONFIG_PHYSICAL_ALIGN=0x1000000
-CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="console=ttyS0 reboot=p"
-CONFIG_PM_WAKELOCKS=y
-CONFIG_PM_WAKELOCKS_LIMIT=0
-# CONFIG_PM_WAKELOCKS_GC is not set
-CONFIG_PM_DEBUG=y
-CONFIG_ACPI_PROCFS_POWER=y
-# CONFIG_ACPI_FAN is not set
-# CONFIG_ACPI_THERMAL is not set
-# CONFIG_X86_PM_TIMER is not set
-CONFIG_CPU_FREQ_TIMES=y
-CONFIG_CPU_FREQ_GOV_ONDEMAND=y
-CONFIG_X86_ACPI_CPUFREQ=y
-CONFIG_PCI_MSI=y
-CONFIG_IA32_EMULATION=y
-# CONFIG_FIRMWARE_MEMMAP is not set
-CONFIG_OPROFILE=y
-CONFIG_KPROBES=y
-CONFIG_LTO_CLANG=y
-CONFIG_CFI_CLANG=y
-CONFIG_REFCOUNT_FULL=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-CONFIG_PARTITION_ADVANCED=y
-# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-CONFIG_BINFMT_MISC=y
-CONFIG_KSM=y
-CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
-CONFIG_ZSMALLOC=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_XFRM_INTERFACE=y
-CONFIG_XFRM_STATISTICS=y
-CONFIG_NET_KEY=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_NET_IPGRE_DEMUX=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_SYN_COOKIES=y
-CONFIG_NET_IPVTI=y
-CONFIG_INET_ESP=y
-# CONFIG_INET_XFRM_MODE_BEET is not set
-CONFIG_INET_UDP_DIAG=y
-CONFIG_INET_DIAG_DESTROY=y
-CONFIG_TCP_CONG_ADVANCED=y
-# CONFIG_TCP_CONG_BIC is not set
-# CONFIG_TCP_CONG_WESTWOOD is not set
-# CONFIG_TCP_CONG_HTCP is not set
-CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_ROUTER_PREF=y
-CONFIG_IPV6_ROUTE_INFO=y
-CONFIG_IPV6_OPTIMISTIC_DAD=y
-CONFIG_INET6_AH=y
-CONFIG_INET6_ESP=y
-CONFIG_INET6_IPCOMP=y
-CONFIG_IPV6_MIP6=y
-CONFIG_IPV6_VTI=y
-CONFIG_IPV6_MULTIPLE_TABLES=y
-CONFIG_NETLABEL=y
-CONFIG_NETFILTER=y
-CONFIG_NF_CONNTRACK=y
-CONFIG_NF_CONNTRACK_SECMARK=y
-CONFIG_NF_CONNTRACK_EVENTS=y
-CONFIG_NF_CONNTRACK_AMANDA=y
-CONFIG_NF_CONNTRACK_FTP=y
-CONFIG_NF_CONNTRACK_H323=y
-CONFIG_NF_CONNTRACK_IRC=y
-CONFIG_NF_CONNTRACK_NETBIOS_NS=y
-CONFIG_NF_CONNTRACK_PPTP=y
-CONFIG_NF_CONNTRACK_SANE=y
-CONFIG_NF_CONNTRACK_TFTP=y
-CONFIG_NF_CT_NETLINK=y
-CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
-CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
-CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
-CONFIG_NETFILTER_XT_TARGET_CT=y
-CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
-CONFIG_NETFILTER_XT_TARGET_MARK=y
-CONFIG_NETFILTER_XT_TARGET_NFLOG=y
-CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
-CONFIG_NETFILTER_XT_TARGET_TPROXY=y
-CONFIG_NETFILTER_XT_TARGET_TRACE=y
-CONFIG_NETFILTER_XT_TARGET_SECMARK=y
-CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
-CONFIG_NETFILTER_XT_MATCH_BPF=y
-CONFIG_NETFILTER_XT_MATCH_COMMENT=y
-CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
-CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
-CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
-CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
-CONFIG_NETFILTER_XT_MATCH_HELPER=y
-CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
-# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
-CONFIG_NETFILTER_XT_MATCH_LENGTH=y
-CONFIG_NETFILTER_XT_MATCH_LIMIT=y
-CONFIG_NETFILTER_XT_MATCH_MAC=y
-CONFIG_NETFILTER_XT_MATCH_MARK=y
-CONFIG_NETFILTER_XT_MATCH_OWNER=y
-CONFIG_NETFILTER_XT_MATCH_POLICY=y
-CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
-CONFIG_NETFILTER_XT_MATCH_QUOTA=y
-CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
-CONFIG_NETFILTER_XT_MATCH_SOCKET=y
-CONFIG_NETFILTER_XT_MATCH_STATE=y
-CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
-CONFIG_NETFILTER_XT_MATCH_STRING=y
-CONFIG_NETFILTER_XT_MATCH_TIME=y
-CONFIG_NETFILTER_XT_MATCH_U32=y
-CONFIG_IP_NF_IPTABLES=y
-CONFIG_IP_NF_MATCH_AH=y
-CONFIG_IP_NF_MATCH_ECN=y
-CONFIG_IP_NF_MATCH_TTL=y
-CONFIG_IP_NF_FILTER=y
-CONFIG_IP_NF_TARGET_REJECT=y
-CONFIG_IP_NF_NAT=y
-CONFIG_IP_NF_TARGET_MASQUERADE=y
-CONFIG_IP_NF_TARGET_NETMAP=y
-CONFIG_IP_NF_TARGET_REDIRECT=y
-CONFIG_IP_NF_MANGLE=y
-CONFIG_IP_NF_RAW=y
-CONFIG_IP_NF_SECURITY=y
-CONFIG_IP_NF_ARPTABLES=y
-CONFIG_IP_NF_ARPFILTER=y
-CONFIG_IP_NF_ARP_MANGLE=y
-CONFIG_IP6_NF_IPTABLES=y
-CONFIG_IP6_NF_MATCH_IPV6HEADER=y
-CONFIG_IP6_NF_MATCH_RPFILTER=y
-CONFIG_IP6_NF_FILTER=y
-CONFIG_IP6_NF_TARGET_REJECT=y
-CONFIG_IP6_NF_MANGLE=y
-CONFIG_IP6_NF_RAW=y
-CONFIG_L2TP=y
-CONFIG_NET_SCHED=y
-CONFIG_NET_SCH_HTB=y
-CONFIG_NET_SCH_NETEM=y
-CONFIG_NET_SCH_INGRESS=y
-CONFIG_NET_CLS_U32=y
-CONFIG_NET_CLS_BPF=y
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_U32=y
-CONFIG_NET_CLS_ACT=y
-CONFIG_VSOCKETS=y
-CONFIG_VIRTIO_VSOCKETS=y
-CONFIG_BPF_JIT=y
-CONFIG_CFG80211=y
-CONFIG_MAC80211=y
-CONFIG_RFKILL=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_DEBUG_DEVRES=y
-CONFIG_OF=y
-CONFIG_OF_UNITTEST=y
-# CONFIG_PNP_DEBUG_MESSAGES is not set
-CONFIG_ZRAM=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_VIRTIO_BLK=y
-CONFIG_UID_SYS_STATS=y
-CONFIG_SCSI=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_BLK_DEV_SR=y
-CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_SPI_ATTRS=y
-CONFIG_SCSI_VIRTIO=y
-CONFIG_MD=y
-CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
-CONFIG_DM_MIRROR=y
-CONFIG_DM_ZERO=y
-CONFIG_DM_UEVENT=y
-CONFIG_DM_VERITY=y
-CONFIG_DM_VERITY_AVB=y
-CONFIG_DM_VERITY_FEC=y
-CONFIG_DM_BOW=y
-CONFIG_NETDEVICES=y
-CONFIG_NETCONSOLE=y
-CONFIG_NETCONSOLE_DYNAMIC=y
-CONFIG_TUN=y
-CONFIG_VIRTIO_NET=y
-# CONFIG_ETHERNET is not set
-CONFIG_PPP=y
-CONFIG_PPP_BSDCOMP=y
-CONFIG_PPP_DEFLATE=y
-CONFIG_PPP_MPPE=y
-CONFIG_PPTP=y
-CONFIG_PPPOL2TP=y
-CONFIG_USB_RTL8152=y
-CONFIG_USB_USBNET=y
-# CONFIG_USB_NET_AX8817X is not set
-# CONFIG_USB_NET_AX88179_178A is not set
-# CONFIG_USB_NET_CDCETHER is not set
-# CONFIG_USB_NET_CDC_NCM is not set
-# CONFIG_USB_NET_NET1080 is not set
-# CONFIG_USB_NET_CDC_SUBSET is not set
-# CONFIG_USB_NET_ZAURUS is not set
-# CONFIG_WLAN_VENDOR_ADMTEK is not set
-# CONFIG_WLAN_VENDOR_ATH is not set
-# CONFIG_WLAN_VENDOR_ATMEL is not set
-# CONFIG_WLAN_VENDOR_BROADCOM is not set
-# CONFIG_WLAN_VENDOR_CISCO is not set
-# CONFIG_WLAN_VENDOR_INTEL is not set
-# CONFIG_WLAN_VENDOR_INTERSIL is not set
-# CONFIG_WLAN_VENDOR_MARVELL is not set
-# CONFIG_WLAN_VENDOR_MEDIATEK is not set
-# CONFIG_WLAN_VENDOR_RALINK is not set
-# CONFIG_WLAN_VENDOR_REALTEK is not set
-# CONFIG_WLAN_VENDOR_RSI is not set
-# CONFIG_WLAN_VENDOR_ST is not set
-# CONFIG_WLAN_VENDOR_TI is not set
-# CONFIG_WLAN_VENDOR_ZYDAS is not set
-# CONFIG_WLAN_VENDOR_QUANTENNA is not set
-CONFIG_MAC80211_HWSIM=y
-CONFIG_VIRT_WIFI=y
-CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_EVDEV=y
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_INPUT_JOYSTICK=y
-CONFIG_JOYSTICK_XPAD=y
-CONFIG_JOYSTICK_XPAD_FF=y
-CONFIG_JOYSTICK_XPAD_LEDS=y
-CONFIG_INPUT_TABLET=y
-CONFIG_TABLET_USB_ACECAD=y
-CONFIG_TABLET_USB_AIPTEK=y
-CONFIG_TABLET_USB_GTCO=y
-CONFIG_TABLET_USB_HANWANG=y
-CONFIG_TABLET_USB_KBTAB=y
-CONFIG_INPUT_MISC=y
-CONFIG_INPUT_UINPUT=y
-# CONFIG_SERIO_I8042 is not set
-# CONFIG_VT is not set
-# CONFIG_LEGACY_PTYS is not set
-# CONFIG_DEVMEM is not set
-CONFIG_SERIAL_8250=y
-# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
-CONFIG_SERIAL_8250_CONSOLE=y
-# CONFIG_SERIAL_8250_EXAR is not set
-CONFIG_SERIAL_8250_NR_UARTS=48
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-CONFIG_VIRTIO_CONSOLE=y
-CONFIG_HW_RANDOM=y
-# CONFIG_HW_RANDOM_INTEL is not set
-# CONFIG_HW_RANDOM_AMD is not set
-# CONFIG_HW_RANDOM_VIA is not set
-CONFIG_HW_RANDOM_VIRTIO=y
-CONFIG_HPET=y
-# CONFIG_HPET_MMAP_DEFAULT is not set
-# CONFIG_DEVPORT is not set
-# CONFIG_ACPI_I2C_OPREGION is not set
-# CONFIG_I2C_COMPAT is not set
-# CONFIG_I2C_HELPER_AUTO is not set
-CONFIG_PTP_1588_CLOCK=y
-# CONFIG_HWMON is not set
-# CONFIG_X86_PKG_TEMP_THERMAL is not set
-CONFIG_WATCHDOG=y
-CONFIG_SOFT_WATCHDOG=y
-CONFIG_MEDIA_SUPPORT=y
-# CONFIG_VGA_ARB is not set
-CONFIG_DRM=y
-# CONFIG_DRM_FBDEV_EMULATION is not set
-CONFIG_DRM_VIRTIO_GPU=y
-CONFIG_SOUND=y
-CONFIG_SND=y
-CONFIG_SND_HRTIMER=y
-# CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_VERBOSE_PROCFS is not set
-# CONFIG_SND_DRIVERS is not set
-CONFIG_SND_INTEL8X0=y
-# CONFIG_SND_USB is not set
-CONFIG_HIDRAW=y
-CONFIG_UHID=y
-CONFIG_HID_A4TECH=y
-CONFIG_HID_ACRUX=y
-CONFIG_HID_ACRUX_FF=y
-CONFIG_HID_APPLE=y
-CONFIG_HID_BELKIN=y
-CONFIG_HID_CHERRY=y
-CONFIG_HID_CHICONY=y
-CONFIG_HID_PRODIKEYS=y
-CONFIG_HID_CYPRESS=y
-CONFIG_HID_DRAGONRISE=y
-CONFIG_DRAGONRISE_FF=y
-CONFIG_HID_EMS_FF=y
-CONFIG_HID_ELECOM=y
-CONFIG_HID_EZKEY=y
-CONFIG_HID_HOLTEK=y
-CONFIG_HID_KEYTOUCH=y
-CONFIG_HID_KYE=y
-CONFIG_HID_UCLOGIC=y
-CONFIG_HID_WALTOP=y
-CONFIG_HID_GYRATION=y
-CONFIG_HID_TWINHAN=y
-CONFIG_HID_KENSINGTON=y
-CONFIG_HID_LCPOWER=y
-CONFIG_HID_LOGITECH=y
-CONFIG_HID_LOGITECH_DJ=y
-CONFIG_LOGITECH_FF=y
-CONFIG_LOGIRUMBLEPAD2_FF=y
-CONFIG_LOGIG940_FF=y
-CONFIG_HID_MAGICMOUSE=y
-CONFIG_HID_MICROSOFT=y
-CONFIG_HID_MONTEREY=y
-CONFIG_HID_MULTITOUCH=y
-CONFIG_HID_NTRIG=y
-CONFIG_HID_ORTEK=y
-CONFIG_HID_PANTHERLORD=y
-CONFIG_PANTHERLORD_FF=y
-CONFIG_HID_PETALYNX=y
-CONFIG_HID_PICOLCD=y
-CONFIG_HID_PRIMAX=y
-CONFIG_HID_ROCCAT=y
-CONFIG_HID_SAITEK=y
-CONFIG_HID_SAMSUNG=y
-CONFIG_HID_SONY=y
-CONFIG_HID_SPEEDLINK=y
-CONFIG_HID_SUNPLUS=y
-CONFIG_HID_GREENASIA=y
-CONFIG_GREENASIA_FF=y
-CONFIG_HID_SMARTJOYPLUS=y
-CONFIG_SMARTJOYPLUS_FF=y
-CONFIG_HID_TIVO=y
-CONFIG_HID_TOPSEED=y
-CONFIG_HID_THRUSTMASTER=y
-CONFIG_HID_WACOM=y
-CONFIG_HID_WIIMOTE=y
-CONFIG_HID_ZEROPLUS=y
-CONFIG_HID_ZYDACRON=y
-CONFIG_USB_HIDDEV=y
-CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_GADGET=y
-CONFIG_USB_DUMMY_HCD=y
-CONFIG_USB_CONFIGFS=y
-CONFIG_USB_CONFIGFS_UEVENT=y
-CONFIG_USB_CONFIGFS_F_FS=y
-CONFIG_USB_CONFIGFS_F_ACC=y
-CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
-CONFIG_USB_CONFIGFS_F_MIDI=y
-CONFIG_MMC=y
-# CONFIG_PWRSEQ_EMMC is not set
-# CONFIG_PWRSEQ_SIMPLE is not set
-# CONFIG_MMC_BLOCK is not set
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_TEST=y
-CONFIG_SW_SYNC=y
-CONFIG_VIRTIO_PCI=y
-CONFIG_VIRTIO_BALLOON=y
-CONFIG_VIRTIO_INPUT=y
-CONFIG_VIRTIO_MMIO=y
-CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
-CONFIG_STAGING=y
-CONFIG_ASHMEM=y
-CONFIG_ANDROID_VSOC=y
-CONFIG_ION=y
-CONFIG_ION_SYSTEM_HEAP=y
-# CONFIG_X86_PLATFORM_DEVICES is not set
-# CONFIG_IOMMU_SUPPORT is not set
-CONFIG_ANDROID=y
-CONFIG_ANDROID_BINDER_IPC=y
-CONFIG_EXT4_FS=y
-CONFIG_EXT4_FS_POSIX_ACL=y
-CONFIG_EXT4_FS_SECURITY=y
-CONFIG_EXT4_ENCRYPTION=y
-CONFIG_F2FS_FS=y
-CONFIG_F2FS_FS_SECURITY=y
-CONFIG_F2FS_FS_ENCRYPTION=y
-CONFIG_QUOTA=y
-CONFIG_QUOTA_NETLINK_INTERFACE=y
-# CONFIG_PRINT_QUOTA_WARNING is not set
-CONFIG_QFMT_V2=y
-CONFIG_AUTOFS4_FS=y
-CONFIG_FUSE_FS=y
-CONFIG_OVERLAY_FS=y
-CONFIG_MSDOS_FS=y
-CONFIG_VFAT_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_TMPFS=y
-CONFIG_TMPFS_POSIX_ACL=y
-CONFIG_HUGETLBFS=y
-CONFIG_SDCARD_FS=y
-CONFIG_PSTORE=y
-CONFIG_PSTORE_CONSOLE=y
-CONFIG_PSTORE_RAM=y
-CONFIG_NLS_DEFAULT="utf8"
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_ASCII=y
-CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_UTF8=y
-CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
-CONFIG_SECURITY=y
-CONFIG_SECURITY_NETWORK=y
-CONFIG_SECURITY_PATH=y
-CONFIG_HARDENED_USERCOPY=y
-CONFIG_SECURITY_SELINUX=y
-CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
-# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
-CONFIG_CRYPTO_ADIANTUM=y
-CONFIG_CRYPTO_SHA512=y
-CONFIG_CRYPTO_LZ4=y
-CONFIG_CRYPTO_ZSTD=y
-CONFIG_CRYPTO_DEV_VIRTIO=y
-CONFIG_PRINTK_TIME=y
-CONFIG_DEBUG_INFO=y
-# CONFIG_ENABLE_MUST_CHECK is not set
-CONFIG_FRAME_WARN=1024
-# CONFIG_UNUSED_SYMBOLS is not set
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_STACK_USAGE=y
-CONFIG_DEBUG_MEMORY_INIT=y
-CONFIG_DEBUG_STACKOVERFLOW=y
-CONFIG_HARDLOCKUP_DETECTOR=y
-CONFIG_PANIC_TIMEOUT=5
-CONFIG_SCHEDSTATS=y
-CONFIG_RCU_CPU_STALL_TIMEOUT=60
-CONFIG_ENABLE_DEFAULT_TRACERS=y
-CONFIG_IO_DELAY_NONE=y
-CONFIG_DEBUG_BOOT_PARAMS=y
-CONFIG_OPTIMIZE_INLINING=y
-CONFIG_UNWINDER_FRAME_POINTER=y
@@ -398,3 +398,5 @@
 384	i386	arch_prctl		sys_arch_prctl			__ia32_compat_sys_arch_prctl
 385	i386	io_pgetevents		sys_io_pgetevents		__ia32_compat_sys_io_pgetevents
 386	i386	rseq			sys_rseq			__ia32_sys_rseq
+424	i386	pidfd_send_signal	sys_pidfd_send_signal		__ia32_sys_pidfd_send_signal
+434	i386	pidfd_open		sys_pidfd_open			__ia32_sys_pidfd_open
@@ -339,6 +339,8 @@
 330	common	pkey_alloc		sys_pkey_alloc
 331	common	pkey_free		sys_pkey_free
 332	common	statx			sys_statx
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+434	common	pidfd_open		sys_pidfd_open

 #
 # x32-specific system call numbers start at 512 to avoid cache impact
@@ -711,7 +711,17 @@ extern struct movsl_mask {
  * checking before using them, but you have to surround them with the
  * user_access_begin/end() pair.
  */
-#define user_access_begin()	__uaccess_begin()
+static __must_check inline bool user_access_begin(int type,
+						  const void __user *ptr,
+						  size_t len)
+{
+	if (unlikely(!access_ok(type, ptr, len)))
+		return 0;
+	__uaccess_begin();
+	return 1;
+}
+
+#define user_access_begin(a, b, c)	user_access_begin(a, b, c)
 #define user_access_end()	__uaccess_end()

 #define unsafe_put_user(x, ptr, err_label)	\
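With this backport, callers no longer pair user_access_begin() with a separate access_ok(); the begin call performs the check itself and reports failure. The intended calling pattern, sketched from the comment and definitions above rather than taken from a hunk in this merge (uptr and val are illustrative):

	if (!user_access_begin(VERIFY_WRITE, uptr, sizeof(*uptr)))
		return -EFAULT;
	unsafe_put_user(val, uptr, efault);	/* jumps to efault on a fault */
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;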
@@ -27,7 +27,6 @@ config KVM
 	depends on X86_LOCAL_APIC
 	select PREEMPT_NOTIFIERS
 	select MMU_NOTIFIER
-	select ANON_INODES
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQFD
 	select IRQ_BYPASS_MANAGER
@@ -1,5 +0,0 @@
-. ${ROOT_DIR}/common/build.config.common
-. ${ROOT_DIR}/common/build.config.aarch64
-
-DEFCONFIG=cuttlefish_defconfig
-POST_DEFCONFIG_CMDS="check_defconfig"
@@ -1,5 +0,0 @@
-. ${ROOT_DIR}/common/build.config.common
-. ${ROOT_DIR}/common/build.config.x86_64
-
-DEFCONFIG=x86_64_cuttlefish_defconfig
-POST_DEFCONFIG_CMDS="check_defconfig"
@@ -16,3 +16,4 @@ System.map
 "
 STOP_SHIP_TRACEPRINTK=1
 ABI_DEFINITION=abi_gki_aarch64.xml
+BUILD_INITRAMFS=1
@@ -15,3 +15,4 @@ vmlinux
 System.map
 "
 STOP_SHIP_TRACEPRINTK=1
+BUILD_INITRAMFS=1
@@ -179,7 +179,6 @@ source "drivers/base/regmap/Kconfig"
 config DMA_SHARED_BUFFER
 	bool
 	default n
-	select ANON_INODES
 	select IRQ_WORK
 	help
 	  This option enables the framework for buffer-sharing between
@@ -157,7 +157,6 @@ config TCG_CRB
 config TCG_VTPM_PROXY
 	tristate "VTPM Proxy Interface"
 	depends on TCG_TPM
-	select ANON_INODES
 	---help---
 	  This driver proxies for an emulated TPM (vTPM) running in userspace.
 	  A device /dev/vtpmx is provided that creates a device pair
@@ -3,7 +3,6 @@ menu "DMABUF options"
 config SYNC_FILE
 	bool "Explicit Synchronization Framework"
 	default n
-	select ANON_INODES
 	select DMA_SHARED_BUFFER
 	---help---
 	  The Sync File Framework adds explicit syncronization via
@@ -40,7 +40,10 @@
 #include <linux/fdtable.h>
 #include <linux/list_sort.h>
 #include <linux/hashtable.h>
+#include <linux/mount.h>

 #include <uapi/linux/dma-buf.h>
+#include <uapi/linux/magic.h>

 static atomic_long_t name_counter;
@@ -66,6 +69,41 @@ struct dma_proc {

 static struct dma_buf_list db_list;

+static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
+{
+	struct dma_buf *dmabuf;
+	char name[DMA_BUF_NAME_LEN];
+	size_t ret = 0;
+
+	dmabuf = dentry->d_fsdata;
+	mutex_lock(&dmabuf->lock);
+	if (dmabuf->name)
+		ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
+	mutex_unlock(&dmabuf->lock);
+
+	return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
+			     dentry->d_name.name, ret > 0 ? name : "");
+}
+
+static const struct dentry_operations dma_buf_dentry_ops = {
+	.d_dname = dmabuffs_dname,
+};
+
+static struct vfsmount *dma_buf_mnt;
+
+static struct dentry *dma_buf_fs_mount(struct file_system_type *fs_type,
+				       int flags, const char *name, void *data)
+{
+	return mount_pseudo(fs_type, "dmabuf:", NULL, &dma_buf_dentry_ops,
+			    DMA_BUF_MAGIC);
+}
+
+static struct file_system_type dma_buf_fs_type = {
+	.name = "dmabuf",
+	.mount = dma_buf_fs_mount,
+	.kill_sb = kill_anon_super,
+};
+
 static int dma_buf_release(struct inode *inode, struct file *file)
 {
 	struct dma_buf *dmabuf;
@@ -314,6 +352,43 @@ static int dma_buf_begin_cpu_access_umapped(struct dma_buf *dmabuf,
 static int dma_buf_end_cpu_access_umapped(struct dma_buf *dmabuf,
 					  enum dma_data_direction direction);

+/**
+ * dma_buf_set_name - Set a name to a specific dma_buf to track the usage.
+ * The name of the dma-buf buffer can only be set when the dma-buf is not
+ * attached to any devices. It could theoritically support changing the
+ * name of the dma-buf if the same piece of memory is used for multiple
+ * purpose between different devices.
+ *
+ * @dmabuf [in]     dmabuf buffer that will be renamed.
+ * @buf:   [in]     A piece of userspace memory that contains the name of
+ *                  the dma-buf.
+ *
+ * Returns 0 on success. If the dma-buf buffer is already attached to
+ * devices, return -EBUSY.
+ *
+ */
+static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
+{
+	char *name = strndup_user(buf, DMA_BUF_NAME_LEN);
+	long ret = 0;
+
+	if (IS_ERR(name))
+		return PTR_ERR(name);
+
+	mutex_lock(&dmabuf->lock);
+	if (!list_empty(&dmabuf->attachments)) {
+		ret = -EBUSY;
+		kfree(name);
+		goto out_unlock;
+	}
+	kfree(dmabuf->name);
+	dmabuf->name = name;
+
+out_unlock:
+	mutex_unlock(&dmabuf->lock);
+	return ret;
+}
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
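Userspace drives this through the new ioctl; a hedged sketch in C (the DMA_BUF_SET_NAME request code comes from the patch's uapi header, while the fd and the name string are illustrative):

	#include <sys/ioctl.h>
	#include <linux/dma-buf.h>

	/* Fails with -EBUSY once the buffer has attachments, per the code above. */
	static int name_dmabuf(int dmabuf_fd, const char *name)
	{
		return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, name);
	}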
@ -360,11 +435,29 @@ static long dma_buf_ioctl(struct file *file,
|
||||
ret = dma_buf_begin_cpu_access(dmabuf, dir);
|
||||
|
||||
return ret;
|
||||
|
||||
case DMA_BUF_SET_NAME:
|
||||
return dma_buf_set_name(dmabuf, (const char __user *)arg);
|
||||
|
||||
default:
|
||||
return -ENOTTY;
|
||||
}
|
||||
}
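
The resulting userspace flow is easy to sketch. A minimal, illustrative example (not part of the patch) that assumes dmabuf_fd is a valid dma-buf file descriptor obtained from some exporter; DMA_BUF_SET_NAME and DMA_BUF_NAME_LEN come from the UAPI header included above:

/* Hypothetical demo: label a dma-buf and read the name back via procfs.
 * Assumes dmabuf_fd has no device attachments yet, otherwise the ioctl
 * returns -EBUSY as implemented above. */
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int label_dmabuf(int dmabuf_fd)
{
        char path[64], link[256];
        ssize_t n;

        if (ioctl(dmabuf_fd, DMA_BUF_SET_NAME, "camera-preview"))
                return -1;

        /* dmabuffs_dname() makes the name visible in the fd symlink */
        snprintf(path, sizeof(path), "/proc/self/fd/%d", dmabuf_fd);
        n = readlink(path, link, sizeof(link) - 1);
        if (n < 0)
                return -1;
        link[n] = '\0';
        printf("%s -> %s\n", path, link);  /* e.g. "/dmabuf:camera-preview" */
        return 0;
}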

static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
{
    struct dma_buf *dmabuf = file->private_data;

    seq_printf(m, "size:\t%zu\n", dmabuf->size);
    /* Don't count the temporary reference taken inside procfs seq_show */
    seq_printf(m, "count:\t%ld\n", file_count(dmabuf->file) - 1);
    seq_printf(m, "exp_name:\t%s\n", dmabuf->exp_name);
    mutex_lock(&dmabuf->lock);
    if (dmabuf->name)
        seq_printf(m, "name:\t%s\n", dmabuf->name);
    mutex_unlock(&dmabuf->lock);
}

static const struct file_operations dma_buf_fops = {
    .release = dma_buf_release,
    .mmap = dma_buf_mmap_internal,
@@ -374,6 +467,7 @@ static const struct file_operations dma_buf_fops = {
#ifdef CONFIG_COMPAT
    .compat_ioctl = dma_buf_ioctl,
#endif
    .show_fdinfo = dma_buf_show_fdinfo,
};

/*
@@ -384,6 +478,32 @@ static inline int is_dma_buf_file(struct file *file)
    return file->f_op == &dma_buf_fops;
}

static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
{
    struct file *file;
    struct inode *inode = alloc_anon_inode(dma_buf_mnt->mnt_sb);

    if (IS_ERR(inode))
        return ERR_CAST(inode);

    inode->i_size = dmabuf->size;
    inode_set_bytes(inode, dmabuf->size);

    file = alloc_file_pseudo(inode, dma_buf_mnt, "dmabuf",
                 flags, &dma_buf_fops);
    if (IS_ERR(file))
        goto err_alloc_file;
    file->f_flags = flags & (O_ACCMODE | O_NONBLOCK);
    file->private_data = dmabuf;
    file->f_path.dentry->d_fsdata = dmabuf;

    return file;

err_alloc_file:
    iput(inode);
    return file;
}
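
Since every buffer now owns a distinct anon inode (sized via inode->i_size above), userspace can tell buffers apart with plain fstat(). A hedged sketch, again assuming the dma-buf fds come from elsewhere:

/* Illustrative only: with one inode per buffer, st_ino distinguishes
 * distinct dma-bufs, while two fds for the same buffer share an inode,
 * and st_size reports the buffer size set above. */
#include <stdio.h>
#include <sys/stat.h>

static void identify_dmabufs(int fd_a, int fd_b)
{
        struct stat sa, sb;

        if (fstat(fd_a, &sa) || fstat(fd_b, &sb))
                return;
        printf("ino %llu vs %llu: %s buffer\n",
               (unsigned long long)sa.st_ino,
               (unsigned long long)sb.st_ino,
               sa.st_ino == sb.st_ino ? "same" : "different");
}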

/**
 * DOC: dma buf device access
 *
@@ -491,8 +611,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
    }
    dmabuf->resv = resv;

    file = anon_inode_getfile(bufname, &dma_buf_fops, dmabuf,
                  exp_info->flags);
    file = dma_buf_getfile(dmabuf, exp_info->flags);
    if (IS_ERR(file)) {
        ret = PTR_ERR(file);
        goto err_dmabuf;
@@ -1178,8 +1297,9 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
        return ret;

    seq_puts(s, "\nDma-buf Objects:\n");
    seq_printf(s, "%-8s\t%-8s\t%-8s\t%-8s\t%-12s\t%-s\n",
           "size", "flags", "mode", "count", "exp_name", "buf name");
    seq_printf(s, "%-8s\t%-8s\t%-8s\t%-8s\t%-12s\t%-s\t%-8s\n",
           "size", "flags", "mode", "count", "exp_name",
           "buf name", "ino");

    list_for_each_entry(buf_obj, &db_list.head, list_node) {
        ret = mutex_lock_interruptible(&buf_obj->lock);
@@ -1190,11 +1310,13 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
            continue;
        }

        seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%-12s\t%-s\n",
        seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%-12s\t%-s\t%08lu\t%s\n",
               buf_obj->size,
               buf_obj->file->f_flags, buf_obj->file->f_mode,
               file_count(buf_obj->file),
               buf_obj->exp_name, buf_obj->buf_name);
               buf_obj->exp_name, buf_obj->buf_name,
               file_inode(buf_obj->file)->i_ino,
               buf_obj->name ?: "");

        robj = buf_obj->resv;
        while (true) {
@@ -1449,6 +1571,10 @@ static inline void dma_buf_uninit_debugfs(void)

static int __init dma_buf_init(void)
{
    dma_buf_mnt = kern_mount(&dma_buf_fs_type);
    if (IS_ERR(dma_buf_mnt))
        return PTR_ERR(dma_buf_mnt);

    mutex_init(&db_list.lock);
    INIT_LIST_HEAD(&db_list.head);
    dma_buf_init_debugfs();
@@ -1459,5 +1585,6 @@ subsys_initcall(dma_buf_init);
static void __exit dma_buf_deinit(void)
{
    dma_buf_uninit_debugfs();
    kern_unmount(dma_buf_mnt);
}
__exitcall(dma_buf_deinit);

@@ -12,7 +12,6 @@ config ARCH_HAVE_CUSTOM_GPIO_H

menuconfig GPIOLIB
    bool "GPIO Support"
    select ANON_INODES
    help
      This enables GPIO support through the generic GPIO library.
      You only need to enable this, if you also want to enable

@@ -256,9 +256,7 @@ int drm_connector_init(struct drm_device *dev,

    if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL &&
        connector_type != DRM_MODE_CONNECTOR_WRITEBACK)
        drm_object_attach_property(&connector->base,
                       config->edid_property,
                       0);
    drm_connector_attach_edid_property(connector);

    drm_object_attach_property(&connector->base,
                   config->dpms_property, 0);
@@ -290,6 +288,25 @@ out_put:
}
EXPORT_SYMBOL(drm_connector_init);

/**
 * drm_connector_attach_edid_property - attach edid property.
 * @connector: the connector
 *
 * Some connector types like DRM_MODE_CONNECTOR_VIRTUAL do not get an
 * edid property attached by default. This function can be used to
 * explicitly enable the edid property in these cases.
 */
void drm_connector_attach_edid_property(struct drm_connector *connector)
{
    struct drm_mode_config *config = &connector->dev->mode_config;

    drm_object_attach_property(&connector->base,
                   config->edid_property,
                   0);
}
EXPORT_SYMBOL(drm_connector_attach_edid_property);
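
A minimal sketch of the intended call site in a virtual-connector driver (the my_* names are hypothetical); the virtio hookup further down in this merge does exactly this:

/* Hedged sketch: opt a virtual connector into the EDID property. */
extern const struct drm_connector_funcs my_connector_funcs;
extern const struct drm_connector_helper_funcs my_helper_funcs;

static int my_virtual_output_init(struct drm_device *dev,
                                  struct drm_connector *connector,
                                  bool have_edid)
{
        drm_connector_init(dev, connector, &my_connector_funcs,
                           DRM_MODE_CONNECTOR_VIRTUAL);
        drm_connector_helper_add(connector, &my_helper_funcs);

        /* VIRTUAL connectors no longer get the property by default */
        if (have_edid)
                drm_connector_attach_edid_property(connector);
        return 0;
}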

/**
 * drm_connector_attach_encoder - attach a connector to an encoder
 * @connector: connector to attach

@@ -678,6 +678,43 @@ out_unlock:
}
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);

/**
 * drm_gem_prime_mmap - PRIME mmap function for GEM drivers
 * @obj: GEM object
 * @vma: Virtual address range
 *
 * This function sets up a userspace mapping for PRIME exported buffers using
 * the same codepath that is used for regular GEM buffer mapping on the DRM fd.
 * The fake GEM offset is added to vma->vm_pgoff and &drm_driver->fops->mmap is
 * called to set up the mapping.
 *
 * Drivers can use this as their &drm_driver.gem_prime_mmap callback.
 */
int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
    /* Used by drm_gem_mmap() to lookup the GEM object */
    struct drm_file priv = {
        .minor = obj->dev->primary,
    };
    struct file fil = {
        .private_data = &priv,
    };
    int ret;

    ret = drm_vma_node_allow(&obj->vma_node, &priv);
    if (ret)
        return ret;

    vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);

    ret = obj->dev->driver->fops->mmap(&fil, vma);

    drm_vma_node_revoke(&obj->vma_node, &priv);

    return ret;
}
EXPORT_SYMBOL(drm_gem_prime_mmap);
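
Wiring the helper into a driver is then a single assignment; a sketch with all unrelated fields elided:

/* Sketch only: virtio-gpu adopts this same callback later in this merge. */
static struct drm_driver my_driver = {
        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
        .gem_prime_mmap = drm_gem_prime_mmap,
        /* ... */
};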

/**
 * drm_gem_prime_import_dev - core implementation of the import callback
 * @dev: drm_device to import into

@@ -1604,7 +1604,9 @@ static int eb_copy_relocations(const struct i915_execbuffer *eb)
     * happened we would make the mistake of assuming that the
     * relocations were valid.
     */
    user_access_begin();
    if (!user_access_begin(VERIFY_WRITE, urelocs, size))
        goto end_user;

    for (copied = 0; copied < nreloc; copied++)
        unsafe_put_user(-1,
                &urelocs[copied].presumed_offset,
@@ -2649,7 +2651,17 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
    unsigned int i;

    /* Copy the new buffer offsets back to the user's exec list. */
    user_access_begin();
    /*
     * Note: count * sizeof(*user_exec_list) does not overflow,
     * because we checked 'count' in check_buffer_count().
     *
     * And this range already got effectively checked earlier
     * when we did the "copy_from_user()" above.
     */
    if (!user_access_begin(VERIFY_WRITE, user_exec_list,
                   count * sizeof(*user_exec_list)))
        goto end_user;

    for (i = 0; i < args->buffer_count; i++) {
        if (!(exec2_list[i].offset & UPDATE))
            continue;
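
With the backport, user_access_begin() takes the pointer and length, performs the access_ok() check itself, and returns false on failure, so every caller must test it. A hedged kernel-style sketch of the resulting pattern (zero_offsets() is hypothetical):

/* Sketch of the checked begin/end pattern under this 4.19 ABI, where
 * user_access_begin(type, ptr, len) returns false on a bad range and
 * unsafe_put_user() jumps to a label on fault. */
static int zero_offsets(u64 __user *offsets, unsigned int count)
{
        unsigned int i;

        if (!user_access_begin(VERIFY_WRITE, offsets,
                               count * sizeof(*offsets)))
                return -EFAULT;

        for (i = 0; i < count; i++)
                unsafe_put_user(0, &offsets[i], efault);

        user_access_end();
        return 0;

efault:
        user_access_end();
        return -EFAULT;
}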

@@ -6,6 +6,6 @@
virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_drm_bus.o virtgpu_gem.o \
    virtgpu_fb.o virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
    virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
    virtgpu_ioctl.o virtgpu_prime.o
    virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o

obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o

@@ -28,6 +28,30 @@

#include "virtgpu_drv.h"

static void virtio_add_bool(struct seq_file *m, const char *name,
                bool value)
{
    seq_printf(m, "%-16s : %s\n", name, value ? "yes" : "no");
}

static void virtio_add_int(struct seq_file *m, const char *name,
               int value)
{
    seq_printf(m, "%-16s : %d\n", name, value);
}

static int virtio_gpu_features(struct seq_file *m, void *data)
{
    struct drm_info_node *node = (struct drm_info_node *) m->private;
    struct virtio_gpu_device *vgdev = node->minor->dev->dev_private;

    virtio_add_bool(m, "virgl", vgdev->has_virgl_3d);
    virtio_add_bool(m, "edid", vgdev->has_edid);
    virtio_add_int(m, "cap sets", vgdev->num_capsets);
    virtio_add_int(m, "scanouts", vgdev->num_scanouts);
    return 0;
}

static int
virtio_gpu_debugfs_irq_info(struct seq_file *m, void *data)
{
@@ -41,7 +65,8 @@ virtio_gpu_debugfs_irq_info(struct seq_file *m, void *data)
}

static struct drm_info_list virtio_gpu_debugfs_list[] = {
    { "irq_fence", virtio_gpu_debugfs_irq_info, 0, NULL },
    { "virtio-gpu-features", virtio_gpu_features },
    { "virtio-gpu-irq-fence", virtio_gpu_debugfs_irq_info, 0, NULL },
};

#define VIRTIO_GPU_DEBUGFS_ENTRIES ARRAY_SIZE(virtio_gpu_debugfs_list)

@@ -75,12 +75,9 @@ virtio_gpu_framebuffer_init(struct drm_device *dev,
               struct drm_gem_object *obj)
{
    int ret;
    struct virtio_gpu_object *bo;

    vgfb->base.obj[0] = obj;

    bo = gem_to_virtio_gpu_obj(obj);

    drm_helper_mode_fill_fb_struct(dev, &vgfb->base, mode_cmd);

    ret = drm_framebuffer_init(dev, &vgfb->base, &virtio_gpu_fb_funcs);
@@ -109,6 +106,9 @@ static void virtio_gpu_crtc_mode_set_nofb(struct drm_crtc *crtc)
static void virtio_gpu_crtc_atomic_enable(struct drm_crtc *crtc,
                      struct drm_crtc_state *old_state)
{
    struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc);

    output->enabled = true;
}

static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc,
@@ -119,6 +119,7 @@ static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc,
    struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc);

    virtio_gpu_cmd_set_scanout(vgdev, output->index, 0, 0, 0, 0, 0);
    output->enabled = false;
}

static int virtio_gpu_crtc_atomic_check(struct drm_crtc *crtc,
@@ -168,6 +169,12 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector)
    struct drm_display_mode *mode = NULL;
    int count, width, height;

    if (output->edid) {
        count = drm_add_edid_modes(connector, output->edid);
        if (count)
            return count;
    }

    width = le32_to_cpu(output->info.r.width);
    height = le32_to_cpu(output->info.r.height);
    count = drm_add_modes_noedid(connector, XRES_MAX, YRES_MAX);
@@ -236,12 +243,8 @@ static enum drm_connector_status virtio_gpu_conn_detect(

static void virtio_gpu_conn_destroy(struct drm_connector *connector)
{
    struct virtio_gpu_output *virtio_gpu_output =
        drm_connector_to_virtio_gpu_output(connector);

    drm_connector_unregister(connector);
    drm_connector_cleanup(connector);
    kfree(virtio_gpu_output);
}

static const struct drm_connector_funcs virtio_gpu_connector_funcs = {
@@ -286,6 +289,8 @@ static int vgdev_output_init(struct virtio_gpu_device *vgdev, int index)
    drm_connector_init(dev, connector, &virtio_gpu_connector_funcs,
               DRM_MODE_CONNECTOR_VIRTUAL);
    drm_connector_helper_add(connector, &virtio_gpu_conn_helper_funcs);
    if (vgdev->has_edid)
        drm_connector_attach_edid_property(connector);

    drm_encoder_init(dev, encoder, &virtio_gpu_enc_funcs,
             DRM_MODE_ENCODER_VIRTUAL, NULL);
@@ -372,6 +377,10 @@ int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)

void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev)
{
    virtio_gpu_fbdev_fini(vgdev);
    int i;

    for (i = 0; i < vgdev->num_scanouts; ++i)
        kfree(vgdev->outputs[i].edid);
    drm_atomic_helper_shutdown(vgdev->ddev);
    drm_mode_config_cleanup(vgdev->ddev);
}

@@ -71,6 +71,37 @@ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
    if (vga)
        virtio_pci_kick_out_firmware_fb(pdev);

    /*
     * Normally the drm_dev_set_unique() call is done by core DRM.
     * The following comment covers why virtio cannot rely on it.
     *
     * Unlike the other virtual GPU drivers, virtio abstracts the
     * underlying bus type by using struct virtio_device.
     *
     * Hence the dev_is_pci() check, used in core DRM, will fail
     * and the unique returned will be the virtio_device "virtio0",
     * while a "pci:..." one is required.
     *
     * A few other ideas were considered:
     * - Extend the dev_is_pci() check [in drm_set_busid] to
     *   consider virtio.
     *   Seems like a bigger hack than what we have already.
     *
     * - Point drm_device::dev to the parent of the virtio_device
     *   Semantic changes:
     *   * Using the wrong device for i2c, framebuffer_alloc and
     *     prime import.
     *   Visual changes:
     *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
     *     will print the wrong information.
     *
     * We could address the latter issues by introducing
     * drm_device::bus_dev, ... which would be used solely for this.
     *
     * So for the moment keep things as-is, with a bulky comment
     * for the next person who feels like removing this
     * drm_dev_set_unique() quirk.
     */
    snprintf(unique, sizeof(unique), "pci:%s", pname);
    ret = drm_dev_set_unique(dev, unique);
    if (ret)
@@ -85,6 +116,6 @@ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
    return 0;

err_free:
    drm_dev_unref(dev);
    drm_dev_put(dev);
    return ret;
}

@@ -42,13 +42,20 @@ module_param_named(modeset, virtio_gpu_modeset, int, 0400);

static int virtio_gpu_probe(struct virtio_device *vdev)
{
    int ret;

    if (vgacon_text_force() && virtio_gpu_modeset == -1)
        return -EINVAL;

    if (virtio_gpu_modeset == 0)
        return -EINVAL;

    return drm_virtio_init(&driver, vdev);
    ret = drm_virtio_init(&driver, vdev);
    if (ret)
        return ret;

    drm_fbdev_generic_setup(vdev->priv, 32);
    return 0;
}

static void virtio_gpu_remove(struct virtio_device *vdev)
@@ -80,6 +87,7 @@ static unsigned int features[] = {
 */
    VIRTIO_GPU_F_VIRGL,
#endif
    VIRTIO_GPU_F_EDID,
};
static struct virtio_driver virtio_gpu_driver = {
    .feature_table = features,
@@ -130,8 +138,6 @@ static struct drm_driver driver = {
    .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
    .gem_prime_export = drm_gem_prime_export,
    .gem_prime_import = drm_gem_prime_import,
    .gem_prime_pin = virtgpu_gem_prime_pin,
    .gem_prime_unpin = virtgpu_gem_prime_unpin,
    .gem_prime_get_sg_table = virtgpu_gem_prime_get_sg_table,
    .gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
    .gem_prime_vmap = virtgpu_gem_prime_vmap,

@@ -36,6 +36,7 @@
#include <drm/drm_atomic.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
@@ -46,23 +47,42 @@
#define DRIVER_DATE "0"

#define DRIVER_MAJOR 0
#define DRIVER_MINOR 0
#define DRIVER_PATCHLEVEL 1
#define DRIVER_MINOR 1
#define DRIVER_PATCHLEVEL 0

/* virtgpu_drm_bus.c */
int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev);

struct virtio_gpu_object_params {
    uint32_t format;
    uint32_t width;
    uint32_t height;
    unsigned long size;
    bool dumb;
    /* 3d */
    bool virgl;
    uint32_t target;
    uint32_t bind;
    uint32_t depth;
    uint32_t array_size;
    uint32_t last_level;
    uint32_t nr_samples;
    uint32_t flags;
};

struct virtio_gpu_object {
    struct drm_gem_object gem_base;
    uint32_t hw_res_handle;

    struct sg_table *pages;
    uint32_t mapped;
    void *vmap;
    bool dumb;
    struct ttm_place placement_code;
    struct ttm_placement placement;
    struct ttm_buffer_object tbo;
    struct ttm_bo_kmap_obj kmap;
    bool created;
};
#define gem_to_virtio_gpu_obj(gobj) \
    container_of((gobj), struct virtio_gpu_object, gem_base)
@@ -85,7 +105,6 @@ struct virtio_gpu_fence {
    struct dma_fence f;
    struct virtio_gpu_fence_driver *drv;
    struct list_head node;
    uint64_t seq;
};
#define to_virtio_fence(x) \
    container_of(x, struct virtio_gpu_fence, f)
@@ -112,8 +131,10 @@ struct virtio_gpu_output {
    struct drm_encoder enc;
    struct virtio_gpu_display_one info;
    struct virtio_gpu_update_cursor cursor;
    struct edid *edid;
    int cur_x;
    int cur_y;
    bool enabled;
};
#define drm_crtc_to_virtio_gpu_output(x) \
    container_of(x, struct virtio_gpu_output, crtc)
@@ -127,6 +148,7 @@ struct virtio_gpu_framebuffer {
    int x1, y1, x2, y2; /* dirty rect */
    spinlock_t dirty_lock;
    uint32_t hw_res_handle;
    struct virtio_gpu_fence *fence;
};
#define to_virtio_gpu_framebuffer(x) \
    container_of(x, struct virtio_gpu_framebuffer, base)
@@ -138,8 +160,6 @@ struct virtio_gpu_mman {
    struct ttm_bo_device bdev;
};

struct virtio_gpu_fbdev;

struct virtio_gpu_queue {
    struct virtqueue *vq;
    spinlock_t qlock;
@@ -170,8 +190,6 @@ struct virtio_gpu_device {

    struct virtio_gpu_mman mman;

    /* pointer to fbdev info structure */
    struct virtio_gpu_fbdev *vgfbdev;
    struct virtio_gpu_output outputs[VIRTIO_GPU_MAX_SCANOUTS];
    uint32_t num_scanouts;

@@ -180,8 +198,7 @@ struct virtio_gpu_device {
    struct kmem_cache *vbufs;
    bool vqs_ready;

    struct idr resource_idr;
    spinlock_t resource_idr_lock;
    struct ida resource_ida;

    wait_queue_head_t resp_wq;
    /* current display info */
@@ -190,10 +207,10 @@ struct virtio_gpu_device {

    struct virtio_gpu_fence_driver fence_drv;

    struct idr ctx_id_idr;
    spinlock_t ctx_id_idr_lock;
    struct ida ctx_id_ida;

    bool has_virgl_3d;
    bool has_edid;

    struct work_struct config_changed_work;

@@ -209,6 +226,9 @@ struct virtio_gpu_fpriv {
/* virtio_ioctl.c */
#define DRM_VIRTIO_NUM_IOCTLS 10
extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
                    struct list_head *head);
void virtio_gpu_unref_list(struct list_head *head);

/* virtio_kms.c */
int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags);
@@ -222,16 +242,17 @@ int virtio_gpu_gem_init(struct virtio_gpu_device *vgdev);
void virtio_gpu_gem_fini(struct virtio_gpu_device *vgdev);
int virtio_gpu_gem_create(struct drm_file *file,
              struct drm_device *dev,
              uint64_t size,
              struct virtio_gpu_object_params *params,
              struct drm_gem_object **obj_p,
              uint32_t *handle_p);
int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
                   struct drm_file *file);
void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
                 struct drm_file *file);
struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
                          size_t size, bool kernel,
                          bool pinned);
struct virtio_gpu_object*
virtio_gpu_alloc_object(struct drm_device *dev,
            struct virtio_gpu_object_params *params,
            struct virtio_gpu_fence *fence);
int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
                struct drm_device *dev,
                struct drm_mode_create_dumb *args);
@@ -240,30 +261,24 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv,
                  uint32_t handle, uint64_t *offset_p);

/* virtio_fb */
#define VIRTIO_GPUFB_CONN_LIMIT 1
int virtio_gpu_fbdev_init(struct virtio_gpu_device *vgdev);
void virtio_gpu_fbdev_fini(struct virtio_gpu_device *vgdev);
int virtio_gpu_surface_dirty(struct virtio_gpu_framebuffer *qfb,
                 struct drm_clip_rect *clips,
                 unsigned int num_clips);
/* virtio vg */
int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
void virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
                uint32_t *resid);
void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id);
void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
                    uint32_t resource_id,
                    uint32_t format,
                    uint32_t width,
                    uint32_t height);
                    struct virtio_gpu_object *bo,
                    struct virtio_gpu_object_params *params,
                    struct virtio_gpu_fence *fence);
void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
                   uint32_t resource_id);
void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
                    uint32_t resource_id, uint64_t offset,
                    struct virtio_gpu_object *bo,
                    uint64_t offset,
                    __le32 width, __le32 height,
                    __le32 x, __le32 y,
                    struct virtio_gpu_fence **fence);
                    struct virtio_gpu_fence *fence);
void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
                   uint32_t resource_id,
                   uint32_t x, uint32_t y,
@@ -274,19 +289,19 @@ void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
                uint32_t x, uint32_t y);
int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
                 struct virtio_gpu_object *obj,
                 uint32_t resource_id,
                 struct virtio_gpu_fence **fence);
                 struct virtio_gpu_fence *fence);
void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
                  struct virtio_gpu_object *obj);
int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
                struct virtio_gpu_output *output);
int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev);
void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev,
                       uint32_t resource_id);
int virtio_gpu_cmd_get_capset_info(struct virtio_gpu_device *vgdev, int idx);
int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev,
                  int idx, int version,
                  struct virtio_gpu_drv_cap_cache **cache_p);
int virtio_gpu_cmd_get_edids(struct virtio_gpu_device *vgdev);
void virtio_gpu_cmd_context_create(struct virtio_gpu_device *vgdev, uint32_t id,
                   uint32_t nlen, const char *name);
void virtio_gpu_cmd_context_destroy(struct virtio_gpu_device *vgdev,
@@ -299,21 +314,23 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
                        uint32_t resource_id);
void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
               void *data, uint32_t data_size,
               uint32_t ctx_id, struct virtio_gpu_fence **fence);
               uint32_t ctx_id, struct virtio_gpu_fence *fence);
void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
                      uint32_t resource_id, uint32_t ctx_id,
                      uint64_t offset, uint32_t level,
                      struct virtio_gpu_box *box,
                      struct virtio_gpu_fence **fence);
                      struct virtio_gpu_fence *fence);
void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
                    uint32_t resource_id, uint32_t ctx_id,
                    struct virtio_gpu_object *bo,
                    uint32_t ctx_id,
                    uint64_t offset, uint32_t level,
                    struct virtio_gpu_box *box,
                    struct virtio_gpu_fence **fence);
                    struct virtio_gpu_fence *fence);
void
virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
                  struct virtio_gpu_resource_create_3d *rc_3d,
                  struct virtio_gpu_fence **fence);
                  struct virtio_gpu_object *bo,
                  struct virtio_gpu_object_params *params,
                  struct virtio_gpu_fence *fence);
void virtio_gpu_ctrl_ack(struct virtqueue *vq);
void virtio_gpu_cursor_ack(struct virtqueue *vq);
void virtio_gpu_fence_ack(struct virtqueue *vq);
@@ -341,25 +358,28 @@ void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);

/* virtio_gpu_fence.c */
int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
bool virtio_fence_signaled(struct dma_fence *f);
struct virtio_gpu_fence *virtio_gpu_fence_alloc(
    struct virtio_gpu_device *vgdev);
void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
              struct virtio_gpu_ctrl_hdr *cmd_hdr,
              struct virtio_gpu_fence **fence);
              struct virtio_gpu_fence *fence);
void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
                    u64 last_seq);

/* virtio_gpu_object */
int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
                 unsigned long size, bool kernel, bool pinned,
                 struct virtio_gpu_object **bo_ptr);
int virtio_gpu_object_kmap(struct virtio_gpu_object *bo, void **ptr);
                 struct virtio_gpu_object_params *params,
                 struct virtio_gpu_object **bo_ptr,
                 struct virtio_gpu_fence *fence);
void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
                   struct virtio_gpu_object *bo);
void virtio_gpu_object_free_sg_table(struct virtio_gpu_object *bo);
int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait);

/* virtgpu_prime.c */
int virtgpu_gem_prime_pin(struct drm_gem_object *obj);
void virtgpu_gem_prime_unpin(struct drm_gem_object *obj);
struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
    struct drm_device *dev, struct dma_buf_attachment *attach,
@@ -372,7 +392,7 @@ int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
static inline struct virtio_gpu_object*
virtio_gpu_object_ref(struct virtio_gpu_object *bo)
{
    ttm_bo_reference(&bo->tbo);
    ttm_bo_get(&bo->tbo);
    return bo;
}

@@ -383,9 +403,8 @@ static inline void virtio_gpu_object_unref(struct virtio_gpu_object **bo)
    if ((*bo) == NULL)
        return;
    tbo = &((*bo)->tbo);
    ttm_bo_unref(&tbo);
    if (tbo == NULL)
        *bo = NULL;
    ttm_bo_put(tbo);
    *bo = NULL;
}

static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)

@@ -27,15 +27,6 @@
#include <drm/drm_fb_helper.h>
#include "virtgpu_drv.h"

#define VIRTIO_GPU_FBCON_POLL_PERIOD (HZ / 60)

struct virtio_gpu_fbdev {
    struct drm_fb_helper helper;
    struct virtio_gpu_framebuffer vgfb;
    struct virtio_gpu_device *vgdev;
    struct delayed_work work;
};

static int virtio_gpu_dirty_update(struct virtio_gpu_framebuffer *fb,
                   bool store, int x, int y,
                   int width, int height)
@@ -102,7 +93,7 @@ static int virtio_gpu_dirty_update(struct virtio_gpu_framebuffer *fb,

    offset = (y * fb->base.pitches[0]) + x * bpp;

    virtio_gpu_cmd_transfer_to_host_2d(vgdev, obj->hw_res_handle,
    virtio_gpu_cmd_transfer_to_host_2d(vgdev, obj,
                       offset,
                       cpu_to_le32(w),
                       cpu_to_le32(h),
@@ -157,199 +148,3 @@ int virtio_gpu_surface_dirty(struct virtio_gpu_framebuffer *vgfb,
                 left, top, right - left, bottom - top);
    return 0;
}

static void virtio_gpu_fb_dirty_work(struct work_struct *work)
{
    struct delayed_work *delayed_work = to_delayed_work(work);
    struct virtio_gpu_fbdev *vfbdev =
        container_of(delayed_work, struct virtio_gpu_fbdev, work);
    struct virtio_gpu_framebuffer *vgfb = &vfbdev->vgfb;

    virtio_gpu_dirty_update(&vfbdev->vgfb, false, vgfb->x1, vgfb->y1,
                vgfb->x2 - vgfb->x1, vgfb->y2 - vgfb->y1);
}

static void virtio_gpu_3d_fillrect(struct fb_info *info,
                   const struct fb_fillrect *rect)
{
    struct virtio_gpu_fbdev *vfbdev = info->par;

    drm_fb_helper_sys_fillrect(info, rect);
    virtio_gpu_dirty_update(&vfbdev->vgfb, true, rect->dx, rect->dy,
                rect->width, rect->height);
    schedule_delayed_work(&vfbdev->work, VIRTIO_GPU_FBCON_POLL_PERIOD);
}

static void virtio_gpu_3d_copyarea(struct fb_info *info,
                   const struct fb_copyarea *area)
{
    struct virtio_gpu_fbdev *vfbdev = info->par;

    drm_fb_helper_sys_copyarea(info, area);
    virtio_gpu_dirty_update(&vfbdev->vgfb, true, area->dx, area->dy,
                area->width, area->height);
    schedule_delayed_work(&vfbdev->work, VIRTIO_GPU_FBCON_POLL_PERIOD);
}

static void virtio_gpu_3d_imageblit(struct fb_info *info,
                    const struct fb_image *image)
{
    struct virtio_gpu_fbdev *vfbdev = info->par;

    drm_fb_helper_sys_imageblit(info, image);
    virtio_gpu_dirty_update(&vfbdev->vgfb, true, image->dx, image->dy,
                image->width, image->height);
    schedule_delayed_work(&vfbdev->work, VIRTIO_GPU_FBCON_POLL_PERIOD);
}

static struct fb_ops virtio_gpufb_ops = {
    .owner = THIS_MODULE,
    DRM_FB_HELPER_DEFAULT_OPS,
    .fb_fillrect = virtio_gpu_3d_fillrect,
    .fb_copyarea = virtio_gpu_3d_copyarea,
    .fb_imageblit = virtio_gpu_3d_imageblit,
};

static int virtio_gpu_vmap_fb(struct virtio_gpu_device *vgdev,
                  struct virtio_gpu_object *obj)
{
    return virtio_gpu_object_kmap(obj, NULL);
}

static int virtio_gpufb_create(struct drm_fb_helper *helper,
                   struct drm_fb_helper_surface_size *sizes)
{
    struct virtio_gpu_fbdev *vfbdev =
        container_of(helper, struct virtio_gpu_fbdev, helper);
    struct drm_device *dev = helper->dev;
    struct virtio_gpu_device *vgdev = dev->dev_private;
    struct fb_info *info;
    struct drm_framebuffer *fb;
    struct drm_mode_fb_cmd2 mode_cmd = {};
    struct virtio_gpu_object *obj;
    uint32_t resid, format, size;
    int ret;

    mode_cmd.width = sizes->surface_width;
    mode_cmd.height = sizes->surface_height;
    mode_cmd.pitches[0] = mode_cmd.width * 4;
    mode_cmd.pixel_format = drm_mode_legacy_fb_format(32, 24);

    format = virtio_gpu_translate_format(mode_cmd.pixel_format);
    if (format == 0)
        return -EINVAL;

    size = mode_cmd.pitches[0] * mode_cmd.height;
    obj = virtio_gpu_alloc_object(dev, size, false, true);
    if (IS_ERR(obj))
        return PTR_ERR(obj);

    virtio_gpu_resource_id_get(vgdev, &resid);
    virtio_gpu_cmd_create_resource(vgdev, resid, format,
                       mode_cmd.width, mode_cmd.height);

    ret = virtio_gpu_vmap_fb(vgdev, obj);
    if (ret) {
        DRM_ERROR("failed to vmap fb %d\n", ret);
        goto err_obj_vmap;
    }

    /* attach the object to the resource */
    ret = virtio_gpu_object_attach(vgdev, obj, resid, NULL);
    if (ret)
        goto err_obj_attach;

    info = drm_fb_helper_alloc_fbi(helper);
    if (IS_ERR(info)) {
        ret = PTR_ERR(info);
        goto err_fb_alloc;
    }

    info->par = helper;

    ret = virtio_gpu_framebuffer_init(dev, &vfbdev->vgfb,
                      &mode_cmd, &obj->gem_base);
    if (ret)
        goto err_fb_alloc;

    fb = &vfbdev->vgfb.base;

    vfbdev->helper.fb = fb;

    strcpy(info->fix.id, "virtiodrmfb");
    info->fbops = &virtio_gpufb_ops;
    info->pixmap.flags = FB_PIXMAP_SYSTEM;

    info->screen_buffer = obj->vmap;
    info->screen_size = obj->gem_base.size;
    drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
    drm_fb_helper_fill_var(info, &vfbdev->helper,
                   sizes->fb_width, sizes->fb_height);

    info->fix.mmio_start = 0;
    info->fix.mmio_len = 0;
    return 0;

err_fb_alloc:
    virtio_gpu_cmd_resource_inval_backing(vgdev, resid);
err_obj_attach:
err_obj_vmap:
    virtio_gpu_gem_free_object(&obj->gem_base);
    return ret;
}

static int virtio_gpu_fbdev_destroy(struct drm_device *dev,
                    struct virtio_gpu_fbdev *vgfbdev)
{
    struct virtio_gpu_framebuffer *vgfb = &vgfbdev->vgfb;

    drm_fb_helper_unregister_fbi(&vgfbdev->helper);

    if (vgfb->base.obj[0])
        vgfb->base.obj[0] = NULL;
    drm_fb_helper_fini(&vgfbdev->helper);
    drm_framebuffer_cleanup(&vgfb->base);

    return 0;
}
static const struct drm_fb_helper_funcs virtio_gpu_fb_helper_funcs = {
    .fb_probe = virtio_gpufb_create,
};

int virtio_gpu_fbdev_init(struct virtio_gpu_device *vgdev)
{
    struct virtio_gpu_fbdev *vgfbdev;
    int bpp_sel = 32; /* TODO: parameter from somewhere? */
    int ret;

    vgfbdev = kzalloc(sizeof(struct virtio_gpu_fbdev), GFP_KERNEL);
    if (!vgfbdev)
        return -ENOMEM;

    vgfbdev->vgdev = vgdev;
    vgdev->vgfbdev = vgfbdev;
    INIT_DELAYED_WORK(&vgfbdev->work, virtio_gpu_fb_dirty_work);

    drm_fb_helper_prepare(vgdev->ddev, &vgfbdev->helper,
                  &virtio_gpu_fb_helper_funcs);
    ret = drm_fb_helper_init(vgdev->ddev, &vgfbdev->helper,
                 VIRTIO_GPUFB_CONN_LIMIT);
    if (ret) {
        kfree(vgfbdev);
        return ret;
    }

    drm_fb_helper_single_add_all_connectors(&vgfbdev->helper);
    drm_fb_helper_initial_config(&vgfbdev->helper, bpp_sel);
    return 0;
}

void virtio_gpu_fbdev_fini(struct virtio_gpu_device *vgdev)
{
    if (!vgdev->vgfbdev)
        return;

    virtio_gpu_fbdev_destroy(vgdev->ddev, vgdev->vgfbdev);
    kfree(vgdev->vgfbdev);
    vgdev->vgfbdev = NULL;
}

@@ -24,6 +24,7 @@
 */

#include <drm/drmP.h>
#include <trace/events/dma_fence.h>
#include "virtgpu_drv.h"

static const char *virtio_get_driver_name(struct dma_fence *f)
@@ -36,20 +37,18 @@ static const char *virtio_get_timeline_name(struct dma_fence *f)
    return "controlq";
}

static bool virtio_signaled(struct dma_fence *f)
bool virtio_fence_signaled(struct dma_fence *f)
{
    struct virtio_gpu_fence *fence = to_virtio_fence(f);

    if (atomic64_read(&fence->drv->last_seq) >= fence->seq)
    if (atomic64_read(&fence->drv->last_seq) >= fence->f.seqno)
        return true;
    return false;
}

static void virtio_fence_value_str(struct dma_fence *f, char *str, int size)
{
    struct virtio_gpu_fence *fence = to_virtio_fence(f);

    snprintf(str, size, "%llu", fence->seq);
    snprintf(str, size, "%llu", (long long unsigned int) f->seqno);
}

static void virtio_timeline_value_str(struct dma_fence *f, char *str, int size)
@@ -62,34 +61,47 @@ static void virtio_timeline_value_str(struct dma_fence *f, char *str, int size)
static const struct dma_fence_ops virtio_fence_ops = {
    .get_driver_name = virtio_get_driver_name,
    .get_timeline_name = virtio_get_timeline_name,
    .signaled = virtio_signaled,
    .signaled = virtio_fence_signaled,
    .fence_value_str = virtio_fence_value_str,
    .timeline_value_str = virtio_timeline_value_str,
};

int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev)
{
    struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
    struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
                         GFP_KERNEL);
    if (!fence)
        return fence;

    fence->drv = drv;

    /* This only partially initializes the fence because the seqno is
     * unknown yet. The fence must not be used outside of the driver
     * until virtio_gpu_fence_emit is called.
     */
    dma_fence_init(&fence->f, &virtio_fence_ops, &drv->lock, drv->context, 0);

    return fence;
}

void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
               struct virtio_gpu_ctrl_hdr *cmd_hdr,
               struct virtio_gpu_fence **fence)
               struct virtio_gpu_fence *fence)
{
    struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
    unsigned long irq_flags;

    *fence = kmalloc(sizeof(struct virtio_gpu_fence), GFP_ATOMIC);
    if ((*fence) == NULL)
        return -ENOMEM;

    spin_lock_irqsave(&drv->lock, irq_flags);
    (*fence)->drv = drv;
    (*fence)->seq = ++drv->sync_seq;
    dma_fence_init(&(*fence)->f, &virtio_fence_ops, &drv->lock,
               drv->context, (*fence)->seq);
    dma_fence_get(&(*fence)->f);
    list_add_tail(&(*fence)->node, &drv->fences);
    fence->f.seqno = ++drv->sync_seq;
    dma_fence_get(&fence->f);
    list_add_tail(&fence->node, &drv->fences);
    spin_unlock_irqrestore(&drv->lock, irq_flags);

    trace_dma_fence_emit(&fence->f);

    cmd_hdr->flags |= cpu_to_le32(VIRTIO_GPU_FLAG_FENCE);
    cmd_hdr->fence_id = cpu_to_le64((*fence)->seq);
    return 0;
    cmd_hdr->fence_id = cpu_to_le64(fence->f.seqno);
}
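
Callers now allocate the fence up front, where failure is easy to handle, and the seqno is stamped only when the command is queued. A hedged sketch of the two-phase pattern; submit_cmd_hdr() is a hypothetical stand-in for the real virtqueue helpers such as virtio_gpu_cmd_submit():

/* Sketch of the alloc-then-emit fence lifecycle after this change. */
static int queue_fenced_cmd(struct virtio_gpu_device *vgdev,
                            struct virtio_gpu_ctrl_hdr *hdr)
{
        struct virtio_gpu_fence *fence;

        fence = virtio_gpu_fence_alloc(vgdev);  /* may fail, GFP_KERNEL */
        if (!fence)
                return -ENOMEM;

        /* seqno is assigned only here, under the fence driver lock */
        virtio_gpu_fence_emit(vgdev, hdr, fence);
        submit_cmd_hdr(vgdev, hdr);             /* hypothetical helper */

        dma_fence_put(&fence->f);               /* drop creator's ref */
        return 0;
}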

void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
@@ -102,7 +114,7 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
    spin_lock_irqsave(&drv->lock, irq_flags);
    atomic64_set(&vgdev->fence_drv.last_seq, last_seq);
    list_for_each_entry_safe(fence, tmp, &drv->fences, node) {
        if (last_seq < fence->seq)
        if (last_seq < fence->f.seqno)
            continue;
        dma_fence_signal_locked(&fence->f);
        list_del(&fence->node);

@@ -34,15 +34,16 @@ void virtio_gpu_gem_free_object(struct drm_gem_object *gem_obj)
    virtio_gpu_object_unref(&obj);
}

struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
                          size_t size, bool kernel,
                          bool pinned)
struct virtio_gpu_object*
virtio_gpu_alloc_object(struct drm_device *dev,
            struct virtio_gpu_object_params *params,
            struct virtio_gpu_fence *fence)
{
    struct virtio_gpu_device *vgdev = dev->dev_private;
    struct virtio_gpu_object *obj;
    int ret;

    ret = virtio_gpu_object_create(vgdev, size, kernel, pinned, &obj);
    ret = virtio_gpu_object_create(vgdev, params, &obj, fence);
    if (ret)
        return ERR_PTR(ret);

@@ -51,7 +52,7 @@ struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,

int virtio_gpu_gem_create(struct drm_file *file,
              struct drm_device *dev,
              uint64_t size,
              struct virtio_gpu_object_params *params,
              struct drm_gem_object **obj_p,
              uint32_t *handle_p)
{
@@ -59,7 +60,7 @@ int virtio_gpu_gem_create(struct drm_file *file,
    int ret;
    u32 handle;

    obj = virtio_gpu_alloc_object(dev, size, false, false);
    obj = virtio_gpu_alloc_object(dev, params, NULL);
    if (IS_ERR(obj))
        return PTR_ERR(obj);

@@ -82,35 +83,25 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
                struct drm_device *dev,
                struct drm_mode_create_dumb *args)
{
    struct virtio_gpu_device *vgdev = dev->dev_private;
    struct drm_gem_object *gobj;
    struct virtio_gpu_object *obj;
    struct virtio_gpu_object_params params = { 0 };
    int ret;
    uint32_t pitch;
    uint32_t resid;
    uint32_t format;

    pitch = args->width * ((args->bpp + 1) / 8);
    args->size = pitch * args->height;
    args->size = ALIGN(args->size, PAGE_SIZE);

    ret = virtio_gpu_gem_create(file_priv, dev, args->size, &gobj,
    params.format = virtio_gpu_translate_format(DRM_FORMAT_HOST_XRGB8888);
    params.width = args->width;
    params.height = args->height;
    params.size = args->size;
    params.dumb = true;
    ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
                    &args->handle);
    if (ret)
        goto fail;

    format = virtio_gpu_translate_format(DRM_FORMAT_XRGB8888);
    virtio_gpu_resource_id_get(vgdev, &resid);
    virtio_gpu_cmd_create_resource(vgdev, resid, format,
                       args->width, args->height);

    /* attach the object to the resource */
    obj = gem_to_virtio_gpu_obj(gobj);
    ret = virtio_gpu_object_attach(vgdev, obj, resid, NULL);
    if (ret)
        goto fail;

    obj->dumb = true;
    args->pitch = pitch;
    return ret;

@@ -28,6 +28,7 @@
#include <drm/drmP.h>
#include <drm/virtgpu_drm.h>
#include <drm/ttm/ttm_execbuf_util.h>
#include <linux/sync_file.h>

#include "virtgpu_drv.h"

@@ -53,8 +54,8 @@ static int virtio_gpu_map_ioctl(struct drm_device *dev, void *data,
                &virtio_gpu_map->offset);
}

static int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
                       struct list_head *head)
int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
                    struct list_head *head)
{
    struct ttm_operation_ctx ctx = { false, false };
    struct ttm_validate_buffer *buf;
@@ -78,7 +79,7 @@ static int virtio_gpu_object_list_validate(struct ww_acquire_ctx *ticket,
    return 0;
}

static void virtio_gpu_unref_list(struct list_head *head)
void virtio_gpu_unref_list(struct list_head *head)
{
    struct ttm_validate_buffer *buf;
    struct ttm_buffer_object *bo;
@@ -105,7 +106,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
    struct virtio_gpu_device *vgdev = dev->dev_private;
    struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
    struct drm_gem_object *gobj;
    struct virtio_gpu_fence *fence;
    struct virtio_gpu_fence *out_fence;
    struct virtio_gpu_object *qobj;
    int ret;
    uint32_t *bo_handles = NULL;
@@ -114,11 +115,46 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
    struct ttm_validate_buffer *buflist = NULL;
    int i;
    struct ww_acquire_ctx ticket;
    struct sync_file *sync_file;
    int in_fence_fd = exbuf->fence_fd;
    int out_fence_fd = -1;
    void *buf;

    if (vgdev->has_virgl_3d == false)
        return -ENOSYS;

    if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
        return -EINVAL;

    exbuf->fence_fd = -1;

    if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
        struct dma_fence *in_fence;

        in_fence = sync_file_get_fence(in_fence_fd);

        if (!in_fence)
            return -EINVAL;

        /*
         * Wait if the fence is from a foreign context, or if the fence
         * array contains any fence from a foreign context.
         */
        ret = 0;
        if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
            ret = dma_fence_wait(in_fence, true);

        dma_fence_put(in_fence);
        if (ret)
            return ret;
    }

    if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
        out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
        if (out_fence_fd < 0)
            return out_fence_fd;
    }

    INIT_LIST_HEAD(&validate_list);
    if (exbuf->num_bo_handles) {

@@ -128,26 +164,22 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
            sizeof(struct ttm_validate_buffer),
            GFP_KERNEL | __GFP_ZERO);
        if (!bo_handles || !buflist) {
            kvfree(bo_handles);
            kvfree(buflist);
            return -ENOMEM;
            ret = -ENOMEM;
            goto out_unused_fd;
        }

        user_bo_handles = (void __user *)(uintptr_t)exbuf->bo_handles;
        user_bo_handles = u64_to_user_ptr(exbuf->bo_handles);
        if (copy_from_user(bo_handles, user_bo_handles,
                   exbuf->num_bo_handles * sizeof(uint32_t))) {
            ret = -EFAULT;
            kvfree(bo_handles);
            kvfree(buflist);
            return ret;
            goto out_unused_fd;
        }

        for (i = 0; i < exbuf->num_bo_handles; i++) {
            gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
            if (!gobj) {
                kvfree(bo_handles);
                kvfree(buflist);
                return -ENOENT;
                ret = -ENOENT;
                goto out_unused_fd;
            }

            qobj = gem_to_virtio_gpu_obj(gobj);
@@ -156,34 +188,60 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
            list_add(&buflist[i].head, &validate_list);
        }
        kvfree(bo_handles);
        bo_handles = NULL;
    }

    ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
    if (ret)
        goto out_free;

    buf = memdup_user((void __user *)(uintptr_t)exbuf->command,
              exbuf->size);
    buf = memdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
    if (IS_ERR(buf)) {
        ret = PTR_ERR(buf);
        goto out_unresv;
    }
    virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
                  vfpriv->ctx_id, &fence);

    ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
    out_fence = virtio_gpu_fence_alloc(vgdev);
    if (!out_fence) {
        ret = -ENOMEM;
        goto out_memdup;
    }

    if (out_fence_fd >= 0) {
        sync_file = sync_file_create(&out_fence->f);
        if (!sync_file) {
            dma_fence_put(&out_fence->f);
            ret = -ENOMEM;
            goto out_memdup;
        }

        exbuf->fence_fd = out_fence_fd;
        fd_install(out_fence_fd, sync_file->file);
    }

    virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
                  vfpriv->ctx_id, out_fence);

    ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);

    /* fence the command bo */
    virtio_gpu_unref_list(&validate_list);
    kvfree(buflist);
    dma_fence_put(&fence->f);
    return 0;

out_memdup:
    kfree(buf);
out_unresv:
    ttm_eu_backoff_reservation(&ticket, &validate_list);
out_free:
    virtio_gpu_unref_list(&validate_list);
out_unused_fd:
    kvfree(bo_handles);
    kvfree(buflist);

    if (out_fence_fd >= 0)
        put_unused_fd(out_fence_fd);

    return ret;
}
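
From userspace the new flags make a submission an explicit synchronization point: pass a sync_file fd in, get one back out. An illustrative sketch against the virtgpu UAPI, with the command buffer and BO table assumed to be prepared elsewhere:

/* Illustrative only: submit with an optional in-fence and request an
 * out-fence. cmd/cmd_size are assumed to be set up already. */
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>

static int submit_with_fences(int drm_fd, void *cmd, uint32_t cmd_size,
                              int in_fence_fd)
{
        struct drm_virtgpu_execbuffer eb;

        memset(&eb, 0, sizeof(eb));
        eb.command = (uintptr_t)cmd;
        eb.size = cmd_size;
        eb.flags = VIRTGPU_EXECBUF_FENCE_FD_OUT;
        if (in_fence_fd >= 0) {
                eb.flags |= VIRTGPU_EXECBUF_FENCE_FD_IN;
                eb.fence_fd = in_fence_fd;
        }

        if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb))
                return -1;

        /* eb.fence_fd now holds a sync_file fd signaled on completion */
        return eb.fence_fd;
}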

@@ -204,10 +262,9 @@ static int virtio_gpu_getparam_ioctl(struct drm_device *dev, void *data,
    default:
        return -EINVAL;
    }
    if (copy_to_user((void __user *)(unsigned long)param->value,
             &value, sizeof(int))) {
    if (copy_to_user(u64_to_user_ptr(param->value), &value, sizeof(int)))
        return -EFAULT;
    }

    return 0;
}
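
u64_to_user_ptr() is simply the canonical spelling of the cast chain it replaces; a minimal sketch of the pattern:

/* Sketch: unpacking a u64 UAPI field into a user pointer. The macro
 * (from <linux/kernel.h>) expands to (void __user *)(uintptr_t)x and
 * avoids the hand-rolled double casts seen in the removed lines. */
static int copy_result_to_user(u64 uaddr, const int *value)
{
        int __user *out = u64_to_user_ptr(uaddr);

        return copy_to_user(out, value, sizeof(*value)) ? -EFAULT : 0;
}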

@@ -216,17 +273,12 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
{
    struct virtio_gpu_device *vgdev = dev->dev_private;
    struct drm_virtgpu_resource_create *rc = data;
    struct virtio_gpu_fence *fence;
    int ret;
    uint32_t res_id;
    struct virtio_gpu_object *qobj;
    struct drm_gem_object *obj;
    uint32_t handle = 0;
    uint32_t size;
    struct list_head validate_list;
    struct ttm_validate_buffer mainbuf;
    struct virtio_gpu_fence *fence = NULL;
    struct ww_acquire_ctx ticket;
    struct virtio_gpu_resource_create_3d rc_3d;
    struct virtio_gpu_object_params params = { 0 };

    if (vgdev->has_virgl_3d == false) {
        if (rc->depth > 1)
@@ -241,94 +293,43 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
        return -EINVAL;
    }

    INIT_LIST_HEAD(&validate_list);
    memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));

    virtio_gpu_resource_id_get(vgdev, &res_id);

    size = rc->size;

    params.format = rc->format;
    params.width = rc->width;
    params.height = rc->height;
    params.size = rc->size;
    if (vgdev->has_virgl_3d) {
        params.virgl = true;
        params.target = rc->target;
        params.bind = rc->bind;
        params.depth = rc->depth;
        params.array_size = rc->array_size;
        params.last_level = rc->last_level;
        params.nr_samples = rc->nr_samples;
        params.flags = rc->flags;
    }
    /* allocate a single page size object */
    if (size == 0)
        size = PAGE_SIZE;
    if (params.size == 0)
        params.size = PAGE_SIZE;

    qobj = virtio_gpu_alloc_object(dev, size, false, false);
    if (IS_ERR(qobj)) {
        ret = PTR_ERR(qobj);
        goto fail_id;
    }
    fence = virtio_gpu_fence_alloc(vgdev);
    if (!fence)
        return -ENOMEM;
    qobj = virtio_gpu_alloc_object(dev, &params, fence);
    dma_fence_put(&fence->f);
    if (IS_ERR(qobj))
        return PTR_ERR(qobj);
    obj = &qobj->gem_base;

    if (!vgdev->has_virgl_3d) {
        virtio_gpu_cmd_create_resource(vgdev, res_id, rc->format,
                           rc->width, rc->height);

        ret = virtio_gpu_object_attach(vgdev, qobj, res_id, NULL);
    } else {
        /* use a gem reference since unref list undoes them */
        drm_gem_object_get(&qobj->gem_base);
        mainbuf.bo = &qobj->tbo;
        list_add(&mainbuf.head, &validate_list);

        ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
        if (ret) {
            DRM_DEBUG("failed to validate\n");
            goto fail_unref;
        }

        rc_3d.resource_id = cpu_to_le32(res_id);
        rc_3d.target = cpu_to_le32(rc->target);
        rc_3d.format = cpu_to_le32(rc->format);
        rc_3d.bind = cpu_to_le32(rc->bind);
        rc_3d.width = cpu_to_le32(rc->width);
        rc_3d.height = cpu_to_le32(rc->height);
        rc_3d.depth = cpu_to_le32(rc->depth);
        rc_3d.array_size = cpu_to_le32(rc->array_size);
        rc_3d.last_level = cpu_to_le32(rc->last_level);
        rc_3d.nr_samples = cpu_to_le32(rc->nr_samples);
        rc_3d.flags = cpu_to_le32(rc->flags);

        virtio_gpu_cmd_resource_create_3d(vgdev, &rc_3d, NULL);
        ret = virtio_gpu_object_attach(vgdev, qobj, res_id, &fence);
        if (ret) {
            ttm_eu_backoff_reservation(&ticket, &validate_list);
            goto fail_unref;
        }
        ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
    }

    qobj->hw_res_handle = res_id;

    ret = drm_gem_handle_create(file_priv, obj, &handle);
    if (ret) {

        drm_gem_object_release(obj);
        if (vgdev->has_virgl_3d) {
            virtio_gpu_unref_list(&validate_list);
            dma_fence_put(&fence->f);
        }
        return ret;
    }
    drm_gem_object_put_unlocked(obj);

    rc->res_handle = res_id; /* similar to a VM address */
    rc->res_handle = qobj->hw_res_handle; /* similar to a VM address */
    rc->bo_handle = handle;

    if (vgdev->has_virgl_3d) {
        virtio_gpu_unref_list(&validate_list);
        dma_fence_put(&fence->f);
    }
    return 0;
fail_unref:
    if (vgdev->has_virgl_3d) {
        virtio_gpu_unref_list(&validate_list);
        dma_fence_put(&fence->f);
    }
//fail_obj:
//  drm_gem_object_handle_unreference_unlocked(obj);
fail_id:
    virtio_gpu_resource_id_put(vgdev, res_id);
    return ret;
}
|
||||
|
||||
static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
|
||||
@ -383,10 +384,16 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
|
||||
goto out_unres;
|
||||
|
||||
convert_to_hw_box(&box, &args->box);
|
||||
|
||||
fence = virtio_gpu_fence_alloc(vgdev);
|
||||
if (!fence) {
|
||||
ret = -ENOMEM;
|
||||
goto out_unres;
|
||||
}
|
||||
virtio_gpu_cmd_transfer_from_host_3d
|
||||
(vgdev, qobj->hw_res_handle,
|
||||
vfpriv->ctx_id, offset, args->level,
|
||||
&box, &fence);
|
||||
&box, fence);
|
||||
reservation_object_add_excl_fence(qobj->tbo.resv,
|
||||
&fence->f);
|
||||
|
||||
@ -429,13 +436,18 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
|
||||
convert_to_hw_box(&box, &args->box);
|
||||
if (!vgdev->has_virgl_3d) {
|
||||
virtio_gpu_cmd_transfer_to_host_2d
|
||||
(vgdev, qobj->hw_res_handle, offset,
|
||||
(vgdev, qobj, offset,
|
||||
box.w, box.h, box.x, box.y, NULL);
|
||||
} else {
|
||||
fence = virtio_gpu_fence_alloc(vgdev);
|
||||
if (!fence) {
|
||||
ret = -ENOMEM;
|
||||
goto out_unres;
|
||||
}
|
||||
virtio_gpu_cmd_transfer_to_host_3d
|
||||
(vgdev, qobj->hw_res_handle,
|
||||
(vgdev, qobj,
|
||||
vfpriv ? vfpriv->ctx_id : 0, offset,
|
||||
args->level, &box, &fence);
|
||||
args->level, &box, fence);
|
||||
reservation_object_add_excl_fence(qobj->tbo.resv,
|
||||
&fence->f);
|
||||
dma_fence_put(&fence->f);
|
||||
@ -512,7 +524,6 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
|
||||
list_for_each_entry(cache_ent, &vgdev->cap_cache, head) {
|
||||
if (cache_ent->id == args->cap_set_id &&
|
||||
cache_ent->version == args->cap_set_ver) {
|
||||
ptr = cache_ent->caps_cache;
|
||||
spin_unlock(&vgdev->display_info_lock);
|
||||
goto copy_exit;
|
||||
}
|
||||
@ -523,6 +534,7 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
|
||||
virtio_gpu_cmd_get_capset(vgdev, found_valid, args->cap_set_ver,
|
||||
&cache_ent);
|
||||
|
||||
copy_exit:
|
||||
ret = wait_event_timeout(vgdev->resp_wq,
|
||||
atomic_read(&cache_ent->is_valid), 5 * HZ);
|
||||
if (!ret)
|
||||
@ -533,8 +545,7 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
|
||||
|
||||
ptr = cache_ent->caps_cache;
|
||||
|
||||
copy_exit:
|
||||
if (copy_to_user((void __user *)(unsigned long)args->addr, ptr, size))
|
||||
if (copy_to_user(u64_to_user_ptr(args->addr), ptr, size))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
@ -542,34 +553,34 @@ copy_exit:
|
||||
|
||||
struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_EXECBUFFER, virtio_gpu_execbuffer_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_GETPARAM, virtio_gpu_getparam_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_CREATE,
|
||||
virtio_gpu_resource_create_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_RESOURCE_INFO, virtio_gpu_resource_info_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
/* make transfer async to the main ring? - no sure, can we
|
||||
* thread these in the underlying GL
|
||||
*/
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_FROM_HOST,
|
||||
virtio_gpu_transfer_from_host_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_TRANSFER_TO_HOST,
|
||||
virtio_gpu_transfer_to_host_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_WAIT, virtio_gpu_wait_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
|
||||
DRM_IOCTL_DEF_DRV(VIRTGPU_GET_CAPS, virtio_gpu_get_caps_ioctl,
|
||||
DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
|
||||
DRM_AUTH | DRM_RENDER_ALLOW),
|
||||
};
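
Note on the get_caps copy-out change above: u64_to_user_ptr() replaces the open-coded double cast. As a sketch of what that helper expands to (its definition lives in include/linux/kernel.h; shown here only for reference, not part of this diff):

    #define u64_to_user_ptr(x) (            \
    {                                       \
            typecheck(u64, (x));            \
            (void __user *)(uintptr_t)(x);  \
    }                                       \
    )

The typecheck() guard rejects callers that pass a narrower integer, which the old (void __user *)(unsigned long) cast silently accepted.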

@@ -28,11 +28,6 @@
#include <drm/drmP.h>
#include "virtgpu_drv.h"

static int virtio_gpu_fbdev = 1;

MODULE_PARM_DESC(fbdev, "Disable/Enable framebuffer device & console");
module_param_named(fbdev, virtio_gpu_fbdev, int, 0400);

static void virtio_gpu_config_changed_work_func(struct work_struct *work)
{
struct virtio_gpu_device *vgdev =
@@ -44,6 +39,8 @@ static void virtio_gpu_config_changed_work_func(struct work_struct *work)
virtio_cread(vgdev->vdev, struct virtio_gpu_config,
events_read, &events_read);
if (events_read & VIRTIO_GPU_EVENT_DISPLAY) {
if (vgdev->has_edid)
virtio_gpu_cmd_get_edids(vgdev);
virtio_gpu_cmd_get_display_info(vgdev);
drm_helper_hpd_irq_event(vgdev->ddev);
events_clear |= VIRTIO_GPU_EVENT_DISPLAY;
@@ -52,39 +49,23 @@ static void virtio_gpu_config_changed_work_func(struct work_struct *work)
events_clear, &events_clear);
}

static void virtio_gpu_ctx_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid)
static int virtio_gpu_context_create(struct virtio_gpu_device *vgdev,
uint32_t nlen, const char *name)
{
int handle;
int handle = ida_alloc(&vgdev->ctx_id_ida, GFP_KERNEL);

idr_preload(GFP_KERNEL);
spin_lock(&vgdev->ctx_id_idr_lock);
handle = idr_alloc(&vgdev->ctx_id_idr, NULL, 1, 0, 0);
spin_unlock(&vgdev->ctx_id_idr_lock);
idr_preload_end();
*resid = handle;
}

static void virtio_gpu_ctx_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
{
spin_lock(&vgdev->ctx_id_idr_lock);
idr_remove(&vgdev->ctx_id_idr, id);
spin_unlock(&vgdev->ctx_id_idr_lock);
}

static void virtio_gpu_context_create(struct virtio_gpu_device *vgdev,
uint32_t nlen, const char *name,
uint32_t *ctx_id)
{
virtio_gpu_ctx_id_get(vgdev, ctx_id);
virtio_gpu_cmd_context_create(vgdev, *ctx_id, nlen, name);
if (handle < 0)
return handle;
handle += 1;
virtio_gpu_cmd_context_create(vgdev, handle, nlen, name);
return handle;
}

static void virtio_gpu_context_destroy(struct virtio_gpu_device *vgdev,
uint32_t ctx_id)
{
virtio_gpu_cmd_context_destroy(vgdev, ctx_id);
virtio_gpu_ctx_id_put(vgdev, ctx_id);
ida_free(&vgdev->ctx_id_ida, ctx_id - 1);
}
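
The context-id conversion above moves from an IDR guarded by a spinlock to a plain IDA. ida_alloc() hands back the smallest free id starting at 0, so the driver biases the handle by one to keep 0 meaning "no context", and ida_free() undoes the bias. A minimal sketch of the same pattern (hypothetical names, not part of this diff):

    /* Allocate a 1-based handle from an IDA and release it again. */
    static int example_handle_get(struct ida *ida)
    {
            int id = ida_alloc(ida, GFP_KERNEL);    /* smallest free id, >= 0 */

            if (id < 0)
                    return id;                      /* e.g. -ENOMEM */
            return id + 1;                          /* reserve 0 as "none" */
    }

    static void example_handle_put(struct ida *ida, int handle)
    {
            ida_free(ida, handle - 1);              /* undo the +1 bias */
    }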

static void virtio_gpu_init_vq(struct virtio_gpu_queue *vgvq,
@@ -151,10 +132,8 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
vgdev->dev = dev->dev;

spin_lock_init(&vgdev->display_info_lock);
spin_lock_init(&vgdev->ctx_id_idr_lock);
idr_init(&vgdev->ctx_id_idr);
spin_lock_init(&vgdev->resource_idr_lock);
idr_init(&vgdev->resource_idr);
ida_init(&vgdev->ctx_id_ida);
ida_init(&vgdev->resource_ida);
init_waitqueue_head(&vgdev->resp_wq);
virtio_gpu_init_vq(&vgdev->ctrlq, virtio_gpu_dequeue_ctrl_func);
virtio_gpu_init_vq(&vgdev->cursorq, virtio_gpu_dequeue_cursor_func);
@@ -174,6 +153,10 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
#else
DRM_INFO("virgl 3d acceleration not supported by guest\n");
#endif
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_EDID)) {
vgdev->has_edid = true;
DRM_INFO("EDID support available.\n");
}

ret = virtio_find_vqs(vgdev->vdev, 2, vqs, callbacks, names, NULL);
if (ret) {
@@ -219,12 +202,11 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)

if (num_capsets)
virtio_gpu_get_capsets(vgdev, num_capsets);
if (vgdev->has_edid)
virtio_gpu_cmd_get_edids(vgdev);
virtio_gpu_cmd_get_display_info(vgdev);
wait_event_timeout(vgdev->resp_wq, !vgdev->display_info_pending,
5 * HZ);
if (virtio_gpu_fbdev)
virtio_gpu_fbdev_init(vgdev);

return 0;

err_modeset:
@@ -257,6 +239,7 @@ void virtio_gpu_driver_unload(struct drm_device *dev)
flush_work(&vgdev->ctrlq.dequeue_work);
flush_work(&vgdev->cursorq.dequeue_work);
flush_work(&vgdev->config_changed_work);
vgdev->vdev->config->reset(vgdev->vdev);
vgdev->vdev->config->del_vqs(vgdev->vdev);

virtio_gpu_modeset_fini(vgdev);
@@ -271,7 +254,7 @@ int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file)
{
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_fpriv *vfpriv;
uint32_t id;
int id;
char dbgname[TASK_COMM_LEN];

/* can't create contexts without 3d renderer */
@@ -284,7 +267,11 @@ int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file)
return -ENOMEM;

get_task_comm(dbgname, current);
virtio_gpu_context_create(vgdev, strlen(dbgname), dbgname, &id);
id = virtio_gpu_context_create(vgdev, strlen(dbgname), dbgname);
if (id < 0) {
kfree(vfpriv);
return id;
}

vfpriv->ctx_id = id;
file->driver_priv = vfpriv;

@@ -23,8 +23,40 @@
* WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/

#include <drm/ttm/ttm_execbuf_util.h>

#include "virtgpu_drv.h"

static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid)
{
#if 0
int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);

if (handle < 0)
return handle;
#else
static int handle;

/*
* FIXME: dirty hack to avoid re-using IDs, virglrenderer
* can't deal with that. Needs fixing in virglrenderer, also
* should figure a better way to handle that in the guest.
*/
handle++;
#endif

*resid = handle + 1;
return 0;
}

static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
{
#if 0
ida_free(&vgdev->resource_ida, id - 1);
#endif
}

static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct virtio_gpu_object *bo;
@@ -33,88 +65,130 @@ static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
bo = container_of(tbo, struct virtio_gpu_object, tbo);
vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;

if (bo->hw_res_handle)
if (bo->created)
virtio_gpu_cmd_unref_resource(vgdev, bo->hw_res_handle);
if (bo->pages)
virtio_gpu_object_free_sg_table(bo);
drm_gem_object_release(&bo->gem_base);
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
kfree(bo);
}

static void virtio_gpu_init_ttm_placement(struct virtio_gpu_object *vgbo,
bool pinned)
static void virtio_gpu_init_ttm_placement(struct virtio_gpu_object *vgbo)
{
u32 c = 1;
u32 pflag = pinned ? TTM_PL_FLAG_NO_EVICT : 0;

vgbo->placement.placement = &vgbo->placement_code;
vgbo->placement.busy_placement = &vgbo->placement_code;
vgbo->placement_code.fpfn = 0;
vgbo->placement_code.lpfn = 0;
vgbo->placement_code.flags =
TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT | pflag;
TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT |
TTM_PL_FLAG_NO_EVICT;
vgbo->placement.num_placement = c;
vgbo->placement.num_busy_placement = c;

}

int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
unsigned long size, bool kernel, bool pinned,
struct virtio_gpu_object **bo_ptr)
struct virtio_gpu_object_params *params,
struct virtio_gpu_object **bo_ptr,
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_object *bo;
enum ttm_bo_type type;
size_t acc_size;
int ret;

if (kernel)
type = ttm_bo_type_kernel;
else
type = ttm_bo_type_device;
*bo_ptr = NULL;

acc_size = ttm_bo_dma_acc_size(&vgdev->mman.bdev, size,
acc_size = ttm_bo_dma_acc_size(&vgdev->mman.bdev, params->size,
sizeof(struct virtio_gpu_object));

bo = kzalloc(sizeof(struct virtio_gpu_object), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
size = roundup(size, PAGE_SIZE);
ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, size);
if (ret != 0) {
ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
if (ret < 0) {
kfree(bo);
return ret;
}
bo->dumb = false;
virtio_gpu_init_ttm_placement(bo, pinned);
params->size = roundup(params->size, PAGE_SIZE);
ret = drm_gem_object_init(vgdev->ddev, &bo->gem_base, params->size);
if (ret != 0) {
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
kfree(bo);
return ret;
}
bo->dumb = params->dumb;

ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, size, type,
&bo->placement, 0, !kernel, acc_size,
NULL, NULL, &virtio_gpu_ttm_bo_destroy);
if (params->virgl) {
virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, fence);
} else {
virtio_gpu_cmd_create_resource(vgdev, bo, params, fence);
}

virtio_gpu_init_ttm_placement(bo);
ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
ttm_bo_type_device, &bo->placement, 0,
true, acc_size, NULL, NULL,
&virtio_gpu_ttm_bo_destroy);
/* ttm_bo_init failure will call the destroy */
if (ret != 0)
return ret;

if (fence) {
struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
struct list_head validate_list;
struct ttm_validate_buffer mainbuf;
struct ww_acquire_ctx ticket;
unsigned long irq_flags;
bool signaled;

INIT_LIST_HEAD(&validate_list);
memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));

/* use a gem reference since unref list undoes them */
drm_gem_object_get(&bo->gem_base);
mainbuf.bo = &bo->tbo;
list_add(&mainbuf.head, &validate_list);

ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
if (ret == 0) {
spin_lock_irqsave(&drv->lock, irq_flags);
signaled = virtio_fence_signaled(&fence->f);
if (!signaled)
/* virtio create command still in flight */
ttm_eu_fence_buffer_objects(&ticket, &validate_list,
&fence->f);
spin_unlock_irqrestore(&drv->lock, irq_flags);
if (signaled)
/* virtio create command finished */
ttm_eu_backoff_reservation(&ticket, &validate_list);
}
virtio_gpu_unref_list(&validate_list);
}

*bo_ptr = bo;
return 0;
}

int virtio_gpu_object_kmap(struct virtio_gpu_object *bo, void **ptr)
void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo)
{
bo->vmap = NULL;
ttm_bo_kunmap(&bo->kmap);
}

int virtio_gpu_object_kmap(struct virtio_gpu_object *bo)
{
bool is_iomem;
int r;

if (bo->vmap) {
if (ptr)
*ptr = bo->vmap;
return 0;
}
WARN_ON(bo->vmap);

r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
if (r)
return r;
bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
if (ptr)
*ptr = bo->vmap;
return 0;
}

@@ -152,13 +152,13 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
if (WARN_ON(!output))
return;

if (plane->state->fb) {
if (plane->state->fb && output->enabled) {
vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
handle = bo->hw_res_handle;
if (bo->dumb) {
virtio_gpu_cmd_transfer_to_host_2d
(vgdev, handle, 0,
(vgdev, bo, 0,
cpu_to_le32(plane->state->src_w >> 16),
cpu_to_le32(plane->state->src_h >> 16),
cpu_to_le32(plane->state->src_x >> 16),
@@ -180,11 +180,49 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
plane->state->src_h >> 16,
plane->state->src_x >> 16,
plane->state->src_y >> 16);
virtio_gpu_cmd_resource_flush(vgdev, handle,
plane->state->src_x >> 16,
plane->state->src_y >> 16,
plane->state->src_w >> 16,
plane->state->src_h >> 16);
if (handle)
virtio_gpu_cmd_resource_flush(vgdev, handle,
plane->state->src_x >> 16,
plane->state->src_y >> 16,
plane->state->src_w >> 16,
plane->state->src_h >> 16);
}

static int virtio_gpu_cursor_prepare_fb(struct drm_plane *plane,
struct drm_plane_state *new_state)
{
struct drm_device *dev = plane->dev;
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_object *bo;

if (!new_state->fb)
return 0;

vgfb = to_virtio_gpu_framebuffer(new_state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
if (bo && bo->dumb && (plane->state->fb != new_state->fb)) {
vgfb->fence = virtio_gpu_fence_alloc(vgdev);
if (!vgfb->fence)
return -ENOMEM;
}

return 0;
}

static void virtio_gpu_cursor_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
struct virtio_gpu_framebuffer *vgfb;

if (!plane->state->fb)
return;

vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
if (vgfb->fence) {
dma_fence_put(&vgfb->fence->f);
vgfb->fence = NULL;
}
}

static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
@@ -194,7 +232,6 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_output *output = NULL;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_fence *fence = NULL;
struct virtio_gpu_object *bo = NULL;
uint32_t handle;
int ret = 0;
@@ -217,16 +254,16 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
if (bo && bo->dumb && (plane->state->fb != old_state->fb)) {
/* new cursor -- update & wait */
virtio_gpu_cmd_transfer_to_host_2d
(vgdev, handle, 0,
(vgdev, bo, 0,
cpu_to_le32(plane->state->crtc_w),
cpu_to_le32(plane->state->crtc_h),
0, 0, &fence);
0, 0, vgfb->fence);
ret = virtio_gpu_object_reserve(bo, false);
if (!ret) {
reservation_object_add_excl_fence(bo->tbo.resv,
&fence->f);
dma_fence_put(&fence->f);
fence = NULL;
&vgfb->fence->f);
dma_fence_put(&vgfb->fence->f);
vgfb->fence = NULL;
virtio_gpu_object_unreserve(bo);
virtio_gpu_object_wait(bo, false);
}
@@ -268,6 +305,8 @@ static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
};

static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
.prepare_fb = virtio_gpu_cursor_prepare_fb,
.cleanup_fb = virtio_gpu_cursor_cleanup_fb,
.atomic_check = virtio_gpu_plane_atomic_check,
.atomic_update = virtio_gpu_cursor_plane_update,
};

@@ -28,21 +28,16 @@
* device that might share buffers with virtgpu
*/

int virtgpu_gem_prime_pin(struct drm_gem_object *obj)
{
WARN_ONCE(1, "not implemented");
return -ENODEV;
}

void virtgpu_gem_prime_unpin(struct drm_gem_object *obj)
{
WARN_ONCE(1, "not implemented");
}

struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
WARN_ONCE(1, "not implemented");
return ERR_PTR(-ENODEV);
struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);

if (!bo->tbo.ttm->pages || !bo->tbo.ttm->num_pages)
/* should not happen */
return ERR_PTR(-EINVAL);

return drm_prime_pages_to_sg(bo->tbo.ttm->pages,
bo->tbo.ttm->num_pages);
}

struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
@@ -55,17 +50,25 @@ struct drm_gem_object *virtgpu_gem_prime_import_sg_table(

void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj)
{
WARN_ONCE(1, "not implemented");
return ERR_PTR(-ENODEV);
struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
int ret;

ret = virtio_gpu_object_kmap(bo);
if (ret)
return NULL;
return bo->vmap;
}

void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
{
WARN_ONCE(1, "not implemented");
virtio_gpu_object_kunmap(gem_to_virtio_gpu_obj(obj));
}

int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *area)
struct vm_area_struct *vma)
{
return -ENODEV;
struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);

bo->gem_base.vma_node.vm_node.start = bo->tbo.vma_node.vm_node.start;
return drm_gem_prime_mmap(obj, vma);
}
drivers/gpu/drm/virtio/virtgpu_trace.h (new file, 52 lines)
@@ -0,0 +1,52 @@
/* SPDX-License-Identifier: GPL-2.0 */
#if !defined(_VIRTGPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
#define _VIRTGPU_TRACE_H_

#include <linux/tracepoint.h>

#undef TRACE_SYSTEM
#define TRACE_SYSTEM virtio_gpu
#define TRACE_INCLUDE_FILE virtgpu_trace

DECLARE_EVENT_CLASS(virtio_gpu_cmd,
TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
TP_ARGS(vq, hdr),
TP_STRUCT__entry(
__field(int, dev)
__field(unsigned int, vq)
__field(const char *, name)
__field(u32, type)
__field(u32, flags)
__field(u64, fence_id)
__field(u32, ctx_id)
),
TP_fast_assign(
__entry->dev = vq->vdev->index;
__entry->vq = vq->index;
__entry->name = vq->name;
__entry->type = le32_to_cpu(hdr->type);
__entry->flags = le32_to_cpu(hdr->flags);
__entry->fence_id = le64_to_cpu(hdr->fence_id);
__entry->ctx_id = le32_to_cpu(hdr->ctx_id);
),
TP_printk("vdev=%d vq=%u name=%s type=0x%x flags=0x%x fence_id=%llu ctx_id=%u",
__entry->dev, __entry->vq, __entry->name,
__entry->type, __entry->flags, __entry->fence_id,
__entry->ctx_id)
);

DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_queue,
TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
TP_ARGS(vq, hdr)
);

DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_response,
TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
TP_ARGS(vq, hdr)
);

#endif

#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/virtio
#include <trace/define_trace.h>
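
The header above follows the usual tracepoint pattern: DECLARE_EVENT_CLASS() defines the record layout and format string once, and each DEFINE_EVENT() stamps out a trace_<name>() helper sharing that layout. The virtgpu_vq.c hunks below call the generated helpers directly, e.g.:

    trace_virtio_gpu_cmd_queue(vq, (struct virtio_gpu_ctrl_hdr *)vbuf->buf);

At runtime the events should appear under the virtio_gpu system in tracefs (assuming the standard debugfs mount), e.g. /sys/kernel/debug/tracing/events/virtio_gpu/.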

drivers/gpu/drm/virtio/virtgpu_trace_points.c (new file, 5 lines)
@@ -0,0 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
#include "virtgpu_drv.h"

#define CREATE_TRACE_POINTS
#include "virtgpu_trace.h"
@@ -106,29 +106,6 @@ static void virtio_gpu_ttm_global_fini(struct virtio_gpu_device *vgdev)
}
}

#if 0
/*
* Hmm, seems to not do anything useful. Leftover debug hack?
* Something like printing pagefaults to kernel log?
*/
static struct vm_operations_struct virtio_gpu_ttm_vm_ops;
static const struct vm_operations_struct *ttm_vm_ops;

static int virtio_gpu_ttm_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *bo;
struct virtio_gpu_device *vgdev;
int r;

bo = (struct ttm_buffer_object *)vmf->vma->vm_private_data;
if (bo == NULL)
return VM_FAULT_NOPAGE;
vgdev = virtio_gpu_get_vgdev(bo->bdev);
r = ttm_vm_ops->fault(vmf);
return r;
}
#endif

int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *file_priv;
@@ -143,19 +120,8 @@ int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma)
return -EINVAL;
}
r = ttm_bo_mmap(filp, vma, &vgdev->mman.bdev);
#if 0
if (unlikely(r != 0))
return r;
if (unlikely(ttm_vm_ops == NULL)) {
ttm_vm_ops = vma->vm_ops;
virtio_gpu_ttm_vm_ops = *ttm_vm_ops;
virtio_gpu_ttm_vm_ops.fault = &virtio_gpu_ttm_fault;
}
vma->vm_ops = &virtio_gpu_ttm_vm_ops;
return 0;
#else

return r;
#endif
}

static int virtio_gpu_invalidate_caches(struct ttm_bo_device *bdev,
@@ -206,10 +172,6 @@ static const struct ttm_mem_type_manager_func virtio_gpu_bo_manager_func = {
static int virtio_gpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
struct ttm_mem_type_manager *man)
{
struct virtio_gpu_device *vgdev;

vgdev = virtio_gpu_get_vgdev(bdev);

switch (type) {
case TTM_PL_SYSTEM:
/* System memory */
@@ -284,42 +246,45 @@ static void virtio_gpu_ttm_io_mem_free(struct ttm_bo_device *bdev,
*/
struct virtio_gpu_ttm_tt {
struct ttm_dma_tt ttm;
struct virtio_gpu_device *vgdev;
u64 offset;
struct virtio_gpu_object *obj;
};

static int virtio_gpu_ttm_backend_bind(struct ttm_tt *ttm,
struct ttm_mem_reg *bo_mem)
static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
struct ttm_mem_reg *bo_mem)
{
struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
struct virtio_gpu_ttm_tt *gtt =
container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
struct virtio_gpu_device *vgdev =
virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);

gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT);
if (!ttm->num_pages)
WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
ttm->num_pages, bo_mem, ttm);

/* Not implemented */
virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
return 0;
}

static int virtio_gpu_ttm_backend_unbind(struct ttm_tt *ttm)
static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
{
/* Not implemented */
struct virtio_gpu_ttm_tt *gtt =
container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
struct virtio_gpu_device *vgdev =
virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);

virtio_gpu_object_detach(vgdev, gtt->obj);
return 0;
}

static void virtio_gpu_ttm_backend_destroy(struct ttm_tt *ttm)
static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm)
{
struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
struct virtio_gpu_ttm_tt *gtt =
container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);

ttm_dma_tt_fini(&gtt->ttm);
kfree(gtt);
}

static struct ttm_backend_func virtio_gpu_backend_func = {
.bind = &virtio_gpu_ttm_backend_bind,
.unbind = &virtio_gpu_ttm_backend_unbind,
.destroy = &virtio_gpu_ttm_backend_destroy,
static struct ttm_backend_func virtio_gpu_tt_func = {
.bind = &virtio_gpu_ttm_tt_bind,
.unbind = &virtio_gpu_ttm_tt_unbind,
.destroy = &virtio_gpu_ttm_tt_destroy,
};

static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
@@ -332,8 +297,8 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL);
if (gtt == NULL)
return NULL;
gtt->ttm.ttm.func = &virtio_gpu_backend_func;
gtt->vgdev = vgdev;
gtt->ttm.ttm.func = &virtio_gpu_tt_func;
gtt->obj = container_of(bo, struct virtio_gpu_object, tbo);
if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) {
kfree(gtt);
return NULL;
@@ -341,60 +306,11 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
return &gtt->ttm.ttm;
}

static void virtio_gpu_move_null(struct ttm_buffer_object *bo,
struct ttm_mem_reg *new_mem)
{
struct ttm_mem_reg *old_mem = &bo->mem;

BUG_ON(old_mem->mm_node != NULL);
*old_mem = *new_mem;
new_mem->mm_node = NULL;
}

static int virtio_gpu_bo_move(struct ttm_buffer_object *bo, bool evict,
struct ttm_operation_ctx *ctx,
struct ttm_mem_reg *new_mem)
{
int ret;

ret = ttm_bo_wait(bo, ctx->interruptible, ctx->no_wait_gpu);
if (ret)
return ret;

virtio_gpu_move_null(bo, new_mem);
return 0;
}

static void virtio_gpu_bo_move_notify(struct ttm_buffer_object *tbo,
bool evict,
struct ttm_mem_reg *new_mem)
{
struct virtio_gpu_object *bo;
struct virtio_gpu_device *vgdev;

bo = container_of(tbo, struct virtio_gpu_object, tbo);
vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;

if (!new_mem || (new_mem->placement & TTM_PL_FLAG_SYSTEM)) {
if (bo->hw_res_handle)
virtio_gpu_cmd_resource_inval_backing(vgdev,
bo->hw_res_handle);

} else if (new_mem->placement & TTM_PL_FLAG_TT) {
if (bo->hw_res_handle) {
virtio_gpu_object_attach(vgdev, bo, bo->hw_res_handle,
NULL);
}
}
}

static void virtio_gpu_bo_swap_notify(struct ttm_buffer_object *tbo)
{
struct virtio_gpu_object *bo;
struct virtio_gpu_device *vgdev;

bo = container_of(tbo, struct virtio_gpu_object, tbo);
vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;

if (bo->pages)
virtio_gpu_object_free_sg_table(bo);
@@ -406,11 +322,9 @@ static struct ttm_bo_driver virtio_gpu_bo_driver = {
.init_mem_type = &virtio_gpu_init_mem_type,
.eviction_valuable = ttm_bo_eviction_valuable,
.evict_flags = &virtio_gpu_evict_flags,
.move = &virtio_gpu_bo_move,
.verify_access = &virtio_gpu_verify_access,
.io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve,
.io_mem_free = &virtio_gpu_ttm_io_mem_free,
.move_notify = &virtio_gpu_bo_move_notify,
.swap_notify = &virtio_gpu_bo_swap_notify,
};

@@ -28,6 +28,7 @@

#include <drm/drmP.h>
#include "virtgpu_drv.h"
#include "virtgpu_trace.h"
#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/virtio_ring.h>
@@ -38,26 +39,6 @@
+ MAX_INLINE_CMD_SIZE \
+ MAX_INLINE_RESP_SIZE)

void virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid)
{
int handle;

idr_preload(GFP_KERNEL);
spin_lock(&vgdev->resource_idr_lock);
handle = idr_alloc(&vgdev->resource_idr, NULL, 1, 0, GFP_NOWAIT);
spin_unlock(&vgdev->resource_idr_lock);
idr_preload_end();
*resid = handle;
}

void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
{
spin_lock(&vgdev->resource_idr_lock);
idr_remove(&vgdev->resource_idr, id);
spin_unlock(&vgdev->resource_idr_lock);
}

void virtio_gpu_ctrl_ack(struct virtqueue *vq)
{
struct drm_device *dev = vq->vdev->priv;
@@ -98,10 +79,9 @@ virtio_gpu_get_vbuf(struct virtio_gpu_device *vgdev,
{
struct virtio_gpu_vbuffer *vbuf;

vbuf = kmem_cache_alloc(vgdev->vbufs, GFP_KERNEL);
vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_KERNEL);
if (!vbuf)
return ERR_PTR(-ENOMEM);
memset(vbuf, 0, VBUFFER_SIZE);

BUG_ON(size > MAX_INLINE_CMD_SIZE);
vbuf->buf = (void *)vbuf + sizeof(*vbuf);
@@ -213,8 +193,19 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)

list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
if (resp->type != cpu_to_le32(VIRTIO_GPU_RESP_OK_NODATA))
DRM_DEBUG("response 0x%x\n", le32_to_cpu(resp->type));

trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);

if (resp->type != cpu_to_le32(VIRTIO_GPU_RESP_OK_NODATA)) {
if (resp->type >= cpu_to_le32(VIRTIO_GPU_RESP_ERR_UNSPEC)) {
struct virtio_gpu_ctrl_hdr *cmd;
cmd = (struct virtio_gpu_ctrl_hdr *)entry->buf;
DRM_ERROR("response 0x%x (command 0x%x)\n",
le32_to_cpu(resp->type),
le32_to_cpu(cmd->type));
} else
DRM_DEBUG("response 0x%x\n", le32_to_cpu(resp->type));
}
if (resp->flags & cpu_to_le32(VIRTIO_GPU_FLAG_FENCE)) {
u64 f = le64_to_cpu(resp->fence_id);

@@ -297,6 +288,9 @@ retry:
spin_lock(&vgdev->ctrlq.qlock);
goto retry;
} else {
trace_virtio_gpu_cmd_queue(vq,
(struct virtio_gpu_ctrl_hdr *)vbuf->buf);

virtqueue_kick(vq);
}

@@ -319,7 +313,7 @@ static int virtio_gpu_queue_ctrl_buffer(struct virtio_gpu_device *vgdev,
static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf,
struct virtio_gpu_ctrl_hdr *hdr,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
struct virtqueue *vq = vgdev->ctrlq.vq;
int rc;
@@ -372,6 +366,9 @@ retry:
spin_lock(&vgdev->cursorq.qlock);
goto retry;
} else {
trace_virtio_gpu_cmd_queue(vq,
(struct virtio_gpu_ctrl_hdr *)vbuf->buf);

virtqueue_kick(vq);
}

@@ -388,10 +385,9 @@ retry:

/* create a basic resource */
void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
uint32_t resource_id,
uint32_t format,
uint32_t width,
uint32_t height)
struct virtio_gpu_object *bo,
struct virtio_gpu_object_params *params,
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_resource_create_2d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -400,12 +396,13 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
memset(cmd_p, 0, sizeof(*cmd_p));

cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D);
cmd_p->resource_id = cpu_to_le32(resource_id);
cmd_p->format = cpu_to_le32(format);
cmd_p->width = cpu_to_le32(width);
cmd_p->height = cpu_to_le32(height);
cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
cmd_p->format = cpu_to_le32(params->format);
cmd_p->width = cpu_to_le32(params->width);
cmd_p->height = cpu_to_le32(params->height);

virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence);
bo->created = true;
}

void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
@@ -423,8 +420,9 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
}

void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev,
uint32_t resource_id)
static void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev,
uint32_t resource_id,
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_resource_detach_backing *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -435,7 +433,7 @@ void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev,
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
cmd_p->resource_id = cpu_to_le32(resource_id);

virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence);
}

void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
@@ -482,19 +480,26 @@ void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
}

void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
uint32_t resource_id, uint64_t offset,
struct virtio_gpu_object *bo,
uint64_t offset,
__le32 width, __le32 height,
__le32 x, __le32 y,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_transfer_to_host_2d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);

if (use_dma_api)
dma_sync_sg_for_device(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->pages->nents,
DMA_TO_DEVICE);

cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
memset(cmd_p, 0, sizeof(*cmd_p));

cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D);
cmd_p->resource_id = cpu_to_le32(resource_id);
cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
cmd_p->offset = cpu_to_le64(offset);
cmd_p->r.width = width;
cmd_p->r.height = height;
@@ -509,7 +514,7 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev,
uint32_t resource_id,
struct virtio_gpu_mem_entry *ents,
uint32_t nents,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_resource_attach_backing *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -595,6 +600,45 @@ static void virtio_gpu_cmd_capset_cb(struct virtio_gpu_device *vgdev,
}
}
spin_unlock(&vgdev->display_info_lock);
wake_up_all(&vgdev->resp_wq);
}

static int virtio_get_edid_block(void *data, u8 *buf,
unsigned int block, size_t len)
{
struct virtio_gpu_resp_edid *resp = data;
size_t start = block * EDID_LENGTH;

if (start + len > le32_to_cpu(resp->size))
return -1;
memcpy(buf, resp->edid + start, len);
return 0;
}

static void virtio_gpu_cmd_get_edid_cb(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf)
{
struct virtio_gpu_cmd_get_edid *cmd =
(struct virtio_gpu_cmd_get_edid *)vbuf->buf;
struct virtio_gpu_resp_edid *resp =
(struct virtio_gpu_resp_edid *)vbuf->resp_buf;
uint32_t scanout = le32_to_cpu(cmd->scanout);
struct virtio_gpu_output *output;
struct edid *new_edid, *old_edid;

if (scanout >= vgdev->num_scanouts)
return;
output = vgdev->outputs + scanout;

new_edid = drm_do_get_edid(&output->conn, virtio_get_edid_block, resp);
drm_connector_update_edid_property(&output->conn, new_edid);

spin_lock(&vgdev->display_info_lock);
old_edid = output->edid;
output->edid = new_edid;
spin_unlock(&vgdev->display_info_lock);

kfree(old_edid);
wake_up(&vgdev->resp_wq);
}

@@ -650,11 +694,14 @@ int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev,
{
struct virtio_gpu_get_capset *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
int max_size = vgdev->capsets[idx].max_size;
int max_size;
struct virtio_gpu_drv_cap_cache *cache_ent;
struct virtio_gpu_drv_cap_cache *search_ent;
void *resp_buf;

if (idx > vgdev->num_capsets)
*cache_p = NULL;

if (idx >= vgdev->num_capsets)
return -EINVAL;

if (version > vgdev->capsets[idx].max_version)
@@ -664,6 +711,7 @@ int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev,
if (!cache_ent)
return -ENOMEM;

max_size = vgdev->capsets[idx].max_size;
cache_ent->caps_cache = kmalloc(max_size, GFP_KERNEL);
if (!cache_ent->caps_cache) {
kfree(cache_ent);
@@ -683,9 +731,26 @@ int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev,
atomic_set(&cache_ent->is_valid, 0);
cache_ent->size = max_size;
spin_lock(&vgdev->display_info_lock);
list_add_tail(&cache_ent->head, &vgdev->cap_cache);
/* Search while under lock in case it was added by another task. */
list_for_each_entry(search_ent, &vgdev->cap_cache, head) {
if (search_ent->id == vgdev->capsets[idx].id &&
search_ent->version == version) {
*cache_p = search_ent;
break;
}
}
if (!*cache_p)
list_add_tail(&cache_ent->head, &vgdev->cap_cache);
spin_unlock(&vgdev->display_info_lock);

if (*cache_p) {
/* Entry was found, so free everything that was just created. */
kfree(resp_buf);
kfree(cache_ent->caps_cache);
kfree(cache_ent);
return 0;
}

cmd_p = virtio_gpu_alloc_cmd_resp
(vgdev, &virtio_gpu_cmd_capset_cb, &vbuf, sizeof(*cmd_p),
sizeof(struct virtio_gpu_resp_capset) + max_size,
@@ -699,6 +764,34 @@ int virtio_gpu_cmd_get_capset(struct virtio_gpu_device *vgdev,
return 0;
}
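
The virtio_gpu_cmd_get_capset() rework above closes a cache-entry creation race: the entry is allocated outside the lock (the allocation may sleep), the cache list is then re-searched under display_info_lock, and the preallocated entry is freed if another task already inserted a match. A condensed sketch of that insert-if-absent pattern (hypothetical names, not part of this diff):

    struct cap_entry {
            struct list_head head;
            u32 id, version;
    };

    /* Returns the winning entry, or NULL if 'new' was inserted. */
    static struct cap_entry *cap_cache_add(struct list_head *cache,
                                           spinlock_t *lock,
                                           struct cap_entry *new)
    {
            struct cap_entry *ent, *found = NULL;

            spin_lock(lock);
            list_for_each_entry(ent, cache, head) {
                    if (ent->id == new->id && ent->version == new->version) {
                            found = ent;    /* another task won the race */
                            break;
                    }
            }
            if (!found)
                    list_add_tail(&new->head, cache);
            spin_unlock(lock);
            return found;                   /* caller frees 'new' if non-NULL */
    }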

int virtio_gpu_cmd_get_edids(struct virtio_gpu_device *vgdev)
{
struct virtio_gpu_cmd_get_edid *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
void *resp_buf;
int scanout;

if (WARN_ON(!vgdev->has_edid))
return -EINVAL;

for (scanout = 0; scanout < vgdev->num_scanouts; scanout++) {
resp_buf = kzalloc(sizeof(struct virtio_gpu_resp_edid),
GFP_KERNEL);
if (!resp_buf)
return -ENOMEM;

cmd_p = virtio_gpu_alloc_cmd_resp
(vgdev, &virtio_gpu_cmd_get_edid_cb, &vbuf,
sizeof(*cmd_p), sizeof(struct virtio_gpu_resp_edid),
resp_buf);
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_GET_EDID);
cmd_p->scanout = cpu_to_le32(scanout);
virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
}

return 0;
}

void virtio_gpu_cmd_context_create(struct virtio_gpu_device *vgdev, uint32_t id,
uint32_t nlen, const char *name)
{
@@ -765,8 +858,9 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,

void
virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_resource_create_3d *rc_3d,
struct virtio_gpu_fence **fence)
struct virtio_gpu_object *bo,
struct virtio_gpu_object_params *params,
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_resource_create_3d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -774,28 +868,46 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
memset(cmd_p, 0, sizeof(*cmd_p));

*cmd_p = *rc_3d;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D);
cmd_p->hdr.flags = 0;
cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
cmd_p->format = cpu_to_le32(params->format);
cmd_p->width = cpu_to_le32(params->width);
cmd_p->height = cpu_to_le32(params->height);

cmd_p->target = cpu_to_le32(params->target);
cmd_p->bind = cpu_to_le32(params->bind);
cmd_p->depth = cpu_to_le32(params->depth);
cmd_p->array_size = cpu_to_le32(params->array_size);
cmd_p->last_level = cpu_to_le32(params->last_level);
cmd_p->nr_samples = cpu_to_le32(params->nr_samples);
cmd_p->flags = cpu_to_le32(params->flags);

virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence);
bo->created = true;
}

void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
uint32_t resource_id, uint32_t ctx_id,
struct virtio_gpu_object *bo,
uint32_t ctx_id,
uint64_t offset, uint32_t level,
struct virtio_gpu_box *box,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_transfer_host_3d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);

if (use_dma_api)
dma_sync_sg_for_device(vgdev->vdev->dev.parent,
bo->pages->sgl, bo->pages->nents,
DMA_TO_DEVICE);

cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
memset(cmd_p, 0, sizeof(*cmd_p));

cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D);
cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
cmd_p->resource_id = cpu_to_le32(resource_id);
cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
cmd_p->box = *box;
cmd_p->offset = cpu_to_le64(offset);
cmd_p->level = cpu_to_le32(level);
@@ -807,7 +919,7 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
uint32_t resource_id, uint32_t ctx_id,
uint64_t offset, uint32_t level,
struct virtio_gpu_box *box,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_transfer_host_3d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -827,7 +939,7 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,

void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
void *data, uint32_t data_size,
uint32_t ctx_id, struct virtio_gpu_fence **fence)
uint32_t ctx_id, struct virtio_gpu_fence *fence)
{
struct virtio_gpu_cmd_submit *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -847,12 +959,15 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,

int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj,
uint32_t resource_id,
struct virtio_gpu_fence **fence)
struct virtio_gpu_fence *fence)
{
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
struct virtio_gpu_mem_entry *ents;
struct scatterlist *sg;
int si;
int si, nents;

if (WARN_ON_ONCE(!obj->created))
return -EINVAL;

if (!obj->pages) {
int ret;
@@ -862,28 +977,59 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
return ret;
}

if (use_dma_api) {
obj->mapped = dma_map_sg(vgdev->vdev->dev.parent,
obj->pages->sgl, obj->pages->nents,
DMA_TO_DEVICE);
nents = obj->mapped;
} else {
nents = obj->pages->nents;
}

/* gets freed when the ring has consumed it */
ents = kmalloc_array(obj->pages->nents,
sizeof(struct virtio_gpu_mem_entry),
ents = kmalloc_array(nents, sizeof(struct virtio_gpu_mem_entry),
GFP_KERNEL);
if (!ents) {
DRM_ERROR("failed to allocate ent list\n");
return -ENOMEM;
}

for_each_sg(obj->pages->sgl, sg, obj->pages->nents, si) {
ents[si].addr = cpu_to_le64(sg_phys(sg));
for_each_sg(obj->pages->sgl, sg, nents, si) {
ents[si].addr = cpu_to_le64(use_dma_api
? sg_dma_address(sg)
: sg_phys(sg));
ents[si].length = cpu_to_le32(sg->length);
ents[si].padding = 0;
}

virtio_gpu_cmd_resource_attach_backing(vgdev, resource_id,
ents, obj->pages->nents,
virtio_gpu_cmd_resource_attach_backing(vgdev, obj->hw_res_handle,
ents, nents,
fence);
obj->hw_res_handle = resource_id;
return 0;
}
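
In the attach path above, the IOMMU-aware branch maps the backing pages with dma_map_sg() and hands the host sg_dma_address() values instead of raw sg_phys(); since dma_map_sg() may coalesce scatterlist entries, the loop bound also switches from obj->pages->nents to the count the mapping returned. Condensed restatement of the hunk:

    nents = use_dma_api
            ? dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE)
            : sgt->nents;

    for_each_sg(sgt->sgl, sg, nents, si)
            ents[si].addr = cpu_to_le64(use_dma_api ? sg_dma_address(sg)
                                                    : sg_phys(sg));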

void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj)
{
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);

if (use_dma_api && obj->mapped) {
struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev);
/* detach backing and wait for the host to process it ... */
virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, fence);
dma_fence_wait(&fence->f, true);
dma_fence_put(&fence->f);

/* ... then tear down iommu mappings */
dma_unmap_sg(vgdev->vdev->dev.parent,
obj->pages->sgl, obj->mapped,
DMA_TO_DEVICE);
obj->mapped = 0;
} else {
virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL);
}
}

void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
struct virtio_gpu_output *output)
{
@@ -4,7 +4,6 @@

menuconfig IIO
tristate "Industrial I/O support"
select ANON_INODES
help
The industrial I/O subsystem provides a unified framework for
drivers for many different types of embedded sensors using a

@@ -25,7 +25,6 @@ config INFINIBAND_USER_MAD

config INFINIBAND_USER_ACCESS
tristate "InfiniBand userspace access (verbs and CM)"
select ANON_INODES
---help---
Userspace InfiniBand access support. This enables the
kernel side of userspace verbs and the userspace

@@ -126,7 +126,7 @@ __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
res = (void *)pbundle->internal_buffer + pbundle->internal_used;
pbundle->internal_used =
ALIGN(new_used, sizeof(*pbundle->internal_buffer));
if (flags & __GFP_ZERO)
if (want_init_on_alloc(flags))
memset(res, 0, size);
return res;
}

@@ -22,7 +22,6 @@ menuconfig VFIO
tristate "VFIO Non-Privileged userspace driver framework"
depends on IOMMU_API
select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
select ANON_INODES
help
VFIO provides a framework for secure userspace device drivers.
See Documentation/vfio.txt for more details.

@@ -24,7 +24,7 @@ obj-$(CONFIG_PROC_FS) += proc_namespace.o

obj-y += notify/
obj-$(CONFIG_EPOLL) += eventpoll.o
obj-$(CONFIG_ANON_INODES) += anon_inodes.o
obj-y += anon_inodes.o
obj-$(CONFIG_SIGNALFD) += signalfd.o
obj-$(CONFIG_TIMERFD) += timerfd.o
obj-$(CONFIG_EVENTFD) += eventfd.o

@@ -39,6 +39,8 @@ EXPORT_TRACEPOINT_SYMBOL(android_fs_datawrite_start);
EXPORT_TRACEPOINT_SYMBOL(android_fs_datawrite_end);
EXPORT_TRACEPOINT_SYMBOL(android_fs_dataread_start);
EXPORT_TRACEPOINT_SYMBOL(android_fs_dataread_end);
EXPORT_TRACEPOINT_SYMBOL(android_fs_fsync_start);
EXPORT_TRACEPOINT_SYMBOL(android_fs_fsync_end);

/*
* I/O completion handler for multipage BIOs.

fs/namei.c
@@ -44,6 +44,9 @@
#include "internal.h"
#include "mount.h"

#define CREATE_TRACE_POINTS
#include <trace/events/namei.h>

/* [Feb-1997 T. Schoebel-Theuer]
* Fundamental changes in the pathname lookup mechanisms (namei)
* were necessary because of omirr. The reason is that omirr needs
@@ -779,6 +782,81 @@ static inline int d_revalidate(struct dentry *dentry, unsigned int flags)
return 1;
}

#define INIT_PATH_SIZE 64

static void success_walk_trace(struct nameidata *nd)
{
struct path *pt = &nd->path;
struct inode *i = nd->inode;
char buf[INIT_PATH_SIZE], *try_buf;
int cur_path_size;
char *p;

/* When eBPF/tracepoint is disabled, keep overhead low. */
if (!trace_inodepath_enabled())
return;

/* First try stack allocated buffer. */
try_buf = buf;
cur_path_size = INIT_PATH_SIZE;

while (cur_path_size <= PATH_MAX) {
/* Free previous heap allocation if we are now trying
* a second or later heap allocation.
*/
if (try_buf != buf)
kfree(try_buf);

/* All but the first alloc are on the heap. */
if (cur_path_size != INIT_PATH_SIZE) {
try_buf = kmalloc(cur_path_size, GFP_KERNEL);
if (!try_buf) {
try_buf = buf;
sprintf(try_buf, "error:buf_alloc_failed");
break;
}
}

p = d_path(pt, try_buf, cur_path_size);

if (!IS_ERR(p)) {
char *end = mangle_path(try_buf, p, "\n");

if (end) {
try_buf[end - try_buf] = 0;
break;
} else {
/* On mangle errors, double path size
* till PATH_MAX.
*/
cur_path_size = cur_path_size << 1;
continue;
}
}

if (PTR_ERR(p) == -ENAMETOOLONG) {
/* If d_path complains that name is too long,
* then double path size till PATH_MAX.
*/
cur_path_size = cur_path_size << 1;
continue;
}

sprintf(try_buf, "error:d_path_failed_%lu",
-1 * PTR_ERR(p));
break;
}

if (cur_path_size > PATH_MAX)
sprintf(try_buf, "error:d_path_name_too_long");

trace_inodepath(i, try_buf);

if (try_buf != buf)
kfree(try_buf);
return;
}

/**
* complete_walk - successful completion of path walk
* @nd: pointer nameidata
@@ -801,15 +879,21 @@ static int complete_walk(struct nameidata *nd)
return -ECHILD;
}

if (likely(!(nd->flags & LOOKUP_JUMPED)))
if (likely(!(nd->flags & LOOKUP_JUMPED))) {
success_walk_trace(nd);
return 0;
}

if (likely(!(dentry->d_flags & DCACHE_OP_WEAK_REVALIDATE)))
if (likely(!(dentry->d_flags & DCACHE_OP_WEAK_REVALIDATE))) {
success_walk_trace(nd);
return 0;
}

status = dentry->d_op->d_weak_revalidate(dentry, nd->flags);
if (status > 0)
if (status > 0) {
success_walk_trace(nd);
return 0;
}

if (!status)
status = -ESTALE;
@@ -1,7 +1,6 @@
config FANOTIFY
bool "Filesystem wide access notification"
select FSNOTIFY
select ANON_INODES
default n
---help---
Say Y here to enable fanotify support. fanotify is a file access

@@ -1,6 +1,5 @@
config INOTIFY_USER
bool "Inotify support for userspace"
select ANON_INODES
select FSNOTIFY
default y
---help---

@@ -3432,6 +3432,15 @@ static const struct file_operations proc_tgid_base_operations = {
.llseek = generic_file_llseek,
};

struct pid *tgid_pidfd_to_pid(const struct file *file)
{
if (!d_is_dir(file->f_path.dentry) ||
(file->f_op != &proc_tgid_base_operations))
return ERR_PTR(-EBADF);

return proc_pid(file_inode(file));
}

static struct dentry *proc_tgid_base_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
{
return proc_pident_lookup(dir, dentry,
@@ -1192,6 +1192,7 @@ int drm_connector_init(struct drm_device *dev,
struct drm_connector *connector,
const struct drm_connector_funcs *funcs,
int connector_type);
void drm_connector_attach_edid_property(struct drm_connector *connector);
int drm_connector_register(struct drm_connector *connector);
void drm_connector_unregister(struct drm_connector *connector);
int drm_connector_attach_encoder(struct drm_connector *connector,

@@ -25,6 +25,28 @@
#include <linux/types.h>
#include <uapi/drm/drm_fourcc.h>

/*
* DRM formats are little endian. Define host endian variants for the
* most common formats here, to reduce the #ifdefs needed in drivers.
*
* Note that the DRM_FORMAT_BIG_ENDIAN flag should only be used in
* case the format can't be specified otherwise, so we don't end up
* with two values describing the same format.
*/
#ifdef __BIG_ENDIAN
# define DRM_FORMAT_HOST_XRGB1555 (DRM_FORMAT_XRGB1555 | \
DRM_FORMAT_BIG_ENDIAN)
# define DRM_FORMAT_HOST_RGB565 (DRM_FORMAT_RGB565 | \
DRM_FORMAT_BIG_ENDIAN)
# define DRM_FORMAT_HOST_XRGB8888 DRM_FORMAT_BGRX8888
# define DRM_FORMAT_HOST_ARGB8888 DRM_FORMAT_BGRA8888
#else
# define DRM_FORMAT_HOST_XRGB1555 DRM_FORMAT_XRGB1555
# define DRM_FORMAT_HOST_RGB565 DRM_FORMAT_RGB565
# define DRM_FORMAT_HOST_XRGB8888 DRM_FORMAT_XRGB8888
# define DRM_FORMAT_HOST_ARGB8888 DRM_FORMAT_ARGB8888
#endif

struct drm_device;
struct drm_mode_fb_cmd2;

@@ -70,6 +70,7 @@ struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle, uint32_t flags,
int *prime_fd);
int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);

@@ -389,12 +389,14 @@ typedef int (*dma_buf_destructor)(struct dma_buf *dmabuf, void *dtor_data);
* @file: file pointer used for sharing buffers across, and for refcounting.
* @attachments: list of dma_buf_attachment that denotes all devices attached.
* @ops: dma_buf_ops associated with this buffer object.
* @lock: used internally to serialize list manipulation, attach/detach and vmap/unmap
* @lock: used internally to serialize list manipulation, attach/detach and
* vmap/unmap, and accesses to name
* @vmapping_counter: used internally to refcnt the vmaps
* @vmap_ptr: the current vmap ptr if vmapping_counter > 0
* @exp_name: name of the exporter; useful for debugging.
* @buf_name: unique name for the buffer
* @ktime: time (in jiffies) at which the buffer was born
* @name: userspace-provided name; useful for accounting and debugging.
* @owner: pointer to exporter module; used for refcounting when exporter is a
* kernel module.
* @list_node: node for dma_buf accounting and debugging.
@@ -424,6 +426,7 @@ struct dma_buf {
const char *exp_name;
char *buf_name;
ktime_t ktime;
const char *name;
struct module *owner;
struct list_head list_node;
void *priv;
@@ -2738,6 +2738,30 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
					int enable) { }
#endif

#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
DECLARE_STATIC_KEY_TRUE(init_on_alloc);
#else
DECLARE_STATIC_KEY_FALSE(init_on_alloc);
#endif
static inline bool want_init_on_alloc(gfp_t flags)
{
	if (static_branch_unlikely(&init_on_alloc) &&
	    !page_poisoning_enabled())
		return true;
	return flags & __GFP_ZERO;
}

#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
DECLARE_STATIC_KEY_TRUE(init_on_free);
#else
DECLARE_STATIC_KEY_FALSE(init_on_free);
#endif
static inline bool want_init_on_free(void)
{
	return static_branch_unlikely(&init_on_free) &&
	       !page_poisoning_enabled();
}

#ifdef CONFIG_DEBUG_PAGEALLOC
extern bool _debug_pagealloc_enabled;
extern void __kernel_map_pages(struct page *page, int numpages, int enable);
@@ -3,6 +3,7 @@
#define _LINUX_PID_H

#include <linux/rculist.h>
#include <linux/wait.h>

enum pid_type
{
@@ -60,12 +61,16 @@ struct pid
	unsigned int level;
	/* lists of tasks that use this pid */
	struct hlist_head tasks[PIDTYPE_MAX];
	/* wait queue for pidfd notifications */
	wait_queue_head_t wait_pidfd;
	struct rcu_head rcu;
	struct upid numbers[1];
};

extern struct pid init_struct_pid;

extern const struct file_operations pidfd_fops;

static inline struct pid *get_pid(struct pid *pid)
{
	if (pid)
@@ -73,6 +73,7 @@ struct proc_dir_entry *proc_create_net_single_write(const char *name, umode_t mo
				int (*show)(struct seq_file *, void *),
				proc_write_t write,
				void *data);
extern struct pid *tgid_pidfd_to_pid(const struct file *file);

#else /* CONFIG_PROC_FS */

@@ -114,6 +115,11 @@ static inline int remove_proc_subtree(const char *name, struct proc_dir_entry *p
#define proc_create_net(name, mode, parent, state_size, ops) ({NULL;})
#define proc_create_net_single(name, mode, parent, show, data) ({NULL;})

static inline struct pid *tgid_pidfd_to_pid(const struct file *file)
{
	return ERR_PTR(-EBADF);
}

#endif /* CONFIG_PROC_FS */

#ifdef CONFIG_PROC_UID
@@ -850,6 +850,7 @@ asmlinkage long sys_clock_adjtime(clockid_t which_clock,
				struct timex __user *tx);
asmlinkage long sys_syncfs(int fd);
asmlinkage long sys_setns(int fd, int nstype);
asmlinkage long sys_pidfd_open(pid_t pid, unsigned int flags);
asmlinkage long sys_sendmmsg(int fd, struct mmsghdr __user *msg,
			     unsigned int vlen, unsigned flags);
asmlinkage long sys_process_vm_readv(pid_t pid,
@@ -906,6 +907,9 @@ asmlinkage long sys_statx(int dfd, const char __user *path, unsigned flags,
			  unsigned mask, struct statx __user *buffer);
asmlinkage long sys_rseq(struct rseq __user *rseq, uint32_t rseq_len,
			 int flags, uint32_t sig);
asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
				      siginfo_t __user *info,
				      unsigned int flags);

/*
 * Architecture-specific system calls
@@ -267,7 +267,7 @@ extern long strncpy_from_unsafe(char *dst, const void *unsafe_addr, long count);
	probe_kernel_read(&retval, addr, sizeof(retval))

#ifndef user_access_begin
#define user_access_begin() do { } while (0)
#define user_access_begin(type, ptr, len) access_ok(type, ptr, len)
#define user_access_end() do { } while (0)
#define unsafe_get_user(x, ptr, err) do { if (unlikely(__get_user(x, ptr))) goto err; } while (0)
#define unsafe_put_user(x, ptr, err) do { if (unlikely(__put_user(x, ptr))) goto err; } while (0)
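This backport changes user_access_begin() from a no-op into a call that performs access_ok() and returns its result, so every caller must now check the return value before entering the unsafe region. A minimal sketch of the converted pattern, mirroring the compat_get_bitmap()/waitid() hunks further down (the function and field names here are illustrative, not from the patch):

static int copy_flag_to_user(unsigned int __user *uptr, unsigned int flag)
{
	/* user_access_begin() now performs the access_ok() check itself. */
	if (!user_access_begin(VERIFY_WRITE, uptr, sizeof(*uptr)))
		return -EFAULT;

	unsafe_put_user(flag, uptr, Efault);
	user_access_end();
	return 0;

Efault:
	user_access_end();
	return -EFAULT;
}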
include/trace/events/namei.h (new file, 42 lines)
@@ -0,0 +1,42 @@
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM namei

#if !defined(_TRACE_INODEPATH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_INODEPATH_H

#include <linux/types.h>
#include <linux/tracepoint.h>
#include <linux/mm.h>
#include <linux/memcontrol.h>
#include <linux/device.h>
#include <linux/kdev_t.h>

TRACE_EVENT(inodepath,
		TP_PROTO(struct inode *inode, char *path),

		TP_ARGS(inode, path),

		TP_STRUCT__entry(
			/* dev_t and ino_t have arch-dependent bit widths,
			 * so just use 64-bit
			 */
			__field(unsigned long, ino)
			__field(unsigned long, dev)
			__string(path, path)
		),

		TP_fast_assign(
			__entry->ino = inode->i_ino;
			__entry->dev = inode->i_sb->s_dev;
			__assign_str(path, path);
		),

		TP_printk("dev %d:%d ino=%lu path=%s",
			MAJOR(__entry->dev), MINOR(__entry->dev),
			__entry->ino, __get_str(path))
);
#endif /* _TRACE_INODEPATH_H */

/* This part must be outside protection */
#include <trace/define_trace.h>
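For reference, a hedged sketch of how a filesystem would fire this tracepoint once it has resolved an inode to a path string; the surrounding helper is hypothetical, and one compilation unit must define CREATE_TRACE_POINTS before including the header so the trace_inodepath() symbol is emitted:

#include <trace/events/namei.h>

static void report_inode_path(struct inode *inode, char *buf)
{
	/* 'buf' is assumed to hold a d_path()-style resolved path. */
	trace_inodepath(inode, buf);
}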
@@ -736,9 +736,13 @@ __SYSCALL(__NR_statx, sys_statx)
__SC_COMP(__NR_io_pgetevents, sys_io_pgetevents, compat_sys_io_pgetevents)
#define __NR_rseq 293
__SYSCALL(__NR_rseq, sys_rseq)
#define __NR_pidfd_send_signal 424
__SYSCALL(__NR_pidfd_send_signal, sys_pidfd_send_signal)
#define __NR_pidfd_open 434
__SYSCALL(__NR_pidfd_open, sys_pidfd_open)

#undef __NR_syscalls
#define __NR_syscalls 294
#define __NR_syscalls 435

/*
 * 32 bit systems traditionally used different
@@ -47,6 +47,13 @@ extern "C" {
#define DRM_VIRTGPU_WAIT 0x08
#define DRM_VIRTGPU_GET_CAPS 0x09

#define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01
#define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02
#define VIRTGPU_EXECBUF_FLAGS (\
		VIRTGPU_EXECBUF_FENCE_FD_IN |\
		VIRTGPU_EXECBUF_FENCE_FD_OUT |\
		0)

struct drm_virtgpu_map {
	__u64 offset; /* use for mmap system call */
	__u32 handle;
@@ -54,12 +61,12 @@ struct drm_virtgpu_map {
};

struct drm_virtgpu_execbuffer {
	__u32 flags; /* for future use */
	__u32 flags;
	__u32 size;
	__u64 command; /* void* */
	__u64 bo_handles;
	__u32 num_bo_handles;
	__u32 pad;
	__s32 fence_fd; /* in/out fence fd (see VIRTGPU_EXECBUF_FENCE_FD_IN/OUT) */
};

#define VIRTGPU_PARAM_3D_FEATURES 1 /* do we have 3D features in the hw */
@@ -137,7 +144,7 @@ struct drm_virtgpu_get_caps {
	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)

#define DRM_IOCTL_VIRTGPU_EXECBUFFER \
	DRM_IOW(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
		struct drm_virtgpu_execbuffer)

#define DRM_IOCTL_VIRTGPU_GETPARAM \
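A hedged userspace sketch of the new explicit-sync path: submit a command buffer and ask the kernel for an out-fence through the fence_fd field (the ioctl is now _IOWR so the fd can be written back). fd, cmd_buf, and cmd_size are assumed to exist; this is illustrative, not a definitive Mesa-style implementation:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>

int submit_with_out_fence(int fd, void *cmd_buf, __u32 cmd_size)
{
	struct drm_virtgpu_execbuffer eb = {
		.flags = VIRTGPU_EXECBUF_FENCE_FD_OUT,
		.size = cmd_size,
		.command = (__u64)(uintptr_t)cmd_buf,
		.fence_fd = -1,
	};

	if (ioctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb))
		return -1;

	/* On success the kernel wrote a sync-file fd into eb.fence_fd. */
	return eb.fence_fd;
}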
@@ -37,7 +37,10 @@ struct dma_buf_sync {
#define DMA_BUF_SYNC_VALID_FLAGS_MASK \
	(DMA_BUF_SYNC_RW | DMA_BUF_SYNC_END | DMA_BUF_SYNC_USER_MAPPED)

#define DMA_BUF_NAME_LEN 32

#define DMA_BUF_BASE 'b'
#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)

#endif
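A minimal userspace sketch of labeling a dma-buf with the new ioctl; dmabuf_fd is assumed to be an already-exported buffer fd, and the header caps names at DMA_BUF_NAME_LEN:

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

int name_dmabuf(int dmabuf_fd)
{
	/* The name then shows up in fdinfo/debugfs for accounting. */
	return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, "camera-preview");
}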
@@ -91,5 +91,6 @@
#define UDF_SUPER_MAGIC 0x15013346
#define BALLOON_KVM_MAGIC 0x13661366
#define ZSMALLOC_MAGIC 0x58295829
#define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */

#endif /* __LINUX_MAGIC_H__ */
@@ -10,6 +10,7 @@
#define CLONE_FS 0x00000200 /* set if fs info shared between processes */
#define CLONE_FILES 0x00000400 /* set if open files shared between processes */
#define CLONE_SIGHAND 0x00000800 /* set if signal handlers and blocked signals shared */
#define CLONE_PIDFD 0x00001000 /* set if a pidfd should be placed in parent */
#define CLONE_PTRACE 0x00002000 /* set if we want to let tracing continue on the child too */
#define CLONE_VFORK 0x00004000 /* set if the parent wants the child to wake it up on mm_release */
#define CLONE_PARENT 0x00008000 /* set if we want to have the same parent as the cloner */
@@ -41,6 +41,7 @@
#include <linux/types.h>

#define VIRTIO_GPU_F_VIRGL 0
#define VIRTIO_GPU_F_EDID 1

enum virtio_gpu_ctrl_type {
	VIRTIO_GPU_UNDEFINED = 0,
@@ -56,6 +57,7 @@ enum virtio_gpu_ctrl_type {
	VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING,
	VIRTIO_GPU_CMD_GET_CAPSET_INFO,
	VIRTIO_GPU_CMD_GET_CAPSET,
	VIRTIO_GPU_CMD_GET_EDID,

	/* 3d commands */
	VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
@@ -76,6 +78,7 @@ enum virtio_gpu_ctrl_type {
	VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
	VIRTIO_GPU_RESP_OK_CAPSET_INFO,
	VIRTIO_GPU_RESP_OK_CAPSET,
	VIRTIO_GPU_RESP_OK_EDID,

	/* error responses */
	VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -291,6 +294,21 @@ struct virtio_gpu_resp_capset {
	__u8 capset_data[];
};

/* VIRTIO_GPU_CMD_GET_EDID */
struct virtio_gpu_cmd_get_edid {
	struct virtio_gpu_ctrl_hdr hdr;
	__le32 scanout;
	__le32 padding;
};

/* VIRTIO_GPU_RESP_OK_EDID */
struct virtio_gpu_resp_edid {
	struct virtio_gpu_ctrl_hdr hdr;
	__le32 size;
	__le32 padding;
	__u8 edid[1024];
};

#define VIRTIO_GPU_EVENT_DISPLAY (1 << 0)

struct virtio_gpu_config {
init/Kconfig (10 lines)
@@ -1241,9 +1241,6 @@ config LD_DEAD_CODE_DATA_ELIMINATION
config SYSCTL
	bool

config ANON_INODES
	bool

config HAVE_UID16
	bool

@@ -1448,14 +1445,12 @@ config HAVE_FUTEX_CMPXCHG
config EPOLL
	bool "Enable eventpoll support" if EXPERT
	default y
	select ANON_INODES
	help
	  Disabling this option will cause the kernel to be built without
	  support for epoll family of system calls.

config SIGNALFD
	bool "Enable signalfd() system call" if EXPERT
	select ANON_INODES
	default y
	help
	  Enable the signalfd() system call that allows to receive signals
@@ -1465,7 +1460,6 @@ config SIGNALFD

config TIMERFD
	bool "Enable timerfd() system call" if EXPERT
	select ANON_INODES
	default y
	help
	  Enable the timerfd() system call that allows to receive timer
@@ -1475,7 +1469,6 @@ config TIMERFD

config EVENTFD
	bool "Enable eventfd() system call" if EXPERT
	select ANON_INODES
	default y
	help
	  Enable the eventfd() system call that allows to receive both
@@ -1577,7 +1570,6 @@ config KALLSYMS_BASE_RELATIVE
# syscall, maps, verifier
config BPF_SYSCALL
	bool "Enable bpf() system call"
	select ANON_INODES
	select BPF
	select IRQ_WORK
	default n
@@ -1594,7 +1586,6 @@ config BPF_JIT_ALWAYS_ON

config USERFAULTFD
	bool "Enable userfaultfd() system call"
	select ANON_INODES
	depends on MMU
	help
	  Enable the userfaultfd() system call that allows to intercept and
@@ -1661,7 +1652,6 @@ config PERF_EVENTS
	bool "Kernel performance events and counters"
	default y if PROFILING
	depends on HAVE_PERF_EVENTS
	select ANON_INODES
	select IRQ_WORK
	select SRCU
	help
init/main.c (24 lines)
@@ -506,6 +506,29 @@ static inline void initcall_debug_enable(void)
}
#endif

/* Report memory auto-initialization states for this boot. */
static void __init report_meminit(void)
{
	const char *stack;

	if (IS_ENABLED(CONFIG_INIT_STACK_ALL))
		stack = "all";
	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
		stack = "byref_all";
	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
		stack = "byref";
	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
		stack = "__user";
	else
		stack = "off";

	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
		stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
		want_init_on_free() ? "on" : "off");
	if (want_init_on_free())
		pr_info("mem auto-init: clearing system memory may take some time...\n");
}

/*
 * Set up kernel memory allocators
 */
@@ -516,6 +539,7 @@ static void __init mm_init(void)
	 * bigger than MAX_ORDER unless SPARSEMEM.
	 */
	page_ext_init_flatmem();
	report_meminit();
	mem_init();
	kmem_cache_init();
	pgtable_init();
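Given the format string above, a boot with init_on_alloc=1 and no stack-initialization plugin would log a line of the form "mem auto-init: stack:off, heap alloc:on, heap free:off" (illustrative, derived from the pr_info() call rather than a captured log).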
@@ -354,10 +354,9 @@ long compat_get_bitmap(unsigned long *mask, const compat_ulong_t __user *umask,
	bitmap_size = ALIGN(bitmap_size, BITS_PER_COMPAT_LONG);
	nr_compat_longs = BITS_TO_COMPAT_LONGS(bitmap_size);

	if (!access_ok(VERIFY_READ, umask, bitmap_size / 8))
	if (!user_access_begin(VERIFY_READ, umask, bitmap_size / 8))
		return -EFAULT;

	user_access_begin();
	while (nr_compat_longs > 1) {
		compat_ulong_t l1, l2;
		unsafe_get_user(l1, umask++, Efault);
@@ -384,10 +383,9 @@ long compat_put_bitmap(compat_ulong_t __user *umask, unsigned long *mask,
	bitmap_size = ALIGN(bitmap_size, BITS_PER_COMPAT_LONG);
	nr_compat_longs = BITS_TO_COMPAT_LONGS(bitmap_size);

	if (!access_ok(VERIFY_WRITE, umask, bitmap_size / 8))
	if (!user_access_begin(VERIFY_WRITE, umask, bitmap_size / 8))
		return -EFAULT;

	user_access_begin();
	while (nr_compat_longs > 1) {
		unsigned long m = *mask++;
		unsafe_put_user((compat_ulong_t)m, umask++, Efault);
|
@ -715,6 +715,7 @@ static void exit_notify(struct task_struct *tsk, int group_dead)
|
||||
if (group_dead)
|
||||
kill_orphaned_pgrp(tsk->group_leader, NULL);
|
||||
|
||||
tsk->exit_state = EXIT_ZOMBIE;
|
||||
if (unlikely(tsk->ptrace)) {
|
||||
int sig = thread_group_leader(tsk) &&
|
||||
thread_group_empty(tsk) &&
|
||||
@ -1618,10 +1619,9 @@ SYSCALL_DEFINE5(waitid, int, which, pid_t, upid, struct siginfo __user *,
|
||||
if (!infop)
|
||||
return err;
|
||||
|
||||
if (!access_ok(VERIFY_WRITE, infop, sizeof(*infop)))
|
||||
if (!user_access_begin(VERIFY_WRITE, infop, sizeof(*infop)))
|
||||
return -EFAULT;
|
||||
|
||||
user_access_begin();
|
||||
unsafe_put_user(signo, &infop->si_signo, Efault);
|
||||
unsafe_put_user(0, &infop->si_errno, Efault);
|
||||
unsafe_put_user(info.cause, &infop->si_code, Efault);
|
||||
@ -1746,10 +1746,9 @@ COMPAT_SYSCALL_DEFINE5(waitid,
|
||||
if (!infop)
|
||||
return err;
|
||||
|
||||
if (!access_ok(VERIFY_WRITE, infop, sizeof(*infop)))
|
||||
if (!user_access_begin(VERIFY_WRITE, infop, sizeof(*infop)))
|
||||
return -EFAULT;
|
||||
|
||||
user_access_begin();
|
||||
unsafe_put_user(signo, &infop->si_signo, Efault);
|
||||
unsafe_put_user(0, &infop->si_errno, Efault);
|
||||
unsafe_put_user(info.cause, &infop->si_code, Efault);
|
||||
|
kernel/fork.c (136 lines)
@@ -11,6 +11,7 @@
 * management can be a bitch. See 'mm/memory.c': 'copy_page_range()'
 */

#include <linux/anon_inodes.h>
#include <linux/slab.h>
#include <linux/sched/autogroup.h>
#include <linux/sched/mm.h>
@@ -21,6 +22,7 @@
#include <linux/sched/task.h>
#include <linux/sched/task_stack.h>
#include <linux/sched/cputime.h>
#include <linux/seq_file.h>
#include <linux/rtmutex.h>
#include <linux/init.h>
#include <linux/unistd.h>
@@ -1675,6 +1677,84 @@ static __always_inline void delayed_free_task(struct task_struct *tsk)
		free_task(tsk);
}

static int pidfd_release(struct inode *inode, struct file *file)
{
	struct pid *pid = file->private_data;

	file->private_data = NULL;
	put_pid(pid);
	return 0;
}

#ifdef CONFIG_PROC_FS
static void pidfd_show_fdinfo(struct seq_file *m, struct file *f)
{
	struct pid_namespace *ns = proc_pid_ns(file_inode(m->file));
	struct pid *pid = f->private_data;

	seq_put_decimal_ull(m, "Pid:\t", pid_nr_ns(pid, ns));
	seq_putc(m, '\n');
}
#endif

/*
 * Poll support for process exit notification.
 */
static unsigned int pidfd_poll(struct file *file, struct poll_table_struct *pts)
{
	struct task_struct *task;
	struct pid *pid = file->private_data;
	int poll_flags = 0;

	poll_wait(file, &pid->wait_pidfd, pts);

	rcu_read_lock();
	task = pid_task(pid, PIDTYPE_PID);
	/*
	 * Inform pollers only when the whole thread group exits.
	 * If the thread group leader exits before all other threads in the
	 * group, then poll(2) should block, similar to the wait(2) family.
	 */
	if (!task || (task->exit_state && thread_group_empty(task)))
		poll_flags = POLLIN | POLLRDNORM;
	rcu_read_unlock();

	return poll_flags;
}

const struct file_operations pidfd_fops = {
	.release = pidfd_release,
	.poll = pidfd_poll,
#ifdef CONFIG_PROC_FS
	.show_fdinfo = pidfd_show_fdinfo,
#endif
};

/**
 * pidfd_create() - Create a new pid file descriptor.
 *
 * @pid: struct pid that the pidfd will reference
 *
 * This creates a new pid file descriptor with the O_CLOEXEC flag set.
 *
 * Note, that this function can only be called after the fd table has
 * been unshared to avoid leaking the pidfd to the new process.
 *
 * Return: On success, a cloexec pidfd is returned.
 *         On error, a negative errno number will be returned.
 */
static int pidfd_create(struct pid *pid)
{
	int fd;

	fd = anon_inode_getfd("[pidfd]", &pidfd_fops, get_pid(pid),
			      O_RDWR | O_CLOEXEC);
	if (fd < 0)
		put_pid(pid);

	return fd;
}

/*
 * This creates a new process as a copy of the old one,
 * but does not actually start it yet.
@@ -1687,13 +1767,14 @@ static __latent_entropy struct task_struct *copy_process(
					unsigned long clone_flags,
					unsigned long stack_start,
					unsigned long stack_size,
					int __user *parent_tidptr,
					int __user *child_tidptr,
					struct pid *pid,
					int trace,
					unsigned long tls,
					int node)
{
	int retval;
	int pidfd = -1, retval;
	struct task_struct *p;
	struct multiprocess_signals delayed;

@@ -1743,6 +1824,31 @@ static __latent_entropy struct task_struct *copy_process(
			return ERR_PTR(-EINVAL);
	}

	if (clone_flags & CLONE_PIDFD) {
		int reserved;

		/*
		 * - CLONE_PARENT_SETTID is useless for pidfds and also
		 *   parent_tidptr is used to return pidfds.
		 * - CLONE_DETACHED is blocked so that we can potentially
		 *   reuse it later for CLONE_PIDFD.
		 * - CLONE_THREAD is blocked until someone really needs it.
		 */
		if (clone_flags &
		    (CLONE_DETACHED | CLONE_PARENT_SETTID | CLONE_THREAD))
			return ERR_PTR(-EINVAL);

		/*
		 * Verify that parent_tidptr is sane so we can potentially
		 * reuse it later.
		 */
		if (get_user(reserved, parent_tidptr))
			return ERR_PTR(-EFAULT);

		if (reserved != 0)
			return ERR_PTR(-EINVAL);
	}

	/*
	 * Force any signals received before this point to be delivered
	 * before the fork happens. Collect up signals sent to multiple
@@ -1949,6 +2055,22 @@ static __latent_entropy struct task_struct *copy_process(
		}
	}

	/*
	 * This has to happen after we've potentially unshared the file
	 * descriptor table (so that the pidfd doesn't leak into the child
	 * if the fd table isn't shared).
	 */
	if (clone_flags & CLONE_PIDFD) {
		retval = pidfd_create(pid);
		if (retval < 0)
			goto bad_fork_free_pid;

		pidfd = retval;
		retval = put_user(pidfd, parent_tidptr);
		if (retval)
			goto bad_fork_put_pidfd;
	}

#ifdef CONFIG_BLOCK
	p->plug = NULL;
#endif
@@ -2009,7 +2131,7 @@ static __latent_entropy struct task_struct *copy_process(
	 */
	retval = cgroup_can_fork(p);
	if (retval)
		goto bad_fork_free_pid;
		goto bad_fork_cgroup_threadgroup_change_end;

	/*
	 * From this point on we must avoid any synchronous user-space
@@ -2124,8 +2246,12 @@ bad_fork_cancel_cgroup:
	spin_unlock(&current->sighand->siglock);
	write_unlock_irq(&tasklist_lock);
	cgroup_cancel_fork(p);
bad_fork_free_pid:
bad_fork_cgroup_threadgroup_change_end:
	cgroup_threadgroup_change_end(current);
bad_fork_put_pidfd:
	if (clone_flags & CLONE_PIDFD)
		ksys_close(pidfd);
bad_fork_free_pid:
	if (pid != &init_struct_pid)
		free_pid(pid);
bad_fork_cleanup_thread:
@@ -2192,7 +2318,7 @@ static inline void init_idle_pids(struct task_struct *idle)
struct task_struct *fork_idle(int cpu)
{
	struct task_struct *task;
	task = copy_process(CLONE_VM, 0, 0, NULL, &init_struct_pid, 0, 0,
	task = copy_process(CLONE_VM, 0, 0, NULL, NULL, &init_struct_pid, 0, 0,
			    cpu_to_node(cpu));
	if (!IS_ERR(task)) {
		init_idle_pids(task);
@@ -2239,7 +2365,7 @@ long _do_fork(unsigned long clone_flags,
			trace = 0;
	}

	p = copy_process(clone_flags, stack_start, stack_size,
	p = copy_process(clone_flags, stack_start, stack_size, parent_tidptr,
			 child_tidptr, NULL, trace, tls, NUMA_NO_NODE);
	add_latent_entropy();
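A hedged userspace sketch of obtaining a pidfd at fork time with CLONE_PIDFD; per the hunk above, the kernel returns the fd through the parent_tidptr argument, which must point to a zeroed int. The raw clone() argument order shown is the x86_64 convention and differs on some architectures:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef CLONE_PIDFD
#define CLONE_PIDFD 0x00001000 /* from the sched.h hunk above */
#endif

int fork_with_pidfd(void)
{
	int pidfd = 0; /* must read back as zero; the kernel verifies this */
	long pid;

	/* x86_64 raw-clone argument order: flags, stack, parent_tid, ... */
	pid = syscall(SYS_clone, CLONE_PIDFD | SIGCHLD, NULL, &pidfd, NULL, 0);
	if (pid < 0)
		return -1;
	if (pid == 0)
		_exit(0); /* child */

	printf("child %ld, pidfd %d\n", pid, pidfd);
	return pidfd;
}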
kernel/pid.c (71 lines)
@@ -38,6 +38,8 @@
#include <linux/syscalls.h>
#include <linux/proc_ns.h>
#include <linux/proc_fs.h>
#include <linux/anon_inodes.h>
#include <linux/sched/signal.h>
#include <linux/sched/task.h>
#include <linux/idr.h>

@@ -214,6 +216,8 @@ struct pid *alloc_pid(struct pid_namespace *ns)
	for (type = 0; type < PIDTYPE_MAX; ++type)
		INIT_HLIST_HEAD(&pid->tasks[type]);

	init_waitqueue_head(&pid->wait_pidfd);

	upid = pid->numbers + ns->level;
	spin_lock_irq(&pidmap_lock);
	if (!(ns->pid_allocated & PIDNS_ADDING))
@@ -451,6 +455,73 @@ struct pid *find_ge_pid(int nr, struct pid_namespace *ns)
	return idr_get_next(&ns->idr, &nr);
}

/**
 * pidfd_create() - Create a new pid file descriptor.
 *
 * @pid: struct pid that the pidfd will reference
 *
 * This creates a new pid file descriptor with the O_CLOEXEC flag set.
 *
 * Note, that this function can only be called after the fd table has
 * been unshared to avoid leaking the pidfd to the new process.
 *
 * Return: On success, a cloexec pidfd is returned.
 *         On error, a negative errno number will be returned.
 */
static int pidfd_create(struct pid *pid)
{
	int fd;

	fd = anon_inode_getfd("[pidfd]", &pidfd_fops, get_pid(pid),
			      O_RDWR | O_CLOEXEC);
	if (fd < 0)
		put_pid(pid);

	return fd;
}

/**
 * pidfd_open() - Open new pid file descriptor.
 *
 * @pid:   pid for which to retrieve a pidfd
 * @flags: flags to pass
 *
 * This creates a new pid file descriptor with the O_CLOEXEC flag set for
 * the process identified by @pid. Currently, the process identified by
 * @pid must be a thread-group leader. This restriction currently exists
 * for all aspects of pidfds including pidfd creation (CLONE_PIDFD cannot
 * be used with CLONE_THREAD) and pidfd polling (only supports thread group
 * leaders).
 *
 * Return: On success, a cloexec pidfd is returned.
 *         On error, a negative errno number will be returned.
 */
SYSCALL_DEFINE2(pidfd_open, pid_t, pid, unsigned int, flags)
{
	int fd, ret;
	struct pid *p;

	if (flags)
		return -EINVAL;

	if (pid <= 0)
		return -EINVAL;

	p = find_get_pid(pid);
	if (!p)
		return -ESRCH;

	ret = 0;
	rcu_read_lock();
	if (!pid_task(p, PIDTYPE_TGID))
		ret = -EINVAL;
	rcu_read_unlock();

	fd = ret ?: pidfd_create(p);
	put_pid(p);
	return fd;
}

void __init pid_idr_init(void)
{
	/* Verify no one has done anything silly: */
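Putting the two pieces together, a minimal userspace sketch that opens a pidfd for an existing process and blocks until it exits; pidfd_poll() above reports POLLIN once the whole thread group is gone:

#include <poll.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434 /* from the unistd.h hunk above */
#endif

int wait_for_exit(pid_t pid)
{
	struct pollfd pfd;
	int pidfd = syscall(__NR_pidfd_open, pid, 0);

	if (pidfd < 0)
		return -1;

	pfd.fd = pidfd;
	pfd.events = POLLIN; /* readable once the process has exited */

	poll(&pfd, 1, -1);
	close(pidfd);
	return 0;
}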
@@ -6081,7 +6081,8 @@ unsigned long
boosted_cpu_util(int cpu, unsigned long other_util,
		 struct sched_walt_cpu_load *walt_load)
{
	unsigned long util = cpu_util_freq(cpu, walt_load) + other_util;
	unsigned long util = min_t(unsigned long, SCHED_CAPACITY_SCALE,
				   cpu_util_freq(cpu, walt_load) + other_util);
	long margin = schedtune_cpu_margin(util, cpu);

	trace_sched_boost_cpu(cpu, util, margin);
kernel/signal.c (144 lines)
@@ -19,7 +19,9 @@
#include <linux/sched/task.h>
#include <linux/sched/task_stack.h>
#include <linux/sched/cputime.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <linux/tty.h>
#include <linux/binfmts.h>
#include <linux/coredump.h>
@@ -1808,6 +1810,14 @@ ret:
	return ret;
}

static void do_notify_pidfd(struct task_struct *task)
{
	struct pid *pid;

	pid = task_pid(task);
	wake_up_all(&pid->wait_pidfd);
}

/*
 * Let a parent know about the death of a child.
 * For a stopped/continued status change, use do_notify_parent_cldstop instead.
@@ -1831,6 +1841,9 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
	BUG_ON(!tsk->ptrace &&
	       (tsk->group_leader != tsk || !thread_group_empty(tsk)));

	/* Wake up all pidfd waiters */
	do_notify_pidfd(tsk);

	if (sig != SIGCHLD) {
		/*
		 * This is only possible if parent == real_parent.
@@ -3276,6 +3289,16 @@ COMPAT_SYSCALL_DEFINE4(rt_sigtimedwait, compat_sigset_t __user *, uthese,
}
#endif

static inline void prepare_kill_siginfo(int sig, struct siginfo *info)
{
	clear_siginfo(info);
	info->si_signo = sig;
	info->si_errno = 0;
	info->si_code = SI_USER;
	info->si_pid = task_tgid_vnr(current);
	info->si_uid = from_kuid_munged(current_user_ns(), current_uid());
}

/**
 * sys_kill - send a signal to a process
 * @pid: the PID of the process
@@ -3285,16 +3308,125 @@ SYSCALL_DEFINE2(kill, pid_t, pid, int, sig)
{
	struct siginfo info;

	clear_siginfo(&info);
	info.si_signo = sig;
	info.si_errno = 0;
	info.si_code = SI_USER;
	info.si_pid = task_tgid_vnr(current);
	info.si_uid = from_kuid_munged(current_user_ns(), current_uid());
	prepare_kill_siginfo(sig, &info);

	return kill_something_info(sig, &info, pid);
}

/*
 * Verify that the signaler and signalee either are in the same pid namespace
 * or that the signaler's pid namespace is an ancestor of the signalee's pid
 * namespace.
 */
static bool access_pidfd_pidns(struct pid *pid)
{
	struct pid_namespace *active = task_active_pid_ns(current);
	struct pid_namespace *p = ns_of_pid(pid);

	for (;;) {
		if (!p)
			return false;
		if (p == active)
			break;
		p = p->parent;
	}

	return true;
}

static int copy_siginfo_from_user_any(siginfo_t *kinfo, siginfo_t __user *info)
{
#ifdef CONFIG_COMPAT
	/*
	 * Avoid hooking up compat syscalls and instead handle necessary
	 * conversions here. Note, this is a stop-gap measure and should not be
	 * considered a generic solution.
	 */
	if (in_compat_syscall())
		return copy_siginfo_from_user32(
			kinfo, (struct compat_siginfo __user *)info);
#endif
	return copy_from_user(kinfo, info, sizeof(siginfo_t));
}

static struct pid *pidfd_to_pid(const struct file *file)
{
	if (file->f_op == &pidfd_fops)
		return file->private_data;

	return tgid_pidfd_to_pid(file);
}

/**
 * sys_pidfd_send_signal - Signal a process through a pidfd
 * @pidfd: file descriptor of the process
 * @sig:   signal to send
 * @info:  signal info
 * @flags: future flags
 *
 * The syscall currently only signals via PIDTYPE_PID which covers
 * kill(<positive-pid>, <signal>). It does not signal threads or process
 * groups.
 * In order to extend the syscall to threads and process groups the @flags
 * argument should be used. In essence, the @flags argument will determine
 * what is signaled and not the file descriptor itself. Put in other words,
 * grouping is a property of the flags argument not a property of the file
 * descriptor.
 *
 * Return: 0 on success, negative errno on failure
 */
SYSCALL_DEFINE4(pidfd_send_signal, int, pidfd, int, sig,
		siginfo_t __user *, info, unsigned int, flags)
{
	int ret;
	struct fd f;
	struct pid *pid;
	siginfo_t kinfo;

	/* Enforce flags be set to 0 until we add an extension. */
	if (flags)
		return -EINVAL;

	f = fdget(pidfd);
	if (!f.file)
		return -EBADF;

	/* Is this a pidfd? */
	pid = pidfd_to_pid(f.file);
	if (IS_ERR(pid)) {
		ret = PTR_ERR(pid);
		goto err;
	}

	ret = -EINVAL;
	if (!access_pidfd_pidns(pid))
		goto err;

	if (info) {
		ret = copy_siginfo_from_user_any(&kinfo, info);
		if (unlikely(ret))
			goto err;

		ret = -EINVAL;
		if (unlikely(sig != kinfo.si_signo))
			goto err;

		/* Only allow sending arbitrary signals to yourself. */
		ret = -EPERM;
		if ((task_pid(current) != pid) &&
		    (kinfo.si_code >= 0 || kinfo.si_code == SI_TKILL))
			goto err;
	} else {
		prepare_kill_siginfo(sig, &kinfo);
	}

	ret = kill_pid_info(sig, &kinfo, pid);

err:
	fdput(f);
	return ret;
}

static int
do_send_specific(pid_t tgid, pid_t pid, int sig, struct siginfo *info)
{
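A hedged userspace sketch of the matching caller; passing info == NULL makes the kernel build SI_USER siginfo via prepare_kill_siginfo(), exactly as kill(2) would:

#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_send_signal
#define __NR_pidfd_send_signal 424 /* from the unistd.h hunk above */
#endif

int terminate_via_pidfd(int pidfd)
{
	/* flags must be 0 until the syscall grows extensions. */
	return syscall(__NR_pidfd_send_signal, pidfd, SIGTERM, NULL, 0);
}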
@@ -162,8 +162,6 @@ COND_SYSCALL(syslog);

/* kernel/sched/core.c */

/* kernel/signal.c */

/* kernel/sys.c */
COND_SYSCALL(setregid);
COND_SYSCALL(setgid);
@@ -2063,6 +2063,14 @@ config TEST_DEBUG_VIRTUAL

	  If unsure, say N.

config TEST_MEMINIT
	tristate "Test heap/page initialization"
	help
	  Test if the kernel is zero-initializing heap and page allocations.
	  This can be useful to test init_on_alloc and init_on_free features.

	  If unsure, say N.

endif # RUNTIME_TESTING_MENU

config MEMTEST
@@ -83,6 +83,7 @@ obj-$(CONFIG_TEST_UUID) += test_uuid.o
obj-$(CONFIG_TEST_PARMAN) += test_parman.o
obj-$(CONFIG_TEST_KMOD) += test_kmod.o
obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o

ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
@@ -115,10 +115,11 @@ long strncpy_from_user(char *dst, const char __user *src, long count)

		kasan_check_write(dst, count);
		check_object_size(dst, count, false);
		user_access_begin();
		retval = do_strncpy_from_user(dst, src, count, max);
		user_access_end();
		return retval;
		if (user_access_begin(VERIFY_READ, src, max)) {
			retval = do_strncpy_from_user(dst, src, count, max);
			user_access_end();
			return retval;
		}
	}
	return -EFAULT;
}
@@ -114,10 +114,11 @@ long strnlen_user(const char __user *str, long count)
		unsigned long max = max_addr - src_addr;
		long retval;

		user_access_begin();
		retval = do_strnlen_user(str, count, max);
		user_access_end();
		return retval;
		if (user_access_begin(VERIFY_READ, str, max)) {
			retval = do_strnlen_user(str, count, max);
			user_access_end();
			return retval;
		}
	}
	return 0;
}
lib/test_meminit.c (new file, 364 lines)
@@ -0,0 +1,364 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Test cases for SL[AOU]B/page initialization at alloc/free time.
 */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

#define GARBAGE_INT (0x09A7BA9E)
#define GARBAGE_BYTE (0x9E)

#define REPORT_FAILURES_IN_FN() \
	do {	\
		if (failures)	\
			pr_info("%s failed %d out of %d times\n",	\
				__func__, failures, num_tests);		\
		else		\
			pr_info("all %d tests in %s passed\n",		\
				num_tests, __func__);			\
	} while (0)

/* Calculate the number of uninitialized bytes in the buffer. */
static int __init count_nonzero_bytes(void *ptr, size_t size)
{
	int i, ret = 0;
	unsigned char *p = (unsigned char *)ptr;

	for (i = 0; i < size; i++)
		if (p[i])
			ret++;
	return ret;
}

/* Fill a buffer with garbage, skipping |skip| first bytes. */
static void __init fill_with_garbage_skip(void *ptr, int size, size_t skip)
{
	unsigned int *p = (unsigned int *)((char *)ptr + skip);
	int i = 0;

	WARN_ON(skip > size);
	size -= skip;

	while (size >= sizeof(*p)) {
		p[i] = GARBAGE_INT;
		i++;
		size -= sizeof(*p);
	}
	if (size)
		memset(&p[i], GARBAGE_BYTE, size);
}

static void __init fill_with_garbage(void *ptr, size_t size)
{
	fill_with_garbage_skip(ptr, size, 0);
}

static int __init do_alloc_pages_order(int order, int *total_failures)
{
	struct page *page;
	void *buf;
	size_t size = PAGE_SIZE << order;

	page = alloc_pages(GFP_KERNEL, order);
	buf = page_address(page);
	fill_with_garbage(buf, size);
	__free_pages(page, order);

	page = alloc_pages(GFP_KERNEL, order);
	buf = page_address(page);
	if (count_nonzero_bytes(buf, size))
		(*total_failures)++;
	fill_with_garbage(buf, size);
	__free_pages(page, order);
	return 1;
}

/* Test the page allocator by calling alloc_pages with different orders. */
static int __init test_pages(int *total_failures)
{
	int failures = 0, num_tests = 0;
	int i;

	for (i = 0; i < 10; i++)
		num_tests += do_alloc_pages_order(i, &failures);

	REPORT_FAILURES_IN_FN();
	*total_failures += failures;
	return num_tests;
}

/* Test kmalloc() with given parameters. */
static int __init do_kmalloc_size(size_t size, int *total_failures)
{
	void *buf;

	buf = kmalloc(size, GFP_KERNEL);
	fill_with_garbage(buf, size);
	kfree(buf);

	buf = kmalloc(size, GFP_KERNEL);
	if (count_nonzero_bytes(buf, size))
		(*total_failures)++;
	fill_with_garbage(buf, size);
	kfree(buf);
	return 1;
}

/* Test vmalloc() with given parameters. */
static int __init do_vmalloc_size(size_t size, int *total_failures)
{
	void *buf;

	buf = vmalloc(size);
	fill_with_garbage(buf, size);
	vfree(buf);

	buf = vmalloc(size);
	if (count_nonzero_bytes(buf, size))
		(*total_failures)++;
	fill_with_garbage(buf, size);
	vfree(buf);
	return 1;
}

/* Test kmalloc()/vmalloc() by allocating objects of different sizes. */
static int __init test_kvmalloc(int *total_failures)
{
	int failures = 0, num_tests = 0;
	int i, size;

	for (i = 0; i < 20; i++) {
		size = 1 << i;
		num_tests += do_kmalloc_size(size, &failures);
		num_tests += do_vmalloc_size(size, &failures);
	}

	REPORT_FAILURES_IN_FN();
	*total_failures += failures;
	return num_tests;
}

#define CTOR_BYTES (sizeof(unsigned int))
#define CTOR_PATTERN (0x41414141)
/* Initialize the first 4 bytes of the object. */
static void test_ctor(void *obj)
{
	*(unsigned int *)obj = CTOR_PATTERN;
}

/*
 * Check the invariants for the buffer allocated from a slab cache.
 * If the cache has a test constructor, the first 4 bytes of the object must
 * always remain equal to CTOR_PATTERN.
 * If the cache isn't an RCU-typesafe one, or if the allocation is done with
 * __GFP_ZERO, then the object contents must be zeroed after allocation.
 * If the cache is an RCU-typesafe one, the object contents must never be
 * zeroed after the first use. This is checked by memcmp() in
 * do_kmem_cache_size().
 */
static bool __init check_buf(void *buf, int size, bool want_ctor,
			     bool want_rcu, bool want_zero)
{
	int bytes;
	bool fail = false;

	bytes = count_nonzero_bytes(buf, size);
	WARN_ON(want_ctor && want_zero);
	if (want_zero)
		return bytes;
	if (want_ctor) {
		if (*(unsigned int *)buf != CTOR_PATTERN)
			fail = 1;
	} else {
		if (bytes)
			fail = !want_rcu;
	}
	return fail;
}

/*
 * Test kmem_cache with given parameters:
 *  want_ctor - use a constructor;
 *  want_rcu - use SLAB_TYPESAFE_BY_RCU;
 *  want_zero - use __GFP_ZERO.
 */
static int __init do_kmem_cache_size(size_t size, bool want_ctor,
				     bool want_rcu, bool want_zero,
				     int *total_failures)
{
	struct kmem_cache *c;
	int iter;
	bool fail = false;
	gfp_t alloc_mask = GFP_KERNEL | (want_zero ? __GFP_ZERO : 0);
	void *buf, *buf_copy;

	c = kmem_cache_create("test_cache", size, 1,
			      want_rcu ? SLAB_TYPESAFE_BY_RCU : 0,
			      want_ctor ? test_ctor : NULL);
	for (iter = 0; iter < 10; iter++) {
		buf = kmem_cache_alloc(c, alloc_mask);
		/* Check that buf is zeroed, if it must be. */
		fail = check_buf(buf, size, want_ctor, want_rcu, want_zero);
		fill_with_garbage_skip(buf, size, want_ctor ? CTOR_BYTES : 0);

		if (!want_rcu) {
			kmem_cache_free(c, buf);
			continue;
		}

		/*
		 * If this is an RCU cache, use a critical section to ensure we
		 * can touch objects after they're freed.
		 */
		rcu_read_lock();
		/*
		 * Copy the buffer to check that it's not wiped on
		 * free().
		 */
		buf_copy = kmalloc(size, GFP_ATOMIC);
		if (buf_copy)
			memcpy(buf_copy, buf, size);

		kmem_cache_free(c, buf);
		/*
		 * Check that |buf| is intact after kmem_cache_free().
		 * |want_zero| is false, because we wrote garbage to
		 * the buffer already.
		 */
		fail |= check_buf(buf, size, want_ctor, want_rcu,
				  false);
		if (buf_copy) {
			fail |= (bool)memcmp(buf, buf_copy, size);
			kfree(buf_copy);
		}
		rcu_read_unlock();
	}
	kmem_cache_destroy(c);

	*total_failures += fail;
	return 1;
}

/*
 * Check that the data written to an RCU-allocated object survives
 * reallocation.
 */
static int __init do_kmem_cache_rcu_persistent(int size, int *total_failures)
{
	struct kmem_cache *c;
	void *buf, *buf_contents, *saved_ptr;
	void **used_objects;
	int i, iter, maxiter = 1024;
	bool fail = false;

	c = kmem_cache_create("test_cache", size, size, SLAB_TYPESAFE_BY_RCU,
			      NULL);
	buf = kmem_cache_alloc(c, GFP_KERNEL);
	saved_ptr = buf;
	fill_with_garbage(buf, size);
	buf_contents = kmalloc(size, GFP_KERNEL);
	if (!buf_contents)
		goto out;
	used_objects = kmalloc_array(maxiter, sizeof(void *), GFP_KERNEL);
	if (!used_objects) {
		kfree(buf_contents);
		goto out;
	}
	memcpy(buf_contents, buf, size);
	kmem_cache_free(c, buf);
	/*
	 * Run for a fixed number of iterations. If we never hit saved_ptr,
	 * assume the test passes.
	 */
	for (iter = 0; iter < maxiter; iter++) {
		buf = kmem_cache_alloc(c, GFP_KERNEL);
		used_objects[iter] = buf;
		if (buf == saved_ptr) {
			fail = memcmp(buf_contents, buf, size);
			for (i = 0; i <= iter; i++)
				kmem_cache_free(c, used_objects[i]);
			goto free_out;
		}
	}

free_out:
	kmem_cache_destroy(c);
	kfree(buf_contents);
	kfree(used_objects);
out:
	*total_failures += fail;
	return 1;
}

/*
 * Test kmem_cache allocation by creating caches of different sizes, with and
 * without constructors, with and without SLAB_TYPESAFE_BY_RCU.
 */
static int __init test_kmemcache(int *total_failures)
{
	int failures = 0, num_tests = 0;
	int i, flags, size;
	bool ctor, rcu, zero;

	for (i = 0; i < 10; i++) {
		size = 8 << i;
		for (flags = 0; flags < 8; flags++) {
			ctor = flags & 1;
			rcu = flags & 2;
			zero = flags & 4;
			if (ctor & zero)
				continue;
			num_tests += do_kmem_cache_size(size, ctor, rcu, zero,
							&failures);
		}
	}
	REPORT_FAILURES_IN_FN();
	*total_failures += failures;
	return num_tests;
}

/* Test the behavior of SLAB_TYPESAFE_BY_RCU caches of different sizes. */
static int __init test_rcu_persistent(int *total_failures)
{
	int failures = 0, num_tests = 0;
	int i, size;

	for (i = 0; i < 10; i++) {
		size = 8 << i;
		num_tests += do_kmem_cache_rcu_persistent(size, &failures);
	}
	REPORT_FAILURES_IN_FN();
	*total_failures += failures;
	return num_tests;
}

/*
 * Run the tests. Each test function returns the number of executed tests and
 * updates |failures| with the number of failed tests.
 */
static int __init test_meminit_init(void)
{
	int failures = 0, num_tests = 0;

	num_tests += test_pages(&failures);
	num_tests += test_kvmalloc(&failures);
	num_tests += test_kmemcache(&failures);
	num_tests += test_rcu_persistent(&failures);

	if (failures == 0)
		pr_info("all %d tests passed!\n", num_tests);
	else
		pr_info("failures: %d out of %d\n", failures, num_tests);

	return failures ? -EINVAL : 0;
}
module_init(test_meminit_init);

MODULE_LICENSE("GPL");
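Since CONFIG_TEST_MEMINIT is tristate, the suite can be built as a module and run on demand (for example via modprobe test_meminit); the per-function pass/fail summaries from the pr_info() calls above land in the kernel log.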
@@ -379,7 +379,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
#endif
	spin_unlock_irqrestore(&pool->lock, flags);

	if (mem_flags & __GFP_ZERO)
	if (want_init_on_alloc(mem_flags))
		memset(retval, 0, pool->size);

	return retval;
@@ -429,6 +429,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
	}

	offset = vaddr - page->vaddr;
	if (want_init_on_free())
		memset(vaddr, 0, pool->size);
#ifdef DMAPOOL_DEBUG
	if ((dma - page->dma) != offset) {
		spin_unlock_irqrestore(&pool->lock, flags);
@@ -132,6 +132,55 @@ unsigned long totalcma_pages __read_mostly;

int percpu_pagelist_fraction;
gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
DEFINE_STATIC_KEY_TRUE(init_on_alloc);
#else
DEFINE_STATIC_KEY_FALSE(init_on_alloc);
#endif
EXPORT_SYMBOL(init_on_alloc);

#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
DEFINE_STATIC_KEY_TRUE(init_on_free);
#else
DEFINE_STATIC_KEY_FALSE(init_on_free);
#endif
EXPORT_SYMBOL(init_on_free);

static int __init early_init_on_alloc(char *buf)
{
	int ret;
	bool bool_result;

	if (!buf)
		return -EINVAL;
	ret = kstrtobool(buf, &bool_result);
	if (bool_result && page_poisoning_enabled())
		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, will take precedence over init_on_alloc\n");
	if (bool_result)
		static_branch_enable(&init_on_alloc);
	else
		static_branch_disable(&init_on_alloc);
	return ret;
}
early_param("init_on_alloc", early_init_on_alloc);

static int __init early_init_on_free(char *buf)
{
	int ret;
	bool bool_result;

	if (!buf)
		return -EINVAL;
	ret = kstrtobool(buf, &bool_result);
	if (bool_result && page_poisoning_enabled())
		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, will take precedence over init_on_free\n");
	if (bool_result)
		static_branch_enable(&init_on_free);
	else
		static_branch_disable(&init_on_free);
	return ret;
}
early_param("init_on_free", early_init_on_free);

/*
 * A cached value of the page's pageblock's migratetype, used when the page is
@@ -1090,6 +1139,14 @@ out:
	return ret;
}

static void kernel_init_free_pages(struct page *page, int numpages)
{
	int i;

	for (i = 0; i < numpages; i++)
		clear_highpage(page + i);
}

static __always_inline bool free_pages_prepare(struct page *page,
					unsigned int order, bool check_free)
{
@@ -1141,6 +1198,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
					   PAGE_SIZE << order);
	}
	arch_free_page(page, order);
	if (want_init_on_free())
		kernel_init_free_pages(page, 1 << order);

	kernel_poison_pages(page, 1 << order, 0);
	kernel_map_pages(page, 1 << order, 0);
	kasan_free_nondeferred_pages(page, order);
@@ -1969,8 +2029,8 @@ static inline int check_new_page(struct page *page)

static inline bool free_pages_prezeroed(void)
{
	return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
		page_poisoning_enabled();
	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
		page_poisoning_enabled()) || want_init_on_free();
}

#ifdef CONFIG_DEBUG_VM
@@ -2023,13 +2083,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
							unsigned int alloc_flags)
{
	int i;

	post_alloc_hook(page, order, gfp_flags);

	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
		for (i = 0; i < (1 << order); i++)
			clear_highpage(page + i);
	if (!free_pages_prezeroed() && want_init_on_alloc(gfp_flags))
		kernel_init_free_pages(page, 1 << order);

	if (order && (gfp_flags & __GFP_COMP))
		prep_compound_page(page, order);
mm/slab.c (16 lines)
@@ -1903,6 +1903,14 @@ static bool set_objfreelist_slab_cache(struct kmem_cache *cachep,

	cachep->num = 0;

	/*
	 * If slab auto-initialization on free is enabled, store the freelist
	 * off-slab, so that its contents don't end up in one of the allocated
	 * objects.
	 */
	if (unlikely(slab_want_init_on_free(cachep)))
		return false;

	if (cachep->ctor || flags & SLAB_TYPESAFE_BY_RCU)
		return false;

@@ -3334,7 +3342,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
	local_irq_restore(save_flags);
	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);

	if (unlikely(flags & __GFP_ZERO) && ptr)
	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
		memset(ptr, 0, cachep->object_size);

	slab_post_alloc_hook(cachep, flags, 1, &ptr);
@@ -3391,7 +3399,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
	prefetchw(objp);

	if (unlikely(flags & __GFP_ZERO) && objp)
	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
		memset(objp, 0, cachep->object_size);

	slab_post_alloc_hook(cachep, flags, 1, &objp);
@@ -3512,6 +3520,8 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
	struct array_cache *ac = cpu_cache_get(cachep);

	check_irq_off();
	if (unlikely(slab_want_init_on_free(cachep)))
		memset(objp, 0, cachep->object_size);
	kmemleak_free_recursive(objp, cachep->flags);
	objp = cache_free_debugcheck(cachep, objp, caller);

@@ -3598,7 +3608,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);

	/* Clear memory outside IRQ disabled section */
	if (unlikely(flags & __GFP_ZERO))
	if (unlikely(slab_want_init_on_alloc(flags, s)))
		for (i = 0; i < size; i++)
			memset(p[i], 0, s->object_size);
mm/slab.h (20 lines)
@@ -529,4 +529,24 @@ static inline int cache_random_seq_create(struct kmem_cache *cachep,
static inline void cache_random_seq_destroy(struct kmem_cache *cachep) { }
#endif /* CONFIG_SLAB_FREELIST_RANDOM */

static inline bool slab_want_init_on_alloc(gfp_t flags, struct kmem_cache *c)
{
	if (static_branch_unlikely(&init_on_alloc)) {
		if (c->ctor)
			return false;
		if (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))
			return flags & __GFP_ZERO;
		return true;
	}
	return flags & __GFP_ZERO;
}

static inline bool slab_want_init_on_free(struct kmem_cache *c)
{
	if (static_branch_unlikely(&init_on_free))
		return !(c->ctor ||
			 (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)));
	return false;
}

#endif /* MM_SLAB_H */
mm/slub.c (44 lines)
@@ -1283,6 +1283,10 @@ check_slabs:
	if (*str == ',')
		slub_debug_slabs = str + 1;
out:
	if ((static_branch_unlikely(&init_on_alloc) ||
	     static_branch_unlikely(&init_on_free)) &&
	    (slub_debug & SLAB_POISON))
		pr_info("mem auto-init: SLAB_POISON will take precedence over init_on_alloc/init_on_free\n");
	return 1;
}

@@ -1386,6 +1390,32 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
static inline bool slab_free_freelist_hook(struct kmem_cache *s,
					   void **head, void **tail)
{

	void *object;
	void *next = *head;
	void *old_tail = *tail ? *tail : *head;
	int rsize;

	if (slab_want_init_on_free(s)) {
		void *p = NULL;

		do {
			object = next;
			next = get_freepointer(s, object);
			/*
			 * Clear the object and the metadata, but don't touch
			 * the redzone.
			 */
			memset(object, 0, s->object_size);
			rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad
							   : 0;
			memset((char *)object + s->inuse, 0,
			       s->size - s->inuse - rsize);
			set_freepointer(s, object, p);
			p = object;
		} while (object != old_tail);
	}

/*
 * Compiler cannot detect this function can be removed if slab_free_hook()
 * evaluates to nothing. Thus, catch all relevant config debug options here.
@@ -1395,9 +1425,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
    defined(CONFIG_DEBUG_OBJECTS_FREE) || \
    defined(CONFIG_KASAN)

	void *object;
	void *next = *head;
	void *old_tail = *tail ? *tail : *head;
	next = *head;

	/* Head and tail of the reconstructed freelist */
	*head = NULL;
@@ -2712,8 +2740,14 @@ redo:
		prefetch_freepointer(s, next_object);
		stat(s, ALLOC_FASTPATH);
	}
	/*
	 * If the object has been wiped upon free, make sure it's fully
	 * initialized by zeroing out freelist pointer.
	 */
	if (unlikely(slab_want_init_on_free(s)) && object)
		memset(object + s->offset, 0, sizeof(void *));

	if (unlikely(gfpflags & __GFP_ZERO) && object)
	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
		memset(object, 0, s->object_size);

	slab_post_alloc_hook(s, gfpflags, 1, &object);
@@ -3135,7 +3169,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
	local_irq_enable();

	/* Clear memory outside IRQ disabled fastpath loop */
	if (unlikely(flags & __GFP_ZERO)) {
	if (unlikely(slab_want_init_on_alloc(flags, s))) {
		int j;

		for (j = 0; j < i; j++)
@@ -1460,7 +1460,7 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
		if (!sk)
			return sk;
		if (priority & __GFP_ZERO)
		if (want_init_on_alloc(priority))
			sk_prot_clear_nulls(sk, prot->obj_size);
	} else
		sk = kmalloc(prot->obj_size, priority);
@@ -76,6 +76,35 @@ config GCC_PLUGIN_STRUCTLEAK_VERBOSE
	  initialized. Since not all existing initializers are detected
	  by the plugin, this can produce false positive warnings.

config INIT_ON_ALLOC_DEFAULT_ON
	bool "Enable heap memory zeroing on allocation by default"
	help
	  This has the effect of setting "init_on_alloc=1" on the kernel
	  command line. This can be disabled with "init_on_alloc=0".
	  When "init_on_alloc" is enabled, all page allocator and slab
	  allocator memory will be zeroed when allocated, eliminating
	  many kinds of "uninitialized heap memory" flaws, especially
	  heap content exposures. The performance impact varies by
	  workload, but most cases see <1% impact. Some synthetic
	  workloads have measured as high as 7%.

config INIT_ON_FREE_DEFAULT_ON
	bool "Enable heap memory zeroing on free by default"
	help
	  This has the effect of setting "init_on_free=1" on the kernel
	  command line. This can be disabled with "init_on_free=0".
	  Similar to "init_on_alloc", when "init_on_free" is enabled,
	  all page allocator and slab allocator memory will be zeroed
	  when freed, eliminating many kinds of "uninitialized heap memory"
	  flaws, especially heap content exposures. The primary difference
	  with "init_on_free" is that data lifetime in memory is reduced,
	  as anything freed is wiped immediately, making live forensics or
	  cold boot memory attacks unable to recover freed memory contents.
	  The performance impact varies by workload, but is more expensive
	  than "init_on_alloc" due to the negative cache effects of
	  touching "cold" memory areas. Most cases see 3-5% impact. Some
	  synthetic workloads have measured as high as 8%.

endmenu

endmenu
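Both defaults remain boot-time overridable: per the help text above, passing init_on_alloc=0/1 or init_on_free=0/1 on the kernel command line flips the behavior regardless of how these Kconfig options are set, with the early_param() handlers in mm/page_alloc.c doing the parsing.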