Merge crypto changes from android-4.19.79-95 into msm-4.19
Conflicts:
	block/blk-merge.c
	drivers/scsi/ufs/ufs-qcom.c
	drivers/scsi/ufs/ufshcd.c
	drivers/scsi/ufs/ufshcd.h
	fs/ext4/inode.c
	fs/f2fs/data.c
	include/linux/fscrypt.h

Change-Id: Id3b033dbc5886ddb83f235ff80e19755d2b962e2
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
Signed-off-by: Neeraj Soni <neersoni@codeaurora.org>
commit 450ec63ce9

Documentation/block/inline-encryption.rst (new file, 183 lines)
@ -0,0 +1,183 @@
.. SPDX-License-Identifier: GPL-2.0

=================
Inline Encryption
=================

Objective
=========

We want to support inline encryption (IE) in the kernel.
To allow for testing, we also want a crypto API fallback when actual
IE hardware is absent. We also want IE to work with layered devices
like dm and loopback (i.e. we want to be able to use the IE hardware
of the underlying devices if present, or else fall back to crypto API
en/decryption).


Constraints and notes
=====================

- IE hardware has a limited number of "keyslots" that can be programmed
  with an encryption context (key, algorithm, data unit size, etc.) at any time.
  One can specify a keyslot in a data request made to the device, and the
  device will en/decrypt the data using the encryption context programmed into
  that specified keyslot. When possible, we want to make multiple requests with
  the same encryption context share the same keyslot.

- We need a way for filesystems to specify an encryption context to use for
  en/decrypting a struct bio, and a device driver (like UFS) needs to be able
  to use that encryption context when it processes the bio.

- We need a way for device drivers to expose their capabilities in a unified
  way to the upper layers.


Design
======

We add a struct bio_crypt_ctx to struct bio that can represent an
encryption context, because we need to be able to pass this encryption
context from the FS layer to the device driver to act upon.

While IE hardware works on the notion of keyslots, the FS layer has no
knowledge of keyslots - it simply wants to specify an encryption context to
use while en/decrypting a bio.

We introduce a keyslot manager (KSM) that handles the translation from
encryption contexts specified by the FS to keyslots on the IE hardware.
This KSM also serves as the way IE hardware can expose its capabilities to
upper layers. The generic mode of operation is: each device driver that wants
to support IE will construct a KSM and set it up in its struct request_queue.
Upper layers that want to use IE on this device can then use this KSM in
the device's struct request_queue to translate an encryption context into
a keyslot. The presence of the KSM in the request queue shall be used to mean
that the device supports IE.

On the device driver end of the interface, the device driver needs to tell the
KSM how to actually manipulate the IE hardware in the device to do things like
programming the crypto key into the IE hardware into a particular keyslot. All
this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
device driver passes to the KSM when creating it.

The KSM uses refcounts to track which keyslots are idle (either they have no
encryption context programmed, or there are no in-flight struct bios
referencing that keyslot). When a new encryption context needs a keyslot, it
tries to find a keyslot that has already been programmed with the same
encryption context, and if there is no such keyslot, it evicts the least
recently used idle keyslot and programs the new encryption context into that
one. If no idle keyslots are available, then the caller will sleep until there
is at least one.
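The lookup/LRU-eviction policy described above can be illustrated with a small
Python model. This is a toy sketch, not kernel code; the names
``KeyslotManager``, ``get_keyslot``, and ``put_keyslot`` are illustrative
stand-ins, not the kernel's actual API:

```python
from collections import OrderedDict

class KeyslotManager:
    """Toy model of keyslot lookup and LRU eviction (illustrative only)."""

    def __init__(self, num_slots):
        self.slot_ctx = [None] * num_slots      # encryption context per slot
        self.refcount = [0] * num_slots         # in-flight bios per slot
        # Idle slots, ordered least-recently-used first.
        self.idle_lru = OrderedDict((i, None) for i in range(num_slots))

    def get_keyslot(self, ctx):
        # Reuse a slot already programmed with the same context.
        for slot, programmed in enumerate(self.slot_ctx):
            if programmed == ctx:
                self.refcount[slot] += 1
                self.idle_lru.pop(slot, None)   # slot is no longer idle
                return slot
        # Otherwise evict the least recently used idle slot.
        if not self.idle_lru:
            return None                         # caller would sleep here
        slot, _ = self.idle_lru.popitem(last=False)
        self.slot_ctx[slot] = ctx               # "program" the new context
        self.refcount[slot] = 1
        return slot

    def put_keyslot(self, slot):
        self.refcount[slot] -= 1
        if self.refcount[slot] == 0:
            self.idle_lru[slot] = None          # back onto the idle LRU list
```

Two bios with the same context share one slot; a slot is only evictable once
its refcount drops to zero.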


Blk-crypto
==========

The above is sufficient for simple cases, but does not work if there is a
need for a crypto API fallback, or if we want to use IE with layered
devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
present a unified view of encryption to the FS (so the FS only needs to specify
an encryption context and not worry about keyslots at all), and blk-crypto
can decide whether to delegate the en/decryption to IE hardware or to the
crypto API. Blk-crypto maintains an internal KSM that serves as the crypto
API fallback.

Blk-crypto needs to ensure that the encryption context is programmed into the
"correct" keyslot manager for IE. If a bio is submitted to a layered device
that eventually passes the bio down to a device that really does support IE, we
want the encryption context to be programmed into a keyslot for the KSM of the
device with IE support. However, blk-crypto does not know a priori whether a
particular device is the final device in the layering structure for a bio or
not. So in the case that a particular device does not support IE, since it is
possibly the final destination device for the bio, if the bio requires
encryption (i.e. the bio is doing a write operation), blk-crypto must fall back
to the crypto API *before* sending the bio to the device.

Blk-crypto ensures that:

- The bio's encryption context is programmed into a keyslot in the KSM of the
  request queue that the bio is being submitted to (or the crypto API fallback
  KSM if the request queue doesn't have a KSM), and that the ``processing_ksm``
  in the ``bi_crypt_context`` is set to this KSM

- The bio has its own individual reference to the keyslot in this KSM.
  Once the bio passes through blk-crypto, its encryption context is programmed
  in some KSM. The "its own individual reference to the keyslot" ensures that
  keyslots can be released by each bio independently of other bios while
  ensuring that the bio has a valid reference to the keyslot when, for e.g., the
  crypto API fallback KSM in blk-crypto performs crypto on the device's behalf.
  The individual references are ensured by increasing the refcount for the
  keyslot in the ``processing_ksm`` when a bio with a programmed encryption
  context is cloned.


What blk-crypto does on bio submission
--------------------------------------

**Case 1:** blk-crypto is given a bio with only an encryption context that hasn't
been programmed into any keyslot in any KSM (e.g. a bio from the FS).
In this case, blk-crypto will program the encryption context into the KSM of the
request queue the bio is being submitted to (and if this KSM does not exist,
then it will program it into blk-crypto's internal KSM for crypto API
fallback). The KSM that this encryption context was programmed into is stored
as the ``processing_ksm`` in the bio's ``bi_crypt_context``.

**Case 2:** blk-crypto is given a bio whose encryption context has already been
programmed into a keyslot in the *crypto API fallback* KSM.
In this case, blk-crypto does nothing; it treats the bio as not having
specified an encryption context. Note that we cannot do here what we will do
in Case 3 because we would have already encrypted the bio via the crypto API
by this point.

**Case 3:** blk-crypto is given a bio whose encryption context has already been
programmed into a keyslot in some KSM (that is *not* the crypto API fallback
KSM).
In this case, blk-crypto first releases that keyslot from that KSM and then
treats the bio as in Case 1.

This way, when a device driver is processing a bio, it can be sure that
the bio's encryption context has been programmed into some KSM (either the
device driver's request queue's KSM, or blk-crypto's crypto API fallback KSM).
It then simply needs to check if the bio's ``processing_ksm`` is the device's
request queue's KSM. If so, then it should proceed with IE. If not, it should
simply do nothing with respect to crypto, because some other KSM (perhaps the
blk-crypto crypto API fallback KSM) is handling the en/decryption.

Blk-crypto will release the keyslot that is being held by the bio (and also
decrypt it if the bio is using the crypto API fallback KSM) once
``bio_remaining_done`` returns true for the bio.
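The three submission cases can be sketched as a small Python decision
function. This is an illustrative model only; the dict-based "bio" and the
string KSM names are stand-ins, not kernel data structures:

```python
FALLBACK_KSM = "crypto-api-fallback"   # stands in for blk-crypto's internal KSM

def blk_crypto_submit(bio, queue_ksm):
    """Toy model of the three bio-submission cases (illustrative only).

    bio is a dict with 'ctx' (the encryption context, or None) and
    'processing_ksm' (None until the context has been programmed somewhere).
    queue_ksm is the KSM of the request queue, or None if the device has none.
    """
    if bio["ctx"] is None:
        return "no crypto"                      # nothing to do

    ksm = bio["processing_ksm"]
    if ksm == FALLBACK_KSM:
        return "already encrypted by fallback"  # Case 2: leave it alone

    if ksm is not None:
        bio["processing_ksm"] = None            # Case 3: release old keyslot,
        # ...then fall through and re-program as in Case 1.

    # Case 1: program into the device's KSM, or the fallback if none exists.
    bio["processing_ksm"] = queue_ksm if queue_ksm is not None else FALLBACK_KSM
    return "programmed into " + bio["processing_ksm"]
```

Resubmitting the same bio to a second layered device exercises Case 3, while
a device without a KSM routes the context to the fallback.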


Layered Devices
===============

Layered devices that wish to support IE need to create their own keyslot
manager for their request queue, and expose whatever functionality they choose.
When a layered device wants to pass a bio to another layer (either by
resubmitting the same bio, or by submitting a clone), it doesn't need to do
anything special because the bio (or the clone) will once again pass through
blk-crypto, which will work as described in Case 3. If a layered device wants
for some reason to do the IO by itself instead of passing it on to a child
device, but it also chose to expose IE capabilities by setting up a KSM in its
request queue, it is then responsible for en/decrypting the data itself. In
such cases, the device can choose to call the blk-crypto function
``blk_crypto_fallback_to_kernel_crypto_api`` (TODO: Not yet implemented), which will
cause the en/decryption to be done via the crypto API fallback.


Future Optimizations for layered devices
========================================

Creating a keyslot manager for the layered device uses up memory for each
keyslot, and in general, a layered device (like dm-linear) merely passes the
request on to a "child" device, so the keyslots in the layered device itself
might be completely unused. We can instead define a new type of KSM, the
"passthrough KSM", that layered devices can use to let blk-crypto know that
this layered device *will* pass the bio to some child device (and hence
through blk-crypto again, at which point blk-crypto can program the encryption
context, instead of programming it into the layered device's KSM). Again, if
the device "lies" and decides to do the IO itself instead of passing it on to
a child device, it is responsible for doing the en/decryption (and can choose
to call ``blk_crypto_fallback_to_kernel_crypto_api``). Another use case for the
"passthrough KSM" is for IE devices that want to manage their own keyslots/do
not have a limited number of keyslots.
@ -72,6 +72,9 @@ Online attacks
fscrypt (and storage encryption in general) can only provide limited
protection, if any at all, against online attacks. In detail:

Side-channel attacks
~~~~~~~~~~~~~~~~~~~~

fscrypt is only resistant to side-channel attacks, such as timing or
electromagnetic attacks, to the extent that the underlying Linux
Cryptographic API algorithms are. If a vulnerable algorithm is used,
@ -80,29 +83,90 @@ attacker to mount a side channel attack against the online system.
Side channel attacks may also be mounted against applications
consuming decrypted data.

After an encryption key has been provided, fscrypt is not designed to
hide the plaintext file contents or filenames from other users on the
same system, regardless of the visibility of the keyring key.
Instead, existing access control mechanisms such as file mode bits,
POSIX ACLs, LSMs, or mount namespaces should be used for this purpose.
Also note that as long as the encryption keys are *anywhere* in
memory, an online attacker can necessarily compromise them by mounting
a physical attack or by exploiting any kernel security vulnerability
which provides an arbitrary memory read primitive.

Unauthorized file access
~~~~~~~~~~~~~~~~~~~~~~~~

While it is ostensibly possible to "evict" keys from the system,
recently accessed encrypted files will remain accessible at least
until the filesystem is unmounted or the VFS caches are dropped, e.g.
using ``echo 2 > /proc/sys/vm/drop_caches``. Even after that, if the
RAM is compromised before being powered off, it will likely still be
possible to recover portions of the plaintext file contents, if not
some of the encryption keys as well. (Since Linux v4.12, all
in-kernel keys related to fscrypt are sanitized before being freed.
However, userspace would need to do its part as well.)

After an encryption key has been added, fscrypt does not hide the
plaintext file contents or filenames from other users on the same
system. Instead, existing access control mechanisms such as file mode
bits, POSIX ACLs, LSMs, or namespaces should be used for this purpose.

Currently, fscrypt does not prevent a user from maliciously providing
an incorrect key for another user's existing encrypted files. A
protection against this is planned.
(For the reasoning behind this, understand that while the key is
added, the confidentiality of the data, from the perspective of the
system itself, is *not* protected by the mathematical properties of
encryption but rather only by the correctness of the kernel.
Therefore, any encryption-specific access control checks would merely
be enforced by kernel *code* and therefore would be largely redundant
with the wide variety of access control mechanisms already available.)

Kernel memory compromise
~~~~~~~~~~~~~~~~~~~~~~~~

An attacker who compromises the system enough to read from arbitrary
memory, e.g. by mounting a physical attack or by exploiting a kernel
security vulnerability, can compromise all encryption keys that are
currently in use.

However, fscrypt allows encryption keys to be removed from the kernel,
which may protect them from later compromise.

In more detail, the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl (or the
FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS ioctl) can wipe a master
encryption key from kernel memory. If it does so, it will also try to
evict all cached inodes which had been "unlocked" using the key,
thereby wiping their per-file keys and making them once again appear
"locked", i.e. in ciphertext or encrypted form.

However, these ioctls have some limitations:

- Per-file keys for in-use files will *not* be removed or wiped.
  Therefore, for maximum effect, userspace should close the relevant
  encrypted files and directories before removing a master key, as
  well as kill any processes whose working directory is in an affected
  encrypted directory.

- The kernel cannot magically wipe copies of the master key(s) that
  userspace might have as well. Therefore, userspace must wipe all
  copies of the master key(s) it makes as well; normally this should
  be done immediately after FS_IOC_ADD_ENCRYPTION_KEY, without waiting
  for FS_IOC_REMOVE_ENCRYPTION_KEY. Naturally, the same also applies
  to all higher levels in the key hierarchy. Userspace should also
  follow other security precautions such as mlock()ing memory
  containing keys to prevent it from being swapped out.

- In general, decrypted contents and filenames in the kernel VFS
  caches are freed but not wiped. Therefore, portions thereof may be
  recoverable from freed memory, even after the corresponding key(s)
  were wiped. To partially solve this, you can set
  CONFIG_PAGE_POISONING=y in your kernel config and add page_poison=1
  to your kernel command line. However, this has a performance cost.

- Secret keys might still exist in CPU registers, in crypto
  accelerator hardware (if used by the crypto API to implement any of
  the algorithms), or in other places not explicitly considered here.

Limitations of v1 policies
~~~~~~~~~~~~~~~~~~~~~~~~~~

v1 encryption policies have some weaknesses with respect to online
attacks:

- There is no verification that the provided master key is correct.
  Therefore, a malicious user can temporarily associate the wrong key
  with another user's encrypted files to which they have read-only
  access. Because of filesystem caching, the wrong key will then be
  used by the other user's accesses to those files, even if the other
  user has the correct key in their own keyring. This violates the
  meaning of "read-only access".

- A compromise of a per-file key also compromises the master key from
  which it was derived.

- Non-root users cannot securely remove encryption keys.

All the above problems are fixed with v2 encryption policies. For
this reason among others, it is recommended to use v2 encryption
policies on all new encrypted directories.

Key hierarchy
=============
@ -123,11 +187,52 @@ appropriate master key. There can be any number of master keys, each
of which protects any number of directory trees on any number of
filesystems.

Userspace should generate master keys either using a cryptographically
secure random number generator, or by using a KDF (Key Derivation
Function). Note that whenever a KDF is used to "stretch" a
lower-entropy secret such as a passphrase, it is critical that a KDF
designed for this purpose be used, such as scrypt, PBKDF2, or Argon2.

Master keys must be real cryptographic keys, i.e. indistinguishable
from random bytestrings of the same length. This implies that users
**must not** directly use a password as a master key, zero-pad a
shorter key, or repeat a shorter key. Security cannot be guaranteed
if userspace makes any such error, as the cryptographic proofs and
analysis would no longer apply.

Instead, users should generate master keys either using a
cryptographically secure random number generator, or by using a KDF
(Key Derivation Function). The kernel does not do any key stretching;
therefore, if userspace derives the key from a low-entropy secret such
as a passphrase, it is critical that a KDF designed for this purpose
be used, such as scrypt, PBKDF2, or Argon2.
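Both acceptable ways of producing a master key can be sketched in a few lines
of standard-library Python. The passphrase, salt, and iteration count below
are illustrative values only, not recommendations:

```python
import os
import hashlib

# Option 1: a raw 64-byte master key straight from the OS CSPRNG.
master_key = os.urandom(64)
assert len(master_key) == 64

# Option 2: if the key must come from a passphrase, stretch it with a
# purpose-built KDF. PBKDF2-HMAC-SHA512 is in the standard library;
# scrypt (hashlib.scrypt) or Argon2 would also be suitable choices.
salt = os.urandom(16)
stretched = hashlib.pbkdf2_hmac("sha512",
                                b"correct horse battery staple",  # passphrase
                                salt,
                                100_000,    # iteration count (illustrative)
                                dklen=64)
assert len(stretched) == 64
```

Either way, the result is a full-length, uniformly random-looking key, never
a zero-padded or repeated password.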

Key derivation function
-----------------------

With one exception, fscrypt never uses the master key(s) for
encryption directly. Instead, they are only used as input to a KDF
(Key Derivation Function) to derive the actual keys.

The KDF used for a particular master key differs depending on whether
the key is used for v1 encryption policies or for v2 encryption
policies. Users **must not** use the same key for both v1 and v2
encryption policies. (No real-world attack is currently known on this
specific case of key reuse, but its security cannot be guaranteed
since the cryptographic proofs and analysis would no longer apply.)

For v1 encryption policies, the KDF only supports deriving per-file
encryption keys. It works by encrypting the master key with
AES-128-ECB, using the file's 16-byte nonce as the AES key. The
resulting ciphertext is used as the derived key. If the ciphertext is
longer than needed, then it is truncated to the needed length.

For v2 encryption policies, the KDF is HKDF-SHA512. The master key is
passed as the "input keying material", no salt is used, and a distinct
"application-specific information string" is used for each distinct
key to be derived. For example, when a per-file encryption key is
derived, the application-specific information string is the file's
nonce prefixed with "fscrypt\\0" and a context byte. Different
context bytes are used for other types of derived keys.

HKDF-SHA512 is preferred to the original AES-128-ECB based KDF because
HKDF is more flexible, is nonreversible, and evenly distributes
entropy from the master key. HKDF is also standardized and widely
used by other software, whereas the AES-128-ECB based KDF is ad-hoc.
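The v2 KDF can be sketched with standard-library Python. This implements
generic HKDF-SHA512 (RFC 5869) with an empty salt; the example info string
follows the "fscrypt\0" prefix plus context byte plus nonce layout described
above, but the context byte value and key material here are purely
illustrative:

```python
import hmac
import hashlib

def hkdf_sha512(ikm: bytes, info: bytes, length: int) -> bytes:
    """HKDF-SHA512 per RFC 5869, with an empty (all-zero) salt."""
    # Extract: an absent salt defaults to HashLen zero bytes.
    prk = hmac.new(b"\x00" * 64, ikm, hashlib.sha512).digest()
    # Expand: T(i) = HMAC(PRK, T(i-1) || info || i), concatenated.
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha512).digest()
        okm += t
        i += 1
    return okm[:length]

# Hypothetical per-file key derivation: "fscrypt\0" prefix, a context
# byte (value chosen for illustration), then the file's 16-byte nonce.
master_key = b"\x11" * 64
nonce = b"\x22" * 16
file_key = hkdf_sha512(master_key, b"fscrypt\x00" + b"\x02" + nonce, 64)
assert len(file_key) == 64
```

Because each derived key uses a distinct info string, compromising one
derived key reveals nothing about the others or about the master key.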

Per-file keys
-------------
@ -138,29 +243,9 @@ files doesn't map to the same ciphertext, or vice versa. In most
cases, fscrypt does this by deriving per-file keys. When a new
encrypted inode (regular file, directory, or symlink) is created,
fscrypt randomly generates a 16-byte nonce and stores it in the
inode's encryption xattr. Then, it uses a KDF (Key Derivation
Function) to derive the file's key from the master key and nonce.

The Adiantum encryption mode (see `Encryption modes and usage`_) is
special, since it accepts longer IVs and is suitable for both contents
and filenames encryption. For it, a "direct key" option is offered
where the file's nonce is included in the IVs and the master key is
used for encryption directly. This improves performance; however,
users must not use the same master key for any other encryption mode.

Below, the KDF and design considerations are described in more detail.

The current KDF works by encrypting the master key with AES-128-ECB,
using the file's nonce as the AES key. The output is used as the
derived key. If the output is longer than needed, then it is
truncated to the needed length.

Note: this KDF meets the primary security requirement, which is to
produce unique derived keys that preserve the entropy of the master
key, assuming that the master key is already a good pseudorandom key.
However, it is nonstandard and has some problems such as being
reversible, so it is generally considered to be a mistake! It may be
replaced with HKDF or another more standard KDF in the future.

inode's encryption xattr. Then, it uses a KDF (as described in `Key
derivation function`_) to derive the file's key from the master key
and nonce.

Key derivation was chosen over key wrapping because wrapped keys would
require larger xattrs which would be less likely to fit in-line in the
@ -171,10 +256,51 @@ alternative master keys or to support rotating master keys. Instead,
the master keys may be wrapped in userspace, e.g. as is done by the
`fscrypt <https://github.com/google/fscrypt>`_ tool.

Including the inode number in the IVs was considered. However, it was
rejected as it would have prevented ext4 filesystems from being
resized, and by itself still wouldn't have been sufficient to prevent
the same key from being directly reused for both XTS and CTS-CBC.

DIRECT_KEY policies
-------------------

The Adiantum encryption mode (see `Encryption modes and usage`_) is
suitable for both contents and filenames encryption, and it accepts
long IVs --- long enough to hold both an 8-byte logical block number
and a 16-byte per-file nonce. Also, the overhead of each Adiantum key
is greater than that of an AES-256-XTS key.

Therefore, to improve performance and save memory, for Adiantum a
"direct key" configuration is supported. When the user has enabled
this by setting FSCRYPT_POLICY_FLAG_DIRECT_KEY in the fscrypt policy,
per-file keys are not used. Instead, whenever any data (contents or
filenames) is encrypted, the file's 16-byte nonce is included in the
IV. Moreover:

- For v1 encryption policies, the encryption is done directly with the
  master key. Because of this, users **must not** use the same master
  key for any other purpose, even for other v1 policies.

- For v2 encryption policies, the encryption is done with a per-mode
  key derived using the KDF. Users may use the same master key for
  other v2 encryption policies.

IV_INO_LBLK_64 policies
-----------------------

When FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 is set in the fscrypt policy,
the encryption keys are derived from the master key, encryption mode
number, and filesystem UUID. This normally results in all files
protected by the same master key sharing a single contents encryption
key and a single filenames encryption key. To still encrypt different
files' data differently, inode numbers are included in the IVs.
Consequently, shrinking the filesystem may not be allowed.

This format is optimized for use with inline encryption hardware
compliant with the UFS or eMMC standards, which support only 64 IV
bits per I/O request and may have only a small number of keyslots.

Key identifiers
---------------

For master keys used for v2 encryption policies, a unique 16-byte "key
identifier" is also derived using the KDF. This value is stored in
the clear, since it is needed to reliably identify the key itself.

Encryption modes and usage
==========================
@ -192,8 +318,9 @@ If unsure, you should use the (AES-256-XTS, AES-256-CTS-CBC) pair.

AES-128-CBC was added only for low-powered embedded devices with
crypto accelerators such as CAAM or CESA that do not support XTS. To
use AES-128-CBC, CONFIG_CRYPTO_SHA256 (or another SHA-256
implementation) must be enabled so that ESSIV can be used.
use AES-128-CBC, CONFIG_CRYPTO_ESSIV and CONFIG_CRYPTO_SHA256 (or
another SHA-256 implementation) must be enabled so that ESSIV can be
used.

Adiantum is a (primarily) stream cipher-based mode that is fast even
on CPUs without dedicated crypto instructions. It's also a true
@ -225,10 +352,17 @@ a little endian number, except that:
is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
of the file's data encryption key.

- In the "direct key" configuration (FS_POLICY_FLAG_DIRECT_KEY set in
  the fscrypt_policy), the file's nonce is also appended to the IV.
- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
  Currently this is only allowed with the Adiantum encryption mode.

- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
  to 32 bits and is placed in bits 0-31 of the IV. The inode number
  (which is also limited to 32 bits) is placed in bits 32-63.
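The IV_INO_LBLK_64 bit layout in the bullet above can be sketched in Python.
The function name and byte-order choice are illustrative; the little-endian
representation follows the IV description earlier in this section:

```python
def iv_ino_lblk_64(inode_number: int, logical_block: int) -> bytes:
    """Pack a 64-bit IV: logical block in bits 0-31, inode in bits 32-63."""
    assert 0 <= inode_number < 2**32 and 0 <= logical_block < 2**32
    iv = (inode_number << 32) | logical_block
    return iv.to_bytes(8, "little")

# e.g. inode 5, block 3: the low 4 bytes hold the block number,
# the high 4 bytes hold the inode number.
assert iv_ino_lblk_64(5, 3) == bytes([3, 0, 0, 0, 5, 0, 0, 0])
```

Because the inode number occupies the high half, every file gets a disjoint
IV range even though all files share one contents encryption key.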

Note that because file logical block numbers are included in the IVs,
filesystems must enforce that blocks are never shifted around within
encrypted files, e.g. via "collapse range" or "insert range".

Filenames encryption
--------------------

@ -237,10 +371,10 @@ the requirements to retain support for efficient directory lookups and
filenames of up to 255 bytes, the same IV is used for every filename
in a directory.

However, each encrypted directory still uses a unique key; or
alternatively (for the "direct key" configuration) has the file's
nonce included in the IVs. Thus, IV reuse is limited to within a
single directory.
However, each encrypted directory still uses a unique key, or
alternatively has the file's nonce (for `DIRECT_KEY policies`_) or
inode number (for `IV_INO_LBLK_64 policies`_) included in the IVs.
Thus, IV reuse is limited to within a single directory.

With CTS-CBC, the IV reuse means that when the plaintext filenames
share a common prefix at least as long as the cipher block size (16
@ -269,49 +403,80 @@ User API
Setting an encryption policy
----------------------------

FS_IOC_SET_ENCRYPTION_POLICY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The FS_IOC_SET_ENCRYPTION_POLICY ioctl sets an encryption policy on an
empty directory or verifies that a directory or regular file already
has the specified encryption policy. It takes in a pointer to a
:c:type:`struct fscrypt_policy`, defined as follows::
:c:type:`struct fscrypt_policy_v1` or a :c:type:`struct
fscrypt_policy_v2`, defined as follows::

    #define FS_KEY_DESCRIPTOR_SIZE  8

    struct fscrypt_policy {
    #define FSCRYPT_POLICY_V1               0
    #define FSCRYPT_KEY_DESCRIPTOR_SIZE     8
    struct fscrypt_policy_v1 {
            __u8 version;
            __u8 contents_encryption_mode;
            __u8 filenames_encryption_mode;
            __u8 flags;
            __u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
            __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
    };
    #define fscrypt_policy fscrypt_policy_v1

    #define FSCRYPT_POLICY_V2               2
    #define FSCRYPT_KEY_IDENTIFIER_SIZE     16
    struct fscrypt_policy_v2 {
            __u8 version;
            __u8 contents_encryption_mode;
            __u8 filenames_encryption_mode;
            __u8 flags;
            __u8 __reserved[4];
            __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
    };
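As a sketch, the 24-byte v2 structure can be serialized from Python with the
standard ``struct`` module. The mode and flag constants mirror the values
discussed in this section; the key identifier here is a placeholder, not a
real identifier returned by the kernel:

```python
import struct

FSCRYPT_POLICY_V2 = 2
FSCRYPT_MODE_AES_256_XTS = 1
FSCRYPT_MODE_AES_256_CTS = 4
FSCRYPT_POLICY_FLAGS_PAD_32 = 0x3

def pack_policy_v2(key_identifier: bytes) -> bytes:
    """Serialize struct fscrypt_policy_v2: four single bytes, four
    reserved (zeroed) bytes, then the 16-byte key identifier."""
    assert len(key_identifier) == 16
    return struct.pack("<BBBB4s16s",
                       FSCRYPT_POLICY_V2,
                       FSCRYPT_MODE_AES_256_XTS,     # contents_encryption_mode
                       FSCRYPT_MODE_AES_256_CTS,     # filenames_encryption_mode
                       FSCRYPT_POLICY_FLAGS_PAD_32,  # flags
                       b"\x00" * 4,                  # __reserved
                       key_identifier)

buf = pack_policy_v2(b"\xab" * 16)
assert len(buf) == 24
```

This buffer is what a userspace tool would hand to the ioctl; the
field-by-field meaning is spelled out in the initialization rules below.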
|
||||
|
||||
This structure must be initialized as follows:
|
||||
|
||||
- ``version`` must be 0.
|
||||
- ``version`` must be FSCRYPT_POLICY_V1 (0) if the struct is
|
||||
:c:type:`fscrypt_policy_v1` or FSCRYPT_POLICY_V2 (2) if the struct
|
||||
is :c:type:`fscrypt_policy_v2`. (Note: we refer to the original
|
||||
policy version as "v1", though its version code is really 0.) For
|
||||
new encrypted directories, use v2 policies.
|
||||
|
||||
- ``contents_encryption_mode`` and ``filenames_encryption_mode`` must
|
||||
be set to constants from ``<linux/fs.h>`` which identify the
|
||||
encryption modes to use. If unsure, use
|
||||
FS_ENCRYPTION_MODE_AES_256_XTS (1) for ``contents_encryption_mode``
|
||||
and FS_ENCRYPTION_MODE_AES_256_CTS (4) for
|
||||
``filenames_encryption_mode``.
|
||||
be set to constants from ``<linux/fscrypt.h>`` which identify the
|
||||
encryption modes to use. If unsure, use FSCRYPT_MODE_AES_256_XTS
|
||||
(1) for ``contents_encryption_mode`` and FSCRYPT_MODE_AES_256_CTS
|
||||
(4) for ``filenames_encryption_mode``.
|
||||
|
||||
- ``flags`` contains optional flags from ``<linux/fscrypt.h>``:

  - FSCRYPT_POLICY_FLAGS_PAD_*: The amount of NUL padding to use when
    encrypting filenames. If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32
    (0x3).
  - FSCRYPT_POLICY_FLAG_DIRECT_KEY: See `DIRECT_KEY policies`_.
  - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64: See `IV_INO_LBLK_64
    policies`_. This is mutually exclusive with DIRECT_KEY and is not
    supported on v1 policies.

- For v2 encryption policies, ``__reserved`` must be zeroed.
- For v1 encryption policies, ``master_key_descriptor`` specifies how
  to find the master key in a keyring; see `Adding keys`_. It is up
  to userspace to choose a unique ``master_key_descriptor`` for each
  master key. The e4crypt and fscrypt tools use the first 8 bytes of
  ``SHA-512(SHA-512(master_key))``, but this particular scheme is not
  required. Also, the master key need not be in the keyring yet when
  FS_IOC_SET_ENCRYPTION_POLICY is executed. However, it must be added
  before any files can be created in the encrypted directory.

  For v2 encryption policies, ``master_key_descriptor`` has been
  replaced with ``master_key_identifier``, which is longer and cannot
  be arbitrarily chosen. Instead, the key must first be added using
  `FS_IOC_ADD_ENCRYPTION_KEY`_. Then, the ``key_spec.u.identifier``
  the kernel returned in the :c:type:`struct fscrypt_add_key_arg` must
  be used as the ``master_key_identifier`` in the :c:type:`struct
  fscrypt_policy_v2`.
If the file is not yet encrypted, then FS_IOC_SET_ENCRYPTION_POLICY
verifies that the file is an empty directory. If so, the specified
encryption policy is assigned to the directory, turning it into an

@@ -327,6 +492,15 @@ policy exactly matches the actual one. If they match, then the ioctl
returns 0. Otherwise, it fails with EEXIST. This works on both
regular files and directories, including nonempty directories.

When a v2 encryption policy is assigned to a directory, it is also
required that either the specified key has been added by the current
user or that the caller has CAP_FOWNER in the initial user namespace.
(This is needed to prevent a user from encrypting their data with
another user's key.) The key must remain added while
FS_IOC_SET_ENCRYPTION_POLICY is executing. However, if the new
encrypted directory does not need to be accessed immediately, then the
key can be removed right away afterwards.

Note that the ext4 filesystem does not allow the root directory to be
encrypted, even if it is empty. Users who want to encrypt an entire
filesystem with one key should consider using dm-crypt instead.
@@ -339,7 +513,11 @@ FS_IOC_SET_ENCRYPTION_POLICY can fail with the following errors:

- ``EEXIST``: the file is already encrypted with an encryption policy
  different from the one specified
- ``EINVAL``: an invalid encryption policy was specified (invalid
  version, mode(s), or flags; or reserved bits were set)
- ``ENOKEY``: a v2 encryption policy was specified, but the key with
  the specified ``master_key_identifier`` has not been added, nor does
  the process have the CAP_FOWNER capability in the initial user
  namespace
- ``ENOTDIR``: the file is unencrypted and is a regular file, not a
  directory
- ``ENOTEMPTY``: the file is unencrypted and is a nonempty directory
@@ -358,25 +536,79 @@ FS_IOC_SET_ENCRYPTION_POLICY can fail with the following errors:

Getting an encryption policy
----------------------------

Two ioctls are available to get a file's encryption policy:

- `FS_IOC_GET_ENCRYPTION_POLICY_EX`_
- `FS_IOC_GET_ENCRYPTION_POLICY`_

The extended (_EX) version of the ioctl is more general and is
recommended to use when possible. However, on older kernels only the
original ioctl is available. Applications should try the extended
version, and if it fails with ENOTTY fall back to the original
version.
FS_IOC_GET_ENCRYPTION_POLICY_EX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The FS_IOC_GET_ENCRYPTION_POLICY_EX ioctl retrieves the encryption
policy, if any, for a directory or regular file. No additional
permissions are required beyond the ability to open the file. It
takes in a pointer to a :c:type:`struct fscrypt_get_policy_ex_arg`,
defined as follows::

    struct fscrypt_get_policy_ex_arg {
            __u64 policy_size; /* input/output */
            union {
                    __u8 version;
                    struct fscrypt_policy_v1 v1;
                    struct fscrypt_policy_v2 v2;
            } policy; /* output */
    };

The caller must initialize ``policy_size`` to the size available for
the policy struct, i.e. ``sizeof(arg.policy)``.

On success, the policy struct is returned in ``policy``, and its
actual size is returned in ``policy_size``. ``policy.version`` should
be checked to determine the version of policy returned. Note that the
version code for the "v1" policy is actually 0 (FSCRYPT_POLICY_V1).

FS_IOC_GET_ENCRYPTION_POLICY_EX can fail with the following errors:

- ``EINVAL``: the file is encrypted, but it uses an unrecognized
  encryption policy version
- ``ENODATA``: the file is not encrypted
- ``ENOTTY``: this type of filesystem does not implement encryption,
  or this kernel is too old to support FS_IOC_GET_ENCRYPTION_POLICY_EX
  (try FS_IOC_GET_ENCRYPTION_POLICY instead)
- ``EOPNOTSUPP``: the kernel was not configured with encryption
  support for this filesystem, or the filesystem superblock has not
  had encryption enabled on it
- ``EOVERFLOW``: the file is encrypted and uses a recognized
  encryption policy version, but the policy struct does not fit into
  the provided buffer
Note: if you only need to know whether a file is encrypted or not, on
most filesystems it is also possible to use the FS_IOC_GETFLAGS ioctl
and check for FS_ENCRYPT_FL, or to use the statx() system call and
check for STATX_ATTR_ENCRYPTED in stx_attributes.

FS_IOC_GET_ENCRYPTION_POLICY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The FS_IOC_GET_ENCRYPTION_POLICY ioctl can also retrieve the
encryption policy, if any, for a directory or regular file. However,
unlike `FS_IOC_GET_ENCRYPTION_POLICY_EX`_,
FS_IOC_GET_ENCRYPTION_POLICY only supports the original policy
version. It takes in a pointer directly to a :c:type:`struct
fscrypt_policy_v1` rather than a :c:type:`struct
fscrypt_get_policy_ex_arg`.

The error codes for FS_IOC_GET_ENCRYPTION_POLICY are the same as those
for FS_IOC_GET_ENCRYPTION_POLICY_EX, except that
FS_IOC_GET_ENCRYPTION_POLICY also returns ``EINVAL`` if the file is
encrypted using a newer encryption policy version.
Getting the per-filesystem salt
-------------------------------

@@ -392,8 +624,115 @@ generate and manage any needed salt(s) in userspace.

Adding keys
-----------

FS_IOC_ADD_ENCRYPTION_KEY
~~~~~~~~~~~~~~~~~~~~~~~~~
The FS_IOC_ADD_ENCRYPTION_KEY ioctl adds a master encryption key to
the filesystem, making all files on the filesystem which were
encrypted using that key appear "unlocked", i.e. in plaintext form.
It can be executed on any file or directory on the target filesystem,
but using the filesystem's root directory is recommended. It takes in
a pointer to a :c:type:`struct fscrypt_add_key_arg`, defined as
follows::

    struct fscrypt_add_key_arg {
            struct fscrypt_key_specifier key_spec;
            __u32 raw_size;
            __u32 __reserved[9];
            __u8 raw[];
    };

    #define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR  1
    #define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER  2

    struct fscrypt_key_specifier {
            __u32 type;     /* one of FSCRYPT_KEY_SPEC_TYPE_* */
            __u32 __reserved;
            union {
                    __u8 __reserved[32]; /* reserve some extra space */
                    __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
                    __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
            } u;
    };

:c:type:`struct fscrypt_add_key_arg` must be zeroed, then initialized
as follows:
- If the key is being added for use by v1 encryption policies, then
  ``key_spec.type`` must contain FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR, and
  ``key_spec.u.descriptor`` must contain the descriptor of the key
  being added, corresponding to the value in the
  ``master_key_descriptor`` field of :c:type:`struct
  fscrypt_policy_v1`. To add this type of key, the calling process
  must have the CAP_SYS_ADMIN capability in the initial user
  namespace.

  Alternatively, if the key is being added for use by v2 encryption
  policies, then ``key_spec.type`` must contain
  FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER, and ``key_spec.u.identifier`` is
  an *output* field which the kernel fills in with a cryptographic
  hash of the key. To add this type of key, the calling process does
  not need any privileges. However, the number of keys that can be
  added is limited by the user's quota for the keyrings service (see
  ``Documentation/security/keys/core.rst``).

- ``raw_size`` must be the size of the ``raw`` key provided, in bytes.

- ``raw`` is a variable-length field which must contain the actual
  key, ``raw_size`` bytes long.
For v2 policy keys, the kernel keeps track of which user (identified
by effective user ID) added the key, and only allows the key to be
removed by that user --- or by "root", if they use
`FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_.

However, if another user has added the key, it may be desirable to
prevent that other user from unexpectedly removing it. Therefore,
FS_IOC_ADD_ENCRYPTION_KEY may also be used to add a v2 policy key
*again*, even if it's already added by other user(s). In this case,
FS_IOC_ADD_ENCRYPTION_KEY will just install a claim to the key for the
current user, rather than actually add the key again (but the raw key
must still be provided, as a proof of knowledge).

FS_IOC_ADD_ENCRYPTION_KEY returns 0 if either the key or a claim to
the key was added, or if it already existed.
FS_IOC_ADD_ENCRYPTION_KEY can fail with the following errors:

- ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR was specified, but the
  caller does not have the CAP_SYS_ADMIN capability in the initial
  user namespace
- ``EDQUOT``: the key quota for this user would be exceeded by adding
  the key
- ``EINVAL``: invalid key size or key specifier type, or reserved bits
  were set
- ``ENOTTY``: this type of filesystem does not implement encryption
- ``EOPNOTSUPP``: the kernel was not configured with encryption
  support for this filesystem, or the filesystem superblock has not
  had encryption enabled on it
Legacy method
~~~~~~~~~~~~~

For v1 encryption policies, a master encryption key can also be
provided by adding it to a process-subscribed keyring, e.g. to a
session keyring, or to a user keyring if the user keyring is linked
into the session keyring.

This method is deprecated (and not supported for v2 encryption
policies) for several reasons. First, it cannot be used in
combination with FS_IOC_REMOVE_ENCRYPTION_KEY (see `Removing keys`_),
so for removing a key a workaround such as keyctl_unlink() in
combination with ``sync; echo 2 > /proc/sys/vm/drop_caches`` would
have to be used. Second, it doesn't match the fact that the
locked/unlocked status of encrypted files (i.e. whether they appear to
be in plaintext form or in ciphertext form) is global. This mismatch
has caused much confusion as well as real problems when processes
running under different UIDs, such as a ``sudo`` command, need to
access encrypted files.
Nevertheless, to add a key to one of the process-subscribed keyrings,
the add_key() system call can be used (see:
``Documentation/security/keys/core.rst``). The key type must be
"logon"; keys of this type are kept in kernel memory and cannot be
read back by userspace. The key description must be "fscrypt:"
@@ -401,12 +740,12 @@ followed by the 16-character lower case hex representation of the
``master_key_descriptor`` that was set in the encryption policy. The
key payload must conform to the following structure::
    #define FSCRYPT_MAX_KEY_SIZE 64

    struct fscrypt_key {
            __u32 mode;
            __u8 raw[FSCRYPT_MAX_KEY_SIZE];
            __u32 size;
    };

``mode`` is ignored; just set it to 0. The actual key is provided in
@@ -418,26 +757,194 @@ with a filesystem-specific prefix such as "ext4:". However, the
filesystem-specific prefixes are deprecated and should not be used in
new programs.
Removing keys
-------------

Two ioctls are available for removing a key that was added by
`FS_IOC_ADD_ENCRYPTION_KEY`_:
- `FS_IOC_REMOVE_ENCRYPTION_KEY`_
- `FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_

These two ioctls differ only in cases where v2 policy keys are added
or removed by non-root users.

These ioctls don't work on keys that were added via the legacy
process-subscribed keyrings mechanism.

Before using these ioctls, read the `Kernel memory compromise`_
section for a discussion of the security goals and limitations of
these ioctls.
FS_IOC_REMOVE_ENCRYPTION_KEY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The FS_IOC_REMOVE_ENCRYPTION_KEY ioctl removes a claim to a master
encryption key from the filesystem, and possibly removes the key
itself. It can be executed on any file or directory on the target
filesystem, but using the filesystem's root directory is recommended.
It takes in a pointer to a :c:type:`struct fscrypt_remove_key_arg`,
defined as follows::

    struct fscrypt_remove_key_arg {
            struct fscrypt_key_specifier key_spec;
    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY   0x00000001
    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS  0x00000002
            __u32 removal_status_flags; /* output */
            __u32 __reserved[5];
    };
This structure must be zeroed, then initialized as follows:

- The key to remove is specified by ``key_spec``:

    - To remove a key used by v1 encryption policies, set
      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
      in ``key_spec.u.descriptor``. To remove this type of key, the
      calling process must have the CAP_SYS_ADMIN capability in the
      initial user namespace.

    - To remove a key used by v2 encryption policies, set
      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
      in ``key_spec.u.identifier``.

For v2 policy keys, this ioctl is usable by non-root users. However,
to make this possible, it actually just removes the current user's
claim to the key, undoing a single call to FS_IOC_ADD_ENCRYPTION_KEY.
Only after all claims are removed is the key really removed.
For example, if FS_IOC_ADD_ENCRYPTION_KEY was called with uid 1000,
then the key will be "claimed" by uid 1000, and
FS_IOC_REMOVE_ENCRYPTION_KEY will only succeed as uid 1000. Or, if
both uids 1000 and 2000 added the key, then for each uid
FS_IOC_REMOVE_ENCRYPTION_KEY will only remove their own claim. Only
once *both* are removed is the key really removed. (Think of it like
unlinking a file that may have hard links.)

If FS_IOC_REMOVE_ENCRYPTION_KEY really removes the key, it will also
try to "lock" all files that had been unlocked with the key. It won't
lock files that are still in-use, so this ioctl is expected to be used
in cooperation with userspace ensuring that none of the files are
still open. However, if necessary, this ioctl can be executed again
later to retry locking any remaining files.

FS_IOC_REMOVE_ENCRYPTION_KEY returns 0 if either the key was removed
(but may still have files remaining to be locked), the user's claim to
the key was removed, or the key was already removed but had files
remaining to be locked, so the ioctl retried locking them. In any
of these cases, ``removal_status_flags`` is filled in with the
following informational status flags:
- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY``: set if some file(s)
  are still in-use. Not guaranteed to be set in the case where only
  the user's claim to the key was removed.
- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS``: set if only the
  user's claim to the key was removed, not the key itself

FS_IOC_REMOVE_ENCRYPTION_KEY can fail with the following errors:

- ``EACCES``: The FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR key specifier type
  was specified, but the caller does not have the CAP_SYS_ADMIN
  capability in the initial user namespace
- ``EINVAL``: invalid key specifier type, or reserved bits were set
- ``ENOKEY``: the key object was not found at all, i.e. it was never
  added in the first place or was already fully removed including all
  files locked; or, the user does not have a claim to the key (but
  someone else does).
- ``ENOTTY``: this type of filesystem does not implement encryption
- ``EOPNOTSUPP``: the kernel was not configured with encryption
  support for this filesystem, or the filesystem superblock has not
  had encryption enabled on it
FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS is exactly the same as
`FS_IOC_REMOVE_ENCRYPTION_KEY`_, except that for v2 policy keys, the
ALL_USERS version of the ioctl will remove all users' claims to the
key, not just the current user's. I.e., the key itself will always be
removed, no matter how many users have added it. This difference is
only meaningful if non-root users are adding and removing keys.

Because of this, FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS also requires
"root", namely the CAP_SYS_ADMIN capability in the initial user
namespace. Otherwise it will fail with EACCES.
Getting key status
------------------

FS_IOC_GET_ENCRYPTION_KEY_STATUS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The FS_IOC_GET_ENCRYPTION_KEY_STATUS ioctl retrieves the status of a
master encryption key. It can be executed on any file or directory on
the target filesystem, but using the filesystem's root directory is
recommended. It takes in a pointer to a :c:type:`struct
fscrypt_get_key_status_arg`, defined as follows::

    struct fscrypt_get_key_status_arg {
            /* input */
            struct fscrypt_key_specifier key_spec;
            __u32 __reserved[6];

            /* output */
    #define FSCRYPT_KEY_STATUS_ABSENT                1
    #define FSCRYPT_KEY_STATUS_PRESENT               2
    #define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED  3
            __u32 status;
    #define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF    0x00000001
            __u32 status_flags;
            __u32 user_count;
            __u32 __out_reserved[13];
    };
The caller must zero all input fields, then fill in ``key_spec``:

- To get the status of a key for v1 encryption policies, set
  ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
  in ``key_spec.u.descriptor``.

- To get the status of a key for v2 encryption policies, set
  ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
  in ``key_spec.u.identifier``.

On success, 0 is returned and the kernel fills in the output fields:

- ``status`` indicates whether the key is absent, present, or
  incompletely removed. Incompletely removed means that the master
  secret has been removed, but some files are still in use; i.e.,
  `FS_IOC_REMOVE_ENCRYPTION_KEY`_ returned 0 but set the informational
  status flag FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY.

- ``status_flags`` can contain the following flags:

  - ``FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF`` indicates that the key
    has been added by the current user. This is only set for keys
    identified by ``identifier`` rather than by ``descriptor``.

- ``user_count`` specifies the number of users who have added the key.
  This is only set for keys identified by ``identifier`` rather than
  by ``descriptor``.
FS_IOC_GET_ENCRYPTION_KEY_STATUS can fail with the following errors:

- ``EINVAL``: invalid key specifier type, or reserved bits were set
- ``ENOTTY``: this type of filesystem does not implement encryption
- ``EOPNOTSUPP``: the kernel was not configured with encryption
  support for this filesystem, or the filesystem superblock has not
  had encryption enabled on it

Among other use cases, FS_IOC_GET_ENCRYPTION_KEY_STATUS can be useful
for determining whether the key for a given encrypted directory needs
to be added before prompting the user for the passphrase needed to
derive the key.
FS_IOC_GET_ENCRYPTION_KEY_STATUS can only get the status of keys in
the filesystem-level keyring, i.e. the keyring managed by
`FS_IOC_ADD_ENCRYPTION_KEY`_ and `FS_IOC_REMOVE_ENCRYPTION_KEY`_. It
cannot get the status of a key that has only been added for use by v1
encryption policies using the legacy mechanism involving
process-subscribed keyrings.
Access semantics
================

@@ -500,7 +1007,7 @@ Without the key
Some filesystem operations may be performed on encrypted regular
files, directories, and symlinks even before their encryption key has
been added, or after their encryption key has been removed:

- File metadata may be read, e.g. using stat().
@@ -565,33 +1072,44 @@ Encryption context
------------------

An encryption policy is represented on-disk by a :c:type:`struct
fscrypt_context_v1` or a :c:type:`struct fscrypt_context_v2`. It is
up to individual filesystems to decide where to store it, but normally
it would be stored in a hidden extended attribute. It should *not* be
exposed by the xattr-related system calls such as getxattr() and
setxattr() because of the special semantics of the encryption xattr.
(In particular, there would be much confusion if an encryption policy
were to be added to or removed from anything other than an empty
directory.) These structs are defined as follows::
    #define FSCRYPT_KEY_DESCRIPTOR_SIZE   8
    #define FS_KEY_DERIVATION_NONCE_SIZE  16

    struct fscrypt_context_v1 {
            u8 version;
            u8 contents_encryption_mode;
            u8 filenames_encryption_mode;
            u8 flags;
            u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
            u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
    };

    #define FSCRYPT_KEY_IDENTIFIER_SIZE  16

    struct fscrypt_context_v2 {
            u8 version;
            u8 contents_encryption_mode;
            u8 filenames_encryption_mode;
            u8 flags;
            u8 __reserved[4];
            u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
            u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
    };

The context structs contain the same information as the corresponding
policy structs (see `Setting an encryption policy`_), except that the
context structs also contain a nonce. The nonce is randomly generated
by the kernel and is used as KDF input or as a tweak to cause
different files to be encrypted differently; see `Per-file keys`_ and
`DIRECT_KEY policies`_.
Data path changes
-----------------
@@ -6013,6 +6013,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt.git
S: Supported
F: fs/crypto/
F: include/linux/fscrypt*.h
F: include/uapi/linux/fscrypt.h
F: Documentation/filesystems/fscrypt.rst

FSI-ATTACHED I2C DRIVER
@@ -81,6 +81,7 @@ CONFIG_SHADOW_CALL_STACK=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_GKI_HACKS_TO_FIX=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m

@@ -222,6 +223,7 @@ CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_CRYPTO=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -392,6 +394,7 @@ CONFIG_EXT4_FS_SECURITY=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
# CONFIG_DNOTIFY is not set

@@ -50,6 +50,7 @@ CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_GKI_HACKS_TO_FIX=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
@@ -329,6 +330,7 @@ CONFIG_EXT4_ENCRYPTION=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
# CONFIG_DNOTIFY is not set
@@ -200,6 +200,16 @@ config BLK_SED_OPAL
	  Enabling this option enables users to setup/unlock/lock
	  Locking ranges for SED devices using the Opal protocol.

config BLK_INLINE_ENCRYPTION
	bool "Enable inline encryption support in block layer"
	select CRYPTO
	select CRYPTO_BLKCIPHER
	help
	  Build the blk-crypto subsystem.
	  Enabling this lets the block layer handle encryption,
	  so users can take advantage of inline encryption
	  hardware if present.

menu "Partition Types"
source "block/partitions/Kconfig"

@@ -37,3 +37,5 @@ obj-$(CONFIG_BLK_WBT) += blk-wbt.o
obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o
obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o
obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o bio-crypt-ctx.o \
					blk-crypto.o
145
block/bio-crypt-ctx.c
Normal file
@@ -0,0 +1,145 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright 2019 Google LLC
 */

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/keyslot-manager.h>

static int num_prealloc_crypt_ctxs = 128;
static struct kmem_cache *bio_crypt_ctx_cache;
static mempool_t *bio_crypt_ctx_pool;

int bio_crypt_ctx_init(void)
{
	bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
	if (!bio_crypt_ctx_cache)
		return -ENOMEM;

	bio_crypt_ctx_pool = mempool_create_slab_pool(
					num_prealloc_crypt_ctxs,
					bio_crypt_ctx_cache);
	if (!bio_crypt_ctx_pool)
		return -ENOMEM;

	return 0;
}

struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
{
	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
}
EXPORT_SYMBOL(bio_crypt_alloc_ctx);

void bio_crypt_free_ctx(struct bio *bio)
{
	mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
	bio->bi_crypt_context = NULL;
}
EXPORT_SYMBOL(bio_crypt_free_ctx);

int bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
{
	/*
	 * If a bio is swhandled, then it will be decrypted when bio_endio
	 * is called. As we only want the data to be decrypted once, copies
	 * of the bio must not have a crypt context.
	 */
	if (!bio_has_crypt_ctx(src) || bio_crypt_swhandled(src))
		return 0;

	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
	if (!dst->bi_crypt_context)
		return -ENOMEM;

	*dst->bi_crypt_context = *src->bi_crypt_context;

	if (bio_crypt_has_keyslot(src))
		keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm,
					 src->bi_crypt_context->keyslot);

	return 0;
}
EXPORT_SYMBOL(bio_crypt_clone);

bool bio_crypt_should_process(struct bio *bio, struct request_queue *q)
{
	if (!bio_has_crypt_ctx(bio))
		return false;

	if (q->ksm != bio->bi_crypt_context->processing_ksm)
		return false;

	WARN_ON(!bio_crypt_has_keyslot(bio));
	return true;
}
EXPORT_SYMBOL(bio_crypt_should_process);

/*
 * Checks that two bio crypt contexts are compatible - i.e. that
 * they are mergeable except for data_unit_num continuity.
 */
bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
{
	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;

	if (bio_has_crypt_ctx(b_1) != bio_has_crypt_ctx(b_2))
		return false;

	if (!bio_has_crypt_ctx(b_1))
		return true;

	return bc1->keyslot == bc2->keyslot &&
	       bc1->data_unit_size_bits == bc2->data_unit_size_bits;
}

/*
 * Checks that two bio crypt contexts are compatible, and also
 * that their data_unit_nums are continuous (and can hence be merged)
 */
bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
				  unsigned int b1_sectors,
				  struct bio *b_2)
{
	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;

	if (!bio_crypt_ctx_compatible(b_1, b_2))
		return false;

	return !bio_has_crypt_ctx(b_1) ||
		(bc1->data_unit_num +
		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
		bc2->data_unit_num);
}
void bio_crypt_ctx_release_keyslot(struct bio *bio)
{
	struct bio_crypt_ctx *crypt_ctx = bio->bi_crypt_context;

	keyslot_manager_put_slot(crypt_ctx->processing_ksm, crypt_ctx->keyslot);
	bio->bi_crypt_context->processing_ksm = NULL;
	bio->bi_crypt_context->keyslot = -1;
}

int bio_crypt_ctx_acquire_keyslot(struct bio *bio, struct keyslot_manager *ksm)
{
	int slot;
	enum blk_crypto_mode_num crypto_mode = bio_crypto_mode(bio);

	if (!ksm)
		return -ENOMEM;

	slot = keyslot_manager_get_slot_for_key(ksm,
			bio_crypt_raw_key(bio), crypto_mode,
			1 << bio->bi_crypt_context->data_unit_size_bits);
	if (slot < 0)
		return slot;

	bio_crypt_set_keyslot(bio, slot, ksm);
	return 0;
}
23
block/bio.c
@@ -29,6 +29,7 @@
 #include <linux/workqueue.h>
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -253,6 +254,7 @@ static void bio_free(struct bio *bio)
 	struct bio_set *bs = bio->bi_pool;
 	void *p;
 
+	bio_crypt_free_ctx(bio);
 	bio_uninit(bio);
 
 	if (bs) {
@@ -632,15 +634,15 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 
 	__bio_clone_fast(b, bio);
 
-	if (bio_integrity(bio)) {
-		int ret;
-
-		ret = bio_integrity_clone(b, bio, gfp_mask);
-
-		if (ret < 0) {
-			bio_put(b);
-			return NULL;
-		}
+	if (bio_crypt_clone(b, bio, gfp_mask) < 0) {
+		bio_put(b);
+		return NULL;
+	}
+
+	if (bio_integrity(bio) &&
+	    bio_integrity_clone(b, bio, gfp_mask) < 0) {
+		bio_put(b);
+		return NULL;
 	}
 
 	return b;
@@ -953,6 +955,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
 
+	bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
 EXPORT_SYMBOL(bio_advance);
@@ -1751,6 +1754,10 @@ void bio_endio(struct bio *bio)
 again:
 	if (!bio_remaining_done(bio))
 		return;
 
+	if (!blk_crypto_endio(bio))
+		return;
+
 	if (!bio_integrity_endio(bio))
 		return;
 
@@ -36,6 +36,7 @@
 #include <linux/debugfs.h>
 #include <linux/bpf.h>
 #include <linux/psi.h>
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -2462,7 +2463,9 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			ret = q->make_request_fn(q, bio);
+
+			if (!blk_crypto_submit_bio(&bio))
+				ret = q->make_request_fn(q, bio);
 
 			/* sort new bios into those for a lower level
 			 * and those for the same level
@@ -2516,6 +2519,9 @@ blk_qc_t direct_make_request(struct bio *bio)
 	if (!generic_make_request_checks(bio))
 		return BLK_QC_T_NONE;
 
+	if (blk_crypto_submit_bio(&bio))
+		return BLK_QC_T_NONE;
+
 	if (unlikely(blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0))) {
 		if (nowait && !blk_queue_dying(q))
 			bio->bi_status = BLK_STS_AGAIN;
@@ -3992,5 +3998,11 @@ int __init blk_dev_init(void)
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	if (bio_crypt_ctx_init() < 0)
+		panic("Failed to allocate mem for bio crypt ctxs\n");
+
+	if (blk_crypto_init() < 0)
+		panic("Failed to init blk-crypto\n");
+
 	return 0;
 }
797
block/blk-crypto.c
Normal file
@@ -0,0 +1,797 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright 2019 Google LLC
 */

/*
 * Refer to Documentation/block/inline-encryption.rst for detailed explanation.
 */

#define pr_fmt(fmt) "blk-crypto: " fmt

#include <linux/blk-crypto.h>
#include <linux/keyslot-manager.h>
#include <linux/mempool.h>
#include <linux/blk-cgroup.h>
#include <linux/crypto.h>
#include <crypto/skcipher.h>
#include <crypto/algapi.h>
#include <linux/module.h>
#include <linux/sched/mm.h>

/* Represents a crypto mode supported by blk-crypto */
struct blk_crypto_mode {
	const char *cipher_str; /* crypto API name (for fallback case) */
	size_t keysize; /* key size in bytes */
};

static const struct blk_crypto_mode blk_crypto_modes[] = {
	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
		.cipher_str = "xts(aes)",
		.keysize = 64,
	},
};

static unsigned int num_prealloc_bounce_pg = 32;
module_param(num_prealloc_bounce_pg, uint, 0);
MODULE_PARM_DESC(num_prealloc_bounce_pg,
	"Number of preallocated bounce pages for blk-crypto to use during crypto API fallback encryption");

#define BLK_CRYPTO_MAX_KEY_SIZE 64
static int blk_crypto_num_keyslots = 100;
module_param_named(num_keyslots, blk_crypto_num_keyslots, int, 0);
MODULE_PARM_DESC(num_keyslots,
		 "Number of keyslots for crypto API fallback in blk-crypto.");

static struct blk_crypto_keyslot {
	struct crypto_skcipher *tfm;
	enum blk_crypto_mode_num crypto_mode;
	u8 key[BLK_CRYPTO_MAX_KEY_SIZE];
	struct crypto_skcipher *tfms[ARRAY_SIZE(blk_crypto_modes)];
} *blk_crypto_keyslots;

/*
 * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate
 * all of a mode's tfms when that mode starts being used. Since each mode may
 * need all the keyslots at some point, each mode needs its own tfm for each
 * keyslot; thus, a keyslot may contain tfms for multiple modes. However, to
 * match the behavior of real inline encryption hardware (which only supports a
 * single encryption context per keyslot), we only allow one tfm per keyslot to
 * be used at a time - the rest of the unused tfms have their keys cleared.
 */
static struct mutex tfms_lock[ARRAY_SIZE(blk_crypto_modes)];
static bool tfms_inited[ARRAY_SIZE(blk_crypto_modes)];

struct work_mem {
	struct work_struct crypto_work;
	struct bio *bio;
};

/* The following few vars are only used during the crypto API fallback */
static struct keyslot_manager *blk_crypto_ksm;
static struct workqueue_struct *blk_crypto_wq;
static mempool_t *blk_crypto_page_pool;
static struct kmem_cache *blk_crypto_work_mem_cache;

bool bio_crypt_swhandled(struct bio *bio)
{
	return bio_has_crypt_ctx(bio) &&
	       bio->bi_crypt_context->processing_ksm == blk_crypto_ksm;
}

static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE];
static void evict_keyslot(unsigned int slot)
{
	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
	enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode;
	int err;

	WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID);

	/* Clear the key in the skcipher */
	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
				     blk_crypto_modes[crypto_mode].keysize);
	WARN_ON(err);
	memzero_explicit(slotp->key, BLK_CRYPTO_MAX_KEY_SIZE);
	slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
}

static int blk_crypto_keyslot_program(void *priv, const u8 *key,
				      enum blk_crypto_mode_num crypto_mode,
				      unsigned int data_unit_size,
				      unsigned int slot)
{
	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
	const struct blk_crypto_mode *mode = &blk_crypto_modes[crypto_mode];
	size_t keysize = mode->keysize;
	int err;

	if (crypto_mode != slotp->crypto_mode &&
	    slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) {
		evict_keyslot(slot);
	}

	if (!slotp->tfms[crypto_mode])
		return -ENOMEM;
	slotp->crypto_mode = crypto_mode;
	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key, keysize);

	if (err) {
		evict_keyslot(slot);
		return err;
	}

	memcpy(slotp->key, key, keysize);

	return 0;
}

static int blk_crypto_keyslot_evict(void *priv, const u8 *key,
				    enum blk_crypto_mode_num crypto_mode,
				    unsigned int data_unit_size,
				    unsigned int slot)
{
	evict_keyslot(slot);
	return 0;
}

static int blk_crypto_keyslot_find(void *priv,
				   const u8 *key,
				   enum blk_crypto_mode_num crypto_mode,
				   unsigned int data_unit_size_bytes)
{
	int slot;
	const size_t keysize = blk_crypto_modes[crypto_mode].keysize;

	for (slot = 0; slot < blk_crypto_num_keyslots; slot++) {
		if (blk_crypto_keyslots[slot].crypto_mode == crypto_mode &&
		    !crypto_memneq(blk_crypto_keyslots[slot].key, key, keysize))
			return slot;
	}

	return -ENOKEY;
}

static bool blk_crypto_mode_supported(void *priv,
				      enum blk_crypto_mode_num crypt_mode,
				      unsigned int data_unit_size)
{
	/* All blk_crypto_modes are required to have a crypto API fallback. */
	return true;
}

/*
 * The crypto API fallback KSM ops - only used for a bio when it specifies a
 * blk_crypto_mode for which we failed to get a keyslot in the device's inline
 * encryption hardware (which probably means the device doesn't have inline
 * encryption hardware that supports that crypto mode).
 */
static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
	.keyslot_program	= blk_crypto_keyslot_program,
	.keyslot_evict		= blk_crypto_keyslot_evict,
	.keyslot_find		= blk_crypto_keyslot_find,
	.crypto_mode_supported	= blk_crypto_mode_supported,
};
static void blk_crypto_encrypt_endio(struct bio *enc_bio)
{
	struct bio *src_bio = enc_bio->bi_private;
	int i;

	for (i = 0; i < enc_bio->bi_vcnt; i++)
		mempool_free(enc_bio->bi_io_vec[i].bv_page,
			     blk_crypto_page_pool);

	src_bio->bi_status = enc_bio->bi_status;

	bio_put(enc_bio);
	bio_endio(src_bio);
}

static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
{
	struct bvec_iter iter;
	struct bio_vec bv;
	struct bio *bio;

	bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
	if (!bio)
		return NULL;
	bio->bi_disk = bio_src->bi_disk;
	bio->bi_opf = bio_src->bi_opf;
	bio->bi_ioprio = bio_src->bi_ioprio;
	bio->bi_write_hint = bio_src->bi_write_hint;
	bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
	bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;

	bio_for_each_segment(bv, bio_src, iter)
		bio->bi_io_vec[bio->bi_vcnt++] = bv;

	if (bio_integrity(bio_src) &&
	    bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) {
		bio_put(bio);
		return NULL;
	}

	bio_clone_blkcg_association(bio, bio_src);

	return bio;
}

/* Check that all I/O segments are data unit aligned */
static int bio_crypt_check_alignment(struct bio *bio)
{
	int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits;
	struct bvec_iter iter;
	struct bio_vec bv;

	bio_for_each_segment(bv, bio, iter) {
		if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
			return -EIO;
	}
	return 0;
}
static int blk_crypto_alloc_cipher_req(struct bio *src_bio,
				       struct skcipher_request **ciph_req_ptr,
				       struct crypto_wait *wait)
{
	int slot;
	struct skcipher_request *ciph_req;
	struct blk_crypto_keyslot *slotp;

	slot = bio_crypt_get_keyslot(src_bio);
	slotp = &blk_crypto_keyslots[slot];
	ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
					  GFP_NOIO);
	if (!ciph_req) {
		src_bio->bi_status = BLK_STS_RESOURCE;
		return -ENOMEM;
	}

	skcipher_request_set_callback(ciph_req,
				      CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, wait);
	*ciph_req_ptr = ciph_req;
	return 0;
}

static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
{
	struct bio *bio = *bio_ptr;
	unsigned int i = 0;
	unsigned int num_sectors = 0;
	struct bio_vec bv;
	struct bvec_iter iter;

	bio_for_each_segment(bv, bio, iter) {
		num_sectors += bv.bv_len >> SECTOR_SHIFT;
		if (++i == BIO_MAX_PAGES)
			break;
	}
	if (num_sectors < bio_sectors(bio)) {
		struct bio *split_bio;

		split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
		if (!split_bio) {
			bio->bi_status = BLK_STS_RESOURCE;
			return -ENOMEM;
		}
		bio_chain(split_bio, bio);
		generic_make_request(bio);
		*bio_ptr = split_bio;
	}
	return 0;
}

/*
 * The crypto API fallback's encryption routine.
 * Allocate a bounce bio for encryption, encrypt the input bio using
 * crypto API, and replace *bio_ptr with the bounce bio. May split input
 * bio if it's too large.
 */
static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
{
	struct bio *src_bio;
	struct skcipher_request *ciph_req = NULL;
	DECLARE_CRYPTO_WAIT(wait);
	int err = 0;
	u64 curr_dun;
	union {
		__le64 dun;
		u8 bytes[16];
	} iv;
	struct scatterlist src, dst;
	struct bio *enc_bio;
	struct bio_vec *enc_bvec;
	int i, j;
	int data_unit_size;

	/* Split the bio if it's too big for single page bvec */
	err = blk_crypto_split_bio_if_needed(bio_ptr);
	if (err)
		return err;

	src_bio = *bio_ptr;
	data_unit_size = 1 << src_bio->bi_crypt_context->data_unit_size_bits;

	/* Allocate bounce bio for encryption */
	enc_bio = blk_crypto_clone_bio(src_bio);
	if (!enc_bio) {
		src_bio->bi_status = BLK_STS_RESOURCE;
		return -ENOMEM;
	}

	/*
	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
	 * for the algorithm and key specified for this bio.
	 */
	err = bio_crypt_ctx_acquire_keyslot(src_bio, blk_crypto_ksm);
	if (err) {
		src_bio->bi_status = BLK_STS_IOERR;
		goto out_put_enc_bio;
	}

	/* and then allocate an skcipher_request for it */
	err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait);
	if (err)
		goto out_release_keyslot;

	curr_dun = bio_crypt_data_unit_num(src_bio);
	sg_init_table(&src, 1);
	sg_init_table(&dst, 1);

	skcipher_request_set_crypt(ciph_req, &src, &dst,
				   data_unit_size, iv.bytes);

	/* Encrypt each page in the bounce bio */
	for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt;
	     enc_bvec++, i++) {
		struct page *plaintext_page = enc_bvec->bv_page;
		struct page *ciphertext_page =
			mempool_alloc(blk_crypto_page_pool, GFP_NOIO);

		enc_bvec->bv_page = ciphertext_page;

		if (!ciphertext_page) {
			src_bio->bi_status = BLK_STS_RESOURCE;
			err = -ENOMEM;
			goto out_free_bounce_pages;
		}

		sg_set_page(&src, plaintext_page, data_unit_size,
			    enc_bvec->bv_offset);
		sg_set_page(&dst, ciphertext_page, data_unit_size,
			    enc_bvec->bv_offset);

		/* Encrypt each data unit in this page */
		for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
			memset(&iv, 0, sizeof(iv));
			iv.dun = cpu_to_le64(curr_dun);

			err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
					      &wait);
			if (err) {
				i++;
				src_bio->bi_status = BLK_STS_RESOURCE;
				goto out_free_bounce_pages;
			}
			curr_dun++;
			src.offset += data_unit_size;
			dst.offset += data_unit_size;
		}
	}

	enc_bio->bi_private = src_bio;
	enc_bio->bi_end_io = blk_crypto_encrypt_endio;
	*bio_ptr = enc_bio;

	enc_bio = NULL;
	err = 0;
	goto out_free_ciph_req;

out_free_bounce_pages:
	while (i > 0)
		mempool_free(enc_bio->bi_io_vec[--i].bv_page,
			     blk_crypto_page_pool);
out_free_ciph_req:
	skcipher_request_free(ciph_req);
out_release_keyslot:
	bio_crypt_ctx_release_keyslot(src_bio);
out_put_enc_bio:
	if (enc_bio)
		bio_put(enc_bio);

	return err;
}
/*
 * The crypto API fallback's main decryption routine.
 * Decrypts input bio in place.
 */
static void blk_crypto_decrypt_bio(struct work_struct *w)
{
	struct work_mem *work_mem =
		container_of(w, struct work_mem, crypto_work);
	struct bio *bio = work_mem->bio;
	struct skcipher_request *ciph_req = NULL;
	DECLARE_CRYPTO_WAIT(wait);
	struct bio_vec bv;
	struct bvec_iter iter;
	u64 curr_dun;
	union {
		__le64 dun;
		u8 bytes[16];
	} iv;
	struct scatterlist sg;
	int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits;
	int i;
	int err;

	/*
	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
	 * for the algorithm and key specified for this bio.
	 */
	if (bio_crypt_ctx_acquire_keyslot(bio, blk_crypto_ksm)) {
		bio->bi_status = BLK_STS_RESOURCE;
		goto out_no_keyslot;
	}

	/* and then allocate an skcipher_request for it */
	err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait);
	if (err)
		goto out;

	curr_dun = bio_crypt_sw_data_unit_num(bio);
	sg_init_table(&sg, 1);
	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
				   iv.bytes);

	/* Decrypt each segment in the bio */
	__bio_for_each_segment(bv, bio, iter,
			       bio->bi_crypt_context->crypt_iter) {
		struct page *page = bv.bv_page;

		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);

		/* Decrypt each data unit in the segment */
		for (i = 0; i < bv.bv_len; i += data_unit_size) {
			memset(&iv, 0, sizeof(iv));
			iv.dun = cpu_to_le64(curr_dun);
			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
					    &wait)) {
				bio->bi_status = BLK_STS_IOERR;
				goto out;
			}
			curr_dun++;
			sg.offset += data_unit_size;
		}
	}

out:
	skcipher_request_free(ciph_req);
	bio_crypt_ctx_release_keyslot(bio);
out_no_keyslot:
	kmem_cache_free(blk_crypto_work_mem_cache, work_mem);
	bio_endio(bio);
}

/* Queue bio for decryption */
static void blk_crypto_queue_decrypt_bio(struct bio *bio)
{
	struct work_mem *work_mem =
		kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC);

	if (!work_mem) {
		bio->bi_status = BLK_STS_RESOURCE;
		bio_endio(bio);
		return;
	}

	INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio);
	work_mem->bio = bio;
	queue_work(blk_crypto_wq, &work_mem->crypto_work);
}

/**
 * blk_crypto_submit_bio - handle submitting bio for inline encryption
 *
 * @bio_ptr: pointer to original bio pointer
 *
 * If the bio doesn't have inline encryption enabled or the submitter already
 * specified a keyslot for the target device, do nothing. Else, a raw key must
 * have been provided, so acquire a device keyslot for it if supported. Else,
 * use the crypto API fallback.
 *
 * When the crypto API fallback is used for encryption, blk-crypto may choose to
 * split the bio into 2 - the first one that will continue to be processed and
 * the second one that will be resubmitted via generic_make_request.
 * A bounce bio will be allocated to encrypt the contents of the aforementioned
 * "first one", and *bio_ptr will be updated to this bounce bio.
 *
 * Return: 0 if bio submission should continue; nonzero if bio_endio() was
 *	   already called so bio submission should abort.
 */
int blk_crypto_submit_bio(struct bio **bio_ptr)
{
	struct bio *bio = *bio_ptr;
	struct request_queue *q;
	int err;
	struct bio_crypt_ctx *crypt_ctx;

	if (!bio_has_crypt_ctx(bio) || !bio_has_data(bio))
		return 0;

	/*
	 * When a read bio is marked for sw decryption, its bi_iter is saved
	 * so that when we decrypt the bio later, we know what part of it was
	 * marked for sw decryption (when the bio is passed down after
	 * blk_crypto_submit_bio, it may be split or advanced, so we cannot
	 * rely on the bi_iter while decrypting in blk_crypto_endio).
	 */
	if (bio_crypt_swhandled(bio))
		return 0;

	err = bio_crypt_check_alignment(bio);
	if (err) {
		bio->bi_status = BLK_STS_IOERR;
		goto out;
	}

	crypt_ctx = bio->bi_crypt_context;
	q = bio->bi_disk->queue;

	if (bio_crypt_has_keyslot(bio)) {
		/* Key already programmed into device? */
		if (q->ksm == crypt_ctx->processing_ksm)
			return 0;

		/* Nope, release the existing keyslot. */
		bio_crypt_ctx_release_keyslot(bio);
	}

	/* Get device keyslot if supported */
	if (q->ksm) {
		err = bio_crypt_ctx_acquire_keyslot(bio, q->ksm);
		if (!err)
			return 0;

		pr_warn_once("Failed to acquire keyslot for %s (err=%d). Falling back to crypto API.\n",
			     bio->bi_disk->disk_name, err);
	}

	/* Fallback to crypto API */
	if (!READ_ONCE(tfms_inited[bio->bi_crypt_context->crypto_mode])) {
		err = -EIO;
		bio->bi_status = BLK_STS_IOERR;
		goto out;
	}

	if (bio_data_dir(bio) == WRITE) {
		/* Encrypt the data now */
		err = blk_crypto_encrypt_bio(bio_ptr);
		if (err)
			goto out;
	} else {
		/* Mark bio as swhandled */
		bio->bi_crypt_context->processing_ksm = blk_crypto_ksm;
		bio->bi_crypt_context->crypt_iter = bio->bi_iter;
		bio->bi_crypt_context->sw_data_unit_num =
			bio->bi_crypt_context->data_unit_num;
	}
	return 0;
out:
	bio_endio(*bio_ptr);
	return err;
}

/**
 * blk_crypto_endio - clean up bio w.r.t inline encryption during bio_endio
 *
 * @bio: the bio to clean up
 *
 * If blk_crypto_submit_bio decided to fall back to the crypto API for this
 * bio, we queue the bio for decryption into a workqueue, return false,
 * and call bio_endio(bio) at a later time (after the bio has been decrypted).
 *
 * If the bio is not to be decrypted by the crypto API, this function releases
 * the reference to the keyslot that blk_crypto_submit_bio got.
 *
 * Return: true if bio_endio should continue; false otherwise (bio_endio will
 *	   be called again when bio has been decrypted).
 */
bool blk_crypto_endio(struct bio *bio)
{
	if (!bio_has_crypt_ctx(bio))
		return true;

	if (bio_crypt_swhandled(bio)) {
		/*
		 * The only bios that are swhandled when they reach here
		 * are those with bio_data_dir(bio) == READ, since WRITE
		 * bios that are encrypted by the crypto API fallback are
		 * handled by blk_crypto_encrypt_endio.
		 */

		/* If there was an IO error, don't decrypt. */
		if (bio->bi_status)
			return true;

		blk_crypto_queue_decrypt_bio(bio);
		return false;
	}

	if (bio_crypt_has_keyslot(bio))
		bio_crypt_ctx_release_keyslot(bio);

	return true;
}

/**
 * blk_crypto_start_using_mode() - Allocate skciphers for a
 *				   mode_num for all keyslots
 * @mode_num: the blk_crypto_mode we want to allocate ciphers for.
 *
 * Upper layers (filesystems) should call this function to ensure that the
 * crypto API fallback has transforms for this algorithm, if they become
 * necessary.
 *
 * Return: 0 on success and -err on error.
 */
int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
				unsigned int data_unit_size,
				struct request_queue *q)
{
	struct blk_crypto_keyslot *slotp;
	int err = 0;
	int i;

	/*
	 * Fast path
	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
	 * for each i are visible before we try to access them.
	 */
	if (likely(smp_load_acquire(&tfms_inited[mode_num])))
		return 0;

	/*
	 * If the keyslot manager of the request queue supports this
	 * crypto mode, then we don't need to allocate this mode.
	 */
	if (keyslot_manager_crypto_mode_supported(q->ksm, mode_num,
						  data_unit_size)) {
		return 0;
	}

	mutex_lock(&tfms_lock[mode_num]);
	if (likely(tfms_inited[mode_num]))
		goto out;

	for (i = 0; i < blk_crypto_num_keyslots; i++) {
		slotp = &blk_crypto_keyslots[i];
		slotp->tfms[mode_num] = crypto_alloc_skcipher(
					blk_crypto_modes[mode_num].cipher_str,
					0, 0);
		if (IS_ERR(slotp->tfms[mode_num])) {
			err = PTR_ERR(slotp->tfms[mode_num]);
			slotp->tfms[mode_num] = NULL;
			goto out_free_tfms;
		}

		crypto_skcipher_set_flags(slotp->tfms[mode_num],
					  CRYPTO_TFM_REQ_WEAK_KEY);
	}

	/*
	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
	 * for each i are visible before we set tfms_inited[mode_num].
	 */
	smp_store_release(&tfms_inited[mode_num], true);
	goto out;

out_free_tfms:
	for (i = 0; i < blk_crypto_num_keyslots; i++) {
		slotp = &blk_crypto_keyslots[i];
		crypto_free_skcipher(slotp->tfms[mode_num]);
		slotp->tfms[mode_num] = NULL;
	}
out:
	mutex_unlock(&tfms_lock[mode_num]);
	return err;
}
EXPORT_SYMBOL(blk_crypto_start_using_mode);

/**
 * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
 *			    it may have been programmed into
 * @q: The request queue whose keyslot manager this key might have been
 *     programmed into
 * @key: The key to evict
 * @mode: The blk_crypto_mode_num used with this key
 * @data_unit_size: The data unit size used with this key
 *
 * Upper layers (filesystems) should call this function to ensure that a key
 * is evicted from hardware that it might have been programmed into. This
 * will call keyslot_manager_evict_key on the queue's keyslot manager, if one
 * exists, and supports the crypto algorithm with the specified data unit size.
 * Otherwise, it will evict the key from the blk_crypto_ksm.
 *
 * Return: 0 on success, -err on error.
 */
int blk_crypto_evict_key(struct request_queue *q, const u8 *key,
			 enum blk_crypto_mode_num mode,
			 unsigned int data_unit_size)
{
	struct keyslot_manager *ksm = blk_crypto_ksm;

	if (q && q->ksm && keyslot_manager_crypto_mode_supported(q->ksm, mode,
							data_unit_size)) {
		ksm = q->ksm;
	}

	return keyslot_manager_evict_key(ksm, key, mode, data_unit_size);
}
EXPORT_SYMBOL(blk_crypto_evict_key);

int __init blk_crypto_init(void)
{
	int i;
	int err = -ENOMEM;

	prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);

	blk_crypto_ksm = keyslot_manager_create(blk_crypto_num_keyslots,
						&blk_crypto_ksm_ll_ops,
						NULL);
	if (!blk_crypto_ksm)
		goto out;

	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
					WQ_UNBOUND | WQ_HIGHPRI |
					WQ_MEM_RECLAIM,
					num_online_cpus());
	if (!blk_crypto_wq)
		goto out_free_ksm;

	blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
				      sizeof(*blk_crypto_keyslots),
				      GFP_KERNEL);
	if (!blk_crypto_keyslots)
		goto out_free_workqueue;

	for (i = 0; i < blk_crypto_num_keyslots; i++) {
		blk_crypto_keyslots[i].crypto_mode =
				BLK_ENCRYPTION_MODE_INVALID;
	}

	for (i = 0; i < ARRAY_SIZE(blk_crypto_modes); i++)
		mutex_init(&tfms_lock[i]);

	blk_crypto_page_pool =
		mempool_create_page_pool(num_prealloc_bounce_pg, 0);
	if (!blk_crypto_page_pool)
		goto out_free_keyslots;

	blk_crypto_work_mem_cache = KMEM_CACHE(work_mem, SLAB_RECLAIM_ACCOUNT);
	if (!blk_crypto_work_mem_cache)
		goto out_free_page_pool;

	return 0;

out_free_page_pool:
	mempool_destroy(blk_crypto_page_pool);
	blk_crypto_page_pool = NULL;
out_free_keyslots:
	kzfree(blk_crypto_keyslots);
	blk_crypto_keyslots = NULL;
out_free_workqueue:
	destroy_workqueue(blk_crypto_wq);
	blk_crypto_wq = NULL;
out_free_ksm:
	keyslot_manager_destroy(blk_crypto_ksm);
	blk_crypto_ksm = NULL;
out:
	pr_warn("No memory for blk-crypto crypto API fallback.\n");
	return err;
}
@@ -495,6 +495,9 @@ static inline int ll_new_hw_segment(struct request_queue *q,
 	if (blk_integrity_merge_bio(q, req, bio) == false)
 		goto no_merge;
 
+	if (WARN_ON_ONCE(!bio_crypt_ctx_compatible(bio, req->bio)))
+		goto no_merge;
+
 	/*
 	 * This will form the start of a new hw segment.  Bump both
 	 * counters.
@@ -708,6 +711,11 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->write_hint != next->write_hint)
 		return NULL;
 
+	if (!bio_crypt_ctx_back_mergeable(req->bio, blk_rq_sectors(req),
+					  next->bio)) {
+		return NULL;
+	}
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -838,18 +846,32 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->write_hint != bio->bi_write_hint)
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_ctx_compatible(bio, rq->bio))
+		return false;
+
 	return true;
 }
 
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
 {
 	if (req_op(rq) == REQ_OP_DISCARD &&
-	    queue_max_discard_segments(rq->q) > 1)
+	    queue_max_discard_segments(rq->q) > 1) {
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
-		 bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(rq->bio,
+					blk_rq_sectors(rq), bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
-	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(bio,
+					bio_sectors(bio), rq->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_FRONT_MERGE;
+	}
 	return ELEVATOR_NO_MERGE;
 }
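The merge hunks above enforce one rule: I/O may merge only if both bios carry compatible encryption contexts and, for a back- or front-merge, the trailing bio's data unit number (DUN) continues exactly where the leading one ends. The DUN arithmetic in this sketch is an assumption inferred from the call sites of bio_crypt_ctx_back_mergeable() (sectors converted to data units at the context's data unit size); it is not the kernel implementation:

```python
# Userspace sketch of the back-merge rule: contexts must match, and the
# second segment's DUN must equal the first's DUN plus the number of
# data units the first segment spans. Assumed semantics, for illustration.

SECTOR_SIZE = 512

def back_mergeable(ctx1, sectors1, ctx2, data_unit_size=4096):
    if ctx1 is None and ctx2 is None:
        return True                  # neither bio is encrypted
    if ctx1 is None or ctx2 is None:
        return False                 # encrypted can't merge with plain
    if ctx1["key"] != ctx2["key"]:
        return False                 # different encryption contexts
    units = sectors1 * SECTOR_SIZE // data_unit_size
    return ctx2["dun"] == ctx1["dun"] + units
```
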
@@ -267,14 +267,15 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 		break;
 	}
 
-	if (bio_integrity(bio_src)) {
-		int ret;
+	if (bio_crypt_clone(bio, bio_src, gfp_mask) < 0) {
+		bio_put(bio);
+		return NULL;
+	}
 
-		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
-		if (ret < 0) {
-			bio_put(bio);
-			return NULL;
-		}
+	if (bio_integrity(bio_src) &&
+	    bio_integrity_clone(bio, bio_src, gfp_mask) < 0) {
+		bio_put(bio);
+		return NULL;
 	}
 
 	bio_clone_blkcg_association(bio, bio_src);
352
block/keyslot-manager.c
Normal file
@@ -0,0 +1,352 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * keyslot-manager.c
 *
 * Copyright 2019 Google LLC
 */

/**
 * DOC: The Keyslot Manager
 *
 * Many devices with inline encryption support have a limited number of "slots"
 * into which encryption contexts may be programmed, and requests can be tagged
 * with a slot number to specify the key to use for en/decryption.
 *
 * As the number of slots is limited, and programming keys is expensive on
 * many inline encryption devices, we don't want to program the same key into
 * multiple slots - if multiple requests are using the same key, we want to
 * program just one slot with that key and use that slot for all requests.
 *
 * The keyslot manager manages these keyslots appropriately, and also acts as
 * an abstraction between the inline encryption hardware and the upper layers.
 *
 * Lower layer devices will set up a keyslot manager in their request queue
 * and tell it how to perform device specific operations like programming/
 * evicting keys from keyslots.
 *
 * Upper layers will call keyslot_manager_get_slot_for_key() to program a
 * key into some slot in the inline encryption hardware.
 */
#include <linux/keyslot-manager.h>
#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/wait.h>
#include <linux/blkdev.h>

struct keyslot {
	atomic_t slot_refs;
	struct list_head idle_slot_node;
};

struct keyslot_manager {
	unsigned int num_slots;
	atomic_t num_idle_slots;
	struct keyslot_mgmt_ll_ops ksm_ll_ops;
	void *ll_priv_data;

	/* Protects programming and evicting keys from the device */
	struct rw_semaphore lock;

	/* List of idle slots, with least recently used slot at front */
	wait_queue_head_t idle_slots_wait_queue;
	struct list_head idle_slots;
	spinlock_t idle_slots_lock;

	/* Per-keyslot data */
	struct keyslot slots[];
};

/**
 * keyslot_manager_create() - Create a keyslot manager
 * @num_slots: The number of key slots to manage.
 * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
 *		manager will use to perform operations like programming and
 *		evicting keys.
 * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
 *
 * Allocate memory for and initialize a keyslot manager. Called by e.g.
 * storage drivers to set up a keyslot manager in their request_queue.
 *
 * Context: May sleep
 * Return: Pointer to constructed keyslot manager or NULL on error.
 */
struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
				const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
				void *ll_priv_data)
{
	struct keyslot_manager *ksm;
	int slot;

	if (num_slots == 0)
		return NULL;

	/* Check that all ops are specified */
	if (ksm_ll_ops->keyslot_program == NULL ||
	    ksm_ll_ops->keyslot_evict == NULL ||
	    ksm_ll_ops->crypto_mode_supported == NULL ||
	    ksm_ll_ops->keyslot_find == NULL)
		return NULL;

	ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
	if (!ksm)
		return NULL;

	ksm->num_slots = num_slots;
	atomic_set(&ksm->num_idle_slots, num_slots);
	ksm->ksm_ll_ops = *ksm_ll_ops;
	ksm->ll_priv_data = ll_priv_data;

	init_rwsem(&ksm->lock);

	init_waitqueue_head(&ksm->idle_slots_wait_queue);
	INIT_LIST_HEAD(&ksm->idle_slots);

	for (slot = 0; slot < num_slots; slot++) {
		list_add_tail(&ksm->slots[slot].idle_slot_node,
			      &ksm->idle_slots);
	}

	spin_lock_init(&ksm->idle_slots_lock);

	return ksm;
}
EXPORT_SYMBOL(keyslot_manager_create);

static void remove_slot_from_lru_list(struct keyslot_manager *ksm, int slot)
{
	unsigned long flags;

	spin_lock_irqsave(&ksm->idle_slots_lock, flags);
	list_del(&ksm->slots[slot].idle_slot_node);
	spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);

	atomic_dec(&ksm->num_idle_slots);
}

static int find_and_grab_keyslot(struct keyslot_manager *ksm, const u8 *key,
				 enum blk_crypto_mode_num crypto_mode,
				 unsigned int data_unit_size)
{
	int slot;

	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
					    crypto_mode, data_unit_size);
	if (slot < 0)
		return slot;
	if (WARN_ON(slot >= ksm->num_slots))
		return -EINVAL;
	if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) {
		/* Took first reference to this slot; remove it from LRU list */
		remove_slot_from_lru_list(ksm, slot);
	}
	return slot;
}

/**
 * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
 * @ksm: The keyslot manager to program the key into.
 * @key: Pointer to the bytes of the key to program. Must be the correct length
 *	 for the chosen @crypto_mode; see blk_crypto_modes in blk-crypto.c.
 * @crypto_mode: Identifier for the encryption algorithm to use.
 * @data_unit_size: The data unit size to use for en/decryption.
 *
 * Get a keyslot that's been programmed with the specified key, crypto_mode,
 * and data_unit_size. If one already exists, return it with incremented
 * refcount. Otherwise, wait for a keyslot to become idle and program it.
 *
 * Context: Process context. Takes and releases ksm->lock.
 * Return: The keyslot on success, else a -errno value.
 */
int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
				     const u8 *key,
				     enum blk_crypto_mode_num crypto_mode,
				     unsigned int data_unit_size)
{
	int slot;
	int err;
	struct keyslot *idle_slot;

	down_read(&ksm->lock);
	slot = find_and_grab_keyslot(ksm, key, crypto_mode, data_unit_size);
	up_read(&ksm->lock);
	if (slot != -ENOKEY)
		return slot;

	for (;;) {
		down_write(&ksm->lock);
		slot = find_and_grab_keyslot(ksm, key, crypto_mode,
					     data_unit_size);
		if (slot != -ENOKEY) {
			up_write(&ksm->lock);
			return slot;
		}

		/*
		 * If we're here, that means there wasn't a slot that was
		 * already programmed with the key. So try to program it.
		 */
		if (atomic_read(&ksm->num_idle_slots) > 0)
			break;

		up_write(&ksm->lock);
		wait_event(ksm->idle_slots_wait_queue,
			   (atomic_read(&ksm->num_idle_slots) > 0));
	}

	idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot,
				     idle_slot_node);
	slot = idle_slot - ksm->slots;

	err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key,
					      crypto_mode,
					      data_unit_size,
					      slot);

	if (err) {
		wake_up(&ksm->idle_slots_wait_queue);
		up_write(&ksm->lock);
		return err;
	}

	atomic_set(&ksm->slots[slot].slot_refs, 1);
	remove_slot_from_lru_list(ksm, slot);

	up_write(&ksm->lock);
	return slot;
}
EXPORT_SYMBOL(keyslot_manager_get_slot_for_key);

/**
 * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
 * @ksm: The keyslot manager that we want to modify.
 * @slot: The slot to increment the refcount of.
 *
 * This function assumes that there is already an active reference to that slot
 * and simply increments the refcount. This is useful when cloning a bio that
 * already has a reference to a keyslot, and we want the cloned bio to also have
 * its own reference.
 *
 * Context: Any context.
 */
void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
{
	if (WARN_ON(slot >= ksm->num_slots))
		return;

	WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
}
EXPORT_SYMBOL(keyslot_manager_get_slot);

/**
 * keyslot_manager_put_slot() - Release a reference to a slot
 * @ksm: The keyslot manager to release the reference from.
 * @slot: The slot to release the reference from.
 *
 * Context: Any context.
 */
void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
{
	unsigned long flags;

	if (WARN_ON(slot >= ksm->num_slots))
		return;

	if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
					&ksm->idle_slots_lock, flags)) {
		list_add_tail(&ksm->slots[slot].idle_slot_node,
			      &ksm->idle_slots);
		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
		atomic_inc(&ksm->num_idle_slots);
		wake_up(&ksm->idle_slots_wait_queue);
	}
}
EXPORT_SYMBOL(keyslot_manager_put_slot);

/**
 * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode/data
 *					     unit size combination is supported
 *					     by a ksm.
 * @ksm: The keyslot manager to check.
 * @crypto_mode: The crypto mode to check for.
 * @data_unit_size: The data_unit_size for the mode.
 *
 * Calls and returns the result of the crypto_mode_supported function specified
 * by the ksm.
 *
 * Context: Process context.
 * Return: Whether or not this ksm supports the specified crypto_mode/
 *	   data_unit_size combo.
 */
bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
					   enum blk_crypto_mode_num crypto_mode,
					   unsigned int data_unit_size)
{
	if (!ksm)
		return false;
	return ksm->ksm_ll_ops.crypto_mode_supported(ksm->ll_priv_data,
						     crypto_mode,
						     data_unit_size);
}
EXPORT_SYMBOL(keyslot_manager_crypto_mode_supported);

bool keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
					      enum blk_crypto_mode_num crypto_mode,
					      unsigned int data_unit_size)
{
	return keyslot_manager_crypto_mode_supported(q->ksm, crypto_mode,
						     data_unit_size);
}
EXPORT_SYMBOL(keyslot_manager_rq_crypto_mode_supported);

/**
 * keyslot_manager_evict_key() - Evict a key from the lower layer device.
 * @ksm: The keyslot manager to evict from.
 * @key: The key to evict.
 * @crypto_mode: The crypto algorithm the key was programmed with.
 * @data_unit_size: The data_unit_size the key was programmed with.
 *
 * Finds the slot that the specified key, crypto_mode, data_unit_size combo
 * was programmed into, and evicts that slot from the lower layer device if
 * the refcount on the slot is 0. Returns -EBUSY if the refcount is not 0, and
 * -errno on error.
 *
 * Context: Process context. Takes and releases ksm->lock.
 */
int keyslot_manager_evict_key(struct keyslot_manager *ksm,
			      const u8 *key,
			      enum blk_crypto_mode_num crypto_mode,
			      unsigned int data_unit_size)
{
	int slot;
	int err = 0;

	down_write(&ksm->lock);
	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
					    crypto_mode,
					    data_unit_size);

	if (slot < 0) {
		up_write(&ksm->lock);
		return slot;
	}

	if (atomic_read(&ksm->slots[slot].slot_refs) == 0) {
		err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, key,
						    crypto_mode,
						    data_unit_size,
						    slot);
	} else {
		err = -EBUSY;
	}

	up_write(&ksm->lock);
	return err;
}
EXPORT_SYMBOL(keyslot_manager_evict_key);

void keyslot_manager_destroy(struct keyslot_manager *ksm)
{
	kvfree(ksm);
}
EXPORT_SYMBOL(keyslot_manager_destroy);
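The slot-reuse policy that keyslot-manager.c implements can be modeled compactly in userspace: keys share slots via refcounts, and only idle (refcount-zero) slots sit on an LRU list and may be reprogrammed. This is a single-threaded illustration; the real code adds locking and blocks in wait_event() when no slot is idle:

```python
# Minimal model of the keyslot manager's refcount + LRU behavior.
# Not kernel code; names mirror the functions above for readability.

from collections import OrderedDict

class KeyslotModel:
    def __init__(self, num_slots):
        self.keys = [None] * num_slots   # key programmed into each slot
        self.refs = [0] * num_slots      # slot_refs
        self.idle = OrderedDict((s, None) for s in range(num_slots))  # LRU

    def get_slot_for_key(self, key):
        for slot, k in enumerate(self.keys):     # keyslot_find()
            if k == key:
                self.refs[slot] += 1
                self.idle.pop(slot, None)        # first ref -> off LRU list
                return slot
        if not self.idle:
            return None                          # real code waits here
        slot, _ = self.idle.popitem(last=False)  # least recently used slot
        self.keys[slot] = key                    # keyslot_program()
        self.refs[slot] = 1
        return slot

    def put_slot(self, slot):
        self.refs[slot] -= 1
        if self.refs[slot] == 0:
            self.idle[slot] = None               # back onto tail of LRU list
```
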
@@ -1312,12 +1312,15 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 		     sector_t sector, unsigned len)
 {
 	struct bio *clone = &tio->clone;
+	int ret;
 
 	__bio_clone_fast(clone, bio);
 
-	if (unlikely(bio_integrity(bio) != NULL)) {
-		int r;
+	ret = bio_crypt_clone(clone, bio, GFP_NOIO);
+	if (ret < 0)
+		return ret;
 
+	if (unlikely(bio_integrity(bio) != NULL)) {
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
 			     !dm_target_passes_integrity(tio->ti->type))) {
 			DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1326,9 +1329,11 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 			return -EIO;
 		}
 
-		r = bio_integrity_clone(clone, bio, GFP_NOIO);
-		if (r < 0)
-			return r;
+		ret = bio_integrity_clone(clone, bio, GFP_NOIO);
+		if (ret < 0) {
+			bio_crypt_free_ctx(clone);
+			return ret;
+		}
 	}
 
 	if (bio_op(bio) != REQ_OP_ZONE_REPORT)
@@ -131,3 +131,12 @@ config SCSI_UFS_HISI
 
 	  Select this if you have UFS controller on Hisilicon chipset.
 	  If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+	bool "UFS Crypto Engine Support"
+	depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION
+	help
+	  Enable Crypto Engine Support in UFS.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the UFS device (if present) to perform crypto
+	  operations on data being transferred to/from the device.

@@ -11,3 +11,4 @@ obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_TEST) += ufs_test.o
 obj-$(CONFIG_DEBUG_FS) += ufs-debugfs.o ufs-qcom-debugfs.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
@@ -540,6 +540,14 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
 	if (!host)
 		return -ENOMEM;
 
+	/*
+	 * Inline crypto is currently broken with ufs-hisi because the keyslots
+	 * overlap with the vendor-specific SYS CTRL registers -- and even if
+	 * software uses only non-overlapping keyslots, the kernel crashes when
+	 * programming a key or a UFS error occurs on the first encrypted I/O.
+	 */
+	hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
+
 	host->hba = hba;
 	ufshcd_set_variant(hba, host);
 
@@ -1459,6 +1459,12 @@ static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
 
 	if (host->disable_lpm)
 		hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
+	/*
+	 * Inline crypto is currently broken with ufs-qcom at least because the
+	 * device tree doesn't include the crypto registers. There are likely
+	 * to be other issues that will need to be addressed too.
+	 */
+	//hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
 }
 
 static void ufs_qcom_set_caps(struct ufs_hba *hba)
521
drivers/scsi/ufs/ufshcd-crypto.c
Normal file
@@ -0,0 +1,521 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright 2019 Google LLC
 */

#include <crypto/algapi.h>

#include "ufshcd.h"
#include "ufshcd-crypto.h"

static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
{
	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
}

static u8 get_data_unit_size_mask(unsigned int data_unit_size)
{
	if (data_unit_size < 512 || data_unit_size > 65536 ||
	    !is_power_of_2(data_unit_size))
		return 0;

	return data_unit_size / 512;
}

static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
{
	switch (size) {
	case UFS_CRYPTO_KEY_SIZE_128: return 16;
	case UFS_CRYPTO_KEY_SIZE_192: return 24;
	case UFS_CRYPTO_KEY_SIZE_256: return 32;
	case UFS_CRYPTO_KEY_SIZE_512: return 64;
	default: return 0;
	}
}
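The UFSHCI "supported data unit size" field is a bitmask expressed in units of 512 bytes; get_data_unit_size_mask() above maps a byte count to that mask, rejecting sizes that are not powers of two between 512 and 65536. A direct Python transcription, for illustration:

```python
# Python transcription of get_data_unit_size_mask(): byte count -> UFSHCI
# data-unit-size mask (units of 512 bytes), or 0 for unsupported sizes.

def get_data_unit_size_mask(data_unit_size):
    if data_unit_size < 512 or data_unit_size > 65536:
        return 0
    if data_unit_size & (data_unit_size - 1):   # not a power of two
        return 0
    return data_unit_size // 512
```
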
|
||||
|
||||
static int ufshcd_crypto_cap_find(void *hba_p,
|
||||
enum blk_crypto_mode_num crypto_mode,
|
||||
unsigned int data_unit_size)
|
||||
{
|
||||
struct ufs_hba *hba = hba_p;
|
||||
enum ufs_crypto_alg ufs_alg;
|
||||
u8 data_unit_mask;
|
||||
int cap_idx;
|
||||
enum ufs_crypto_key_size ufs_key_size;
|
||||
union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
|
||||
|
||||
if (!ufshcd_hba_is_crypto_supported(hba))
|
||||
return -EINVAL;
|
||||
|
||||
switch (crypto_mode) {
|
||||
case BLK_ENCRYPTION_MODE_AES_256_XTS:
|
||||
ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
|
||||
ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
|
||||
break;
|
||||
default: return -EINVAL;
|
||||
}
|
||||
|
||||
data_unit_mask = get_data_unit_size_mask(data_unit_size);
|
||||
|
||||
for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
|
||||
cap_idx++) {
|
||||
if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
|
||||
(ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
|
||||
ccap_array[cap_idx].key_size == ufs_key_size)
|
||||
return cap_idx;
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/**
|
||||
* ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
|
||||
*
|
||||
* Writes the key with the appropriate format - for AES_XTS,
|
||||
* the first half of the key is copied as is, the second half is
|
||||
* copied with an offset halfway into the cfg->crypto_key array.
|
||||
* For the other supported crypto algs, the key is just copied.
|
||||
*
|
||||
* @cfg: The crypto config to write to
|
||||
* @key: The key to write
|
||||
* @cap: The crypto capability (which specifies the crypto alg and key size)
|
||||
*
|
||||
* Returns 0 on success, or -EINVAL
|
||||
*/
|
||||
static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
|
||||
const u8 *key,
|
||||
union ufs_crypto_cap_entry cap)
|
||||
{
|
||||
size_t key_size_bytes = get_keysize_bytes(cap.key_size);
|
||||
|
||||
if (key_size_bytes == 0)
|
||||
return -EINVAL;
|
||||
|
||||
switch (cap.algorithm_id) {
|
||||
case UFS_CRYPTO_ALG_AES_XTS:
|
||||
key_size_bytes *= 2;
|
||||
if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
|
||||
return -EINVAL;
|
||||
|
||||
memcpy(cfg->crypto_key, key, key_size_bytes/2);
|
||||
memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
|
||||
key + key_size_bytes/2, key_size_bytes/2);
|
||||
return 0;
|
||||
case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough
|
||||
case UFS_CRYPTO_ALG_AES_ECB: // fallthrough
|
||||
case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
|
||||
memcpy(cfg->crypto_key, key, key_size_bytes);
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static void program_key(struct ufs_hba *hba,
|
||||
const union ufs_crypto_cfg_entry *cfg,
|
||||
int slot)
|
||||
{
|
||||
int i;
|
||||
u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
|
||||
|
||||
/* Clear the dword 16 */
|
||||
ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
|
||||
/* Ensure that CFGE is cleared before programming the key */
|
||||
wmb();
|
||||
for (i = 0; i < 16; i++) {
|
||||
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
|
||||
slot_offset + i * sizeof(cfg->reg_val[0]));
|
||||
/* Spec says each dword in key must be written sequentially */
|
||||
wmb();
|
||||
}
|
||||
/* Write dword 17 */
|
||||
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
|
||||
slot_offset + 17 * sizeof(cfg->reg_val[0]));
|
||||
/* Dword 16 must be written last */
|
||||
wmb();
|
||||
/* Write dword 16 */
|
||||
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
|
||||
slot_offset + 16 * sizeof(cfg->reg_val[0]));
|
||||
wmb();
|
||||
}
|
||||
|
||||
static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key,
|
||||
enum blk_crypto_mode_num crypto_mode,
|
||||
unsigned int data_unit_size,
|
||||
unsigned int slot)
|
||||
{
|
||||
struct ufs_hba *hba = hba_p;
|
||||
int err = 0;
|
||||
u8 data_unit_mask;
|
||||
union ufs_crypto_cfg_entry cfg;
|
||||
union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
|
||||
int cap_idx;
|
||||
|
||||
cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode,
|
||||
data_unit_size);
|
||||
|
||||
if (!ufshcd_is_crypto_enabled(hba) ||
|
||||
!ufshcd_keyslot_valid(hba, slot) ||
|
||||
!ufshcd_cap_idx_valid(hba, cap_idx))
|
||||
return -EINVAL;
|
||||
|
||||
data_unit_mask = get_data_unit_size_mask(data_unit_size);
|
||||
|
||||
if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
|
||||
return -EINVAL;
|
||||
|
||||
memset(&cfg, 0, sizeof(cfg));
|
||||
cfg.data_unit_size = data_unit_mask;
|
||||
cfg.crypto_cap_idx = cap_idx;
|
||||
cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
|
||||
|
||||
err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
|
||||
hba->crypto_cap_array[cap_idx]);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
program_key(hba, &cfg, slot);
|
||||
|
||||
memcpy(&cfg_arr[slot], &cfg, sizeof(cfg));
|
||||
memzero_explicit(&cfg, sizeof(cfg));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int ufshcd_crypto_keyslot_find(void *hba_p,
|
||||
const u8 *key,
|
||||
enum blk_crypto_mode_num crypto_mode,
|
||||
unsigned int data_unit_size)
|
||||
{
|
||||
struct ufs_hba *hba = hba_p;
|
||||
int err = 0;
|
||||
int slot;
|
||||
u8 data_unit_mask;
|
||||
union ufs_crypto_cfg_entry cfg;
|
||||
union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
|
||||
int cap_idx;
|
||||
|
||||
cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode,
|
||||
data_unit_size);
|
||||
|
||||
if (!ufshcd_is_crypto_enabled(hba) ||
|
||||
!ufshcd_cap_idx_valid(hba, cap_idx))
|
||||
return -EINVAL;
|
||||
|
||||
data_unit_mask = get_data_unit_size_mask(data_unit_size);
|
||||
|
||||
if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
|
||||
return -EINVAL;
|
||||
|
||||
memset(&cfg, 0, sizeof(cfg));
|
||||
err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
|
||||
hba->crypto_cap_array[cap_idx]);
|
||||
|
||||
if (err)
|
||||
return -EINVAL;
|
||||
|
||||
for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) {
|
||||
if ((cfg_arr[slot].config_enable &
|
||||
UFS_CRYPTO_CONFIGURATION_ENABLE) &&
|
||||
data_unit_mask == cfg_arr[slot].data_unit_size &&
|
||||
cap_idx == cfg_arr[slot].crypto_cap_idx &&
|
||||
!crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key,
|
||||
UFS_CRYPTO_KEY_MAX_SIZE)) {
|
||||
memzero_explicit(&cfg, sizeof(cfg));
|
||||
return slot;
|
||||
}
|
||||
}
|
||||
|
||||
memzero_explicit(&cfg, sizeof(cfg));
|
||||
return -ENOKEY;
|
||||
}
|
||||
|
||||
static int ufshcd_crypto_keyslot_evict(void *hba_p, const u8 *key,
|
||||
enum blk_crypto_mode_num crypto_mode,
|
||||
unsigned int data_unit_size,
|
||||
unsigned int slot)
|
||||
{
|
||||
struct ufs_hba *hba = hba_p;
|
||||
int i = 0;
|
||||
u32 reg_base;
|
||||
union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
|
||||
|
||||
if (!ufshcd_is_crypto_enabled(hba) ||
|
||||
!ufshcd_keyslot_valid(hba, slot))
|
||||
return -EINVAL;
|
||||
|
||||
memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot]));
|
||||
reg_base = hba->crypto_cfg_register + slot * sizeof(cfg_arr[0]);
|
||||
|
||||
/*
|
||||
* Clear the crypto cfg on the device. Clearing CFGE
|
||||
* might not be sufficient, so just clear the entire cfg.
|
||||
*/
|
||||
for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32))
|
||||
ufshcd_writel(hba, 0, reg_base + i);
|
||||
wmb();
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool ufshcd_crypto_mode_supported(void *hba_p,
|
||||
enum blk_crypto_mode_num crypto_mode,
|
||||
unsigned int data_unit_size)
|
||||
{
|
||||
return ufshcd_crypto_cap_find(hba_p, crypto_mode, data_unit_size) >= 0;
|
||||
}
|
||||
|
||||
/* Functions implementing UFSHCI v2.1 specification behaviour */
|
||||
void ufshcd_crypto_enable_spec(struct ufs_hba *hba)
|
||||
{
|
||||
union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
|
||||
int slot;
|
||||
|
||||
if (!ufshcd_hba_is_crypto_supported(hba))
|
||||
return;
|
||||
|
||||
hba->caps |= UFSHCD_CAP_CRYPTO;
|
||||
/*
|
||||
* Reset might clear all keys, so reprogram all the keys.
|
||||
* Also serves to clear keys on driver init.
|
||||
*/
|
||||
for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++)
|
||||
program_key(hba, &cfg_arr[slot], slot);
|
||||
}
|
||||
EXPORT_SYMBOL(ufshcd_crypto_enable_spec);
|
||||
|
||||
void ufshcd_crypto_disable_spec(struct ufs_hba *hba)
|
||||
{
|
||||
hba->caps &= ~UFSHCD_CAP_CRYPTO;
|
||||
}
|
||||
EXPORT_SYMBOL(ufshcd_crypto_disable_spec);
|
||||
|
||||
static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
|
||||
.keyslot_program = ufshcd_crypto_keyslot_program,
|
||||
.keyslot_evict = ufshcd_crypto_keyslot_evict,
|
||||
.keyslot_find = ufshcd_crypto_keyslot_find,
|
||||
.crypto_mode_supported = ufshcd_crypto_mode_supported,
|
||||
};
|
||||
|
||||
/**
|
||||
* ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
|
||||
* @hba: Per adapter instance
|
||||
*
|
||||
* Return: 0 if crypto was initialized or is not supported, else a -errno value.
|
||||
*/
|
||||
int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
|
||||
const struct keyslot_mgmt_ll_ops *ksm_ops)
|
||||
{
|
||||
int cap_idx = 0;
|
||||
int err = 0;
|
||||
|
||||
/* Default to disabling crypto */
|
||||
hba->caps &= ~UFSHCD_CAP_CRYPTO;
|
||||
|
||||
/* Return 0 if crypto support isn't present */
|
||||
if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
|
||||
(hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* Crypto Capabilities should never be 0, because the
|
||||
* config_array_ptr > 04h. So we use a 0 value to indicate that
|
||||
* crypto init failed, and can't be enabled.
|
||||
*/
|
||||
hba->crypto_capabilities.reg_val =
|
||||
cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
|
||||
hba->crypto_cfg_register =
|
||||
(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
|
||||
hba->crypto_cap_array =
|
||||
devm_kcalloc(hba->dev,
|
||||
hba->crypto_capabilities.num_crypto_cap,
|
||||
sizeof(hba->crypto_cap_array[0]),
|
||||
GFP_KERNEL);
|
||||
if (!hba->crypto_cap_array) {
|
||||
err = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
hba->crypto_cfgs =
|
||||
devm_kcalloc(hba->dev,
|
||||
NUM_KEYSLOTS(hba),
|
||||
sizeof(hba->crypto_cfgs[0]),
|
||||
GFP_KERNEL);
|
||||
if (!hba->crypto_cfgs) {
|
||||
err = -ENOMEM;
|
||||
goto out_free_cfg_mem;
|
||||
}
|
||||
|
||||
/*
|
||||
* Store all the capabilities now so that we don't need to repeatedly
|
||||
* access the device each time we want to know its capabilities
|
||||
*/
|
||||
for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
|
||||
cap_idx++) {
|
||||
hba->crypto_cap_array[cap_idx].reg_val =
|
||||
cpu_to_le32(ufshcd_readl(hba,
|
||||
REG_UFS_CRYPTOCAP +
|
||||
cap_idx * sizeof(__le32)));
|
||||
}
|
||||
|
||||
hba->ksm = keyslot_manager_create(NUM_KEYSLOTS(hba), ksm_ops, hba);
|
||||
|
||||
if (!hba->ksm) {
|
||||
err = -ENOMEM;
|
||||
goto out_free_crypto_cfgs;
|
||||
}
|
||||
|
||||
return 0;
|
||||
out_free_crypto_cfgs:
|
||||
devm_kfree(hba->dev, hba->crypto_cfgs);
|
||||
out_free_cfg_mem:
|
||||
devm_kfree(hba->dev, hba->crypto_cap_array);
|
||||
out:
|
||||
/* Indicate that init failed by setting crypto_capabilities to 0 */
|
||||
hba->crypto_capabilities.reg_val = 0;
|
||||
return err;
|
||||
}
|
||||
EXPORT_SYMBOL(ufshcd_hba_init_crypto_spec);

void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
						 struct request_queue *q)
{
	if (!ufshcd_hba_is_crypto_supported(hba) || !q)
		return;

	q->ksm = hba->ksm;
}
EXPORT_SYMBOL(ufshcd_crypto_setup_rq_keyslot_manager_spec);

void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
						   struct request_queue *q)
{
	keyslot_manager_destroy(hba->ksm);
}
EXPORT_SYMBOL(ufshcd_crypto_destroy_rq_keyslot_manager_spec);

int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
				    struct scsi_cmnd *cmd,
				    struct ufshcd_lrb *lrbp)
{
	int key_slot;

	if (!cmd->request->bio ||
	    !bio_crypt_should_process(cmd->request->bio, cmd->request->q)) {
		lrbp->crypto_enable = false;
		return 0;
	}

	if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
		/*
		 * Upper layer asked us to do inline encryption
		 * but that isn't enabled, so we fail this request.
		 */
		return -EINVAL;
	}
	key_slot = bio_crypt_get_keyslot(cmd->request->bio);
	if (!ufshcd_keyslot_valid(hba, key_slot))
		return -EINVAL;

	lrbp->crypto_enable = true;
	lrbp->crypto_key_slot = key_slot;
	lrbp->data_unit_num = bio_crypt_data_unit_num(cmd->request->bio);

	return 0;
}
EXPORT_SYMBOL(ufshcd_prepare_lrbp_crypto_spec);

/* Crypto Variant Ops Support */

void ufshcd_crypto_enable(struct ufs_hba *hba)
{
	if (hba->crypto_vops && hba->crypto_vops->enable)
		return hba->crypto_vops->enable(hba);

	return ufshcd_crypto_enable_spec(hba);
}

void ufshcd_crypto_disable(struct ufs_hba *hba)
{
	if (hba->crypto_vops && hba->crypto_vops->disable)
		return hba->crypto_vops->disable(hba);

	return ufshcd_crypto_disable_spec(hba);
}

int ufshcd_hba_init_crypto(struct ufs_hba *hba)
{
	if (hba->crypto_vops && hba->crypto_vops->hba_init_crypto)
		return hba->crypto_vops->hba_init_crypto(hba,
							 &ufshcd_ksm_ops);

	return ufshcd_hba_init_crypto_spec(hba, &ufshcd_ksm_ops);
}

void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
					    struct request_queue *q)
{
	if (hba->crypto_vops && hba->crypto_vops->setup_rq_keyslot_manager)
		return hba->crypto_vops->setup_rq_keyslot_manager(hba, q);

	return ufshcd_crypto_setup_rq_keyslot_manager_spec(hba, q);
}

void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
					      struct request_queue *q)
{
	if (hba->crypto_vops && hba->crypto_vops->destroy_rq_keyslot_manager)
		return hba->crypto_vops->destroy_rq_keyslot_manager(hba, q);

	return ufshcd_crypto_destroy_rq_keyslot_manager_spec(hba, q);
}

int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
			       struct scsi_cmnd *cmd,
			       struct ufshcd_lrb *lrbp)
{
	if (hba->crypto_vops && hba->crypto_vops->prepare_lrbp_crypto)
		return hba->crypto_vops->prepare_lrbp_crypto(hba, cmd, lrbp);

	return ufshcd_prepare_lrbp_crypto_spec(hba, cmd, lrbp);
}

int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
				struct scsi_cmnd *cmd,
				struct ufshcd_lrb *lrbp)
{
	if (hba->crypto_vops && hba->crypto_vops->complete_lrbp_crypto)
		return hba->crypto_vops->complete_lrbp_crypto(hba, cmd, lrbp);

	return 0;
}

void ufshcd_crypto_debug(struct ufs_hba *hba)
{
	if (hba->crypto_vops && hba->crypto_vops->debug)
		hba->crypto_vops->debug(hba);
}

int ufshcd_crypto_suspend(struct ufs_hba *hba,
			  enum ufs_pm_op pm_op)
{
	if (hba->crypto_vops && hba->crypto_vops->suspend)
		return hba->crypto_vops->suspend(hba, pm_op);

	return 0;
}

int ufshcd_crypto_resume(struct ufs_hba *hba,
			 enum ufs_pm_op pm_op)
{
	if (hba->crypto_vops && hba->crypto_vops->resume)
		return hba->crypto_vops->resume(hba, pm_op);

	return 0;
}

void ufshcd_crypto_set_vops(struct ufs_hba *hba,
			    struct ufs_hba_crypto_variant_ops *crypto_vops)
{
	hba->crypto_vops = crypto_vops;
}
151
drivers/scsi/ufs/ufshcd-crypto.h
Normal file
@@ -0,0 +1,151 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright 2019 Google LLC
 */

#ifndef _UFSHCD_CRYPTO_H
#define _UFSHCD_CRYPTO_H

#ifdef CONFIG_SCSI_UFS_CRYPTO
#include <linux/keyslot-manager.h>
#include "ufshcd.h"
#include "ufshci.h"

#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1)

static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
{
	/*
	 * The actual number of configurations supported is (CFGC+1), so slot
	 * numbers range from 0 to config_count inclusive.
	 */
	return slot < NUM_KEYSLOTS(hba);
}

static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
{
	return hba->crypto_capabilities.reg_val != 0;
}

static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
{
	return hba->caps & UFSHCD_CAP_CRYPTO;
}

/* Functions implementing UFSHCI v2.1 specification behaviour */
int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
				    struct scsi_cmnd *cmd,
				    struct ufshcd_lrb *lrbp);

void ufshcd_crypto_enable_spec(struct ufs_hba *hba);

void ufshcd_crypto_disable_spec(struct ufs_hba *hba);

struct keyslot_mgmt_ll_ops;
int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
				const struct keyslot_mgmt_ll_ops *ksm_ops);

void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
						 struct request_queue *q);

void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
						   struct request_queue *q);

/* Crypto Variant Ops Support */
void ufshcd_crypto_enable(struct ufs_hba *hba);

void ufshcd_crypto_disable(struct ufs_hba *hba);

int ufshcd_hba_init_crypto(struct ufs_hba *hba);

void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
					    struct request_queue *q);

void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
					      struct request_queue *q);

int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
			       struct scsi_cmnd *cmd,
			       struct ufshcd_lrb *lrbp);

int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
				struct scsi_cmnd *cmd,
				struct ufshcd_lrb *lrbp);

void ufshcd_crypto_debug(struct ufs_hba *hba);

int ufshcd_crypto_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op);

int ufshcd_crypto_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op);

void ufshcd_crypto_set_vops(struct ufs_hba *hba,
			    struct ufs_hba_crypto_variant_ops *crypto_vops);

#else /* CONFIG_SCSI_UFS_CRYPTO */

static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
					unsigned int slot)
{
	return false;
}

static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
{
	return false;
}

static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
{
	return false;
}

static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }

static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }

static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
{
	return 0;
}

static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
						struct request_queue *q) { }

static inline void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
						struct request_queue *q) { }

static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
					     struct scsi_cmnd *cmd,
					     struct ufshcd_lrb *lrbp)
{
	lrbp->crypto_enable = false;
	return 0;
}

static inline int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
					      struct scsi_cmnd *cmd,
					      struct ufshcd_lrb *lrbp)
{
	return 0;
}

static inline void ufshcd_crypto_debug(struct ufs_hba *hba) { }

static inline int ufshcd_crypto_suspend(struct ufs_hba *hba,
					enum ufs_pm_op pm_op)
{
	return 0;
}

static inline int ufshcd_crypto_resume(struct ufs_hba *hba,
				       enum ufs_pm_op pm_op)
{
	return 0;
}

static inline void ufshcd_crypto_set_vops(struct ufs_hba *hba,
			struct ufs_hba_crypto_variant_ops *crypto_vops) { }

#endif /* CONFIG_SCSI_UFS_CRYPTO */

#endif /* _UFSHCD_CRYPTO_H */
drivers/scsi/ufs/ufshcd.c

@@ -204,6 +204,7 @@ static void ufshcd_update_uic_error_cnt(struct ufs_hba *hba, u32 reg, int type)
 		break;
 	}
 }
+#include "ufshcd-crypto.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/ufs.h>
@@ -905,6 +906,8 @@ static inline void __ufshcd_print_host_regs(struct ufs_hba *hba, bool no_sleep)
 static void ufshcd_print_host_regs(struct ufs_hba *hba)
 {
 	__ufshcd_print_host_regs(hba, false);
+
+	ufshcd_crypto_debug(hba);
 }
 
 static
@@ -1412,8 +1415,11 @@ static inline void ufshcd_hba_start(struct ufs_hba *hba)
 {
 	u32 val = CONTROLLER_ENABLE;
 
-	if (ufshcd_is_crypto_supported(hba))
+	if (ufshcd_hba_is_crypto_supported(hba)) {
+		ufshcd_crypto_enable(hba);
 		val |= CRYPTO_GENERAL_ENABLE;
+	}
 
 	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
 }
@@ -3399,9 +3405,21 @@ static int ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba,
 		dword_0 |= UTP_REQ_DESC_INT_CMD;
 
 	/* Transfer request descriptor header fields */
+	if (lrbp->crypto_enable) {
+		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+		dword_0 |= lrbp->crypto_key_slot;
+		req_desc->header.dword_1 =
+			cpu_to_le32((u32)lrbp->data_unit_num);
+		req_desc->header.dword_3 =
+			cpu_to_le32((u32)(lrbp->data_unit_num >> 32));
+	} else {
+		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
+		req_desc->header.dword_1 = 0;
+		req_desc->header.dword_3 = 0;
+	}
+
 	req_desc->header.dword_0 = cpu_to_le32(dword_0);
-	/* dword_1 is reserved, hence it is set to 0 */
-	req_desc->header.dword_1 = 0;
 
 	/*
 	 * assigning invalid value for command status. Controller
 	 * updates OCS on command completion, with the command
@@ -3409,8 +3427,6 @@ static int ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba,
 	 */
 	req_desc->header.dword_2 =
 		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-	/* dword_3 is reserved, hence it is set to 0 */
-	req_desc->header.dword_3 = 0;
 
 	req_desc->prd_table_length = 0;
@@ -3769,6 +3785,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	lrbp->task_tag = tag;
 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
 	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+	err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+	if (err) {
+		lrbp->cmd = NULL;
+		clear_bit_unlock(tag, &hba->lrb_in_use);
+		goto out;
+	}
 	lrbp->req_abort_skip = false;
 
 	err = ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -3833,6 +3856,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
 	lrbp->task_tag = tag;
 	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
 	lrbp->intr_cmd = true; /* No interrupt aggregation */
+	lrbp->crypto_enable = false; /* No crypto operations */
 	hba->dev_cmd.type = cmd_type;
 
 	return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -5734,6 +5758,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
 {
 	int err;
 
+	ufshcd_crypto_disable(hba);
+
 	ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE);
 	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
 					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -6170,6 +6196,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 	sdev->autosuspend_delay = UFSHCD_AUTO_SUSPEND_DELAY_MS;
 	sdev->use_rpm_auto = 1;
 
+	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
 	return 0;
 }
 
@@ -6180,6 +6208,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 static void ufshcd_slave_destroy(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
+	struct request_queue *q = sdev->request_queue;
 
 	hba = shost_priv(sdev->host);
 	/* Drop the reference as it won't be needed anymore */
@@ -6190,6 +6219,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
 		hba->sdev_ufs_device = NULL;
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 	}
+
+	ufshcd_crypto_destroy_rq_keyslot_manager(hba, q);
 }
 
 /**
@@ -6463,6 +6494,7 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
 			cmd->result = result;
 			lrbp->compl_time_stamp = ktime_get();
 			update_req_stats(hba, lrbp);
+			ufshcd_complete_lrbp_crypto(hba, cmd, lrbp);
 			/* Mark completed command as NULL in LRB */
 			lrbp->cmd = NULL;
 			hba->ufs_stats.clk_rel.ctx = XFR_REQ_COMPL;
@@ -10284,6 +10316,10 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 		req_link_state = UIC_LINK_OFF_STATE;
 	}
 
+	ret = ufshcd_crypto_suspend(hba, pm_op);
+	if (ret)
+		goto out;
+
 	/*
	 * If we can't transition into any of the low power modes
	 * just gate the clocks.
@@ -10412,6 +10448,7 @@ enable_gating:
 	hba->hibern8_on_idle.is_suspended = false;
 	hba->clk_gating.is_suspended = false;
 	ufshcd_release_all(hba);
+	ufshcd_crypto_resume(hba, pm_op);
 out:
 	hba->pm_op_in_progress = 0;
 
@@ -10435,9 +10472,11 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 {
 	int ret;
 	enum uic_link_state old_link_state;
+	enum ufs_dev_pwr_mode old_pwr_mode;
 
 	hba->pm_op_in_progress = 1;
 	old_link_state = hba->uic_link_state;
+	old_pwr_mode = hba->curr_dev_pwr_mode;
 
 	ufshcd_hba_vreg_set_hpm(hba);
 	/* Make sure clocks are enabled before accessing controller */
@@ -10505,6 +10544,11 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 			goto set_old_link_state;
 		}
 	}
 
+	ret = ufshcd_crypto_resume(hba, pm_op);
+	if (ret)
+		goto set_old_dev_pwr_mode;
+
 	if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
 		ufshcd_enable_auto_bkops(hba);
 	else
@@ -10527,6 +10571,9 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	ufshcd_release_all(hba);
 	goto out;
 
+set_old_dev_pwr_mode:
+	if (old_pwr_mode != hba->curr_dev_pwr_mode)
+		ufshcd_set_dev_pwr_mode(hba, old_pwr_mode);
 set_old_link_state:
 	ufshcd_link_state_transition(hba, old_link_state, 0);
 	if (ufshcd_is_link_hibern8(hba) &&
@@ -11043,6 +11090,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 
 	if (hba->force_g4)
 		hba->phy_init_g4 = true;
+	/* Init crypto */
+	err = ufshcd_hba_init_crypto(hba);
+	if (err) {
+		dev_err(hba->dev, "crypto setup failed\n");
+		goto out_remove_scsi_host;
+	}
 
 	/* Host controller enable */
 	err = ufshcd_hba_enable(hba);

drivers/scsi/ufs/ufshcd.h

@@ -197,6 +197,9 @@ struct ufs_pm_lvl_states {
  * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
  * @issue_time_stamp: time stamp for debug purposes
  * @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
  */
 struct ufshcd_lrb {
@@ -221,6 +224,9 @@ struct ufshcd_lrb {
 	bool intr_cmd;
 	ktime_t issue_time_stamp;
 	ktime_t compl_time_stamp;
+	bool crypto_enable;
+	u8 crypto_key_slot;
+	u64 data_unit_num;
 
 	bool req_abort_skip;
 };
@@ -390,6 +396,28 @@ struct ufs_hba_variant {
 	struct ufs_hba_pm_qos_variant_ops *pm_qos_vops;
 };
 
+struct keyslot_mgmt_ll_ops;
+struct ufs_hba_crypto_variant_ops {
+	void (*setup_rq_keyslot_manager)(struct ufs_hba *hba,
+					 struct request_queue *q);
+	void (*destroy_rq_keyslot_manager)(struct ufs_hba *hba,
+					   struct request_queue *q);
+	int (*hba_init_crypto)(struct ufs_hba *hba,
+			       const struct keyslot_mgmt_ll_ops *ksm_ops);
+	void (*enable)(struct ufs_hba *hba);
+	void (*disable)(struct ufs_hba *hba);
+	int (*suspend)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+	int (*resume)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+	int (*debug)(struct ufs_hba *hba);
+	int (*prepare_lrbp_crypto)(struct ufs_hba *hba,
+				   struct scsi_cmnd *cmd,
+				   struct ufshcd_lrb *lrbp);
+	int (*complete_lrbp_crypto)(struct ufs_hba *hba,
+				    struct scsi_cmnd *cmd,
+				    struct ufshcd_lrb *lrbp);
+	void *priv;
+};
+
 /* clock gating state */
 enum clk_gating_state {
 	CLKS_OFF,
@@ -758,6 +786,11 @@ struct ufshcd_cmd_log {
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot)
+ * @ksm: the keyslot manager tied to this hba
 */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -805,6 +838,7 @@ struct ufs_hba {
 	u32 ufs_version;
 	struct ufs_hba_variant *var;
 	void *priv;
+	const struct ufs_hba_crypto_variant_ops *crypto_vops;
 	unsigned int irq;
 	bool is_irq_enabled;
 	bool crash_on_err;
@@ -909,6 +943,11 @@ struct ufs_hba {
 #define UFSHCD_QUIRK_DME_PEER_GET_FAST_MODE	0x20000
 
 #define UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8	0x40000
+	/*
+	 * This quirk needs to be enabled if the host controller advertises
+	 * inline encryption support but it doesn't work correctly.
+	 */
+#define UFSHCD_QUIRK_BROKEN_CRYPTO		0x800
 
 	unsigned int quirks;	/* Deviations from standard UFSHCI spec. */
 
@@ -1022,6 +1061,11 @@ struct ufs_hba {
	 * in hibern8 then enable this cap.
	 */
 #define UFSHCD_CAP_POWER_COLLAPSE_DURING_HIBERN8 (1 << 7)
+	/*
+	 * This capability allows the host controller driver to use the
+	 * inline crypto engine, if it is present
+	 */
+#define UFSHCD_CAP_CRYPTO (1 << 7)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
@@ -1049,6 +1093,15 @@ struct ufs_hba {
 	bool phy_init_g4;
 	bool force_g4;
 	bool wb_enabled;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	/* crypto */
+	union ufs_crypto_capabilities crypto_capabilities;
+	union ufs_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+	union ufs_crypto_cfg_entry *crypto_cfgs;
+	struct keyslot_manager *ksm;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 };
 
 static inline void ufshcd_mark_shutdown_ongoing(struct ufs_hba *hba)

drivers/scsi/ufs/ufshci.h

@@ -363,6 +363,61 @@ enum {
 	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
 };
 
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum ufs_crypto_key_size {
+	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
+	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
+	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
+	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
+};
+
+enum ufs_crypto_alg {
+	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
+	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
+	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
 /*
  * Request Descriptor Definitions
  */
@@ -384,6 +439,7 @@ enum {
 	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
 	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
 	UTP_REQ_DESC_INT_CMD		= 0x01000000,
+	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
 };
 
 /* UTP Transfer Request Data Direction (DD) */
fs/buffer.c

@@ -46,6 +46,7 @@
 #include <linux/pagevec.h>
 #include <linux/sched/mm.h>
 #include <trace/events/block.h>
+#include <linux/fscrypt.h>
 
 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
 static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
@@ -3104,6 +3105,8 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
	 */
 	bio = bio_alloc(GFP_NOIO, 1);
 
+	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO | __GFP_NOFAIL);
+
 	if (wbc) {
 		wbc_init_bio(wbc, bio);
 		wbc_account_io(wbc, bh->b_page, bh->b_size);
fs/crypto/Kconfig

@@ -6,6 +6,8 @@ config FS_ENCRYPTION
 	select CRYPTO_ECB
 	select CRYPTO_XTS
 	select CRYPTO_CTS
+	select CRYPTO_SHA512
+	select CRYPTO_HMAC
 	select KEYS
 	help
 	  Enable encryption of files and directories.  This
@@ -13,3 +15,9 @@ config FS_ENCRYPTION
 	  efficient since it avoids caching the encrypted and
 	  decrypted pages in the page cache.  Currently Ext4,
 	  F2FS and UBIFS make use of this feature.
+
+config FS_ENCRYPTION_INLINE_CRYPT
+	bool "Enable fscrypt to use inline crypto"
+	depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
+	help
+	  Enable fscrypt to use inline encryption hardware if available.
fs/crypto/Makefile

@@ -1,4 +1,13 @@
 obj-$(CONFIG_FS_ENCRYPTION)	+= fscrypto.o
 
-fscrypto-y := crypto.o fname.o hooks.o keyinfo.o policy.o
+fscrypto-y := crypto.o \
+	      fname.o \
+	      hkdf.o \
+	      hooks.o \
+	      keyring.o \
+	      keysetup.o \
+	      keysetup_v1.o \
+	      policy.o
 
 fscrypto-$(CONFIG_BLOCK)	+= bio.o
+fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o
fs/crypto/bio.c

@@ -26,7 +26,7 @@
 #include <linux/namei.h>
 #include "fscrypt_private.h"
 
-static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
+void fscrypt_decrypt_bio(struct bio *bio)
 {
 	struct bio_vec *bv;
 	int i;
@@ -38,62 +38,47 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 			       bv->bv_offset);
 		if (ret)
 			SetPageError(page);
-		else if (done)
+		else
 			SetPageUptodate(page);
-		if (done)
-			unlock_page(page);
 	}
 }
-
-void fscrypt_decrypt_bio(struct bio *bio)
-{
-	__fscrypt_decrypt_bio(bio, false);
-}
 EXPORT_SYMBOL(fscrypt_decrypt_bio);
 
-static void completion_pages(struct work_struct *work)
-{
-	struct fscrypt_ctx *ctx = container_of(work, struct fscrypt_ctx, work);
-	struct bio *bio = ctx->bio;
-
-	__fscrypt_decrypt_bio(bio, true);
-	fscrypt_release_ctx(ctx);
-	bio_put(bio);
-}
-
-void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
-{
-	INIT_WORK(&ctx->work, completion_pages);
-	ctx->bio = bio;
-	fscrypt_enqueue_decrypt_work(&ctx->work);
-}
-EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
-
 int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 			  sector_t pblk, unsigned int len)
 {
 	const unsigned int blockbits = inode->i_blkbits;
 	const unsigned int blocksize = 1 << blockbits;
+	const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode);
 	struct page *ciphertext_page;
 	struct bio *bio;
 	int ret, err = 0;
 
-	ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
-	if (!ciphertext_page)
-		return -ENOMEM;
+	if (inlinecrypt) {
+		ciphertext_page = ZERO_PAGE(0);
+	} else {
+		ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
+		if (!ciphertext_page)
+			return -ENOMEM;
+	}
 
 	while (len--) {
-		err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
-					  ZERO_PAGE(0), ciphertext_page,
-					  blocksize, 0, GFP_NOFS);
-		if (err)
-			goto errout;
+		if (!inlinecrypt) {
+			err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
+						  ZERO_PAGE(0), ciphertext_page,
+						  blocksize, 0, GFP_NOFS);
+			if (err)
+				goto errout;
+		}
 
 		bio = bio_alloc(GFP_NOWAIT, 1);
 		if (!bio) {
 			err = -ENOMEM;
 			goto errout;
 		}
+		err = fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO);
+		if (err) {
+			bio_put(bio);
+			goto errout;
+		}
 		bio_set_dev(bio, inode->i_sb->s_bdev);
 		bio->bi_iter.bi_sector = pblk << (blockbits - 9);
 		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
@@ -115,7 +100,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	}
 	err = 0;
 errout:
-	fscrypt_free_bounce_page(ciphertext_page);
+	if (!inlinecrypt)
+		fscrypt_free_bounce_page(ciphertext_page);
 	return err;
 }
 EXPORT_SYMBOL(fscrypt_zeroout_range);
fs/crypto/crypto.c

@@ -26,29 +26,20 @@
 #include <linux/ratelimit.h>
 #include <linux/dcache.h>
 #include <linux/namei.h>
-#include <crypto/aes.h>
 #include <crypto/skcipher.h>
 #include "fscrypt_private.h"
 
 static unsigned int num_prealloc_crypto_pages = 32;
-static unsigned int num_prealloc_crypto_ctxs = 128;
 
 module_param(num_prealloc_crypto_pages, uint, 0444);
 MODULE_PARM_DESC(num_prealloc_crypto_pages,
 		"Number of crypto pages to preallocate");
-module_param(num_prealloc_crypto_ctxs, uint, 0444);
-MODULE_PARM_DESC(num_prealloc_crypto_ctxs,
-		"Number of crypto contexts to preallocate");
 
 static mempool_t *fscrypt_bounce_page_pool = NULL;
 
-static LIST_HEAD(fscrypt_free_ctxs);
-static DEFINE_SPINLOCK(fscrypt_ctx_lock);
-
 static struct workqueue_struct *fscrypt_read_workqueue;
 static DEFINE_MUTEX(fscrypt_init_mutex);
 
-static struct kmem_cache *fscrypt_ctx_cachep;
 struct kmem_cache *fscrypt_info_cachep;
 
 void fscrypt_enqueue_decrypt_work(struct work_struct *work)
@@ -57,62 +48,6 @@ void fscrypt_enqueue_decrypt_work(struct work_struct *work)
 }
 EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);
 
-/**
- * fscrypt_release_ctx() - Release a decryption context
- * @ctx: The decryption context to release.
- *
- * If the decryption context was allocated from the pre-allocated pool, return
- * it to that pool.  Else, free it.
- */
-void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
-{
-	unsigned long flags;
-
-	if (ctx->flags & FS_CTX_REQUIRES_FREE_ENCRYPT_FL) {
-		kmem_cache_free(fscrypt_ctx_cachep, ctx);
-	} else {
-		spin_lock_irqsave(&fscrypt_ctx_lock, flags);
-		list_add(&ctx->free_list, &fscrypt_free_ctxs);
-		spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
-	}
-}
-EXPORT_SYMBOL(fscrypt_release_ctx);
-
-/**
- * fscrypt_get_ctx() - Get a decryption context
- * @gfp_flags:   The gfp flag for memory allocation
- *
- * Allocate and initialize a decryption context.
- *
- * Return: A new decryption context on success; an ERR_PTR() otherwise.
- */
-struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
-{
-	struct fscrypt_ctx *ctx;
-	unsigned long flags;
-
-	/*
-	 * First try getting a ctx from the free list so that we don't have to
-	 * call into the slab allocator.
-	 */
-	spin_lock_irqsave(&fscrypt_ctx_lock, flags);
-	ctx = list_first_entry_or_null(&fscrypt_free_ctxs,
-					struct fscrypt_ctx, free_list);
-	if (ctx)
-		list_del(&ctx->free_list);
-	spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
-	if (!ctx) {
-		ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, gfp_flags);
-		if (!ctx)
-			return ERR_PTR(-ENOMEM);
-		ctx->flags |= FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
-	} else {
-		ctx->flags &= ~FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
-	}
-	return ctx;
-}
-EXPORT_SYMBOL(fscrypt_get_ctx);
-
 struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags)
 {
 	return mempool_alloc(fscrypt_bounce_page_pool, gfp_flags);
@@ -137,14 +72,17 @@ EXPORT_SYMBOL(fscrypt_free_bounce_page);
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
 			 const struct fscrypt_info *ci)
 {
+	u8 flags = fscrypt_policy_flags(&ci->ci_policy);
+
 	memset(iv, 0, ci->ci_mode->ivsize);
-	iv->lblk_num = cpu_to_le64(lblk_num);
 
-	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY)
+	if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
+		WARN_ON_ONCE((u32)lblk_num != lblk_num);
+		lblk_num |= (u64)ci->ci_inode->i_ino << 32;
+	} else if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
 		memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
-	if (ci->ci_essiv_tfm != NULL)
-		crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw);
+	}
+	iv->lblk_num = cpu_to_le64(lblk_num);
 }
|
||||
|
||||
/* Encrypt or decrypt a single filesystem block of file contents */
|
||||
@ -187,10 +125,8 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
|
||||
res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
|
||||
skcipher_request_free(req);
|
||||
if (res) {
|
||||
fscrypt_err(inode->i_sb,
|
||||
"%scryption failed for inode %lu, block %llu: %d",
|
||||
(rw == FS_DECRYPT ? "de" : "en"),
|
||||
inode->i_ino, lblk_num, res);
|
||||
fscrypt_err(inode, "%scryption failed for block %llu: %d",
|
||||
(rw == FS_DECRYPT ? "De" : "En"), lblk_num, res);
|
||||
return res;
|
||||
}
|
||||
return 0;
|
||||
@@ -397,17 +333,6 @@ const struct dentry_operations fscrypt_d_ops = {
	.d_revalidate = fscrypt_d_revalidate,
};

static void fscrypt_destroy(void)
{
	struct fscrypt_ctx *pos, *n;

	list_for_each_entry_safe(pos, n, &fscrypt_free_ctxs, free_list)
		kmem_cache_free(fscrypt_ctx_cachep, pos);
	INIT_LIST_HEAD(&fscrypt_free_ctxs);
	mempool_destroy(fscrypt_bounce_page_pool);
	fscrypt_bounce_page_pool = NULL;
}

/**
 * fscrypt_initialize() - allocate major buffers for fs encryption.
 * @cop_flags: fscrypt operations flags
@@ -415,11 +340,11 @@ static void fscrypt_destroy(void)
 * We only call this when we start accessing encrypted files, since it
 * results in memory getting allocated that wouldn't otherwise be used.
 *
 * Return: Zero on success, non-zero otherwise.
 * Return: 0 on success; -errno on failure
 */
int fscrypt_initialize(unsigned int cop_flags)
{
	int i, res = -ENOMEM;
	int err = 0;

	/* No need to allocate a bounce page pool if this FS won't use it. */
	if (cop_flags & FS_CFLG_OWN_PAGES)
@@ -427,32 +352,21 @@ int fscrypt_initialize(unsigned int cop_flags)

	mutex_lock(&fscrypt_init_mutex);
	if (fscrypt_bounce_page_pool)
		goto already_initialized;

	for (i = 0; i < num_prealloc_crypto_ctxs; i++) {
		struct fscrypt_ctx *ctx;

		ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, GFP_NOFS);
		if (!ctx)
			goto fail;
		list_add(&ctx->free_list, &fscrypt_free_ctxs);
	}
		goto out_unlock;

	err = -ENOMEM;
	fscrypt_bounce_page_pool =
		mempool_create_page_pool(num_prealloc_crypto_pages, 0);
	if (!fscrypt_bounce_page_pool)
		goto fail;
		goto out_unlock;

already_initialized:
	err = 0;
out_unlock:
	mutex_unlock(&fscrypt_init_mutex);
	return 0;
fail:
	fscrypt_destroy();
	mutex_unlock(&fscrypt_init_mutex);
	return res;
	return err;
}

void fscrypt_msg(struct super_block *sb, const char *level,
void fscrypt_msg(const struct inode *inode, const char *level,
		 const char *fmt, ...)
{
	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
@@ -466,8 +380,9 @@ void fscrypt_msg(struct super_block *sb, const char *level,
	va_start(args, fmt);
	vaf.fmt = fmt;
	vaf.va = &args;
	if (sb)
		printk("%sfscrypt (%s): %pV\n", level, sb->s_id, &vaf);
	if (inode)
		printk("%sfscrypt (%s, inode %lu): %pV\n",
		       level, inode->i_sb->s_id, inode->i_ino, &vaf);
	else
		printk("%sfscrypt: %pV\n", level, &vaf);
	va_end(args);
@@ -478,6 +393,8 @@ void fscrypt_msg(struct super_block *sb, const char *level,
 */
static int __init fscrypt_init(void)
{
	int err = -ENOMEM;

	/*
	 * Use an unbound workqueue to allow bios to be decrypted in parallel
	 * even when they happen to complete on the same CPU. This sacrifices
@@ -492,39 +409,21 @@ static int __init fscrypt_init(void)
	if (!fscrypt_read_workqueue)
		goto fail;

	fscrypt_ctx_cachep = KMEM_CACHE(fscrypt_ctx, SLAB_RECLAIM_ACCOUNT);
	if (!fscrypt_ctx_cachep)
		goto fail_free_queue;

	fscrypt_info_cachep = KMEM_CACHE(fscrypt_info, SLAB_RECLAIM_ACCOUNT);
	if (!fscrypt_info_cachep)
		goto fail_free_ctx;
		goto fail_free_queue;

	err = fscrypt_init_keyring();
	if (err)
		goto fail_free_info;

	return 0;

fail_free_ctx:
	kmem_cache_destroy(fscrypt_ctx_cachep);
fail_free_info:
	kmem_cache_destroy(fscrypt_info_cachep);
fail_free_queue:
	destroy_workqueue(fscrypt_read_workqueue);
fail:
	return -ENOMEM;
	return err;
}
module_init(fscrypt_init)

/**
 * fscrypt_exit() - Shutdown the fs encryption system
 */
static void __exit fscrypt_exit(void)
{
	fscrypt_destroy();

	if (fscrypt_read_workqueue)
		destroy_workqueue(fscrypt_read_workqueue);
	kmem_cache_destroy(fscrypt_ctx_cachep);
	kmem_cache_destroy(fscrypt_info_cachep);

	fscrypt_essiv_cleanup();
}
module_exit(fscrypt_exit);

MODULE_LICENSE("GPL");
late_initcall(fscrypt_init)

@@ -71,9 +71,7 @@ int fname_encrypt(struct inode *inode, const struct qstr *iname,
	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	skcipher_request_free(req);
	if (res < 0) {
		fscrypt_err(inode->i_sb,
			    "Filename encryption failed for inode %lu: %d",
			    inode->i_ino, res);
		fscrypt_err(inode, "Filename encryption failed: %d", res);
		return res;
	}

@@ -117,9 +115,7 @@ static int fname_decrypt(struct inode *inode,
	res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
	skcipher_request_free(req);
	if (res < 0) {
		fscrypt_err(inode->i_sb,
			    "Filename decryption failed for inode %lu: %d",
			    inode->i_ino, res);
		fscrypt_err(inode, "Filename decryption failed: %d", res);
		return res;
	}

@@ -127,44 +123,45 @@ static int fname_decrypt(struct inode *inode,
	return 0;
}

static const char *lookup_table =
static const char lookup_table[65] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";

#define BASE64_CHARS(nbytes)	DIV_ROUND_UP((nbytes) * 4, 3)

/**
 * digest_encode() -
 * base64_encode() -
 *
 * Encodes the input digest using characters from the set [a-zA-Z0-9_+].
 * Encodes the input string using characters from the set [A-Za-z0-9+,].
 * The encoded string is roughly 4/3 times the size of the input string.
 *
 * Return: length of the encoded string
 */
static int digest_encode(const char *src, int len, char *dst)
static int base64_encode(const u8 *src, int len, char *dst)
{
	int i = 0, bits = 0, ac = 0;
	int i, bits = 0, ac = 0;
	char *cp = dst;

	while (i < len) {
		ac += (((unsigned char) src[i]) << bits);
	for (i = 0; i < len; i++) {
		ac += src[i] << bits;
		bits += 8;
		do {
			*cp++ = lookup_table[ac & 0x3f];
			ac >>= 6;
			bits -= 6;
		} while (bits >= 6);
		i++;
	}
	if (bits)
		*cp++ = lookup_table[ac & 0x3f];
	return cp - dst;
}

static int digest_decode(const char *src, int len, char *dst)
static int base64_decode(const char *src, int len, u8 *dst)
{
	int i = 0, bits = 0, ac = 0;
	int i, bits = 0, ac = 0;
	const char *p;
	char *cp = dst;
	u8 *cp = dst;

	while (i < len) {
	for (i = 0; i < len; i++) {
		p = strchr(lookup_table, src[i]);
		if (p == NULL || src[i] == 0)
			return -2;
@@ -175,7 +172,6 @@ static int digest_decode(const char *src, int len, char *dst)
		ac >>= 8;
		bits -= 8;
	}
		i++;
	}
	if (ac)
		return -1;
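The renamed base64 routines above use a non-standard alphabet (`+` and `,` instead of `+` and `/`, no `=` padding). A stand-alone userspace copy can round-trip data to sanity-check the logic; note the hunk elides the middle of the decode loop, so the "consume 6 bits per character" step below is filled in by us and should be treated as an assumption:

```c
#include <assert.h>
#include <string.h>

typedef unsigned char u8;

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define BASE64_CHARS(nbytes)	DIV_ROUND_UP((nbytes) * 4, 3)

static const char lookup_table[65] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";

/* Userspace copy of the base64_encode() shown in the diff above. */
static int base64_encode(const u8 *src, int len, char *dst)
{
	int i, bits = 0, ac = 0;
	char *cp = dst;

	for (i = 0; i < len; i++) {
		ac += src[i] << bits;
		bits += 8;
		do {
			*cp++ = lookup_table[ac & 0x3f];
			ac >>= 6;
			bits -= 6;
		} while (bits >= 6);
	}
	if (bits)
		*cp++ = lookup_table[ac & 0x3f];
	return cp - dst;
}

/* Userspace copy of base64_decode(); the 6-bit accumulation in the
 * middle is our reconstruction of the elided hunk lines. */
static int base64_decode(const char *src, int len, u8 *dst)
{
	int i, bits = 0, ac = 0;
	const char *p;
	u8 *cp = dst;

	for (i = 0; i < len; i++) {
		p = strchr(lookup_table, src[i]);
		if (p == NULL || src[i] == 0)
			return -2;
		ac += (int)(p - lookup_table) << bits;
		bits += 6;
		if (bits >= 8) {
			*cp++ = ac & 0xff;
			ac >>= 8;
			bits -= 8;
		}
	}
	if (ac)
		return -1;
	return cp - dst;
}
```

A 5-byte input needs `BASE64_CHARS(5) == 7` output characters (ceil(40 bits / 6)), and decoding rejects any string with nonzero leftover bits.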
@@ -185,8 +181,9 @@ static int digest_decode(const char *src, int len, char *dst)
bool fscrypt_fname_encrypted_size(const struct inode *inode, u32 orig_len,
				  u32 max_len, u32 *encrypted_len_ret)
{
	int padding = 4 << (inode->i_crypt_info->ci_flags &
			    FS_POLICY_FLAGS_PAD_MASK);
	const struct fscrypt_info *ci = inode->i_crypt_info;
	int padding = 4 << (fscrypt_policy_flags(&ci->ci_policy) &
			    FSCRYPT_POLICY_FLAGS_PAD_MASK);
	u32 encrypted_len;

	if (orig_len > max_len)
@@ -272,7 +269,7 @@ int fscrypt_fname_disk_to_usr(struct inode *inode,
		return fname_decrypt(inode, iname, oname);

	if (iname->len <= FSCRYPT_FNAME_MAX_UNDIGESTED_SIZE) {
		oname->len = digest_encode(iname->name, iname->len,
		oname->len = base64_encode(iname->name, iname->len,
					   oname->name);
		return 0;
	}
@@ -287,7 +284,7 @@ int fscrypt_fname_disk_to_usr(struct inode *inode,
	       FSCRYPT_FNAME_DIGEST(iname->name, iname->len),
	       FSCRYPT_FNAME_DIGEST_SIZE);
	oname->name[0] = '_';
	oname->len = 1 + digest_encode((const char *)&digested_name,
	oname->len = 1 + base64_encode((const u8 *)&digested_name,
				       sizeof(digested_name), oname->name + 1);
	return 0;
}
@@ -380,8 +377,8 @@ int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname,
	if (fname->crypto_buf.name == NULL)
		return -ENOMEM;

	ret = digest_decode(iname->name + digested, iname->len - digested,
			    fname->crypto_buf.name);
	ret = base64_decode(iname->name + digested, iname->len - digested,
			    fname->crypto_buf.name);
	if (ret < 0) {
		ret = -ENOENT;
		goto errout;
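The padding expression in fscrypt_fname_encrypted_size() above selects how far encrypted filename lengths are rounded up, from the low bits of the policy flags. A sketch of just that computation (the mask value 0x3 is an assumption taken from the UAPI definition of FSCRYPT_POLICY_FLAGS_PAD_MASK, not from this diff):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: padding = 4 << (flags & FSCRYPT_POLICY_FLAGS_PAD_MASK).
 * The two pad bits select 4-, 8-, 16-, or 32-byte padding of encrypted
 * filename lengths, which hides the exact plaintext name length. */
static uint32_t fname_padding(uint8_t policy_flags)
{
	return 4u << (policy_flags & 0x3);
}
```

Larger padding leaks less about name lengths at the cost of longer on-disk names.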

@@ -4,9 +4,8 @@
 *
 * Copyright (C) 2015, Google, Inc.
 *
 * This contains encryption key functions.
 *
 * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
 * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
 * Heavily modified since then.
 */

#ifndef _FSCRYPT_PRIVATE_H
@@ -14,31 +13,137 @@

#include <linux/fscrypt.h>
#include <crypto/hash.h>
#include <linux/bio-crypt-ctx.h>

struct fscrypt_master_key;

#define CONST_STRLEN(str)	(sizeof(str) - 1)

/* Encryption parameters */
#define FS_KEY_DERIVATION_NONCE_SIZE	16

/**
 * Encryption context for inode
 *
 * Protector format:
 *  1 byte: Protector format (1 = this version)
 *  1 byte: File contents encryption mode
 *  1 byte: File names encryption mode
 *  1 byte: Flags
 *  8 bytes: Master Key descriptor
 *  16 bytes: Encryption Key derivation nonce
 */
struct fscrypt_context {
	u8 format;
#define FSCRYPT_MIN_KEY_SIZE	16

#define FSCRYPT_CONTEXT_V1	1
#define FSCRYPT_CONTEXT_V2	2

struct fscrypt_context_v1 {
	u8 version; /* FSCRYPT_CONTEXT_V1 */
	u8 contents_encryption_mode;
	u8 filenames_encryption_mode;
	u8 flags;
	u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
	u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
} __packed;
};

#define FS_ENCRYPTION_CONTEXT_FORMAT_V1	1
struct fscrypt_context_v2 {
	u8 version; /* FSCRYPT_CONTEXT_V2 */
	u8 contents_encryption_mode;
	u8 filenames_encryption_mode;
	u8 flags;
	u8 __reserved[4];
	u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
};

/**
 * fscrypt_context - the encryption context of an inode
 *
 * This is the on-disk equivalent of an fscrypt_policy, stored alongside each
 * encrypted file usually in a hidden extended attribute. It contains the
 * fields from the fscrypt_policy, in order to identify the encryption algorithm
 * and key with which the file is encrypted. It also contains a nonce that was
 * randomly generated by fscrypt itself; this is used as KDF input or as a tweak
 * to cause different files to be encrypted differently.
 */
union fscrypt_context {
	u8 version;
	struct fscrypt_context_v1 v1;
	struct fscrypt_context_v2 v2;
};

/*
 * Return the size expected for the given fscrypt_context based on its version
 * number, or 0 if the context version is unrecognized.
 */
static inline int fscrypt_context_size(const union fscrypt_context *ctx)
{
	switch (ctx->version) {
	case FSCRYPT_CONTEXT_V1:
		BUILD_BUG_ON(sizeof(ctx->v1) != 28);
		return sizeof(ctx->v1);
	case FSCRYPT_CONTEXT_V2:
		BUILD_BUG_ON(sizeof(ctx->v2) != 40);
		return sizeof(ctx->v2);
	}
	return 0;
}
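The BUILD_BUG_ON()s in fscrypt_context_size() above pin the on-disk context sizes to 28 (v1) and 40 (v2) bytes. Because every member is a byte or byte array, a userspace mirror of the two layouts gets the same sizes with no compiler padding; the macro values below are assumptions matching the field widths stated in the diff:

```c
#include <stdint.h>

typedef uint8_t u8;

/* Field widths as used by the declarations in the diff above. */
#define FSCRYPT_KEY_DESCRIPTOR_SIZE	8
#define FSCRYPT_KEY_IDENTIFIER_SIZE	16
#define FS_KEY_DERIVATION_NONCE_SIZE	16

/* Userspace mirrors of the two on-disk context layouts: all-u8 members,
 * so sizeof() equals the on-disk size checked by the BUILD_BUG_ON()s. */
struct fscrypt_context_v1 {
	u8 version;
	u8 contents_encryption_mode;
	u8 filenames_encryption_mode;
	u8 flags;
	u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
};

struct fscrypt_context_v2 {
	u8 version;
	u8 contents_encryption_mode;
	u8 filenames_encryption_mode;
	u8 flags;
	u8 __reserved[4];
	u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
};
```

The leading `version` byte is what lets fscrypt_context_size() dispatch on a raw on-disk blob before knowing which variant it holds.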

#undef fscrypt_policy
union fscrypt_policy {
	u8 version;
	struct fscrypt_policy_v1 v1;
	struct fscrypt_policy_v2 v2;
};

/*
 * Return the size expected for the given fscrypt_policy based on its version
 * number, or 0 if the policy version is unrecognized.
 */
static inline int fscrypt_policy_size(const union fscrypt_policy *policy)
{
	switch (policy->version) {
	case FSCRYPT_POLICY_V1:
		return sizeof(policy->v1);
	case FSCRYPT_POLICY_V2:
		return sizeof(policy->v2);
	}
	return 0;
}

/* Return the contents encryption mode of a valid encryption policy */
static inline u8
fscrypt_policy_contents_mode(const union fscrypt_policy *policy)
{
	switch (policy->version) {
	case FSCRYPT_POLICY_V1:
		return policy->v1.contents_encryption_mode;
	case FSCRYPT_POLICY_V2:
		return policy->v2.contents_encryption_mode;
	}
	BUG();
}

/* Return the filenames encryption mode of a valid encryption policy */
static inline u8
fscrypt_policy_fnames_mode(const union fscrypt_policy *policy)
{
	switch (policy->version) {
	case FSCRYPT_POLICY_V1:
		return policy->v1.filenames_encryption_mode;
	case FSCRYPT_POLICY_V2:
		return policy->v2.filenames_encryption_mode;
	}
	BUG();
}

/* Return the flags (FSCRYPT_POLICY_FLAG*) of a valid encryption policy */
static inline u8
fscrypt_policy_flags(const union fscrypt_policy *policy)
{
	switch (policy->version) {
	case FSCRYPT_POLICY_V1:
		return policy->v1.flags;
	case FSCRYPT_POLICY_V2:
		return policy->v2.flags;
	}
	BUG();
}

static inline bool
fscrypt_is_direct_key_policy(const union fscrypt_policy *policy)
{
	return fscrypt_policy_flags(policy) & FSCRYPT_POLICY_FLAG_DIRECT_KEY;
}

|
||||
* For encrypted symlinks, the ciphertext length is stored at the beginning
|
||||
@ -61,30 +166,49 @@ struct fscrypt_info {
|
||||
/* The actual crypto transform used for encryption and decryption */
|
||||
struct crypto_skcipher *ci_ctfm;
|
||||
|
||||
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
|
||||
/*
|
||||
* Cipher for ESSIV IV generation. Only set for CBC contents
|
||||
* encryption, otherwise is NULL.
|
||||
* The raw key for inline encryption, if this file is using inline
|
||||
* encryption rather than the traditional filesystem layer encryption.
|
||||
*/
|
||||
struct crypto_cipher *ci_essiv_tfm;
|
||||
const u8 *ci_inline_crypt_key;
|
||||
#endif
|
||||
|
||||
/* True if the key should be freed when this fscrypt_info is freed */
|
||||
bool ci_owns_key;
|
||||
|
||||
/*
|
||||
* Encryption mode used for this inode. It corresponds to either
|
||||
* ci_data_mode or ci_filename_mode, depending on the inode type.
|
||||
* Encryption mode used for this inode. It corresponds to either the
|
||||
* contents or filenames encryption mode, depending on the inode type.
|
||||
*/
|
||||
struct fscrypt_mode *ci_mode;
|
||||
|
||||
/*
|
||||
* If non-NULL, then this inode uses a master key directly rather than a
|
||||
* derived key, and ci_ctfm will equal ci_master_key->mk_ctfm.
|
||||
* Otherwise, this inode uses a derived key.
|
||||
*/
|
||||
struct fscrypt_master_key *ci_master_key;
|
||||
/* Back-pointer to the inode */
|
||||
struct inode *ci_inode;
|
||||
|
||||
/* fields from the fscrypt_context */
|
||||
u8 ci_data_mode;
|
||||
u8 ci_filename_mode;
|
||||
u8 ci_flags;
|
||||
u8 ci_master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
|
||||
/*
|
||||
* The master key with which this inode was unlocked (decrypted). This
|
||||
* will be NULL if the master key was found in a process-subscribed
|
||||
* keyring rather than in the filesystem-level keyring.
|
||||
*/
|
||||
struct key *ci_master_key;
|
||||
|
||||
/*
|
||||
* Link in list of inodes that were unlocked with the master key.
|
||||
* Only used when ->ci_master_key is set.
|
||||
*/
|
||||
struct list_head ci_master_key_link;
|
||||
|
||||
/*
|
||||
* If non-NULL, then encryption is done using the master key directly
|
||||
* and ci_ctfm will equal ci_direct_key->dk_ctfm.
|
||||
*/
|
||||
struct fscrypt_direct_key *ci_direct_key;
|
||||
|
||||
/* The encryption policy used by this inode */
|
||||
union fscrypt_policy ci_policy;
|
||||
|
||||
/* This inode's nonce, copied from the fscrypt_context */
|
||||
u8 ci_nonce[FS_KEY_DERIVATION_NONCE_SIZE];
|
||||
};
|
||||
|
||||
@@ -93,21 +217,19 @@ typedef enum {
	FS_ENCRYPT,
} fscrypt_direction_t;

#define FS_CTX_REQUIRES_FREE_ENCRYPT_FL	0x00000001

static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
					   u32 filenames_mode)
{
	if (contents_mode == FS_ENCRYPTION_MODE_AES_128_CBC &&
	    filenames_mode == FS_ENCRYPTION_MODE_AES_128_CTS)
	if (contents_mode == FSCRYPT_MODE_AES_128_CBC &&
	    filenames_mode == FSCRYPT_MODE_AES_128_CTS)
		return true;

	if (contents_mode == FS_ENCRYPTION_MODE_AES_256_XTS &&
	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
	if (contents_mode == FSCRYPT_MODE_AES_256_XTS &&
	    filenames_mode == FSCRYPT_MODE_AES_256_CTS)
		return true;

	if (contents_mode == FS_ENCRYPTION_MODE_ADIANTUM &&
	    filenames_mode == FS_ENCRYPTION_MODE_ADIANTUM)
	if (contents_mode == FSCRYPT_MODE_ADIANTUM &&
	    filenames_mode == FSCRYPT_MODE_ADIANTUM)
		return true;

	return false;
@@ -125,12 +247,12 @@ extern struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags);
extern const struct dentry_operations fscrypt_d_ops;

extern void __printf(3, 4) __cold
fscrypt_msg(struct super_block *sb, const char *level, const char *fmt, ...);
fscrypt_msg(const struct inode *inode, const char *level, const char *fmt, ...);

#define fscrypt_warn(sb, fmt, ...)		\
	fscrypt_msg(sb, KERN_WARNING, fmt, ##__VA_ARGS__)
#define fscrypt_err(sb, fmt, ...)		\
	fscrypt_msg(sb, KERN_ERR, fmt, ##__VA_ARGS__)
#define fscrypt_warn(inode, fmt, ...)		\
	fscrypt_msg((inode), KERN_WARNING, fmt, ##__VA_ARGS__)
#define fscrypt_err(inode, fmt, ...)		\
	fscrypt_msg((inode), KERN_ERR, fmt, ##__VA_ARGS__)

#define FSCRYPT_MAX_IV_SIZE	32

@@ -155,17 +277,279 @@ extern bool fscrypt_fname_encrypted_size(const struct inode *inode,
					 u32 orig_len, u32 max_len,
					 u32 *encrypted_len_ret);

/* keyinfo.c */
/* hkdf.c */

struct fscrypt_hkdf {
	struct crypto_shash *hmac_tfm;
};

extern int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
			     unsigned int master_key_size);

/*
 * The list of contexts in which fscrypt uses HKDF. These values are used as
 * the first byte of the HKDF application-specific info string to guarantee that
 * info strings are never repeated between contexts. This ensures that all HKDF
 * outputs are unique and cryptographically isolated, i.e. knowledge of one
 * output doesn't reveal another.
 */
#define HKDF_CONTEXT_KEY_IDENTIFIER	1
#define HKDF_CONTEXT_PER_FILE_KEY	2
#define HKDF_CONTEXT_DIRECT_KEY		3
#define HKDF_CONTEXT_IV_INO_LBLK_64_KEY	4

extern int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
			       const u8 *info, unsigned int infolen,
			       u8 *okm, unsigned int okmlen);

extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);

/* inline_crypt.c */
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
extern bool fscrypt_should_use_inline_encryption(const struct fscrypt_info *ci);

extern int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci,
					const u8 *derived_key);

extern void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci);

extern int fscrypt_setup_per_mode_inline_crypt_key(
					struct fscrypt_info *ci,
					struct fscrypt_master_key *mk);

extern void fscrypt_evict_inline_crypt_keys(struct fscrypt_master_key *mk);

#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */

static inline bool fscrypt_should_use_inline_encryption(
					const struct fscrypt_info *ci)
{
	return false;
}

static inline int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci,
					       const u8 *derived_key)
{
	WARN_ON(1);
	return -EOPNOTSUPP;
}

static inline void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci)
{
}

static inline int fscrypt_setup_per_mode_inline_crypt_key(
					struct fscrypt_info *ci,
					struct fscrypt_master_key *mk)
{
	WARN_ON(1);
	return -EOPNOTSUPP;
}

static inline void fscrypt_evict_inline_crypt_keys(
					struct fscrypt_master_key *mk)
{
}
#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */

/* keyring.c */

/*
 * fscrypt_master_key_secret - secret key material of an in-use master key
 */
struct fscrypt_master_key_secret {

	/*
	 * For v2 policy keys: HKDF context keyed by this master key.
	 * For v1 policy keys: not set (hkdf.hmac_tfm == NULL).
	 */
	struct fscrypt_hkdf hkdf;

	/* Size of the raw key in bytes. Set even if ->raw isn't set. */
	u32 size;

	/* For v1 policy keys: the raw key. Wiped for v2 policy keys. */
	u8 raw[FSCRYPT_MAX_KEY_SIZE];

} __randomize_layout;

/*
 * fscrypt_master_key - an in-use master key
 *
 * This represents a master encryption key which has been added to the
 * filesystem and can be used to "unlock" the encrypted files which were
 * encrypted with it.
 */
struct fscrypt_master_key {

	/*
	 * The secret key material. After FS_IOC_REMOVE_ENCRYPTION_KEY is
	 * executed, this is wiped and no new inodes can be unlocked with this
	 * key; however, there may still be inodes in ->mk_decrypted_inodes
	 * which could not be evicted. As long as some inodes still remain,
	 * FS_IOC_REMOVE_ENCRYPTION_KEY can be retried, or
	 * FS_IOC_ADD_ENCRYPTION_KEY can add the secret again.
	 *
	 * Locking: protected by key->sem (outer) and mk_secret_sem (inner).
	 * The reason for two locks is that key->sem also protects modifying
	 * mk_users, which ranks it above the semaphore for the keyring key
	 * type, which is in turn above page faults (via keyring_read). But
	 * sometimes filesystems call fscrypt_get_encryption_info() from within
	 * a transaction, which ranks it below page faults. So we need a
	 * separate lock which protects mk_secret but not also mk_users.
	 */
	struct fscrypt_master_key_secret	mk_secret;
	struct rw_semaphore			mk_secret_sem;

	/*
	 * For v1 policy keys: an arbitrary key descriptor which was assigned by
	 * userspace (->descriptor).
	 *
	 * For v2 policy keys: a cryptographic hash of this key (->identifier).
	 */
	struct fscrypt_key_specifier		mk_spec;

	/*
	 * Keyring which contains a key of type 'key_type_fscrypt_user' for each
	 * user who has added this key. Normally each key will be added by just
	 * one user, but it's possible that multiple users share a key, and in
	 * that case we need to keep track of those users so that one user can't
	 * remove the key before the others want it removed too.
	 *
	 * This is NULL for v1 policy keys; those can only be added by root.
	 *
	 * Locking: in addition to this keyring's own semaphore, this is
	 * protected by the master key's key->sem, so we can do atomic
	 * search+insert. It can also be searched without taking any locks, but
	 * in that case the returned key may have already been removed.
	 */
	struct key		*mk_users;

	/*
	 * Length of ->mk_decrypted_inodes, plus one if mk_secret is present.
	 * Once this goes to 0, the master key is removed from ->s_master_keys.
	 * The 'struct fscrypt_master_key' will continue to live as long as the
	 * 'struct key' whose payload it is, but we won't let this reference
	 * count rise again.
	 */
	refcount_t		mk_refcount;

	/*
	 * List of inodes that were unlocked using this key. This allows the
	 * inodes to be evicted efficiently if the key is removed.
	 */
	struct list_head	mk_decrypted_inodes;
	spinlock_t		mk_decrypted_inodes_lock;

	/* Crypto API transforms for DIRECT_KEY policies, allocated on-demand */
	struct crypto_skcipher	*mk_direct_tfms[__FSCRYPT_MODE_MAX + 1];

	/*
	 * Crypto API transforms for filesystem-layer implementation of
	 * IV_INO_LBLK_64 policies, allocated on-demand.
	 */
	struct crypto_skcipher	*mk_iv_ino_lblk_64_tfms[__FSCRYPT_MODE_MAX + 1];

#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
	/* Raw keys for IV_INO_LBLK_64 policies, allocated on-demand */
	u8			*mk_iv_ino_lblk_64_raw_keys[__FSCRYPT_MODE_MAX + 1];

	/* The data unit size being used for inline encryption */
	unsigned int		mk_data_unit_size;

	/* The filesystem's block device */
	struct block_device	*mk_bdev;
#endif
} __randomize_layout;

static inline bool
is_master_key_secret_present(const struct fscrypt_master_key_secret *secret)
{
	/*
	 * The READ_ONCE() is only necessary for fscrypt_drop_inode() and
	 * fscrypt_key_describe(). These run in atomic context, so they can't
	 * take ->mk_secret_sem and thus 'secret' can change concurrently which
	 * would be a data race. But they only need to know whether the secret
	 * *was* present at the time of check, so READ_ONCE() suffices.
	 */
	return READ_ONCE(secret->size) != 0;
}

static inline const char *master_key_spec_type(
				const struct fscrypt_key_specifier *spec)
{
	switch (spec->type) {
	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
		return "descriptor";
	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
		return "identifier";
	}
	return "[unknown]";
}

static inline int master_key_spec_len(const struct fscrypt_key_specifier *spec)
{
	switch (spec->type) {
	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
		return FSCRYPT_KEY_DESCRIPTOR_SIZE;
	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
		return FSCRYPT_KEY_IDENTIFIER_SIZE;
	}
	return 0;
}

extern struct key *
fscrypt_find_master_key(struct super_block *sb,
			const struct fscrypt_key_specifier *mk_spec);

extern int fscrypt_verify_key_added(struct super_block *sb,
				    const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]);

extern int __init fscrypt_init_keyring(void);

/* keysetup.c */

struct fscrypt_mode {
	const char *friendly_name;
	const char *cipher_str;
	int keysize;
	int ivsize;
	enum blk_crypto_mode_num blk_crypto_mode;
	bool logged_impl_name;
	bool needs_essiv;
};

extern void __exit fscrypt_essiv_cleanup(void);
extern struct fscrypt_mode fscrypt_modes[];

static inline bool
fscrypt_mode_supports_direct_key(const struct fscrypt_mode *mode)
{
	return mode->ivsize >= offsetofend(union fscrypt_iv, nonce);
}

extern struct crypto_skcipher *
fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
			  const struct inode *inode);

extern int fscrypt_set_derived_key(struct fscrypt_info *ci,
				   const u8 *derived_key);

/* keysetup_v1.c */

extern void fscrypt_put_direct_key(struct fscrypt_direct_key *dk);

extern int fscrypt_setup_v1_file_key(struct fscrypt_info *ci,
				     const u8 *raw_master_key);

extern int fscrypt_setup_v1_file_key_via_subscribed_keyrings(
					struct fscrypt_info *ci);
/* policy.c */

extern bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
				   const union fscrypt_policy *policy2);
extern bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
				     const struct inode *inode);
extern int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
				       const union fscrypt_context *ctx_u,
				       int ctx_size);

#endif /* _FSCRYPT_PRIVATE_H */

183
fs/crypto/hkdf.c
Normal file
@@ -0,0 +1,183 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Implementation of HKDF ("HMAC-based Extract-and-Expand Key Derivation
 * Function"), aka RFC 5869. See also the original paper (Krawczyk 2010):
 * "Cryptographic Extraction and Key Derivation: The HKDF Scheme".
 *
 * This is used to derive keys from the fscrypt master keys.
 *
 * Copyright 2019 Google LLC
 */

#include <crypto/hash.h>
#include <crypto/sha.h>

#include "fscrypt_private.h"

/*
 * HKDF supports any unkeyed cryptographic hash algorithm, but fscrypt uses
 * SHA-512 because it is reasonably secure and efficient; and since it produces
 * a 64-byte digest, deriving an AES-256-XTS key preserves all 64 bytes of
 * entropy from the master key and requires only one iteration of HKDF-Expand.
 */
#define HKDF_HMAC_ALG		"hmac(sha512)"
#define HKDF_HASHLEN		SHA512_DIGEST_SIZE

/*
 * HKDF consists of two steps:
 *
 * 1. HKDF-Extract: extract a pseudorandom key of length HKDF_HASHLEN bytes from
 *    the input keying material and optional salt.
 * 2. HKDF-Expand: expand the pseudorandom key into output keying material of
 *    any length, parameterized by an application-specific info string.
 *
 * HKDF-Extract can be skipped if the input is already a pseudorandom key of
 * length HKDF_HASHLEN bytes. However, cipher modes other than AES-256-XTS take
 * shorter keys, and we don't want to force users of those modes to provide
 * unnecessarily long master keys. Thus fscrypt still does HKDF-Extract. No
 * salt is used, since fscrypt master keys should already be pseudorandom and
 * there's no way to persist a random salt per master key from kernel mode.
 */
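Per the fscrypt_hkdf_expand() comment later in this file, the application-specific info string handed to HKDF-Expand is the caller's info bytes prefixed by "fscrypt\0" and the one-byte context. A stand-alone sketch of that layout (the helper name and buffer handling are ours, not the kernel's):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative helper: build the HKDF-Expand info string layout
 * described in this file: "fscrypt\0" || context byte || caller info.
 * Returns the total number of bytes written to 'buf', which the caller
 * must have sized as 9 + infolen. */
static size_t build_hkdf_info(uint8_t context, const uint8_t *info,
			      size_t infolen, uint8_t *buf)
{
	memcpy(buf, "fscrypt", 8);	/* 7 letters plus the '\0' */
	buf[8] = context;
	memcpy(buf + 9, info, infolen);
	return 9 + infolen;
}
```

Because the context byte is always at a fixed offset, two different HKDF uses (e.g. HKDF_CONTEXT_PER_FILE_KEY vs. HKDF_CONTEXT_DIRECT_KEY) can never collide on the same info string, which is what keeps their outputs cryptographically isolated.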
/* HKDF-Extract (RFC 5869 section 2.2), unsalted */
static int hkdf_extract(struct crypto_shash *hmac_tfm, const u8 *ikm,
			unsigned int ikmlen, u8 prk[HKDF_HASHLEN])
{
	static const u8 default_salt[HKDF_HASHLEN];
	SHASH_DESC_ON_STACK(desc, hmac_tfm);
	int err;

	err = crypto_shash_setkey(hmac_tfm, default_salt, HKDF_HASHLEN);
	if (err)
		return err;

	desc->tfm = hmac_tfm;
	desc->flags = 0;
	err = crypto_shash_digest(desc, ikm, ikmlen, prk);
	shash_desc_zero(desc);
	return err;
}

/*
 * Compute HKDF-Extract using the given master key as the input keying
 * material, and prepare an HMAC transform object keyed by the resulting
 * pseudorandom key.
 *
 * Afterwards, the keyed HMAC transform object can be used for HKDF-Expand many
 * times without having to recompute HKDF-Extract each time.
 */
int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
		      unsigned int master_key_size)
{
	struct crypto_shash *hmac_tfm;
	u8 prk[HKDF_HASHLEN];
	int err;

	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
	if (IS_ERR(hmac_tfm)) {
		fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
			    PTR_ERR(hmac_tfm));
		return PTR_ERR(hmac_tfm);
	}

	if (WARN_ON(crypto_shash_digestsize(hmac_tfm) != sizeof(prk))) {
		err = -EINVAL;
		goto err_free_tfm;
	}

	err = hkdf_extract(hmac_tfm, master_key, master_key_size, prk);
	if (err)
		goto err_free_tfm;

	err = crypto_shash_setkey(hmac_tfm, prk, sizeof(prk));
	if (err)
		goto err_free_tfm;

	hkdf->hmac_tfm = hmac_tfm;
	goto out;

err_free_tfm:
	crypto_free_shash(hmac_tfm);
out:
	memzero_explicit(prk, sizeof(prk));
	return err;
}

/*
 * HKDF-Expand (RFC 5869 section 2.3).  This expands the pseudorandom key,
 * which was already keyed into 'hkdf->hmac_tfm' by fscrypt_init_hkdf(), into
 * 'okmlen' bytes of output keying material parameterized by the
 * application-specific 'info' of length 'infolen' bytes, prefixed by
 * "fscrypt\0" and the 'context' byte.  This is thread-safe and may be called
 * by multiple threads in parallel.
 *
 * ('context' isn't part of the HKDF specification; it's just a prefix fscrypt
 * adds to its application-specific info strings to guarantee that it doesn't
 * accidentally repeat an info string when using HKDF for different purposes.)
 */
int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
			const u8 *info, unsigned int infolen,
			u8 *okm, unsigned int okmlen)
{
	SHASH_DESC_ON_STACK(desc, hkdf->hmac_tfm);
	u8 prefix[9];
	unsigned int i;
	int err;
	const u8 *prev = NULL;
	u8 counter = 1;
	u8 tmp[HKDF_HASHLEN];

	if (WARN_ON(okmlen > 255 * HKDF_HASHLEN))
		return -EINVAL;

	desc->tfm = hkdf->hmac_tfm;
	desc->flags = 0;

	memcpy(prefix, "fscrypt\0", 8);
	prefix[8] = context;

	for (i = 0; i < okmlen; i += HKDF_HASHLEN) {

		err = crypto_shash_init(desc);
		if (err)
			goto out;

		if (prev) {
			err = crypto_shash_update(desc, prev, HKDF_HASHLEN);
			if (err)
				goto out;
		}

		err = crypto_shash_update(desc, prefix, sizeof(prefix));
		if (err)
			goto out;

		err = crypto_shash_update(desc, info, infolen);
		if (err)
			goto out;

		BUILD_BUG_ON(sizeof(counter) != 1);
		if (okmlen - i < HKDF_HASHLEN) {
			err = crypto_shash_finup(desc, &counter, 1, tmp);
			if (err)
				goto out;
			memcpy(&okm[i], tmp, okmlen - i);
			memzero_explicit(tmp, sizeof(tmp));
		} else {
			err = crypto_shash_finup(desc, &counter, 1, &okm[i]);
			if (err)
				goto out;
		}
		counter++;
		prev = &okm[i];
	}
	err = 0;
out:
	if (unlikely(err))
		memzero_explicit(okm, okmlen); /* so caller doesn't need to */
	shash_desc_zero(desc);
	return err;
}

void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf)
{
	crypto_free_shash(hkdf->hmac_tfm);
}
@@ -38,9 +38,9 @@ int fscrypt_file_open(struct inode *inode, struct file *filp)
 	dir = dget_parent(file_dentry(filp));
 	if (IS_ENCRYPTED(d_inode(dir)) &&
 	    !fscrypt_has_permitted_context(d_inode(dir), inode)) {
-		fscrypt_warn(inode->i_sb,
-			     "inconsistent encryption contexts: %lu/%lu",
-			     d_inode(dir)->i_ino, inode->i_ino);
+		fscrypt_warn(inode,
+			     "Inconsistent encryption context (parent directory: %lu)",
+			     d_inode(dir)->i_ino);
 		err = -EPERM;
 	}
 	dput(dir);
390  fs/crypto/inline_crypt.c  Normal file
@@ -0,0 +1,390 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Inline encryption support for fscrypt
 *
 * Copyright 2019 Google LLC
 */

/*
 * With "inline encryption", the block layer handles the decryption/encryption
 * as part of the bio, instead of the filesystem doing the crypto itself via
 * the crypto API.  See Documentation/block/inline-encryption.rst.  fscrypt
 * still provides the key and IV to use.
 */

#include <linux/blk-crypto.h>
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/keyslot-manager.h>

#include "fscrypt_private.h"

/* Return true iff inline encryption should be used for this file */
bool fscrypt_should_use_inline_encryption(const struct fscrypt_info *ci)
{
	const struct inode *inode = ci->ci_inode;
	struct super_block *sb = inode->i_sb;

	/* The file must need contents encryption, not filenames encryption */
	if (!S_ISREG(inode->i_mode))
		return false;

	/* blk-crypto must implement the needed encryption algorithm */
	if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
		return false;

	/* DIRECT_KEY needs a 24+ byte IV, so it can't work with 8-byte DUNs */
	if (fscrypt_is_direct_key_policy(&ci->ci_policy))
		return false;

	/* The filesystem must be mounted with -o inlinecrypt */
	if (!sb->s_cop->inline_crypt_enabled ||
	    !sb->s_cop->inline_crypt_enabled(sb))
		return false;

	return true;
}

/* Set a per-file inline encryption key (for passing to blk-crypto) */
int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci, const u8 *derived_key)
{
	const struct fscrypt_mode *mode = ci->ci_mode;
	const struct super_block *sb = ci->ci_inode->i_sb;

	ci->ci_inline_crypt_key = kmemdup(derived_key, mode->keysize, GFP_NOFS);
	if (!ci->ci_inline_crypt_key)
		return -ENOMEM;
	ci->ci_owns_key = true;

	return blk_crypto_start_using_mode(mode->blk_crypto_mode,
					   sb->s_blocksize,
					   sb->s_bdev->bd_queue);
}

/* Free a per-file inline encryption key and evict it from blk-crypto */
void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci)
{
	if (ci->ci_inline_crypt_key != NULL) {
		const struct fscrypt_mode *mode = ci->ci_mode;
		const struct super_block *sb = ci->ci_inode->i_sb;

		blk_crypto_evict_key(sb->s_bdev->bd_queue,
				     ci->ci_inline_crypt_key,
				     mode->blk_crypto_mode, sb->s_blocksize);
		kzfree(ci->ci_inline_crypt_key);
	}
}

/*
 * Set up ->inline_crypt_key (for passing to blk-crypto) for inodes which use
 * an IV_INO_LBLK_64 encryption policy.
 *
 * Return: 0 on success, -errno on failure
 */
int fscrypt_setup_per_mode_inline_crypt_key(struct fscrypt_info *ci,
					    struct fscrypt_master_key *mk)
{
	static DEFINE_MUTEX(inline_crypt_setup_mutex);
	const struct super_block *sb = ci->ci_inode->i_sb;
	struct block_device *bdev = sb->s_bdev;
	const struct fscrypt_mode *mode = ci->ci_mode;
	const u8 mode_num = mode - fscrypt_modes;
	u8 *raw_key;
	u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
	int err;

	if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
		return -EINVAL;

	/* pairs with smp_store_release() below */
	raw_key = smp_load_acquire(&mk->mk_iv_ino_lblk_64_raw_keys[mode_num]);
	if (raw_key) {
		err = 0;
		goto out;
	}

	mutex_lock(&inline_crypt_setup_mutex);

	raw_key = mk->mk_iv_ino_lblk_64_raw_keys[mode_num];
	if (raw_key) {
		err = 0;
		goto out_unlock;
	}

	raw_key = kmalloc(mode->keysize, GFP_NOFS);
	if (!raw_key) {
		err = -ENOMEM;
		goto out_unlock;
	}

	BUILD_BUG_ON(sizeof(mode_num) != 1);
	BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
	BUILD_BUG_ON(sizeof(hkdf_info) != 17);
	hkdf_info[0] = mode_num;
	memcpy(&hkdf_info[1], &sb->s_uuid, sizeof(sb->s_uuid));

	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
				  HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
				  hkdf_info, sizeof(hkdf_info),
				  raw_key, mode->keysize);
	if (err)
		goto out_unlock;

	err = blk_crypto_start_using_mode(mode->blk_crypto_mode,
					  sb->s_blocksize, bdev->bd_queue);
	if (err)
		goto out_unlock;

	/*
	 * When a master key's first inline encryption key is set up, save a
	 * reference to the filesystem's block device so that the inline
	 * encryption keys can be evicted when the master key is destroyed.
	 */
	if (!mk->mk_bdev) {
		mk->mk_bdev = bdgrab(bdev);
		mk->mk_data_unit_size = sb->s_blocksize;
	}

	/* pairs with smp_load_acquire() above */
	smp_store_release(&mk->mk_iv_ino_lblk_64_raw_keys[mode_num], raw_key);
	err = 0;
out_unlock:
	mutex_unlock(&inline_crypt_setup_mutex);
out:
	if (err == 0) {
		ci->ci_inline_crypt_key = raw_key;
		/*
		 * Since each struct fscrypt_master_key belongs to a particular
		 * filesystem (a struct super_block), there should be only one
		 * block device, and only one data unit size as it should equal
		 * the filesystem's blocksize (i.e. s_blocksize).
		 */
		if (WARN_ON(mk->mk_bdev != bdev))
			err = -EINVAL;
		if (WARN_ON(mk->mk_data_unit_size != sb->s_blocksize))
			err = -EINVAL;
	} else {
		kzfree(raw_key);
	}
	return err;
}
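The 17-byte HKDF info string built above is just the one-byte mode number followed by the 16-byte filesystem UUID. A sketch of that layout (the function name is illustrative, not a kernel symbol):

```python
def iv_ino_lblk_64_hkdf_info(mode_num: int, fs_uuid: bytes) -> bytes:
    # Mirrors: hkdf_info[0] = mode_num; memcpy(&hkdf_info[1], &sb->s_uuid, 16);
    assert 0 <= mode_num <= 0xff
    assert len(fs_uuid) == 16
    return bytes([mode_num]) + fs_uuid
```

Because the info string depends only on the mode number and the filesystem UUID, all inodes on one filesystem that use the same mode share one derived key, which is what makes a per-mode (rather than per-file) keyslot possible.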

/*
 * Evict per-mode inline encryption keys from blk-crypto when a master key is
 * destroyed.
 */
void fscrypt_evict_inline_crypt_keys(struct fscrypt_master_key *mk)
{
	struct block_device *bdev = mk->mk_bdev;
	size_t i;

	if (!bdev) /* No inline encryption keys? */
		return;

	for (i = 0; i < ARRAY_SIZE(mk->mk_iv_ino_lblk_64_raw_keys); i++) {
		u8 *raw_key = mk->mk_iv_ino_lblk_64_raw_keys[i];

		if (raw_key != NULL) {
			blk_crypto_evict_key(bdev->bd_queue, raw_key,
					     fscrypt_modes[i].blk_crypto_mode,
					     mk->mk_data_unit_size);
			kzfree(raw_key);
		}
	}
	bdput(bdev);
}

/**
 * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline encryption
 * @inode: an inode
 *
 * Return: true if the inode requires file contents encryption and if the
 *	   encryption should be done in the block layer via blk-crypto rather
 *	   than in the filesystem layer.
 */
bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
{
	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
		inode->i_crypt_info->ci_inline_crypt_key != NULL;
}
EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);

/**
 * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer encryption
 * @inode: an inode
 *
 * Return: true if the inode requires file contents encryption and if the
 *	   encryption should be done in the filesystem layer rather than in the
 *	   block layer via blk-crypto.
 */
bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
{
	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
		inode->i_crypt_info->ci_inline_crypt_key == NULL;
}
EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);

static inline u64 fscrypt_generate_dun(const struct fscrypt_info *ci,
				       u64 lblk_num)
{
	union fscrypt_iv iv;

	fscrypt_generate_iv(&iv, lblk_num, ci);
	/*
	 * fscrypt_should_use_inline_encryption() ensures we never get here if
	 * more than the first 8 bytes of the IV are nonzero.
	 */
	BUG_ON(memchr_inv(&iv.raw[8], 0, ci->ci_mode->ivsize - 8));
	return le64_to_cpu(iv.lblk_num);
}

/**
 * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption
 * @bio: a bio which will eventually be submitted to the file
 * @inode: the file's inode
 * @first_lblk: the first file logical block number in the I/O
 * @gfp_mask: memory allocation flags
 *
 * If the contents of the file should be encrypted (or decrypted) with inline
 * encryption, then assign the appropriate encryption context to the bio.
 *
 * Normally the bio should be newly allocated (i.e. no pages added yet), as
 * otherwise fscrypt_mergeable_bio() won't work as intended.
 *
 * The encryption context will be freed automatically when the bio is freed.
 *
 * Return: 0 on success, -errno on failure.  If __GFP_NOFAIL is specified, this
 *	   is guaranteed to succeed.
 */
int fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
			      u64 first_lblk, gfp_t gfp_mask)
{
	const struct fscrypt_info *ci = inode->i_crypt_info;
	u64 dun;

	if (!fscrypt_inode_uses_inline_crypto(inode))
		return 0;

	dun = fscrypt_generate_dun(ci, first_lblk);

	return bio_crypt_set_ctx(bio, ci->ci_inline_crypt_key,
				 ci->ci_mode->blk_crypto_mode,
				 dun, inode->i_blkbits, gfp_mask);
}
EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx);

/* Extract the inode and logical block number from a buffer_head. */
static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
				      const struct inode **inode_ret,
				      u64 *lblk_num_ret)
{
	struct page *page = bh->b_page;
	const struct address_space *mapping;
	const struct inode *inode;

	/*
	 * The ext4 journal (jbd2) can submit a buffer_head it directly created
	 * for a non-pagecache page.  fscrypt doesn't care about these.
	 */
	mapping = page_mapping(page);
	if (!mapping)
		return false;
	inode = mapping->host;

	*inode_ret = inode;
	*lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) +
			(bh_offset(bh) >> inode->i_blkbits);
	return true;
}
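The logical-block computation above converts a page index plus a byte offset within the page into a file block number. A sketch of the arithmetic (assuming 4 KiB pages, i.e. PAGE_SHIFT = 12, for illustration):

```python
PAGE_SHIFT = 12  # assumed 4 KiB pages, for illustration only

def lblk_num(page_index: int, bh_offset: int, blkbits: int) -> int:
    # There are 2**(PAGE_SHIFT - blkbits) blocks per page; the buffer_head's
    # byte offset within the page selects the block inside that page.
    return (page_index << (PAGE_SHIFT - blkbits)) + (bh_offset >> blkbits)
```

For example, with 1 KiB blocks (blkbits = 10), the third buffer in page 3 starts at offset 2048 and maps to logical block 3*4 + 2 = 14.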

/**
 * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline encryption
 * @bio: a bio which will eventually be submitted to the file
 * @first_bh: the first buffer_head for which I/O will be submitted
 * @gfp_mask: memory allocation flags
 *
 * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead
 * of an inode and block number directly.
 *
 * Return: 0 on success, -errno on failure
 */
int fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
				 const struct buffer_head *first_bh,
				 gfp_t gfp_mask)
{
	const struct inode *inode;
	u64 first_lblk;

	if (!bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk))
		return 0;

	return fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask);
}
EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh);

/**
 * fscrypt_mergeable_bio - test whether data can be added to a bio
 * @bio: the bio being built up
 * @inode: the inode for the next part of the I/O
 * @next_lblk: the next file logical block number in the I/O
 *
 * When building a bio which may contain data which should undergo inline
 * encryption (or decryption) via fscrypt, filesystems should call this function
 * to ensure that the resulting bio contains only logically contiguous data.
 * This will return false if the next part of the I/O cannot be merged with the
 * bio because either the encryption key would be different or the encryption
 * data unit numbers would be discontiguous.
 *
 * fscrypt_set_bio_crypt_ctx() must have already been called on the bio.
 *
 * Return: true iff the I/O is mergeable
 */
bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
			   u64 next_lblk)
{
	const struct bio_crypt_ctx *bc;
	const u8 *next_key;
	u64 next_dun;

	if (bio_has_crypt_ctx(bio) != fscrypt_inode_uses_inline_crypto(inode))
		return false;
	if (!bio_has_crypt_ctx(bio))
		return true;
	bc = bio->bi_crypt_context;
	next_key = inode->i_crypt_info->ci_inline_crypt_key;
	next_dun = fscrypt_generate_dun(inode->i_crypt_info, next_lblk);

	/*
	 * Comparing the key pointers is good enough, as all I/O for each key
	 * uses the same pointer.  I.e., there's currently no need to support
	 * merging requests where the keys are the same but the pointers differ.
	 */
	return next_key == bc->raw_key &&
		next_dun == bc->data_unit_num +
			    (bio_sectors(bio) >>
			     (bc->data_unit_size_bits - SECTOR_SHIFT));
}
EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio);
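The merge condition above, same key plus contiguous data unit numbers, reduces to simple arithmetic: convert the sectors already in the bio into data units and check that the next DUN continues the sequence. A sketch (SECTOR_SHIFT = 9 for 512-byte sectors, as in the block layer):

```python
SECTOR_SHIFT = 9  # 512-byte block-layer sectors

def dun_contiguous(bio_dun: int, bio_sectors: int,
                   data_unit_size_bits: int, next_dun: int) -> bool:
    # The bio currently spans bio_sectors sectors; shift converts that
    # sector count into a count of crypto data units.
    units_in_bio = bio_sectors >> (data_unit_size_bits - SECTOR_SHIFT)
    return next_dun == bio_dun + units_in_bio
```

For example, with 4096-byte data units a bio of 16 sectors covers 2 data units, so a bio that started at DUN 10 can only merge data whose DUN is 12.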

/**
 * fscrypt_mergeable_bio_bh - test whether data can be added to a bio
 * @bio: the bio being built up
 * @next_bh: the next buffer_head for which I/O will be submitted
 *
 * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of
 * an inode and block number directly.
 *
 * Return: true iff the I/O is mergeable
 */
bool fscrypt_mergeable_bio_bh(struct bio *bio,
			      const struct buffer_head *next_bh)
{
	const struct inode *inode;
	u64 next_lblk;

	if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk))
		return !bio_has_crypt_ctx(bio);

	return fscrypt_mergeable_bio(bio, inode, next_lblk);
}
EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
@@ -1,612 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * key management facility for FS encryption support.
 *
 * Copyright (C) 2015, Google, Inc.
 *
 * This contains encryption key functions.
 *
 * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
 */

#include <keys/user-type.h>
#include <linux/hashtable.h>
#include <linux/scatterlist.h>
#include <crypto/aes.h>
#include <crypto/algapi.h>
#include <crypto/sha.h>
#include <crypto/skcipher.h>
#include "fscrypt_private.h"

static struct crypto_shash *essiv_hash_tfm;

/* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */
static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
static DEFINE_SPINLOCK(fscrypt_master_keys_lock);

/*
 * Key derivation function.  This generates the derived key by encrypting the
 * master key with AES-128-ECB using the inode's nonce as the AES key.
 *
 * The master key must be at least as long as the derived key.  If the master
 * key is longer, then only the first 'derived_keysize' bytes are used.
 */
static int derive_key_aes(const u8 *master_key,
			  const struct fscrypt_context *ctx,
			  u8 *derived_key, unsigned int derived_keysize)
{
	int res = 0;
	struct skcipher_request *req = NULL;
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist src_sg, dst_sg;
	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);

	if (IS_ERR(tfm)) {
		res = PTR_ERR(tfm);
		tfm = NULL;
		goto out;
	}
	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
	req = skcipher_request_alloc(tfm, GFP_NOFS);
	if (!req) {
		res = -ENOMEM;
		goto out;
	}
	skcipher_request_set_callback(req,
			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
			crypto_req_done, &wait);
	res = crypto_skcipher_setkey(tfm, ctx->nonce, sizeof(ctx->nonce));
	if (res < 0)
		goto out;

	sg_init_one(&src_sg, master_key, derived_keysize);
	sg_init_one(&dst_sg, derived_key, derived_keysize);
	skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
				   NULL);
	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return res;
}

/*
 * Search the current task's subscribed keyrings for a "logon" key with
 * description prefix:descriptor, and if found acquire a read lock on it and
 * return a pointer to its validated payload in *payload_ret.
 */
static struct key *
find_and_lock_process_key(const char *prefix,
			  const u8 descriptor[FS_KEY_DESCRIPTOR_SIZE],
			  unsigned int min_keysize,
			  const struct fscrypt_key **payload_ret)
{
	char *description;
	struct key *key;
	const struct user_key_payload *ukp;
	const struct fscrypt_key *payload;

	description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
				FS_KEY_DESCRIPTOR_SIZE, descriptor);
	if (!description)
		return ERR_PTR(-ENOMEM);

	key = request_key(&key_type_logon, description, NULL);
	kfree(description);
	if (IS_ERR(key))
		return key;

	down_read(&key->sem);
	ukp = user_key_payload_locked(key);

	if (!ukp) /* was the key revoked before we acquired its semaphore? */
		goto invalid;

	payload = (const struct fscrypt_key *)ukp->data;

	if (ukp->datalen != sizeof(struct fscrypt_key) ||
	    payload->size < 1 || payload->size > FS_MAX_KEY_SIZE) {
		fscrypt_warn(NULL,
			     "key with description '%s' has invalid payload",
			     key->description);
		goto invalid;
	}

	if (payload->size < min_keysize) {
		fscrypt_warn(NULL,
			     "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
			     key->description, payload->size, min_keysize);
		goto invalid;
	}

	*payload_ret = payload;
	return key;

invalid:
	up_read(&key->sem);
	key_put(key);
	return ERR_PTR(-ENOKEY);
}

static struct fscrypt_mode available_modes[] = {
	[FS_ENCRYPTION_MODE_AES_256_XTS] = {
		.friendly_name = "AES-256-XTS",
		.cipher_str = "xts(aes)",
		.keysize = 64,
		.ivsize = 16,
	},
	[FS_ENCRYPTION_MODE_AES_256_CTS] = {
		.friendly_name = "AES-256-CTS-CBC",
		.cipher_str = "cts(cbc(aes))",
		.keysize = 32,
		.ivsize = 16,
	},
	[FS_ENCRYPTION_MODE_AES_128_CBC] = {
		.friendly_name = "AES-128-CBC",
		.cipher_str = "cbc(aes)",
		.keysize = 16,
		.ivsize = 16,
		.needs_essiv = true,
	},
	[FS_ENCRYPTION_MODE_AES_128_CTS] = {
		.friendly_name = "AES-128-CTS-CBC",
		.cipher_str = "cts(cbc(aes))",
		.keysize = 16,
		.ivsize = 16,
	},
	[FS_ENCRYPTION_MODE_ADIANTUM] = {
		.friendly_name = "Adiantum",
		.cipher_str = "adiantum(xchacha12,aes)",
		.keysize = 32,
		.ivsize = 32,
	},
};

static struct fscrypt_mode *
select_encryption_mode(const struct fscrypt_info *ci, const struct inode *inode)
{
	if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode)) {
		fscrypt_warn(inode->i_sb,
			     "inode %lu uses unsupported encryption modes (contents mode %d, filenames mode %d)",
			     inode->i_ino, ci->ci_data_mode,
			     ci->ci_filename_mode);
		return ERR_PTR(-EINVAL);
	}

	if (S_ISREG(inode->i_mode))
		return &available_modes[ci->ci_data_mode];

	if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
		return &available_modes[ci->ci_filename_mode];

	WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
		  inode->i_ino, (inode->i_mode & S_IFMT));
	return ERR_PTR(-EINVAL);
}

/* Find the master key, then derive the inode's actual encryption key */
static int find_and_derive_key(const struct inode *inode,
			       const struct fscrypt_context *ctx,
			       u8 *derived_key, const struct fscrypt_mode *mode)
{
	struct key *key;
	const struct fscrypt_key *payload;
	int err;

	key = find_and_lock_process_key(FS_KEY_DESC_PREFIX,
					ctx->master_key_descriptor,
					mode->keysize, &payload);
	if (key == ERR_PTR(-ENOKEY) && inode->i_sb->s_cop->key_prefix) {
		key = find_and_lock_process_key(inode->i_sb->s_cop->key_prefix,
						ctx->master_key_descriptor,
						mode->keysize, &payload);
	}
	if (IS_ERR(key))
		return PTR_ERR(key);

	if (ctx->flags & FS_POLICY_FLAG_DIRECT_KEY) {
		if (mode->ivsize < offsetofend(union fscrypt_iv, nonce)) {
			fscrypt_warn(inode->i_sb,
				     "direct key mode not allowed with %s",
				     mode->friendly_name);
			err = -EINVAL;
		} else if (ctx->contents_encryption_mode !=
			   ctx->filenames_encryption_mode) {
			fscrypt_warn(inode->i_sb,
				     "direct key mode not allowed with different contents and filenames modes");
			err = -EINVAL;
		} else {
			memcpy(derived_key, payload->raw, mode->keysize);
			err = 0;
		}
	} else {
		err = derive_key_aes(payload->raw, ctx, derived_key,
				     mode->keysize);
	}
	up_read(&key->sem);
	key_put(key);
	return err;
}

/* Allocate and key a symmetric cipher object for the given encryption mode */
static struct crypto_skcipher *
allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
			   const struct inode *inode)
{
	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
	if (IS_ERR(tfm)) {
		fscrypt_warn(inode->i_sb,
			     "error allocating '%s' transform for inode %lu: %ld",
			     mode->cipher_str, inode->i_ino, PTR_ERR(tfm));
		return tfm;
	}
	if (unlikely(!mode->logged_impl_name)) {
		/*
		 * fscrypt performance can vary greatly depending on which
		 * crypto algorithm implementation is used.  Help people debug
		 * performance problems by logging the ->cra_driver_name the
		 * first time a mode is used.  Note that multiple threads can
		 * race here, but it doesn't really matter.
		 */
		mode->logged_impl_name = true;
		pr_info("fscrypt: %s using implementation \"%s\"\n",
			mode->friendly_name,
			crypto_skcipher_alg(tfm)->base.cra_driver_name);
	}
	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
	err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
	if (err)
		goto err_free_tfm;

	return tfm;

err_free_tfm:
	crypto_free_skcipher(tfm);
	return ERR_PTR(err);
}

/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
struct fscrypt_master_key {
	struct hlist_node mk_node;
	refcount_t mk_refcount;
	const struct fscrypt_mode *mk_mode;
	struct crypto_skcipher *mk_ctfm;
	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
	u8 mk_raw[FS_MAX_KEY_SIZE];
};

static void free_master_key(struct fscrypt_master_key *mk)
{
	if (mk) {
		crypto_free_skcipher(mk->mk_ctfm);
		kzfree(mk);
	}
}

static void put_master_key(struct fscrypt_master_key *mk)
{
	if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
		return;
	hash_del(&mk->mk_node);
	spin_unlock(&fscrypt_master_keys_lock);

	free_master_key(mk);
}

/*
 * Find/insert the given master key into the fscrypt_master_keys table.  If
 * found, it is returned with elevated refcount, and 'to_insert' is freed if
 * non-NULL.  If not found, 'to_insert' is inserted and returned if it's
 * non-NULL; otherwise NULL is returned.
 */
static struct fscrypt_master_key *
find_or_insert_master_key(struct fscrypt_master_key *to_insert,
			  const u8 *raw_key, const struct fscrypt_mode *mode,
			  const struct fscrypt_info *ci)
{
	unsigned long hash_key;
	struct fscrypt_master_key *mk;

	/*
	 * Careful: to avoid potentially leaking secret key bytes via timing
	 * information, we must key the hash table by descriptor rather than by
	 * raw key, and use crypto_memneq() when comparing raw keys.
	 */

	BUILD_BUG_ON(sizeof(hash_key) > FS_KEY_DESCRIPTOR_SIZE);
	memcpy(&hash_key, ci->ci_master_key_descriptor, sizeof(hash_key));

	spin_lock(&fscrypt_master_keys_lock);
	hash_for_each_possible(fscrypt_master_keys, mk, mk_node, hash_key) {
		if (memcmp(ci->ci_master_key_descriptor, mk->mk_descriptor,
			   FS_KEY_DESCRIPTOR_SIZE) != 0)
			continue;
		if (mode != mk->mk_mode)
			continue;
		if (crypto_memneq(raw_key, mk->mk_raw, mode->keysize))
			continue;
		/* using existing tfm with same (descriptor, mode, raw_key) */
		refcount_inc(&mk->mk_refcount);
		spin_unlock(&fscrypt_master_keys_lock);
		free_master_key(to_insert);
		return mk;
	}
	if (to_insert)
		hash_add(fscrypt_master_keys, &to_insert->mk_node, hash_key);
	spin_unlock(&fscrypt_master_keys_lock);
	return to_insert;
}
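The timing caveat in the comment above can be illustrated in userspace. In this hedged sketch, `hmac.compare_digest` plays the role of `crypto_memneq`, and the table is keyed only by the non-secret descriptor, never the raw key:

```python
import hmac

def find_key(table: dict, descriptor: bytes, raw_key: bytes):
    # Lookup is keyed by the (public) descriptor; buckets may hold several
    # candidate keys, as in the hash table above.
    for entry_raw_key, value in table.get(descriptor, []):
        # Raw keys are compared in constant time so that how long the
        # comparison takes reveals nothing about where the bytes differ.
        if hmac.compare_digest(entry_raw_key, raw_key):
            return value
    return None
```

An ordinary `==` on the raw key (or hashing the raw key to pick a bucket) could leak information about the secret bytes through timing, which is exactly what the kernel code avoids.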
|
||||
|
||||
/* Prepare to encrypt directly using the master key in the given mode */
|
||||
static struct fscrypt_master_key *
|
||||
fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
|
||||
const u8 *raw_key, const struct inode *inode)
|
||||
{
|
||||
struct fscrypt_master_key *mk;
|
||||
int err;
|
||||
|
||||
/* Is there already a tfm for this key? */
|
||||
mk = find_or_insert_master_key(NULL, raw_key, mode, ci);
|
||||
if (mk)
|
||||
return mk;
|
||||
|
||||
/* Nope, allocate one. */
|
||||
mk = kzalloc(sizeof(*mk), GFP_NOFS);
|
||||
if (!mk)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
refcount_set(&mk->mk_refcount, 1);
|
||||
mk->mk_mode = mode;
|
||||
mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
|
||||
if (IS_ERR(mk->mk_ctfm)) {
|
||||
err = PTR_ERR(mk->mk_ctfm);
|
||||
mk->mk_ctfm = NULL;
|
||||
goto err_free_mk;
|
||||
}
|
||||
memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
|
||||
FS_KEY_DESCRIPTOR_SIZE);
|
||||
memcpy(mk->mk_raw, raw_key, mode->keysize);
|
||||
|
||||
return find_or_insert_master_key(mk, raw_key, mode, ci);
|
||||
|
||||
err_free_mk:
|
||||
free_master_key(mk);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
{
	struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);

	/* init hash transform on demand */
	if (unlikely(!tfm)) {
		struct crypto_shash *prev_tfm;

		tfm = crypto_alloc_shash("sha256", 0, 0);
		if (IS_ERR(tfm)) {
			fscrypt_warn(NULL,
				     "error allocating SHA-256 transform: %ld",
				     PTR_ERR(tfm));
			return PTR_ERR(tfm);
		}
		prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
		if (prev_tfm) {
			crypto_free_shash(tfm);
			tfm = prev_tfm;
		}
	}

	{
		SHASH_DESC_ON_STACK(desc, tfm);
		desc->tfm = tfm;
		desc->flags = 0;

		return crypto_shash_digest(desc, key, keysize, salt);
	}
}

static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
				int keysize)
{
	int err;
	struct crypto_cipher *essiv_tfm;
	u8 salt[SHA256_DIGEST_SIZE];

	essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
	if (IS_ERR(essiv_tfm))
		return PTR_ERR(essiv_tfm);

	ci->ci_essiv_tfm = essiv_tfm;

	err = derive_essiv_salt(raw_key, keysize, salt);
	if (err)
		goto out;

	/*
	 * Using SHA256 to derive the salt/key will result in AES-256 being
	 * used for IV generation. File contents encryption will still use the
	 * configured keysize (AES-128) nevertheless.
	 */
	err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
	if (err)
		goto out;

out:
	memzero_explicit(salt, sizeof(salt));
	return err;
}

void __exit fscrypt_essiv_cleanup(void)
{
	crypto_free_shash(essiv_hash_tfm);
}

/*
 * Given the encryption mode and key (normally the derived key, but for
 * FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's
 * symmetric cipher transform object(s).
 */
static int setup_crypto_transform(struct fscrypt_info *ci,
				  struct fscrypt_mode *mode,
				  const u8 *raw_key, const struct inode *inode)
{
	struct fscrypt_master_key *mk;
	struct crypto_skcipher *ctfm;
	int err;

	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
		if (IS_ERR(mk))
			return PTR_ERR(mk);
		ctfm = mk->mk_ctfm;
	} else {
		mk = NULL;
		ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
		if (IS_ERR(ctfm))
			return PTR_ERR(ctfm);
	}
	ci->ci_master_key = mk;
	ci->ci_ctfm = ctfm;

	if (mode->needs_essiv) {
		/* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */
		WARN_ON(mode->ivsize != AES_BLOCK_SIZE);
		WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY);

		err = init_essiv_generator(ci, raw_key, mode->keysize);
		if (err) {
			fscrypt_warn(inode->i_sb,
				     "error initializing ESSIV generator for inode %lu: %d",
				     inode->i_ino, err);
			return err;
		}
	}
	return 0;
}

static void put_crypt_info(struct fscrypt_info *ci)
{
	if (!ci)
		return;

	if (ci->ci_master_key) {
		put_master_key(ci->ci_master_key);
	} else {
		crypto_free_skcipher(ci->ci_ctfm);
		crypto_free_cipher(ci->ci_essiv_tfm);
	}
	kmem_cache_free(fscrypt_info_cachep, ci);
}

int fscrypt_get_encryption_info(struct inode *inode)
{
	struct fscrypt_info *crypt_info;
	struct fscrypt_context ctx;
	struct fscrypt_mode *mode;
	u8 *raw_key = NULL;
	int res;

	if (fscrypt_has_encryption_key(inode))
		return 0;

	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
	if (res)
		return res;

	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
	if (res < 0) {
		if (!fscrypt_dummy_context_enabled(inode) ||
		    IS_ENCRYPTED(inode))
			return res;
		/* Fake up a context for an unencrypted directory */
		memset(&ctx, 0, sizeof(ctx));
		ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
		ctx.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
		ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
		memset(ctx.master_key_descriptor, 0x42, FS_KEY_DESCRIPTOR_SIZE);
	} else if (res != sizeof(ctx)) {
		return -EINVAL;
	}

	if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
		return -EINVAL;

	if (ctx.flags & ~FS_POLICY_FLAGS_VALID)
		return -EINVAL;

	crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
	if (!crypt_info)
		return -ENOMEM;

	crypt_info->ci_flags = ctx.flags;
	crypt_info->ci_data_mode = ctx.contents_encryption_mode;
	crypt_info->ci_filename_mode = ctx.filenames_encryption_mode;
	memcpy(crypt_info->ci_master_key_descriptor, ctx.master_key_descriptor,
	       FS_KEY_DESCRIPTOR_SIZE);
	memcpy(crypt_info->ci_nonce, ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);

	mode = select_encryption_mode(crypt_info, inode);
	if (IS_ERR(mode)) {
		res = PTR_ERR(mode);
		goto out;
	}
	WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
	crypt_info->ci_mode = mode;

	/*
	 * This cannot be a stack buffer because it may be passed to the
	 * scatterlist crypto API as part of key derivation.
	 */
	res = -ENOMEM;
	raw_key = kmalloc(mode->keysize, GFP_NOFS);
	if (!raw_key)
		goto out;

	res = find_and_derive_key(inode, &ctx, raw_key, mode);
	if (res)
		goto out;

	res = setup_crypto_transform(crypt_info, mode, raw_key, inode);
	if (res)
		goto out;

	if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL)
		crypt_info = NULL;
out:
	if (res == -ENOKEY)
		res = 0;
	put_crypt_info(crypt_info);
	kzfree(raw_key);
	return res;
}
EXPORT_SYMBOL(fscrypt_get_encryption_info);

/**
 * fscrypt_put_encryption_info - free most of an inode's fscrypt data
 *
 * Free the inode's fscrypt_info.  Filesystems must call this when the inode is
 * being evicted.  An RCU grace period need not have elapsed yet.
 */
void fscrypt_put_encryption_info(struct inode *inode)
{
	put_crypt_info(inode->i_crypt_info);
	inode->i_crypt_info = NULL;
}
EXPORT_SYMBOL(fscrypt_put_encryption_info);

/**
 * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
 *
 * Free the inode's cached decrypted symlink target, if any.  Filesystems must
 * call this after an RCU grace period, just before they free the inode.
 */
void fscrypt_free_inode(struct inode *inode)
{
	if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
		kfree(inode->i_link);
		inode->i_link = NULL;
	}
}
EXPORT_SYMBOL(fscrypt_free_inode);
1010	fs/crypto/keyring.c	Normal file
File diff suppressed because it is too large
537	fs/crypto/keysetup.c	Normal file
@@ -0,0 +1,537 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Key setup facility for FS encryption support.
 *
 * Copyright (C) 2015, Google, Inc.
 *
 * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
 * Heavily modified since then.
 */

#include <crypto/skcipher.h>
#include <linux/key.h>

#include "fscrypt_private.h"

struct fscrypt_mode fscrypt_modes[] = {
	[FSCRYPT_MODE_AES_256_XTS] = {
		.friendly_name = "AES-256-XTS",
		.cipher_str = "xts(aes)",
		.keysize = 64,
		.ivsize = 16,
		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
	},
	[FSCRYPT_MODE_AES_256_CTS] = {
		.friendly_name = "AES-256-CTS-CBC",
		.cipher_str = "cts(cbc(aes))",
		.keysize = 32,
		.ivsize = 16,
	},
	[FSCRYPT_MODE_AES_128_CBC] = {
		.friendly_name = "AES-128-CBC-ESSIV",
		.cipher_str = "essiv(cbc(aes),sha256)",
		.keysize = 16,
		.ivsize = 16,
	},
	[FSCRYPT_MODE_AES_128_CTS] = {
		.friendly_name = "AES-128-CTS-CBC",
		.cipher_str = "cts(cbc(aes))",
		.keysize = 16,
		.ivsize = 16,
	},
	[FSCRYPT_MODE_ADIANTUM] = {
		.friendly_name = "Adiantum",
		.cipher_str = "adiantum(xchacha12,aes)",
		.keysize = 32,
		.ivsize = 32,
	},
};

static struct fscrypt_mode *
select_encryption_mode(const union fscrypt_policy *policy,
		       const struct inode *inode)
{
	if (S_ISREG(inode->i_mode))
		return &fscrypt_modes[fscrypt_policy_contents_mode(policy)];

	if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
		return &fscrypt_modes[fscrypt_policy_fnames_mode(policy)];

	WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
		  inode->i_ino, (inode->i_mode & S_IFMT));
	return ERR_PTR(-EINVAL);
}

/* Create a symmetric cipher object for the given encryption mode and key */
struct crypto_skcipher *fscrypt_allocate_skcipher(struct fscrypt_mode *mode,
						  const u8 *raw_key,
						  const struct inode *inode)
{
	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT) {
			fscrypt_warn(inode,
				     "Missing crypto API support for %s (API name: \"%s\")",
				     mode->friendly_name, mode->cipher_str);
			return ERR_PTR(-ENOPKG);
		}
		fscrypt_err(inode, "Error allocating '%s' transform: %ld",
			    mode->cipher_str, PTR_ERR(tfm));
		return tfm;
	}
	if (unlikely(!mode->logged_impl_name)) {
		/*
		 * fscrypt performance can vary greatly depending on which
		 * crypto algorithm implementation is used.  Help people debug
		 * performance problems by logging the ->cra_driver_name the
		 * first time a mode is used.  Note that multiple threads can
		 * race here, but it doesn't really matter.
		 */
		mode->logged_impl_name = true;
		pr_info("fscrypt: %s using implementation \"%s\"\n",
			mode->friendly_name,
			crypto_skcipher_alg(tfm)->base.cra_driver_name);
	}
	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
	err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
	if (err)
		goto err_free_tfm;

	return tfm;

err_free_tfm:
	crypto_free_skcipher(tfm);
	return ERR_PTR(err);
}

/* Given the per-file key, set up the file's crypto transform object */
int fscrypt_set_derived_key(struct fscrypt_info *ci, const u8 *derived_key)
{
	struct crypto_skcipher *tfm;

	if (fscrypt_should_use_inline_encryption(ci))
		return fscrypt_set_inline_crypt_key(ci, derived_key);

	tfm = fscrypt_allocate_skcipher(ci->ci_mode, derived_key, ci->ci_inode);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ci->ci_ctfm = tfm;
	ci->ci_owns_key = true;
	return 0;
}

static int setup_per_mode_key(struct fscrypt_info *ci,
			      struct fscrypt_master_key *mk,
			      struct crypto_skcipher **tfms,
			      u8 hkdf_context, bool include_fs_uuid)
{
	const struct inode *inode = ci->ci_inode;
	const struct super_block *sb = inode->i_sb;
	struct fscrypt_mode *mode = ci->ci_mode;
	const u8 mode_num = mode - fscrypt_modes;
	struct crypto_skcipher *tfm, *prev_tfm;
	u8 mode_key[FSCRYPT_MAX_KEY_SIZE];
	u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
	unsigned int hkdf_infolen = 0;
	int err;

	if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
		return -EINVAL;

	/* pairs with cmpxchg() below */
	tfm = READ_ONCE(tfms[mode_num]);
	if (likely(tfm != NULL))
		goto done;

	BUILD_BUG_ON(sizeof(mode_num) != 1);
	BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
	BUILD_BUG_ON(sizeof(hkdf_info) != 17);
	hkdf_info[hkdf_infolen++] = mode_num;
	if (include_fs_uuid) {
		memcpy(&hkdf_info[hkdf_infolen], &sb->s_uuid,
		       sizeof(sb->s_uuid));
		hkdf_infolen += sizeof(sb->s_uuid);
	}
	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
				  hkdf_context, hkdf_info, hkdf_infolen,
				  mode_key, mode->keysize);
	if (err)
		return err;
	tfm = fscrypt_allocate_skcipher(mode, mode_key, inode);
	memzero_explicit(mode_key, mode->keysize);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* pairs with READ_ONCE() above */
	prev_tfm = cmpxchg(&tfms[mode_num], NULL, tfm);
	if (prev_tfm != NULL) {
		crypto_free_skcipher(tfm);
		tfm = prev_tfm;
	}
done:
	ci->ci_ctfm = tfm;
	return 0;
}

static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
				     struct fscrypt_master_key *mk)
{
	u8 derived_key[FSCRYPT_MAX_KEY_SIZE];
	int err;

	if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
		/*
		 * DIRECT_KEY: instead of deriving per-file keys, the per-file
		 * nonce will be included in all the IVs.  But unlike v1
		 * policies, for v2 policies in this case we don't encrypt with
		 * the master key directly but rather derive a per-mode key.
		 * This ensures that the master key is consistently used only
		 * for HKDF, avoiding key reuse issues.
		 */
		if (!fscrypt_mode_supports_direct_key(ci->ci_mode)) {
			fscrypt_warn(ci->ci_inode,
				     "Direct key flag not allowed with %s",
				     ci->ci_mode->friendly_name);
			return -EINVAL;
		}
		return setup_per_mode_key(ci, mk, mk->mk_direct_tfms,
					  HKDF_CONTEXT_DIRECT_KEY, false);
	} else if (ci->ci_policy.v2.flags &
		   FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
		/*
		 * IV_INO_LBLK_64: encryption keys are derived from (master_key,
		 * mode_num, filesystem_uuid), and inode number is included in
		 * the IVs.  This format is optimized for use with inline
		 * encryption hardware compliant with the UFS or eMMC standards.
		 */
		if (fscrypt_should_use_inline_encryption(ci))
			return fscrypt_setup_per_mode_inline_crypt_key(ci, mk);
		return setup_per_mode_key(ci, mk, mk->mk_iv_ino_lblk_64_tfms,
					  HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
					  true);
	}

	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
				  HKDF_CONTEXT_PER_FILE_KEY,
				  ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE,
				  derived_key, ci->ci_mode->keysize);
	if (err)
		return err;

	err = fscrypt_set_derived_key(ci, derived_key);
	memzero_explicit(derived_key, ci->ci_mode->keysize);
	return err;
}

/*
 * Find the master key, then set up the inode's actual encryption key.
 *
 * If the master key is found in the filesystem-level keyring, then the
 * corresponding 'struct key' is returned in *master_key_ret with
 * ->mk_secret_sem read-locked.  This is needed to ensure that only one task
 * links the fscrypt_info into ->mk_decrypted_inodes (as multiple tasks may race
 * to create an fscrypt_info for the same inode), and to synchronize the master
 * key being removed with a new inode starting to use it.
 */
static int setup_file_encryption_key(struct fscrypt_info *ci,
				     struct key **master_key_ret)
{
	struct key *key;
	struct fscrypt_master_key *mk = NULL;
	struct fscrypt_key_specifier mk_spec;
	int err;

	switch (ci->ci_policy.version) {
	case FSCRYPT_POLICY_V1:
		mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR;
		memcpy(mk_spec.u.descriptor,
		       ci->ci_policy.v1.master_key_descriptor,
		       FSCRYPT_KEY_DESCRIPTOR_SIZE);
		break;
	case FSCRYPT_POLICY_V2:
		mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
		memcpy(mk_spec.u.identifier,
		       ci->ci_policy.v2.master_key_identifier,
		       FSCRYPT_KEY_IDENTIFIER_SIZE);
		break;
	default:
		WARN_ON(1);
		return -EINVAL;
	}

	key = fscrypt_find_master_key(ci->ci_inode->i_sb, &mk_spec);
	if (IS_ERR(key)) {
		if (key != ERR_PTR(-ENOKEY) ||
		    ci->ci_policy.version != FSCRYPT_POLICY_V1)
			return PTR_ERR(key);

		/*
		 * As a legacy fallback for v1 policies, search for the key in
		 * the current task's subscribed keyrings too.  Don't move this
		 * to before the search of ->s_master_keys, since users
		 * shouldn't be able to override filesystem-level keys.
		 */
		return fscrypt_setup_v1_file_key_via_subscribed_keyrings(ci);
	}

	mk = key->payload.data[0];
	down_read(&mk->mk_secret_sem);

	/* Has the secret been removed (via FS_IOC_REMOVE_ENCRYPTION_KEY)? */
	if (!is_master_key_secret_present(&mk->mk_secret)) {
		err = -ENOKEY;
		goto out_release_key;
	}

	/*
	 * Require that the master key be at least as long as the derived key.
	 * Otherwise, the derived key cannot possibly contain as much entropy as
	 * that required by the encryption mode it will be used for.  For v1
	 * policies it's also required for the KDF to work at all.
	 */
	if (mk->mk_secret.size < ci->ci_mode->keysize) {
		fscrypt_warn(NULL,
			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
			     master_key_spec_type(&mk_spec),
			     master_key_spec_len(&mk_spec), (u8 *)&mk_spec.u,
			     mk->mk_secret.size, ci->ci_mode->keysize);
		err = -ENOKEY;
		goto out_release_key;
	}

	switch (ci->ci_policy.version) {
	case FSCRYPT_POLICY_V1:
		err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.raw);
		break;
	case FSCRYPT_POLICY_V2:
		err = fscrypt_setup_v2_file_key(ci, mk);
		break;
	default:
		WARN_ON(1);
		err = -EINVAL;
		break;
	}
	if (err)
		goto out_release_key;

	*master_key_ret = key;
	return 0;

out_release_key:
	up_read(&mk->mk_secret_sem);
	key_put(key);
	return err;
}

static void put_crypt_info(struct fscrypt_info *ci)
{
	struct key *key;

	if (!ci)
		return;

	if (ci->ci_direct_key)
		fscrypt_put_direct_key(ci->ci_direct_key);
	else if (ci->ci_owns_key) {
		crypto_free_skcipher(ci->ci_ctfm);
		fscrypt_free_inline_crypt_key(ci);
	}

	key = ci->ci_master_key;
	if (key) {
		struct fscrypt_master_key *mk = key->payload.data[0];

		/*
		 * Remove this inode from the list of inodes that were unlocked
		 * with the master key.
		 *
		 * In addition, if we're removing the last inode from a key that
		 * already had its secret removed, invalidate the key so that it
		 * gets removed from ->s_master_keys.
		 */
		spin_lock(&mk->mk_decrypted_inodes_lock);
		list_del(&ci->ci_master_key_link);
		spin_unlock(&mk->mk_decrypted_inodes_lock);
		if (refcount_dec_and_test(&mk->mk_refcount))
			key_invalidate(key);
		key_put(key);
	}
	memzero_explicit(ci, sizeof(*ci));
	kmem_cache_free(fscrypt_info_cachep, ci);
}

int fscrypt_get_encryption_info(struct inode *inode)
{
	struct fscrypt_info *crypt_info;
	union fscrypt_context ctx;
	struct fscrypt_mode *mode;
	struct key *master_key = NULL;
	int res;

	if (fscrypt_has_encryption_key(inode))
		return 0;

	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
	if (res)
		return res;

	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
	if (res < 0) {
		if (!fscrypt_dummy_context_enabled(inode) ||
		    IS_ENCRYPTED(inode)) {
			fscrypt_warn(inode,
				     "Error %d getting encryption context",
				     res);
			return res;
		}
		/* Fake up a context for an unencrypted directory */
		memset(&ctx, 0, sizeof(ctx));
		ctx.version = FSCRYPT_CONTEXT_V1;
		ctx.v1.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
		ctx.v1.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
		memset(ctx.v1.master_key_descriptor, 0x42,
		       FSCRYPT_KEY_DESCRIPTOR_SIZE);
		res = sizeof(ctx.v1);
	}

	crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
	if (!crypt_info)
		return -ENOMEM;

	crypt_info->ci_inode = inode;

	res = fscrypt_policy_from_context(&crypt_info->ci_policy, &ctx, res);
	if (res) {
		fscrypt_warn(inode,
			     "Unrecognized or corrupt encryption context");
		goto out;
	}

	switch (ctx.version) {
	case FSCRYPT_CONTEXT_V1:
		memcpy(crypt_info->ci_nonce, ctx.v1.nonce,
		       FS_KEY_DERIVATION_NONCE_SIZE);
		break;
	case FSCRYPT_CONTEXT_V2:
		memcpy(crypt_info->ci_nonce, ctx.v2.nonce,
		       FS_KEY_DERIVATION_NONCE_SIZE);
		break;
	default:
		WARN_ON(1);
		res = -EINVAL;
		goto out;
	}

	if (!fscrypt_supported_policy(&crypt_info->ci_policy, inode)) {
		res = -EINVAL;
		goto out;
	}

	mode = select_encryption_mode(&crypt_info->ci_policy, inode);
	if (IS_ERR(mode)) {
		res = PTR_ERR(mode);
		goto out;
	}
	WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
	crypt_info->ci_mode = mode;

	res = setup_file_encryption_key(crypt_info, &master_key);
	if (res)
		goto out;

	if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL) {
		if (master_key) {
			struct fscrypt_master_key *mk =
				master_key->payload.data[0];

			refcount_inc(&mk->mk_refcount);
			crypt_info->ci_master_key = key_get(master_key);
			spin_lock(&mk->mk_decrypted_inodes_lock);
			list_add(&crypt_info->ci_master_key_link,
				 &mk->mk_decrypted_inodes);
			spin_unlock(&mk->mk_decrypted_inodes_lock);
		}
		crypt_info = NULL;
	}
	res = 0;
out:
	if (master_key) {
		struct fscrypt_master_key *mk = master_key->payload.data[0];

		up_read(&mk->mk_secret_sem);
		key_put(master_key);
	}
	if (res == -ENOKEY)
		res = 0;
	put_crypt_info(crypt_info);
	return res;
}
EXPORT_SYMBOL(fscrypt_get_encryption_info);

/**
 * fscrypt_put_encryption_info - free most of an inode's fscrypt data
 *
 * Free the inode's fscrypt_info.  Filesystems must call this when the inode is
 * being evicted.  An RCU grace period need not have elapsed yet.
 */
void fscrypt_put_encryption_info(struct inode *inode)
{
	put_crypt_info(inode->i_crypt_info);
	inode->i_crypt_info = NULL;
}
EXPORT_SYMBOL(fscrypt_put_encryption_info);

/**
 * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
 *
 * Free the inode's cached decrypted symlink target, if any.  Filesystems must
 * call this after an RCU grace period, just before they free the inode.
 */
void fscrypt_free_inode(struct inode *inode)
{
	if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
		kfree(inode->i_link);
		inode->i_link = NULL;
	}
}
EXPORT_SYMBOL(fscrypt_free_inode);

/**
 * fscrypt_drop_inode - check whether the inode's master key has been removed
 *
 * Filesystems supporting fscrypt must call this from their ->drop_inode()
 * method so that encrypted inodes are evicted as soon as they're no longer in
 * use and their master key has been removed.
 *
 * Return: 1 if fscrypt wants the inode to be evicted now, otherwise 0
 */
int fscrypt_drop_inode(struct inode *inode)
{
	const struct fscrypt_info *ci = READ_ONCE(inode->i_crypt_info);
	const struct fscrypt_master_key *mk;

	/*
	 * If ci is NULL, then the inode doesn't have an encryption key set up
	 * so it's irrelevant.  If ci_master_key is NULL, then the master key
	 * was provided via the legacy mechanism of the process-subscribed
	 * keyrings, so we don't know whether it's been removed or not.
	 */
	if (!ci || !ci->ci_master_key)
		return 0;
	mk = ci->ci_master_key->payload.data[0];

	/*
	 * Note: since we aren't holding ->mk_secret_sem, the result here can
	 * immediately become outdated.  But there's no correctness problem with
	 * unnecessarily evicting.  Nor is there a correctness problem with not
	 * evicting while iput() is racing with the key being removed, since
	 * then the thread removing the key will either evict the inode itself
	 * or will correctly detect that it wasn't evicted due to the race.
	 */
	return !is_master_key_secret_present(&mk->mk_secret);
}
EXPORT_SYMBOL_GPL(fscrypt_drop_inode);
336	fs/crypto/keysetup_v1.c	Normal file
@@ -0,0 +1,336 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Key setup for v1 encryption policies
 *
 * Copyright 2015, 2019 Google LLC
 */

/*
 * This file implements compatibility functions for the original encryption
 * policy version ("v1"), including:
 *
 * - Deriving per-file keys using the AES-128-ECB based KDF
 *   (rather than the new method of using HKDF-SHA512)
 *
 * - Retrieving fscrypt master keys from process-subscribed keyrings
 *   (rather than the new method of using a filesystem-level keyring)
 *
 * - Handling policies with the DIRECT_KEY flag set using a master key table
 *   (rather than the new method of implementing DIRECT_KEY with per-mode keys
 *   managed alongside the master keys in the filesystem-level keyring)
 */

#include <crypto/algapi.h>
#include <crypto/skcipher.h>
#include <keys/user-type.h>
#include <linux/hashtable.h>
#include <linux/scatterlist.h>

#include "fscrypt_private.h"

/* Table of keys referenced by DIRECT_KEY policies */
static DEFINE_HASHTABLE(fscrypt_direct_keys, 6); /* 6 bits = 64 buckets */
static DEFINE_SPINLOCK(fscrypt_direct_keys_lock);

/*
 * v1 key derivation function.  This generates the derived key by encrypting the
 * master key with AES-128-ECB using the nonce as the AES key.  This provides a
 * unique derived key with sufficient entropy for each inode.  However, it's
 * nonstandard, non-extensible, doesn't evenly distribute the entropy from the
 * master key, and is trivially reversible: an attacker who compromises a
 * derived key can "decrypt" it to get back to the master key, then derive any
 * other key.  For all new code, use HKDF instead.
 *
 * The master key must be at least as long as the derived key.  If the master
 * key is longer, then only the first 'derived_keysize' bytes are used.
 */
static int derive_key_aes(const u8 *master_key,
			  const u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE],
			  u8 *derived_key, unsigned int derived_keysize)
{
	int res = 0;
	struct skcipher_request *req = NULL;
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist src_sg, dst_sg;
	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);

	if (IS_ERR(tfm)) {
		res = PTR_ERR(tfm);
		tfm = NULL;
		goto out;
	}
	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
	req = skcipher_request_alloc(tfm, GFP_NOFS);
	if (!req) {
		res = -ENOMEM;
		goto out;
	}
	skcipher_request_set_callback(req,
			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
			crypto_req_done, &wait);
	res = crypto_skcipher_setkey(tfm, nonce, FS_KEY_DERIVATION_NONCE_SIZE);
	if (res < 0)
		goto out;

	sg_init_one(&src_sg, master_key, derived_keysize);
	sg_init_one(&dst_sg, derived_key, derived_keysize);
	skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
				   NULL);
	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return res;
}

/*
 * Search the current task's subscribed keyrings for a "logon" key with
 * description prefix:descriptor, and if found acquire a read lock on it and
 * return a pointer to its validated payload in *payload_ret.
 */
static struct key *
find_and_lock_process_key(const char *prefix,
			  const u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE],
			  unsigned int min_keysize,
			  const struct fscrypt_key **payload_ret)
{
	char *description;
	struct key *key;
	const struct user_key_payload *ukp;
	const struct fscrypt_key *payload;

	description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
				FSCRYPT_KEY_DESCRIPTOR_SIZE, descriptor);
	if (!description)
		return ERR_PTR(-ENOMEM);

	key = request_key(&key_type_logon, description, NULL);
	kfree(description);
	if (IS_ERR(key))
		return key;

	down_read(&key->sem);
	ukp = user_key_payload_locked(key);

	if (!ukp) /* was the key revoked before we acquired its semaphore? */
		goto invalid;

	payload = (const struct fscrypt_key *)ukp->data;

	if (ukp->datalen != sizeof(struct fscrypt_key) ||
	    payload->size < 1 || payload->size > FSCRYPT_MAX_KEY_SIZE) {
		fscrypt_warn(NULL,
			     "key with description '%s' has invalid payload",
			     key->description);
		goto invalid;
	}

	if (payload->size < min_keysize) {
		fscrypt_warn(NULL,
			     "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
			     key->description, payload->size, min_keysize);
		goto invalid;
	}

	*payload_ret = payload;
	return key;

invalid:
	up_read(&key->sem);
	key_put(key);
	return ERR_PTR(-ENOKEY);
}

/* Master key referenced by DIRECT_KEY policy */
struct fscrypt_direct_key {
	struct hlist_node	dk_node;
	refcount_t		dk_refcount;
	const struct fscrypt_mode *dk_mode;
	struct crypto_skcipher	*dk_ctfm;
	u8			dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
	u8			dk_raw[FSCRYPT_MAX_KEY_SIZE];
};

static void free_direct_key(struct fscrypt_direct_key *dk)
{
	if (dk) {
		crypto_free_skcipher(dk->dk_ctfm);
		kzfree(dk);
	}
}

void fscrypt_put_direct_key(struct fscrypt_direct_key *dk)
{
	if (!refcount_dec_and_lock(&dk->dk_refcount, &fscrypt_direct_keys_lock))
		return;
	hash_del(&dk->dk_node);
	spin_unlock(&fscrypt_direct_keys_lock);

	free_direct_key(dk);
}

/*
 * Find/insert the given key into the fscrypt_direct_keys table.  If found, it
 * is returned with elevated refcount, and 'to_insert' is freed if non-NULL.  If
 * not found, 'to_insert' is inserted and returned if it's non-NULL; otherwise
 * NULL is returned.
 */
static struct fscrypt_direct_key *
find_or_insert_direct_key(struct fscrypt_direct_key *to_insert,
			  const u8 *raw_key, const struct fscrypt_info *ci)
{
	unsigned long hash_key;
	struct fscrypt_direct_key *dk;

	/*
	 * Careful: to avoid potentially leaking secret key bytes via timing
	 * information, we must key the hash table by descriptor rather than by
	 * raw key, and use crypto_memneq() when comparing raw keys.
	 */

	BUILD_BUG_ON(sizeof(hash_key) > FSCRYPT_KEY_DESCRIPTOR_SIZE);
	memcpy(&hash_key, ci->ci_policy.v1.master_key_descriptor,
	       sizeof(hash_key));

	spin_lock(&fscrypt_direct_keys_lock);
	hash_for_each_possible(fscrypt_direct_keys, dk, dk_node, hash_key) {
		if (memcmp(ci->ci_policy.v1.master_key_descriptor,
			   dk->dk_descriptor, FSCRYPT_KEY_DESCRIPTOR_SIZE) != 0)
			continue;
		if (ci->ci_mode != dk->dk_mode)
			continue;
		if (crypto_memneq(raw_key, dk->dk_raw, ci->ci_mode->keysize))
			continue;
		/* using existing tfm with same (descriptor, mode, raw_key) */
		refcount_inc(&dk->dk_refcount);
		spin_unlock(&fscrypt_direct_keys_lock);
		free_direct_key(to_insert);
		return dk;
	}
	if (to_insert)
		hash_add(fscrypt_direct_keys, &to_insert->dk_node, hash_key);
	spin_unlock(&fscrypt_direct_keys_lock);
	return to_insert;
}
|
||||
|
||||
/* Prepare to encrypt directly using the master key in the given mode */
|
||||
static struct fscrypt_direct_key *
|
||||
fscrypt_get_direct_key(const struct fscrypt_info *ci, const u8 *raw_key)
|
||||
{
|
||||
struct fscrypt_direct_key *dk;
|
||||
int err;
|
||||
|
||||
/* Is there already a tfm for this key? */
|
||||
dk = find_or_insert_direct_key(NULL, raw_key, ci);
|
||||
if (dk)
|
||||
return dk;
|
||||
|
||||
/* Nope, allocate one. */
|
||||
dk = kzalloc(sizeof(*dk), GFP_NOFS);
|
||||
if (!dk)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
refcount_set(&dk->dk_refcount, 1);
|
||||
dk->dk_mode = ci->ci_mode;
|
||||
dk->dk_ctfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key,
|
||||
ci->ci_inode);
|
||||
if (IS_ERR(dk->dk_ctfm)) {
|
||||
err = PTR_ERR(dk->dk_ctfm);
|
||||
dk->dk_ctfm = NULL;
|
||||
goto err_free_dk;
|
||||
}
|
||||
memcpy(dk->dk_descriptor, ci->ci_policy.v1.master_key_descriptor,
|
||||
FSCRYPT_KEY_DESCRIPTOR_SIZE);
|
||||
memcpy(dk->dk_raw, raw_key, ci->ci_mode->keysize);
|
||||
|
||||
return find_or_insert_direct_key(dk, raw_key, ci);
|
||||
|
||||
err_free_dk:
|
||||
free_direct_key(dk);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
/* v1 policy, DIRECT_KEY: use the master key directly */
|
||||
static int setup_v1_file_key_direct(struct fscrypt_info *ci,
|
||||
const u8 *raw_master_key)
|
||||
{
|
||||
const struct fscrypt_mode *mode = ci->ci_mode;
|
||||
struct fscrypt_direct_key *dk;
|
||||
|
||||
if (!fscrypt_mode_supports_direct_key(mode)) {
|
||||
fscrypt_warn(ci->ci_inode,
|
||||
"Direct key mode not allowed with %s",
|
||||
mode->friendly_name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (ci->ci_policy.v1.contents_encryption_mode !=
|
||||
ci->ci_policy.v1.filenames_encryption_mode) {
|
||||
fscrypt_warn(ci->ci_inode,
|
||||
"Direct key mode not allowed with different contents and filenames modes");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
dk = fscrypt_get_direct_key(ci, raw_master_key);
|
||||
if (IS_ERR(dk))
|
||||
return PTR_ERR(dk);
|
||||
ci->ci_direct_key = dk;
|
||||
ci->ci_ctfm = dk->dk_ctfm;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* v1 policy, !DIRECT_KEY: derive the file's encryption key */
|
||||
static int setup_v1_file_key_derived(struct fscrypt_info *ci,
|
||||
const u8 *raw_master_key)
|
||||
{
|
||||
u8 *derived_key;
|
||||
int err;
|
||||
|
||||
/*
|
||||
* This cannot be a stack buffer because it will be passed to the
|
||||
* scatterlist crypto API during derive_key_aes().
|
||||
*/
|
||||
derived_key = kmalloc(ci->ci_mode->keysize, GFP_NOFS);
|
||||
if (!derived_key)
|
||||
return -ENOMEM;
|
||||
|
||||
err = derive_key_aes(raw_master_key, ci->ci_nonce,
|
||||
derived_key, ci->ci_mode->keysize);
|
||||
if (err)
|
||||
goto out;
|
||||
|
||||
err = fscrypt_set_derived_key(ci, derived_key);
|
||||
out:
|
||||
kzfree(derived_key);
|
||||
return err;
|
||||
}
|
||||
|
||||
int fscrypt_setup_v1_file_key(struct fscrypt_info *ci, const u8 *raw_master_key)
|
||||
{
|
||||
if (ci->ci_policy.v1.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
|
||||
return setup_v1_file_key_direct(ci, raw_master_key);
|
||||
else
|
||||
return setup_v1_file_key_derived(ci, raw_master_key);
|
||||
}
|
||||
|
||||
int fscrypt_setup_v1_file_key_via_subscribed_keyrings(struct fscrypt_info *ci)
|
||||
{
|
||||
struct key *key;
|
||||
const struct fscrypt_key *payload;
|
||||
int err;
|
||||
|
||||
key = find_and_lock_process_key(FSCRYPT_KEY_DESC_PREFIX,
|
||||
ci->ci_policy.v1.master_key_descriptor,
|
||||
ci->ci_mode->keysize, &payload);
|
||||
if (key == ERR_PTR(-ENOKEY) && ci->ci_inode->i_sb->s_cop->key_prefix) {
|
||||
key = find_and_lock_process_key(ci->ci_inode->i_sb->s_cop->key_prefix,
|
||||
ci->ci_policy.v1.master_key_descriptor,
|
||||
ci->ci_mode->keysize, &payload);
|
||||
}
|
||||
if (IS_ERR(key))
|
||||
return PTR_ERR(key);
|
||||
|
||||
err = fscrypt_setup_v1_file_key(ci, payload->raw);
|
||||
up_read(&key->sem);
|
||||
key_put(key);
|
||||
return err;
|
||||
}
|
@@ -5,8 +5,9 @@
  * Copyright (C) 2015, Google, Inc.
  * Copyright (C) 2015, Motorola Mobility.
  *
- * Written by Michael Halcrow, 2015.
+ * Originally written by Michael Halcrow, 2015.
  * Modified by Jaegeuk Kim, 2015.
+ * Modified by Eric Biggers, 2019 for v2 policy support.
  */
 
 #include <linux/random.h>
@@ -14,70 +15,342 @@
 #include <linux/mount.h>
 #include "fscrypt_private.h"
 
-/*
- * check whether an encryption policy is consistent with an encryption context
+/**
+ * fscrypt_policies_equal - check whether two encryption policies are the same
+ *
+ * Return: %true if equal, else %false
  */
-static bool is_encryption_context_consistent_with_policy(
-				const struct fscrypt_context *ctx,
-				const struct fscrypt_policy *policy)
+bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
+			    const union fscrypt_policy *policy2)
 {
-	return memcmp(ctx->master_key_descriptor, policy->master_key_descriptor,
-		      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-		(ctx->flags == policy->flags) &&
-		(ctx->contents_encryption_mode ==
-		 policy->contents_encryption_mode) &&
-		(ctx->filenames_encryption_mode ==
-		 policy->filenames_encryption_mode);
+	if (policy1->version != policy2->version)
+		return false;
+
+	return !memcmp(policy1, policy2, fscrypt_policy_size(policy1));
 }
 
-static int create_encryption_context_from_policy(struct inode *inode,
-				const struct fscrypt_policy *policy)
+static bool supported_iv_ino_lblk_64_policy(
+					const struct fscrypt_policy_v2 *policy,
+					const struct inode *inode)
 {
-	struct fscrypt_context ctx;
+	struct super_block *sb = inode->i_sb;
+	int ino_bits = 64, lblk_bits = 64;
 
-	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
-	memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
-					FS_KEY_DESCRIPTOR_SIZE);
+	if (policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
+		fscrypt_warn(inode,
+			     "The DIRECT_KEY and IV_INO_LBLK_64 flags are mutually exclusive");
+		return false;
+	}
+	/*
+	 * It's unsafe to include inode numbers in the IVs if the filesystem can
+	 * potentially renumber inodes, e.g. via filesystem shrinking.
+	 */
+	if (!sb->s_cop->has_stable_inodes ||
+	    !sb->s_cop->has_stable_inodes(sb)) {
+		fscrypt_warn(inode,
+			     "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't have stable inode numbers",
+			     sb->s_id);
+		return false;
+	}
+	if (sb->s_cop->get_ino_and_lblk_bits)
+		sb->s_cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
+	if (ino_bits > 32 || lblk_bits > 32) {
+		fscrypt_warn(inode,
+			     "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't use 32-bit inode and block numbers",
+			     sb->s_id);
+		return false;
+	}
+	return true;
+}
 
-	if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
-				     policy->filenames_encryption_mode))
+/**
+ * fscrypt_supported_policy - check whether an encryption policy is supported
+ *
+ * Given an encryption policy, check whether all its encryption modes and other
+ * settings are supported by this kernel.  (But we don't currently check
+ * for crypto API support here, so attempting to use an algorithm not configured
+ * into the crypto API will still fail later.)
+ *
+ * Return: %true if supported, else %false
+ */
+bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
+			      const struct inode *inode)
+{
+	switch (policy_u->version) {
+	case FSCRYPT_POLICY_V1: {
+		const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+		if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+					     policy->filenames_encryption_mode)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption modes (contents %d, filenames %d)",
+				     policy->contents_encryption_mode,
+				     policy->filenames_encryption_mode);
+			return false;
+		}
+
+		if (policy->flags & ~(FSCRYPT_POLICY_FLAGS_PAD_MASK |
+				      FSCRYPT_POLICY_FLAG_DIRECT_KEY)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption flags (0x%02x)",
+				     policy->flags);
+			return false;
+		}
+
+		return true;
+	}
+	case FSCRYPT_POLICY_V2: {
+		const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+		if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+					     policy->filenames_encryption_mode)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption modes (contents %d, filenames %d)",
+				     policy->contents_encryption_mode,
+				     policy->filenames_encryption_mode);
+			return false;
+		}
+
+		if (policy->flags & ~FSCRYPT_POLICY_FLAGS_VALID) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption flags (0x%02x)",
+				     policy->flags);
+			return false;
+		}
+
+		if ((policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) &&
+		    !supported_iv_ino_lblk_64_policy(policy, inode))
+			return false;
+
+		if (memchr_inv(policy->__reserved, 0,
+			       sizeof(policy->__reserved))) {
+			fscrypt_warn(inode,
+				     "Reserved bits set in encryption policy");
+			return false;
+		}
+
+		return true;
+	}
+	}
+	return false;
+}
 
+/**
+ * fscrypt_new_context_from_policy - create a new fscrypt_context from a policy
+ *
+ * Create an fscrypt_context for an inode that is being assigned the given
+ * encryption policy.  A new nonce is randomly generated.
+ *
+ * Return: the size of the new context in bytes.
+ */
+static int fscrypt_new_context_from_policy(union fscrypt_context *ctx_u,
+					   const union fscrypt_policy *policy_u)
+{
+	memset(ctx_u, 0, sizeof(*ctx_u));
+
+	switch (policy_u->version) {
+	case FSCRYPT_POLICY_V1: {
+		const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+		struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+
+		ctx->version = FSCRYPT_CONTEXT_V1;
+		ctx->contents_encryption_mode =
+			policy->contents_encryption_mode;
+		ctx->filenames_encryption_mode =
+			policy->filenames_encryption_mode;
+		ctx->flags = policy->flags;
+		memcpy(ctx->master_key_descriptor,
+		       policy->master_key_descriptor,
+		       sizeof(ctx->master_key_descriptor));
+		get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+		return sizeof(*ctx);
+	}
+	case FSCRYPT_POLICY_V2: {
+		const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+		struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+
+		ctx->version = FSCRYPT_CONTEXT_V2;
+		ctx->contents_encryption_mode =
+			policy->contents_encryption_mode;
+		ctx->filenames_encryption_mode =
+			policy->filenames_encryption_mode;
+		ctx->flags = policy->flags;
+		memcpy(ctx->master_key_identifier,
+		       policy->master_key_identifier,
+		       sizeof(ctx->master_key_identifier));
+		get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+		return sizeof(*ctx);
+	}
+	}
+	BUG();
+}
 
+/**
+ * fscrypt_policy_from_context - convert an fscrypt_context to an fscrypt_policy
+ *
+ * Given an fscrypt_context, build the corresponding fscrypt_policy.
+ *
+ * Return: 0 on success, or -EINVAL if the fscrypt_context has an unrecognized
+ * version number or size.
+ *
+ * This does *not* validate the settings within the policy itself, e.g. the
+ * modes, flags, and reserved bits.  Use fscrypt_supported_policy() for that.
+ */
+int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
+				const union fscrypt_context *ctx_u,
+				int ctx_size)
+{
+	memset(policy_u, 0, sizeof(*policy_u));
+
+	if (ctx_size <= 0 || ctx_size != fscrypt_context_size(ctx_u))
 		return -EINVAL;
 
-	if (policy->flags & ~FS_POLICY_FLAGS_VALID)
+	switch (ctx_u->version) {
+	case FSCRYPT_CONTEXT_V1: {
+		const struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+		struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+		policy->version = FSCRYPT_POLICY_V1;
+		policy->contents_encryption_mode =
+			ctx->contents_encryption_mode;
+		policy->filenames_encryption_mode =
+			ctx->filenames_encryption_mode;
+		policy->flags = ctx->flags;
+		memcpy(policy->master_key_descriptor,
+		       ctx->master_key_descriptor,
+		       sizeof(policy->master_key_descriptor));
+		return 0;
+	}
+	case FSCRYPT_CONTEXT_V2: {
+		const struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+		struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+		policy->version = FSCRYPT_POLICY_V2;
+		policy->contents_encryption_mode =
+			ctx->contents_encryption_mode;
+		policy->filenames_encryption_mode =
+			ctx->filenames_encryption_mode;
+		policy->flags = ctx->flags;
+		memcpy(policy->__reserved, ctx->__reserved,
+		       sizeof(policy->__reserved));
+		memcpy(policy->master_key_identifier,
+		       ctx->master_key_identifier,
+		       sizeof(policy->master_key_identifier));
+		return 0;
+	}
+	}
+	/* unreachable */
+	return -EINVAL;
+}
+
+/* Retrieve an inode's encryption policy */
+static int fscrypt_get_policy(struct inode *inode, union fscrypt_policy *policy)
+{
+	const struct fscrypt_info *ci;
+	union fscrypt_context ctx;
+	int ret;
+
+	ci = READ_ONCE(inode->i_crypt_info);
+	if (ci) {
+		/* key available, use the cached policy */
+		*policy = ci->ci_policy;
+		return 0;
+	}
+
+	if (!IS_ENCRYPTED(inode))
+		return -ENODATA;
+
+	ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+	if (ret < 0)
+		return (ret == -ERANGE) ? -EINVAL : ret;
+
+	return fscrypt_policy_from_context(policy, &ctx, ret);
+}
 
 static int set_encryption_policy(struct inode *inode,
 				 const union fscrypt_policy *policy)
 {
 	union fscrypt_context ctx;
 	int ctxsize;
 	int err;
 
 	if (!fscrypt_supported_policy(policy, inode))
 		return -EINVAL;
 
-	ctx.contents_encryption_mode = policy->contents_encryption_mode;
-	ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
-	ctx.flags = policy->flags;
-	BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE);
-	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		/*
+		 * The original encryption policy version provided no way of
+		 * verifying that the correct master key was supplied, which was
+		 * insecure in scenarios where multiple users have access to the
+		 * same encrypted files (even just read-only access).  The new
+		 * encryption policy version fixes this and also implies use of
+		 * an improved key derivation function and allows non-root users
+		 * to securely remove keys.  So as long as compatibility with
+		 * old kernels isn't required, it is recommended to use the new
+		 * policy version for all new encrypted directories.
+		 */
+		pr_warn_once("%s (pid %d) is setting deprecated v1 encryption policy; recommend upgrading to v2.\n",
+			     current->comm, current->pid);
+		break;
+	case FSCRYPT_POLICY_V2:
+		err = fscrypt_verify_key_added(inode->i_sb,
+					       policy->v2.master_key_identifier);
+		if (err)
+			return err;
+		break;
+	default:
+		WARN_ON(1);
+		return -EINVAL;
+	}
 
-	return inode->i_sb->s_cop->set_context(inode, &ctx, sizeof(ctx), NULL);
+	ctxsize = fscrypt_new_context_from_policy(&ctx, policy);
+
+	return inode->i_sb->s_cop->set_context(inode, &ctx, ctxsize, NULL);
 }
 
 int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
 {
-	struct fscrypt_policy policy;
+	union fscrypt_policy policy;
+	union fscrypt_policy existing_policy;
 	struct inode *inode = file_inode(filp);
+	u8 version;
+	int size;
 	int ret;
-	struct fscrypt_context ctx;
 
-	if (copy_from_user(&policy, arg, sizeof(policy)))
+	if (get_user(policy.version, (const u8 __user *)arg))
 		return -EFAULT;
 
+	size = fscrypt_policy_size(&policy);
+	if (size <= 0)
+		return -EINVAL;
+
+	/*
+	 * We should just copy the remaining 'size - 1' bytes here, but a
+	 * bizarre bug in gcc 7 and earlier (fixed by gcc r255731) causes gcc to
+	 * think that size can be 0 here (despite the check above!) *and* that
+	 * it's a compile-time constant.  Thus it would think copy_from_user()
+	 * is passed compile-time constant ULONG_MAX, causing the compile-time
+	 * buffer overflow check to fail, breaking the build.  This only occurred
+	 * when building an i386 kernel with -Os and branch profiling enabled.
+	 *
+	 * Work around it by just copying the first byte again...
+	 */
+	version = policy.version;
+	if (copy_from_user(&policy, arg, size))
+		return -EFAULT;
+	policy.version = version;
+
 	if (!inode_owner_or_capable(inode))
 		return -EACCES;
 
-	if (policy.version != 0)
-		return -EINVAL;
-
 	ret = mnt_want_write_file(filp);
 	if (ret)
 		return ret;
 
 	inode_lock(inode);
 
-	ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+	ret = fscrypt_get_policy(inode, &existing_policy);
 	if (ret == -ENODATA) {
 		if (!S_ISDIR(inode->i_mode))
 			ret = -ENOTDIR;
@@ -86,14 +359,10 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
 		else if (!inode->i_sb->s_cop->empty_dir(inode))
 			ret = -ENOTEMPTY;
 		else
-			ret = create_encryption_context_from_policy(inode,
-								    &policy);
-	} else if (ret == sizeof(ctx) &&
-		   is_encryption_context_consistent_with_policy(&ctx,
-								&policy)) {
-		/* The file already uses the same encryption policy. */
-		ret = 0;
-	} else if (ret >= 0 || ret == -ERANGE) {
+			ret = set_encryption_policy(inode, &policy);
+	} else if (ret == -EINVAL ||
+		   (ret == 0 && !fscrypt_policies_equal(&policy,
+							&existing_policy))) {
 		/* The file already uses a different encryption policy. */
 		ret = -EEXIST;
 	}
@@ -105,37 +374,57 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
 }
 EXPORT_SYMBOL(fscrypt_ioctl_set_policy);
 
+/* Original ioctl version; can only get the original policy version */
 int fscrypt_ioctl_get_policy(struct file *filp, void __user *arg)
 {
-	struct inode *inode = file_inode(filp);
-	struct fscrypt_context ctx;
-	struct fscrypt_policy policy;
-	int res;
+	union fscrypt_policy policy;
+	int err;
 
-	if (!IS_ENCRYPTED(inode))
-		return -ENODATA;
+	err = fscrypt_get_policy(file_inode(filp), &policy);
+	if (err)
+		return err;
 
-	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
-	if (res < 0 && res != -ERANGE)
-		return res;
-	if (res != sizeof(ctx))
-		return -EINVAL;
-	if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
+	if (policy.version != FSCRYPT_POLICY_V1)
 		return -EINVAL;
 
-	policy.version = 0;
-	policy.contents_encryption_mode = ctx.contents_encryption_mode;
-	policy.filenames_encryption_mode = ctx.filenames_encryption_mode;
-	policy.flags = ctx.flags;
-	memcpy(policy.master_key_descriptor, ctx.master_key_descriptor,
-	       FS_KEY_DESCRIPTOR_SIZE);
-
-	if (copy_to_user(arg, &policy, sizeof(policy)))
+	if (copy_to_user(arg, &policy, sizeof(policy.v1)))
 		return -EFAULT;
 	return 0;
 }
 EXPORT_SYMBOL(fscrypt_ioctl_get_policy);
 
+/* Extended ioctl version; can get policies of any version */
+int fscrypt_ioctl_get_policy_ex(struct file *filp, void __user *uarg)
+{
+	struct fscrypt_get_policy_ex_arg arg;
+	union fscrypt_policy *policy = (union fscrypt_policy *)&arg.policy;
+	size_t policy_size;
+	int err;
+
+	/* arg is policy_size, then policy */
+	BUILD_BUG_ON(offsetof(typeof(arg), policy_size) != 0);
+	BUILD_BUG_ON(offsetofend(typeof(arg), policy_size) !=
+		     offsetof(typeof(arg), policy));
+	BUILD_BUG_ON(sizeof(arg.policy) != sizeof(*policy));
+
+	err = fscrypt_get_policy(file_inode(filp), policy);
+	if (err)
+		return err;
+	policy_size = fscrypt_policy_size(policy);
+
+	if (copy_from_user(&arg, uarg, sizeof(arg.policy_size)))
+		return -EFAULT;
+
+	if (policy_size > arg.policy_size)
+		return -EOVERFLOW;
+	arg.policy_size = policy_size;
+
+	if (copy_to_user(uarg, &arg, sizeof(arg.policy_size) + policy_size))
+		return -EFAULT;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_get_policy_ex);
+
 /**
  * fscrypt_has_permitted_context() - is a file's encryption policy permitted
  *				     within its directory?
@@ -157,10 +446,8 @@ EXPORT_SYMBOL(fscrypt_ioctl_get_policy);
  */
 int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
 {
-	const struct fscrypt_operations *cops = parent->i_sb->s_cop;
-	const struct fscrypt_info *parent_ci, *child_ci;
-	struct fscrypt_context parent_ctx, child_ctx;
-	int res;
+	union fscrypt_policy parent_policy, child_policy;
+	int err;
 
 	/* No restrictions on file types which are never encrypted */
 	if (!S_ISREG(child->i_mode) && !S_ISDIR(child->i_mode) &&
@@ -190,41 +477,22 @@ int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
 	 * In any case, if an unexpected error occurs, fall back to "forbidden".
 	 */
 
-	res = fscrypt_get_encryption_info(parent);
-	if (res)
+	err = fscrypt_get_encryption_info(parent);
+	if (err)
 		return 0;
-	res = fscrypt_get_encryption_info(child);
-	if (res)
-		return 0;
-	parent_ci = READ_ONCE(parent->i_crypt_info);
-	child_ci = READ_ONCE(child->i_crypt_info);
-
-	if (parent_ci && child_ci) {
-		return memcmp(parent_ci->ci_master_key_descriptor,
-			      child_ci->ci_master_key_descriptor,
-			      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-			(parent_ci->ci_data_mode == child_ci->ci_data_mode) &&
-			(parent_ci->ci_filename_mode ==
-			 child_ci->ci_filename_mode) &&
-			(parent_ci->ci_flags == child_ci->ci_flags);
-	}
-
-	res = cops->get_context(parent, &parent_ctx, sizeof(parent_ctx));
-	if (res != sizeof(parent_ctx))
+	err = fscrypt_get_encryption_info(child);
+	if (err)
 		return 0;
 
-	res = cops->get_context(child, &child_ctx, sizeof(child_ctx));
-	if (res != sizeof(child_ctx))
+	err = fscrypt_get_policy(parent, &parent_policy);
+	if (err)
 		return 0;
 
-	return memcmp(parent_ctx.master_key_descriptor,
-		      child_ctx.master_key_descriptor,
-		      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-		(parent_ctx.contents_encryption_mode ==
-		 child_ctx.contents_encryption_mode) &&
-		(parent_ctx.filenames_encryption_mode ==
-		 child_ctx.filenames_encryption_mode) &&
-		(parent_ctx.flags == child_ctx.flags);
+	err = fscrypt_get_policy(child, &child_policy);
+	if (err)
+		return 0;
+
+	return fscrypt_policies_equal(&parent_policy, &child_policy);
 }
 EXPORT_SYMBOL(fscrypt_has_permitted_context);
 
@@ -240,7 +508,8 @@ EXPORT_SYMBOL(fscrypt_has_permitted_context);
 int fscrypt_inherit_context(struct inode *parent, struct inode *child,
 			    void *fs_data, bool preload)
 {
-	struct fscrypt_context ctx;
+	union fscrypt_context ctx;
+	int ctxsize;
 	struct fscrypt_info *ci;
 	int res;
 
@@ -252,16 +521,10 @@ int fscrypt_inherit_context(struct inode *parent, struct inode *child,
 	if (ci == NULL)
 		return -ENOKEY;
 
-	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
-	ctx.contents_encryption_mode = ci->ci_data_mode;
-	ctx.filenames_encryption_mode = ci->ci_filename_mode;
-	ctx.flags = ci->ci_flags;
-	memcpy(ctx.master_key_descriptor, ci->ci_master_key_descriptor,
-	       FS_KEY_DESCRIPTOR_SIZE);
-	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+	ctxsize = fscrypt_new_context_from_policy(&ctx, &ci->ci_policy);
 
 	BUILD_BUG_ON(sizeof(ctx) != FSCRYPT_SET_CONTEXT_MAX_SIZE);
-	res = parent->i_sb->s_cop->set_context(child, &ctx,
-					       sizeof(ctx), fs_data);
+	res = parent->i_sb->s_cop->set_context(child, &ctx, ctxsize, fs_data);
 	if (res)
 		return res;
 	return preload ? fscrypt_get_encryption_info(child): 0;
@@ -1140,6 +1140,7 @@ struct ext4_inode_info {
 #define EXT4_MOUNT_JOURNAL_CHECKSUM	0x800000 /* Journal checksums */
 #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT	0x1000000 /* Journal Async Commit */
 #define EXT4_MOUNT_WARN_ON_ERROR	0x2000000 /* Trigger WARN_ON on error */
+#define EXT4_MOUNT_INLINECRYPT		0x4000000 /* Inline encryption support */
 #define EXT4_MOUNT_DELALLOC		0x8000000 /* Delalloc support */
 #define EXT4_MOUNT_DATA_ERR_ABORT	0x10000000 /* Abort on file data write */
 #define EXT4_MOUNT_BLOCK_VALIDITY	0x20000000 /* Block validity checking */
@@ -1666,6 +1667,7 @@ static inline bool ext4_verity_in_progress(struct inode *inode)
 #define EXT4_FEATURE_COMPAT_RESIZE_INODE	0x0010
 #define EXT4_FEATURE_COMPAT_DIR_INDEX		0x0020
 #define EXT4_FEATURE_COMPAT_SPARSE_SUPER2	0x0200
+#define EXT4_FEATURE_COMPAT_STABLE_INODES	0x0800
 
 #define EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER	0x0001
 #define EXT4_FEATURE_RO_COMPAT_LARGE_FILE	0x0002
@@ -1767,6 +1769,7 @@ EXT4_FEATURE_COMPAT_FUNCS(xattr, EXT_ATTR)
 EXT4_FEATURE_COMPAT_FUNCS(resize_inode,		RESIZE_INODE)
 EXT4_FEATURE_COMPAT_FUNCS(dir_index,		DIR_INDEX)
 EXT4_FEATURE_COMPAT_FUNCS(sparse_super2,	SPARSE_SUPER2)
+EXT4_FEATURE_COMPAT_FUNCS(stable_inodes,	STABLE_INODES)
 
 EXT4_FEATURE_RO_COMPAT_FUNCS(sparse_super,	SPARSE_SUPER)
 EXT4_FEATURE_RO_COMPAT_FUNCS(large_file,	LARGE_FILE)
@@ -1237,8 +1237,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 		    (block_start < from || block_end > to)) {
 			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 			*wait_bh++ = bh;
-			decrypt = IS_ENCRYPTED(inode) &&
-				S_ISREG(inode->i_mode);
+			decrypt = fscrypt_inode_uses_fs_layer_crypto(inode);
 		}
 	}
 	/*
@@ -4137,8 +4136,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 		/* Uhhuh. Read error. Complain and punt. */
 		if (!buffer_uptodate(bh))
 			goto unlock;
-		if (S_ISREG(inode->i_mode) &&
-		    IS_ENCRYPTED(inode)) {
+		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 			/* We expect the key to be set. */
 			BUG_ON(!fscrypt_has_encryption_key(inode));
 			BUG_ON(blocksize != PAGE_SIZE);
@@ -1131,8 +1131,35 @@ resizefs_out:
 #endif
 	}
 	case EXT4_IOC_GET_ENCRYPTION_POLICY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
 		return fscrypt_ioctl_get_policy(filp, (void __user *)arg);
 
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
+
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_add_key(filp, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_remove_key_all_users(filp,
+							  (void __user *)arg);
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
+
 	case EXT4_IOC_FSGETXATTR:
 	{
 		struct fsxattr fa;
@@ -1265,6 +1292,11 @@ long ext4_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case EXT4_IOC_SET_ENCRYPTION_POLICY:
 	case EXT4_IOC_GET_ENCRYPTION_PWSALT:
 	case EXT4_IOC_GET_ENCRYPTION_POLICY:
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
 	case EXT4_IOC_SHUTDOWN:
 	case FS_IOC_GETFSMAP:
 	case FS_IOC_ENABLE_VERITY:
@@ -362,10 +362,16 @@ static int io_submit_init_bio(struct ext4_io_submit *io,
 			      struct buffer_head *bh)
 {
 	struct bio *bio;
+	int err;
 
 	bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
 	if (!bio)
 		return -ENOMEM;
+	err = fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
+	if (err) {
+		bio_put(bio);
+		return err;
+	}
 	wbc_init_bio(io->io_wbc, bio);
 	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
 	bio_set_dev(bio, bh->b_bdev);
@@ -383,7 +389,8 @@ static int io_submit_add_bh(struct ext4_io_submit *io,
 {
 	int ret;
 
-	if (io->io_bio && bh->b_blocknr != io->io_next_block) {
+	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
+			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
 submit_and_retry:
 		ext4_io_submit(io);
 	}
@@ -469,7 +476,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 
 	bh = head = page_buffers(page);
 
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) {
+	if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
 		gfp_t gfp_flags = GFP_NOFS;
 
 	retry_encrypt:
@ -198,7 +198,7 @@ static struct bio_post_read_ctx *get_bio_post_read_ctx(struct inode *inode,
|
||||
unsigned int post_read_steps = 0;
|
||||
struct bio_post_read_ctx *ctx = NULL;
|
||||
|
||||
if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
|
||||
if (fscrypt_inode_uses_fs_layer_crypto(inode))
|
||||
post_read_steps |= 1 << STEP_DECRYPT;
|
||||
|
||||
if (ext4_need_verity(inode, first_idx))
|
||||
@ -259,6 +259,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
|
||||
const unsigned blkbits = inode->i_blkbits;
|
||||
const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
|
||||
const unsigned blocksize = 1 << blkbits;
|
||||
sector_t next_block;
|
||||
sector_t block_in_file;
|
||||
sector_t last_block;
|
||||
sector_t last_block_in_file;
|
||||
@ -290,7 +291,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
|
||||
if (page_has_buffers(page))
|
||||
goto confused;
|
||||
|
||||
block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
|
||||
block_in_file = next_block =
|
||||
(sector_t)page->index << (PAGE_SHIFT - blkbits);
|
||||
last_block = block_in_file + nr_pages * blocks_per_page;
|
||||
last_block_in_file = (ext4_readpage_limit(inode) +
|
||||
blocksize - 1) >> blkbits;
|
||||
@ -390,7 +392,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
|
||||
* This page will go to BIO. Do we need to send this
|
||||
* BIO off first?
|
||||
*/
|
||||
if (bio && (last_block_in_bio != blocks[0] - 1)) {
|
||||
if (bio && (last_block_in_bio != blocks[0] - 1 ||
|
||||
!fscrypt_mergeable_bio(bio, inode, next_block))) {
|
||||
submit_and_realloc:
|
||||
ext4_submit_bio_read(bio);
|
||||
bio = NULL;
|
||||
@ -402,6 +405,12 @@ int ext4_mpage_readpages(struct address_space *mapping,
|
||||
min_t(int, nr_pages, BIO_MAX_PAGES));
|
||||
if (!bio)
|
||||
goto set_error_page;
|
||||
if (fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
|
||||
GFP_KERNEL) != 0) {
|
||||
bio_put(bio);
|
||||
bio = NULL;
|
||||
goto set_error_page;
|
||||
}
|
||||
ctx = get_bio_post_read_ctx(inode, bio, page->index);
|
||||
if (IS_ERR(ctx)) {
|
||||
bio_put(bio);
|
||||
|
@@ -1107,6 +1107,9 @@ static int ext4_drop_inode(struct inode *inode)
{
int drop = generic_drop_inode(inode);

if (!drop)
drop = fscrypt_drop_inode(inode);

trace_ext4_drop_inode(inode, drop);
return drop;
}

@@ -1346,6 +1349,23 @@ static bool ext4_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(EXT4_SB(inode->i_sb));
}

static bool ext4_has_stable_inodes(struct super_block *sb)
{
return ext4_has_feature_stable_inodes(sb);
}

static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
int *ino_bits_ret, int *lblk_bits_ret)
{
*ino_bits_ret = 8 * sizeof(EXT4_SB(sb)->s_es->s_inodes_count);
*lblk_bits_ret = 8 * sizeof(ext4_lblk_t);
}

static bool ext4_inline_crypt_enabled(struct super_block *sb)
{
return test_opt(sb, INLINECRYPT);
}

static const struct fscrypt_operations ext4_cryptops = {
.key_prefix = "ext4:",
.get_context = ext4_get_context,

@@ -1353,6 +1373,9 @@ static const struct fscrypt_operations ext4_cryptops = {
.dummy_context = ext4_dummy_context,
.empty_dir = ext4_empty_dir,
.max_namelen = EXT4_NAME_LEN,
.has_stable_inodes = ext4_has_stable_inodes,
.get_ino_and_lblk_bits = ext4_get_ino_and_lblk_bits,
.inline_crypt_enabled = ext4_inline_crypt_enabled,
};
#endif

@@ -1447,6 +1470,7 @@ enum {
Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
Opt_inlinecrypt,
Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,

@@ -1543,6 +1567,7 @@ static const match_table_t tokens = {
{Opt_noinit_itable, "noinit_itable"},
{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
{Opt_test_dummy_encryption, "test_dummy_encryption"},
{Opt_inlinecrypt, "inlinecrypt"},
{Opt_nombcache, "nombcache"},
{Opt_nombcache, "no_mbcache"}, /* for backward compatibility */
{Opt_removed, "check=none"}, /* mount option from ext2/3 */

@@ -1754,6 +1779,11 @@ static const struct mount_opts {
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
{Opt_test_dummy_encryption, 0, MOPT_GTE0},
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
#else
{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
#endif
{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
{Opt_err, 0, 0}
};
@@ -317,6 +317,35 @@ static struct bio *__bio_alloc(struct f2fs_io_info *fio, int npages)
return bio;
}

static int f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
pgoff_t first_idx,
const struct f2fs_io_info *fio,
gfp_t gfp_mask)
{
/*
* The f2fs garbage collector sets ->encrypted_page when it wants to
* read/write raw data without encryption.
*/
if (fio && fio->encrypted_page)
return 0;

return fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask);
}

static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode,
pgoff_t next_idx,
const struct f2fs_io_info *fio)
{
/*
* The f2fs garbage collector sets ->encrypted_page when it wants to
* read/write raw data without encryption.
*/
if (fio && fio->encrypted_page)
return !bio_has_crypt_ctx(bio);

return fscrypt_mergeable_bio(bio, inode, next_idx);
}

static inline void __submit_bio(struct f2fs_sb_info *sbi,
struct bio *bio, enum page_type type)
{

@@ -514,6 +543,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
struct bio *bio;
struct page *page = fio->encrypted_page ?
fio->encrypted_page : fio->page;
int err;

if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
fio->is_por ? META_POR : (__is_meta_io(fio) ?

@@ -526,6 +556,13 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
/* Allocate a new bio */
bio = __bio_alloc(fio, 1);

err = f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
fio->page->index, fio, GFP_NOIO);
if (err) {
bio_put(bio);
return err;
}

if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
bio_put(bio);
return -EFAULT;

@@ -716,13 +753,18 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
trace_f2fs_submit_page_bio(page, fio);
f2fs_trace_ios(fio, 0);

if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
fio->new_blkaddr))
if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block,
fio->new_blkaddr) ||
!f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host,
fio->page->index, fio)))
f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);

alloc_new:
if (!bio) {
bio = __bio_alloc(fio, BIO_MAX_PAGES);
f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
fio->page->index, fio,
GFP_NOIO | __GFP_NOFAIL);
bio_set_op_attrs(bio, fio->op, fio->op_flags);

add_bio_entry(fio->sbi, bio, page, fio->temp);

@@ -774,8 +816,11 @@ next:

inc_page_count(sbi, WB_DATA_TYPE(bio_page));

if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio,
io->last_block_in_bio, fio->new_blkaddr))
if (io->bio &&
(!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
fio->new_blkaddr) ||
!f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host,
fio->page->index, fio)))
__submit_merged_bio(io);
alloc_new:
if (io->bio == NULL) {

@@ -787,7 +832,9 @@ alloc_new:
goto skip;
}
io->bio = __bio_alloc(fio, BIO_MAX_PAGES);

f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host,
fio->page->index, fio,
GFP_NOIO | __GFP_NOFAIL);
io->fio = *fio;
}

@@ -827,15 +874,23 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
struct bio *bio;
struct bio_post_read_ctx *ctx;
unsigned int post_read_steps = 0;
int err;

bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
if (!bio)
return ERR_PTR(-ENOMEM);

err = f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS);
if (err) {
bio_put(bio);
return ERR_PTR(err);
}

f2fs_target_device(sbi, blkaddr, bio);
bio->bi_end_io = f2fs_read_end_io;
bio_set_op_attrs(bio, REQ_OP_READ, op_flag);

if (f2fs_encrypted_file(inode))
if (fscrypt_inode_uses_fs_layer_crypto(inode))
post_read_steps |= 1 << STEP_DECRYPT;

if (f2fs_need_verity(inode, first_idx))

@@ -1870,8 +1925,9 @@ zero_out:
* This page will go to BIO. Do we need to send this
* BIO off first?
*/
if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio,
*last_block_in_bio, block_nr)) {
if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio,
*last_block_in_bio, block_nr) ||
!f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
submit_and_realloc:
__f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
bio = NULL;

@@ -2013,6 +2069,9 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
/* wait for GCed page writeback via META_MAPPING */
f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);

if (fscrypt_inode_uses_inline_crypto(inode))
return 0;

retry_encrypt:

fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page,

@@ -2188,7 +2247,7 @@ got_it:
f2fs_unlock_op(fio->sbi);
err = f2fs_inplace_write_data(fio);
if (err) {
if (f2fs_encrypted_file(inode))
if (fscrypt_inode_uses_fs_layer_crypto(inode))
fscrypt_finalize_bounce_page(&fio->encrypted_page);
if (PageWriteback(page))
end_page_writeback(page);
@@ -137,6 +137,9 @@ struct f2fs_mount_info {
int alloc_mode; /* segment allocation policy */
int fsync_mode; /* fsync policy */
bool test_dummy_encryption; /* test dummy encryption */
#ifdef CONFIG_FS_ENCRYPTION
bool inlinecrypt; /* inline encryption enabled */
#endif
block_t unusable_cap; /* Amount of space allowed to be
* unusable when disabling checkpoint
*/

@@ -2267,6 +2267,49 @@ out_err:
return err;
}

static int f2fs_ioc_get_encryption_policy_ex(struct file *filp,
unsigned long arg)
{
if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
return -EOPNOTSUPP;

return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
}

static int f2fs_ioc_add_encryption_key(struct file *filp, unsigned long arg)
{
if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
return -EOPNOTSUPP;

return fscrypt_ioctl_add_key(filp, (void __user *)arg);
}

static int f2fs_ioc_remove_encryption_key(struct file *filp, unsigned long arg)
{
if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
return -EOPNOTSUPP;

return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
}

static int f2fs_ioc_remove_encryption_key_all_users(struct file *filp,
unsigned long arg)
{
if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
return -EOPNOTSUPP;

return fscrypt_ioctl_remove_key_all_users(filp, (void __user *)arg);
}

static int f2fs_ioc_get_encryption_key_status(struct file *filp,
unsigned long arg)
{
if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
return -EOPNOTSUPP;

return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
}

static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);

@@ -3265,6 +3308,16 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return f2fs_ioc_get_encryption_policy(filp, arg);
case F2FS_IOC_GET_ENCRYPTION_PWSALT:
return f2fs_ioc_get_encryption_pwsalt(filp, arg);
case FS_IOC_GET_ENCRYPTION_POLICY_EX:
return f2fs_ioc_get_encryption_policy_ex(filp, arg);
case FS_IOC_ADD_ENCRYPTION_KEY:
return f2fs_ioc_add_encryption_key(filp, arg);
case FS_IOC_REMOVE_ENCRYPTION_KEY:
return f2fs_ioc_remove_encryption_key(filp, arg);
case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
return f2fs_ioc_remove_encryption_key_all_users(filp, arg);
case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
return f2fs_ioc_get_encryption_key_status(filp, arg);
case F2FS_IOC_GARBAGE_COLLECT:
return f2fs_ioc_gc(filp, arg);
case F2FS_IOC_GARBAGE_COLLECT_RANGE:

@@ -3396,6 +3449,11 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case F2FS_IOC_SET_ENCRYPTION_POLICY:
case F2FS_IOC_GET_ENCRYPTION_PWSALT:
case F2FS_IOC_GET_ENCRYPTION_POLICY:
case FS_IOC_GET_ENCRYPTION_POLICY_EX:
case FS_IOC_ADD_ENCRYPTION_KEY:
case FS_IOC_REMOVE_ENCRYPTION_KEY:
case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
case F2FS_IOC_GARBAGE_COLLECT:
case F2FS_IOC_GARBAGE_COLLECT_RANGE:
case F2FS_IOC_WRITE_CHECKPOINT:
@@ -137,6 +137,7 @@ enum {
Opt_alloc,
Opt_fsync,
Opt_test_dummy_encryption,
Opt_inlinecrypt,
Opt_checkpoint_disable,
Opt_checkpoint_disable_cap,
Opt_checkpoint_disable_cap_perc,

@@ -199,6 +200,7 @@ static match_table_t f2fs_tokens = {
{Opt_alloc, "alloc_mode=%s"},
{Opt_fsync, "fsync_mode=%s"},
{Opt_test_dummy_encryption, "test_dummy_encryption"},
{Opt_inlinecrypt, "inlinecrypt"},
{Opt_checkpoint_disable, "checkpoint=disable"},
{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},

@@ -783,6 +785,13 @@ static int parse_options(struct super_block *sb, char *options)
f2fs_info(sbi, "Test dummy encryption mode enabled");
#else
f2fs_info(sbi, "Test dummy encryption mount option ignored");
#endif
break;
case Opt_inlinecrypt:
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
F2FS_OPTION(sbi).inlinecrypt = true;
#else
f2fs_info(sbi, "inline encryption not supported");
#endif
break;
case Opt_checkpoint_disable_cap_perc:

@@ -965,6 +974,8 @@ static int f2fs_drop_inode(struct inode *inode)
return 0;
}
ret = generic_drop_inode(inode);
if (!ret)
ret = fscrypt_drop_inode(inode);
trace_f2fs_drop_inode(inode, ret);
return ret;
}

@@ -1452,6 +1463,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
#ifdef CONFIG_FS_ENCRYPTION
if (F2FS_OPTION(sbi).test_dummy_encryption)
seq_puts(seq, ",test_dummy_encryption");
if (F2FS_OPTION(sbi).inlinecrypt)
seq_puts(seq, ",inlinecrypt");
#endif

if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)

@@ -1480,6 +1493,9 @@ static void default_options(struct f2fs_sb_info *sbi)
F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
F2FS_OPTION(sbi).test_dummy_encryption = false;
#ifdef CONFIG_FS_ENCRYPTION
F2FS_OPTION(sbi).inlinecrypt = false;
#endif
F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);

@@ -2322,13 +2338,33 @@ static bool f2fs_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(inode));
}

static bool f2fs_has_stable_inodes(struct super_block *sb)
{
return true;
}

static void f2fs_get_ino_and_lblk_bits(struct super_block *sb,
int *ino_bits_ret, int *lblk_bits_ret)
{
*ino_bits_ret = 8 * sizeof(nid_t);
*lblk_bits_ret = 8 * sizeof(block_t);
}

static bool f2fs_inline_crypt_enabled(struct super_block *sb)
{
return F2FS_OPTION(F2FS_SB(sb)).inlinecrypt;
}

static const struct fscrypt_operations f2fs_cryptops = {
.key_prefix = "f2fs:",
.get_context = f2fs_get_context,
.set_context = f2fs_set_context,
.dummy_context = f2fs_dummy_context,
.empty_dir = f2fs_empty_dir,
.max_namelen = F2FS_NAME_LEN,
.key_prefix = "f2fs:",
.get_context = f2fs_get_context,
.set_context = f2fs_set_context,
.dummy_context = f2fs_dummy_context,
.empty_dir = f2fs_empty_dir,
.max_namelen = F2FS_NAME_LEN,
.has_stable_inodes = f2fs_has_stable_inodes,
.get_ino_and_lblk_bits = f2fs_get_ino_and_lblk_bits,
.inline_crypt_enabled = f2fs_inline_crypt_enabled,
};
#endif
@@ -19,6 +19,7 @@
*/

#include "sdcardfs.h"
#include <linux/fscrypt.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/parser.h>

@@ -375,6 +376,9 @@ static int sdcardfs_read_super(struct vfsmount *mnt, struct super_block *sb,
list_add(&sb_info->list, &sdcardfs_super_list);
mutex_unlock(&sdcardfs_super_list_lock);

sb_info->fscrypt_nb.notifier_call = sdcardfs_on_fscrypt_key_removed;
fscrypt_register_key_removal_notifier(&sb_info->fscrypt_nb);

if (!silent)
pr_info("sdcardfs: mounted on top of %s type %s\n",
dev_name, lower_sb->s_type->name);

@@ -445,6 +449,9 @@ void sdcardfs_kill_sb(struct super_block *sb)

if (sb->s_magic == SDCARDFS_SUPER_MAGIC && sb->s_fs_info) {
sbi = SDCARDFS_SB(sb);

fscrypt_unregister_key_removal_notifier(&sbi->fscrypt_nb);

mutex_lock(&sdcardfs_super_list_lock);
list_del(&sbi->list);
mutex_unlock(&sdcardfs_super_list_lock);

@@ -151,6 +151,8 @@ extern struct inode *sdcardfs_iget(struct super_block *sb,
struct inode *lower_inode, userid_t id);
extern int sdcardfs_interpose(struct dentry *dentry, struct super_block *sb,
struct path *lower_path, userid_t id);
extern int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
unsigned long action, void *data);

/* file private data */
struct sdcardfs_file_info {

@@ -224,6 +226,7 @@ struct sdcardfs_sb_info {
struct path obbpath;
void *pkgl_id;
struct list_head list;
struct notifier_block fscrypt_nb;
};

/*
@@ -319,6 +319,23 @@ static int sdcardfs_show_options(struct vfsmount *mnt, struct seq_file *m,
return 0;
};

int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
unsigned long action, void *data)
{
struct sdcardfs_sb_info *sbi = container_of(nb, struct sdcardfs_sb_info,
fscrypt_nb);

/*
* Evict any unused sdcardfs dentries (and hence any unused sdcardfs
* inodes, since sdcardfs doesn't cache unpinned inodes by themselves)
* so that the lower filesystem's encrypted inodes can be evicted.
* This is needed to make the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl
* properly "lock" the files underneath the sdcardfs mount.
*/
shrink_dcache_sb(sbi->sb);
return NOTIFY_OK;
}

const struct super_operations sdcardfs_sops = {
.put_super = sdcardfs_put_super,
.statfs = sdcardfs_statfs,

@@ -32,6 +32,7 @@
#include <linux/backing-dev.h>
#include <linux/rculist_bl.h>
#include <linux/cleancache.h>
#include <linux/fscrypt.h>
#include <linux/fsnotify.h>
#include <linux/lockdep.h>
#include <linux/user_namespace.h>

@@ -288,6 +289,7 @@ static void __put_super(struct super_block *s)
WARN_ON(s->s_inode_lru.node);
WARN_ON(!list_empty(&s->s_mounts));
security_sb_free(s);
fscrypt_sb_free(s);
put_user_ns(s->s_user_ns);
kfree(s->s_subtype);
call_rcu(&s->rcu, destroy_super_rcu);
@@ -205,6 +205,21 @@ long ubifs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
#endif
}

case FS_IOC_GET_ENCRYPTION_POLICY_EX:
return fscrypt_ioctl_get_policy_ex(file, (void __user *)arg);

case FS_IOC_ADD_ENCRYPTION_KEY:
return fscrypt_ioctl_add_key(file, (void __user *)arg);

case FS_IOC_REMOVE_ENCRYPTION_KEY:
return fscrypt_ioctl_remove_key(file, (void __user *)arg);

case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
return fscrypt_ioctl_remove_key_all_users(file,
(void __user *)arg);
case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
return fscrypt_ioctl_get_key_status(file, (void __user *)arg);

default:
return -ENOTTY;
}

@@ -222,6 +237,11 @@ long ubifs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
break;
case FS_IOC_SET_ENCRYPTION_POLICY:
case FS_IOC_GET_ENCRYPTION_POLICY:
case FS_IOC_GET_ENCRYPTION_POLICY_EX:
case FS_IOC_ADD_ENCRYPTION_KEY:
case FS_IOC_REMOVE_ENCRYPTION_KEY:
case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
break;
default:
return -ENOIOCTLCMD;

@@ -336,6 +336,16 @@ static int ubifs_write_inode(struct inode *inode, struct writeback_control *wbc)
return err;
}

static int ubifs_drop_inode(struct inode *inode)
{
int drop = generic_drop_inode(inode);

if (!drop)
drop = fscrypt_drop_inode(inode);

return drop;
}

static void ubifs_evict_inode(struct inode *inode)
{
int err;

@@ -1925,6 +1935,7 @@ const struct super_operations ubifs_super_operations = {
.destroy_inode = ubifs_destroy_inode,
.put_super = ubifs_put_super,
.write_inode = ubifs_write_inode,
.drop_inode = ubifs_drop_inode,
.evict_inode = ubifs_evict_inode,
.statfs = ubifs_statfs,
.dirty_inode = ubifs_dirty_inode,
226
include/linux/bio-crypt-ctx.h
Normal file
@@ -0,0 +1,226 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2019 Google LLC
*/
#ifndef __LINUX_BIO_CRYPT_CTX_H
#define __LINUX_BIO_CRYPT_CTX_H

enum blk_crypto_mode_num {
BLK_ENCRYPTION_MODE_INVALID = 0,
BLK_ENCRYPTION_MODE_AES_256_XTS = 1,
};

#ifdef CONFIG_BLOCK
#include <linux/blk_types.h>

#ifdef CONFIG_BLK_INLINE_ENCRYPTION
struct bio_crypt_ctx {
int keyslot;
const u8 *raw_key;
enum blk_crypto_mode_num crypto_mode;
u64 data_unit_num;
unsigned int data_unit_size_bits;

/*
* The keyslot manager where the key has been programmed
* with keyslot.
*/
struct keyslot_manager *processing_ksm;

/*
* Copy of the bvec_iter when this bio was submitted.
* We only want to en/decrypt the part of the bio
* as described by the bvec_iter upon submission because
* bio might be split before being resubmitted
*/
struct bvec_iter crypt_iter;
u64 sw_data_unit_num;
};

extern int bio_crypt_clone(struct bio *dst, struct bio *src,
gfp_t gfp_mask);

static inline bool bio_has_crypt_ctx(struct bio *bio)
{
return bio->bi_crypt_context;
}

static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
{
if (bio_has_crypt_ctx(bio)) {
bio->bi_crypt_context->data_unit_num +=
bytes >> bio->bi_crypt_context->data_unit_size_bits;
}
}

extern bool bio_crypt_swhandled(struct bio *bio);

static inline bool bio_crypt_has_keyslot(struct bio *bio)
{
return bio->bi_crypt_context->keyslot >= 0;
}

extern int bio_crypt_ctx_init(void);

extern struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);

extern void bio_crypt_free_ctx(struct bio *bio);

static inline int bio_crypt_set_ctx(struct bio *bio,
const u8 *raw_key,
enum blk_crypto_mode_num crypto_mode,
u64 dun,
unsigned int dun_bits,
gfp_t gfp_mask)
{
struct bio_crypt_ctx *crypt_ctx;

crypt_ctx = bio_crypt_alloc_ctx(gfp_mask);
if (!crypt_ctx)
return -ENOMEM;

crypt_ctx->raw_key = raw_key;
crypt_ctx->data_unit_num = dun;
crypt_ctx->data_unit_size_bits = dun_bits;
crypt_ctx->crypto_mode = crypto_mode;
crypt_ctx->processing_ksm = NULL;
crypt_ctx->keyslot = -1;
bio->bi_crypt_context = crypt_ctx;

return 0;
}

static inline void bio_set_data_unit_num(struct bio *bio, u64 dun)
{
bio->bi_crypt_context->data_unit_num = dun;
}

static inline int bio_crypt_get_keyslot(struct bio *bio)
{
return bio->bi_crypt_context->keyslot;
}

static inline void bio_crypt_set_keyslot(struct bio *bio,
unsigned int keyslot,
struct keyslot_manager *ksm)
{
bio->bi_crypt_context->keyslot = keyslot;
bio->bi_crypt_context->processing_ksm = ksm;
}

extern void bio_crypt_ctx_release_keyslot(struct bio *bio);

extern int bio_crypt_ctx_acquire_keyslot(struct bio *bio,
struct keyslot_manager *ksm);

static inline const u8 *bio_crypt_raw_key(struct bio *bio)
{
return bio->bi_crypt_context->raw_key;
}

static inline enum blk_crypto_mode_num bio_crypto_mode(struct bio *bio)
{
return bio->bi_crypt_context->crypto_mode;
}

static inline u64 bio_crypt_data_unit_num(struct bio *bio)
{
return bio->bi_crypt_context->data_unit_num;
}

static inline u64 bio_crypt_sw_data_unit_num(struct bio *bio)
{
return bio->bi_crypt_context->sw_data_unit_num;
}

extern bool bio_crypt_should_process(struct bio *bio, struct request_queue *q);

extern bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);

extern bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
unsigned int b1_sectors,
struct bio *b_2);

#else /* CONFIG_BLK_INLINE_ENCRYPTION */
struct keyslot_manager;

static inline int bio_crypt_ctx_init(void)
{
return 0;
}

static inline int bio_crypt_clone(struct bio *dst, struct bio *src,
gfp_t gfp_mask)
{
return 0;
}

static inline void bio_crypt_advance(struct bio *bio,
unsigned int bytes) { }

static inline bool bio_has_crypt_ctx(struct bio *bio)
{
return false;
}

static inline void bio_crypt_free_ctx(struct bio *bio) { }

static inline void bio_crypt_set_ctx(struct bio *bio,
u8 *raw_key,
enum blk_crypto_mode_num crypto_mode,
u64 dun,
unsigned int dun_bits,
gfp_t gfp_mask) { }

static inline bool bio_crypt_swhandled(struct bio *bio)
{
return false;
}

static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) { }

static inline bool bio_crypt_has_keyslot(struct bio *bio)
{
return false;
}

static inline void bio_crypt_set_keyslot(struct bio *bio,
unsigned int keyslot,
struct keyslot_manager *ksm) { }

static inline int bio_crypt_get_keyslot(struct bio *bio)
{
return -1;
}

static inline u8 *bio_crypt_raw_key(struct bio *bio)
{
return NULL;
}

static inline u64 bio_crypt_data_unit_num(struct bio *bio)
{
return 0;
}

static inline bool bio_crypt_should_process(struct bio *bio,
struct request_queue *q)
{
return false;
}

static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
{
return true;
}

static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
unsigned int b1_sectors,
struct bio *b_2)
{
return true;
}

#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
#endif /* CONFIG_BLOCK */
#endif /* __LINUX_BIO_CRYPT_CTX_H */
@@ -22,6 +22,7 @@
#include <linux/mempool.h>
#include <linux/ioprio.h>
#include <linux/bug.h>
#include <linux/bio-crypt-ctx.h>

#ifdef CONFIG_BLOCK
62
include/linux/blk-crypto.h
Normal file
@@ -0,0 +1,62 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2019 Google LLC
*/

#ifndef __LINUX_BLK_CRYPTO_H
#define __LINUX_BLK_CRYPTO_H

#include <linux/types.h>
#include <linux/bio.h>

#ifdef CONFIG_BLK_INLINE_ENCRYPTION

int blk_crypto_init(void);

int blk_crypto_submit_bio(struct bio **bio_ptr);

bool blk_crypto_endio(struct bio *bio);

int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
unsigned int data_unit_size,
struct request_queue *q);

int blk_crypto_evict_key(struct request_queue *q, const u8 *key,
enum blk_crypto_mode_num mode,
unsigned int data_unit_size);

#else /* CONFIG_BLK_INLINE_ENCRYPTION */

static inline int blk_crypto_init(void)
{
return 0;
}

static inline int blk_crypto_submit_bio(struct bio **bio_ptr)
{
return 0;
}

static inline bool blk_crypto_endio(struct bio *bio)
{
return true;
}

static inline int
blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
unsigned int data_unit_size,
struct request_queue *q)
{
return -EOPNOTSUPP;
}

static inline int blk_crypto_evict_key(struct request_queue *q, const u8 *key,
enum blk_crypto_mode_num mode,
unsigned int data_unit_size)
{
return 0;
}

#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

#endif /* __LINUX_BLK_CRYPTO_H */
@ -18,6 +18,7 @@ struct block_device;
|
||||
struct io_context;
|
||||
struct cgroup_subsys_state;
|
||||
typedef void (bio_end_io_t) (struct bio *);
|
||||
struct bio_crypt_ctx;
|
||||
|
||||
/*
|
||||
* Block error status values. See block/blk-core:blk_errors for the details.
|
||||
@ -182,6 +183,11 @@ struct bio {
|
||||
struct blkcg_gq *bi_blkg;
|
||||
struct bio_issue bi_issue;
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
|
||||
struct bio_crypt_ctx *bi_crypt_context;
|
||||
#endif
|
||||
|
||||
union {
|
||||
#if defined(CONFIG_BLK_DEV_INTEGRITY)
|
||||
struct bio_integrity_payload *bi_integrity; /* data integrity */
|
||||
|
@ -43,6 +43,7 @@ struct pr_ops;
struct rq_qos;
struct blk_queue_stats;
struct blk_stat_callback;
struct keyslot_manager;

#define BLKDEV_MIN_RQ   4
#define BLKDEV_MAX_RQ   128     /* Default maximum */
@ -574,6 +575,10 @@ struct request_queue {
         * queue_lock internally, e.g. scsi_request_fn().
         */
        unsigned int            request_fn_active;
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
        /* Inline crypto capabilities */
        struct keyslot_manager *ksm;
#endif

        unsigned int            rq_timeout;
        int                     poll_nsec;
@ -1396,6 +1396,7 @@ struct super_block {
        const struct xattr_handler **s_xattr;
#ifdef CONFIG_FS_ENCRYPTION
        const struct fscrypt_operations *s_cop;
        struct key              *s_master_keys; /* master crypto keys in use */
#endif
#ifdef CONFIG_FS_VERITY
        const struct fsverity_operations *s_vop;
@ -16,10 +16,10 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <uapi/linux/fscrypt.h>

#define FS_CRYPTO_BLOCK_SIZE            16

struct fscrypt_ctx;
struct fscrypt_info;

struct fscrypt_str {
@ -42,7 +42,7 @@ struct fscrypt_name {
#define fname_len(p)            ((p)->disk_name.len)

/* Maximum value for the third parameter of fscrypt_operations.set_context(). */
#define FSCRYPT_SET_CONTEXT_MAX_SIZE    28
#define FSCRYPT_SET_CONTEXT_MAX_SIZE    40

#ifdef CONFIG_FS_ENCRYPTION
/*
@ -61,19 +61,10 @@ struct fscrypt_operations {
        bool (*dummy_context)(struct inode *);
        bool (*empty_dir)(struct inode *);
        unsigned int max_namelen;
        bool (*is_encrypted)(struct inode *inode);
};

/* Decryption work */
struct fscrypt_ctx {
        union {
                struct {
                        struct bio *bio;
                        struct work_struct work;
                };
                struct list_head free_list;     /* Free list */
        };
        u8 flags;                               /* Flags */
        bool (*has_stable_inodes)(struct super_block *sb);
        void (*get_ino_and_lblk_bits)(struct super_block *sb,
                                      int *ino_bits_ret, int *lblk_bits_ret);
        bool (*inline_crypt_enabled)(struct super_block *sb);
};

static inline bool fscrypt_has_encryption_key(const struct inode *inode)
@ -102,8 +93,6 @@ static inline void fscrypt_handle_d_move(struct dentry *dentry)

/* crypto.c */
extern void fscrypt_enqueue_decrypt_work(struct work_struct *);
extern struct fscrypt_ctx *fscrypt_get_ctx(gfp_t);
extern void fscrypt_release_ctx(struct fscrypt_ctx *);

extern struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
                                                     unsigned int len,
@ -135,13 +124,25 @@ extern void fscrypt_free_bounce_page(struct page *bounce_page);
/* policy.c */
extern int fscrypt_ioctl_set_policy(struct file *, const void __user *);
extern int fscrypt_ioctl_get_policy(struct file *, void __user *);
extern int fscrypt_ioctl_get_policy_ex(struct file *, void __user *);
extern int fscrypt_has_permitted_context(struct inode *, struct inode *);
extern int fscrypt_inherit_context(struct inode *, struct inode *,
                                   void *, bool);
/* keyinfo.c */
/* keyring.c */
extern void fscrypt_sb_free(struct super_block *sb);
extern int fscrypt_ioctl_add_key(struct file *filp, void __user *arg);
extern int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg);
extern int fscrypt_ioctl_remove_key_all_users(struct file *filp,
                                              void __user *arg);
extern int fscrypt_ioctl_get_key_status(struct file *filp, void __user *arg);
extern int fscrypt_register_key_removal_notifier(struct notifier_block *nb);
extern int fscrypt_unregister_key_removal_notifier(struct notifier_block *nb);

/* keysetup.c */
extern int fscrypt_get_encryption_info(struct inode *);
extern void fscrypt_put_encryption_info(struct inode *);
extern void fscrypt_free_inode(struct inode *);
extern int fscrypt_drop_inode(struct inode *inode);

/* fname.c */
extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
@ -234,8 +235,6 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,

/* bio.c */
extern void fscrypt_decrypt_bio(struct bio *);
extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
                                        struct bio *bio);
extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
                                 unsigned int);

@ -280,16 +279,6 @@ static inline void fscrypt_enqueue_decrypt_work(struct work_struct *work)
{
}

static inline struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
{
        return ERR_PTR(-EOPNOTSUPP);
}

static inline void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
{
        return;
}

static inline struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
                                                            unsigned int len,
                                                            unsigned int offs,
@ -349,6 +338,12 @@ static inline int fscrypt_ioctl_get_policy(struct file *filp, void __user *arg)
        return -EOPNOTSUPP;
}

static inline int fscrypt_ioctl_get_policy_ex(struct file *filp,
                                              void __user *arg)
{
        return -EOPNOTSUPP;
}

static inline int fscrypt_has_permitted_context(struct inode *parent,
                                                struct inode *child)
{
@ -362,7 +357,46 @@ static inline int fscrypt_inherit_context(struct inode *parent,
        return -EOPNOTSUPP;
}

/* keyinfo.c */
/* keyring.c */
static inline void fscrypt_sb_free(struct super_block *sb)
{
}

static inline int fscrypt_ioctl_add_key(struct file *filp, void __user *arg)
{
        return -EOPNOTSUPP;
}

static inline int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg)
{
        return -EOPNOTSUPP;
}

static inline int fscrypt_ioctl_remove_key_all_users(struct file *filp,
                                                     void __user *arg)
{
        return -EOPNOTSUPP;
}

static inline int fscrypt_ioctl_get_key_status(struct file *filp,
                                               void __user *arg)
{
        return -EOPNOTSUPP;
}

static inline int fscrypt_register_key_removal_notifier(
                                        struct notifier_block *nb)
{
        return 0;
}

static inline int fscrypt_unregister_key_removal_notifier(
                                        struct notifier_block *nb)
{
        return 0;
}

/* keysetup.c */
static inline int fscrypt_get_encryption_info(struct inode *inode)
{
        return -EOPNOTSUPP;
@ -377,6 +411,11 @@ static inline void fscrypt_free_inode(struct inode *inode)
{
}

static inline int fscrypt_drop_inode(struct inode *inode)
{
        return 0;
}

/* fname.c */
static inline int fscrypt_setup_filename(struct inode *dir,
                                         const struct qstr *iname,
@ -431,11 +470,6 @@ static inline void fscrypt_decrypt_bio(struct bio *bio)
{
}

static inline void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
                                               struct bio *bio)
{
}

static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
                                        sector_t pblk, unsigned int len)
{
@ -499,6 +533,65 @@ static inline const char *fscrypt_get_symlink(struct inode *inode,
}
#endif  /* !CONFIG_FS_ENCRYPTION */

/* inline_crypt.c */
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode);

extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode);

extern int fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
                                     u64 first_lblk, gfp_t gfp_mask);

extern int fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
                                        const struct buffer_head *first_bh,
                                        gfp_t gfp_mask);

extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
                                  u64 next_lblk);

extern bool fscrypt_mergeable_bio_bh(struct bio *bio,
                                     const struct buffer_head *next_bh);

#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
{
        return false;
}

static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
{
        return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
}

static inline int fscrypt_set_bio_crypt_ctx(struct bio *bio,
                                            const struct inode *inode,
                                            u64 first_lblk, gfp_t gfp_mask)
{
        return 0;
}

static inline int fscrypt_set_bio_crypt_ctx_bh(
                                        struct bio *bio,
                                        const struct buffer_head *first_bh,
                                        gfp_t gfp_mask)
{
        return 0;
}

static inline bool fscrypt_mergeable_bio(struct bio *bio,
                                         const struct inode *inode,
                                         u64 next_lblk)
{
        return true;
}

static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
                                            const struct buffer_head *next_bh)
{
        return true;
}
#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */

/**
 * fscrypt_require_key - require an inode's encryption key
 * @inode: the inode we need the key for
98
include/linux/keyslot-manager.h
Normal file
@ -0,0 +1,98 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright 2019 Google LLC
 */

#include <linux/bio.h>

#ifdef CONFIG_BLOCK

#ifndef __LINUX_KEYSLOT_MANAGER_H
#define __LINUX_KEYSLOT_MANAGER_H

/**
 * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
 * @keyslot_program:    Program the specified key and algorithm into the
 *                      specified slot in the inline encryption hardware.
 * @keyslot_evict:      Evict key from the specified keyslot in the hardware.
 *                      The key, crypto_mode and data_unit_size are also passed
 *                      down so that e.g. dm layers can evict keys from
 *                      the devices that they map over.
 *                      Returns 0 on success, -errno otherwise.
 * @crypto_mode_supported:      Check whether a crypto_mode and data_unit_size
 *                              combo is supported.
 * @keyslot_find:       Returns the slot number that matches the key,
 *                      or -ENOKEY if no match found, or -errno on
 *                      error.
 *
 * This structure should be provided by storage device drivers when they set up
 * a keyslot manager - this structure holds the function ptrs that the keyslot
 * manager will use to manipulate keyslots in the hardware.
 */
struct keyslot_mgmt_ll_ops {
        int (*keyslot_program)(void *ll_priv_data, const u8 *key,
                               enum blk_crypto_mode_num crypto_mode,
                               unsigned int data_unit_size,
                               unsigned int slot);
        int (*keyslot_evict)(void *ll_priv_data, const u8 *key,
                             enum blk_crypto_mode_num crypto_mode,
                             unsigned int data_unit_size,
                             unsigned int slot);
        bool (*crypto_mode_supported)(void *ll_priv_data,
                                      enum blk_crypto_mode_num crypto_mode,
                                      unsigned int data_unit_size);
        int (*keyslot_find)(void *ll_priv_data, const u8 *key,
                            enum blk_crypto_mode_num crypto_mode,
                            unsigned int data_unit_size);
};

#ifdef CONFIG_BLK_INLINE_ENCRYPTION
struct keyslot_manager;

extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
                                const struct keyslot_mgmt_ll_ops *ksm_ops,
                                void *ll_priv_data);

extern int
keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
                                 const u8 *key,
                                 enum blk_crypto_mode_num crypto_mode,
                                 unsigned int data_unit_size);

extern void keyslot_manager_get_slot(struct keyslot_manager *ksm,
                                     unsigned int slot);

extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
                                     unsigned int slot);

extern bool
keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
                                      enum blk_crypto_mode_num crypto_mode,
                                      unsigned int data_unit_size);

extern bool
keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
                                         enum blk_crypto_mode_num crypto_mode,
                                         unsigned int data_unit_size);

extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
                                     const u8 *key,
                                     enum blk_crypto_mode_num crypto_mode,
                                     unsigned int data_unit_size);

extern void keyslot_manager_destroy(struct keyslot_manager *ksm);

#else /* CONFIG_BLK_INLINE_ENCRYPTION */

static inline bool
keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
                                         enum blk_crypto_mode_num crypto_mode,
                                         unsigned int data_unit_size)
{
        return false;
}
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

#endif /* __LINUX_KEYSLOT_MANAGER_H */

#endif /* CONFIG_BLOCK */
@ -13,6 +13,9 @@
#include <linux/limits.h>
#include <linux/ioctl.h>
#include <linux/types.h>
#ifndef __KERNEL__
#include <linux/fscrypt.h>
#endif

/*
 * It's silly to have NR_OPEN bigger than NR_FILE, but you can change
@ -258,57 +261,6 @@ struct fsxattr {
#define FS_IOC_GETFSLABEL               _IOR(0x94, 49, char[FSLABEL_MAX])
#define FS_IOC_SETFSLABEL               _IOW(0x94, 50, char[FSLABEL_MAX])

/*
 * File system encryption support
 */
/* Policy provided via an ioctl on the topmost directory */
#define FS_KEY_DESCRIPTOR_SIZE  8

#define FS_POLICY_FLAGS_PAD_4           0x00
#define FS_POLICY_FLAGS_PAD_8           0x01
#define FS_POLICY_FLAGS_PAD_16          0x02
#define FS_POLICY_FLAGS_PAD_32          0x03
#define FS_POLICY_FLAGS_PAD_MASK        0x03
#define FS_POLICY_FLAG_DIRECT_KEY       0x04    /* use master key directly */
#define FS_POLICY_FLAGS_VALID           0x07

/* Encryption algorithms */
#define FS_ENCRYPTION_MODE_INVALID              0
#define FS_ENCRYPTION_MODE_AES_256_XTS          1
#define FS_ENCRYPTION_MODE_AES_256_GCM          2
#define FS_ENCRYPTION_MODE_AES_256_CBC          3
#define FS_ENCRYPTION_MODE_AES_256_CTS          4
#define FS_ENCRYPTION_MODE_AES_128_CBC          5
#define FS_ENCRYPTION_MODE_AES_128_CTS          6
#define FS_ENCRYPTION_MODE_SPECK128_256_XTS     7       /* Removed, do not use. */
#define FS_ENCRYPTION_MODE_SPECK128_256_CTS     8       /* Removed, do not use. */
#define FS_ENCRYPTION_MODE_ADIANTUM             9

struct fscrypt_policy {
        __u8 version;
        __u8 contents_encryption_mode;
        __u8 filenames_encryption_mode;
        __u8 flags;
        __u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
};

#define FS_IOC_SET_ENCRYPTION_POLICY    _IOR('f', 19, struct fscrypt_policy)
#define FS_IOC_GET_ENCRYPTION_PWSALT    _IOW('f', 20, __u8[16])
#define FS_IOC_GET_ENCRYPTION_POLICY    _IOW('f', 21, struct fscrypt_policy)

/* Parameters for passing an encryption key into the kernel keyring */
#define FS_KEY_DESC_PREFIX              "fscrypt:"
#define FS_KEY_DESC_PREFIX_SIZE         8

/* Structure that userspace passes to the kernel keyring */
#define FS_MAX_KEY_SIZE                 64

struct fscrypt_key {
        __u32 mode;
        __u8 raw[FS_MAX_KEY_SIZE];
        __u32 size;
};

/*
 * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
 *
182
include/uapi/linux/fscrypt.h
Normal file
@ -0,0 +1,182 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
 * fscrypt user API
 *
 * These ioctls can be used on filesystems that support fscrypt. See the
 * "User API" section of Documentation/filesystems/fscrypt.rst.
 */
#ifndef _UAPI_LINUX_FSCRYPT_H
#define _UAPI_LINUX_FSCRYPT_H

#include <linux/types.h>

/* Encryption policy flags */
#define FSCRYPT_POLICY_FLAGS_PAD_4              0x00
#define FSCRYPT_POLICY_FLAGS_PAD_8              0x01
#define FSCRYPT_POLICY_FLAGS_PAD_16             0x02
#define FSCRYPT_POLICY_FLAGS_PAD_32             0x03
#define FSCRYPT_POLICY_FLAGS_PAD_MASK           0x03
#define FSCRYPT_POLICY_FLAG_DIRECT_KEY          0x04
#define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64      0x08
#define FSCRYPT_POLICY_FLAGS_VALID              0x0F

/* Encryption algorithms */
#define FSCRYPT_MODE_AES_256_XTS                1
#define FSCRYPT_MODE_AES_256_CTS                4
#define FSCRYPT_MODE_AES_128_CBC                5
#define FSCRYPT_MODE_AES_128_CTS                6
#define FSCRYPT_MODE_ADIANTUM                   9
#define __FSCRYPT_MODE_MAX                      9

/*
 * Legacy policy version; ad-hoc KDF and no key verification.
 * For new encrypted directories, use fscrypt_policy_v2 instead.
 *
 * Careful: the .version field for this is actually 0, not 1.
 */
#define FSCRYPT_POLICY_V1               0
#define FSCRYPT_KEY_DESCRIPTOR_SIZE     8
struct fscrypt_policy_v1 {
        __u8 version;
        __u8 contents_encryption_mode;
        __u8 filenames_encryption_mode;
        __u8 flags;
        __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
};
#define fscrypt_policy  fscrypt_policy_v1

/*
 * Process-subscribed "logon" key description prefix and payload format.
 * Deprecated; prefer FS_IOC_ADD_ENCRYPTION_KEY instead.
 */
#define FSCRYPT_KEY_DESC_PREFIX         "fscrypt:"
#define FSCRYPT_KEY_DESC_PREFIX_SIZE    8
#define FSCRYPT_MAX_KEY_SIZE            64
struct fscrypt_key {
        __u32 mode;
        __u8 raw[FSCRYPT_MAX_KEY_SIZE];
        __u32 size;
};

/*
 * New policy version with HKDF and key verification (recommended).
 */
#define FSCRYPT_POLICY_V2               2
#define FSCRYPT_KEY_IDENTIFIER_SIZE     16
struct fscrypt_policy_v2 {
        __u8 version;
        __u8 contents_encryption_mode;
        __u8 filenames_encryption_mode;
        __u8 flags;
        __u8 __reserved[4];
        __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
};

/* Struct passed to FS_IOC_GET_ENCRYPTION_POLICY_EX */
struct fscrypt_get_policy_ex_arg {
        __u64 policy_size; /* input/output */
        union {
                __u8 version;
                struct fscrypt_policy_v1 v1;
                struct fscrypt_policy_v2 v2;
        } policy; /* output */
};

/*
 * v1 policy keys are specified by an arbitrary 8-byte key "descriptor",
 * matching fscrypt_policy_v1::master_key_descriptor.
 */
#define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR        1

/*
 * v2 policy keys are specified by a 16-byte key "identifier" which the kernel
 * calculates as a cryptographic hash of the key itself,
 * matching fscrypt_policy_v2::master_key_identifier.
 */
#define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER        2

/*
 * Specifies a key, either for v1 or v2 policies. This doesn't contain the
 * actual key itself; this is just the "name" of the key.
 */
struct fscrypt_key_specifier {
        __u32 type;     /* one of FSCRYPT_KEY_SPEC_TYPE_* */
        __u32 __reserved;
        union {
                __u8 __reserved[32]; /* reserve some extra space */
                __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
                __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
        } u;
};

/* Struct passed to FS_IOC_ADD_ENCRYPTION_KEY */
struct fscrypt_add_key_arg {
        struct fscrypt_key_specifier key_spec;
        __u32 raw_size;
        __u32 __reserved[9];
        __u8 raw[];
};

/* Struct passed to FS_IOC_REMOVE_ENCRYPTION_KEY */
struct fscrypt_remove_key_arg {
        struct fscrypt_key_specifier key_spec;
#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY      0x00000001
#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS     0x00000002
        __u32 removal_status_flags;     /* output */
        __u32 __reserved[5];
};

/* Struct passed to FS_IOC_GET_ENCRYPTION_KEY_STATUS */
struct fscrypt_get_key_status_arg {
        /* input */
        struct fscrypt_key_specifier key_spec;
        __u32 __reserved[6];

        /* output */
#define FSCRYPT_KEY_STATUS_ABSENT               1
#define FSCRYPT_KEY_STATUS_PRESENT              2
#define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED 3
        __u32 status;
#define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF   0x00000001
        __u32 status_flags;
        __u32 user_count;
        __u32 __out_reserved[13];
};

#define FS_IOC_SET_ENCRYPTION_POLICY            _IOR('f', 19, struct fscrypt_policy)
#define FS_IOC_GET_ENCRYPTION_PWSALT            _IOW('f', 20, __u8[16])
#define FS_IOC_GET_ENCRYPTION_POLICY            _IOW('f', 21, struct fscrypt_policy)
#define FS_IOC_GET_ENCRYPTION_POLICY_EX         _IOWR('f', 22, __u8[9]) /* size + version */
#define FS_IOC_ADD_ENCRYPTION_KEY               _IOWR('f', 23, struct fscrypt_add_key_arg)
#define FS_IOC_REMOVE_ENCRYPTION_KEY            _IOWR('f', 24, struct fscrypt_remove_key_arg)
#define FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS  _IOWR('f', 25, struct fscrypt_remove_key_arg)
#define FS_IOC_GET_ENCRYPTION_KEY_STATUS        _IOWR('f', 26, struct fscrypt_get_key_status_arg)

/**********************************************************************/

/* old names; don't add anything new here! */
#ifndef __KERNEL__
#define FS_KEY_DESCRIPTOR_SIZE          FSCRYPT_KEY_DESCRIPTOR_SIZE
#define FS_POLICY_FLAGS_PAD_4           FSCRYPT_POLICY_FLAGS_PAD_4
#define FS_POLICY_FLAGS_PAD_8           FSCRYPT_POLICY_FLAGS_PAD_8
#define FS_POLICY_FLAGS_PAD_16          FSCRYPT_POLICY_FLAGS_PAD_16
#define FS_POLICY_FLAGS_PAD_32          FSCRYPT_POLICY_FLAGS_PAD_32
#define FS_POLICY_FLAGS_PAD_MASK        FSCRYPT_POLICY_FLAGS_PAD_MASK
#define FS_POLICY_FLAG_DIRECT_KEY       FSCRYPT_POLICY_FLAG_DIRECT_KEY
#define FS_POLICY_FLAGS_VALID           FSCRYPT_POLICY_FLAGS_VALID
#define FS_ENCRYPTION_MODE_INVALID      0       /* never used */
#define FS_ENCRYPTION_MODE_AES_256_XTS  FSCRYPT_MODE_AES_256_XTS
#define FS_ENCRYPTION_MODE_AES_256_GCM  2       /* never used */
#define FS_ENCRYPTION_MODE_AES_256_CBC  3       /* never used */
#define FS_ENCRYPTION_MODE_AES_256_CTS  FSCRYPT_MODE_AES_256_CTS
#define FS_ENCRYPTION_MODE_AES_128_CBC  FSCRYPT_MODE_AES_128_CBC
#define FS_ENCRYPTION_MODE_AES_128_CTS  FSCRYPT_MODE_AES_128_CTS
#define FS_ENCRYPTION_MODE_SPECK128_256_XTS     7       /* removed */
#define FS_ENCRYPTION_MODE_SPECK128_256_CTS     8       /* removed */
#define FS_ENCRYPTION_MODE_ADIANTUM     FSCRYPT_MODE_ADIANTUM
#define FS_KEY_DESC_PREFIX              FSCRYPT_KEY_DESC_PREFIX
#define FS_KEY_DESC_PREFIX_SIZE         FSCRYPT_KEY_DESC_PREFIX_SIZE
#define FS_MAX_KEY_SIZE                 FSCRYPT_MAX_KEY_SIZE
#endif /* !__KERNEL__ */

#endif /* _UAPI_LINUX_FSCRYPT_H */