Compare commits

127 commits (SHA1):

b47e3ec018
09009c2aa7
fb21bbf763
0f1afff7a6
48a00fb481
3b7e2ca9c8
5bd7514871
d3c2955fb9
5042f8de93
7536f95d1c
6832e8d629
28313631e5
fdbc9397a7
a3cdf1a6b8
b16d4d1835
b8d43866d2
2d79aeb117
888a5aae7d
323ea2f1d9
9ca554ca41
9d3c656414
379caf8551
2aec7b4c9d
fc52b24503
ff5ffc892f
360f2fea1e
24999d0d32
8e2c0d6407
9a2abba098
b2d860382f
4c4a03ebe1
798dfd240e
c4caa6b81d
1efbfe1175
751c09f47a
6ad46d70b8
f46758d1c5
e71f5358b3
a2c8910006
cab354102d
597d2ccc01
38900d7730
e926bf1fe8
6298ef095f
7d33798b40
d500550212
a166eee909
b11b363e8e
7321db6f28
d6bcde565f
98f4f4e68a
be86e14e05
8c650e53cd
e89e772c7c
feab87053a
751a3653a0
9c08eb79cb
92483fe11b
6928acdeac
0f3af2d20b
c083a7cc90
9eb6591913
9cfea347d0
ce064fb5e0
afcdfc7fd7
051b74c883
2e532afd2b
fdf465851c
669e9a8d1e
f77e1c474e
2f9396e24a
0ebb3df69c
2fff9b9edf
57f879d68a
3626d90004
02f4ff7534
a3871130cc
53cb5635cf
d7841664f4
39544f34b4
4755b97908
cba2501254
6ba99a62ff
d2b7315ba6
337d6c2fd3
2eced46a37
1a80333747
fb61b779a6
5899af6a2f
cbca79b95b
02586cf21f
80a6fa7a01
6b769675fa
e5aa2ce01f
cbc638c6ba
abc1be4872
f1b2d6bbbb
90926d40ad
f12590ed08
05a55e3687
affda26bfa
0bf866fa8d
73727e7fdf
c22c87eded
426dfbf21d
5512777404
cc07f8dac4
f5bb9e576d
2380879376
eca8766545
0b8a5ca923
e06acc1ba2
14f9d82fda
bb509d9317
238ab872ba
3a72f3fb4d
aa1efbc35c
be985b8242
a9f6001f51
5052e15f7f
4bdf632dfa
4db50ddbb7
46f6918179
d93582724d
b4caa483dd
87da137238
b3f626feee
DEVELOP.md (309 lines removed)

@@ -1,309 +0,0 @@
# scrcpy for developers

## Overview

This application is composed of two parts:
 - the server (`scrcpy-server`), to be executed on the device,
 - the client (the `scrcpy` binary), executed on the host computer.

The client is responsible for pushing the server to the device and starting its
execution.

Once the client and the server are connected to each other, the server initially
sends device information (name and initial screen dimensions), then starts to
send a raw H.264 video stream of the device screen. The client decodes the video
frames and displays them as soon as possible, without buffering, to minimize
latency. The client is not aware of the device rotation (which is handled by the
server); it just knows the dimensions of the video frames.

The client captures relevant keyboard and mouse events, which it transmits to the
server, which injects them into the device.


## Server


### Privileges

Capturing the screen requires some privileges, which are granted to `shell`.

The server is a Java application (with a [`public static void main(String...
args)`][main] method), compiled against the Android framework, and executed as
`shell` on the Android device.

[main]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/Server.java#L123

To run such a Java application, the classes must be [_dexed_][dex] (typically,
to `classes.dex`). If `my.package.MainClass` is the main class, compiled to
`classes.dex`, pushed to the device in `/data/local/tmp`, then it can be run
with:

    adb shell CLASSPATH=/data/local/tmp/classes.dex \
        app_process / my.package.MainClass

_The path `/data/local/tmp` is a good candidate to push the server, since it's
readable and writable by `shell`, but not world-writable, so a malicious
application may not replace the server just before the client executes it._

Instead of a raw _dex_ file, `app_process` accepts a _jar_ containing
`classes.dex` (e.g. an [APK]). For simplicity, and to benefit from the gradle
build system, the server is built to an (unsigned) APK (renamed to
`scrcpy-server`).

[dex]: https://en.wikipedia.org/wiki/Dalvik_(software)
[apk]: https://en.wikipedia.org/wiki/Android_application_package
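
For illustration, a minimal entry point that could be dexed and launched this
way might look as follows (the class name is hypothetical, not the actual
scrcpy server entry point):

```java
public final class MainClass {
    public static void main(String... args) {
        // When launched via app_process as described above, this code runs as
        // the `shell` user, so it benefits from the privileges granted to it.
        System.out.println("Running as uid " + android.os.Process.myUid());
    }
}
```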


### Hidden methods

Although compiled against the Android framework, [hidden] methods and classes are
not directly accessible (and they may differ from one Android version to
another).

They can be called using reflection though. The communication with hidden
components is provided by [_wrapper_ classes][wrappers] and [aidl].

[hidden]: https://stackoverflow.com/a/31908373/1987178
[wrappers]: https://github.com/Genymobile/scrcpy/tree/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/wrappers
[aidl]: https://github.com/Genymobile/scrcpy/tree/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/aidl/android/view
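
For illustration only, resolving a hidden static method by reflection looks
roughly like this (a minimal sketch, not the actual wrapper code; error
handling is reduced to the thrown exceptions):

```java
import android.hardware.input.InputManager;
import java.lang.reflect.Method;

public final class HiddenApiSketch {
    public static InputManager getInputManager() throws ReflectiveOperationException {
        // InputManager.getInstance() is annotated @hide, so it is absent from
        // the public SDK stubs and must be resolved at runtime by reflection.
        Method getInstance = InputManager.class.getMethod("getInstance");
        return (InputManager) getInstance.invoke(null);
    }
}
```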


### Threading

The server uses 3 threads:

 - the **main** thread, encoding and streaming the video to the client;
 - the **controller** thread, listening for _control messages_ (typically,
   keyboard and mouse events) from the client;
 - the **receiver** thread (managed by the controller), sending _device messages_
   to the client (currently, it is only used to send the device clipboard
   content).

Since the video encoding is typically performed in hardware, there would be no
benefit in encoding and streaming in two different threads.


### Screen video encoding

The encoding is managed by [`ScreenEncoder`].

The video is encoded using the [`MediaCodec`] API. The codec takes its input
from a [surface] associated with the display, and writes the resulting H.264
stream to the provided output stream (the socket connected to the client).

[`ScreenEncoder`]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java
[`MediaCodec`]: https://developer.android.com/reference/android/media/MediaCodec.html
[surface]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L68-L69

On device [rotation], the codec, surface and display are reinitialized, and a
new video stream is produced.

New frames are produced only when changes occur on the surface. This is good
because it avoids sending unnecessary frames, but there are drawbacks:

 - it does not send any frame on start if the device screen does not change,
 - after fast motion changes, the last frame may have poor quality.

Both problems are [solved][repeat] by the flag
[`KEY_REPEAT_PREVIOUS_FRAME_AFTER`][repeat-flag].

[rotation]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L90
[repeat]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L147-L148
[repeat-flag]: https://developer.android.com/reference/android/media/MediaFormat.html#KEY_REPEAT_PREVIOUS_FRAME_AFTER
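
As an illustration of this setup, a simplified encoder sketch might look as
follows. This is not the actual `ScreenEncoder`: the part that mirrors the
display onto the input surface relies on hidden APIs and is omitted, and the
bit-rate, frame rate and repeat delay values are arbitrary.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public final class SurfaceEncoderSketch {
    public static void encode(OutputStream out, int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 60);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10);
        // Resend the previous frame if nothing changed for 100ms (in microseconds)
        format.setLong(MediaFormat.KEY_REPEAT_PREVIOUS_FRAME_AFTER, 100_000);

        MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface surface = codec.createInputSurface();
        // ...mirroring the display onto `surface` requires hidden APIs, omitted here...
        codec.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            // Blocks until an encoded packet (or a codec event) is available
            int index = codec.dequeueOutputBuffer(info, -1);
            if (index >= 0) {
                ByteBuffer buffer = codec.getOutputBuffer(index);
                byte[] packet = new byte[info.size];
                buffer.get(packet);
                out.write(packet); // in scrcpy, this is the socket to the client
                codec.releaseOutputBuffer(index, false);
            }
            // The loop ends only once end-of-stream has been signalled elsewhere
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break;
            }
        }
        codec.stop();
        codec.release();
        surface.release();
    }
}
```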


### Input events injection

_Control messages_ are received from the client by the [`Controller`] (run in a
separate thread). There are several types of input events:
 - keycode (cf. [`KeyEvent`]),
 - text (special characters may not be handled by keycodes directly),
 - mouse motion/click,
 - mouse scroll,
 - other commands (e.g. to switch the screen on or to copy the clipboard).

Some of them need to inject input events into the system. To do so, they use the
_hidden_ method [`InputManager.injectInputEvent`] (exposed by our
[`InputManager` wrapper][inject-wrapper]).

[`Controller`]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/Controller.java#L81
[`KeyEvent`]: https://developer.android.com/reference/android/view/KeyEvent.html
[`MotionEvent`]: https://developer.android.com/reference/android/view/MotionEvent.html
[`InputManager.injectInputEvent`]: https://android.googlesource.com/platform/frameworks/base/+/oreo-release/core/java/android/hardware/input/InputManager.java#857
[inject-wrapper]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/wrappers/InputManager.java#L27
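
Building on the reflection sketch above, injecting a keycode could look roughly
like this (an illustration, not the actual wrapper; the asynchronous injection
mode constant `0` is an assumption taken from AOSP sources):

```java
import android.hardware.input.InputManager;
import android.os.SystemClock;
import android.view.InputEvent;
import android.view.KeyEvent;

import java.lang.reflect.Method;

public final class InjectSketch {
    private static final int INJECT_MODE_ASYNC = 0; // hidden constant, value from AOSP

    public static boolean injectKeycode(InputManager im, int keyCode, int action)
            throws ReflectiveOperationException {
        long now = SystemClock.uptimeMillis();
        KeyEvent event = new KeyEvent(now, now, action, keyCode, 0 /* repeat */);
        // injectInputEvent(InputEvent, int) is hidden, so it is called by reflection
        Method inject = InputManager.class
                .getMethod("injectInputEvent", InputEvent.class, int.class);
        return (Boolean) inject.invoke(im, event, INJECT_MODE_ASYNC);
    }
}
```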



## Client

The client relies on [SDL], which provides a cross-platform API for UI, input
events, threading, etc.

The video stream is decoded by [libav] (FFmpeg).

[SDL]: https://www.libsdl.org
[libav]: https://www.libav.org/

### Initialization

On startup, in addition to _libav_ and _SDL_ initialization, the client must
push and start the server on the device, and open two sockets (one for the video
stream, one for control) so that they may communicate.

Note that the client-server roles are expressed at the application level:

 - the server _serves_ the video stream and handles requests from the client,
 - the client _controls_ the device through the server.

However, the roles are reversed at the network level:

 - the client opens a server socket and listens on a port before starting the
   server,
 - the server connects to the client.

This role inversion guarantees that the connection will not fail due to race
conditions, and avoids polling.

_(Note that over TCP/IP, the roles are not reversed, due to a bug in `adb
reverse`. See commit [1038bad] and [issue #5].)_

Once the server is connected, it sends the device information (name and initial
screen dimensions). Thus, the client may initialize the window and renderer before
the first frame is available.

To minimize startup time, SDL initialization is performed while listening for
the connection from the server (see commit [90a46b4]).

[1038bad]: https://github.com/Genymobile/scrcpy/commit/1038bad3850f18717a048a4d5c0f8110e54ee172
[issue #5]: https://github.com/Genymobile/scrcpy/issues/5
[90a46b4]: https://github.com/Genymobile/scrcpy/commit/90a46b4c45637d083e877020d85ade52a9a5fa8e
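
The listen-then-start ordering can be sketched as follows (Java is used here
only for consistency with the other examples; the real client is written in C
and the device-side connection actually goes through the adb tunnel):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public final class TunnelSketch {
    // Host side: listen *before* starting the server on the device, so that the
    // server's connection attempt cannot happen while nobody is listening yet.
    public static Socket acceptFromDevice(int port) throws IOException {
        try (ServerSocket listener = new ServerSocket(port)) {
            return listener.accept(); // blocks until the device-side server connects
        }
    }

    // Device side: once started, simply connect to the forwarded port.
    public static Socket connectToClient(int port) throws IOException {
        return new Socket("localhost", port);
    }
}
```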


### Threading

The client uses 4 threads:

 - the **main** thread, executing the SDL event loop,
 - the **stream** thread, receiving the video and used for decoding and
   recording,
 - the **controller** thread, sending _control messages_ to the server,
 - the **receiver** thread (managed by the controller), receiving _device
   messages_ from the server.

In addition, another thread can be started if necessary to handle APK
installation or file push requests (via drag&drop on the main window) or to
print the framerate regularly in the console.


### Stream

The video [stream] is received from the socket (connected to the server on the
device) in a separate thread.

If a [decoder] is present (i.e. `--no-display` is not set), then it uses _libav_
to decode the H.264 stream from the socket, and notifies the main thread when a
new frame is available.

There are two [frames][video_buffer] simultaneously in memory:
 - the **decoding** frame, written by the decoder from the decoder thread,
 - the **rendering** frame, rendered in a texture from the main thread.

When a new decoded frame is available, the decoder _swaps_ the decoding and
rendering frames (with proper synchronization). Thus, it immediately starts
to decode a new frame while the main thread renders the last one.
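
The swap can be sketched like this (again in Java for consistency with the
other examples; the actual implementation is the C `video_buffer`, protected by
a mutex):

```java
public final class VideoBufferSketch<F> {
    private F decodingFrame;
    private F renderingFrame;
    private boolean renderingFrameConsumed = true;

    public VideoBufferSketch(F first, F second) {
        decodingFrame = first;
        renderingFrame = second;
    }

    // Decoder thread: publish the freshly decoded frame, get the next target slot
    public synchronized F offerDecodedFrame() {
        F tmp = decodingFrame;
        decodingFrame = renderingFrame;
        renderingFrame = tmp;
        renderingFrameConsumed = false;
        return decodingFrame; // decode the next frame into the other slot
    }

    // Main thread: take the latest frame to render it into a texture
    public synchronized F consumeRenderingFrame() {
        renderingFrameConsumed = true;
        return renderingFrame;
    }
}
```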

If a [recorder] is present (i.e. `--record` is enabled), then it muxes the raw
H.264 packets to the output video file.

[stream]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/stream.h
[decoder]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/decoder.h
[video_buffer]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/video_buffer.h
[recorder]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/recorder.h

```
                          +----------+      +----------+
                      --->|  decoder |----->|  screen  |
             +---------+ /+----------+      +----------+
socket ----->| stream  |-+
             +---------+ \+----------+
                      --->| recorder |
                          +----------+
```

### Controller

The [controller] is responsible for sending _control messages_ to the device. It
runs in a separate thread, to avoid I/O on the main thread.

On SDL events, received on the main thread, the [input manager][inputmanager]
creates appropriate [_control messages_][controlmsg]. It is responsible for
converting SDL events to Android events (using [convert]). It pushes the _control
messages_ to a queue held by the controller. On its own thread, the controller
takes messages from the queue, serializes them and sends them to the device.

[controller]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/controller.h
[controlmsg]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/control_msg.h
[inputmanager]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/input_manager.h
[convert]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/convert.h
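
This producer/consumer hand-off can be sketched as follows (Java for
consistency; the real controller is written in C, with its own queue protected
by a mutex and a condition variable):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class ControllerSketch {
    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(64);

    // Main thread (input manager): enqueue an already-serialized control message
    public void push(byte[] serializedMsg) throws InterruptedException {
        queue.put(serializedMsg);
    }

    // Controller thread: drain the queue and write each message to the socket
    public void run(OutputStream socketOut) throws InterruptedException, IOException {
        while (true) {
            byte[] msg = queue.take(); // blocks until a message is available
            socketOut.write(msg);
            socketOut.flush();
        }
    }
}
```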


### UI and event loop

Initialization, input events and rendering are all [managed][scrcpy] in the main
thread.

Events are handled in the [event loop], which either updates the [screen] or
delegates to the [input manager][inputmanager].

[scrcpy]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/scrcpy.c
[event loop]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/scrcpy.c#L201
[screen]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/screen.h


## Hack

For more details, go read the code!

If you find a bug, or have an awesome idea to implement, please discuss and
contribute ;-)


### Debug the server

The server is pushed to the device by the client on startup.

To debug it, enable the server debugger during configuration:

```bash
meson setup x -Dserver_debugger=true
# or, if x is already configured
meson configure x -Dserver_debugger=true
```

If your device runs Android 8 or below, set the `server_debugger_method` to
`old` in addition:

```bash
meson setup x -Dserver_debugger=true -Dserver_debugger_method=old
# or, if x is already configured
meson configure x -Dserver_debugger=true -Dserver_debugger_method=old
```

Then recompile.

When you start scrcpy, it will start a debugger on port 5005 on the device.
Redirect that port to the computer:

```bash
adb forward tcp:5005 tcp:5005
```

In Android Studio, _Run_ > _Debug_ > _Edit configurations..._ On the left, click on
`+`, _Remote_, and fill the form:

 - Host: `localhost`
 - Port: `5005`

Then click on _Debug_.
FAQ.md (143 lines changed)

@@ -7,7 +7,7 @@ Here are the common reported problems and their status.
If you encounter any error, the first step is to upgrade to the latest version.


## `adb` issues
## `adb` and USB issues

`scrcpy` executes `adb` commands to initialize the connection with the device. If
`adb` fails, then scrcpy will not work.

@@ -133,6 +133,21 @@ Try with another USB cable or plug it into another USB port. See [#281] and

[#283]: https://github.com/Genymobile/scrcpy/issues/283


## HID/OTG issues on Windows

On Windows, if `scrcpy --otg` (or `--hid-keyboard`/`--hid-mouse`) results in:

> ERROR: Could not find any USB device

(or if only unrelated USB devices are detected), there might be driver issues.

Please read [#3654], in particular [this comment][#3654-comment1] and [the next
one][#3654-comment2].

[#3654]: https://github.com/Genymobile/scrcpy/issues/3654
[#3654-comment1]: https://github.com/Genymobile/scrcpy/issues/3654#issuecomment-1369278232
[#3654-comment2]: https://github.com/Genymobile/scrcpy/issues/3654#issuecomment-1369295011


## Control issues

@@ -153,8 +168,7 @@ The default text injection method is [limited to ASCII characters][text-input].
A trick allows to also inject some [accented characters][accented-characters],
but that's all. See [#37].

Since scrcpy v1.20 on Linux, it is possible to simulate a [physical
keyboard][hid] (HID).
Since scrcpy v1.20, it is possible to simulate a [physical keyboard][hid] (HID).

[text-input]: https://github.com/Genymobile/scrcpy/issues?q=is%3Aopen+is%3Aissue+label%3Aunicode
[accented-characters]: https://blog.rom1v.com/2018/03/introducing-scrcpy/#handle-accented-characters
@@ -164,32 +178,6 @@ keyboard][hid] (HID).

## Client issues

### The quality is low

If the definition of your client window is smaller than that of your device
screen, then you might get poor quality, especially visible on text (see [#40]).

[#40]: https://github.com/Genymobile/scrcpy/issues/40

This problem should be fixed in scrcpy v1.22: **update to the latest version**.

On older versions, you must configure the [scaling behavior]:

> `scrcpy.exe` > Properties > Compatibility > Change high DPI settings >
> Override high DPI scaling behavior > Scaling performed by: _Application_.

[scaling behavior]: https://github.com/Genymobile/scrcpy/issues/40#issuecomment-424466723

Also, to improve downscaling quality, trilinear filtering is enabled
automatically if the renderer is OpenGL and if it supports mipmapping.

On Windows, you might want to force OpenGL to enable mipmapping:

```
scrcpy --render-driver=opengl
```


### Issue with Wayland

By default, SDL uses x11 on Linux. The [video driver] can be changed via the
@@ -224,102 +212,15 @@ As a workaround, [disable "Block compositing"][kwin].

### Exception

There may be many reasons. One common cause is that the hardware encoder of your
device is not able to encode at the given definition:

> ```
> ERROR: Exception on thread Thread[main,5,main]
> android.media.MediaCodec$CodecException: Error 0xfffffc0e
> ...
> Exit due to uncaughtException in main thread:
> ERROR: Could not open video stream
> INFO: Initial texture: 1080x2336
> ```

or

> ```
> ERROR: Exception on thread Thread[main,5,main]
> java.lang.IllegalStateException
>     at android.media.MediaCodec.native_dequeueOutputBuffer(Native Method)
> ```

Just try with a lower definition:
If you get any exception related to `MediaCodec`:

```
scrcpy -m 1920
scrcpy -m 1024
scrcpy -m 800
ERROR: Exception on thread Thread[main,5,main]
java.lang.IllegalStateException
    at android.media.MediaCodec.native_dequeueOutputBuffer(Native Method)
```

Since scrcpy v1.22, scrcpy automatically tries again with a lower definition
before failing. This behavior can be disabled with `--no-downsize-on-error`.

You could also try another [encoder](README.md#encoder).


If you encounter this exception on Android 12, then just upgrade to scrcpy >=
1.18 (see [#2129]):

```
> ERROR: Exception on thread Thread[main,5,main]
java.lang.AssertionError: java.lang.reflect.InvocationTargetException
    at com.genymobile.scrcpy.wrappers.SurfaceControl.setDisplaySurface(SurfaceControl.java:75)
...
Caused by: java.lang.reflect.InvocationTargetException
    at java.lang.reflect.Method.invoke(Native Method)
    at com.genymobile.scrcpy.wrappers.SurfaceControl.setDisplaySurface(SurfaceControl.java:73)
    ... 7 more
Caused by: java.lang.IllegalArgumentException: displayToken must not be null
    at android.view.SurfaceControl$Transaction.setDisplaySurface(SurfaceControl.java:3067)
    at android.view.SurfaceControl.setDisplaySurface(SurfaceControl.java:2147)
    ... 9 more
```

[#2129]: https://github.com/Genymobile/scrcpy/issues/2129


## Command line on Windows

Since v1.22, a "shortcut" has been added to directly open a terminal in the
scrcpy directory. Double-click on `open_a_terminal_here.bat`, then type your
command. For example:

```
scrcpy --record file.mkv
```

You could also open a terminal and go to the scrcpy folder manually:

1. Press <kbd>Windows</kbd>+<kbd>r</kbd>, this opens a dialog box.
2. Type `cmd` and press <kbd>Enter</kbd>, this opens a terminal.
3. Go to your _scrcpy_ directory, by typing (adapt the path):

   ```bat
   cd C:\Users\user\Downloads\scrcpy-win64-xxx
   ```

   and press <kbd>Enter</kbd>
4. Type your command. For example:

   ```bat
   scrcpy --record file.mkv
   ```

If you plan to always use the same arguments, create a file `myscrcpy.bat`
(enable [show file extensions] to avoid confusion) in the `scrcpy` directory,
containing your command. For example:

```bat
scrcpy --prefer-text --turn-screen-off --stay-awake
```

Then just double-click on that file.

You could also edit (a copy of) `scrcpy-console.bat` or `scrcpy-noconsole.vbs`
to add some arguments.

[show file extensions]: https://www.howtogeek.com/205086/beginner-how-to-make-windows-show-file-extensions/
then try with another [encoder](doc/video.md#codec).


## Translations
LICENSE (2 lines changed)

@@ -188,7 +188,7 @@

identification within third-party archives.

Copyright (C) 2018 Genymobile
Copyright (C) 2018-2022 Romain Vimont
Copyright (C) 2018-2023 Romain Vimont

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -3,9 +3,12 @@ _scrcpy() {
local opts="
--always-on-top
--audio-bit-rate=
--audio-buffer=
--audio-codec=
--audio-codec-options=
--audio-encoder=
--audio-source=
--audio-output-buffer=
-b --video-bit-rate=
--crop=
-d --select-usb
@@ -13,52 +16,57 @@ _scrcpy() {
--display=
--display-buffer=
-e --select-tcpip
-f --fullscreen
--force-adb-forward
--forward-all-clicks
-f --fullscreen
-K --hid-keyboard
-h --help
--kill-adb-on-close
-K --hid-keyboard
--legacy-paste
--list-displays
--list-encoders
--lock-video-orientation
--lock-video-orientation=
--max-fps=
-M --hid-mouse
-m --max-size=
-M --hid-mouse
--max-fps=
-n --no-control
-N --no-playback
--no-audio
--no-audio-playback
--no-cleanup
--no-clipboard-autosync
--no-downsize-on-error
-n --no-control
-N --no-display
--no-key-repeat
--no-mipmaps
--no-power-on
--no-video
--no-video-playback
--otg
-p --port=
--power-off-on-close
--prefer-text
--print-fps
--push-target=
--raw-key-events
-r --record=
--raw-key-events
--record-format=
--render-driver=
--require-audio
--rotation=
-s --serial=
--shortcut-mod=
-S --turn-screen-off
--shortcut-mod=
-t --show-touches
--tcpip
--tcpip=
--time-limit=
--tunnel-host=
--tunnel-port=
--v4l2-buffer=
--v4l2-sink=
-V --verbosity=
-v --version
-V --verbosity=
--video-codec=
--video-codec-options=
--video-encoder=
@@ -81,6 +89,10 @@ _scrcpy() {
COMPREPLY=($(compgen -W 'opus aac raw' -- "$cur"))
return
;;
--audio-source)
COMPREPLY=($(compgen -W 'output mic' -- "$cur"))
return
;;
--lock-video-orientation)
COMPREPLY=($(compgen -W 'unlocked initial 0 1 2 3' -- "$cur"))
return
@@ -115,20 +127,26 @@ _scrcpy() {
COMPREPLY=($(compgen -W "$("${ADB:-adb}" devices | awk '$2 == "device" {print $1}')" -- ${cur}))
return
;;
-b|--video-bit-rate \
|--codec-options \
--audio-bit-rate \
|--audio-buffer \
|-b|--video-bit-rate \
|--audio-codec-options \
|--audio-encoder \
|--audio-output-buffer \
|--crop \
|--display \
|--display-buffer \
|--encoder \
|--max-fps \
|-m|--max-size \
|-p|--port \
|--push-target \
|--rotation \
|--tunnel-host \
|--tunnel-port \
|--v4l2-buffer \
|--v4l2-sink \
|--video-codec-options \
|--video-encoder \
|--tcpip \
|--window-*)
# Option accepting an argument, but nothing to auto-complete
@@ -5,7 +5,7 @@ Comment=Display and control your Android device
# For some users, the PATH or ADB environment variables are set from the shell
# startup file, like .bashrc or .zshrc… Run an interactive shell to get
# environment correctly initialized.
Exec=/bin/bash --norc --noprofile -i -c '"$SHELL" -i -c scrcpy || read -p "Press any key to quit..."'
Exec=/bin/bash --norc --noprofile -i -c "\"\\$SHELL\" -i -c scrcpy || read -p 'Press Enter to quit...'"
Icon=scrcpy
Terminal=true
Type=Application

@@ -5,7 +5,7 @@ Comment=Display and control your Android device
# For some users, the PATH or ADB environment variables are set from the shell
# startup file, like .bashrc or .zshrc… Run an interactive shell to get
# environment correctly initialized.
Exec=/bin/sh -c '"$SHELL" -i -c scrcpy'
Exec=/bin/sh -c "\"\\$SHELL\" -i -c scrcpy"
Icon=scrcpy
Terminal=false
Type=Application
@@ -10,9 +10,12 @@ local arguments
arguments=(
'--always-on-top[Make scrcpy window always on top \(above other windows\)]'
'--audio-bit-rate=[Encode the audio at the given bit-rate]'
'--audio-buffer=[Configure the audio buffering delay (in milliseconds)]'
'--audio-codec=[Select the audio codec]:codec:(opus aac raw)'
'--audio-codec-options=[Set a list of comma-separated key\:type=value options for the device audio encoder]'
'--audio-encoder=[Use a specific MediaCodec audio encoder]'
'--audio-source=[Select the audio source]:source:(output mic)'
'--audio-output-buffer=[Configure the size of the SDL audio output buffer (in milliseconds)]'
{-b,--video-bit-rate=}'[Encode the video at the given bit-rate]'
'--crop=[\[width\:height\:x\:y\] Crop the device screen on the server]'
{-d,--select-usb}'[Use USB device]'
@@ -20,50 +23,55 @@ arguments=(
'--display=[Specify the display id to mirror]'
'--display-buffer=[Add a buffering delay \(in milliseconds\) before displaying]'
{-e,--select-tcpip}'[Use TCP/IP device]'
{-f,--fullscreen}'[Start in fullscreen]'
'--force-adb-forward[Do not attempt to use \"adb reverse\" to connect to the device]'
'--forward-all-clicks[Forward clicks to device]'
{-f,--fullscreen}'[Start in fullscreen]'
{-K,--hid-keyboard}'[Simulate a physical keyboard by using HID over AOAv2]'
{-h,--help}'[Print the help]'
'--kill-adb-on-close[Kill adb when scrcpy terminates]'
{-K,--hid-keyboard}'[Simulate a physical keyboard by using HID over AOAv2]'
'--legacy-paste[Inject computer clipboard text as a sequence of key events on Ctrl+v]'
'--list-displays[List displays available on the device]'
'--list-encoders[List video and audio encoders available on the device]'
'--lock-video-orientation=[Lock video orientation]:orientation:(unlocked initial 0 1 2 3)'
'--max-fps=[Limit the frame rate of screen capture]'
{-M,--hid-mouse}'[Simulate a physical mouse by using HID over AOAv2]'
{-m,--max-size=}'[Limit both the width and height of the video to value]'
{-M,--hid-mouse}'[Simulate a physical mouse by using HID over AOAv2]'
'--max-fps=[Limit the frame rate of screen capture]'
{-n,--no-control}'[Disable device control \(mirror the device in read only\)]'
{-N,--no-playback}'[Disable video and audio playback]'
'--no-audio[Disable audio forwarding]'
'--no-audio-playback[Disable audio playback]'
'--no-cleanup[Disable device cleanup actions on exit]'
'--no-clipboard-autosync[Disable automatic clipboard synchronization]'
'--no-downsize-on-error[Disable lowering definition on MediaCodec error]'
{-n,--no-control}'[Disable device control \(mirror the device in read only\)]'
{-N,--no-display}'[Do not display device \(during screen recording or when V4L2 sink is enabled\)]'
'--no-key-repeat[Do not forward repeated key events when a key is held down]'
'--no-mipmaps[Disable the generation of mipmaps]'
'--no-power-on[Do not power on the device on start]'
'--no-video[Disable video forwarding]'
'--no-video-playback[Disable video playback]'
'--otg[Run in OTG mode \(simulating physical keyboard and mouse\)]'
{-p,--port=}'[\[port\[\:port\]\] Set the TCP port \(range\) used by the client to listen]'
'--power-off-on-close[Turn the device screen off when closing scrcpy]'
'--prefer-text[Inject alpha characters and space as text events instead of key events]'
'--print-fps[Start FPS counter, to print frame logs to the console]'
'--push-target=[Set the target directory for pushing files to the device by drag and drop]'
'--raw-key-events[Inject key events for all input keys, and ignore text events]'
{-r,--record=}'[Record screen to file]:record file:_files'
'--raw-key-events[Inject key events for all input keys, and ignore text events]'
'--record-format=[Force recording format]:format:(mp4 mkv)'
'--render-driver=[Request SDL to use the given render driver]:driver name:(direct3d opengl opengles2 opengles metal software)'
'--require-audio=[Make scrcpy fail if audio is enabled but does not work]'
'--rotation=[Set the initial display rotation]:rotation values:(0 1 2 3)'
{-s,--serial=}'[The device serial number \(mandatory for multiple devices only\)]:serial:($("${ADB-adb}" devices | awk '\''$2 == "device" {print $1}'\''))'
'--shortcut-mod=[\[key1,key2+key3,...\] Specify the modifiers to use for scrcpy shortcuts]:shortcut mod:(lctrl rctrl lalt ralt lsuper rsuper)'
{-S,--turn-screen-off}'[Turn the device screen off immediately]'
'--shortcut-mod=[\[key1,key2+key3,...\] Specify the modifiers to use for scrcpy shortcuts]:shortcut mod:(lctrl rctrl lalt ralt lsuper rsuper)'
{-t,--show-touches}'[Show physical touches]'
'--tcpip[\(optional \[ip\:port\]\) Configure and connect the device over TCP/IP]'
'--time-limit=[Set the maximum mirroring time, in seconds]'
'--tunnel-host=[Set the IP address of the adb tunnel to reach the scrcpy server]'
'--tunnel-port=[Set the TCP port of the adb tunnel to reach the scrcpy server]'
'--v4l2-buffer=[Add a buffering delay \(in milliseconds\) before pushing frames]'
'--v4l2-sink=[\[\/dev\/videoN\] Output to v4l2loopback device]'
{-V,--verbosity=}'[Set the log level]:verbosity:(verbose debug info warn error)'
{-v,--version}'[Print the version of scrcpy]'
{-V,--verbosity=}'[Set the log level]:verbosity:(verbose debug info warn error)'
'--video-codec=[Select the video codec]:codec:(h264 h265 av1)'
'--video-codec-options=[Set a list of comma-separated key\:type=value options for the device video encoder]'
'--video-encoder=[Use a specific MediaCodec video encoder]'
@@ -14,6 +14,7 @@ src = [
'src/delay_buffer.c',
'src/demuxer.c',
'src/device_msg.c',
'src/display.c',
'src/icon.c',
'src/file_pusher.c',
'src/fps_counter.c',
@@ -50,6 +51,7 @@ src = [
'src/util/term.c',
'src/util/thread.c',
'src/util/tick.c',
'src/util/timeout.c',
]

conf = configuration_data()
@@ -277,10 +279,6 @@ if get_option('buildtype') == 'debug'
'src/util/strbuf.c',
'src/util/term.c',
]],
['test_clock', [
'tests/test_clock.c',
'src/clock.c',
]],
['test_control_msg_serialize', [
'tests/test_control_msg_serialize.c',
'src/control_msg.c',
@@ -310,7 +308,8 @@ if get_option('buildtype') == 'debug'
]

foreach t : tests
exe = executable(t[0], t[1],
sources = t[1] + ['src/compat.c']
exe = executable(t[0], sources,
include_directories: src_dir,
dependencies: dependencies,
c_args: ['-DSDL_MAIN_HANDLED', '-DSC_TEST'])
@@ -6,10 +6,10 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"

DEP_DIR=platform-tools-33.0.3
DEP_DIR=platform-tools-34.0.1

FILENAME=platform-tools_r33.0.3-windows.zip
SHA256SUM=1e59afd40a74c5c0eab0a9fad3f0faf8a674267106e0b19921be9f67081808c2
FILENAME=platform-tools_r34.0.1-windows.zip
SHA256SUM=5dd9c2be744c224fa3a7cbe30ba02d2cb378c763bd0f797a7e47e9f3156a5daa

if [[ -d "$DEP_DIR" ]]
then

@@ -6,11 +6,11 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"

VERSION=6.0-scrcpy-2
VERSION=6.0-scrcpy-4
DEP_DIR="ffmpeg-$VERSION"

FILENAME="$DEP_DIR".7z
SHA256SUM=98ef97f8607c97a5c4f9c5a0a991b78f105d002a3619145011d16ffb92501b14
SHA256SUM=39274b321491ce83e76cab5d24e7cbe3f402d3ccf382f739b13be5651c146b60

if [[ -d "$DEP_DIR" ]]
then

@@ -6,10 +6,10 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"

DEP_DIR=SDL2-2.26.1
DEP_DIR=SDL2-2.28.0

FILENAME=SDL2-devel-2.26.1-mingw.tar.gz
SHA256SUM=aa43e1531a89551f9f9e14b27953a81d4ac946a9e574b5813cd0f2b36e83cc1c
FILENAME=SDL2-devel-2.28.0-mingw.tar.gz
SHA256SUM=b91ce59eeacd4a9db403f976fd2337d9360b21ada374124417d716065c380e20

if [[ -d "$DEP_DIR" ]]
then

@@ -13,7 +13,7 @@ BEGIN
VALUE "LegalCopyright", "Romain Vimont, Genymobile"
VALUE "OriginalFilename", "scrcpy.exe"
VALUE "ProductName", "scrcpy"
VALUE "ProductVersion", "1.25"
VALUE "ProductVersion", "2.0"
END
END
BLOCK "VarFileInfo"
app/scrcpy.1 (116 lines changed)

@@ -55,6 +55,20 @@ Use a specific MediaCodec audio encoder (depending on the codec provided by \fB\

The available encoders can be listed by \-\-list\-encoders.

.TP
.BI "\-\-audio\-source " source
Select the audio source (output or mic).

Default is output.

.TP
.BI "\-\-audio\-output\-buffer " ms
Configure the size of the SDL audio output buffer (in milliseconds).

If you get "robotic" audio playback, you should test with a higher value (10). Do not change this setting otherwise.

Default is 5.

.TP
.BI "\-b, \-\-video\-bit\-rate " value
Encode the video at the given bit\-rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
@@ -99,6 +113,10 @@ Use TCP/IP device (if there is exactly one, like adb -e).

Also see \fB\-d\fR (\fB\-\-select\-usb\fR).

.TP
.B \-f, \-\-fullscreen
Start in fullscreen.

.TP
.B \-\-force\-adb\-forward
Do not attempt to use "adb reverse" to connect to the device.
@@ -107,14 +125,14 @@ Do not attempt to use "adb reverse" to connect to the device.
.B \-\-forward\-all\-clicks
By default, right-click triggers BACK (or POWER on) and middle-click triggers HOME. This option disables these shortcuts and forwards the clicks to the device instead.

.TP
.B \-f, \-\-fullscreen
Start in fullscreen.

.TP
.B \-h, \-\-help
Print this help.

.TP
.B \-\-kill\-adb\-on\-close
Kill adb when scrcpy terminates.

.TP
.B \-K, \-\-hid\-keyboard
Simulate a physical keyboard by using HID over AOAv2.
@@ -153,10 +171,6 @@ Default is "unlocked".

Passing the option without argument is equivalent to passing "initial".

.TP
.BI "\-\-max\-fps " value
Limit the framerate of screen capture (officially supported since Android 10, but may work on earlier versions).

.TP
.BI "\-m, \-\-max\-size " value
Limit both the width and height of the video to \fIvalue\fR. The other dimension is computed so that the device aspect\-ratio is preserved.
@@ -175,6 +189,26 @@ It may only work over USB.

Also see \fB\-\-hid\-keyboard\fR.

.TP
.BI "\-\-max\-fps " value
Limit the framerate of screen capture (officially supported since Android 10, but may work on earlier versions).

.TP
.B \-n, \-\-no\-control
Disable device control (mirror the device in read\-only).

.TP
.B \-N, \-\-no\-playback
Disable video and audio playback on the computer (equivalent to --no-video-playback --no-audio-playback).

.TP
.B \-\-no\-audio
Disable audio forwarding.

.TP
.B \-\-no\-audio\-playback
Disable audio playback on the computer.

.TP
.B \-\-no\-cleanup
By default, scrcpy removes the server binary from the device and restores the device state (show touches, stay awake and power mode) on exit.
@@ -193,14 +227,6 @@ By default, on MediaCodec error, scrcpy automatically tries again with a lower d

This option disables this behavior.

.TP
.B \-n, \-\-no\-control
Disable device control (mirror the device in read\-only).

.TP
.B \-N, \-\-no\-display
Do not display device (only when screen recording is enabled).

.TP
.B \-\-no\-key\-repeat
Do not forward repeated key events when a key is held down.
@@ -213,6 +239,14 @@ If the renderer is OpenGL 3.0+ or OpenGL ES 2.0+, then mipmaps are automatically
.B \-\-no\-power\-on
Do not power on the device on start.

.TP
.B \-\-no\-video
Disable video forwarding.

.TP
.B \-\-no\-video\-playback
Disable video playback on the computer.

.TP
.B \-\-otg
Run in OTG mode: simulate physical keyboard and mouse, as if the computer keyboard and mouse were plugged directly to the device via an OTG cable.
@@ -254,10 +288,6 @@ Set the target directory for pushing files to the device by drag & drop. It is p

Default is "/sdcard/Download/".

.TP
.B \-\-raw\-key\-events
Inject key events for all input keys, and ignore text events.

.TP
.BI "\-r, \-\-record " file
Record screen to
@@ -267,6 +297,10 @@ The format is determined by the
.B \-\-record\-format
option if set, or by the file extension (.mp4 or .mkv).

.TP
.B \-\-raw\-key\-events
Inject key events for all input keys, and ignore text events.

.TP
.BI "\-\-record\-format " format
Force recording format (either mp4 or mkv).
@@ -292,6 +326,10 @@ Set the initial display rotation. Possibles values are 0, 1, 2 and 3. Each incre
.BI "\-s, \-\-serial " number
The device serial number. Mandatory only if several devices are connected to adb.

.TP
.B \-S, \-\-turn\-screen\-off
Turn the device screen off immediately.

.TP
.BI "\-\-shortcut\-mod " key\fR[+...]][,...]
Specify the modifiers to use for scrcpy shortcuts. Possible keys are "lctrl", "rctrl", "lalt", "ralt", "lsuper" and "rsuper".
@@ -302,6 +340,12 @@ For example, to use either LCtrl+LAlt or LSuper for scrcpy shortcuts, pass "lctr

Default is "lalt,lsuper" (left-Alt or left-Super).

.TP
.B \-t, \-\-show\-touches
Enable "show touches" on start, restore the initial value on exit.

It only shows physical touches (not clicks from scrcpy).

.TP
.BI "\-\-tcpip\fR[=\fIip\fR[:\fIport\fR]]
Configure and reconnect the device over TCP/IP.
@@ -311,14 +355,8 @@ If a destination address is provided, then scrcpy connects to this address befor

If no destination address is provided, then scrcpy attempts to find the IP address and adb port of the current device (typically connected over USB), enables TCP/IP mode if necessary, then connects to this address before starting.

.TP
.B \-S, \-\-turn\-screen\-off
Turn the device screen off immediately.

.TP
.B \-t, \-\-show\-touches
Enable "show touches" on start, restore the initial value on exit.

It only shows physical touches (not clicks from scrcpy).
.BI "\-\-time\-limit " seconds
Set the maximum mirroring time, in seconds.

.TP
.BI "\-\-tunnel\-host " ip
@@ -332,6 +370,16 @@ Set the TCP port of the adb tunnel to reach the scrcpy server. This option autom

Default is 0 (not forced): the local port used for establishing the tunnel will be used.

.TP
.B \-v, \-\-version
Print the version of scrcpy.

.TP
.BI "\-V, \-\-verbosity " value
Set the log level ("verbose", "debug", "info", "warn" or "error").

Default is "info" for release builds, "debug" for debug builds.

.TP
.BI "\-\-v4l2-sink " /dev/videoN
Output to v4l2loopback device.
@@ -346,16 +394,6 @@ This option is similar to \fB\-\-display\-buffer\fR, but specific to V4L2 sink.

Default is 0 (no buffering).

.TP
.BI "\-V, \-\-verbosity " value
Set the log level ("verbose", "debug", "info", "warn" or "error").

Default is "info" for release builds, "debug" for debug builds.

.TP
.B \-v, \-\-version
Print the version of scrcpy.

.TP
.BI "\-\-video\-codec " name
Select a video codec (h264, h265 or av1).
@@ -571,7 +609,7 @@ Copyright \(co 2018 Genymobile
Genymobile
.UE

Copyright \(co 2018\-2022
Copyright \(co 2018\-2023
.MT rom@rom1v.com
Romain Vimont
.ME
@@ -204,6 +204,7 @@ sc_adb_parse_device_ip(char *str) {
while (str[idx_line] != '\0') {
char *line = &str[idx_line];
size_t len = strcspn(line, "\n");
bool is_last_line = line[len] == '\0';

// The same, but without any trailing '\r'
size_t line_len = sc_str_remove_trailing_cr(line, len);
@@ -215,12 +216,12 @@ sc_adb_parse_device_ip(char *str) {
return ip;
}

idx_line += len;

if (str[idx_line] != '\0') {
// The next line starts after the '\n'
++idx_line;
if (is_last_line) {
break;
}

// The next line starts after the '\n'
idx_line += len + 1;
}

return NULL;
@ -1,55 +1,90 @@
|
||||
#include "audio_player.h"
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavutil/opt.h>
|
||||
|
||||
#include "util/log.h"
|
||||
|
||||
#define SC_AUDIO_PLAYER_NDEBUG // comment to debug
|
||||
|
||||
/**
|
||||
* Real-time audio player with configurable latency
|
||||
*
|
||||
* As input, the player regularly receives AVFrames of decoded audio samples.
|
||||
* As output, an SDL callback regularly requests audio samples to be played.
|
||||
* In the middle, an audio buffer stores the samples produced but not consumed
|
||||
* yet.
|
||||
*
|
||||
* The goal of the player is to feed the audio output with a latency as low as
|
||||
* possible while avoiding buffer underrun (i.e. not being able to provide
|
||||
* samples when requested).
|
||||
*
|
||||
* The player aims to feed the audio output with as little latency as possible
|
||||
* while avoiding buffer underrun. To achieve this, it attempts to maintain the
|
||||
* average buffering (the number of samples present in the buffer) around a
|
||||
* target value. If this target buffering is too low, then buffer underrun will
|
||||
* occur frequently. If it is too high, then latency will become unacceptable.
|
||||
* This target value is configured using the scrcpy option --audio-buffer.
|
||||
*
|
||||
* The player cannot adjust the sample input rate (it receives samples produced
|
||||
* in real-time) or the sample output rate (it must provide samples as
|
||||
* requested by the audio output callback). Therefore, it may only apply
|
||||
* compensation by resampling (converting _m_ input samples to _n_ output
|
||||
* samples).
|
||||
*
|
||||
* The compensation itself is applied by libswresample (FFmpeg). It is
|
||||
* configured using swr_set_compensation(). An important work for the player
|
||||
* is to estimate the compensation value regularly and apply it.
|
||||
*
|
||||
* The estimated buffering level is the result of averaging the "natural"
|
||||
* buffering (samples are produced and consumed by blocks, so it must be
|
||||
* smoothed), and making instant adjustments resulting of its own actions
|
||||
* (explicit compensation and silence insertion on underflow), which are not
|
||||
* smoothed.
|
||||
*
|
||||
* Buffer underflow events can occur when packets arrive too late. In that case,
|
||||
* the player inserts silence. Once the packets finally arrive (late), one
|
||||
* strategy could be to drop the samples that were replaced by silence, in
|
||||
* order to keep a minimal latency. However, dropping samples in case of buffer
|
||||
* underflow is inadvisable, as it would temporarily increase the underflow
|
||||
* even more and cause very noticeable audio glitches.
|
||||
*
|
||||
* Therefore, the player doesn't drop any sample on underflow. The compensation
|
||||
* mechanism will absorb the delay introduced by the inserted silence.
|
||||
*/
|
||||
|
||||
/** Downcast frame_sink to sc_audio_player */
|
||||
#define DOWNCAST(SINK) container_of(SINK, struct sc_audio_player, frame_sink)
|
||||
|
||||
#define SC_AV_SAMPLE_FMT AV_SAMPLE_FMT_FLT
|
||||
#define SC_SDL_SAMPLE_FMT AUDIO_F32
|
||||
|
||||
#define SC_AUDIO_OUTPUT_BUFFER_SAMPLES 240 // 5ms at 48000Hz
|
||||
|
||||
static inline uint32_t
|
||||
bytes_to_samples(struct sc_audio_player *ap, size_t bytes) {
|
||||
assert(bytes % (ap->nb_channels * ap->out_bytes_per_sample) == 0);
|
||||
return bytes / (ap->nb_channels * ap->out_bytes_per_sample);
|
||||
}
|
||||
|
||||
static inline size_t
|
||||
samples_to_bytes(struct sc_audio_player *ap, uint32_t samples) {
|
||||
return samples * ap->nb_channels * ap->out_bytes_per_sample;
|
||||
}
|
||||
#define TO_BYTES(SAMPLES) sc_audiobuf_to_bytes(&ap->buf, (SAMPLES))
|
||||
#define TO_SAMPLES(BYTES) sc_audiobuf_to_samples(&ap->buf, (BYTES))
|
||||
|
||||
static void SDLCALL
|
||||
sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
|
||||
struct sc_audio_player *ap = userdata;
|
||||
|
||||
// This callback is called with the lock used by SDL_AudioDeviceLock(), so
|
||||
// the bytebuf is protected
|
||||
// the audiobuf is protected
|
||||
|
||||
assert(len_int > 0);
|
||||
size_t len = len_int;
|
||||
uint32_t count = TO_SAMPLES(len);
|
||||
|
||||
#ifndef SC_AUDIO_PLAYER_NDEBUG
|
||||
LOGD("[Audio] SDL callback requests %" PRIu32 " samples",
|
||||
bytes_to_samples(ap, len));
|
||||
LOGD("[Audio] SDL callback requests %" PRIu32 " samples", count);
|
||||
#endif
|
||||
|
||||
size_t read_avail = sc_bytebuf_read_available(&ap->buf);
|
||||
uint32_t buffered_samples = sc_audiobuf_can_read(&ap->buf);
|
||||
if (!ap->played) {
|
||||
uint32_t buffered_samples = bytes_to_samples(ap, read_avail);
|
||||
|
||||
// Part of the buffering is handled by inserting initial silence. The
|
||||
// remaining (margin) last samples will be handled by compensation.
|
||||
uint32_t margin = 30 * ap->sample_rate / 1000; // 30ms
|
||||
if (buffered_samples + margin < ap->target_buffering) {
|
||||
LOGV("[Audio] Inserting initial buffering silence: %" PRIu32
|
||||
" samples", bytes_to_samples(ap, len));
|
||||
" samples", count);
|
||||
// Delay playback starting to reach the target buffering. Fill the
|
||||
// whole buffer with silence (len is small compared to the
|
||||
// arbitrary margin value).
|
||||
@ -58,26 +93,25 @@ sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
|
||||
}
|
||||
}
|
||||
|
||||
size_t read = MIN(read_avail, len);
|
||||
uint32_t read = MIN(buffered_samples, count);
|
||||
if (read) {
|
||||
sc_bytebuf_read(&ap->buf, stream, read);
|
||||
sc_audiobuf_read(&ap->buf, stream, read);
|
||||
}
|
||||
|
||||
if (read < len) {
|
||||
size_t silence_bytes = len - read;
|
||||
uint32_t silence_samples = bytes_to_samples(ap, silence_bytes);
|
||||
if (read < count) {
|
||||
uint32_t silence = count - read;
|
||||
// Insert silence. In theory, the inserted silent samples replace the
|
||||
// missing real samples, which will arrive later, so they should be
|
||||
// dropped to keep the latency minimal. However, this would cause very
|
||||
// audible glitches, so let the clock compensation restore the target
|
||||
// latency.
|
||||
LOGD("[Audio] Buffer underflow, inserting silence: %" PRIu32 " samples",
|
||||
silence_samples);
|
||||
memset(stream + read, 0, silence_bytes);
|
||||
silence);
|
||||
memset(stream + TO_BYTES(read), 0, TO_BYTES(silence));
|
||||
|
||||
if (ap->received) {
|
||||
// Inserting additional samples immediately increases buffering
|
||||
ap->avg_buffering.avg += silence_samples;
|
||||
ap->underflow += silence;
|
||||
}
|
||||
}
|
||||
|
||||
@ -86,7 +120,7 @@ sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
|
||||
|
||||
static uint8_t *
|
||||
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, uint32_t min_samples) {
|
||||
size_t min_buf_size = samples_to_bytes(ap, min_samples);
|
||||
size_t min_buf_size = TO_BYTES(min_samples);
|
||||
if (min_buf_size > ap->swr_buf_alloc_size) {
|
||||
size_t new_size = min_buf_size + 4096;
|
||||
uint8_t *buf = realloc(ap->swr_buf, new_size);
|
||||
@ -129,7 +163,6 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
|
||||
// swr_convert() returns the number of samples which would have been
|
||||
// written if the buffer was big enough.
|
||||
uint32_t samples_written = MIN(ret, dst_nb_samples);
|
||||
size_t swr_buf_size = samples_to_bytes(ap, samples_written);
|
||||
#ifndef SC_AUDIO_PLAYER_NDEBUG
|
||||
LOGD("[Audio] %" PRIu32 " samples written to buffer", samples_written);
|
||||
#endif
|
||||
@ -137,46 +170,40 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
|
||||
// Since this function is the only writer, the current available space is
|
||||
// at least the previous available space. In practice, it should almost
|
||||
// always be possible to write without lock.
|
||||
bool lockless_write = swr_buf_size <= ap->previous_write_avail;
|
||||
bool lockless_write = samples_written <= ap->previous_can_write;
|
||||
if (lockless_write) {
|
||||
sc_bytebuf_prepare_write(&ap->buf, swr_buf, swr_buf_size);
|
||||
sc_audiobuf_prepare_write(&ap->buf, swr_buf, samples_written);
|
||||
}
|
||||
|
||||
SDL_LockAudioDevice(ap->device);
|
||||
|
||||
size_t read_avail = sc_bytebuf_read_available(&ap->buf);
|
||||
uint32_t buffered_samples = bytes_to_samples(ap, read_avail);
|
||||
uint32_t buffered_samples = sc_audiobuf_can_read(&ap->buf);
|
||||
|
||||
if (lockless_write) {
|
||||
sc_bytebuf_commit_write(&ap->buf, swr_buf_size);
|
||||
sc_audiobuf_commit_write(&ap->buf, samples_written);
|
||||
} else {
|
||||
// Take care to keep full samples
|
||||
size_t align = ap->nb_channels * ap->out_bytes_per_sample;
|
||||
size_t write_avail =
|
||||
sc_bytebuf_write_available(&ap->buf) / align * align;
|
||||
if (swr_buf_size > write_avail) {
|
||||
// Entering this branch is very unlikely, the ring-buffer (bytebuf)
|
||||
// is allocated with a size sufficient to store 1 second more than
|
||||
// the target buffering. If this happens, though, we have to skip
|
||||
// old samples.
|
||||
size_t cap = sc_bytebuf_capacity(&ap->buf) / align * align;
|
||||
if (swr_buf_size > cap) {
|
||||
uint32_t can_write = sc_audiobuf_can_write(&ap->buf);
|
||||
if (samples_written > can_write) {
|
||||
// Entering this branch is very unlikely, the audio buffer is
|
||||
// allocated with a size sufficient to store 1 second more than the
|
||||
// target buffering. If this happens, though, we have to skip old
|
||||
// samples.
|
||||
uint32_t cap = sc_audiobuf_capacity(&ap->buf);
|
||||
if (samples_written > cap) {
|
||||
// Very very unlikely: a single resampled frame should never
|
||||
// exceed the ring-buffer size (or something is very wrong).
|
||||
// exceed the audio buffer size (or something is very wrong).
|
||||
// Ignore the first bytes in swr_buf
|
||||
swr_buf += swr_buf_size - cap;
|
||||
swr_buf_size = cap;
|
||||
swr_buf += TO_BYTES(samples_written - cap);
|
||||
// This change in samples_written will impact the
|
||||
// instant_compensation below
|
||||
samples_written -= bytes_to_samples(ap, swr_buf_size - cap);
|
||||
samples_written = cap;
|
||||
}
|
||||
|
||||
assert(swr_buf_size >= write_avail);
|
||||
if (swr_buf_size > write_avail) {
|
||||
sc_bytebuf_skip(&ap->buf, swr_buf_size - write_avail);
|
||||
uint32_t skip_samples =
|
||||
bytes_to_samples(ap, swr_buf_size - write_avail);
|
||||
assert(samples_written >= can_write);
|
||||
if (samples_written > can_write) {
|
||||
uint32_t skip_samples = samples_written - can_write;
|
||||
assert(buffered_samples >= skip_samples);
|
||||
sc_audiobuf_skip(&ap->buf, skip_samples);
|
||||
buffered_samples -= skip_samples;
|
||||
if (ap->played) {
|
||||
// Dropping input samples instantly decreases buffering
|
||||
@ -186,39 +213,62 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
|
||||
|
||||
// It should remain exactly the expected size to write the new
|
||||
// samples.
|
||||
assert((sc_bytebuf_write_available(&ap->buf) / align * align)
|
||||
== swr_buf_size);
|
||||
assert(sc_audiobuf_can_write(&ap->buf) == samples_written);
|
||||
}
|
||||
|
||||
sc_bytebuf_write(&ap->buf, swr_buf, swr_buf_size);
|
||||
sc_audiobuf_write(&ap->buf, swr_buf, samples_written);
|
||||
}
|
||||
|
||||
buffered_samples += samples_written;
|
||||
assert(samples_to_bytes(ap, buffered_samples)
|
||||
== sc_bytebuf_read_available(&ap->buf));
|
||||
assert(buffered_samples == sc_audiobuf_can_read(&ap->buf));
|
||||
|
||||
// Read with lock held, to be used after unlocking
|
||||
bool played = ap->played;
|
||||
uint32_t underflow = ap->underflow;
|
||||
|
||||
if (played) {
uint32_t max_buffered_samples = ap->target_buffering
+ 12 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES
+ 12 * ap->output_buffer
+ ap->target_buffering / 10;
if (buffered_samples > max_buffered_samples) {
uint32_t skip_samples = buffered_samples - max_buffered_samples;
size_t skip_bytes = samples_to_bytes(ap, skip_samples);
sc_bytebuf_skip(&ap->buf, skip_bytes);
#ifndef SC_AUDIO_PLAYER_NDEBUG
sc_audiobuf_skip(&ap->buf, skip_samples);
LOGD("[Audio] Buffering threshold exceeded, skipping %" PRIu32
" samples", skip_samples);
#endif
}

// reset (the current value was copied to a local variable)
ap->underflow = 0;
} else {
// SDL playback not started yet, do not accumulate more than
// max_initial_buffering samples, this would cause unnecessary delay
// (and glitches to compensate) on start.
uint32_t max_initial_buffering = ap->target_buffering
+ 2 * ap->output_buffer;
if (buffered_samples > max_initial_buffering) {
uint32_t skip_samples = buffered_samples - max_initial_buffering;
sc_audiobuf_skip(&ap->buf, skip_samples);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Playback not started, skipping %" PRIu32 " samples",
skip_samples);
#endif
}
}

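As a worked example of the two thresholds above, with assumed typical values rather than anything mandated by scrcpy: at 48 kHz with a 50 ms target buffering (2400 samples) and a 5 ms SDL output buffer (240 samples), the steady-state drop limit is 2400 + 12 × 240 + 240 = 5520 samples (about 115 ms), while before playback starts no more than 2400 + 2 × 240 = 2880 samples (60 ms) are kept.
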
ap->previous_can_write = sc_audiobuf_can_write(&ap->buf);
ap->received = true;

SDL_UnlockAudioDevice(ap->device);

if (played) {
// Number of samples added (or removed, if negative) for compensation
int32_t instant_compensation =
(int32_t) samples_written - frame->nb_samples;
int32_t inserted_silence = (int32_t) underflow;

// The compensation must apply instantly, it must not be smoothed
ap->avg_buffering.avg += instant_compensation;
ap->avg_buffering.avg += instant_compensation + inserted_silence;

// However, the buffering level must be smoothed
sc_average_push(&ap->avg_buffering, buffered_samples);
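
A short worked example of this bookkeeping, with hypothetical numbers: if the input frame carried 960 samples but the resampler emitted 957 (it is currently removing samples), and the SDL callback inserted 120 samples of silence since the last frame, then instant_compensation = 957 − 960 = −3 and inserted_silence = 120, so the average buffering is adjusted by +117 immediately. Both effects change the buffering level by a known amount, which is why they are applied to the average directly instead of being smoothed in like the measured level.
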
@ -227,29 +277,7 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
|
||||
LOGD("[Audio] buffered_samples=%" PRIu32 " avg_buffering=%f",
|
||||
buffered_samples, sc_average_get(&ap->avg_buffering));
|
||||
#endif
|
||||
} else {
|
||||
// SDL playback not started yet, do not accumulate more than
|
||||
// max_initial_buffering samples, this would cause unnecessary delay
|
||||
// (and glitches to compensate) on start.
|
||||
uint32_t max_initial_buffering = ap->target_buffering
|
||||
+ 2 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES;
|
||||
if (buffered_samples > max_initial_buffering) {
|
||||
uint32_t skip_samples = buffered_samples - max_initial_buffering;
|
||||
size_t skip_bytes = samples_to_bytes(ap, skip_samples);
|
||||
sc_bytebuf_skip(&ap->buf, skip_bytes);
|
||||
#ifndef SC_AUDIO_PLAYER_NDEBUG
|
||||
LOGD("[Audio] Playback not started, skipping %" PRIu32 " samples",
|
||||
skip_samples);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
||||
ap->previous_write_avail = sc_bytebuf_write_available(&ap->buf);
|
||||
ap->received = true;
|
||||
|
||||
SDL_UnlockAudioDevice(ap->device);
|
||||
|
||||
if (played) {
|
||||
ap->samples_since_resync += samples_written;
|
||||
if (ap->samples_since_resync >= ap->sample_rate) {
|
||||
// Recompute compensation every second
|
||||
@ -257,7 +285,10 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,

float avg = sc_average_get(&ap->avg_buffering);
int diff = ap->target_buffering - avg;
if (diff < 0 && buffered_samples < ap->target_buffering) {
if (abs(diff) < (int) ap->sample_rate / 1000) {
// Do not compensate for less than 1ms, the error is just noise
diff = 0;
} else if (diff < 0 && buffered_samples < ap->target_buffering) {
// Do not accelerate if the instant buffering level is below
// the average, this would increase underflow
diff = 0;
@ -271,10 +302,15 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
LOGV("[Audio] Buffering: target=%" PRIu32 " avg=%f cur=%" PRIu32
" compensation=%d", ap->target_buffering, avg,
buffered_samples, diff);
int ret = swr_set_compensation(swr_ctx, diff, distance);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal

if (diff != ap->compensation) {
int ret = swr_set_compensation(swr_ctx, diff, distance);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
} else {
ap->compensation = diff;
}
}
}
}
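
For context, `swr_set_compensation()` asks libswresample to add or remove `sample_delta` samples spread over `compensation_distance` output samples, which nudges the effective playback rate instead of dropping samples abruptly. The numbers below are purely illustrative; they are not scrcpy's actual buffering parameters, and the one-second distance is an assumption:

```c
#include <stdio.h>

int main(void) {
    int sample_rate = 48000;
    int target = 50 * sample_rate / 1000;  // 50 ms target -> 2400 samples
    float avg = 2640.0f;                   // assumed smoothed buffering level
    int diff = target - (int) avg;         // -240: consume slightly faster
    int distance = sample_rate;            // assumed: spread over 1 second
    printf("swr_set_compensation(swr_ctx, %d, %d)\n", diff, distance);
    return 0;
}
```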
@ -295,11 +331,28 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
unsigned nb_channels = tmp;
#endif

assert(ctx->sample_rate > 0);
assert(!av_sample_fmt_is_planar(SC_AV_SAMPLE_FMT));
int out_bytes_per_sample = av_get_bytes_per_sample(SC_AV_SAMPLE_FMT);
assert(out_bytes_per_sample > 0);

ap->sample_rate = ctx->sample_rate;
ap->nb_channels = nb_channels;
ap->out_bytes_per_sample = out_bytes_per_sample;

ap->target_buffering = ap->target_buffering_delay * ap->sample_rate
/ SC_TICK_FREQ;

uint64_t aout_samples = ap->output_buffer_duration * ap->sample_rate
/ SC_TICK_FREQ;
assert(aout_samples <= 0xFFFF);
ap->output_buffer = (uint16_t) aout_samples;

SDL_AudioSpec desired = {
.freq = ctx->sample_rate,
.format = SC_SDL_SAMPLE_FMT,
.channels = nb_channels,
.samples = SC_AUDIO_OUTPUT_BUFFER_SAMPLES,
.samples = aout_samples,
.callback = sc_audio_player_sdl_callback,
.userdata = ap,
};
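
The two divisions above convert tick-based durations into sample counts. A standalone sketch, assuming `sc_tick` counts `SC_TICK_FREQ` ticks per second (the concrete value below is an assumption): with the default 5 ms output buffer at 48 kHz this yields 240 samples, well within the `Uint16` range required by `SDL_AudioSpec.samples`.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

typedef int64_t sc_tick;
#define SC_TICK_FREQ 1000000  // assumption: 1 tick = 1 microsecond
#define SC_TICK_FROM_MS(ms) ((sc_tick) (ms) * SC_TICK_FREQ / 1000)

int main(void) {
    uint32_t sample_rate = 48000;
    sc_tick output_buffer_duration = SC_TICK_FROM_MS(5);
    uint64_t aout_samples =
        (uint64_t) output_buffer_duration * sample_rate / SC_TICK_FREQ;
    assert(aout_samples <= 0xFFFF);  // SDL_AudioSpec.samples is a Uint16
    printf("SDL output buffer: %" PRIu64 " samples\n", aout_samples);
    return 0;
}
```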
@ -318,11 +371,6 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
|
||||
}
|
||||
ap->swr_ctx = swr_ctx;
|
||||
|
||||
assert(ctx->sample_rate > 0);
|
||||
assert(!av_sample_fmt_is_planar(SC_AV_SAMPLE_FMT));
|
||||
int out_bytes_per_sample = av_get_bytes_per_sample(SC_AV_SAMPLE_FMT);
|
||||
assert(out_bytes_per_sample > 0);
|
||||
|
||||
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
|
||||
av_opt_set_chlayout(swr_ctx, "in_chlayout", &ctx->ch_layout, 0);
|
||||
av_opt_set_chlayout(swr_ctx, "out_chlayout", &ctx->ch_layout, 0);
|
||||
@ -345,34 +393,27 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
|
||||
goto error_free_swr_ctx;
|
||||
}
|
||||
|
||||
ap->sample_rate = ctx->sample_rate;
|
||||
ap->nb_channels = nb_channels;
|
||||
ap->out_bytes_per_sample = out_bytes_per_sample;
|
||||
|
||||
ap->target_buffering = ap->target_buffering_delay * ap->sample_rate
|
||||
/ SC_TICK_FREQ;
|
||||
|
||||
// Use a ring-buffer of the target buffering size plus 1 second between the
// producer and the consumer. It's too big on purpose, to guarantee that
// the producer and the consumer will be able to access it in parallel
// without locking.
size_t bytebuf_samples = ap->target_buffering + ap->sample_rate;
size_t bytebuf_size = samples_to_bytes(ap, bytebuf_samples);
size_t audiobuf_samples = ap->target_buffering + ap->sample_rate;

bool ok = sc_bytebuf_init(&ap->buf, bytebuf_size);
size_t sample_size = ap->nb_channels * ap->out_bytes_per_sample;
bool ok = sc_audiobuf_init(&ap->buf, sample_size, audiobuf_samples);
if (!ok) {
goto error_free_swr_ctx;
}
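
With the same assumed parameters (48 kHz, 50 ms target buffering) the audio buffer holds 2400 + 48000 = 50400 samples; assuming 16-bit stereo output that is roughly 197 KiB, small enough that over-allocating by a full second is a cheap way to keep the producer and the consumer from contending:

```c
#include <stdio.h>

int main(void) {
    unsigned sample_rate = 48000;
    unsigned target_buffering = sample_rate * 50 / 1000;        // 2400 samples
    unsigned audiobuf_samples = target_buffering + sample_rate; // 50400 samples
    unsigned sample_size = 2 /* channels */ * 2 /* bytes, s16 (assumed) */;
    printf("audio buffer: %u samples, %u bytes\n",
           audiobuf_samples, audiobuf_samples * sample_size);
    return 0;
}
```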
|
||||
size_t initial_swr_buf_size = samples_to_bytes(ap, 4096);
|
||||
size_t initial_swr_buf_size = TO_BYTES(4096);
|
||||
ap->swr_buf = malloc(initial_swr_buf_size);
|
||||
if (!ap->swr_buf) {
|
||||
LOG_OOM();
|
||||
goto error_destroy_bytebuf;
|
||||
goto error_destroy_audiobuf;
|
||||
}
|
||||
ap->swr_buf_alloc_size = initial_swr_buf_size;
|
||||
|
||||
ap->previous_write_avail = sc_bytebuf_write_available(&ap->buf);
|
||||
ap->previous_can_write = sc_audiobuf_can_write(&ap->buf);
|
||||
|
||||
// Samples are produced and consumed by blocks, so the buffering must be
|
||||
// smoothed to get a relatively stable value.
|
||||
@ -381,6 +422,8 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
|
||||
|
||||
ap->received = false;
|
||||
ap->played = false;
|
||||
ap->underflow = 0;
|
||||
ap->compensation = 0;
|
||||
|
||||
// The thread calling open() is the thread calling push(), which fills the
|
||||
// audio buffer consumed by the SDL audio thread.
|
||||
@ -394,8 +437,8 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
|
||||
|
||||
return true;
|
||||
|
||||
error_destroy_bytebuf:
|
||||
sc_bytebuf_destroy(&ap->buf);
|
||||
error_destroy_audiobuf:
|
||||
sc_audiobuf_destroy(&ap->buf);
|
||||
error_free_swr_ctx:
|
||||
swr_free(&ap->swr_ctx);
|
||||
error_close_audio_device:
|
||||
@ -413,13 +456,15 @@ sc_audio_player_frame_sink_close(struct sc_frame_sink *sink) {
|
||||
SDL_CloseAudioDevice(ap->device);
|
||||
|
||||
free(ap->swr_buf);
|
||||
sc_bytebuf_destroy(&ap->buf);
|
||||
sc_audiobuf_destroy(&ap->buf);
|
||||
swr_free(&ap->swr_ctx);
|
||||
}
|
||||
|
||||
void
|
||||
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering) {
|
||||
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering,
|
||||
sc_tick output_buffer_duration) {
|
||||
ap->target_buffering_delay = target_buffering;
|
||||
ap->output_buffer_duration = output_buffer_duration;
|
||||
|
||||
static const struct sc_frame_sink_ops ops = {
|
||||
.open = sc_audio_player_frame_sink_open,
|
||||
|
@ -5,8 +5,8 @@
|
||||
|
||||
#include <stdbool.h>
|
||||
#include "trait/frame_sink.h"
|
||||
#include <util/audiobuf.h>
|
||||
#include <util/average.h>
|
||||
#include <util/bytebuf.h>
|
||||
#include <util/thread.h>
|
||||
#include <util/tick.h>
|
||||
|
||||
@ -27,13 +27,17 @@ struct sc_audio_player {
|
||||
sc_tick target_buffering_delay;
|
||||
uint32_t target_buffering; // in samples
|
||||
|
||||
// SDL audio output buffer size.
|
||||
sc_tick output_buffer_duration;
|
||||
uint16_t output_buffer;
|
||||
|
||||
// Audio buffer to communicate between the receiver and the SDL audio
|
||||
// callback (protected by SDL_AudioDeviceLock())
|
||||
struct sc_bytebuf buf;
|
||||
struct sc_audiobuf buf;
|
||||
|
||||
// The previous number of bytes available in the buffer (only used by the
|
||||
// receiver thread)
|
||||
size_t previous_write_avail;
|
||||
// The previous empty space in the buffer (only used by the receiver
|
||||
// thread)
|
||||
uint32_t previous_can_write;
|
||||
|
||||
// Resampler (only used from the receiver thread)
|
||||
struct SwrContext *swr_ctx;
|
||||
@ -56,6 +60,13 @@ struct sc_audio_player {
|
||||
// (only used by the receiver thread)
|
||||
uint32_t samples_since_resync;
|
||||
|
||||
// Number of silence samples inserted since the last received packet
|
||||
// (protected by SDL_AudioDeviceLock())
|
||||
uint32_t underflow;
|
||||
|
||||
// Current applied compensation value (only used by the receiver thread)
|
||||
int compensation;
|
||||
|
||||
// Set to true the first time a sample is received (protected by
|
||||
// SDL_AudioDeviceLock())
|
||||
bool received;
|
||||
@ -73,6 +84,7 @@ struct sc_audio_player_callbacks {
|
||||
};
|
||||
|
||||
void
|
||||
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering);
|
||||
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering,
|
||||
sc_tick audio_output_buffer);
|
||||
|
||||
#endif
|
||||
|
437
app/src/cli.c
@ -18,8 +18,7 @@
|
||||
#define STR(x) STR_IMPL_(x)
|
||||
|
||||
enum {
|
||||
OPT_RENDER_EXPIRED_FRAMES = 1000,
|
||||
OPT_BIT_RATE,
|
||||
OPT_BIT_RATE = 1000,
|
||||
OPT_WINDOW_TITLE,
|
||||
OPT_PUSH_TARGET,
|
||||
OPT_ALWAYS_ON_TOP,
|
||||
@ -72,6 +71,14 @@ enum {
|
||||
OPT_LIST_DISPLAYS,
|
||||
OPT_REQUIRE_AUDIO,
|
||||
OPT_AUDIO_BUFFER,
|
||||
OPT_AUDIO_OUTPUT_BUFFER,
|
||||
OPT_NO_DISPLAY,
|
||||
OPT_NO_VIDEO,
|
||||
OPT_NO_AUDIO_PLAYBACK,
|
||||
OPT_NO_VIDEO_PLAYBACK,
|
||||
OPT_AUDIO_SOURCE,
|
||||
OPT_KILL_ADB_ON_CLOSE,
|
||||
OPT_TIME_LIMIT,
|
||||
};
|
||||
|
||||
struct sc_option {
|
||||
@ -157,6 +164,23 @@ static const struct sc_option options[] = {
|
||||
"codec provided by --audio-codec).\n"
|
||||
"The available encoders can be listed by --list-encoders.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_AUDIO_SOURCE,
|
||||
.longopt = "audio-source",
|
||||
.argdesc = "source",
|
||||
.text = "Select the audio source (output or mic).\n"
|
||||
"Default is output.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_AUDIO_OUTPUT_BUFFER,
|
||||
.longopt = "audio-output-buffer",
|
||||
.argdesc = "ms",
|
||||
.text = "Configure the size of the SDL audio output buffer (in "
|
||||
"milliseconds).\n"
|
||||
"If you get \"robotic\" audio playback, you should test with "
|
||||
"a higher value (10). Do not change this setting otherwise.\n"
|
||||
"Default is 5.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'b',
|
||||
.longopt = "video-bit-rate",
|
||||
@ -235,6 +259,11 @@ static const struct sc_option options[] = {
|
||||
.longopt = "encoder",
|
||||
.argdesc = "name",
|
||||
},
|
||||
{
|
||||
.shortopt = 'f',
|
||||
.longopt = "fullscreen",
|
||||
.text = "Start in fullscreen.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_FORCE_ADB_FORWARD,
|
||||
.longopt = "force-adb-forward",
|
||||
@ -249,9 +278,14 @@ static const struct sc_option options[] = {
|
||||
"shortcuts and forwards the clicks to the device instead.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'f',
|
||||
.longopt = "fullscreen",
|
||||
.text = "Start in fullscreen.",
|
||||
.shortopt = 'h',
|
||||
.longopt = "help",
|
||||
.text = "Print this help.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_KILL_ADB_ON_CLOSE,
|
||||
.longopt = "kill-adb-on-close",
|
||||
.text = "Kill adb when scrcpy terminates.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'K',
|
||||
@ -270,11 +304,6 @@ static const struct sc_option options[] = {
|
||||
"is enabled (or a physical keyboard is connected).\n"
|
||||
"Also see --hid-mouse.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'h',
|
||||
.longopt = "help",
|
||||
.text = "Print this help.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_LEGACY_PASTE,
|
||||
.longopt = "legacy-paste",
|
||||
@ -308,11 +337,13 @@ static const struct sc_option options[] = {
|
||||
"\"initial\".",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_MAX_FPS,
|
||||
.longopt = "max-fps",
|
||||
.shortopt = 'm',
|
||||
.longopt = "max-size",
|
||||
.argdesc = "value",
|
||||
.text = "Limit the frame rate of screen capture (officially supported "
|
||||
"since Android 10, but may work on earlier versions).",
|
||||
.text = "Limit both the width and height of the video to value. The "
|
||||
"other dimension is computed so that the device aspect-ratio "
|
||||
"is preserved.\n"
|
||||
"Default is 0 (unlimited).",
|
||||
},
|
||||
{
|
||||
.shortopt = 'M',
|
||||
@ -326,19 +357,33 @@ static const struct sc_option options[] = {
|
||||
"Also see --hid-keyboard.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'm',
|
||||
.longopt = "max-size",
|
||||
.longopt_id = OPT_MAX_FPS,
|
||||
.longopt = "max-fps",
|
||||
.argdesc = "value",
|
||||
.text = "Limit both the width and height of the video to value. The "
|
||||
"other dimension is computed so that the device aspect-ratio "
|
||||
"is preserved.\n"
|
||||
"Default is 0 (unlimited).",
|
||||
.text = "Limit the frame rate of screen capture (officially supported "
|
||||
"since Android 10, but may work on earlier versions).",
|
||||
},
|
||||
{
|
||||
.shortopt = 'n',
|
||||
.longopt = "no-control",
|
||||
.text = "Disable device control (mirror the device in read-only).",
|
||||
},
|
||||
{
|
||||
.shortopt = 'N',
|
||||
.longopt = "no-playback",
|
||||
.text = "Disable video and audio playback on the computer (equivalent "
|
||||
"to --no-video-playback --no-audio-playback).",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_AUDIO,
|
||||
.longopt = "no-audio",
|
||||
.text = "Disable audio forwarding.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_AUDIO_PLAYBACK,
|
||||
.longopt = "no-audio-playback",
|
||||
.text = "Disable audio playback on the computer.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_CLEANUP,
|
||||
.longopt = "no-cleanup",
|
||||
@ -364,15 +409,9 @@ static const struct sc_option options[] = {
|
||||
"This option disables this behavior.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'n',
|
||||
.longopt = "no-control",
|
||||
.text = "Disable device control (mirror the device in read-only).",
|
||||
},
|
||||
{
|
||||
.shortopt = 'N',
|
||||
// deprecated
|
||||
.longopt_id = OPT_NO_DISPLAY,
|
||||
.longopt = "no-display",
|
||||
.text = "Do not display device (only when screen recording or V4L2 "
|
||||
"sink is enabled).",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_KEY_REPEAT,
|
||||
@ -391,6 +430,16 @@ static const struct sc_option options[] = {
|
||||
.longopt = "no-power-on",
|
||||
.text = "Do not power on the device on start.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_VIDEO,
|
||||
.longopt = "no-video",
|
||||
.text = "Disable video forwarding.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_NO_VIDEO_PLAYBACK,
|
||||
.longopt = "no-video-playback",
|
||||
.text = "Disable video playback on the computer.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_OTG,
|
||||
.longopt = "otg",
|
||||
@ -442,11 +491,6 @@ static const struct sc_option options[] = {
|
||||
"drag & drop. It is passed as is to \"adb push\".\n"
|
||||
"Default is \"/sdcard/Download/\".",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_RAW_KEY_EVENTS,
|
||||
.longopt = "raw-key-events",
|
||||
.text = "Inject key events for all input keys, and ignore text events."
|
||||
},
|
||||
{
|
||||
.shortopt = 'r',
|
||||
.longopt = "record",
|
||||
@ -455,6 +499,11 @@ static const struct sc_option options[] = {
|
||||
"The format is determined by the --record-format option if "
|
||||
"set, or by the file extension (.mp4 or .mkv).",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_RAW_KEY_EVENTS,
|
||||
.longopt = "raw-key-events",
|
||||
.text = "Inject key events for all input keys, and ignore text events."
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_RECORD_FORMAT,
|
||||
.longopt = "record-format",
|
||||
@ -471,11 +520,6 @@ static const struct sc_option options[] = {
|
||||
"\"opengles2\", \"opengles\", \"metal\" and \"software\".\n"
|
||||
"<https://wiki.libsdl.org/SDL_HINT_RENDER_DRIVER>",
|
||||
},
|
||||
{
|
||||
// deprecated
|
||||
.longopt_id = OPT_RENDER_EXPIRED_FRAMES,
|
||||
.longopt = "render-expired-frames",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_REQUIRE_AUDIO,
|
||||
.longopt = "require-audio",
|
||||
@ -498,6 +542,11 @@ static const struct sc_option options[] = {
|
||||
.text = "The device serial number. Mandatory only if several devices "
|
||||
"are connected to adb.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'S',
|
||||
.longopt = "turn-screen-off",
|
||||
.text = "Turn the device screen off immediately.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_SHORTCUT_MOD,
|
||||
.longopt = "shortcut-mod",
|
||||
@ -511,11 +560,6 @@ static const struct sc_option options[] = {
|
||||
"shortcuts, pass \"lctrl+lalt,lsuper\".\n"
|
||||
"Default is \"lalt,lsuper\" (left-Alt or left-Super).",
|
||||
},
|
||||
{
|
||||
.shortopt = 'S',
|
||||
.longopt = "turn-screen-off",
|
||||
.text = "Turn the device screen off immediately.",
|
||||
},
|
||||
{
|
||||
.shortopt = 't',
|
||||
.longopt = "show-touches",
|
||||
@ -537,6 +581,12 @@ static const struct sc_option options[] = {
|
||||
"connected over USB), enables TCP/IP mode, then connects to "
|
||||
"this address before starting.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_TIME_LIMIT,
|
||||
.longopt = "time-limit",
|
||||
.argdesc = "seconds",
|
||||
.text = "Set the maximum mirroring time, in seconds.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_TUNNEL_HOST,
|
||||
.longopt = "tunnel-host",
|
||||
@ -556,6 +606,22 @@ static const struct sc_option options[] = {
|
||||
"Default is 0 (not forced): the local port used for "
|
||||
"establishing the tunnel will be used.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'v',
|
||||
.longopt = "version",
|
||||
.text = "Print the version of scrcpy.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'V',
|
||||
.longopt = "verbosity",
|
||||
.argdesc = "value",
|
||||
.text = "Set the log level (verbose, debug, info, warn or error).\n"
|
||||
#ifndef NDEBUG
|
||||
"Default is debug.",
|
||||
#else
|
||||
"Default is info.",
|
||||
#endif
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_V4L2_SINK,
|
||||
.longopt = "v4l2-sink",
|
||||
@ -576,22 +642,6 @@ static const struct sc_option options[] = {
|
||||
"Default is 0 (no buffering).\n"
|
||||
"This option is only available on Linux.",
|
||||
},
|
||||
{
|
||||
.shortopt = 'V',
|
||||
.longopt = "verbosity",
|
||||
.argdesc = "value",
|
||||
.text = "Set the log level (verbose, debug, info, warn or error).\n"
|
||||
#ifndef NDEBUG
|
||||
"Default is debug.",
|
||||
#else
|
||||
"Default is info.",
|
||||
#endif
|
||||
},
|
||||
{
|
||||
.shortopt = 'v',
|
||||
.longopt = "version",
|
||||
.text = "Print the version of scrcpy.",
|
||||
},
|
||||
{
|
||||
.longopt_id = OPT_VIDEO_CODEC,
|
||||
.longopt = "video-codec",
|
||||
@ -1210,6 +1260,19 @@ parse_buffering_time(const char *s, sc_tick *tick) {
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_audio_output_buffer(const char *s, sc_tick *tick) {
|
||||
long value;
|
||||
bool ok = parse_integer_arg(s, &value, false, 0, 1000,
|
||||
"audio output buffer");
|
||||
if (!ok) {
|
||||
return false;
|
||||
}
|
||||
|
||||
*tick = SC_TICK_FROM_MS(value);
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_lock_video_orientation(const char *s,
|
||||
enum sc_lock_video_orientation *lock_mode) {
|
||||
@ -1449,18 +1512,39 @@ sc_parse_shortcut_mods(const char *s, struct sc_shortcut_mods *mods) {
|
||||
}
|
||||
#endif
|
||||
|
||||
static enum sc_record_format
|
||||
get_record_format(const char *name) {
|
||||
if (!strcmp(name, "mp4")) {
|
||||
return SC_RECORD_FORMAT_MP4;
|
||||
}
|
||||
if (!strcmp(name, "mkv")) {
|
||||
return SC_RECORD_FORMAT_MKV;
|
||||
}
|
||||
if (!strcmp(name, "m4a")) {
|
||||
return SC_RECORD_FORMAT_M4A;
|
||||
}
|
||||
if (!strcmp(name, "mka")) {
|
||||
return SC_RECORD_FORMAT_MKA;
|
||||
}
|
||||
if (!strcmp(name, "opus")) {
|
||||
return SC_RECORD_FORMAT_OPUS;
|
||||
}
|
||||
if (!strcmp(name, "aac")) {
|
||||
return SC_RECORD_FORMAT_AAC;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_record_format(const char *optarg, enum sc_record_format *format) {
|
||||
if (!strcmp(optarg, "mp4")) {
|
||||
*format = SC_RECORD_FORMAT_MP4;
|
||||
return true;
|
||||
enum sc_record_format fmt = get_record_format(optarg);
|
||||
if (!fmt) {
|
||||
LOGE("Unsupported format: %s (expected mp4 or mkv)", optarg);
|
||||
return false;
|
||||
}
|
||||
if (!strcmp(optarg, "mkv")) {
|
||||
*format = SC_RECORD_FORMAT_MKV;
|
||||
return true;
|
||||
}
|
||||
LOGE("Unsupported format: %s (expected mp4 or mkv)", optarg);
|
||||
return false;
|
||||
|
||||
*format = fmt;
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
|
||||
@ -1480,18 +1564,13 @@ parse_port(const char *optarg, uint16_t *port) {

static enum sc_record_format
guess_record_format(const char *filename) {
size_t len = strlen(filename);
if (len < 4) {
const char *dot = strrchr(filename, '.');
if (!dot) {
return 0;
}
const char *ext = &filename[len - 4];
if (!strcmp(ext, ".mp4")) {
return SC_RECORD_FORMAT_MP4;
}
if (!strcmp(ext, ".mkv")) {
return SC_RECORD_FORMAT_MKV;
}
return 0;

const char *ext = dot + 1;
return get_record_format(ext);
}
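
This rewrite replaces the fixed "last four characters" check with `strrchr()`, so longer extensions such as `.opus` now resolve, and the extension lookup is shared with `--record-format` parsing. A self-contained illustration; the enum values below are arbitrary placeholders, not scrcpy's:

```c
#include <stdio.h>
#include <string.h>

enum fmt { FMT_NONE, FMT_MP4, FMT_MKV, FMT_M4A, FMT_MKA, FMT_OPUS, FMT_AAC };

static enum fmt get_format(const char *name) {
    if (!strcmp(name, "mp4")) return FMT_MP4;
    if (!strcmp(name, "mkv")) return FMT_MKV;
    if (!strcmp(name, "m4a")) return FMT_M4A;
    if (!strcmp(name, "mka")) return FMT_MKA;
    if (!strcmp(name, "opus")) return FMT_OPUS;
    if (!strcmp(name, "aac")) return FMT_AAC;
    return FMT_NONE;
}

static enum fmt guess_format(const char *filename) {
    // strrchr() finds the LAST '.', so "a.b.mkv" resolves to "mkv"
    const char *dot = strrchr(filename, '.');
    return dot ? get_format(dot + 1) : FMT_NONE;
}

int main(void) {
    const char *names[] = {"screen.mp4", "capture.opus", "a.b.mkv", "noext"};
    for (unsigned i = 0; i < 4; ++i) {
        printf("%s -> %d\n", names[i], guess_format(names[i]));
    }
    return 0;
}
```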
|
||||
static bool
|
||||
@ -1530,6 +1609,34 @@ parse_audio_codec(const char *optarg, enum sc_codec *codec) {
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_audio_source(const char *optarg, enum sc_audio_source *source) {
|
||||
if (!strcmp(optarg, "mic")) {
|
||||
*source = SC_AUDIO_SOURCE_MIC;
|
||||
return true;
|
||||
}
|
||||
|
||||
if (!strcmp(optarg, "output")) {
|
||||
*source = SC_AUDIO_SOURCE_OUTPUT;
|
||||
return true;
|
||||
}
|
||||
|
||||
LOGE("Unsupported audio source: %s (expected output or mic)", optarg);
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_time_limit(const char *s, sc_tick *tick) {
|
||||
long value;
|
||||
bool ok = parse_integer_arg(s, &value, false, 0, 0x7FFFFFFF, "time limit");
|
||||
if (!ok) {
|
||||
return false;
|
||||
}
|
||||
|
||||
*tick = SC_TICK_FROM_SEC(value);
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
|
||||
parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
const char *optstring, const struct option *longopts) {
|
||||
@ -1541,8 +1648,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
while ((c = getopt_long(argc, argv, optstring, longopts, NULL)) != -1) {
|
||||
switch (c) {
|
||||
case OPT_BIT_RATE:
|
||||
LOGW("--bit-rate is deprecated, use --video-bit-rate instead.");
|
||||
// fall through
|
||||
LOGE("--bit-rate has been removed, "
|
||||
"use --video-bit-rate or --audio-bit-rate.");
|
||||
return false;
|
||||
case 'b':
|
||||
if (!parse_bit_rate(optarg, &opts->video_bit_rate)) {
|
||||
return false;
|
||||
@ -1570,9 +1678,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
case 'f':
|
||||
opts->fullscreen = true;
|
||||
break;
|
||||
case 'F':
|
||||
LOGW("Deprecated option -F. Use --record-format instead.");
|
||||
// fall through
|
||||
case OPT_RECORD_FORMAT:
|
||||
if (!parse_record_format(optarg, &opts->record_format)) {
|
||||
return false;
|
||||
@ -1626,8 +1731,18 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
case 'n':
|
||||
opts->control = false;
|
||||
break;
|
||||
case OPT_NO_DISPLAY:
|
||||
LOGW("--no-display is deprecated, use --no-playback instead.");
|
||||
// fall through
|
||||
case 'N':
|
||||
opts->display = false;
|
||||
opts->video_playback = false;
|
||||
opts->audio_playback = false;
|
||||
break;
|
||||
case OPT_NO_VIDEO_PLAYBACK:
|
||||
opts->video_playback = false;
|
||||
break;
|
||||
case OPT_NO_AUDIO_PLAYBACK:
|
||||
opts->audio_playback = false;
|
||||
break;
|
||||
case 'p':
|
||||
if (!parse_port_range(optarg, &opts->port_range)) {
|
||||
@ -1660,10 +1775,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
case 'w':
|
||||
opts->stay_awake = true;
|
||||
break;
|
||||
case OPT_RENDER_EXPIRED_FRAMES:
|
||||
LOGW("Option --render-expired-frames has been removed. This "
|
||||
"flag has been ignored.");
|
||||
break;
|
||||
case OPT_WINDOW_TITLE:
|
||||
opts->window_title = optarg;
|
||||
break;
|
||||
@ -1722,9 +1833,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
opts->forward_key_repeat = false;
|
||||
break;
|
||||
case OPT_CODEC_OPTIONS:
|
||||
LOGW("--codec-options is deprecated, use --video-codec-options "
|
||||
"instead.");
|
||||
// fall through
|
||||
LOGE("--codec-options has been removed, "
|
||||
"use --video-codec-options or --audio-codec-options.");
|
||||
return false;
|
||||
case OPT_VIDEO_CODEC_OPTIONS:
|
||||
opts->video_codec_options = optarg;
|
||||
break;
|
||||
@ -1732,8 +1843,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
opts->audio_codec_options = optarg;
|
||||
break;
|
||||
case OPT_ENCODER:
|
||||
LOGW("--encoder is deprecated, use --video-encoder instead.");
|
||||
// fall through
|
||||
LOGE("--encoder has been removed, "
|
||||
"use --video-encoder or --audio-encoder.");
|
||||
return false;
|
||||
case OPT_VIDEO_ENCODER:
|
||||
opts->video_encoder = optarg;
|
||||
break;
|
||||
@ -1775,6 +1887,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
case OPT_NO_DOWNSIZE_ON_ERROR:
|
||||
opts->downsize_on_error = false;
|
||||
break;
|
||||
case OPT_NO_VIDEO:
|
||||
opts->video = false;
|
||||
break;
|
||||
case OPT_NO_AUDIO:
|
||||
opts->audio = false;
|
||||
break;
|
||||
@ -1788,8 +1903,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
opts->start_fps_counter = true;
|
||||
break;
|
||||
case OPT_CODEC:
|
||||
LOGW("--codec is deprecated, use --video-codec instead.");
|
||||
// fall through
|
||||
LOGE("--codec has been removed, "
|
||||
"use --video-codec or --audio-codec.");
|
||||
return false;
|
||||
case OPT_VIDEO_CODEC:
|
||||
if (!parse_video_codec(optarg, &opts->video_codec)) {
|
||||
return false;
|
||||
@ -1824,7 +1940,8 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
}
|
||||
break;
|
||||
#else
|
||||
LOGE("V4L2 (--v4l2-buffer) is only available on Linux.");
|
||||
LOGE("V4L2 (--v4l2-buffer) is disabled (or unsupported on this "
|
||||
"platform).");
|
||||
return false;
|
||||
#endif
|
||||
case OPT_LIST_ENCODERS:
|
||||
@ -1841,6 +1958,25 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
return false;
|
||||
}
|
||||
break;
|
||||
case OPT_AUDIO_OUTPUT_BUFFER:
|
||||
if (!parse_audio_output_buffer(optarg,
|
||||
&opts->audio_output_buffer)) {
|
||||
return false;
|
||||
}
|
||||
break;
|
||||
case OPT_AUDIO_SOURCE:
|
||||
if (!parse_audio_source(optarg, &opts->audio_source)) {
|
||||
return false;
|
||||
}
|
||||
break;
|
||||
case OPT_KILL_ADB_ON_CLOSE:
|
||||
opts->kill_adb_on_close = true;
|
||||
break;
|
||||
case OPT_TIME_LIMIT:
|
||||
if (!parse_time_limit(optarg, &opts->time_limit)) {
|
||||
return false;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
// getopt prints the error message on stderr
|
||||
return false;
|
||||
@ -1869,14 +2005,52 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
return false;
|
||||
}
|
||||
|
||||
bool otg = false;
|
||||
bool v4l2 = false;
|
||||
#ifdef HAVE_USB
|
||||
otg = opts->otg;
|
||||
#endif
|
||||
#ifdef HAVE_V4L2
|
||||
if (!opts->display && !opts->record_filename && !opts->v4l2_device) {
|
||||
LOGE("-N/--no-display requires either screen recording (-r/--record)"
|
||||
" or sink to v4l2loopback device (--v4l2-sink)");
|
||||
v4l2 = !!opts->v4l2_device;
|
||||
#endif
|
||||
|
||||
if (!opts->video) {
|
||||
opts->video_playback = false;
|
||||
}
|
||||
|
||||
if (!opts->audio) {
|
||||
opts->audio_playback = false;
|
||||
}
|
||||
|
||||
if (!opts->video_playback && !otg) {
|
||||
// If video playback is disabled and OTG are disabled, then there is
|
||||
// no way to control the device.
|
||||
opts->control = false;
|
||||
}
|
||||
|
||||
if (opts->video && !opts->video_playback && !opts->record_filename
|
||||
&& !v4l2) {
|
||||
LOGI("No video playback, no recording, no V4L2 sink: video disabled");
|
||||
opts->video = false;
|
||||
}
|
||||
|
||||
if (opts->audio && !opts->audio_playback && !opts->record_filename) {
|
||||
LOGI("No audio playback, no recording: audio disabled");
|
||||
opts->audio = false;
|
||||
}
|
||||
|
||||
if (!opts->video && !opts->audio && !otg) {
|
||||
LOGE("No video, no audio, no OTG: nothing to do");
|
||||
return false;
|
||||
}
|
||||
|
||||
if (opts->v4l2_device) {
|
||||
if (!opts->video && !otg) {
|
||||
// If video is disabled, then scrcpy must exit on audio failure.
|
||||
opts->require_audio = true;
|
||||
}
|
||||
|
||||
#ifdef HAVE_V4L2
|
||||
if (v4l2) {
|
||||
if (opts->lock_video_orientation ==
|
||||
SC_LOCK_VIDEO_ORIENTATION_UNLOCKED) {
|
||||
LOGI("Video orientation is locked for v4l2 sink. "
|
||||
@ -1894,18 +2068,8 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
LOGE("V4L2 buffer value without V4L2 sink\n");
|
||||
return false;
|
||||
}
|
||||
#else
|
||||
if (!opts->display && !opts->record_filename) {
|
||||
LOGE("-N/--no-display requires screen recording (-r/--record)");
|
||||
return false;
|
||||
}
|
||||
#endif
|
||||
|
||||
if (opts->audio && !opts->display && !opts->record_filename) {
|
||||
LOGI("No display and no recording: audio disabled");
|
||||
opts->audio = false;
|
||||
}
|
||||
|
||||
if ((opts->tunnel_host || opts->tunnel_port) && !opts->force_adb_forward) {
|
||||
LOGI("Tunnel host/port is set, "
|
||||
"--force-adb-forward automatically enabled.");
|
||||
@ -1917,19 +2081,41 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
return false;
|
||||
}
|
||||
|
||||
if (opts->record_filename && !opts->record_format) {
|
||||
opts->record_format = guess_record_format(opts->record_filename);
|
||||
if (opts->record_filename) {
|
||||
if (!opts->record_format) {
|
||||
LOGE("No format specified for \"%s\" "
|
||||
"(try with --record-format=mkv)",
|
||||
opts->record_filename);
|
||||
opts->record_format = guess_record_format(opts->record_filename);
|
||||
if (!opts->record_format) {
|
||||
LOGE("No format specified for \"%s\" "
|
||||
"(try with --record-format=mkv)",
|
||||
opts->record_filename);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
if (opts->audio_codec == SC_CODEC_RAW) {
|
||||
LOGW("Recording does not support RAW audio codec");
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
if (opts->record_filename && opts->audio_codec == SC_CODEC_RAW) {
|
||||
LOGW("Recording does not support RAW audio codec");
|
||||
return false;
|
||||
if (opts->video
|
||||
&& sc_record_format_is_audio_only(opts->record_format)) {
|
||||
LOGE("Audio container does not support video stream");
|
||||
return false;
|
||||
}
|
||||
|
||||
if (opts->record_format == SC_RECORD_FORMAT_OPUS
|
||||
&& opts->audio_codec != SC_CODEC_OPUS) {
|
||||
LOGE("Recording to OPUS file requires an OPUS audio stream "
|
||||
"(try with --audio-codec=opus)");
|
||||
return false;
|
||||
}
|
||||
|
||||
if (opts->record_format == SC_RECORD_FORMAT_AAC
|
||||
&& opts->audio_codec != SC_CODEC_AAC) {
|
||||
LOGE("Recording to AAC file requires an AAC audio stream "
|
||||
"(try with --audio-codec=aac)");
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
if (opts->audio_codec == SC_CODEC_RAW) {
|
||||
@ -1963,11 +2149,9 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
}
|
||||
}
|
||||
|
||||
#ifdef HAVE_USB
|
||||
|
||||
# ifdef _WIN32
|
||||
if (!opts->otg && (opts->keyboard_input_mode == SC_KEYBOARD_INPUT_MODE_HID
|
||||
|| opts->mouse_input_mode == SC_MOUSE_INPUT_MODE_HID)) {
|
||||
if (!otg && (opts->keyboard_input_mode == SC_KEYBOARD_INPUT_MODE_HID
|
||||
|| opts->mouse_input_mode == SC_MOUSE_INPUT_MODE_HID)) {
|
||||
LOGE("On Windows, it is not possible to open a USB device already open "
|
||||
"by another process (like adb).");
|
||||
LOGE("Therefore, -K/--hid-keyboard and -M/--hid-mouse may only work in "
|
||||
@ -1976,7 +2160,7 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
}
|
||||
# endif
|
||||
|
||||
if (opts->otg) {
|
||||
if (otg) {
|
||||
// OTG mode is compatible with only very few options.
|
||||
// Only report obvious errors.
|
||||
if (opts->record_filename) {
|
||||
@ -2003,14 +2187,11 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
|
||||
LOGE("OTG mode: could not select display");
|
||||
return false;
|
||||
}
|
||||
# ifdef HAVE_V4L2
|
||||
if (opts->v4l2_device) {
|
||||
if (v4l2) {
|
||||
LOGE("OTG mode: could not sink to V4L2 device");
|
||||
return false;
|
||||
}
|
||||
# endif
|
||||
}
|
||||
#endif
|
||||
|
||||
return true;
|
||||
}
|
||||
|
108
app/src/clock.c
@ -1,116 +1,36 @@
|
||||
#include "clock.h"
|
||||
|
||||
#include <assert.h>
|
||||
|
||||
#include "util/log.h"
|
||||
|
||||
#define SC_CLOCK_NDEBUG // comment to debug
|
||||
|
||||
#define SC_CLOCK_RANGE 32
|
||||
|
||||
void
|
||||
sc_clock_init(struct sc_clock *clock) {
|
||||
clock->count = 0;
|
||||
clock->head = 0;
|
||||
clock->left_sum.system = 0;
|
||||
clock->left_sum.stream = 0;
|
||||
clock->right_sum.system = 0;
|
||||
clock->right_sum.stream = 0;
|
||||
}
|
||||
|
||||
// Estimate the affine function f(stream) = slope * stream + offset
|
||||
static void
|
||||
sc_clock_estimate(struct sc_clock *clock,
|
||||
double *out_slope, sc_tick *out_offset) {
|
||||
assert(clock->count);
|
||||
|
||||
if (clock->count == 1) {
|
||||
// If there is only 1 point, we can't compute a slope. Assume it is 1.
|
||||
struct sc_clock_point *single_point = &clock->right_sum;
|
||||
*out_slope = 1;
|
||||
*out_offset = single_point->system - single_point->stream;
|
||||
return;
|
||||
}
|
||||
|
||||
struct sc_clock_point left_avg = {
|
||||
.system = clock->left_sum.system / (clock->count / 2),
|
||||
.stream = clock->left_sum.stream / (clock->count / 2),
|
||||
};
|
||||
struct sc_clock_point right_avg = {
|
||||
.system = clock->right_sum.system / ((clock->count + 1) / 2),
|
||||
.stream = clock->right_sum.stream / ((clock->count + 1) / 2),
|
||||
};
|
||||
|
||||
double slope = (double) (right_avg.system - left_avg.system)
|
||||
/ (right_avg.stream - left_avg.stream);
|
||||
|
||||
if (clock->count < SC_CLOCK_RANGE) {
|
||||
/* The first frames are typically received and decoded with more delay
|
||||
* than the others, causing a wrong slope estimation on start. To
|
||||
* compensate, assume an initial slope of 1, then progressively use the
|
||||
* estimated slope. */
|
||||
slope = (clock->count * slope + (SC_CLOCK_RANGE - clock->count))
|
||||
/ SC_CLOCK_RANGE;
|
||||
}
|
||||
|
||||
struct sc_clock_point global_avg = {
|
||||
.system = (clock->left_sum.system + clock->right_sum.system)
|
||||
/ clock->count,
|
||||
.stream = (clock->left_sum.stream + clock->right_sum.stream)
|
||||
/ clock->count,
|
||||
};
|
||||
|
||||
sc_tick offset = global_avg.system - (sc_tick) (global_avg.stream * slope);
|
||||
|
||||
*out_slope = slope;
|
||||
*out_offset = offset;
|
||||
clock->range = 0;
|
||||
clock->offset = 0;
|
||||
}
|
||||
|
||||
void
|
||||
sc_clock_update(struct sc_clock *clock, sc_tick system, sc_tick stream) {
|
||||
struct sc_clock_point *point = &clock->points[clock->head];
|
||||
|
||||
if (clock->count == SC_CLOCK_RANGE || clock->count & 1) {
|
||||
// One point passes from the right sum to the left sum
|
||||
|
||||
unsigned mid;
|
||||
if (clock->count == SC_CLOCK_RANGE) {
|
||||
mid = (clock->head + SC_CLOCK_RANGE / 2) % SC_CLOCK_RANGE;
|
||||
} else {
|
||||
// Only for the first frames
|
||||
mid = clock->count / 2;
|
||||
}
|
||||
|
||||
struct sc_clock_point *mid_point = &clock->points[mid];
|
||||
clock->left_sum.system += mid_point->system;
|
||||
clock->left_sum.stream += mid_point->stream;
|
||||
clock->right_sum.system -= mid_point->system;
|
||||
clock->right_sum.stream -= mid_point->stream;
|
||||
if (clock->range < SC_CLOCK_RANGE) {
|
||||
++clock->range;
|
||||
}
|
||||
|
||||
if (clock->count == SC_CLOCK_RANGE) {
|
||||
// The current point overwrites the previous value in the circular
|
||||
// array, update the left sum accordingly
|
||||
clock->left_sum.system -= point->system;
|
||||
clock->left_sum.stream -= point->stream;
|
||||
} else {
|
||||
++clock->count;
|
||||
}
|
||||
|
||||
point->system = system;
|
||||
point->stream = stream;
|
||||
|
||||
clock->right_sum.system += system;
|
||||
clock->right_sum.stream += stream;
|
||||
|
||||
clock->head = (clock->head + 1) % SC_CLOCK_RANGE;
|
||||
|
||||
// Update estimation
sc_clock_estimate(clock, &clock->slope, &clock->offset);
sc_tick offset = system - stream;
clock->offset = ((clock->range - 1) * clock->offset + offset)
/ clock->range;

#ifndef SC_CLOCK_NDEBUG
LOGD("Clock estimation: %f * pts + %" PRItick, clock->slope, clock->offset);
LOGD("Clock estimation: pts + %" PRItick, clock->offset);
#endif
}

sc_tick
sc_clock_to_system_time(struct sc_clock *clock, sc_tick stream) {
assert(clock->count); // sc_clock_update() must have been called
return (sc_tick) (stream * clock->slope) + clock->offset;
assert(clock->range); // sc_clock_update() must have been called
return stream + clock->offset;
}
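
The replacement estimator keeps only a smoothed offset: each new (system, stream) pair contributes 1/range of its instantaneous offset, where range grows up to SC_CLOCK_RANGE, so no circular array of points is needed. A standalone sketch of that rolling average, with made-up jitter values:

```c
#include <inttypes.h>
#include <stdio.h>

#define SC_CLOCK_RANGE 32
typedef int64_t sc_tick;

int main(void) {
    sc_tick offset_est = 0;
    unsigned range = 0;
    // Simulated instantaneous offsets (system - stream), hypothetical values
    // jittering around 1,000,000 ticks.
    sc_tick samples[] = {1000300, 999800, 1000100, 999950, 1000050};
    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i) {
        if (range < SC_CLOCK_RANGE) {
            ++range;
        }
        // Same update rule as above: weight the new point by 1/range
        offset_est = ((range - 1) * offset_est + samples[i]) / range;
        printf("offset estimate: %" PRId64 "\n", offset_est);
    }
    return 0;
}
```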
|
@ -3,13 +3,8 @@
|
||||
|
||||
#include "common.h"
|
||||
|
||||
#include <assert.h>
|
||||
|
||||
#include "util/tick.h"
|
||||
|
||||
#define SC_CLOCK_RANGE 32
|
||||
static_assert(!(SC_CLOCK_RANGE & 1), "SC_CLOCK_RANGE must be even");
|
||||
|
||||
struct sc_clock_point {
|
||||
sc_tick system;
|
||||
sc_tick stream;
|
||||
@ -21,40 +16,18 @@ struct sc_clock_point {
*
* f(stream) = slope * stream + offset
*
* To that end, it stores the SC_CLOCK_RANGE last clock points (the timestamps
* of a frame expressed both in stream time and system time) in a circular
* array.
* Theoretically, the slope encodes the drift between the device clock and the
* computer clock. It is expected to be very close to 1.
*
* To estimate the slope, it splits the last SC_CLOCK_RANGE points into two
* sets of SC_CLOCK_RANGE/2 points, and computes their centroid ("average
* point"). The slope of the estimated affine function is that of the line
* passing through these two points.
* Since the clock is used to estimate very close points in the future (which
* are reestimated on every clock update, see delay_buffer), the error caused
* by clock drift is totally negligible, so it is better to assume that the
* slope is 1 than to estimate it (the estimation error would be larger).
*
* To estimate the offset, it computes the centroid of all the SC_CLOCK_RANGE
* points. The resulting affine function passes by this centroid.
*
* With a circular array, the rolling sums (and average) are quick to compute.
* In practice, the estimation is stable and the evolution is smooth.
* Therefore, only the offset is estimated.
*/
struct sc_clock {
|
||||
// Circular array
|
||||
struct sc_clock_point points[SC_CLOCK_RANGE];
|
||||
|
||||
// Number of points in the array (count <= SC_CLOCK_RANGE)
|
||||
unsigned count;
|
||||
|
||||
// Index of the next point to write
|
||||
unsigned head;
|
||||
|
||||
// Sum of the first count/2 points
|
||||
struct sc_clock_point left_sum;
|
||||
|
||||
// Sum of the last (count+1)/2 points
|
||||
struct sc_clock_point right_sum;
|
||||
|
||||
// Estimated slope and offset
|
||||
// (computed on sc_clock_update(), used by sc_clock_to_system_time())
|
||||
double slope;
|
||||
unsigned range;
|
||||
sc_tick offset;
|
||||
};
|
||||
|
||||
|
@ -25,6 +25,12 @@
|
||||
# define SCRCPY_LAVF_REQUIRES_REGISTER_ALL
|
||||
#endif
|
||||
|
||||
// Not documented in ffmpeg/doc/APIchanges, but AV_CODEC_ID_AV1 has been added
|
||||
// by FFmpeg commit d42809f9835a4e9e5c7c63210abb09ad0ef19cfb (included in tag
|
||||
// n3.3).
|
||||
#if LIBAVFORMAT_VERSION_INT >= AV_VERSION_INT(57, 89, 100)
|
||||
# define SCRCPY_LAVC_HAS_AV1
|
||||
#endif
|
||||
|
||||
// In ffmpeg/doc/APIchanges:
|
||||
// 2018-01-28 - ea3672b7d6 - lavf 58.7.100 - avformat.h
|
||||
|
@ -12,52 +12,20 @@
|
||||
#define DOWNCAST(SINK) container_of(SINK, struct sc_decoder, packet_sink)
|
||||
|
||||
static bool
|
||||
sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
|
||||
decoder->codec_ctx = avcodec_alloc_context3(codec);
|
||||
if (!decoder->codec_ctx) {
|
||||
LOG_OOM();
|
||||
return false;
|
||||
}
|
||||
|
||||
decoder->codec_ctx->flags |= AV_CODEC_FLAG_LOW_DELAY;
|
||||
|
||||
if (codec->type == AVMEDIA_TYPE_VIDEO) {
|
||||
// Hardcoded video properties
|
||||
decoder->codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
|
||||
} else {
|
||||
// Hardcoded audio properties
|
||||
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
|
||||
decoder->codec_ctx->ch_layout =
|
||||
(AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO;
|
||||
#else
|
||||
decoder->codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
decoder->codec_ctx->channels = 2;
|
||||
#endif
|
||||
decoder->codec_ctx->sample_rate = 48000;
|
||||
}
|
||||
|
||||
if (avcodec_open2(decoder->codec_ctx, codec, NULL) < 0) {
|
||||
LOGE("Decoder '%s': could not open codec", decoder->name);
|
||||
avcodec_free_context(&decoder->codec_ctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
sc_decoder_open(struct sc_decoder *decoder, AVCodecContext *ctx) {
|
||||
decoder->frame = av_frame_alloc();
|
||||
if (!decoder->frame) {
|
||||
LOG_OOM();
|
||||
avcodec_close(decoder->codec_ctx);
|
||||
avcodec_free_context(&decoder->codec_ctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!sc_frame_source_sinks_open(&decoder->frame_source,
|
||||
decoder->codec_ctx)) {
|
||||
if (!sc_frame_source_sinks_open(&decoder->frame_source, ctx)) {
|
||||
av_frame_free(&decoder->frame);
|
||||
avcodec_close(decoder->codec_ctx);
|
||||
avcodec_free_context(&decoder->codec_ctx);
|
||||
return false;
|
||||
}
|
||||
|
||||
decoder->ctx = ctx;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -65,8 +33,6 @@ static void
|
||||
sc_decoder_close(struct sc_decoder *decoder) {
|
||||
sc_frame_source_sinks_close(&decoder->frame_source);
|
||||
av_frame_free(&decoder->frame);
|
||||
avcodec_close(decoder->codec_ctx);
|
||||
avcodec_free_context(&decoder->codec_ctx);
|
||||
}
|
||||
|
||||
static bool
|
||||
@ -77,7 +43,7 @@ sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
|
||||
return true;
|
||||
}
|
||||
|
||||
int ret = avcodec_send_packet(decoder->codec_ctx, packet);
|
||||
int ret = avcodec_send_packet(decoder->ctx, packet);
|
||||
if (ret < 0 && ret != AVERROR(EAGAIN)) {
|
||||
LOGE("Decoder '%s': could not send video packet: %d",
|
||||
decoder->name, ret);
|
||||
@ -85,7 +51,7 @@ sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
|
||||
}
|
||||
|
||||
for (;;) {
|
||||
ret = avcodec_receive_frame(decoder->codec_ctx, decoder->frame);
|
||||
ret = avcodec_receive_frame(decoder->ctx, decoder->frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
|
||||
break;
|
||||
}
|
||||
@ -110,9 +76,9 @@ sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
|
||||
}
|
||||
|
||||
static bool
|
||||
sc_decoder_packet_sink_open(struct sc_packet_sink *sink, const AVCodec *codec) {
|
||||
sc_decoder_packet_sink_open(struct sc_packet_sink *sink, AVCodecContext *ctx) {
|
||||
struct sc_decoder *decoder = DOWNCAST(sink);
|
||||
return sc_decoder_open(decoder, codec);
|
||||
return sc_decoder_open(decoder, ctx);
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -16,7 +16,7 @@ struct sc_decoder {
|
||||
|
||||
const char *name; // must be statically allocated (e.g. a string literal)
|
||||
|
||||
AVCodecContext *codec_ctx;
|
||||
AVCodecContext *ctx;
|
||||
AVFrame *frame;
|
||||
};
|
||||
|
||||
|
@ -194,7 +194,7 @@ sc_delay_buffer_frame_sink_push(struct sc_frame_sink *sink,
|
||||
sc_clock_update(&db->clock, sc_tick_now(), pts);
|
||||
sc_cond_signal(&db->wait_cond);
|
||||
|
||||
if (db->first_frame_asap && db->clock.count == 1) {
|
||||
if (db->first_frame_asap && db->clock.range == 1) {
|
||||
sc_mutex_unlock(&db->mutex);
|
||||
return sc_frame_source_sinks_push(&db->frame_source, frame);
|
||||
}
|
||||
|
@ -1,6 +1,7 @@
|
||||
#include "demuxer.h"
|
||||
|
||||
#include <assert.h>
|
||||
#include <libavutil/channel_layout.h>
|
||||
#include <libavutil/time.h>
|
||||
#include <unistd.h>
|
||||
|
||||
@ -32,7 +33,12 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
|
||||
case SC_CODEC_ID_H265:
|
||||
return AV_CODEC_ID_HEVC;
|
||||
case SC_CODEC_ID_AV1:
|
||||
#ifdef SCRCPY_LAVC_HAS_AV1
|
||||
return AV_CODEC_ID_AV1;
|
||||
#else
|
||||
LOGE("AV1 not supported by this FFmpeg version");
|
||||
return AV_CODEC_ID_NONE;
|
||||
#endif
|
||||
case SC_CODEC_ID_OPUS:
|
||||
return AV_CODEC_ID_OPUS;
|
||||
case SC_CODEC_ID_AAC:
|
||||
@ -57,11 +63,24 @@ sc_demuxer_recv_codec_id(struct sc_demuxer *demuxer, uint32_t *codec_id) {
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
sc_demuxer_recv_video_size(struct sc_demuxer *demuxer, uint32_t *width,
uint32_t *height) {
uint8_t data[8];
ssize_t r = net_recv_all(demuxer->socket, data, 8);
if (r < 8) {
return false;
}

*width = sc_read32be(data);
*height = sc_read32be(data + 4);
return true;
}
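
The 8-byte header decoded above is simply the width and height as two 32-bit big-endian integers, sent by the server before the first video packet. A minimal stand-in for `sc_read32be()`:

```c
#include <stdint.h>
#include <stdio.h>

// Read a 32-bit big-endian integer from a byte buffer.
static uint32_t read32be(const uint8_t *buf) {
    return ((uint32_t) buf[0] << 24)
         | ((uint32_t) buf[1] << 16)
         | ((uint32_t) buf[2] << 8)
         | (uint32_t) buf[3];
}

int main(void) {
    // Example payload for a 1080x2400 display (hypothetical values)
    const uint8_t data[8] = {0x00, 0x00, 0x04, 0x38, 0x00, 0x00, 0x09, 0x60};
    printf("%ux%u\n", read32be(data), read32be(data + 4));  // 1080x2400
    return 0;
}
```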
|
||||
static bool
|
||||
sc_demuxer_recv_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
|
||||
// The video stream contains raw packets, without time information. When we
|
||||
// record, we retrieve the timestamps separately, from a "meta" header
|
||||
// added by the server before each raw packet.
|
||||
// The video and audio streams contain a sequence of raw packets (as
|
||||
// provided by MediaCodec), each prefixed with a "meta" header.
|
||||
//
|
||||
// The "meta" header length is 12 bytes:
|
||||
// [. . . . . . . .|. . . .]. . . . . . . . . . . . . . . ...
|
||||
@ -160,10 +179,45 @@ run_demuxer(void *data) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (!sc_packet_source_sinks_open(&demuxer->packet_source, codec)) {
AVCodecContext *codec_ctx = avcodec_alloc_context3(codec);
if (!codec_ctx) {
LOG_OOM();
goto end;
}

codec_ctx->flags |= AV_CODEC_FLAG_LOW_DELAY;

if (codec->type == AVMEDIA_TYPE_VIDEO) {
uint32_t width;
uint32_t height;
ok = sc_demuxer_recv_video_size(demuxer, &width, &height);
if (!ok) {
goto finally_free_context;
}

codec_ctx->width = width;
codec_ctx->height = height;
codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
} else {
// Hardcoded audio properties
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
codec_ctx->ch_layout = (AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO;
#else
codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
codec_ctx->channels = 2;
#endif
codec_ctx->sample_rate = 48000;
}

if (avcodec_open2(codec_ctx, codec, NULL) < 0) {
LOGE("Demuxer '%s': could not open codec", demuxer->name);
goto finally_free_context;
}

if (!sc_packet_source_sinks_open(&demuxer->packet_source, codec_ctx)) {
goto finally_free_context;
}
|
||||
// Config packets must be merged with the next non-config packet only for
|
||||
// video streams
|
||||
bool must_merge_config_packet = codec->type == AVMEDIA_TYPE_VIDEO;
|
||||
@ -214,6 +268,9 @@ run_demuxer(void *data) {
|
||||
av_packet_free(&packet);
|
||||
finally_close_sinks:
|
||||
sc_packet_source_sinks_close(&demuxer->packet_source);
|
||||
finally_free_context:
|
||||
// This also calls avcodec_close() internally
|
||||
avcodec_free_context(&codec_ctx);
|
||||
end:
|
||||
demuxer->cbs->on_ended(demuxer, status, demuxer->cbs_userdata);
|
||||
|
||||
|
285
app/src/display.c
Normal file
@ -0,0 +1,285 @@
|
||||
#include "display.h"
|
||||
|
||||
#include <assert.h>
|
||||
|
||||
#include "util/log.h"
|
||||
|
||||
bool
|
||||
sc_display_init(struct sc_display *display, SDL_Window *window, bool mipmaps) {
|
||||
display->renderer =
|
||||
SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
|
||||
if (!display->renderer) {
|
||||
LOGE("Could not create renderer: %s", SDL_GetError());
|
||||
return false;
|
||||
}
|
||||
|
||||
SDL_RendererInfo renderer_info;
|
||||
int r = SDL_GetRendererInfo(display->renderer, &renderer_info);
|
||||
const char *renderer_name = r ? NULL : renderer_info.name;
|
||||
LOGI("Renderer: %s", renderer_name ? renderer_name : "(unknown)");
|
||||
|
||||
display->mipmaps = false;
|
||||
|
||||
// starts with "opengl"
|
||||
bool use_opengl = renderer_name && !strncmp(renderer_name, "opengl", 6);
|
||||
if (use_opengl) {
|
||||
|
||||
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
|
||||
// Persuade macOS to give us something better than OpenGL 2.1.
|
||||
// If we create a Core Profile context, we get the best OpenGL version.
|
||||
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
|
||||
SDL_GL_CONTEXT_PROFILE_CORE);
|
||||
|
||||
LOGD("Creating OpenGL Core Profile context");
|
||||
display->gl_context = SDL_GL_CreateContext(window);
|
||||
if (!display->gl_context) {
|
||||
LOGE("Could not create OpenGL context: %s", SDL_GetError());
|
||||
SDL_DestroyRenderer(display->renderer);
|
||||
return false;
|
||||
}
|
||||
#endif
|
||||
|
||||
struct sc_opengl *gl = &display->gl;
|
||||
sc_opengl_init(gl);
|
||||
|
||||
LOGI("OpenGL version: %s", gl->version);
|
||||
|
||||
if (mipmaps) {
|
||||
bool supports_mipmaps =
|
||||
sc_opengl_version_at_least(gl, 3, 0, /* OpenGL 3.0+ */
|
||||
2, 0 /* OpenGL ES 2.0+ */);
|
||||
if (supports_mipmaps) {
|
||||
LOGI("Trilinear filtering enabled");
|
||||
display->mipmaps = true;
|
||||
} else {
|
||||
LOGW("Trilinear filtering disabled "
|
||||
"(OpenGL 3.0+ or ES 2.0+ required");
|
||||
}
|
||||
} else {
|
||||
LOGI("Trilinear filtering disabled");
|
||||
}
|
||||
} else if (mipmaps) {
|
||||
LOGD("Trilinear filtering disabled (not an OpenGL renderer");
|
||||
}
|
||||
|
||||
display->pending.flags = 0;
|
||||
display->pending.frame = NULL;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
void
|
||||
sc_display_destroy(struct sc_display *display) {
|
||||
if (display->pending.frame) {
|
||||
av_frame_free(&display->pending.frame);
|
||||
}
|
||||
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
|
||||
SDL_GL_DeleteContext(display->gl_context);
|
||||
#endif
|
||||
if (display->texture) {
|
||||
SDL_DestroyTexture(display->texture);
|
||||
}
|
||||
SDL_DestroyRenderer(display->renderer);
|
||||
}
|
||||
|
||||
static SDL_Texture *
|
||||
sc_display_create_texture(struct sc_display *display,
|
||||
struct sc_size size) {
|
||||
SDL_Renderer *renderer = display->renderer;
|
||||
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
|
||||
SDL_TEXTUREACCESS_STREAMING,
|
||||
size.width, size.height);
|
||||
if (!texture) {
|
||||
LOGD("Could not create texture: %s", SDL_GetError());
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (display->mipmaps) {
|
||||
struct sc_opengl *gl = &display->gl;
|
||||
|
||||
SDL_GL_BindTexture(texture, NULL, NULL);
|
||||
|
||||
// Enable trilinear filtering for downscaling
|
||||
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
|
||||
GL_LINEAR_MIPMAP_LINEAR);
|
||||
gl->TexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.f);
|
||||
|
||||
SDL_GL_UnbindTexture(texture);
|
||||
}
|
||||
|
||||
return texture;
|
||||
}
|
||||
|
||||
static inline void
|
||||
sc_display_set_pending_size(struct sc_display *display, struct sc_size size) {
|
||||
assert(!display->texture);
|
||||
display->pending.size = size;
|
||||
display->pending.flags |= SC_DISPLAY_PENDING_FLAG_SIZE;
|
||||
}
|
||||
|
||||
static bool
|
||||
sc_display_set_pending_frame(struct sc_display *display, const AVFrame *frame) {
|
||||
if (!display->pending.frame) {
|
||||
display->pending.frame = av_frame_alloc();
|
||||
if (!display->pending.frame) {
|
||||
LOG_OOM();
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
int r = av_frame_ref(display->pending.frame, frame);
|
||||
if (r) {
|
||||
LOGE("Could not ref frame: %d", r);
|
||||
return false;
|
||||
}
|
||||
|
||||
display->pending.flags |= SC_DISPLAY_PENDING_FLAG_FRAME;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
sc_display_apply_pending(struct sc_display *display) {
    if (display->pending.flags & SC_DISPLAY_PENDING_FLAG_SIZE) {
        assert(!display->texture);
        display->texture =
            sc_display_create_texture(display, display->pending.size);
        if (!display->texture) {
            return false;
        }

        display->pending.flags &= ~SC_DISPLAY_PENDING_FLAG_SIZE;
    }

    if (display->pending.flags & SC_DISPLAY_PENDING_FLAG_FRAME) {
        assert(display->pending.frame);
        bool ok = sc_display_update_texture(display, display->pending.frame);
        if (!ok) {
            return false;
        }

        av_frame_unref(display->pending.frame);
        display->pending.flags &= ~SC_DISPLAY_PENDING_FLAG_FRAME;
    }

    return true;
}

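The pending mechanism above exists so that a texture operation that fails can be retried on the next render: the failure is recorded as a bit in `pending.flags` (together with the requested size or a reference to the frame), and `sc_display_apply_pending()` clears each bit only once its retry succeeds. A standalone sketch of the same defer/retry/clear pattern (names are illustrative, not part of the scrcpy API):

```c
#include <stdbool.h>
#include <stdint.h>

#define PENDING_SIZE  1
#define PENDING_FRAME 2

struct pending_ops {
    int8_t flags;
};

// Record that an operation failed and must be retried later
static void
pending_defer(struct pending_ops *p, int8_t flag) {
    p->flags |= flag;
}

// Retry deferred operations; clear a bit only when its retry succeeds
static bool
pending_apply(struct pending_ops *p, bool (*retry_size)(void),
              bool (*retry_frame)(void)) {
    if (p->flags & PENDING_SIZE) {
        if (!retry_size()) {
            return false;
        }
        p->flags &= ~PENDING_SIZE;
    }
    if (p->flags & PENDING_FRAME) {
        if (!retry_frame()) {
            return false;
        }
        p->flags &= ~PENDING_FRAME;
    }
    return true;
}
```
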
static bool
sc_display_set_texture_size_internal(struct sc_display *display,
                                     struct sc_size size) {
    assert(size.width && size.height);

    if (display->texture) {
        SDL_DestroyTexture(display->texture);
    }

    display->texture = sc_display_create_texture(display, size);
    if (!display->texture) {
        return false;
    }

    LOGI("Texture: %" PRIu16 "x%" PRIu16, size.width, size.height);
    return true;
}

enum sc_display_result
sc_display_set_texture_size(struct sc_display *display, struct sc_size size) {
    bool ok = sc_display_set_texture_size_internal(display, size);
    if (!ok) {
        sc_display_set_pending_size(display, size);
        return SC_DISPLAY_RESULT_PENDING;
    }

    return SC_DISPLAY_RESULT_OK;
}

static bool
sc_display_update_texture_internal(struct sc_display *display,
                                   const AVFrame *frame) {
    int ret = SDL_UpdateYUVTexture(display->texture, NULL,
                                   frame->data[0], frame->linesize[0],
                                   frame->data[1], frame->linesize[1],
                                   frame->data[2], frame->linesize[2]);
    if (ret) {
        LOGD("Could not update texture: %s", SDL_GetError());
        return false;
    }

    if (display->mipmaps) {
        SDL_GL_BindTexture(display->texture, NULL, NULL);
        display->gl.GenerateMipmap(GL_TEXTURE_2D);
        SDL_GL_UnbindTexture(display->texture);
    }

    return true;
}

enum sc_display_result
sc_display_update_texture(struct sc_display *display, const AVFrame *frame) {
    bool ok = sc_display_update_texture_internal(display, frame);
    if (!ok) {
        ok = sc_display_set_pending_frame(display, frame);
        if (!ok) {
            LOGE("Could not set pending frame");
            return SC_DISPLAY_RESULT_ERROR;
        }

        return SC_DISPLAY_RESULT_PENDING;
    }

    return SC_DISPLAY_RESULT_OK;
}

enum sc_display_result
sc_display_render(struct sc_display *display, const SDL_Rect *geometry,
                  unsigned rotation) {
    SDL_RenderClear(display->renderer);

    if (display->pending.flags) {
        bool ok = sc_display_apply_pending(display);
        if (!ok) {
            return SC_DISPLAY_RESULT_PENDING;
        }
    }

    SDL_Renderer *renderer = display->renderer;
    SDL_Texture *texture = display->texture;

    if (rotation == 0) {
        int ret = SDL_RenderCopy(renderer, texture, NULL, geometry);
        if (ret) {
            LOGE("Could not render texture: %s", SDL_GetError());
            return SC_DISPLAY_RESULT_ERROR;
        }
    } else {
        // rotation in RenderCopyEx() is clockwise, while screen->rotation is
        // counterclockwise (to be consistent with --lock-video-orientation)
        int cw_rotation = (4 - rotation) % 4;
        double angle = 90 * cw_rotation;

        const SDL_Rect *dstrect = NULL;
        SDL_Rect rect;
        if (rotation & 1) {
            rect.x = geometry->x + (geometry->w - geometry->h) / 2;
            rect.y = geometry->y + (geometry->h - geometry->w) / 2;
            rect.w = geometry->h;
            rect.h = geometry->w;
            dstrect = &rect;
        } else {
            assert(rotation == 2);
            dstrect = geometry;
        }

        int ret = SDL_RenderCopyEx(renderer, texture, NULL, dstrect, angle,
                                   NULL, 0);
        if (ret) {
            LOGE("Could not render texture: %s", SDL_GetError());
            return SC_DISPLAY_RESULT_ERROR;
        }
    }

    SDL_RenderPresent(display->renderer);
    return SC_DISPLAY_RESULT_OK;
}

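The rotation branch converts the counterclockwise screen rotation (in quarter turns) into the clockwise angle expected by `SDL_RenderCopyEx()`, and swaps the destination width and height for odd rotations. A quick standalone check of that mapping:

```c
// Illustrative check of the rotation mapping used above: the screen rotation
// is counterclockwise, SDL_RenderCopyEx() expects a clockwise angle.
#include <stdio.h>

int main(void) {
    for (unsigned rotation = 0; rotation < 4; ++rotation) {
        int cw_rotation = (4 - rotation) % 4;
        double angle = 90 * cw_rotation;
        printf("rotation=%u (ccw) -> angle=%g (cw)\n", rotation, angle);
    }
    return 0;
}
```

For rotation 1 (90° counterclockwise) this yields a 270° clockwise render, and the swapped `rect` keeps the rotated frame centered inside the original geometry.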
59
app/src/display.h
Normal file
@ -0,0 +1,59 @@
#ifndef SC_DISPLAY_H
#define SC_DISPLAY_H

#include "common.h"

#include <stdbool.h>
#include <libavformat/avformat.h>
#include <SDL2/SDL.h>

#include "coords.h"
#include "opengl.h"

#ifdef __APPLE__
# define SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
#endif

struct sc_display {
    SDL_Renderer *renderer;
    SDL_Texture *texture;

    struct sc_opengl gl;
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
    SDL_GLContext *gl_context;
#endif

    bool mipmaps;

    struct {
#define SC_DISPLAY_PENDING_FLAG_SIZE 1
#define SC_DISPLAY_PENDING_FLAG_FRAME 2
        int8_t flags;
        struct sc_size size;
        AVFrame *frame;
    } pending;
};

enum sc_display_result {
    SC_DISPLAY_RESULT_OK,
    SC_DISPLAY_RESULT_PENDING,
    SC_DISPLAY_RESULT_ERROR,
};

bool
sc_display_init(struct sc_display *display, SDL_Window *window, bool mipmaps);

void
sc_display_destroy(struct sc_display *display);

enum sc_display_result
sc_display_set_texture_size(struct sc_display *display, struct sc_size size);

enum sc_display_result
sc_display_update_texture(struct sc_display *display, const AVFrame *frame);

enum sc_display_result
sc_display_render(struct sc_display *display, const SDL_Rect *geometry,
                  unsigned rotation);

#endif

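Taken together, the header defines a small tri-state API: each call either succeeds, defers its work (`SC_DISPLAY_RESULT_PENDING`, retried by the next `sc_display_render()`), or fails hard. A minimal sketch of how a caller might drive it for one decoded frame; this only handles hard errors, and in the real client the texture size is only set when the frame size changes, as `prepare_for_frame()` in screen.c does further down in this diff:

```c
#include "display.h"

// Render one decoded frame with the API above (sketch; PENDING results are
// not errors, they are retried automatically on the next render)
static bool
render_one_frame(struct sc_display *display, const AVFrame *frame,
                 const SDL_Rect *geometry, unsigned rotation) {
    struct sc_size size = {(uint16_t) frame->width, (uint16_t) frame->height};

    if (sc_display_set_texture_size(display, size) == SC_DISPLAY_RESULT_ERROR) {
        return false;
    }
    if (sc_display_update_texture(display, frame) == SC_DISPLAY_RESULT_ERROR) {
        return false;
    }
    return sc_display_render(display, geometry, rotation)
            != SC_DISPLAY_RESULT_ERROR;
}
```
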
@ -5,3 +5,5 @@
|
||||
#define SC_EVENT_USB_DEVICE_DISCONNECTED (SDL_USEREVENT + 4)
|
||||
#define SC_EVENT_DEMUXER_ERROR (SDL_USEREVENT + 5)
|
||||
#define SC_EVENT_RECORDER_ERROR (SDL_USEREVENT + 6)
|
||||
#define SC_EVENT_SCREEN_INIT_SIZE (SDL_USEREVENT + 7)
|
||||
#define SC_EVENT_TIME_LIMIT_REACHED (SDL_USEREVENT + 8)
|
||||
|
@ -172,14 +172,18 @@ sc_file_pusher_start(struct sc_file_pusher *fp) {
|
||||
|
||||
void
|
||||
sc_file_pusher_stop(struct sc_file_pusher *fp) {
|
||||
sc_mutex_lock(&fp->mutex);
|
||||
fp->stopped = true;
|
||||
sc_cond_signal(&fp->event_cond);
|
||||
sc_intr_interrupt(&fp->intr);
|
||||
sc_mutex_unlock(&fp->mutex);
|
||||
if (fp->initialized) {
|
||||
sc_mutex_lock(&fp->mutex);
|
||||
fp->stopped = true;
|
||||
sc_cond_signal(&fp->event_cond);
|
||||
sc_intr_interrupt(&fp->intr);
|
||||
sc_mutex_unlock(&fp->mutex);
|
||||
}
|
||||
}
|
||||
|
||||
void
|
||||
sc_file_pusher_join(struct sc_file_pusher *fp) {
|
||||
sc_thread_join(&fp->thread, NULL);
|
||||
if (fp->initialized) {
|
||||
sc_thread_join(&fp->thread, NULL);
|
||||
}
|
||||
}
|
||||
|
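The `fp->initialized` guards above make `stop()` and `join()` no-ops when the file pusher was never set up, presumably so the cleanup path can call them unconditionally. The same guard pattern as a standalone sketch (thread primitives as in scrcpy's util/thread.h; the struct and function names are illustrative):

```c
#include <stdbool.h>
#include "util/thread.h"

struct worker {
    bool initialized;   // set to true only after a successful init
    sc_thread thread;
    sc_mutex mutex;
    sc_cond cond;
    bool stopped;
};

void
worker_stop(struct worker *w) {
    if (!w->initialized) {
        return; // never started: nothing to signal
    }
    sc_mutex_lock(&w->mutex);
    w->stopped = true;
    sc_cond_signal(&w->cond);
    sc_mutex_unlock(&w->mutex);
}

void
worker_join(struct worker *w) {
    if (w->initialized) {
        sc_thread_join(&w->thread, NULL);
    }
}
```
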
@ -96,6 +96,7 @@ run_fps_counter(void *data) {
|
||||
bool
|
||||
sc_fps_counter_start(struct sc_fps_counter *counter) {
|
||||
sc_mutex_lock(&counter->mutex);
|
||||
counter->interrupted = false;
|
||||
counter->next_timestamp = sc_tick_now() + SC_FPS_COUNTER_INTERVAL;
|
||||
counter->nr_rendered = 0;
|
||||
counter->nr_skipped = 0;
|
||||
|
@ -797,7 +797,8 @@ sc_input_manager_process_file(struct sc_input_manager *im,
|
||||
}
|
||||
|
||||
void
|
||||
sc_input_manager_handle_event(struct sc_input_manager *im, SDL_Event *event) {
|
||||
sc_input_manager_handle_event(struct sc_input_manager *im,
|
||||
const SDL_Event *event) {
|
||||
bool control = im->controller;
|
||||
switch (event->type) {
|
||||
case SDL_TEXTINPUT:
|
||||
|
@ -61,6 +61,7 @@ sc_input_manager_init(struct sc_input_manager *im,
|
||||
const struct sc_input_manager_params *params);
|
||||
|
||||
void
|
||||
sc_input_manager_handle_event(struct sc_input_manager *im, SDL_Event *event);
|
||||
sc_input_manager_handle_event(struct sc_input_manager *im,
|
||||
const SDL_Event *event);
|
||||
|
||||
#endif
|
||||
|
@ -11,12 +11,10 @@ const struct scrcpy_options scrcpy_options_default = {
|
||||
.audio_codec_options = NULL,
|
||||
.video_encoder = NULL,
|
||||
.audio_encoder = NULL,
|
||||
#ifdef HAVE_V4L2
|
||||
.v4l2_device = NULL,
|
||||
#endif
|
||||
.log_level = SC_LOG_LEVEL_INFO,
|
||||
.video_codec = SC_CODEC_H264,
|
||||
.audio_codec = SC_CODEC_OPUS,
|
||||
.audio_source = SC_AUDIO_SOURCE_OUTPUT,
|
||||
.record_format = SC_RECORD_FORMAT_AUTO,
|
||||
.keyboard_input_mode = SC_KEYBOARD_INPUT_MODE_INJECT,
|
||||
.mouse_input_mode = SC_MOUSE_INPUT_MODE_INJECT,
|
||||
@ -42,8 +40,13 @@ const struct scrcpy_options scrcpy_options_default = {
|
||||
.window_height = 0,
|
||||
.display_id = 0,
|
||||
.display_buffer = 0,
|
||||
.v4l2_buffer = 0,
|
||||
.audio_buffer = SC_TICK_FROM_MS(50),
|
||||
.audio_output_buffer = SC_TICK_FROM_MS(5),
|
||||
.time_limit = 0,
|
||||
#ifdef HAVE_V4L2
|
||||
.v4l2_device = NULL,
|
||||
.v4l2_buffer = 0,
|
||||
#endif
|
||||
#ifdef HAVE_USB
|
||||
.otg = false,
|
||||
#endif
|
||||
@ -51,7 +54,8 @@ const struct scrcpy_options scrcpy_options_default = {
|
||||
.fullscreen = false,
|
||||
.always_on_top = false,
|
||||
.control = true,
|
||||
.display = true,
|
||||
.video_playback = true,
|
||||
.audio_playback = true,
|
||||
.turn_screen_off = false,
|
||||
.key_inject_mode = SC_KEY_INJECT_MODE_MIXED,
|
||||
.window_borderless = false,
|
||||
@ -72,8 +76,10 @@ const struct scrcpy_options scrcpy_options_default = {
|
||||
.cleanup = true,
|
||||
.start_fps_counter = false,
|
||||
.power_on = true,
|
||||
.video = true,
|
||||
.audio = true,
|
||||
.require_audio = false,
|
||||
.list_encoders = false,
|
||||
.list_displays = false,
|
||||
.kill_adb_on_close = false,
|
||||
};
|
||||
|
@ -21,8 +21,20 @@ enum sc_record_format {
|
||||
SC_RECORD_FORMAT_AUTO,
|
||||
SC_RECORD_FORMAT_MP4,
|
||||
SC_RECORD_FORMAT_MKV,
|
||||
SC_RECORD_FORMAT_M4A,
|
||||
SC_RECORD_FORMAT_MKA,
|
||||
SC_RECORD_FORMAT_OPUS,
|
||||
SC_RECORD_FORMAT_AAC,
|
||||
};
|
||||
|
||||
static inline bool
|
||||
sc_record_format_is_audio_only(enum sc_record_format fmt) {
|
||||
return fmt == SC_RECORD_FORMAT_M4A
|
||||
|| fmt == SC_RECORD_FORMAT_MKA
|
||||
|| fmt == SC_RECORD_FORMAT_OPUS
|
||||
|| fmt == SC_RECORD_FORMAT_AAC;
|
||||
}
|
||||
|
||||
enum sc_codec {
|
||||
SC_CODEC_H264,
|
||||
SC_CODEC_H265,
|
||||
@ -32,6 +44,11 @@ enum sc_codec {
|
||||
SC_CODEC_RAW,
|
||||
};
|
||||
|
||||
enum sc_audio_source {
|
||||
SC_AUDIO_SOURCE_OUTPUT,
|
||||
SC_AUDIO_SOURCE_MIC,
|
||||
};
|
||||
|
||||
enum sc_lock_video_orientation {
|
||||
SC_LOCK_VIDEO_ORIENTATION_UNLOCKED = -1,
|
||||
// lock the current orientation when scrcpy starts
|
||||
@ -100,12 +117,10 @@ struct scrcpy_options {
|
||||
const char *audio_codec_options;
|
||||
const char *video_encoder;
|
||||
const char *audio_encoder;
|
||||
#ifdef HAVE_V4L2
|
||||
const char *v4l2_device;
|
||||
#endif
|
||||
enum sc_log_level log_level;
|
||||
enum sc_codec video_codec;
|
||||
enum sc_codec audio_codec;
|
||||
enum sc_audio_source audio_source;
|
||||
enum sc_record_format record_format;
|
||||
enum sc_keyboard_input_mode keyboard_input_mode;
|
||||
enum sc_mouse_input_mode mouse_input_mode;
|
||||
@ -125,8 +140,13 @@ struct scrcpy_options {
|
||||
uint16_t window_height;
|
||||
uint32_t display_id;
|
||||
sc_tick display_buffer;
|
||||
sc_tick v4l2_buffer;
|
||||
sc_tick audio_buffer;
|
||||
sc_tick audio_output_buffer;
|
||||
sc_tick time_limit;
|
||||
#ifdef HAVE_V4L2
|
||||
const char *v4l2_device;
|
||||
sc_tick v4l2_buffer;
|
||||
#endif
|
||||
#ifdef HAVE_USB
|
||||
bool otg;
|
||||
#endif
|
||||
@ -134,7 +154,8 @@ struct scrcpy_options {
|
||||
bool fullscreen;
|
||||
bool always_on_top;
|
||||
bool control;
|
||||
bool display;
|
||||
bool video_playback;
|
||||
bool audio_playback;
|
||||
bool turn_screen_off;
|
||||
enum sc_key_inject_mode key_inject_mode;
|
||||
bool window_borderless;
|
||||
@ -155,10 +176,12 @@ struct scrcpy_options {
|
||||
bool cleanup;
|
||||
bool start_fps_counter;
|
||||
bool power_on;
|
||||
bool video;
|
||||
bool audio;
|
||||
bool require_audio;
|
||||
bool list_encoders;
|
||||
bool list_displays;
|
||||
bool kill_adb_on_close;
|
||||
};
|
||||
|
||||
extern const struct scrcpy_options scrcpy_options_default;
|
||||
|
@ -60,9 +60,17 @@ sc_recorder_queue_clear(struct sc_recorder_queue *queue) {
|
||||
static const char *
|
||||
sc_recorder_get_format_name(enum sc_record_format format) {
|
||||
switch (format) {
|
||||
case SC_RECORD_FORMAT_MP4: return "mp4";
|
||||
case SC_RECORD_FORMAT_MKV: return "matroska";
|
||||
default: return NULL;
|
||||
case SC_RECORD_FORMAT_MP4:
|
||||
case SC_RECORD_FORMAT_M4A:
|
||||
case SC_RECORD_FORMAT_AAC:
|
||||
return "mp4";
|
||||
case SC_RECORD_FORMAT_MKV:
|
||||
case SC_RECORD_FORMAT_MKA:
|
||||
return "matroska";
|
||||
case SC_RECORD_FORMAT_OPUS:
|
||||
return "opus";
|
||||
default:
|
||||
return NULL;
|
||||
}
|
||||
}
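With the audio-only formats added in this diff, several record formats map to the same underlying muxer (m4a and aac reuse "mp4", mka reuses "matroska"). For reference, one standard FFmpeg way to turn such a format name into an output context; this is only an illustration, the recorder's own open/close code is outside this hunk:

```c
#include <libavformat/avformat.h>

// Create a muxer context from a format name such as "mp4" or "matroska"
// (caller must later free it with avformat_free_context())
static AVFormatContext *
open_muxer(const char *format_name, const char *filename) {
    AVFormatContext *ctx = NULL;
    int r = avformat_alloc_output_context2(&ctx, NULL, format_name, filename);
    if (r < 0) {
        return NULL;
    }
    return ctx;
}
```
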
|
||||
|
||||
@ -88,23 +96,30 @@ sc_recorder_rescale_packet(AVStream *stream, AVPacket *packet) {
|
||||
}
|
||||
|
||||
static bool
|
||||
sc_recorder_write_stream(struct sc_recorder *recorder, int stream_index,
|
||||
AVPacket *packet) {
|
||||
AVStream *stream = recorder->ctx->streams[stream_index];
|
||||
sc_recorder_write_stream(struct sc_recorder *recorder,
|
||||
struct sc_recorder_stream *st, AVPacket *packet) {
|
||||
AVStream *stream = recorder->ctx->streams[st->index];
|
||||
sc_recorder_rescale_packet(stream, packet);
|
||||
if (st->last_pts != AV_NOPTS_VALUE && packet->pts <= st->last_pts) {
|
||||
LOGW("Fixing PTS non monotonically increasing in stream %d "
|
||||
"(%" PRIi64 " >= %" PRIi64 ")",
|
||||
st->index, st->last_pts, packet->pts);
|
||||
packet->pts = ++st->last_pts;
|
||||
packet->dts = packet->pts;
|
||||
} else {
|
||||
st->last_pts = packet->pts;
|
||||
}
|
||||
return av_interleaved_write_frame(recorder->ctx, packet) >= 0;
|
||||
}
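`sc_recorder_write_stream()` now tracks the last PTS per stream and rewrites any packet whose PTS does not strictly increase, because FFmpeg muxers reject timestamps that are not monotonically increasing. A standalone sketch of the fix-up, with a worked sequence in the comment (the real code operates on an AVPacket and also copies the fixed PTS to the DTS):

```c
#include <stdint.h>

#define NOPTS INT64_MIN // stand-in for AV_NOPTS_VALUE in this sketch

static int64_t
fix_pts(int64_t *last_pts, int64_t pts) {
    if (*last_pts != NOPTS && pts <= *last_pts) {
        // e.g. input 10, 20, 20, 15 becomes 10, 20, 21, 22
        pts = ++*last_pts;
    } else {
        *last_pts = pts;
    }
    return pts;
}
```
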
|
||||
|
||||
static inline bool
|
||||
sc_recorder_write_video(struct sc_recorder *recorder, AVPacket *packet) {
|
||||
return sc_recorder_write_stream(recorder, recorder->video_stream_index,
|
||||
packet);
|
||||
return sc_recorder_write_stream(recorder, &recorder->video_stream, packet);
|
||||
}
|
||||
|
||||
static inline bool
|
||||
sc_recorder_write_audio(struct sc_recorder *recorder, AVPacket *packet) {
|
||||
return sc_recorder_write_stream(recorder, recorder->audio_stream_index,
|
||||
packet);
|
||||
return sc_recorder_write_stream(recorder, &recorder->audio_stream, packet);
|
||||
}
|
||||
|
||||
static bool
|
||||
@ -150,75 +165,9 @@ sc_recorder_close_output_file(struct sc_recorder *recorder) {
|
||||
avformat_free_context(recorder->ctx);
|
||||
}
|
||||
|
||||
static bool
|
||||
sc_recorder_wait_video_stream(struct sc_recorder *recorder) {
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
while (!recorder->video_codec && !recorder->stopped) {
|
||||
sc_cond_wait(&recorder->stream_cond, &recorder->mutex);
|
||||
}
|
||||
const AVCodec *codec = recorder->video_codec;
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
|
||||
if (codec) {
|
||||
AVStream *stream = avformat_new_stream(recorder->ctx, codec);
|
||||
if (!stream) {
|
||||
return false;
|
||||
}
|
||||
|
||||
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
|
||||
stream->codecpar->codec_id = codec->id;
|
||||
stream->codecpar->format = AV_PIX_FMT_YUV420P;
|
||||
stream->codecpar->width = recorder->declared_frame_size.width;
|
||||
stream->codecpar->height = recorder->declared_frame_size.height;
|
||||
|
||||
recorder->video_stream_index = stream->index;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool
|
||||
sc_recorder_wait_audio_stream(struct sc_recorder *recorder) {
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
while (!recorder->audio_codec && !recorder->audio_disabled
|
||||
&& !recorder->stopped) {
|
||||
sc_cond_wait(&recorder->stream_cond, &recorder->mutex);
|
||||
}
|
||||
|
||||
if (recorder->audio_disabled) {
|
||||
// Reset audio flag. From there, the recorder thread may access this
|
||||
// flag without any mutex.
|
||||
recorder->audio = false;
|
||||
}
|
||||
|
||||
const AVCodec *codec = recorder->audio_codec;
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
|
||||
if (codec) {
|
||||
AVStream *stream = avformat_new_stream(recorder->ctx, codec);
|
||||
if (!stream) {
|
||||
return false;
|
||||
}
|
||||
|
||||
stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
|
||||
stream->codecpar->codec_id = codec->id;
|
||||
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
|
||||
stream->codecpar->ch_layout.nb_channels = 2;
|
||||
#else
|
||||
stream->codecpar->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
stream->codecpar->channels = 2;
|
||||
#endif
|
||||
stream->codecpar->sample_rate = 48000;
|
||||
|
||||
recorder->audio_stream_index = stream->index;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static inline bool
|
||||
sc_recorder_has_empty_queues(struct sc_recorder *recorder) {
|
||||
if (sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
if (recorder->video && sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
// The video queue is empty
|
||||
return true;
|
||||
}
|
||||
@ -236,11 +185,14 @@ static bool
|
||||
sc_recorder_process_header(struct sc_recorder *recorder) {
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
|
||||
while (!recorder->stopped && sc_recorder_has_empty_queues(recorder)) {
|
||||
sc_cond_wait(&recorder->queue_cond, &recorder->mutex);
|
||||
while (!recorder->stopped &&
|
||||
((recorder->video && !recorder->video_init)
|
||||
|| (recorder->audio && !recorder->audio_init)
|
||||
|| sc_recorder_has_empty_queues(recorder))) {
|
||||
sc_cond_wait(&recorder->cond, &recorder->mutex);
|
||||
}
|
||||
|
||||
if (sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
if (recorder->video && sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
assert(recorder->stopped);
|
||||
// If the recorder is stopped, don't process anything if there are not
|
||||
// at least video packets
|
||||
@ -248,7 +200,11 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
|
||||
return false;
|
||||
}
|
||||
|
||||
AVPacket *video_pkt = sc_vecdeque_pop(&recorder->video_queue);
|
||||
AVPacket *video_pkt = NULL;
|
||||
if (!sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
assert(recorder->video);
|
||||
video_pkt = sc_vecdeque_pop(&recorder->video_queue);
|
||||
}
|
||||
|
||||
AVPacket *audio_pkt = NULL;
|
||||
if (!sc_vecdeque_is_empty(&recorder->audio_queue)) {
|
||||
@ -260,17 +216,19 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
|
||||
|
||||
int ret = false;
|
||||
|
||||
if (video_pkt->pts != AV_NOPTS_VALUE) {
|
||||
LOGE("The first video packet is not a config packet");
|
||||
goto end;
|
||||
}
|
||||
if (video_pkt) {
|
||||
if (video_pkt->pts != AV_NOPTS_VALUE) {
|
||||
LOGE("The first video packet is not a config packet");
|
||||
goto end;
|
||||
}
|
||||
|
||||
assert(recorder->video_stream_index >= 0);
|
||||
AVStream *video_stream =
|
||||
recorder->ctx->streams[recorder->video_stream_index];
|
||||
bool ok = sc_recorder_set_extradata(video_stream, video_pkt);
|
||||
if (!ok) {
|
||||
goto end;
|
||||
assert(recorder->video_stream.index >= 0);
|
||||
AVStream *video_stream =
|
||||
recorder->ctx->streams[recorder->video_stream.index];
|
||||
bool ok = sc_recorder_set_extradata(video_stream, video_pkt);
|
||||
if (!ok) {
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
if (audio_pkt) {
|
||||
@ -279,16 +237,16 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
assert(recorder->audio_stream_index >= 0);
|
||||
assert(recorder->audio_stream.index >= 0);
|
||||
AVStream *audio_stream =
|
||||
recorder->ctx->streams[recorder->audio_stream_index];
|
||||
ok = sc_recorder_set_extradata(audio_stream, audio_pkt);
|
||||
recorder->ctx->streams[recorder->audio_stream.index];
|
||||
bool ok = sc_recorder_set_extradata(audio_stream, audio_pkt);
|
||||
if (!ok) {
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
ok = avformat_write_header(recorder->ctx, NULL) >= 0;
|
||||
bool ok = avformat_write_header(recorder->ctx, NULL) >= 0;
|
||||
if (!ok) {
|
||||
LOGE("Failed to write header to %s", recorder->filename);
|
||||
goto end;
|
||||
@ -297,7 +255,9 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
|
||||
ret = true;
|
||||
|
||||
end:
|
||||
av_packet_free(&video_pkt);
|
||||
if (video_pkt) {
|
||||
av_packet_free(&video_pkt);
|
||||
}
|
||||
if (audio_pkt) {
|
||||
av_packet_free(&audio_pkt);
|
||||
}
|
||||
@ -327,7 +287,8 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
|
||||
while (!recorder->stopped) {
|
||||
if (!video_pkt && !sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
if (recorder->video && !video_pkt &&
|
||||
!sc_vecdeque_is_empty(&recorder->video_queue)) {
|
||||
// A new packet may be assigned to video_pkt and be processed
|
||||
break;
|
||||
}
|
||||
@ -336,12 +297,17 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
|
||||
// A new packet may be assigned to audio_pkt and be processed
|
||||
break;
|
||||
}
|
||||
sc_cond_wait(&recorder->queue_cond, &recorder->mutex);
|
||||
sc_cond_wait(&recorder->cond, &recorder->mutex);
|
||||
}
|
||||
|
||||
// If stopped is set, continue to process the remaining events (to
|
||||
// finish the recording) before actually stopping.
|
||||
|
||||
// If there is no video, then the video_queue will remain empty forever
|
||||
// and video_pkt will always be NULL.
|
||||
assert(recorder->video || (!video_pkt
|
||||
&& sc_vecdeque_is_empty(&recorder->video_queue)));
|
||||
|
||||
// If there is no audio, then the audio_queue will remain empty forever
|
||||
// and audio_pkt will always be NULL.
|
||||
assert(recorder->audio || (!audio_pkt
|
||||
@ -383,6 +349,9 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
|
||||
if (!recorder->audio) {
|
||||
assert(video_pkt);
|
||||
pts_origin = video_pkt->pts;
|
||||
} else if (!recorder->video) {
|
||||
assert(audio_pkt);
|
||||
pts_origin = audio_pkt->pts;
|
||||
} else if (video_pkt && audio_pkt) {
|
||||
pts_origin = MIN(video_pkt->pts, audio_pkt->pts);
|
||||
} else if (recorder->stopped) {
|
||||
@ -480,22 +449,6 @@ sc_recorder_record(struct sc_recorder *recorder) {
|
||||
return false;
|
||||
}
|
||||
|
||||
ok = sc_recorder_wait_video_stream(recorder);
|
||||
if (!ok) {
|
||||
sc_recorder_close_output_file(recorder);
|
||||
return false;
|
||||
}
|
||||
|
||||
if (recorder->audio) {
|
||||
ok = sc_recorder_wait_audio_stream(recorder);
|
||||
if (!ok) {
|
||||
sc_recorder_close_output_file(recorder);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
// If recorder->stopped, process any queued packet anyway
|
||||
|
||||
ok = sc_recorder_process_packets(recorder);
|
||||
sc_recorder_close_output_file(recorder);
|
||||
return ok;
|
||||
@ -536,9 +489,10 @@ run_recorder(void *data) {
|
||||
|
||||
static bool
|
||||
sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
|
||||
const AVCodec *codec) {
|
||||
AVCodecContext *ctx) {
|
||||
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
|
||||
assert(codec);
|
||||
// only written from this thread, no need to lock
|
||||
assert(!recorder->video_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
if (recorder->stopped) {
|
||||
@ -546,8 +500,22 @@ sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
|
||||
return false;
|
||||
}
|
||||
|
||||
recorder->video_codec = codec;
|
||||
sc_cond_signal(&recorder->stream_cond);
|
||||
AVStream *stream = avformat_new_stream(recorder->ctx, ctx->codec);
|
||||
if (!stream) {
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return false;
|
||||
}
|
||||
|
||||
int r = avcodec_parameters_from_context(stream->codecpar, ctx);
|
||||
if (r < 0) {
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return false;
|
||||
}
|
||||
|
||||
recorder->video_stream.index = stream->index;
|
||||
|
||||
recorder->video_init = true;
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
|
||||
return true;
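The packet-sink `open()` callbacks now receive a full `AVCodecContext` instead of just an `AVCodec`, so the muxer stream can be configured with a single `avcodec_parameters_from_context()` call rather than by filling `codecpar` fields one by one as the removed `sc_recorder_wait_*_stream()` functions did. The general FFmpeg pattern, as a standalone sketch (assumes an allocated `AVFormatContext` and a configured codec context):

```c
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

// Add a stream to the muxer and copy all codec parameters from the context
static int
add_stream_from_ctx(AVFormatContext *ctx, const AVCodecContext *codec_ctx,
                    int *out_index) {
    AVStream *stream = avformat_new_stream(ctx, codec_ctx->codec);
    if (!stream) {
        return AVERROR(ENOMEM);
    }
    int r = avcodec_parameters_from_context(stream->codecpar, codec_ctx);
    if (r < 0) {
        return r;
    }
    *out_index = stream->index;
    return 0;
}
```
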
|
||||
@ -556,11 +524,13 @@ sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
|
||||
static void
|
||||
sc_recorder_video_packet_sink_close(struct sc_packet_sink *sink) {
|
||||
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
|
||||
// only written from this thread, no need to lock
|
||||
assert(recorder->video_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
// EOS also stops the recorder
|
||||
recorder->stopped = true;
|
||||
sc_cond_signal(&recorder->queue_cond);
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
}
|
||||
|
||||
@ -568,6 +538,8 @@ static bool
|
||||
sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
|
||||
const AVPacket *packet) {
|
||||
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
|
||||
// only written from this thread, no need to lock
|
||||
assert(recorder->video_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
|
||||
@ -584,7 +556,7 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
|
||||
return false;
|
||||
}
|
||||
|
||||
rec->stream_index = recorder->video_stream_index;
|
||||
rec->stream_index = recorder->video_stream.index;
|
||||
|
||||
bool ok = sc_vecdeque_push(&recorder->video_queue, rec);
|
||||
if (!ok) {
|
||||
@ -593,7 +565,7 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
|
||||
return false;
|
||||
}
|
||||
|
||||
sc_cond_signal(&recorder->queue_cond);
|
||||
sc_cond_signal(&recorder->cond);
|
||||
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return true;
|
||||
@ -601,16 +573,30 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
|
||||
|
||||
static bool
|
||||
sc_recorder_audio_packet_sink_open(struct sc_packet_sink *sink,
|
||||
const AVCodec *codec) {
|
||||
AVCodecContext *ctx) {
|
||||
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
|
||||
assert(recorder->audio);
|
||||
// only written from this thread, no need to lock
|
||||
assert(!recorder->audio_disabled);
|
||||
assert(codec);
|
||||
assert(!recorder->audio_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
recorder->audio_codec = codec;
|
||||
sc_cond_signal(&recorder->stream_cond);
|
||||
|
||||
AVStream *stream = avformat_new_stream(recorder->ctx, ctx->codec);
|
||||
if (!stream) {
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return false;
|
||||
}
|
||||
|
||||
int r = avcodec_parameters_from_context(stream->codecpar, ctx);
|
||||
if (r < 0) {
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return false;
|
||||
}
|
||||
|
||||
recorder->audio_stream.index = stream->index;
|
||||
|
||||
recorder->audio_init = true;
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
|
||||
return true;
|
||||
@ -621,12 +607,12 @@ sc_recorder_audio_packet_sink_close(struct sc_packet_sink *sink) {
|
||||
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
|
||||
assert(recorder->audio);
|
||||
// only written from this thread, no need to lock
|
||||
assert(!recorder->audio_disabled);
|
||||
assert(recorder->audio_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
// EOS also stops the recorder
|
||||
recorder->stopped = true;
|
||||
sc_cond_signal(&recorder->queue_cond);
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
}
|
||||
|
||||
@ -636,7 +622,7 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
|
||||
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
|
||||
assert(recorder->audio);
|
||||
// only written from this thread, no need to lock
|
||||
assert(!recorder->audio_disabled);
|
||||
assert(recorder->audio_init);
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
|
||||
@ -653,7 +639,7 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
|
||||
return false;
|
||||
}
|
||||
|
||||
rec->stream_index = recorder->audio_stream_index;
|
||||
rec->stream_index = recorder->audio_stream.index;
|
||||
|
||||
bool ok = sc_vecdeque_push(&recorder->audio_queue, rec);
|
||||
if (!ok) {
|
||||
@ -662,7 +648,7 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
|
||||
return false;
|
||||
}
|
||||
|
||||
sc_cond_signal(&recorder->queue_cond);
|
||||
sc_cond_signal(&recorder->cond);
|
||||
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
return true;
|
||||
@ -673,21 +659,26 @@ sc_recorder_audio_packet_sink_disable(struct sc_packet_sink *sink) {
|
||||
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
|
||||
assert(recorder->audio);
|
||||
// only written from this thread, no need to lock
|
||||
assert(!recorder->audio_disabled);
|
||||
assert(!recorder->audio_codec);
|
||||
assert(!recorder->audio_init);
|
||||
|
||||
LOGW("Audio stream recording disabled");
|
||||
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
recorder->audio_disabled = true;
|
||||
sc_cond_signal(&recorder->stream_cond);
|
||||
recorder->audio = false;
|
||||
recorder->audio_init = true;
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
}
|
||||
|
||||
static void
|
||||
sc_recorder_stream_init(struct sc_recorder_stream *stream) {
|
||||
stream->index = -1;
|
||||
stream->last_pts = AV_NOPTS_VALUE;
|
||||
}
|
||||
|
||||
bool
|
||||
sc_recorder_init(struct sc_recorder *recorder, const char *filename,
|
||||
enum sc_record_format format, bool audio,
|
||||
struct sc_size declared_frame_size,
|
||||
enum sc_record_format format, bool video, bool audio,
|
||||
const struct sc_recorder_callbacks *cbs, void *cbs_userdata) {
|
||||
recorder->filename = strdup(filename);
|
||||
if (!recorder->filename) {
|
||||
@ -700,43 +691,40 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
|
||||
goto error_free_filename;
|
||||
}
|
||||
|
||||
ok = sc_cond_init(&recorder->queue_cond);
|
||||
ok = sc_cond_init(&recorder->cond);
|
||||
if (!ok) {
|
||||
goto error_mutex_destroy;
|
||||
}
|
||||
|
||||
ok = sc_cond_init(&recorder->stream_cond);
|
||||
if (!ok) {
|
||||
goto error_queue_cond_destroy;
|
||||
}
|
||||
|
||||
assert(video || audio);
|
||||
recorder->video = video;
|
||||
recorder->audio = audio;
|
||||
|
||||
sc_vecdeque_init(&recorder->video_queue);
|
||||
sc_vecdeque_init(&recorder->audio_queue);
|
||||
recorder->stopped = false;
|
||||
|
||||
recorder->video_codec = NULL;
|
||||
recorder->audio_codec = NULL;
|
||||
recorder->audio_disabled = false;
|
||||
recorder->video_init = false;
|
||||
recorder->audio_init = false;
|
||||
|
||||
recorder->video_stream_index = -1;
|
||||
recorder->audio_stream_index = -1;
|
||||
sc_recorder_stream_init(&recorder->video_stream);
|
||||
sc_recorder_stream_init(&recorder->audio_stream);
|
||||
|
||||
recorder->format = format;
|
||||
recorder->declared_frame_size = declared_frame_size;
|
||||
|
||||
assert(cbs && cbs->on_ended);
|
||||
recorder->cbs = cbs;
|
||||
recorder->cbs_userdata = cbs_userdata;
|
||||
|
||||
static const struct sc_packet_sink_ops video_ops = {
|
||||
.open = sc_recorder_video_packet_sink_open,
|
||||
.close = sc_recorder_video_packet_sink_close,
|
||||
.push = sc_recorder_video_packet_sink_push,
|
||||
};
|
||||
if (video) {
|
||||
static const struct sc_packet_sink_ops video_ops = {
|
||||
.open = sc_recorder_video_packet_sink_open,
|
||||
.close = sc_recorder_video_packet_sink_close,
|
||||
.push = sc_recorder_video_packet_sink_push,
|
||||
};
|
||||
|
||||
recorder->video_packet_sink.ops = &video_ops;
|
||||
recorder->video_packet_sink.ops = &video_ops;
|
||||
}
|
||||
|
||||
if (audio) {
|
||||
static const struct sc_packet_sink_ops audio_ops = {
|
||||
@ -751,8 +739,6 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
|
||||
|
||||
return true;
|
||||
|
||||
error_queue_cond_destroy:
|
||||
sc_cond_destroy(&recorder->queue_cond);
|
||||
error_mutex_destroy:
|
||||
sc_mutex_destroy(&recorder->mutex);
|
||||
error_free_filename:
|
||||
@ -777,8 +763,7 @@ void
|
||||
sc_recorder_stop(struct sc_recorder *recorder) {
|
||||
sc_mutex_lock(&recorder->mutex);
|
||||
recorder->stopped = true;
|
||||
sc_cond_signal(&recorder->queue_cond);
|
||||
sc_cond_signal(&recorder->stream_cond);
|
||||
sc_cond_signal(&recorder->cond);
|
||||
sc_mutex_unlock(&recorder->mutex);
|
||||
}
|
||||
|
||||
@ -789,8 +774,7 @@ sc_recorder_join(struct sc_recorder *recorder) {
|
||||
|
||||
void
|
||||
sc_recorder_destroy(struct sc_recorder *recorder) {
|
||||
sc_cond_destroy(&recorder->stream_cond);
|
||||
sc_cond_destroy(&recorder->queue_cond);
|
||||
sc_cond_destroy(&recorder->cond);
|
||||
sc_mutex_destroy(&recorder->mutex);
|
||||
free(recorder->filename);
|
||||
}
|
||||
|
@ -14,6 +14,11 @@
|
||||
|
||||
struct sc_recorder_queue SC_VECDEQUE(AVPacket *);
|
||||
|
||||
struct sc_recorder_stream {
|
||||
int index;
|
||||
int64_t last_pts;
|
||||
};
|
||||
|
||||
struct sc_recorder {
|
||||
struct sc_packet_sink video_packet_sink;
|
||||
struct sc_packet_sink audio_packet_sink;
|
||||
@ -27,30 +32,26 @@ struct sc_recorder {
|
||||
* may access it without data races.
|
||||
*/
|
||||
bool audio;
|
||||
bool video;
|
||||
|
||||
char *filename;
|
||||
enum sc_record_format format;
|
||||
AVFormatContext *ctx;
|
||||
struct sc_size declared_frame_size;
|
||||
|
||||
sc_thread thread;
|
||||
sc_mutex mutex;
|
||||
sc_cond queue_cond;
|
||||
sc_cond cond;
|
||||
// set on sc_recorder_stop(), packet_sink close or recording failure
|
||||
bool stopped;
|
||||
struct sc_recorder_queue video_queue;
|
||||
struct sc_recorder_queue audio_queue;
|
||||
|
||||
// wake up the recorder thread once the video or audio codec is known
|
||||
sc_cond stream_cond;
|
||||
const AVCodec *video_codec;
|
||||
const AVCodec *audio_codec;
|
||||
// Instead of providing an audio_codec, the demuxer may notify that the
|
||||
// stream is disabled if the device could not capture audio
|
||||
bool audio_disabled;
|
||||
bool video_init;
|
||||
bool audio_init;
|
||||
|
||||
int video_stream_index;
|
||||
int audio_stream_index;
|
||||
struct sc_recorder_stream video_stream;
|
||||
struct sc_recorder_stream audio_stream;
|
||||
|
||||
const struct sc_recorder_callbacks *cbs;
|
||||
void *cbs_userdata;
|
||||
@ -63,8 +64,7 @@ struct sc_recorder_callbacks {
|
||||
|
||||
bool
|
||||
sc_recorder_init(struct sc_recorder *recorder, const char *filename,
|
||||
enum sc_record_format format, bool audio,
|
||||
struct sc_size declared_frame_size,
|
||||
enum sc_record_format format, bool video, bool audio,
|
||||
const struct sc_recorder_callbacks *cbs, void *cbs_userdata);
|
||||
|
||||
bool
|
||||
|
169
app/src/scrcpy.c
@ -35,6 +35,7 @@
|
||||
#include "util/log.h"
|
||||
#include "util/net.h"
|
||||
#include "util/rand.h"
|
||||
#include "util/timeout.h"
|
||||
#ifdef HAVE_V4L2
|
||||
# include "v4l2_sink.h"
|
||||
#endif
|
||||
@ -73,6 +74,7 @@ struct scrcpy {
|
||||
struct sc_hid_mouse mouse_hid;
|
||||
#endif
|
||||
};
|
||||
struct sc_timeout timeout;
|
||||
};
|
||||
|
||||
static inline void
|
||||
@ -137,7 +139,7 @@ sdl_set_hints(const char *render_driver) {
|
||||
}
|
||||
|
||||
static void
|
||||
sdl_configure(bool display, bool disable_screensaver) {
|
||||
sdl_configure(bool video_playback, bool disable_screensaver) {
|
||||
#ifdef _WIN32
|
||||
// Clean up properly on Ctrl+C on Windows
|
||||
bool ok = SetConsoleCtrlHandler(windows_ctrl_handler, TRUE);
|
||||
@ -146,7 +148,7 @@ sdl_configure(bool display, bool disable_screensaver) {
|
||||
}
|
||||
#endif // _WIN32
|
||||
|
||||
if (!display) {
|
||||
if (!video_playback) {
|
||||
return;
|
||||
}
|
||||
|
||||
@ -171,11 +173,16 @@ event_loop(struct scrcpy *s) {
|
||||
case SC_EVENT_RECORDER_ERROR:
|
||||
LOGE("Recorder error");
|
||||
return SCRCPY_EXIT_FAILURE;
|
||||
case SC_EVENT_TIME_LIMIT_REACHED:
|
||||
LOGI("Time limit reached");
|
||||
return SCRCPY_EXIT_SUCCESS;
|
||||
case SDL_QUIT:
|
||||
LOGD("User requested to quit");
|
||||
return SCRCPY_EXIT_SUCCESS;
|
||||
default:
|
||||
sc_screen_handle_event(&s->screen, &event);
|
||||
if (!sc_screen_handle_event(&s->screen, &event)) {
|
||||
return SCRCPY_EXIT_FAILURE;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
@ -278,6 +285,14 @@ sc_server_on_disconnected(struct sc_server *server, void *userdata) {
|
||||
// event
|
||||
}
|
||||
|
||||
static void
|
||||
sc_timeout_on_timeout(struct sc_timeout *timeout, void *userdata) {
|
||||
(void) timeout;
|
||||
(void) userdata;
|
||||
|
||||
PUSH_EVENT(SC_EVENT_TIME_LIMIT_REACHED);
|
||||
}
|
||||
|
||||
// Generate a scrcpy id to differentiate multiple running scrcpy instances
|
||||
static uint32_t
|
||||
scrcpy_generate_scid() {
|
||||
@ -319,6 +334,8 @@ scrcpy(struct scrcpy_options *options) {
|
||||
bool controller_initialized = false;
|
||||
bool controller_started = false;
|
||||
bool screen_initialized = false;
|
||||
bool timeout_initialized = false;
|
||||
bool timeout_started = false;
|
||||
|
||||
struct sc_acksync *acksync = NULL;
|
||||
|
||||
@ -332,6 +349,7 @@ scrcpy(struct scrcpy_options *options) {
|
||||
.log_level = options->log_level,
|
||||
.video_codec = options->video_codec,
|
||||
.audio_codec = options->audio_codec,
|
||||
.audio_source = options->audio_source,
|
||||
.crop = options->crop,
|
||||
.port_range = options->port_range,
|
||||
.tunnel_host = options->tunnel_host,
|
||||
@ -343,6 +361,7 @@ scrcpy(struct scrcpy_options *options) {
|
||||
.lock_video_orientation = options->lock_video_orientation,
|
||||
.control = options->control,
|
||||
.display_id = options->display_id,
|
||||
.video = options->video,
|
||||
.audio = options->audio,
|
||||
.show_touches = options->show_touches,
|
||||
.stay_awake = options->stay_awake,
|
||||
@ -360,6 +379,7 @@ scrcpy(struct scrcpy_options *options) {
|
||||
.power_on = options->power_on,
|
||||
.list_encoders = options->list_encoders,
|
||||
.list_displays = options->list_displays,
|
||||
.kill_adb_on_close = options->kill_adb_on_close,
|
||||
};
|
||||
|
||||
static const struct sc_server_callbacks cbs = {
|
||||
@ -383,24 +403,26 @@ scrcpy(struct scrcpy_options *options) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (options->display) {
|
||||
sdl_set_hints(options->render_driver);
|
||||
}
|
||||
// playback implies capture
|
||||
assert(!options->video_playback || options->video);
|
||||
assert(!options->audio_playback || options->audio);
|
||||
|
||||
// Initialize SDL video in addition if display is enabled
|
||||
if (options->display) {
|
||||
if (options->video_playback) {
|
||||
sdl_set_hints(options->render_driver);
|
||||
if (SDL_Init(SDL_INIT_VIDEO)) {
|
||||
LOGE("Could not initialize SDL video: %s", SDL_GetError());
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
if (options->audio && SDL_Init(SDL_INIT_AUDIO)) {
|
||||
if (options->audio_playback) {
|
||||
if (SDL_Init(SDL_INIT_AUDIO)) {
|
||||
LOGE("Could not initialize SDL audio: %s", SDL_GetError());
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
sdl_configure(options->display, options->disable_screensaver);
|
||||
sdl_configure(options->video_playback, options->disable_screensaver);
|
||||
|
||||
// Await for server without blocking Ctrl+C handling
|
||||
bool connected;
|
||||
@ -426,7 +448,9 @@ scrcpy(struct scrcpy_options *options) {
|
||||
|
||||
struct sc_file_pusher *fp = NULL;
|
||||
|
||||
if (options->display && options->control) {
|
||||
// control implies video playback
|
||||
assert(!options->control || options->video_playback);
|
||||
if (options->control) {
|
||||
if (!sc_file_pusher_init(&s->file_pusher, serial,
|
||||
options->push_target)) {
|
||||
goto end;
|
||||
@ -435,11 +459,13 @@ scrcpy(struct scrcpy_options *options) {
|
||||
file_pusher_initialized = true;
|
||||
}
|
||||
|
||||
static const struct sc_demuxer_callbacks video_demuxer_cbs = {
|
||||
.on_ended = sc_video_demuxer_on_ended,
|
||||
};
|
||||
sc_demuxer_init(&s->video_demuxer, "video", s->server.video_socket,
|
||||
&video_demuxer_cbs, NULL);
|
||||
if (options->video) {
|
||||
static const struct sc_demuxer_callbacks video_demuxer_cbs = {
|
||||
.on_ended = sc_video_demuxer_on_ended,
|
||||
};
|
||||
sc_demuxer_init(&s->video_demuxer, "video", s->server.video_socket,
|
||||
&video_demuxer_cbs, NULL);
|
||||
}
|
||||
|
||||
if (options->audio) {
|
||||
static const struct sc_demuxer_callbacks audio_demuxer_cbs = {
|
||||
@ -449,8 +475,8 @@ scrcpy(struct scrcpy_options *options) {
|
||||
&audio_demuxer_cbs, options);
|
||||
}
|
||||
|
||||
bool needs_video_decoder = options->display;
|
||||
bool needs_audio_decoder = options->audio && options->display;
|
||||
bool needs_video_decoder = options->video_playback;
|
||||
bool needs_audio_decoder = options->audio_playback;
|
||||
#ifdef HAVE_V4L2
|
||||
needs_video_decoder |= !!options->v4l2_device;
|
||||
#endif
|
||||
@ -470,8 +496,8 @@ scrcpy(struct scrcpy_options *options) {
|
||||
.on_ended = sc_recorder_on_ended,
|
||||
};
|
||||
if (!sc_recorder_init(&s->recorder, options->record_filename,
|
||||
options->record_format, options->audio,
|
||||
info->frame_size, &recorder_cbs, NULL)) {
|
||||
options->record_format, options->video,
|
||||
options->audio, &recorder_cbs, NULL)) {
|
||||
goto end;
|
||||
}
|
||||
recorder_initialized = true;
|
||||
@ -481,8 +507,10 @@ scrcpy(struct scrcpy_options *options) {
|
||||
}
|
||||
recorder_started = true;
|
||||
|
||||
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
|
||||
&s->recorder.video_packet_sink);
|
||||
if (options->video) {
|
||||
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
|
||||
&s->recorder.video_packet_sink);
|
||||
}
|
||||
if (options->audio) {
|
||||
sc_packet_source_add_sink(&s->audio_demuxer.packet_source,
|
||||
&s->recorder.audio_packet_sink);
|
||||
@ -628,23 +656,12 @@ aoa_hid_end:
|
||||
}
|
||||
controller_started = true;
|
||||
controller = &s->controller;
|
||||
|
||||
if (options->turn_screen_off) {
|
||||
struct sc_control_msg msg;
|
||||
msg.type = SC_CONTROL_MSG_TYPE_SET_SCREEN_POWER_MODE;
|
||||
msg.set_screen_power_mode.mode = SC_SCREEN_POWER_MODE_OFF;
|
||||
|
||||
if (!sc_controller_push_msg(&s->controller, &msg)) {
|
||||
LOGW("Could not request 'set screen power mode'");
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// There is a controller if and only if control is enabled
|
||||
assert(options->control == !!controller);
|
||||
|
||||
if (options->display) {
|
||||
if (options->video_playback) {
|
||||
const char *window_title =
|
||||
options->window_title ? options->window_title : info->device_name;
|
||||
|
||||
@ -658,7 +675,6 @@ aoa_hid_end:
|
||||
.clipboard_autosync = options->clipboard_autosync,
|
||||
.shortcut_mods = &options->shortcut_mods,
|
||||
.window_title = window_title,
|
||||
.frame_size = info->frame_size,
|
||||
.always_on_top = options->always_on_top,
|
||||
.window_x = options->window_x,
|
||||
.window_y = options->window_y,
|
||||
@ -671,11 +687,6 @@ aoa_hid_end:
|
||||
.start_fps_counter = options->start_fps_counter,
|
||||
};
|
||||
|
||||
if (!sc_screen_init(&s->screen, &screen_params)) {
|
||||
goto end;
|
||||
}
|
||||
screen_initialized = true;
|
||||
|
||||
struct sc_frame_source *src = &s->video_decoder.frame_source;
|
||||
if (options->display_buffer) {
|
||||
sc_delay_buffer_init(&s->display_buffer, options->display_buffer,
|
||||
@ -684,19 +695,24 @@ aoa_hid_end:
|
||||
src = &s->display_buffer.frame_source;
|
||||
}
|
||||
|
||||
sc_frame_source_add_sink(src, &s->screen.frame_sink);
|
||||
|
||||
if (options->audio) {
|
||||
sc_audio_player_init(&s->audio_player, options->audio_buffer);
|
||||
sc_frame_source_add_sink(&s->audio_decoder.frame_source,
|
||||
&s->audio_player.frame_sink);
|
||||
if (!sc_screen_init(&s->screen, &screen_params)) {
|
||||
goto end;
|
||||
}
|
||||
screen_initialized = true;
|
||||
|
||||
sc_frame_source_add_sink(src, &s->screen.frame_sink);
|
||||
}
|
||||
|
||||
if (options->audio_playback) {
|
||||
sc_audio_player_init(&s->audio_player, options->audio_buffer,
|
||||
options->audio_output_buffer);
|
||||
sc_frame_source_add_sink(&s->audio_decoder.frame_source,
|
||||
&s->audio_player.frame_sink);
|
||||
}
|
||||
|
||||
#ifdef HAVE_V4L2
|
||||
if (options->v4l2_device) {
|
||||
if (!sc_v4l2_sink_init(&s->v4l2_sink, options->v4l2_device,
|
||||
info->frame_size)) {
|
||||
if (!sc_v4l2_sink_init(&s->v4l2_sink, options->v4l2_device)) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
@ -713,12 +729,15 @@ aoa_hid_end:
|
||||
}
|
||||
#endif
|
||||
|
||||
// now we consumed the header values, the socket receives the video stream
|
||||
// start the video demuxer
|
||||
if (!sc_demuxer_start(&s->video_demuxer)) {
|
||||
goto end;
|
||||
// Now that the header values have been consumed, the socket(s) will
|
||||
// receive the stream(s). Start the demuxer(s).
|
||||
|
||||
if (options->video) {
|
||||
if (!sc_demuxer_start(&s->video_demuxer)) {
|
||||
goto end;
|
||||
}
|
||||
video_demuxer_started = true;
|
||||
}
|
||||
video_demuxer_started = true;
|
||||
|
||||
if (options->audio) {
|
||||
if (!sc_demuxer_start(&s->audio_demuxer)) {
|
||||
@ -727,6 +746,39 @@ aoa_hid_end:
|
||||
audio_demuxer_started = true;
|
||||
}
|
||||
|
||||
// If the device screen is to be turned off, send the control message after
|
||||
// everything is set up
|
||||
if (options->control && options->turn_screen_off) {
|
||||
struct sc_control_msg msg;
|
||||
msg.type = SC_CONTROL_MSG_TYPE_SET_SCREEN_POWER_MODE;
|
||||
msg.set_screen_power_mode.mode = SC_SCREEN_POWER_MODE_OFF;
|
||||
|
||||
if (!sc_controller_push_msg(&s->controller, &msg)) {
|
||||
LOGW("Could not request 'set screen power mode'");
|
||||
}
|
||||
}
|
||||
|
||||
if (options->time_limit) {
|
||||
bool ok = sc_timeout_init(&s->timeout);
|
||||
if (!ok) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
timeout_initialized = true;
|
||||
|
||||
sc_tick deadline = sc_tick_now() + options->time_limit;
|
||||
static const struct sc_timeout_callbacks cbs = {
|
||||
.on_timeout = sc_timeout_on_timeout,
|
||||
};
|
||||
|
||||
ok = sc_timeout_start(&s->timeout, deadline, &cbs, NULL);
|
||||
if (!ok) {
|
||||
goto end;
|
||||
}
|
||||
|
||||
timeout_started = true;
|
||||
}
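The `--time-limit` support follows the same user-event pattern as the other `SC_EVENT_*` codes: a timeout runs in the background, its callback pushes a custom SDL event at the deadline, and `event_loop()` returns success when it sees `SC_EVENT_TIME_LIMIT_REACHED`. A reduced sketch of that flow (the event value matches the constant added to events.h earlier in this diff):

```c
#include <SDL2/SDL.h>

#define EVENT_TIME_LIMIT_REACHED (SDL_USEREVENT + 8)

// Called by the timeout when the deadline is reached
static void
on_timeout(void) {
    SDL_Event event = { .type = EVENT_TIME_LIMIT_REACHED };
    SDL_PushEvent(&event); // thread-safe, wakes up SDL_WaitEvent()
}

static void
event_loop(void) {
    SDL_Event event;
    while (SDL_WaitEvent(&event)) {
        if (event.type == EVENT_TIME_LIMIT_REACHED || event.type == SDL_QUIT) {
            return; // exit cleanly
        }
        // ... dispatch other events ...
    }
}
```
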
|
||||
|
||||
ret = event_loop(s);
|
||||
LOGD("quit...");
|
||||
|
||||
@ -735,6 +787,10 @@ aoa_hid_end:
|
||||
sc_screen_hide_window(&s->screen);
|
||||
|
||||
end:
|
||||
if (timeout_started) {
|
||||
sc_timeout_stop(&s->timeout);
|
||||
}
|
||||
|
||||
// The demuxer is not stopped explicitly, because it will stop by itself on
|
||||
// end-of-stream
|
||||
#ifdef HAVE_USB
|
||||
@ -770,6 +826,13 @@ end:
|
||||
sc_server_stop(&s->server);
|
||||
}
|
||||
|
||||
if (timeout_started) {
|
||||
sc_timeout_join(&s->timeout);
|
||||
}
|
||||
if (timeout_initialized) {
|
||||
sc_timeout_destroy(&s->timeout);
|
||||
}
|
||||
|
||||
// now that the sockets are shutdown, the demuxer and controller are
|
||||
// interrupted, we can join them
|
||||
if (video_demuxer_started) {
|
||||
|
267
app/src/screen.c
@ -56,6 +56,7 @@ static void
|
||||
set_window_size(struct sc_screen *screen, struct sc_size new_size) {
|
||||
assert(!screen->fullscreen);
|
||||
assert(!screen->maximized);
|
||||
assert(!screen->minimized);
|
||||
SDL_SetWindowSize(screen->window, new_size.width, new_size.height);
|
||||
}
|
||||
|
||||
@ -239,33 +240,6 @@ sc_screen_update_content_rect(struct sc_screen *screen) {
|
||||
}
|
||||
}
|
||||
|
||||
static inline SDL_Texture *
|
||||
create_texture(struct sc_screen *screen) {
|
||||
SDL_Renderer *renderer = screen->renderer;
|
||||
struct sc_size size = screen->frame_size;
|
||||
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
|
||||
SDL_TEXTUREACCESS_STREAMING,
|
||||
size.width, size.height);
|
||||
if (!texture) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (screen->mipmaps) {
|
||||
struct sc_opengl *gl = &screen->gl;
|
||||
|
||||
SDL_GL_BindTexture(texture, NULL, NULL);
|
||||
|
||||
// Enable trilinear filtering for downscaling
|
||||
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
|
||||
GL_LINEAR_MIPMAP_LINEAR);
|
||||
gl->TexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.f);
|
||||
|
||||
SDL_GL_UnbindTexture(texture);
|
||||
}
|
||||
|
||||
return texture;
|
||||
}
|
||||
|
||||
// render the texture to the renderer
|
||||
//
|
||||
// Set the update_content_rect flag if the window or content size may have
|
||||
@ -276,35 +250,11 @@ sc_screen_render(struct sc_screen *screen, bool update_content_rect) {
|
||||
sc_screen_update_content_rect(screen);
|
||||
}
|
||||
|
||||
SDL_RenderClear(screen->renderer);
|
||||
if (screen->rotation == 0) {
|
||||
SDL_RenderCopy(screen->renderer, screen->texture, NULL, &screen->rect);
|
||||
} else {
|
||||
// rotation in RenderCopyEx() is clockwise, while screen->rotation is
|
||||
// counterclockwise (to be consistent with --lock-video-orientation)
|
||||
int cw_rotation = (4 - screen->rotation) % 4;
|
||||
double angle = 90 * cw_rotation;
|
||||
|
||||
SDL_Rect *dstrect = NULL;
|
||||
SDL_Rect rect;
|
||||
if (screen->rotation & 1) {
|
||||
rect.x = screen->rect.x + (screen->rect.w - screen->rect.h) / 2;
|
||||
rect.y = screen->rect.y + (screen->rect.h - screen->rect.w) / 2;
|
||||
rect.w = screen->rect.h;
|
||||
rect.h = screen->rect.w;
|
||||
dstrect = &rect;
|
||||
} else {
|
||||
assert(screen->rotation == 2);
|
||||
dstrect = &screen->rect;
|
||||
}
|
||||
|
||||
SDL_RenderCopyEx(screen->renderer, screen->texture, NULL, dstrect,
|
||||
angle, NULL, 0);
|
||||
}
|
||||
SDL_RenderPresent(screen->renderer);
|
||||
enum sc_display_result res =
|
||||
sc_display_render(&screen->display, &screen->rect, screen->rotation);
|
||||
(void) res; // any error already logged
|
||||
}
|
||||
|
||||
|
||||
#if defined(__APPLE__) || defined(__WINDOWS__)
|
||||
# define CONTINUOUS_RESIZING_WORKAROUND
|
||||
#endif
|
||||
@ -335,7 +285,25 @@ sc_screen_frame_sink_open(struct sc_frame_sink *sink,
|
||||
(void) ctx;
|
||||
|
||||
struct sc_screen *screen = DOWNCAST(sink);
|
||||
(void) screen;
|
||||
|
||||
assert(ctx->width > 0 && ctx->width <= 0xFFFF);
|
||||
assert(ctx->height > 0 && ctx->height <= 0xFFFF);
|
||||
// screen->frame_size is never used before the event is pushed, and the
|
||||
// event acts as a memory barrier so it is safe without mutex
|
||||
screen->frame_size.width = ctx->width;
|
||||
screen->frame_size.height = ctx->height;
|
||||
|
||||
static SDL_Event event = {
|
||||
.type = SC_EVENT_SCREEN_INIT_SIZE,
|
||||
};
|
||||
|
||||
// Post the event on the UI thread (the texture must be created from there)
|
||||
int ret = SDL_PushEvent(&event);
|
||||
if (ret < 0) {
|
||||
LOGW("Could not post init size event: %s", SDL_GetError());
|
||||
return false;
|
||||
}
|
||||
|
||||
#ifndef NDEBUG
|
||||
screen->open = true;
|
||||
#endif
|
||||
@ -392,6 +360,7 @@ sc_screen_init(struct sc_screen *screen,
|
||||
screen->has_frame = false;
|
||||
screen->fullscreen = false;
|
||||
screen->maximized = false;
|
||||
screen->minimized = false;
|
||||
screen->mouse_capture_key_pressed = 0;
|
||||
|
||||
screen->req.x = params->window_x;
|
||||
@ -410,14 +379,10 @@ sc_screen_init(struct sc_screen *screen,
|
||||
goto error_destroy_frame_buffer;
|
||||
}
|
||||
|
||||
screen->frame_size = params->frame_size;
|
||||
screen->rotation = params->rotation;
|
||||
if (screen->rotation) {
|
||||
LOGI("Initial display rotation set to %u", screen->rotation);
|
||||
}
|
||||
struct sc_size content_size =
|
||||
get_rotated_size(screen->frame_size, screen->rotation);
|
||||
screen->content_size = content_size;
|
||||
|
||||
uint32_t window_flags = SDL_WINDOW_HIDDEN
|
||||
| SDL_WINDOW_RESIZABLE
|
||||
@ -437,46 +402,11 @@ sc_screen_init(struct sc_screen *screen,
|
||||
goto error_destroy_fps_counter;
|
||||
}
|
||||
|
||||
screen->renderer = SDL_CreateRenderer(screen->window, -1,
|
||||
SDL_RENDERER_ACCELERATED);
|
||||
if (!screen->renderer) {
|
||||
LOGE("Could not create renderer: %s", SDL_GetError());
|
||||
ok = sc_display_init(&screen->display, screen->window, params->mipmaps);
|
||||
if (!ok) {
|
||||
goto error_destroy_window;
|
||||
}
|
||||
|
||||
SDL_RendererInfo renderer_info;
|
||||
int r = SDL_GetRendererInfo(screen->renderer, &renderer_info);
|
||||
const char *renderer_name = r ? NULL : renderer_info.name;
|
||||
LOGI("Renderer: %s", renderer_name ? renderer_name : "(unknown)");
|
||||
|
||||
screen->mipmaps = false;
|
||||
|
||||
// starts with "opengl"
|
||||
bool use_opengl = renderer_name && !strncmp(renderer_name, "opengl", 6);
|
||||
if (use_opengl) {
|
||||
struct sc_opengl *gl = &screen->gl;
|
||||
sc_opengl_init(gl);
|
||||
|
||||
LOGI("OpenGL version: %s", gl->version);
|
||||
|
||||
if (params->mipmaps) {
|
||||
bool supports_mipmaps =
|
||||
sc_opengl_version_at_least(gl, 3, 0, /* OpenGL 3.0+ */
|
||||
2, 0 /* OpenGL ES 2.0+ */);
|
||||
if (supports_mipmaps) {
|
||||
LOGI("Trilinear filtering enabled");
|
||||
screen->mipmaps = true;
|
||||
} else {
|
||||
LOGW("Trilinear filtering disabled "
|
||||
"(OpenGL 3.0+ or ES 2.0+ required)");
|
||||
}
|
||||
} else {
|
||||
LOGI("Trilinear filtering disabled");
|
||||
}
|
||||
} else if (params->mipmaps) {
|
||||
LOGD("Trilinear filtering disabled (not an OpenGL renderer)");
|
||||
}
|
||||
|
||||
SDL_Surface *icon = scrcpy_icon_load();
|
||||
if (icon) {
|
||||
SDL_SetWindowIcon(screen->window, icon);
|
||||
@ -485,18 +415,10 @@ sc_screen_init(struct sc_screen *screen,
|
||||
LOGW("Could not load icon");
|
||||
}
|
||||
|
||||
LOGI("Initial texture: %" PRIu16 "x%" PRIu16, params->frame_size.width,
|
||||
params->frame_size.height);
|
||||
screen->texture = create_texture(screen);
|
||||
if (!screen->texture) {
|
||||
LOGE("Could not create texture: %s", SDL_GetError());
|
||||
goto error_destroy_renderer;
|
||||
}
|
||||
|
||||
screen->frame = av_frame_alloc();
|
||||
if (!screen->frame) {
|
||||
LOG_OOM();
|
||||
goto error_destroy_texture;
|
||||
goto error_destroy_display;
|
||||
}
|
||||
|
||||
struct sc_input_manager_params im_params = {
|
||||
@ -531,10 +453,8 @@ sc_screen_init(struct sc_screen *screen,
|
||||
|
||||
return true;
|
||||
|
||||
error_destroy_texture:
|
||||
SDL_DestroyTexture(screen->texture);
|
||||
error_destroy_renderer:
|
||||
SDL_DestroyRenderer(screen->renderer);
|
||||
error_destroy_display:
|
||||
sc_display_destroy(&screen->display);
|
||||
error_destroy_window:
|
||||
SDL_DestroyWindow(screen->window);
|
||||
error_destroy_fps_counter:
|
||||
@ -590,9 +510,8 @@ sc_screen_destroy(struct sc_screen *screen) {
|
||||
#ifndef NDEBUG
|
||||
assert(!screen->open);
|
||||
#endif
|
||||
sc_display_destroy(&screen->display);
|
||||
av_frame_free(&screen->frame);
|
||||
SDL_DestroyTexture(screen->texture);
|
||||
SDL_DestroyRenderer(screen->renderer);
|
||||
SDL_DestroyWindow(screen->window);
|
||||
sc_fps_counter_destroy(&screen->fps_counter);
|
||||
sc_frame_buffer_destroy(&screen->fb);
|
||||
@ -614,11 +533,11 @@ resize_for_content(struct sc_screen *screen, struct sc_size old_content_size,
|
||||
|
||||
static void
|
||||
set_content_size(struct sc_screen *screen, struct sc_size new_content_size) {
|
||||
if (!screen->fullscreen && !screen->maximized) {
|
||||
if (!screen->fullscreen && !screen->maximized && !screen->minimized) {
|
||||
resize_for_content(screen, screen->content_size, new_content_size);
|
||||
} else if (!screen->resize_pending) {
|
||||
// Store the windowed size to be able to compute the optimal size once
|
||||
// fullscreen and maximized are disabled
|
||||
// fullscreen/maximized/minimized are disabled
|
||||
screen->windowed_content_size = screen->content_size;
|
||||
screen->resize_pending = true;
|
||||
}
|
||||
@ -630,6 +549,7 @@ static void
|
||||
apply_pending_resize(struct sc_screen *screen) {
|
||||
assert(!screen->fullscreen);
|
||||
assert(!screen->maximized);
|
||||
assert(!screen->minimized);
|
||||
if (screen->resize_pending) {
|
||||
resize_for_content(screen, screen->windowed_content_size,
|
||||
screen->content_size);
|
||||
@ -655,47 +575,40 @@ sc_screen_set_rotation(struct sc_screen *screen, unsigned rotation) {
|
||||
sc_screen_render(screen, true);
|
||||
}
|
||||
|
||||
// recreate the texture and resize the window if the frame size has changed
|
||||
static bool
|
||||
prepare_for_frame(struct sc_screen *screen, struct sc_size new_frame_size) {
|
||||
if (screen->frame_size.width != new_frame_size.width
|
||||
|| screen->frame_size.height != new_frame_size.height) {
|
||||
// frame dimension changed, destroy texture
|
||||
SDL_DestroyTexture(screen->texture);
|
||||
sc_screen_init_size(struct sc_screen *screen) {
|
||||
// Before first frame
|
||||
assert(!screen->has_frame);
|
||||
|
||||
screen->frame_size = new_frame_size;
|
||||
// The requested size is passed via screen->frame_size
|
||||
|
||||
struct sc_size new_content_size =
|
||||
get_rotated_size(new_frame_size, screen->rotation);
|
||||
set_content_size(screen, new_content_size);
|
||||
struct sc_size content_size =
|
||||
get_rotated_size(screen->frame_size, screen->rotation);
|
||||
screen->content_size = content_size;
|
||||
|
||||
sc_screen_update_content_rect(screen);
|
||||
|
||||
LOGI("New texture: %" PRIu16 "x%" PRIu16,
|
||||
screen->frame_size.width, screen->frame_size.height);
|
||||
screen->texture = create_texture(screen);
|
||||
if (!screen->texture) {
|
||||
LOGE("Could not create texture: %s", SDL_GetError());
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
enum sc_display_result res =
|
||||
sc_display_set_texture_size(&screen->display, screen->frame_size);
|
||||
return res != SC_DISPLAY_RESULT_ERROR;
|
||||
}
|
||||
|
||||
// write the frame into the texture
|
||||
static void
|
||||
update_texture(struct sc_screen *screen, const AVFrame *frame) {
|
||||
SDL_UpdateYUVTexture(screen->texture, NULL,
|
||||
frame->data[0], frame->linesize[0],
|
||||
frame->data[1], frame->linesize[1],
|
||||
frame->data[2], frame->linesize[2]);
|
||||
|
||||
if (screen->mipmaps) {
|
||||
SDL_GL_BindTexture(screen->texture, NULL, NULL);
|
||||
screen->gl.GenerateMipmap(GL_TEXTURE_2D);
|
||||
SDL_GL_UnbindTexture(screen->texture);
|
||||
// recreate the texture and resize the window if the frame size has changed
|
||||
static enum sc_display_result
|
||||
prepare_for_frame(struct sc_screen *screen, struct sc_size new_frame_size) {
|
||||
if (screen->frame_size.width == new_frame_size.width
|
||||
&& screen->frame_size.height == new_frame_size.height) {
|
||||
return SC_DISPLAY_RESULT_OK;
|
||||
}
|
||||
|
||||
// frame dimension changed
|
||||
screen->frame_size = new_frame_size;
|
||||
|
||||
struct sc_size new_content_size =
|
||||
get_rotated_size(new_frame_size, screen->rotation);
|
||||
set_content_size(screen, new_content_size);
|
||||
|
||||
sc_screen_update_content_rect(screen);
|
||||
|
||||
return sc_display_set_texture_size(&screen->display, screen->frame_size);
|
||||
}
|
||||
|
||||
static bool
|
||||
@ -707,10 +620,23 @@ sc_screen_update_frame(struct sc_screen *screen) {
|
||||
sc_fps_counter_add_rendered_frame(&screen->fps_counter);
|
||||
|
||||
struct sc_size new_frame_size = {frame->width, frame->height};
|
||||
if (!prepare_for_frame(screen, new_frame_size)) {
|
||||
enum sc_display_result res = prepare_for_frame(screen, new_frame_size);
|
||||
if (res == SC_DISPLAY_RESULT_ERROR) {
|
||||
return false;
|
||||
}
|
||||
update_texture(screen, frame);
|
||||
if (res == SC_DISPLAY_RESULT_PENDING) {
|
||||
// Not an error, but do not continue
|
||||
return true;
|
||||
}
|
||||
|
||||
res = sc_display_update_texture(&screen->display, frame);
|
||||
if (res == SC_DISPLAY_RESULT_ERROR) {
|
||||
return false;
|
||||
}
|
||||
if (res == SC_DISPLAY_RESULT_PENDING) {
|
||||
// Not an error, but do not continue
|
||||
return true;
|
||||
}
|
||||
|
||||
if (!screen->has_frame) {
|
||||
screen->has_frame = true;
|
||||
@ -736,7 +662,7 @@ sc_screen_switch_fullscreen(struct sc_screen *screen) {
|
||||
}
|
||||
|
||||
screen->fullscreen = !screen->fullscreen;
|
||||
if (!screen->fullscreen && !screen->maximized) {
|
||||
if (!screen->fullscreen && !screen->maximized && !screen->minimized) {
|
||||
apply_pending_resize(screen);
|
||||
}
|
||||
|
||||
@ -746,7 +672,7 @@ sc_screen_switch_fullscreen(struct sc_screen *screen) {
|
||||
|
||||
void
|
||||
sc_screen_resize_to_fit(struct sc_screen *screen) {
|
||||
if (screen->fullscreen || screen->maximized) {
|
||||
if (screen->fullscreen || screen->maximized || screen->minimized) {
|
||||
return;
|
||||
}
|
||||
|
||||
@ -770,7 +696,7 @@ sc_screen_resize_to_fit(struct sc_screen *screen) {
|
||||
|
||||
void
|
||||
sc_screen_resize_to_pixel_perfect(struct sc_screen *screen) {
|
||||
if (screen->fullscreen) {
|
||||
if (screen->fullscreen || screen->minimized) {
|
||||
return;
|
||||
}
|
||||
|
||||
@ -790,22 +716,32 @@ sc_screen_is_mouse_capture_key(SDL_Keycode key) {
|
||||
return key == SDLK_LALT || key == SDLK_LGUI || key == SDLK_RGUI;
|
||||
}
|
||||
|
||||
void
|
||||
sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
bool
|
||||
sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
|
||||
bool relative_mode = sc_screen_is_relative_mode(screen);
|
||||
|
||||
switch (event->type) {
|
||||
case SC_EVENT_SCREEN_INIT_SIZE: {
|
||||
// The initial size is passed via screen->frame_size
|
||||
bool ok = sc_screen_init_size(screen);
|
||||
if (!ok) {
|
||||
LOGE("Could not initialize screen size");
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
case SC_EVENT_NEW_FRAME: {
|
||||
bool ok = sc_screen_update_frame(screen);
|
||||
if (!ok) {
|
||||
LOGW("Frame update failed\n");
|
||||
LOGE("Frame update failed\n");
|
||||
return false;
|
||||
}
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
case SDL_WINDOWEVENT:
|
||||
if (!screen->has_frame) {
|
||||
// Do nothing
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
switch (event->window.event) {
|
||||
case SDL_WINDOWEVENT_EXPOSED:
|
||||
@ -817,6 +753,9 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
case SDL_WINDOWEVENT_MAXIMIZED:
|
||||
screen->maximized = true;
|
||||
break;
|
||||
case SDL_WINDOWEVENT_MINIMIZED:
|
||||
screen->minimized = true;
|
||||
break;
|
||||
case SDL_WINDOWEVENT_RESTORED:
|
||||
if (screen->fullscreen) {
|
||||
// On Windows, in maximized+fullscreen, disabling
|
||||
@ -827,6 +766,7 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
break;
|
||||
}
|
||||
screen->maximized = false;
|
||||
screen->minimized = false;
|
||||
apply_pending_resize(screen);
|
||||
sc_screen_render(screen, true);
|
||||
break;
|
||||
@ -836,7 +776,7 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
}
|
||||
break;
|
||||
}
|
||||
return;
|
||||
return true;
|
||||
case SDL_KEYDOWN:
|
||||
if (relative_mode) {
|
||||
SDL_Keycode key = event->key.keysym.sym;
|
||||
@ -849,7 +789,7 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
screen->mouse_capture_key_pressed = 0;
|
||||
}
|
||||
// Mouse capture keys are never forwarded to the device
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
break;
|
||||
@ -865,7 +805,7 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
sc_screen_toggle_mouse_capture(screen);
|
||||
}
|
||||
// Mouse capture keys are never forwarded to the device
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
break;
|
||||
@ -875,7 +815,7 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
if (relative_mode && !sc_screen_get_mouse_capture(screen)) {
|
||||
// Do not forward to input manager, the mouse will be captured
|
||||
// on SDL_MOUSEBUTTONUP
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
break;
|
||||
case SDL_FINGERMOTION:
|
||||
@ -884,18 +824,19 @@ sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
|
||||
if (relative_mode) {
|
||||
// Touch events are not compatible with relative mode
|
||||
// (coordinates are not relative)
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
break;
|
||||
case SDL_MOUSEBUTTONUP:
|
||||
if (relative_mode && !sc_screen_get_mouse_capture(screen)) {
|
||||
sc_screen_set_mouse_capture(screen, true);
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
sc_input_manager_handle_event(&screen->im, event);
|
||||
return true;
|
||||
}
|
||||
|
||||
struct sc_point
|
||||
|
@ -9,6 +9,7 @@
|
||||
|
||||
#include "controller.h"
|
||||
#include "coords.h"
|
||||
#include "display.h"
|
||||
#include "fps_counter.h"
|
||||
#include "frame_buffer.h"
|
||||
#include "input_manager.h"
|
||||
@ -24,6 +25,7 @@ struct sc_screen {
|
||||
bool open; // track the open/close state to assert correct behavior
|
||||
#endif
|
||||
|
||||
struct sc_display display;
|
||||
struct sc_input_manager im;
|
||||
struct sc_frame_buffer fb;
|
||||
struct sc_fps_counter fps_counter;
|
||||
@ -39,9 +41,6 @@ struct sc_screen {
|
||||
} req;
|
||||
|
||||
SDL_Window *window;
|
||||
SDL_Renderer *renderer;
|
||||
SDL_Texture *texture;
|
||||
struct sc_opengl gl;
|
||||
struct sc_size frame_size;
|
||||
struct sc_size content_size; // rotated frame_size
|
||||
|
||||
@ -57,7 +56,7 @@ struct sc_screen {
|
||||
bool has_frame;
|
||||
bool fullscreen;
|
||||
bool maximized;
|
||||
bool mipmaps;
|
||||
bool minimized;
|
||||
|
||||
// To enable/disable mouse capture, a mouse capture key (LALT, LGUI or
|
||||
// RGUI) must be pressed. This variable tracks the pressed capture key.
|
||||
@ -78,7 +77,6 @@ struct sc_screen_params {
|
||||
const struct sc_shortcut_mods *shortcut_mods;
|
||||
|
||||
const char *window_title;
|
||||
struct sc_size frame_size;
|
||||
bool always_on_top;
|
||||
|
||||
int16_t window_x; // accepts SC_WINDOW_POSITION_UNDEFINED
|
||||
@ -136,8 +134,9 @@ void
|
||||
sc_screen_set_rotation(struct sc_screen *screen, unsigned rotation);
|
||||
|
||||
// react to SDL events
|
||||
void
|
||||
sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event);
|
||||
// If this function returns false, scrcpy must exit with an error.
|
||||
bool
|
||||
sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event);
|
||||
|
||||
// convert point from window coordinates to frame coordinates
|
||||
// x and y are expressed in pixels
|
||||
|
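Since `sc_screen_handle_event()` now returns `bool` (false meaning scrcpy must exit with an error, per the new comment in `screen.h` above), the caller has to propagate that result. A minimal sketch of what such an event loop could look like, assuming the scrcpy source tree for `screen.h`; the actual loop in `app/src/scrcpy.c` is more involved and may differ.

```c
// Minimal sketch (assumption): propagating the new bool result of
// sc_screen_handle_event(). SDL_WaitEvent() and SDL_QUIT are standard SDL2.
#include <stdbool.h>
#include <SDL2/SDL.h>

#include "screen.h"

static bool
run_event_loop(struct sc_screen *screen) {
    SDL_Event event;
    while (SDL_WaitEvent(&event)) {
        if (event.type == SDL_QUIT) {
            return true; // normal exit requested by the user
        }
        if (!sc_screen_handle_event(screen, &event)) {
            // As documented in screen.h: scrcpy must exit with an error
            return false;
        }
    }
    return false; // SDL_WaitEvent() failed
}
```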
108
app/src/server.c
@ -226,12 +226,16 @@ execute_server(struct sc_server *server,
|
||||
ADD_PARAM("scid=%08x", params->scid);
|
||||
ADD_PARAM("log_level=%s", log_level_to_server_string(params->log_level));
|
||||
|
||||
if (!params->video) {
|
||||
ADD_PARAM("video=false");
|
||||
}
|
||||
if (params->video_bit_rate) {
|
||||
ADD_PARAM("video_bit_rate=%" PRIu32, params->video_bit_rate);
|
||||
}
|
||||
if (!params->audio) {
|
||||
ADD_PARAM("audio=false");
|
||||
} else if (params->audio_bit_rate) {
|
||||
}
|
||||
if (params->audio_bit_rate) {
|
||||
ADD_PARAM("audio_bit_rate=%" PRIu32, params->audio_bit_rate);
|
||||
}
|
||||
if (params->video_codec != SC_CODEC_H264) {
|
||||
@ -242,6 +246,10 @@ execute_server(struct sc_server *server,
|
||||
ADD_PARAM("audio_codec=%s",
|
||||
sc_server_get_codec_name(params->audio_codec));
|
||||
}
|
||||
if (params->audio_source != SC_AUDIO_SOURCE_OUTPUT) {
|
||||
assert(params->audio_source == SC_AUDIO_SOURCE_MIC);
|
||||
ADD_PARAM("audio_source=mic");
|
||||
}
|
||||
if (params->max_size) {
|
||||
ADD_PARAM("max_size=%" PRIu16, params->max_size);
|
||||
}
|
||||
@ -441,9 +449,9 @@ sc_server_init(struct sc_server *server, const struct sc_server_params *params,
|
||||
static bool
|
||||
device_read_info(struct sc_intr *intr, sc_socket device_socket,
|
||||
struct sc_server_info *info) {
|
||||
unsigned char buf[SC_DEVICE_NAME_FIELD_LENGTH + 4];
|
||||
unsigned char buf[SC_DEVICE_NAME_FIELD_LENGTH];
|
||||
ssize_t r = net_recv_all_intr(intr, device_socket, buf, sizeof(buf));
|
||||
if (r < SC_DEVICE_NAME_FIELD_LENGTH + 4) {
|
||||
if (r < SC_DEVICE_NAME_FIELD_LENGTH) {
|
||||
LOGE("Could not retrieve device information");
|
||||
return false;
|
||||
}
|
||||
@ -451,9 +459,6 @@ device_read_info(struct sc_intr *intr, sc_socket device_socket,
|
||||
buf[SC_DEVICE_NAME_FIELD_LENGTH - 1] = '\0';
|
||||
memcpy(info->device_name, (char *) buf, sizeof(info->device_name));
|
||||
|
||||
unsigned char *fields = &buf[SC_DEVICE_NAME_FIELD_LENGTH];
|
||||
info->frame_size.width = sc_read16be(fields);
|
||||
info->frame_size.height = sc_read16be(&fields[2]);
|
||||
return true;
|
||||
}
|
||||
|
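For reference, here is how `device_read_info()` reads after this change, assembled from the hunks above: the handshake now carries only the 64-byte device name (defensively NUL-terminated), since the frame size is no longer sent up front. This is an excerpt-style reconstruction within `app/src/server.c`, relying on the existing `net_recv_all_intr()` and `LOGE()` helpers.

```c
// device_read_info() as it reads after this change (reconstructed from the
// hunks above): only the device name is part of the initial handshake.
static bool
device_read_info(struct sc_intr *intr, sc_socket device_socket,
                 struct sc_server_info *info) {
    unsigned char buf[SC_DEVICE_NAME_FIELD_LENGTH];
    ssize_t r = net_recv_all_intr(intr, device_socket, buf, sizeof(buf));
    if (r < SC_DEVICE_NAME_FIELD_LENGTH) {
        LOGE("Could not retrieve device information");
        return false;
    }
    // In case the device name is not null-terminated
    buf[SC_DEVICE_NAME_FIELD_LENGTH - 1] = '\0';
    memcpy(info->device_name, (char *) buf, sizeof(info->device_name));

    return true;
}
```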
||||
@ -466,6 +471,7 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
|
||||
const char *serial = server->serial;
|
||||
assert(serial);
|
||||
|
||||
bool video = server->params.video;
|
||||
bool audio = server->params.audio;
|
||||
bool control = server->params.control;
|
||||
|
||||
@ -473,9 +479,12 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
|
||||
sc_socket audio_socket = SC_SOCKET_NONE;
|
||||
sc_socket control_socket = SC_SOCKET_NONE;
|
||||
if (!tunnel->forward) {
|
||||
video_socket = net_accept_intr(&server->intr, tunnel->server_socket);
|
||||
if (video_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
if (video) {
|
||||
video_socket =
|
||||
net_accept_intr(&server->intr, tunnel->server_socket);
|
||||
if (video_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
|
||||
if (audio) {
|
||||
@ -506,35 +515,45 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
|
||||
|
||||
unsigned attempts = 100;
|
||||
sc_tick delay = SC_TICK_FROM_MS(100);
|
||||
video_socket = connect_to_server(server, attempts, delay, tunnel_host,
|
||||
tunnel_port);
|
||||
if (video_socket == SC_SOCKET_NONE) {
|
||||
sc_socket first_socket = connect_to_server(server, attempts, delay,
|
||||
tunnel_host, tunnel_port);
|
||||
if (first_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (video) {
|
||||
video_socket = first_socket;
|
||||
}
|
||||
|
||||
if (audio) {
|
||||
audio_socket = net_socket();
|
||||
if (audio_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
bool ok = net_connect_intr(&server->intr, audio_socket, tunnel_host,
|
||||
tunnel_port);
|
||||
if (!ok) {
|
||||
goto fail;
|
||||
if (!video) {
|
||||
audio_socket = first_socket;
|
||||
} else {
|
||||
audio_socket = net_socket();
|
||||
if (audio_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
bool ok = net_connect_intr(&server->intr, audio_socket, tunnel_host,
|
||||
tunnel_port);
|
||||
if (!ok) {
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (control) {
|
||||
// we know that the device is listening, we don't need several
|
||||
// attempts
|
||||
control_socket = net_socket();
|
||||
if (control_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
bool ok = net_connect_intr(&server->intr, control_socket,
|
||||
tunnel_host, tunnel_port);
|
||||
if (!ok) {
|
||||
goto fail;
|
||||
if (!video && !audio) {
|
||||
control_socket = first_socket;
|
||||
} else {
|
||||
control_socket = net_socket();
|
||||
if (control_socket == SC_SOCKET_NONE) {
|
||||
goto fail;
|
||||
}
|
||||
bool ok = net_connect_intr(&server->intr, control_socket,
|
||||
tunnel_host, tunnel_port);
|
||||
if (!ok) {
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -543,13 +562,17 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
|
||||
sc_adb_tunnel_close(tunnel, &server->intr, serial,
|
||||
server->device_socket_name);
|
||||
|
||||
sc_socket first_socket = video ? video_socket
|
||||
: audio ? audio_socket
|
||||
: control_socket;
|
||||
|
||||
// The sockets will be closed on stop if device_read_info() fails
|
||||
bool ok = device_read_info(&server->intr, video_socket, info);
|
||||
bool ok = device_read_info(&server->intr, first_socket, info);
|
||||
if (!ok) {
|
||||
goto fail;
|
||||
}
|
||||
|
||||
assert(video_socket != SC_SOCKET_NONE);
|
||||
assert(!video || video_socket != SC_SOCKET_NONE);
|
||||
assert(!audio || audio_socket != SC_SOCKET_NONE);
|
||||
assert(!control || control_socket != SC_SOCKET_NONE);
|
||||
|
||||
@ -771,6 +794,15 @@ sc_server_configure_tcpip_unknown_address(struct sc_server *server,
|
||||
return sc_server_connect_to_tcpip(server, ip_port);
|
||||
}
|
||||
|
||||
static void
|
||||
sc_server_kill_adb_if_requested(struct sc_server *server) {
|
||||
if (server->params.kill_adb_on_close) {
|
||||
LOGI("Killing adb server...");
|
||||
unsigned flags = SC_ADB_NO_STDOUT | SC_ADB_NO_STDERR | SC_ADB_NO_LOGERR;
|
||||
sc_adb_kill_server(&server->intr, flags);
|
||||
}
|
||||
}
|
||||
|
||||
static int
|
||||
run_server(void *data) {
|
||||
struct sc_server *server = data;
|
||||
@ -782,7 +814,7 @@ run_server(void *data) {
|
||||
// is parsed, so it is not output)
|
||||
bool ok = sc_adb_start_server(&server->intr, 0);
|
||||
if (!ok) {
|
||||
LOGE("Could not start adb daemon");
|
||||
LOGE("Could not start adb server");
|
||||
goto error_connection_failed;
|
||||
}
|
||||
|
||||
@ -933,8 +965,11 @@ run_server(void *data) {
|
||||
sc_mutex_unlock(&server->mutex);
|
||||
|
||||
// Interrupt sockets to wake up socket blocking calls on the server
|
||||
assert(server->video_socket != SC_SOCKET_NONE);
|
||||
net_interrupt(server->video_socket);
|
||||
|
||||
if (server->video_socket != SC_SOCKET_NONE) {
|
||||
// There is no video_socket if --no-video is set
|
||||
net_interrupt(server->video_socket);
|
||||
}
|
||||
|
||||
if (server->audio_socket != SC_SOCKET_NONE) {
|
||||
// There is no audio_socket if --no-audio is set
|
||||
@ -967,9 +1002,12 @@ run_server(void *data) {
|
||||
|
||||
sc_process_close(pid);
|
||||
|
||||
sc_server_kill_adb_if_requested(server);
|
||||
|
||||
return 0;
|
||||
|
||||
error_connection_failed:
|
||||
sc_server_kill_adb_if_requested(server);
|
||||
server->cbs->on_connection_failed(server, server->cbs_userdata);
|
||||
return -1;
|
||||
}
|
||||
|
@ -18,7 +18,6 @@
|
||||
#define SC_DEVICE_NAME_FIELD_LENGTH 64
|
||||
struct sc_server_info {
|
||||
char device_name[SC_DEVICE_NAME_FIELD_LENGTH];
|
||||
struct sc_size frame_size;
|
||||
};
|
||||
|
||||
struct sc_server_params {
|
||||
@ -27,6 +26,7 @@ struct sc_server_params {
|
||||
enum sc_log_level log_level;
|
||||
enum sc_codec video_codec;
|
||||
enum sc_codec audio_codec;
|
||||
enum sc_audio_source audio_source;
|
||||
const char *crop;
|
||||
const char *video_codec_options;
|
||||
const char *audio_codec_options;
|
||||
@ -42,6 +42,7 @@ struct sc_server_params {
|
||||
int8_t lock_video_orientation;
|
||||
bool control;
|
||||
uint32_t display_id;
|
||||
bool video;
|
||||
bool audio;
|
||||
bool show_touches;
|
||||
bool stay_awake;
|
||||
@ -57,6 +58,7 @@ struct sc_server_params {
|
||||
bool power_on;
|
||||
bool list_encoders;
|
||||
bool list_displays;
|
||||
bool kill_adb_on_close;
|
||||
};
|
||||
|
||||
struct sc_server {
|
||||
|
@ -7,8 +7,6 @@
|
||||
#include <stdbool.h>
|
||||
#include <libavcodec/avcodec.h>
|
||||
|
||||
typedef struct AVFrame AVFrame;
|
||||
|
||||
/**
|
||||
* Frame sink trait.
|
||||
*
|
||||
@ -19,6 +17,7 @@ struct sc_frame_sink {
|
||||
};
|
||||
|
||||
struct sc_frame_sink_ops {
|
||||
/* The codec context is valid until the sink is closed */
|
||||
bool (*open)(struct sc_frame_sink *sink, const AVCodecContext *ctx);
|
||||
void (*close)(struct sc_frame_sink *sink);
|
||||
bool (*push)(struct sc_frame_sink *sink, const AVFrame *frame);
|
||||
|
@ -5,9 +5,7 @@
|
||||
|
||||
#include <assert.h>
|
||||
#include <stdbool.h>
|
||||
|
||||
typedef struct AVCodec AVCodec;
|
||||
typedef struct AVPacket AVPacket;
|
||||
#include <libavcodec/avcodec.h>
|
||||
|
||||
/**
|
||||
* Packet sink trait.
|
||||
@ -19,8 +17,8 @@ struct sc_packet_sink {
|
||||
};
|
||||
|
||||
struct sc_packet_sink_ops {
|
||||
/* The codec instance is static, it is valid until the end of the program */
|
||||
bool (*open)(struct sc_packet_sink *sink, const AVCodec *codec);
|
||||
/* The codec context is valid until the sink is closed */
|
||||
bool (*open)(struct sc_packet_sink *sink, AVCodecContext *ctx);
|
||||
void (*close)(struct sc_packet_sink *sink);
|
||||
bool (*push)(struct sc_packet_sink *sink, const AVPacket *packet);
|
||||
|
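With this change, a packet sink's `open()` receives an `AVCodecContext` (valid until the sink is closed) instead of a static `AVCodec`. A hedged sketch of a minimal sink implementing the updated ops table; the `dummy_*` names are hypothetical and not part of the scrcpy sources, and it assumes the trait struct exposes an `ops` pointer as in the existing sinks.

```c
// Hypothetical minimal sink illustrating the new open() signature.
#include <stdbool.h>
#include <libavcodec/avcodec.h>

#include "trait/packet_sink.h"

struct dummy_sink {
    struct sc_packet_sink packet_sink; // must be the first member (downcast)
    unsigned packet_count;
};

static bool
dummy_sink_open(struct sc_packet_sink *sink, AVCodecContext *ctx) {
    (void) ctx; // valid until the sink is closed
    struct dummy_sink *dummy = (struct dummy_sink *) sink;
    dummy->packet_count = 0;
    return true;
}

static void
dummy_sink_close(struct sc_packet_sink *sink) {
    (void) sink;
}

static bool
dummy_sink_push(struct sc_packet_sink *sink, const AVPacket *packet) {
    (void) packet;
    struct dummy_sink *dummy = (struct dummy_sink *) sink;
    ++dummy->packet_count;
    return true;
}

static const struct sc_packet_sink_ops dummy_sink_ops = {
    .open = dummy_sink_open,
    .close = dummy_sink_close,
    .push = dummy_sink_push,
};

static void
dummy_sink_init(struct dummy_sink *dummy) {
    dummy->packet_sink.ops = &dummy_sink_ops;
}
```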
||||
|
@ -25,11 +25,11 @@ sc_packet_source_sinks_close_firsts(struct sc_packet_source *source,
|
||||
|
||||
bool
|
||||
sc_packet_source_sinks_open(struct sc_packet_source *source,
|
||||
const AVCodec *codec) {
|
||||
AVCodecContext *ctx) {
|
||||
assert(source->sink_count);
|
||||
for (unsigned i = 0; i < source->sink_count; ++i) {
|
||||
struct sc_packet_sink *sink = source->sinks[i];
|
||||
if (!sink->ops->open(sink, codec)) {
|
||||
if (!sink->ops->open(sink, ctx)) {
|
||||
sc_packet_source_sinks_close_firsts(source, i);
|
||||
return false;
|
||||
}
|
||||
|
@ -26,7 +26,7 @@ sc_packet_source_add_sink(struct sc_packet_source *source,
|
||||
|
||||
bool
|
||||
sc_packet_source_sinks_open(struct sc_packet_source *source,
|
||||
const AVCodec *codec);
|
||||
AVCodecContext *ctx);
|
||||
|
||||
void
|
||||
sc_packet_source_sinks_close(struct sc_packet_source *source);
|
||||
|
@ -83,7 +83,7 @@ scrcpy_otg(struct scrcpy_options *options) {
|
||||
#ifdef _WIN32
|
||||
// On Windows, only one process could open a USB device
|
||||
// <https://github.com/Genymobile/scrcpy/issues/2773>
|
||||
LOGI("Killing adb daemon (if any)...");
|
||||
LOGI("Killing adb server (if any)...");
|
||||
unsigned flags = SC_ADB_NO_STDOUT | SC_ADB_NO_STDERR | SC_ADB_NO_LOGERR;
|
||||
// uninterruptible (intr == NULL), but in practice it's very quick
|
||||
sc_adb_kill_server(NULL, flags);
|
||||
|
94
app/src/util/audiobuf.h
Normal file
@ -0,0 +1,94 @@
#ifndef SC_AUDIOBUF_H
#define SC_AUDIOBUF_H

#include "common.h"

#include <stdbool.h>
#include <stdint.h>

#include "util/bytebuf.h"

/**
 * Wrapper around bytebuf to read and write samples
 *
 * Each sample takes sample_size bytes.
 */
struct sc_audiobuf {
    struct sc_bytebuf buf;
    size_t sample_size;
};

static inline uint32_t
sc_audiobuf_to_samples(struct sc_audiobuf *buf, size_t bytes) {
    assert(bytes % buf->sample_size == 0);
    return bytes / buf->sample_size;
}

static inline size_t
sc_audiobuf_to_bytes(struct sc_audiobuf *buf, uint32_t samples) {
    return samples * buf->sample_size;
}

static inline bool
sc_audiobuf_init(struct sc_audiobuf *buf, size_t sample_size,
                 uint32_t capacity) {
    buf->sample_size = sample_size;
    return sc_bytebuf_init(&buf->buf, capacity * sample_size + 1);
}

static inline void
sc_audiobuf_read(struct sc_audiobuf *buf, uint8_t *to, uint32_t samples) {
    size_t bytes = sc_audiobuf_to_bytes(buf, samples);
    sc_bytebuf_read(&buf->buf, to, bytes);
}

static inline void
sc_audiobuf_skip(struct sc_audiobuf *buf, uint32_t samples) {
    size_t bytes = sc_audiobuf_to_bytes(buf, samples);
    sc_bytebuf_skip(&buf->buf, bytes);
}

static inline void
sc_audiobuf_write(struct sc_audiobuf *buf, const uint8_t *from,
                  uint32_t samples) {
    size_t bytes = sc_audiobuf_to_bytes(buf, samples);
    sc_bytebuf_write(&buf->buf, from, bytes);
}

static inline void
sc_audiobuf_prepare_write(struct sc_audiobuf *buf, const uint8_t *from,
                          uint32_t samples) {
    size_t bytes = sc_audiobuf_to_bytes(buf, samples);
    sc_bytebuf_prepare_write(&buf->buf, from, bytes);
}

static inline void
sc_audiobuf_commit_write(struct sc_audiobuf *buf, uint32_t samples) {
    size_t bytes = sc_audiobuf_to_bytes(buf, samples);
    sc_bytebuf_commit_write(&buf->buf, bytes);
}

static inline uint32_t
sc_audiobuf_can_read(struct sc_audiobuf *buf) {
    size_t bytes = sc_bytebuf_can_read(&buf->buf);
    return sc_audiobuf_to_samples(buf, bytes);
}

static inline uint32_t
sc_audiobuf_can_write(struct sc_audiobuf *buf) {
    size_t bytes = sc_bytebuf_can_write(&buf->buf);
    return sc_audiobuf_to_samples(buf, bytes);
}

static inline uint32_t
sc_audiobuf_capacity(struct sc_audiobuf *buf) {
    size_t bytes = sc_bytebuf_capacity(&buf->buf);
    return sc_audiobuf_to_samples(buf, bytes);
}

static inline void
sc_audiobuf_destroy(struct sc_audiobuf *buf) {
    sc_bytebuf_destroy(&buf->buf);
}

#endif
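A hedged usage sketch of the new `sc_audiobuf` wrapper, assuming the scrcpy build tree for `util/audiobuf.h`. The 4-byte sample size (one stereo 16-bit frame) is only an illustrative assumption.

```c
// Sketch: write then read a few samples through sc_audiobuf.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#include "util/audiobuf.h"

static void
audiobuf_example(void) {
    struct sc_audiobuf buf;
    // sample_size = 4 bytes (e.g. one stereo s16 frame), capacity = 512 samples
    bool ok = sc_audiobuf_init(&buf, 4, 512);
    assert(ok);

    uint8_t in[16] = {1, 2, 3, 4}; // 4 samples of 4 bytes
    sc_audiobuf_write(&buf, in, 4);
    assert(sc_audiobuf_can_read(&buf) == 4);

    uint8_t out[16];
    sc_audiobuf_read(&buf, out, 4);
    assert(sc_audiobuf_can_read(&buf) == 0);
    assert(!memcmp(in, out, sizeof(out)));

    sc_audiobuf_destroy(&buf);
}
```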
@ -30,7 +30,7 @@ sc_bytebuf_destroy(struct sc_bytebuf *buf) {
|
||||
void
|
||||
sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
|
||||
assert(len);
|
||||
assert(len <= sc_bytebuf_read_available(buf));
|
||||
assert(len <= sc_bytebuf_can_read(buf));
|
||||
assert(buf->tail != buf->head); // the buffer could not be empty
|
||||
|
||||
size_t right_limit = buf->tail < buf->head ? buf->head : buf->alloc_size;
|
||||
@ -50,7 +50,7 @@ sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
|
||||
void
|
||||
sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len) {
|
||||
assert(len);
|
||||
assert(len <= sc_bytebuf_read_available(buf));
|
||||
assert(len <= sc_bytebuf_can_read(buf));
|
||||
assert(buf->tail != buf->head); // the buffer could not be empty
|
||||
|
||||
buf->tail = (buf->tail + len) % buf->alloc_size;
|
||||
@ -78,7 +78,7 @@ sc_bytebuf_write_step1(struct sc_bytebuf *buf, size_t len) {
|
||||
void
|
||||
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len) {
|
||||
assert(len);
|
||||
assert(len <= sc_bytebuf_write_available(buf));
|
||||
assert(len <= sc_bytebuf_can_write(buf));
|
||||
|
||||
sc_bytebuf_write_step0(buf, from, len);
|
||||
sc_bytebuf_write_step1(buf, len);
|
||||
@ -99,6 +99,6 @@ sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
|
||||
|
||||
void
|
||||
sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len) {
|
||||
assert(len <= sc_bytebuf_write_available(buf));
|
||||
assert(len <= sc_bytebuf_can_write(buf));
|
||||
sc_bytebuf_write_step1(buf, len);
|
||||
}
|
||||
|
@ -86,7 +86,7 @@ sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len);
|
||||
* It is an error to read more bytes than available.
|
||||
*/
|
||||
static inline size_t
|
||||
sc_bytebuf_read_available(struct sc_bytebuf *buf) {
|
||||
sc_bytebuf_can_read(struct sc_bytebuf *buf) {
|
||||
return (buf->alloc_size + buf->head - buf->tail) % buf->alloc_size;
|
||||
}
|
||||
|
||||
@ -96,12 +96,12 @@ sc_bytebuf_read_available(struct sc_bytebuf *buf) {
|
||||
* It is an error to write more bytes than available.
|
||||
*/
|
||||
static inline size_t
|
||||
sc_bytebuf_write_available(struct sc_bytebuf *buf) {
|
||||
sc_bytebuf_can_write(struct sc_bytebuf *buf) {
|
||||
return (buf->alloc_size + buf->tail - buf->head - 1) % buf->alloc_size;
|
||||
}
|
||||
|
||||
/**
|
||||
* Return the actual capacity of the buffer (read available + write available)
|
||||
* Return the actual capacity of the buffer (can_read() + can_write())
|
||||
*/
|
||||
static inline size_t
|
||||
sc_bytebuf_capacity(struct sc_bytebuf *buf) {
|
||||
|
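The renamed `can_read()`/`can_write()` helpers make the ring-buffer arithmetic above easier to state: one byte of the allocation is kept unused so that a full buffer can be distinguished from an empty one, so `can_read() + can_write() == alloc_size - 1` for every head/tail position. A small standalone check of the two formulas (plain C, independent of the scrcpy headers; the concrete `alloc_size` value is arbitrary).

```c
// Standalone check of the sc_bytebuf arithmetic quoted above.
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

static size_t
can_read(size_t alloc_size, size_t head, size_t tail) {
    return (alloc_size + head - tail) % alloc_size;
}

static size_t
can_write(size_t alloc_size, size_t head, size_t tail) {
    return (alloc_size + tail - head - 1) % alloc_size;
}

int main(void) {
    const size_t alloc_size = 21; // arbitrary example allocation
    for (size_t head = 0; head < alloc_size; ++head) {
        for (size_t tail = 0; tail < alloc_size; ++tail) {
            // One byte is sacrificed to distinguish "full" from "empty"
            assert(can_read(alloc_size, head, tail)
                    + can_write(alloc_size, head, tail) == alloc_size - 1);
        }
    }
    printf("capacity is always %zu bytes\n", alloc_size - 1);
    return 0;
}
```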
77
app/src/util/timeout.c
Normal file
@ -0,0 +1,77 @@
#include "timeout.h"

#include <assert.h>

#include "log.h"

bool
sc_timeout_init(struct sc_timeout *timeout) {
    bool ok = sc_mutex_init(&timeout->mutex);
    if (!ok) {
        return false;
    }

    ok = sc_cond_init(&timeout->cond);
    if (!ok) {
        return false;
    }

    timeout->stopped = false;

    return true;
}

static int
run_timeout(void *data) {
    struct sc_timeout *timeout = data;
    sc_tick deadline = timeout->deadline;

    sc_mutex_lock(&timeout->mutex);
    bool timed_out = false;
    while (!timeout->stopped && !timed_out) {
        timed_out = !sc_cond_timedwait(&timeout->cond, &timeout->mutex,
                                       deadline);
    }
    sc_mutex_unlock(&timeout->mutex);

    timeout->cbs->on_timeout(timeout, timeout->cbs_userdata);

    return 0;
}

bool
sc_timeout_start(struct sc_timeout *timeout, sc_tick deadline,
                 const struct sc_timeout_callbacks *cbs, void *cbs_userdata) {
    bool ok = sc_thread_create(&timeout->thread, run_timeout, "scrcpy-timeout",
                               timeout);
    if (!ok) {
        LOGE("Timeout: could not start thread");
        return false;
    }

    timeout->deadline = deadline;

    assert(cbs && cbs->on_timeout);
    timeout->cbs = cbs;
    timeout->cbs_userdata = cbs_userdata;

    return true;
}

void
sc_timeout_stop(struct sc_timeout *timeout) {
    sc_mutex_lock(&timeout->mutex);
    timeout->stopped = true;
    sc_mutex_unlock(&timeout->mutex);
}

void
sc_timeout_join(struct sc_timeout *timeout) {
    sc_thread_join(&timeout->thread, NULL);
}

void
sc_timeout_destroy(struct sc_timeout *timeout) {
    sc_mutex_destroy(&timeout->mutex);
    sc_cond_destroy(&timeout->cond);
}
43
app/src/util/timeout.h
Normal file
@ -0,0 +1,43 @@
#ifndef SC_TIMEOUT_H
#define SC_TIMEOUT_H

#include "common.h"

#include <stdbool.h>

#include "thread.h"
#include "tick.h"

struct sc_timeout {
    sc_thread thread;
    sc_tick deadline;

    sc_mutex mutex;
    sc_cond cond;
    bool stopped;

    const struct sc_timeout_callbacks *cbs;
    void *cbs_userdata;
};

struct sc_timeout_callbacks {
    void (*on_timeout)(struct sc_timeout *timeout, void *userdata);
};

bool
sc_timeout_init(struct sc_timeout *timeout);

void
sc_timeout_destroy(struct sc_timeout *timeout);

bool
sc_timeout_start(struct sc_timeout *timeout, sc_tick deadline,
                 const struct sc_timeout_callbacks *cbs, void *cbs_userdata);

void
sc_timeout_stop(struct sc_timeout *timeout);

void
sc_timeout_join(struct sc_timeout *timeout);

#endif
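A hedged sketch of the intended lifecycle of the new `sc_timeout` helper: init, start with an absolute deadline, then stop/join/destroy. It assumes the scrcpy build tree; `sc_tick_now()` and `SC_TICK_FROM_SEC()` are the existing tick helpers, and the 5-second delay is an arbitrary example.

```c
// Sketch: arm a timeout whose callback runs on the "scrcpy-timeout" thread.
#include <stdbool.h>
#include <stdio.h>

#include "util/timeout.h"
#include "util/tick.h"

static void
on_timeout(struct sc_timeout *timeout, void *userdata) {
    (void) timeout;
    (void) userdata;
    // Runs on the dedicated "scrcpy-timeout" thread once the wait loop ends
    printf("timeout thread finished waiting\n");
}

static const struct sc_timeout_callbacks timeout_cbs = {
    .on_timeout = on_timeout,
};

static bool
timeout_example(void) {
    struct sc_timeout timeout;
    if (!sc_timeout_init(&timeout)) {
        return false;
    }

    sc_tick deadline = sc_tick_now() + SC_TICK_FROM_SEC(5); // arbitrary delay
    if (!sc_timeout_start(&timeout, deadline, &timeout_cbs, NULL)) {
        sc_timeout_destroy(&timeout);
        return false;
    }

    // ... do some work ...

    sc_timeout_stop(&timeout); // mark as stopped
    sc_timeout_join(&timeout); // wait for the timeout thread to terminate
    sc_timeout_destroy(&timeout);
    return true;
}
```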
@ -205,11 +205,13 @@ sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
|
||||
goto error_avformat_free_context;
|
||||
}
|
||||
|
||||
ostream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
|
||||
int r = avcodec_parameters_from_context(ostream->codecpar, ctx);
|
||||
if (r < 0) {
|
||||
goto error_avformat_free_context;
|
||||
}
|
||||
|
||||
// The codec is from the v4l2 encoder, not from the decoder
|
||||
ostream->codecpar->codec_id = encoder->id;
|
||||
ostream->codecpar->format = AV_PIX_FMT_YUV420P;
|
||||
ostream->codecpar->width = vs->frame_size.width;
|
||||
ostream->codecpar->height = vs->frame_size.height;
|
||||
|
||||
int ret = avio_open(&vs->format_ctx->pb, vs->device_name, AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
@ -224,8 +226,8 @@ sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
|
||||
goto error_avio_close;
|
||||
}
|
||||
|
||||
vs->encoder_ctx->width = vs->frame_size.width;
|
||||
vs->encoder_ctx->height = vs->frame_size.height;
|
||||
vs->encoder_ctx->width = ctx->width;
|
||||
vs->encoder_ctx->height = ctx->height;
|
||||
vs->encoder_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
|
||||
vs->encoder_ctx->time_base.num = 1;
|
||||
vs->encoder_ctx->time_base.den = 1;
|
||||
@ -341,16 +343,13 @@ sc_v4l2_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
|
||||
}
|
||||
|
||||
bool
|
||||
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
|
||||
struct sc_size frame_size) {
|
||||
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name) {
|
||||
vs->device_name = strdup(device_name);
|
||||
if (!vs->device_name) {
|
||||
LOGE("Could not strdup v4l2 device name");
|
||||
return false;
|
||||
}
|
||||
|
||||
vs->frame_size = frame_size;
|
||||
|
||||
static const struct sc_frame_sink_ops ops = {
|
||||
.open = sc_v4l2_frame_sink_open,
|
||||
.close = sc_v4l2_frame_sink_close,
|
||||
|
@ -19,7 +19,6 @@ struct sc_v4l2_sink {
|
||||
AVCodecContext *encoder_ctx;
|
||||
|
||||
char *device_name;
|
||||
struct sc_size frame_size;
|
||||
|
||||
sc_thread thread;
|
||||
sc_mutex mutex;
|
||||
@ -33,8 +32,7 @@ struct sc_v4l2_sink {
|
||||
};
|
||||
|
||||
bool
|
||||
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
|
||||
struct sc_size frame_size);
|
||||
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name);
|
||||
|
||||
void
|
||||
sc_v4l2_sink_destroy(struct sc_v4l2_sink *vs);
|
||||
|
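The v4l2 sink no longer stores a frame size of its own: the output stream parameters are initialized from the decoder's `AVCodecContext` and only the encoder-specific fields are overridden. A hedged sketch of that FFmpeg pattern in isolation (the function name is hypothetical; error handling is minimal).

```c
// Sketch of the pattern used above: copy parameters from the decoder context,
// then override the fields that belong to the v4l2 encoder.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static int
configure_output_stream(AVStream *ostream, const AVCodecContext *decoder_ctx,
                        const AVCodec *encoder) {
    int r = avcodec_parameters_from_context(ostream->codecpar, decoder_ctx);
    if (r < 0) {
        return r;
    }

    // The codec is from the v4l2 encoder, not from the decoder
    ostream->codecpar->codec_id = encoder->id;
    ostream->codecpar->format = AV_PIX_FMT_YUV420P;
    return 0;
}
```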
@ -217,6 +217,18 @@ static void test_get_ip_multiline_second_ok(void) {
|
||||
free(ip);
|
||||
}
|
||||
|
||||
static void test_get_ip_multiline_second_ok_without_cr(void) {
|
||||
char ip_route[] = "10.0.0.0/24 dev rmnet proto kernel scope link src "
|
||||
"10.0.0.3\n"
|
||||
"192.168.1.0/24 dev wlan0 proto kernel scope link src "
|
||||
"192.168.1.3\n";
|
||||
|
||||
char *ip = sc_adb_parse_device_ip(ip_route);
|
||||
assert(ip);
|
||||
assert(!strcmp(ip, "192.168.1.3"));
|
||||
free(ip);
|
||||
}
|
||||
|
||||
static void test_get_ip_no_wlan(void) {
|
||||
char ip_route[] = "192.168.1.0/24 dev rmnet proto kernel scope link src "
|
||||
"192.168.12.34\r\r\n";
|
||||
@ -259,6 +271,7 @@ int main(int argc, char *argv[]) {
|
||||
test_get_ip_single_line_with_trailing_space();
|
||||
test_get_ip_multiline_first_ok();
|
||||
test_get_ip_multiline_second_ok();
|
||||
test_get_ip_multiline_second_ok_without_cr();
|
||||
test_get_ip_no_wlan();
|
||||
test_get_ip_no_wlan_without_eol();
|
||||
test_get_ip_truncated();
|
||||
|
@ -13,23 +13,23 @@ void test_bytebuf_simple(void) {
|
||||
assert(ok);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello", sizeof("hello") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 5);
|
||||
assert(sc_bytebuf_can_read(&buf) == 5);
|
||||
|
||||
sc_bytebuf_read(&buf, data, 4);
|
||||
assert(!strncmp((char *) data, "hell", 4));
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) " world", sizeof(" world") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 7);
|
||||
assert(sc_bytebuf_can_read(&buf) == 7);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 8);
|
||||
assert(sc_bytebuf_can_read(&buf) == 8);
|
||||
|
||||
sc_bytebuf_read(&buf, &data[4], 8);
|
||||
assert(sc_bytebuf_read_available(&buf) == 0);
|
||||
assert(sc_bytebuf_can_read(&buf) == 0);
|
||||
|
||||
data[12] = '\0';
|
||||
assert(!strcmp((char *) data, "hello world!"));
|
||||
assert(sc_bytebuf_read_available(&buf) == 0);
|
||||
assert(sc_bytebuf_can_read(&buf) == 0);
|
||||
|
||||
sc_bytebuf_destroy(&buf);
|
||||
}
|
||||
@ -42,31 +42,31 @@ void test_bytebuf_boundaries(void) {
|
||||
assert(ok);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 6);
|
||||
assert(sc_bytebuf_can_read(&buf) == 6);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 12);
|
||||
assert(sc_bytebuf_can_read(&buf) == 12);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 18);
|
||||
assert(sc_bytebuf_can_read(&buf) == 18);
|
||||
|
||||
sc_bytebuf_read(&buf, data, 9);
|
||||
assert(!strncmp((char *) data, "hello hel", 9));
|
||||
assert(sc_bytebuf_read_available(&buf) == 9);
|
||||
assert(sc_bytebuf_can_read(&buf) == 9);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 14);
|
||||
assert(sc_bytebuf_can_read(&buf) == 14);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 15);
|
||||
assert(sc_bytebuf_can_read(&buf) == 15);
|
||||
|
||||
sc_bytebuf_skip(&buf, 3);
|
||||
assert(sc_bytebuf_read_available(&buf) == 12);
|
||||
assert(sc_bytebuf_can_read(&buf) == 12);
|
||||
|
||||
sc_bytebuf_read(&buf, data, 12);
|
||||
data[12] = '\0';
|
||||
assert(!strcmp((char *) data, "hello world!"));
|
||||
assert(sc_bytebuf_read_available(&buf) == 0);
|
||||
assert(sc_bytebuf_can_read(&buf) == 0);
|
||||
|
||||
sc_bytebuf_destroy(&buf);
|
||||
}
|
||||
@ -79,37 +79,37 @@ void test_bytebuf_two_steps_write(void) {
|
||||
assert(ok);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 6);
|
||||
assert(sc_bytebuf_can_read(&buf) == 6);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 12);
|
||||
assert(sc_bytebuf_can_read(&buf) == 12);
|
||||
|
||||
sc_bytebuf_prepare_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 12); // write not committed yet
|
||||
assert(sc_bytebuf_can_read(&buf) == 12); // write not committed yet
|
||||
|
||||
sc_bytebuf_read(&buf, data, 9);
|
||||
assert(!strncmp((char *) data, "hello hel", 3));
|
||||
assert(sc_bytebuf_read_available(&buf) == 3);
|
||||
assert(sc_bytebuf_can_read(&buf) == 3);
|
||||
|
||||
sc_bytebuf_commit_write(&buf, sizeof("hello ") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 9);
|
||||
assert(sc_bytebuf_can_read(&buf) == 9);
|
||||
|
||||
sc_bytebuf_prepare_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 9); // write not committed yet
|
||||
assert(sc_bytebuf_can_read(&buf) == 9); // write not committed yet
|
||||
|
||||
sc_bytebuf_commit_write(&buf, sizeof("world") - 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 14);
|
||||
assert(sc_bytebuf_can_read(&buf) == 14);
|
||||
|
||||
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
|
||||
assert(sc_bytebuf_read_available(&buf) == 15);
|
||||
assert(sc_bytebuf_can_read(&buf) == 15);
|
||||
|
||||
sc_bytebuf_skip(&buf, 3);
|
||||
assert(sc_bytebuf_read_available(&buf) == 12);
|
||||
assert(sc_bytebuf_can_read(&buf) == 12);
|
||||
|
||||
sc_bytebuf_read(&buf, data, 12);
|
||||
data[12] = '\0';
|
||||
assert(!strcmp((char *) data, "hello world!"));
|
||||
assert(sc_bytebuf_read_available(&buf) == 0);
|
||||
assert(sc_bytebuf_can_read(&buf) == 0);
|
||||
|
||||
sc_bytebuf_destroy(&buf);
|
||||
}
|
||||
|
@ -53,7 +53,7 @@ static void test_options(void) {
|
||||
"--max-size", "1024",
|
||||
"--lock-video-orientation=2", // optional arguments require '='
|
||||
// "--no-control" is not compatible with "--turn-screen-off"
|
||||
// "--no-display" is not compatible with "--fulscreen"
|
||||
// "--no-playback" is not compatible with "--fulscreen"
|
||||
"--port", "1234:1236",
|
||||
"--push-target", "/sdcard/Movies",
|
||||
"--record", "file",
|
||||
@ -108,8 +108,8 @@ static void test_options2(void) {
|
||||
char *argv[] = {
|
||||
"scrcpy",
|
||||
"--no-control",
|
||||
"--no-display",
|
||||
"--record", "file.mp4", // cannot enable --no-display without recording
|
||||
"--no-playback",
|
||||
"--record", "file.mp4", // cannot enable --no-playback without recording
|
||||
};
|
||||
|
||||
bool ok = scrcpy_parse_args(&args, ARRAY_LEN(argv), argv);
|
||||
@ -117,7 +117,8 @@ static void test_options2(void) {
|
||||
|
||||
const struct scrcpy_options *opts = &args.opts;
|
||||
assert(!opts->control);
|
||||
assert(!opts->display);
|
||||
assert(!opts->video_playback);
|
||||
assert(!opts->audio_playback);
|
||||
assert(!strcmp(opts->record_filename, "file.mp4"));
|
||||
assert(opts->record_format == SC_RECORD_FORMAT_MP4);
|
||||
}
|
||||
|
@ -1,79 +0,0 @@
|
||||
#include "common.h"
|
||||
|
||||
#include <assert.h>
|
||||
|
||||
#include "clock.h"
|
||||
|
||||
void test_small_rolling_sum(void) {
|
||||
struct sc_clock clock;
|
||||
sc_clock_init(&clock);
|
||||
|
||||
assert(clock.count == 0);
|
||||
assert(clock.left_sum.system == 0);
|
||||
assert(clock.left_sum.stream == 0);
|
||||
assert(clock.right_sum.system == 0);
|
||||
assert(clock.right_sum.stream == 0);
|
||||
|
||||
sc_clock_update(&clock, 2, 3);
|
||||
assert(clock.count == 1);
|
||||
assert(clock.left_sum.system == 0);
|
||||
assert(clock.left_sum.stream == 0);
|
||||
assert(clock.right_sum.system == 2);
|
||||
assert(clock.right_sum.stream == 3);
|
||||
|
||||
sc_clock_update(&clock, 10, 20);
|
||||
assert(clock.count == 2);
|
||||
assert(clock.left_sum.system == 2);
|
||||
assert(clock.left_sum.stream == 3);
|
||||
assert(clock.right_sum.system == 10);
|
||||
assert(clock.right_sum.stream == 20);
|
||||
|
||||
sc_clock_update(&clock, 40, 80);
|
||||
assert(clock.count == 3);
|
||||
assert(clock.left_sum.system == 2);
|
||||
assert(clock.left_sum.stream == 3);
|
||||
assert(clock.right_sum.system == 50);
|
||||
assert(clock.right_sum.stream == 100);
|
||||
|
||||
sc_clock_update(&clock, 400, 800);
|
||||
assert(clock.count == 4);
|
||||
assert(clock.left_sum.system == 12);
|
||||
assert(clock.left_sum.stream == 23);
|
||||
assert(clock.right_sum.system == 440);
|
||||
assert(clock.right_sum.stream == 880);
|
||||
}
|
||||
|
||||
void test_large_rolling_sum(void) {
|
||||
const unsigned half_range = SC_CLOCK_RANGE / 2;
|
||||
|
||||
struct sc_clock clock1;
|
||||
sc_clock_init(&clock1);
|
||||
for (unsigned i = 0; i < 5 * half_range; ++i) {
|
||||
sc_clock_update(&clock1, i, 2 * i + 1);
|
||||
}
|
||||
|
||||
struct sc_clock clock2;
|
||||
sc_clock_init(&clock2);
|
||||
for (unsigned i = 3 * half_range; i < 5 * half_range; ++i) {
|
||||
sc_clock_update(&clock2, i, 2 * i + 1);
|
||||
}
|
||||
|
||||
assert(clock1.count == SC_CLOCK_RANGE);
|
||||
assert(clock2.count == SC_CLOCK_RANGE);
|
||||
|
||||
// The values before the last SC_CLOCK_RANGE points in clock1 should have
|
||||
// no impact
|
||||
assert(clock1.left_sum.system == clock2.left_sum.system);
|
||||
assert(clock1.left_sum.stream == clock2.left_sum.stream);
|
||||
assert(clock1.right_sum.system == clock2.right_sum.system);
|
||||
assert(clock1.right_sum.stream == clock2.right_sum.stream);
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
(void) argc;
|
||||
(void) argv;
|
||||
|
||||
test_small_rolling_sum();
|
||||
test_large_rolling_sum();
|
||||
return 0;
|
||||
};
|
@ -16,6 +16,6 @@ cpu = 'i686'
|
||||
endian = 'little'
|
||||
|
||||
[properties]
|
||||
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-2/win32'
|
||||
prebuilt_sdl2 = 'SDL2-2.26.1/i686-w64-mingw32'
|
||||
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win32'
|
||||
prebuilt_sdl2 = 'SDL2-2.28.0/i686-w64-mingw32'
|
||||
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-Win32'
|
||||
|
@ -16,6 +16,6 @@ cpu = 'x86_64'
|
||||
endian = 'little'
|
||||
|
||||
[properties]
|
||||
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-2/win64'
|
||||
prebuilt_sdl2 = 'SDL2-2.26.1/x86_64-w64-mingw32'
|
||||
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win64'
|
||||
prebuilt_sdl2 = 'SDL2-2.28.0/x86_64-w64-mingw32'
|
||||
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-x64'
|
||||
|
137
doc/audio.md
Normal file
@ -0,0 +1,137 @@
# Audio

Audio forwarding is supported for devices with Android 11 or higher, and it is
enabled by default:

- For **Android 12 or newer**, it works out-of-the-box.
- For **Android 11**, you'll need to ensure that the device screen is unlocked
  when starting scrcpy. A fake popup will briefly appear to make the system
  think that the shell app is in the foreground. Without this, audio capture
  will fail.
- For **Android 10 or earlier**, audio cannot be captured and is automatically
  disabled.

If audio capture fails, then mirroring continues with video only (since audio is
enabled by default, it is not acceptable to make scrcpy fail if it is not
available), unless `--require-audio` is set.


## No audio

To disable audio:

```
scrcpy --no-audio
```

To disable only the audio playback, see [no playback](video.md#no-playback).

## Audio only

To play audio only, disable the video:

```bash
scrcpy --no-video
# interrupt with Ctrl+C
```

Without video, the audio latency is typically not critical, so it might be
interesting to add [buffering](#buffering) to minimize glitches:

```
scrcpy --no-video --audio-buffer=200
```

## Source

By default, the device audio output is forwarded.

It is possible to capture the device microphone instead:

```
scrcpy --audio-source=mic
```

For example, to use the device as a dictaphone and record a capture directly on
the computer:

```
scrcpy --audio-source=mic --no-video --no-playback --record=file.opus
```


## Codec

The audio codec can be selected. The possible values are `opus` (default), `aac`
and `raw` (uncompressed PCM 16-bit LE):

```bash
scrcpy --audio-codec=opus  # default
scrcpy --audio-codec=aac
scrcpy --audio-codec=raw
```

Several encoders may be available on the device. They can be listed by:

```bash
scrcpy --list-encoders
```

To select a specific encoder:

```
scrcpy --audio-codec=opus --audio-encoder='c2.android.opus.encoder'
```

For advanced usage, to pass arbitrary parameters to the [`MediaFormat`],
check `--audio-codec-options` in the manpage or in `scrcpy --help`.

[`MediaFormat`]: https://developer.android.com/reference/android/media/MediaFormat


## Bit rate

The default audio bit-rate is 128Kbps. To change it:

```bash
scrcpy --audio-bit-rate=64K
scrcpy --audio-bit-rate=64000  # equivalent
```

_This parameter does not apply to RAW audio codec (`--audio-codec=raw`)._


## Buffering

Audio buffering is unavoidable. It must be kept small enough so that the latency
is acceptable, but large enough to minimize buffer underrun (causing audio
glitches).

The default buffer size is set to 50ms. It can be adjusted:

```bash
scrcpy --audio-buffer=40   # smaller than default
scrcpy --audio-buffer=100  # higher than default
```

Note that this option changes the _target_ buffering. It is possible that this
target buffering might not be reached (typically on frequent buffer underrun).

If you don't interact with the device (to watch a video for example), a higher
latency (for both [video](video.md#buffering) and audio) might be preferable to
avoid glitches and smooth the playback:

```
scrcpy --display-buffer=200 --audio-buffer=200
```

It is also possible to configure another audio buffer (the audio output buffer),
by default set to 5ms. Don't change it, unless you get some [robotic and glitchy
sound][#3793]:

```bash
# Only if absolutely necessary
scrcpy --audio-output-buffer=10
```

[#3793]: https://github.com/Genymobile/scrcpy/issues/3793
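To put the millisecond values above into perspective, a back-of-the-envelope conversion to bytes, assuming 48 kHz stereo 16-bit PCM (the uncompressed format mentioned for `--audio-codec=raw`); the sample rate is an assumption, not taken from this page.

```c
// Rough size of the audio buffering target in bytes (assumed 48 kHz stereo s16).
#include <stdio.h>

int main(void) {
    const int sample_rate = 48000;     // Hz (assumption)
    const int bytes_per_frame = 2 * 2; // 2 channels x 16-bit samples
    const int buffer_ms = 50;          // default --audio-buffer target

    int bytes = sample_rate * buffer_ms / 1000 * bytes_per_frame;
    printf("%d ms of audio ~= %d bytes\n", buffer_ms, bytes); // 9600
    return 0;
}
```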
@ -2,57 +2,16 @@
|
||||
|
||||
Here are the instructions to build _scrcpy_ (client and server).
|
||||
|
||||
|
||||
## Simple
|
||||
|
||||
If you just want to install the latest release from `master`, follow this
|
||||
simplified process.
|
||||
|
||||
First, you need to install the required packages:
|
||||
|
||||
```bash
|
||||
# for Debian/Ubuntu
|
||||
sudo apt install ffmpeg libsdl2-2.0-0 adb wget \
|
||||
gcc git pkg-config meson ninja-build libsdl2-dev \
|
||||
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
|
||||
libswresample-dev libusb-1.0-0 libusb-1.0-0-dev
|
||||
```
|
||||
|
||||
Then clone the repo and execute the installation script
|
||||
([source](install_release.sh)):
|
||||
|
||||
```bash
|
||||
git clone https://github.com/Genymobile/scrcpy
|
||||
cd scrcpy
|
||||
./install_release.sh
|
||||
```
|
||||
|
||||
When a new release is out, update the repo and reinstall:
|
||||
|
||||
```bash
|
||||
git pull
|
||||
./install_release.sh
|
||||
```
|
||||
|
||||
To uninstall:
|
||||
|
||||
```bash
|
||||
sudo ninja -Cbuild-auto uninstall
|
||||
```
|
||||
|
||||
If you just want to build and install the latest release, follow the simplified
|
||||
process described in [doc/linux.md](linux.md).
|
||||
|
||||
## Branches
|
||||
|
||||
### `master`
|
||||
|
||||
The `master` branch concerns the latest release, and is the home page of the
|
||||
project on GitHub.
|
||||
|
||||
|
||||
### `dev`
|
||||
|
||||
`dev` is the current development branch. Every commit present in `dev` will be
|
||||
in the next release.
|
||||
There are two main branches:
|
||||
- `master`: contains the latest release. It is the home page of the project on
|
||||
GitHub.
|
||||
- `dev`: the current development branch. Every commit present in `dev` will be
|
||||
in the next release.
|
||||
|
||||
If you want to contribute code, please base your commits on the latest `dev`
|
||||
branch.
|
||||
@ -69,6 +28,8 @@ the following files to a directory accessible from your `PATH`:
|
||||
- `AdbWinApi.dll`
|
||||
- `AdbWinUsbApi.dll`
|
||||
|
||||
It is also available in scrcpy releases.
|
||||
|
||||
The client requires [FFmpeg] and [LibSDL2]. Just follow the instructions.
|
||||
|
||||
[adb]: https://developer.android.com/studio/command-line/adb.html
|
||||
@ -272,10 +233,10 @@ install` must be run as root)._
|
||||
|
||||
#### Option 2: Use prebuilt server
|
||||
|
||||
- [`scrcpy-server-v1.25`][direct-scrcpy-server]
|
||||
<sub>SHA-256: `ce0306c7bbd06ae72f6d06f7ec0ee33774995a65de71e0a83813ecb67aec9bdb`</sub>
|
||||
- [`scrcpy-server-v2.0`][direct-scrcpy-server]
|
||||
<sub>SHA-256: `9e241615f578cd690bb43311000debdecf6a9c50a7082b001952f18f6f21ddc2`</sub>
|
||||
|
||||
[direct-scrcpy-server]: https://github.com/Genymobile/scrcpy/releases/download/v1.25/scrcpy-server-v1.25
|
||||
[direct-scrcpy-server]: https://github.com/Genymobile/scrcpy/releases/download/v2.0/scrcpy-server-v2.0
|
||||
|
||||
Download the prebuilt server somewhere, and specify its path during the Meson
|
||||
configuration:
|
||||
@ -314,7 +275,8 @@ This installs several files:
|
||||
- `/usr/local/share/zsh/site-functions/_scrcpy` (zsh completion)
|
||||
- `/usr/local/share/bash-completion/completions/scrcpy` (bash completion)
|
||||
|
||||
You can then [run](README.md#run) `scrcpy`.
|
||||
You can then run `scrcpy`.
|
||||
|
||||
|
||||
### Uninstall
|
||||
|
149
doc/control.md
Normal file
@ -0,0 +1,149 @@
|
||||
# Control
|
||||
|
||||
## Read-only
|
||||
|
||||
To disable controls (everything which can interact with the device: input keys,
|
||||
mouse events, drag&drop files):
|
||||
|
||||
```bash
|
||||
scrcpy --no-control
|
||||
scrcpy -n # short version
|
||||
```
|
||||
|
||||
|
||||
## Text injection preference
|
||||
|
||||
Two kinds of [events][textevents] are generated when typing text:
|
||||
- _key events_, signaling that a key is pressed or released;
|
||||
- _text events_, signaling that a text has been entered.
|
||||
|
||||
By default, letters are injected using key events, so that the keyboard behaves
|
||||
as expected in games (typically for WASD keys).
|
||||
|
||||
But this may [cause issues][prefertext]. If you encounter such a problem, you
|
||||
can avoid it by:
|
||||
|
||||
```bash
|
||||
scrcpy --prefer-text
|
||||
```
|
||||
|
||||
(but this will break keyboard behavior in games)
|
||||
|
||||
On the contrary, you could force to always inject raw key events:
|
||||
|
||||
```bash
|
||||
scrcpy --raw-key-events
|
||||
```
|
||||
|
||||
These options have no effect on HID keyboard (all key events are sent as
|
||||
scancodes in this mode).
|
||||
|
||||
[textevents]: https://blog.rom1v.com/2018/03/introducing-scrcpy/#handle-text-input
|
||||
[prefertext]: https://github.com/Genymobile/scrcpy/issues/650#issuecomment-512945343
|
||||
|
||||
|
||||
## Copy-paste
|
||||
|
||||
Any time the Android clipboard changes, it is automatically synchronized to the
|
||||
computer clipboard.
|
||||
|
||||
Any <kbd>Ctrl</kbd> shortcut is forwarded to the device. In particular:
|
||||
- <kbd>Ctrl</kbd>+<kbd>c</kbd> typically copies
|
||||
- <kbd>Ctrl</kbd>+<kbd>x</kbd> typically cuts
|
||||
- <kbd>Ctrl</kbd>+<kbd>v</kbd> typically pastes (after computer-to-device
|
||||
clipboard synchronization)
|
||||
|
||||
This typically works as you expect.
|
||||
|
||||
The actual behavior depends on the active application though. For example,
|
||||
_Termux_ sends SIGINT on <kbd>Ctrl</kbd>+<kbd>c</kbd> instead, and _K-9 Mail_
|
||||
composes a new message.
|
||||
|
||||
To copy, cut and paste in such cases (but only supported on Android >= 7):
|
||||
- <kbd>MOD</kbd>+<kbd>c</kbd> injects `COPY`
|
||||
- <kbd>MOD</kbd>+<kbd>x</kbd> injects `CUT`
|
||||
- <kbd>MOD</kbd>+<kbd>v</kbd> injects `PASTE` (after computer-to-device
|
||||
clipboard synchronization)
|
||||
|
||||
In addition, <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>v</kbd> injects the computer
|
||||
clipboard text as a sequence of key events. This is useful when the component
|
||||
does not accept text pasting (for example in _Termux_), but it can break
|
||||
non-ASCII content.
|
||||
|
||||
**WARNING:** Pasting the computer clipboard to the device (either via
|
||||
<kbd>Ctrl</kbd>+<kbd>v</kbd> or <kbd>MOD</kbd>+<kbd>v</kbd>) copies the content
|
||||
into the Android clipboard. As a consequence, any Android application could read
|
||||
its content. You should avoid pasting sensitive content (like passwords) that
|
||||
way.
|
||||
|
||||
Some Android devices do not behave as expected when setting the device clipboard
|
||||
programmatically. An option `--legacy-paste` is provided to change the behavior
|
||||
of <kbd>Ctrl</kbd>+<kbd>v</kbd> and <kbd>MOD</kbd>+<kbd>v</kbd> so that they
|
||||
also inject the computer clipboard text as a sequence of key events (the same
|
||||
way as <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>v</kbd>).
|
||||
|
||||
To disable automatic clipboard synchronization, use
|
||||
`--no-clipboard-autosync`.
|
||||
|
||||
## Pinch-to-zoom
|
||||
|
||||
To simulate "pinch-to-zoom": <kbd>Ctrl</kbd>+_click-and-move_.
|
||||
|
||||
More precisely, hold down <kbd>Ctrl</kbd> while pressing the left-click button.
|
||||
Until the left-click button is released, all mouse movements scale and rotate
|
||||
the content (if supported by the app) relative to the center of the screen.
|
||||
|
||||
Technically, _scrcpy_ generates additional touch events from a "virtual finger"
|
||||
at a location inverted through the center of the screen.
|
||||
|
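The "virtual finger" described above is just the real pointer position reflected through the screen center (p' = 2c - p). A tiny hypothetical helper to make that concrete; it is not taken from the scrcpy sources.

```c
// Hypothetical helper: reflect a point through the screen center.
#include <stdint.h>

struct point {
    int32_t x;
    int32_t y;
};

static struct point
reflect_through_center(struct point center, struct point p) {
    struct point result = {
        .x = 2 * center.x - p.x,
        .y = 2 * center.y - p.y,
    };
    return result;
}
```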
||||
|
||||
## Key repeat
|
||||
|
||||
By default, holding a key down generates repeated key events. This can cause
|
||||
performance problems in some games, where these events are useless anyway.
|
||||
|
||||
To avoid forwarding repeated key events:
|
||||
|
||||
```bash
|
||||
scrcpy --no-key-repeat
|
||||
```
|
||||
|
||||
This option has no effect on HID keyboard (key repeat is handled by Android
|
||||
directly in this mode).
|
||||
|
||||
|
||||
## Right-click and middle-click
|
||||
|
||||
By default, right-click triggers BACK (or POWER on) and middle-click triggers
|
||||
HOME. To disable these shortcuts and forward the clicks to the device instead:
|
||||
|
||||
```bash
|
||||
scrcpy --forward-all-clicks
|
||||
```
|
||||
|
||||
## File drop
|
||||
|
||||
### Install APK
|
||||
|
||||
To install an APK, drag & drop an APK file (ending with `.apk`) to the _scrcpy_
|
||||
window.
|
||||
|
||||
There is no visual feedback, a log is printed to the console.
|
||||
|
||||
|
||||
### Push file to device
|
||||
|
||||
To push a file to `/sdcard/Download/` on the device, drag & drop a (non-APK)
|
||||
file to the _scrcpy_ window.
|
||||
|
||||
There is no visual feedback, a log is printed to the console.
|
||||
|
||||
The target directory can be changed on start:
|
||||
|
||||
```bash
|
||||
scrcpy --push-target=/sdcard/Movies/
|
||||
```
|
||||
|
||||
## Physical keyboard and mouse simulation
|
||||
|
||||
See the dedicated [HID/OTG](hid-otg.md) page.
|
488
doc/develop.md
Normal file
@ -0,0 +1,488 @@
|
||||
# scrcpy for developers
|
||||
|
||||
## Overview
|
||||
|
||||
This application is composed of two parts:
|
||||
- the server (`scrcpy-server`), to be executed on the device,
|
||||
- the client (the `scrcpy` binary), executed on the host computer.
|
||||
|
||||
The client is responsible to push the server to the device and start its
|
||||
execution.
|
||||
|
||||
The client and the server establish communication using separate sockets for
|
||||
video, audio and controls. Any of them may be disabled (but not all), so
|
||||
there are 1, 2 or 3 socket(s).
|
||||
|
||||
The server initially sends the device name on the first socket (it is used for
|
||||
the scrcpy window title), then each socket is used for its own purpose. All
|
||||
reads and writes are performed from a dedicated thread for each socket, both on
|
||||
the client and on the server.
|
||||
|
||||
If video is enabled, then the server sends a raw video stream (H.264 by default)
|
||||
of the device screen, with some additional headers for each packet. The client
|
||||
decodes the video frames, and displays them as soon as possible, without
|
||||
buffering (unless `--display-buffer=delay` is specified) to minimize latency.
|
||||
The client is not aware of the device rotation (which is handled by the server),
|
||||
it just knows the dimensions of the video frames it receives.
|
||||
|
||||
Similarly, if audio is enabled, then the server sends a raw audio stream (OPUS
|
||||
by default) of the device audio output (or the microphone if
|
||||
`--audio-source=mic` is specified), with some additional headers for each
|
||||
packet. The client decodes the stream and attempts to keep latency minimal by
maintaining an average buffering level. The [blog post][scrcpy2] of the scrcpy v2.0
|
||||
release gives more details about the audio feature.
|
||||
|
||||
If control is enabled, then the client captures relevant keyboard and mouse
|
||||
events, which it transmits to the server, which injects them into the device. This
is the only socket used in both directions: input events are sent from
|
||||
the client to the device, and when the device clipboard changes, the new content
|
||||
is sent from the device to the client to support seamless copy-paste.
|
||||
|
||||
[scrcpy2]: https://blog.rom1v.com/2023/03/scrcpy-2-0-with-audio/
|
||||
|
||||
Note that the client-server roles are expressed at the application level:
|
||||
|
||||
- the server _serves_ video and audio streams, and handles requests from the
|
||||
client,
|
||||
- the client _controls_ the device through the server.
|
||||
|
||||
However, by default (when `--force-adb-forward` is not set), the roles are
|
||||
reversed at the network level:
|
||||
|
||||
- the client opens a server socket and listens on a port before starting the
|
||||
server,
|
||||
- the server connects to the client.
|
||||
|
||||
This role inversion guarantees that the connection will not fail due to a race
condition, without the need for polling.
|
||||
|
||||
|
||||
## Server
|
||||
|
||||
|
||||
### Privileges
|
||||
|
||||
Capturing the screen requires some privileges, which are granted to `shell`.
|
||||
|
||||
The server is a Java application (with a [`public static void main(String...
|
||||
args)`][main] method), compiled against the Android framework, and executed as
|
||||
`shell` on the Android device.
|
||||
|
||||
[main]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Server.java#L193
|
||||
|
||||
To run such a Java application, the classes must be [_dexed_][dex] (typically,
|
||||
to `classes.dex`). If `my.package.MainClass` is the main class, compiled to
|
||||
`classes.dex`, pushed to the device in `/data/local/tmp`, then it can be run
|
||||
with:
|
||||
|
||||
    adb shell CLASSPATH=/data/local/tmp/classes.dex app_process / my.package.MainClass
|
||||
|
||||
_The path `/data/local/tmp` is a good candidate to push the server, since it's
|
||||
readable and writable by `shell`, but not world-writable, so a malicious
|
||||
application may not replace the server just before the client executes it._
|
||||
|
||||
Instead of a raw _dex_ file, `app_process` accepts a _jar_ containing
|
||||
`classes.dex` (e.g. an [APK]). For simplicity, and to benefit from the gradle
|
||||
build system, the server is built to an (unsigned) APK (renamed to
|
||||
`scrcpy-server.jar`).
|
||||
|
||||
[dex]: https://en.wikipedia.org/wiki/Dalvik_(software)
|
||||
[apk]: https://en.wikipedia.org/wiki/Android_application_package
|
||||
|
||||
|
||||
### Hidden methods
|
||||
|
||||
Although the server is compiled against the Android framework, [hidden] methods and classes are
|
||||
not directly accessible (and they may differ from one Android version to
|
||||
another).
|
||||
|
||||
They can still be called using reflection, though. The communication with hidden
components is provided by [_wrapper_ classes][wrappers] and [aidl].
|
||||
|
||||
[hidden]: https://stackoverflow.com/a/31908373/1987178
|
||||
[wrappers]: https://github.com/Genymobile/scrcpy/tree/master/server/src/main/java/com/genymobile/scrcpy/wrappers
|
||||
[aidl]: https://github.com/Genymobile/scrcpy/tree/master/server/src/main/aidl
|
||||
|
||||
|
||||
|
||||
### Execution
|
||||
|
||||
The client starts the server by essentially executing the following
|
||||
commands:
|
||||
|
||||
```bash
|
||||
adb push scrcpy-server /data/local/tmp/scrcpy-server.jar
|
||||
adb forward tcp:27183 localabstract:scrcpy
|
||||
adb shell CLASSPATH=/data/local/tmp/scrcpy-server.jar app_process / com.genymobile.scrcpy.Server 2.1
|
||||
```
|
||||
|
||||
The first argument (`2.1` in the example) is the client scrcpy version. The
|
||||
server fails if the client and the server do not have the exact same version.
|
||||
The protocol between the client and the server may change from version to
|
||||
version (see [protocol](#protocol) below), and there is no backward or forward
|
||||
compatibility (there is no point in using different client and server versions).
This check makes it possible to detect a misconfiguration (running an older or newer server
|
||||
by mistake).
|
||||
|
||||
It is followed by any number of arguments, in the form of `key=value` pairs.
|
||||
Their order is irrelevant. The possible keys and associated value types can be
|
||||
found in the [server][server-options] and [client][client-options] code.
|
||||
|
||||
[server-options]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Options.java#L181
|
||||
[client-options]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/app/src/server.c#L226
|
||||
|
||||
For example, if we execute `scrcpy -m1920 --no-audio`, then the server
|
||||
execution will look like this:
|
||||
|
||||
```bash
|
||||
# scid is a random number to identify different clients running on the same device
|
||||
adb shell CLASSPATH=/data/local/tmp/scrcpy-server.jar app_process / com.genymobile.scrcpy.Server 2.1 scid=12345678 log_level=info audio=false max_size=1920
|
||||
```
|
||||
|
||||
### Components
|
||||
|
||||
When the server starts, its [`main()`][main] method is executed (on the "main" thread).
It parses the arguments, establishes the connection with the client and starts
the other "components", listed below (a rough sketch follows the list):
|
||||
- the **video** streamer: it captures the device screen and sends encoded video
|
||||
packets on the _video_ socket (from the _video_ thread).
|
||||
- the **audio** streamer: it uses several threads to capture raw packets,
|
||||
submits them for encoding and retrieves encoded packets, which it sends on the
|
||||
_audio_ socket.
|
||||
- the **controller**: it receives _control messages_ (typically input events)
|
||||
on the _control_ socket from one thread, and sends _device messages_ (e.g. to
|
||||
transmit the device clipboard content to the client) on the same _control
|
||||
socket_ from another thread. Thus, the _control_ socket is used in both
|
||||
directions (contrary to the _video_ and _audio_ sockets).
|
||||
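
A rough sketch of these components and the sockets they use (a simplified view;
the threads are omitted):

```
       device (server)                            computer (client)

  screen capture  --> video streamer  ----(video socket)---->   client
  audio capture   --> audio streamer  ----(audio socket)---->   client
  input injection <-- controller     <----(control socket)--->  client
```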
|
||||
|
||||
### Screen video encoding
|
||||
|
||||
The encoding is managed by [`ScreenEncoder`].
|
||||
|
||||
The video is encoded using the [`MediaCodec`] API. The codec encodes the content
|
||||
of a `Surface` associated with the display, and writes the encoded packets to the
|
||||
client (on the _video_ socket).
|
||||
|
||||
[`ScreenEncoder`]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java
|
||||
[`MediaCodec`]: https://developer.android.com/reference/android/media/MediaCodec.html
|
||||
|
||||
On device rotation (or folding), the encoding session is [reset] and restarted.
|
||||
|
||||
New frames are produced only when changes occur on the surface. This avoids
sending unnecessary frames, but by default there are some drawbacks:
|
||||
|
||||
- it does not send any frame on start if the device screen does not change,
|
||||
- after fast motion changes, the last frame may have poor quality.
|
||||
|
||||
Both problems are [solved][repeat] by the flag
|
||||
[`KEY_REPEAT_PREVIOUS_FRAME_AFTER`][repeat-flag].
|
||||
|
||||
[reset]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L179
|
||||
[rotation]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L90
|
||||
[repeat]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L246-L247
|
||||
[repeat-flag]: https://developer.android.com/reference/android/media/MediaFormat.html#KEY_REPEAT_PREVIOUS_FRAME_AFTER
|
||||
|
||||
|
||||
### Audio encoding
|
||||
|
||||
Similarly, the audio is [captured] using an [`AudioRecord`], and [encoded] using
|
||||
the [`MediaCodec`] asynchronous API.
|
||||
|
||||
More details are available on the [blog post][scrcpy2] introducing the audio feature.
|
||||
|
||||
[captured]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/AudioCapture.java
|
||||
[encoded]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/AudioEncoder.java
|
||||
[`AudioRecord`]: https://developer.android.com/reference/android/media/AudioRecord
|
||||
|
||||
|
||||
### Input events injection
|
||||
|
||||
_Control messages_ are received from the client by the [`Controller`] (run in a
|
||||
separate thread). There are several types of input events:
|
||||
- keycode (cf [`KeyEvent`]),
|
||||
- text (special characters may not be handled by keycodes directly),
|
||||
- mouse motion/click,
|
||||
- mouse scroll,
|
||||
- other commands (e.g. to switch the screen on or to copy the clipboard).
|
||||
|
||||
Some of them need to inject input events to the system. To do so, they use the
|
||||
_hidden_ method [`InputManager.injectInputEvent()`] (exposed by the
|
||||
[`InputManager` wrapper][inject-wrapper]).
|
||||
|
||||
[`Controller`]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Controller.java
|
||||
[`KeyEvent`]: https://developer.android.com/reference/android/view/KeyEvent.html
|
||||
[`MotionEvent`]: https://developer.android.com/reference/android/view/MotionEvent.html
|
||||
[`InputManager.injectInputEvent()`]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/wrappers/InputManager.java#L34
|
||||
[inject-wrapper]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/wrappers/InputManager.java#L27
|
||||
|
||||
|
||||
|
||||
## Client
|
||||
|
||||
The client relies on [SDL], which provides a cross-platform API for UI, input
|
||||
events, threading, etc.
|
||||
|
||||
The video and audio streams are decoded by [FFmpeg].
|
||||
|
||||
[SDL]: https://www.libsdl.org
|
||||
[ffmpeg]: https://ffmpeg.org/
|
||||
|
||||
|
||||
### Initialization
|
||||
|
||||
The client parses the command line arguments, then [runs one of two code
|
||||
paths][run]:
|
||||
- scrcpy in "normal" mode ([`scrcpy.c`])
|
||||
- scrcpy in [OTG mode](hid-otg.md) ([`scrcpy_otg.c`])
|
||||
|
||||
[run]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/app/src/main.c#L81-L82
|
||||
[`scrcpy.c`]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/app/src/scrcpy.c#L292-L293
|
||||
[`scrcpy_otg.c`]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/app/src/usb/scrcpy_otg.c#L51-L52
|
||||
|
||||
In the remainder of this document, we assume that the "normal" mode is used
|
||||
(read the code for the OTG mode).
|
||||
|
||||
On startup, the client:
|
||||
- opens the _video_, _audio_ and _control_ sockets;
|
||||
- pushes and starts the server on the device;
|
||||
- initializes its components (demuxers, decoders, recorder…).
|
||||
|
||||
|
||||
### Video and audio streams
|
||||
|
||||
Depending on the arguments passed to `scrcpy`, several components may be used.
|
||||
Here is an overview of the video and audio components:
|
||||
|
||||
```
|
||||
V4L2 sink
|
||||
/
|
||||
decoder
|
||||
/ \
|
||||
VIDEO -------------> demuxer display
|
||||
\
|
||||
recorder
|
||||
/
|
||||
AUDIO -------------> demuxer
|
||||
\
|
||||
decoder --- audio player
|
||||
```
|
||||
|
||||
The _demuxer_ is responsible for extracting video and audio packets (it reads
some headers, splits the video stream into packets at correct boundaries, etc.).
|
||||
|
||||
The demuxed packets may be sent to a _decoder_ (one per stream, to produce
|
||||
frames) and to a recorder (receiving both video and audio streams to record a
|
||||
single file). The packets are encoded on the device (by `MediaCodec`), but when
|
||||
recording, they are _muxed_ (asynchronously) into a container (MKV or MP4) on
|
||||
the client side.
|
||||
|
||||
Video frames are sent to the screen/display to be rendered in the scrcpy window.
|
||||
They may also be sent to a [V4L2 sink](v4l2.md).
|
||||
|
||||
Audio "frames" (an array of decoded samples) are sent to the audio player.
|
||||
|
||||
|
||||
### Controller
|
||||
|
||||
The _controller_ is responsible for sending _control messages_ to the device. It
|
||||
runs in a separate thread, to avoid I/O on the main thread.
|
||||
|
||||
For each SDL event received on the main thread, the _input manager_ creates
appropriate _control messages_. It is responsible for converting SDL events to
Android events. It then pushes the _control messages_ to a queue held by the
controller. On its own thread, the controller takes messages from the queue,
serializes them, and sends them to the device.
|
||||
|
||||
|
||||
## Protocol
|
||||
|
||||
The protocol between the client and the server must be considered _internal_: it
|
||||
may (and will) change at any time for any reason. Everything may change (the
|
||||
number of sockets, the order in which the sockets must be opened, the data
|
||||
format on the wire…) from version to version. A client must always be run with a
|
||||
matching server version.
|
||||
|
||||
This section documents the current protocol in scrcpy v2.1.
|
||||
|
||||
### Connection
|
||||
|
||||
Firstly, the client sets up an adb tunnel:
|
||||
|
||||
```bash
|
||||
# By default, a reverse redirection: the computer listens, the device connects
|
||||
adb reverse localabstract:scrcpy_<SCID> tcp:27183
|
||||
|
||||
# As a fallback (or if --force-adb-forward is set), a forward redirection:
|
||||
# the device listens, the computer connects
|
||||
adb forward tcp:27183 localabstract:scrcpy_<SCID>
|
||||
```
|
||||
|
||||
(`<SCID>` is a 31-bit random number, so that it does not fail when several
|
||||
scrcpy instances start "at the same time" for the same device.)
|
||||
|
||||
Then, up to 3 sockets are opened, in that order:
|
||||
- a _video_ socket
|
||||
- an _audio_ socket
|
||||
- a _control_ socket
|
||||
|
||||
Each one may be disabled (respectively by `--no-video`, `--no-audio` and
|
||||
`--no-control`, directly or indirectly). For example, if `--no-audio` is set,
|
||||
then the _video_ socket is opened first, then the _control_ socket.
|
||||
|
||||
On the _first_ socket opened (whichever it is), if the tunnel is _forward_, then
|
||||
a [dummy byte] is sent from the device to the client. This makes it possible to detect a
|
||||
connection error (the client connection does not fail as long as there is an adb
|
||||
forward redirection, even if nothing is listening on the device side).
|
||||
|
||||
Still on this _first_ socket, the device sends some [metadata][device meta] to
|
||||
the client (currently only the device name, used as the window title, but there
|
||||
might be other fields in the future).
|
||||
|
||||
[dummy byte]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/DesktopConnection.java#L93
|
||||
[device meta]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/DesktopConnection.java#L151
|
||||
|
||||
You can read the [client][client-connection] and [server][server-connection]
|
||||
code for more details.
|
||||
|
||||
[client-connection]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/app/src/server.c#L465-L466
|
||||
[server-connection]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/DesktopConnection.java#L63
|
||||
|
||||
Then each socket is used for its intended purpose.
|
||||
|
||||
### Video and audio
|
||||
|
||||
On the _video_ and _audio_ sockets, the device first sends some [codec
metadata] (a worked example is given after the list):
|
||||
- On the _video_ socket, 12 bytes:
|
||||
- the codec id (`u32`) (H264, H265 or AV1)
|
||||
- the initial video width (`u32`)
|
||||
- the initial video height (`u32`)
|
||||
- On the _audio_ socket, 4 bytes:
|
||||
  - the codec id (`u32`) (OPUS, AAC or RAW)
|
||||
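
For example, assuming the codec id is the 32-bit big-endian value formed by the
four ASCII characters of the codec name (e.g. `h264`), the 12 bytes sent for
H.264 at 1920×1080 would be:

```
68 32 36 34  00 00 07 80  00 00 04 38
  "h264"       1920 (w)     1080 (h)
```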
|
||||
[codec metadata]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Streamer.java#L33-L51
|
||||
|
||||
Then each packet produced by `MediaCodec` is sent, prefixed by a 12-byte [frame
|
||||
header]:
|
||||
- config packet flag (`u1`)
|
||||
- key frame flag (`u1`)
|
||||
- PTS (`u62`)
|
||||
- packet size (`u32`)
|
||||
|
||||
Here is a schema describing the frame header:
|
||||
|
||||
```
|
||||
[. . . . . . . .|. . . .]. . . . . . . . . . . . . . . ...
|
||||
<-------------> <-----> <-----------------------------...
|
||||
PTS packet raw packet
|
||||
size
|
||||
<--------------------->
|
||||
frame header
|
||||
|
||||
The most significant bits of the PTS are used for packet flags:
|
||||
|
||||
byte 7 byte 6 byte 5 byte 4 byte 3 byte 2 byte 1 byte 0
|
||||
CK...... ........ ........ ........ ........ ........ ........ ........
|
||||
^^<------------------------------------------------------------------->
|
||||
|| PTS
|
||||
| `- key frame
|
||||
`-- config packet
|
||||
```
|
||||
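
For example, assuming big-endian encoding (the default of Java's `ByteBuffer`),
a key frame of 12345 bytes with PTS 1000000 would be prefixed by:

```
40 00 00 00 00 0F 42 40  00 00 30 39
<----- flags + PTS ---->  <--size-->
```

(the key frame flag is bit 62, so it adds `0x4000000000000000` to the first
8 bytes).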
|
||||
[frame header]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Streamer.java#L83
|
||||
|
||||
|
||||
### Controls
|
||||
|
||||
Control messages are sent via a custom binary protocol.
|
||||
|
||||
The only documentation for this protocol is the set of unit tests on both sides:
|
||||
- `ControlMessage` (from client to device): [serialization](https://github.com/Genymobile/scrcpy/blob/master/app/tests/test_control_msg_serialize.c) | [deserialization](https://github.com/Genymobile/scrcpy/blob/master/server/src/test/java/com/genymobile/scrcpy/ControlMessageReaderTest.java)
|
||||
- `DeviceMessage` (from device to client) [serialization](https://github.com/Genymobile/scrcpy/blob/master/server/src/test/java/com/genymobile/scrcpy/DeviceMessageWriterTest.java) | [deserialization](https://github.com/Genymobile/scrcpy/blob/master/app/tests/test_device_msg_deserialize.c)
|
||||
|
||||
|
||||
## Standalone server
|
||||
|
||||
Although the server is designed to work for the scrcpy client, it can be used
|
||||
with any client which uses the same protocol.
|
||||
|
||||
For simplicity, some [server-specific options] have been added to produce raw
|
||||
streams easily:
|
||||
- `send_device_meta=false`: disable the device metadata (in practice, the device
|
||||
name) sent on the _first_ socket
|
||||
- `send_frame_meta=false`: disable the 12-byte header for each packet
|
||||
- `send_dummy_byte=false`: disable the dummy byte sent on forward connections
|
||||
- `send_codec_meta=false`: disable the codec information (and initial device size for
|
||||
video)
|
||||
- `raw_stream=true`: disable all of the above
|
||||
|
||||
[server-specific options]: https://github.com/Genymobile/scrcpy/blob/a3cdf1a6b86ea22786e1f7d09b9c202feabc6949/server/src/main/java/com/genymobile/scrcpy/Options.java#L309-L329
|
||||
|
||||
Concretely, here is how to expose a raw H.264 stream on a TCP socket:
|
||||
|
||||
```bash
|
||||
adb push scrcpy-server-v2.1 /data/local/tmp/scrcpy-server-manual.jar
|
||||
adb forward tcp:1234 localabstract:scrcpy
|
||||
adb shell CLASSPATH=/data/local/tmp/scrcpy-server-manual.jar \
|
||||
app_process / com.genymobile.scrcpy.Server 2.1 \
|
||||
tunnel_forward=true audio=false control=false cleanup=false \
|
||||
raw_stream=true max_size=1920
|
||||
```
|
||||
|
||||
As soon as a client connects over TCP on port 1234, the device will start
|
||||
streaming the video. For example, VLC can play the video (although you will
|
||||
experience a very high latency, more details [here][vlc-0latency]):
|
||||
|
||||
```
|
||||
vlc -Idummy --demux=h264 --network-caching=0 tcp://localhost:1234
|
||||
```
|
||||
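
Alternatively (a sketch, not part of the official instructions), `ffplay` can
also read a raw H.264 stream over TCP:

```bash
ffplay -f h264 -fflags nobuffer -framedrop tcp://localhost:1234
```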
|
||||
[vlc-0latency]: https://code.videolan.org/rom1v/vlc/-/merge_requests/20
|
||||
|
||||
|
||||
## Hack
|
||||
|
||||
For more details, go read the code!
|
||||
|
||||
If you find a bug, or have an awesome idea to implement, please discuss and
|
||||
contribute ;-)
|
||||
|
||||
|
||||
### Debug the server
|
||||
|
||||
The server is pushed to the device by the client on startup.
|
||||
|
||||
To debug it, enable the server debugger during configuration:
|
||||
|
||||
```bash
|
||||
meson setup x -Dserver_debugger=true
|
||||
# or, if x is already configured
|
||||
meson configure x -Dserver_debugger=true
|
||||
```
|
||||
|
||||
If your device runs Android 8 or below, set the `server_debugger_method` to
|
||||
`old` in addition:
|
||||
|
||||
```bash
|
||||
meson setup x -Dserver_debugger=true -Dserver_debugger_method=old
|
||||
# or, if x is already configured
|
||||
meson configure x -Dserver_debugger=true -Dserver_debugger_method=old
|
||||
```
|
||||
|
||||
Then recompile.
|
||||
|
||||
When you start scrcpy, it will start a debugger on port 5005 on the device.
|
||||
Redirect that port to the computer:
|
||||
|
||||
```bash
|
||||
adb forward tcp:5005 tcp:5005
|
||||
```
|
||||
|
||||
In Android Studio, _Run_ > _Debug_ > _Edit configurations..._ On the left, click on
|
||||
`+`, _Remote_, and fill the form:
|
||||
|
||||
- Host: `localhost`
|
||||
- Port: `5005`
|
||||
|
||||
Then click on _Debug_.
|
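
Alternatively (a sketch, not part of the official instructions), a command-line
debugger such as `jdb` should be able to attach to the forwarded port:

```bash
jdb -attach localhost:5005
```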
228
doc/device.md
Normal file
228
doc/device.md
Normal file
@ -0,0 +1,228 @@
|
||||
# Device
|
||||
|
||||
## Selection
|
||||
|
||||
If exactly one device is connected (i.e. listed by `adb devices`), then it is
|
||||
automatically selected.
|
||||
|
||||
However, if there are multiple devices connected, you must specify the one to
|
||||
use in one of 4 ways:
|
||||
- by its serial:
|
||||
```bash
|
||||
scrcpy --serial=0123456789abcdef
|
||||
scrcpy -s 0123456789abcdef # short version
|
||||
|
||||
# the serial is the ip:port if connected over TCP/IP (same behavior as adb)
|
||||
scrcpy --serial=192.168.1.1:5555
|
||||
```
|
||||
- the one connected over USB (if there is exactly one):
|
||||
```bash
|
||||
scrcpy --select-usb
|
||||
scrcpy -d # short version
|
||||
```
|
||||
- the one connected over TCP/IP (if there is exactly one):
|
||||
```bash
|
||||
scrcpy --select-tcpip
|
||||
scrcpy -e # short version
|
||||
```
|
||||
- a device already listening on TCP/IP (see [below](#tcpip-wireless)):
|
||||
```bash
|
||||
scrcpy --tcpip=192.168.1.1:5555
|
||||
scrcpy --tcpip=192.168.1.1 # default port is 5555
|
||||
```
|
||||
|
||||
The serial may also be provided via the environment variable `ANDROID_SERIAL`
|
||||
(also used by `adb`):
|
||||
|
||||
```bash
|
||||
# in bash
|
||||
export ANDROID_SERIAL=0123456789abcdef
|
||||
scrcpy
|
||||
```
|
||||
|
||||
```cmd
|
||||
:: in cmd
|
||||
set ANDROID_SERIAL=0123456789abcdef
|
||||
scrcpy
|
||||
```
|
||||
|
||||
```powershell
|
||||
# in PowerShell
|
||||
$env:ANDROID_SERIAL = '0123456789abcdef'
|
||||
scrcpy
|
||||
```
|
||||
|
||||
|
||||
## TCP/IP (wireless)
|
||||
|
||||
_Scrcpy_ uses `adb` to communicate with the device, and `adb` can [connect] to a
|
||||
device over TCP/IP. The device must be connected to the same network as the
|
||||
computer.
|
||||
|
||||
[connect]: https://developer.android.com/studio/command-line/adb.html#wireless
|
||||
|
||||
|
||||
### Automatic
|
||||
|
||||
The `--tcpip` option allows configuring the connection automatically. There are
|
||||
two variants.
|
||||
|
||||
If the device (accessible at 192.168.1.1 in this example) already listens on a
|
||||
port (typically 5555) for incoming _adb_ connections, then run:
|
||||
|
||||
```bash
|
||||
scrcpy --tcpip=192.168.1.1 # default port is 5555
|
||||
scrcpy --tcpip=192.168.1.1:5555
|
||||
```
|
||||
|
||||
If _adb_ TCP/IP mode is disabled on the device (or if you don't know the IP
|
||||
address), connect the device over USB, then run:
|
||||
|
||||
```bash
|
||||
scrcpy --tcpip # without arguments
|
||||
```
|
||||
|
||||
It will automatically find the device IP address and adb port, enable TCP/IP
|
||||
mode if necessary, then connect to the device before starting.
|
||||
|
||||
|
||||
### Manual
|
||||
|
||||
Alternatively, it is possible to enable the TCP/IP connection manually using
|
||||
`adb`:
|
||||
|
||||
1. Plug the device into a USB port on your computer.
|
||||
2. Connect the device to the same Wi-Fi network as your computer.
|
||||
3. Get your device IP address, in Settings → About phone → Status, or by
|
||||
executing this command:
|
||||
|
||||
```bash
|
||||
adb shell ip route | awk '{print $9}'
|
||||
```
|
||||
|
||||
4. Enable `adb` over TCP/IP on your device: `adb tcpip 5555`.
|
||||
5. Unplug your device.
|
||||
6. Connect to your device: `adb connect DEVICE_IP:5555` _(replace `DEVICE_IP`
|
||||
with the device IP address you found)_.
|
||||
7. Run `scrcpy` as usual.
|
||||
8. Run `adb disconnect` once you're done.
|
||||
|
||||
Since Android 11, a [wireless debugging option][adb-wireless] makes it possible
to avoid physically connecting your device to your computer.
|
||||
|
||||
[adb-wireless]: https://developer.android.com/studio/command-line/adb#wireless-android11-command-line
|
||||
|
||||
|
||||
## Autostart
|
||||
|
||||
A small tool (by the scrcpy author) allows running arbitrary commands whenever a
|
||||
new Android device is connected: [AutoAdb]. It can be used to start scrcpy:
|
||||
|
||||
```bash
|
||||
autoadb scrcpy -s '{}'
|
||||
```
|
||||
|
||||
[AutoAdb]: https://github.com/rom1v/autoadb
|
||||
|
||||
|
||||
## Display
|
||||
|
||||
If several displays are available on the Android device, it is possible to
|
||||
select the display to mirror:
|
||||
|
||||
```bash
|
||||
scrcpy --display=1
|
||||
```
|
||||
|
||||
The list of display ids can be retrieved by:
|
||||
|
||||
```bash
|
||||
scrcpy --list-displays
|
||||
```
|
||||
|
||||
A secondary display may only be controlled if the device runs at least Android
|
||||
10 (otherwise it is mirrored as read-only).
|
||||
|
||||
|
||||
## Actions
|
||||
|
||||
Some command line arguments perform actions on the device itself while scrcpy is
|
||||
running.
|
||||
|
||||
|
||||
### Stay awake
|
||||
|
||||
To prevent the device from sleeping after a delay **when the device is plugged
|
||||
in**:
|
||||
|
||||
```bash
|
||||
scrcpy --stay-awake
|
||||
scrcpy -w
|
||||
```
|
||||
|
||||
The initial state is restored when _scrcpy_ is closed.
|
||||
|
||||
If the device is not plugged in (i.e. only connected over TCP/IP),
|
||||
`--stay-awake` has no effect (this is the Android behavior).
|
||||
|
||||
|
||||
### Turn screen off
|
||||
|
||||
It is possible to turn the device screen off while mirroring on start with a
|
||||
command-line option:
|
||||
|
||||
```bash
|
||||
scrcpy --turn-screen-off
|
||||
scrcpy -S # short version
|
||||
```
|
||||
|
||||
Or by pressing <kbd>MOD</kbd>+<kbd>o</kbd> at any time (see
|
||||
[shortcuts](shortcuts.md)).
|
||||
|
||||
To turn it back on, press <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>o</kbd>.
|
||||
|
||||
On Android, the `POWER` button always turns the screen on. For convenience, if
|
||||
`POWER` is sent via _scrcpy_ (via right-click or <kbd>MOD</kbd>+<kbd>p</kbd>),
|
||||
it will force the screen off after a small delay (on a best effort
|
||||
basis). The physical `POWER` button will still cause the screen to be turned on.
|
||||
|
||||
It can also be useful to prevent the device from sleeping:
|
||||
|
||||
```bash
|
||||
scrcpy --turn-screen-off --stay-awake
|
||||
scrcpy -Sw # short version
|
||||
```
|
||||
|
||||
|
||||
### Show touches
|
||||
|
||||
For presentations, it may be useful to show physical touches (on the physical
|
||||
device). Android exposes this feature in _Developer options_.
|
||||
|
||||
_Scrcpy_ provides an option to enable this feature on start and restore the
|
||||
initial value on exit:
|
||||
|
||||
```bash
|
||||
scrcpy --show-touches
|
||||
scrcpy -t # short version
|
||||
```
|
||||
|
||||
Note that it only shows _physical_ touches (by a finger on the device).
|
||||
|
||||
|
||||
### Power off on close
|
||||
|
||||
To turn the device screen off when closing _scrcpy_:
|
||||
|
||||
```bash
|
||||
scrcpy --power-off-on-close
|
||||
```
|
||||
|
||||
### Power on on start
|
||||
|
||||
By default, on start, the device is powered on. To prevent this behavior:
|
||||
|
||||
```bash
|
||||
scrcpy --no-power-on
|
||||
```
|
||||
|
108
doc/hid-otg.md
Normal file
108
doc/hid-otg.md
Normal file
@ -0,0 +1,108 @@
|
||||
# HID/OTG
|
||||
|
||||
By default, _scrcpy_ injects input events at the Android API level. As an
|
||||
alternative, when connected over USB, it is possible to send HID events, so that
|
||||
scrcpy behaves as if it were a physical keyboard and/or mouse connected to the
|
||||
Android device.
|
||||
|
||||
A special [OTG](#otg) mode allows controlling the device without mirroring (and
|
||||
without USB debugging).
|
||||
|
||||
|
||||
## Physical keyboard simulation
|
||||
|
||||
By default, _scrcpy_ uses Android key or text injection. It works everywhere,
|
||||
but is limited to ASCII.
|
||||
|
||||
Instead, it can simulate a physical USB keyboard on Android to provide a better
|
||||
input experience (using [USB HID over AOAv2][hid-aoav2]): the virtual keyboard
|
||||
is disabled and it works for all characters and IME.
|
||||
|
||||
[hid-aoav2]: https://source.android.com/devices/accessories/aoa2#hid-support
|
||||
|
||||
However, it only works if the device is connected via USB.
|
||||
|
||||
Note: On Windows, it may only work in [OTG mode](#otg), not while mirroring (it
|
||||
is not possible to open a USB device if it is already open by another process
|
||||
like the _adb daemon_).
|
||||
|
||||
To enable this mode:
|
||||
|
||||
```bash
|
||||
scrcpy --hid-keyboard
|
||||
scrcpy -K # short version
|
||||
```
|
||||
|
||||
If it fails for some reason (for example because the device is not connected via
|
||||
USB), it automatically falls back to the default mode (with a log in the
|
||||
console). This allows using the same command line options when connected over
|
||||
USB and TCP/IP.
|
||||
|
||||
In this mode, raw key events (scancodes) are sent to the device, independently
|
||||
of the host key mapping. Therefore, if your keyboard layout does not match, it
|
||||
must be configured on the Android device, in Settings → System → Languages and
|
||||
input → [Physical keyboard].
|
||||
|
||||
This settings page can be started directly:
|
||||
|
||||
```bash
|
||||
adb shell am start -a android.settings.HARD_KEYBOARD_SETTINGS
|
||||
```
|
||||
|
||||
However, the option is only available when the HID keyboard is enabled (or when
|
||||
a physical keyboard is connected).
|
||||
|
||||
[Physical keyboard]: https://github.com/Genymobile/scrcpy/pull/2632#issuecomment-923756915
|
||||
|
||||
|
||||
## Physical mouse simulation
|
||||
|
||||
By default, _scrcpy_ uses Android mouse events injection with absolute
|
||||
coordinates. By simulating a physical mouse, a mouse pointer appears on the
|
||||
Android device, and relative mouse motion, clicks and scrolls are injected.
|
||||
|
||||
To enable this mode:
|
||||
|
||||
```bash
|
||||
scrcpy --hid-mouse
|
||||
scrcpy -M # short version
|
||||
```
|
||||
|
||||
When this mode is enabled, the computer mouse is "captured" (the mouse pointer
|
||||
disappears from the computer and appears on the Android device instead).
|
||||
|
||||
Special capture keys, either <kbd>Alt</kbd> or <kbd>Super</kbd>, toggle
|
||||
(disable or enable) the mouse capture. Use one of them to give the control of
|
||||
the mouse back to the computer.
|
||||
|
||||
|
||||
## OTG
|
||||
|
||||
It is possible to run _scrcpy_ with only physical keyboard and mouse simulation
|
||||
(HID), as if the computer keyboard and mouse were plugged directly into the device
|
||||
via an OTG cable.
|
||||
|
||||
In this mode, `adb` (USB debugging) is not necessary, and mirroring is disabled.
|
||||
|
||||
This is similar to `--hid-keyboard --hid-mouse`, but without mirroring.
|
||||
|
||||
To enable OTG mode:
|
||||
|
||||
```bash
|
||||
scrcpy --otg
|
||||
# Pass the serial if several USB devices are available
|
||||
scrcpy --otg -s 0123456789abcdef
|
||||
```
|
||||
|
||||
It is possible to enable only HID keyboard or HID mouse:
|
||||
|
||||
```bash
|
||||
scrcpy --otg --hid-keyboard # keyboard only
|
||||
scrcpy --otg --hid-mouse # mouse only
|
||||
scrcpy --otg --hid-keyboard --hid-mouse # keyboard and mouse
|
||||
# for convenience, enable both by default
|
||||
scrcpy --otg # keyboard and mouse
|
||||
```
|
||||
|
||||
Like `--hid-keyboard` and `--hid-mouse`, it only works if the device is
|
||||
connected over USB.
|
81
doc/linux.md
Normal file
81
doc/linux.md
Normal file
@ -0,0 +1,81 @@
|
||||
# On Linux
|
||||
|
||||
## Install
|
||||
|
||||
<a href="https://repology.org/project/scrcpy/versions"><img src="https://repology.org/badge/vertical-allrepos/scrcpy.svg" alt="Packaging status" align="right"></a>
|
||||
|
||||
Scrcpy is packaged in several distributions and package managers:
|
||||
|
||||
- Debian/Ubuntu: `apt install scrcpy`
|
||||
- Arch Linux: `pacman -S scrcpy`
|
||||
- Fedora: `dnf copr enable zeno/scrcpy && dnf install scrcpy`
|
||||
- Gentoo: [ebuild][ebuild-link] file
|
||||
- Snap: `snap install scrcpy`
|
||||
- … (see [repology](https://repology.org/project/scrcpy/versions))
|
||||
|
||||
[ebuild-link]: https://github.com/maggu2810/maggu2810-overlay/tree/master/app-mobilephone/scrcpy
|
||||
|
||||
### Latest version
|
||||
|
||||
However, the packaged version is not always the latest release. To install the
|
||||
latest release from `master`, follow this simplified process.
|
||||
|
||||
First, you need to install the required packages:
|
||||
|
||||
```bash
|
||||
# for Debian/Ubuntu
|
||||
sudo apt install ffmpeg libsdl2-2.0-0 adb wget \
|
||||
gcc git pkg-config meson ninja-build libsdl2-dev \
|
||||
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
|
||||
libswresample-dev libusb-1.0-0 libusb-1.0-0-dev
|
||||
```
|
||||
|
||||
Then clone the repo and execute the installation script
|
||||
([source](/install_release.sh)):
|
||||
|
||||
```bash
|
||||
git clone https://github.com/Genymobile/scrcpy
|
||||
cd scrcpy
|
||||
./install_release.sh
|
||||
```
|
||||
|
||||
When a new release is out, update the repo and reinstall:
|
||||
|
||||
```bash
|
||||
git pull
|
||||
./install_release.sh
|
||||
```
|
||||
|
||||
To uninstall:
|
||||
|
||||
```bash
|
||||
sudo ninja -Cbuild-auto uninstall
|
||||
```
|
||||
|
||||
_Note that this simplified process only works for released versions (it
|
||||
downloads a prebuilt server binary), so for example you can't use it for testing
|
||||
the development branch (`dev`)._
|
||||
|
||||
_See [build.md](build.md) to build and install the app manually._
|
||||
|
||||
|
||||
## Run
|
||||
|
||||
_Make sure that your device meets the [prerequisites](/README.md#prerequisites)._
|
||||
|
||||
Once installed, run from a terminal:
|
||||
|
||||
```bash
|
||||
scrcpy
|
||||
```
|
||||
|
||||
or with arguments (here to disable audio and record to `file.mkv`):
|
||||
|
||||
```bash
|
||||
scrcpy --no-audio --record=file.mkv
|
||||
```
|
||||
|
||||
Documentation for command line arguments is available:
|
||||
- `man scrcpy`
|
||||
- `scrcpy --help`
|
||||
- on [github](/README.md)
|
49
doc/macos.md
Normal file
49
doc/macos.md
Normal file
@ -0,0 +1,49 @@
|
||||
# On macOS
|
||||
|
||||
## Install
|
||||
|
||||
Scrcpy is available in [Homebrew]:
|
||||
|
||||
```bash
|
||||
brew install scrcpy
|
||||
```
|
||||
|
||||
[Homebrew]: https://brew.sh/
|
||||
|
||||
You need `adb`, accessible from your `PATH`. If you don't have it yet:
|
||||
|
||||
```bash
|
||||
brew install android-platform-tools
|
||||
```
|
||||
|
||||
Alternatively, Scrcpy is also available in [MacPorts], which sets up `adb` for you:
|
||||
|
||||
```bash
|
||||
sudo port install scrcpy
|
||||
```
|
||||
|
||||
[MacPorts]: https://www.macports.org/
|
||||
|
||||
_See [build.md](build.md) to build and install the app manually._
|
||||
|
||||
|
||||
## Run
|
||||
|
||||
_Make sure that your device meets the [prerequisites](/README.md#prerequisites)._
|
||||
|
||||
Once installed, run from a terminal:
|
||||
|
||||
```bash
|
||||
scrcpy
|
||||
```
|
||||
|
||||
or with arguments (here to disable audio and record to `file.mkv`):
|
||||
|
||||
```bash
|
||||
scrcpy --no-audio --record=file.mkv
|
||||
```
|
||||
|
||||
Documentation for command line arguments is available:
|
||||
- `man scrcpy`
|
||||
- `scrcpy --help`
|
||||
- on [github](/README.md)
|
78
doc/recording.md
Normal file
78
doc/recording.md
Normal file
@ -0,0 +1,78 @@
|
||||
# Recording
|
||||
|
||||
To record video and audio streams while mirroring:
|
||||
|
||||
```bash
|
||||
scrcpy --record=file.mp4
|
||||
scrcpy -r file.mkv
|
||||
```
|
||||
|
||||
To record only the video:
|
||||
|
||||
```bash
|
||||
scrcpy --no-audio --record=file.mp4
|
||||
```
|
||||
|
||||
To record only the audio:
|
||||
|
||||
```bash
|
||||
scrcpy --no-video --record=file.opus
|
||||
scrcpy --no-video --audio-codec=aac --record=file.aac
|
||||
# .m4a/.mp4 and .mka/.mkv are also supported for both opus and aac
|
||||
```
|
||||
|
||||
Timestamps are captured on the device, so [packet delay variation] does not
|
||||
impact the recorded file, which is always clean (this applies only if you use
`--record`, of course, not if you capture your scrcpy window and audio output on
the computer).
|
||||
|
||||
[packet delay variation]: https://en.wikipedia.org/wiki/Packet_delay_variation
|
||||
|
||||
|
||||
## Format
|
||||
|
||||
The video and audio streams are encoded on the device, but are muxed on the
|
||||
client side. Two formats (containers) are supported:
|
||||
- Matroska (`.mkv`)
|
||||
- MP4 (`.mp4`)
|
||||
|
||||
The container is automatically selected based on the filename.
|
||||
|
||||
It is also possible to explicitly select a container (in that case the filename
|
||||
does not need to end with `.mkv` or `.mp4`):
|
||||
|
||||
```
|
||||
scrcpy --record=file --record-format=mkv
|
||||
```
|
||||
|
||||
|
||||
## No playback
|
||||
|
||||
To disable playback while recording:
|
||||
|
||||
```bash
|
||||
scrcpy --no-playback --record=file.mp4
|
||||
scrcpy -Nr file.mkv
|
||||
# interrupt recording with Ctrl+C
|
||||
```
|
||||
|
||||
It is also possible to disable video and audio playback separately:
|
||||
|
||||
```bash
|
||||
# Record both video and audio, but only play video
|
||||
scrcpy --record=file.mkv --no-audio-playback
|
||||
```
|
||||
|
||||
## Time limit
|
||||
|
||||
To limit the recording time:
|
||||
|
||||
```bash
|
||||
scrcpy --record=file.mkv --time-limit=20 # in seconds
|
||||
```
|
||||
|
||||
The `--time-limit` option is not limited to recording; it also applies to simple
|
||||
mirroring:
|
||||
|
||||
```
|
||||
scrcpy --time-limit=20
|
||||
```
|
68
doc/shortcuts.md
Normal file
68
doc/shortcuts.md
Normal file
@ -0,0 +1,68 @@
|
||||
# Shortcuts
|
||||
|
||||
Actions can be performed on the scrcpy window using keyboard and mouse
|
||||
shortcuts.
|
||||
|
||||
In the following list, <kbd>MOD</kbd> is the shortcut modifier. By default, it's
|
||||
(left) <kbd>Alt</kbd> or (left) <kbd>Super</kbd>.
|
||||
|
||||
It can be changed using `--shortcut-mod`. Possible keys are `lctrl`, `rctrl`,
|
||||
`lalt`, `ralt`, `lsuper` and `rsuper`. For example:
|
||||
|
||||
```bash
|
||||
# use RCtrl for shortcuts
|
||||
scrcpy --shortcut-mod=rctrl
|
||||
|
||||
# use either LCtrl+LAlt or LSuper for shortcuts
|
||||
scrcpy --shortcut-mod=lctrl+lalt,lsuper
|
||||
```
|
||||
|
||||
_<kbd>[Super]</kbd> is typically the <kbd>Windows</kbd> or <kbd>Cmd</kbd> key._
|
||||
|
||||
[Super]: https://en.wikipedia.org/wiki/Super_key_(keyboard_button)
|
||||
|
||||
| Action | Shortcut
|
||||
| ------------------------------------------- |:-----------------------------
|
||||
| Switch fullscreen mode | <kbd>MOD</kbd>+<kbd>f</kbd>
|
||||
| Rotate display left | <kbd>MOD</kbd>+<kbd>←</kbd> _(left)_
|
||||
| Rotate display right | <kbd>MOD</kbd>+<kbd>→</kbd> _(right)_
|
||||
| Resize window to 1:1 (pixel-perfect) | <kbd>MOD</kbd>+<kbd>g</kbd>
|
||||
| Resize window to remove black borders | <kbd>MOD</kbd>+<kbd>w</kbd> \| _Double-left-click¹_
|
||||
| Click on `HOME` | <kbd>MOD</kbd>+<kbd>h</kbd> \| _Middle-click_
|
||||
| Click on `BACK` | <kbd>MOD</kbd>+<kbd>b</kbd> \| _Right-click²_
|
||||
| Click on `APP_SWITCH` | <kbd>MOD</kbd>+<kbd>s</kbd> \| _4th-click³_
|
||||
| Click on `MENU` (unlock screen)⁴ | <kbd>MOD</kbd>+<kbd>m</kbd>
|
||||
| Click on `VOLUME_UP` | <kbd>MOD</kbd>+<kbd>↑</kbd> _(up)_
|
||||
| Click on `VOLUME_DOWN` | <kbd>MOD</kbd>+<kbd>↓</kbd> _(down)_
|
||||
| Click on `POWER` | <kbd>MOD</kbd>+<kbd>p</kbd>
|
||||
| Power on | _Right-click²_
|
||||
| Turn device screen off (keep mirroring) | <kbd>MOD</kbd>+<kbd>o</kbd>
|
||||
| Turn device screen on | <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>o</kbd>
|
||||
| Rotate device screen | <kbd>MOD</kbd>+<kbd>r</kbd>
|
||||
| Expand notification panel | <kbd>MOD</kbd>+<kbd>n</kbd> \| _5th-click³_
|
||||
| Expand settings panel | <kbd>MOD</kbd>+<kbd>n</kbd>+<kbd>n</kbd> \| _Double-5th-click³_
|
||||
| Collapse panels | <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>n</kbd>
|
||||
| Copy to clipboard⁵ | <kbd>MOD</kbd>+<kbd>c</kbd>
|
||||
| Cut to clipboard⁵ | <kbd>MOD</kbd>+<kbd>x</kbd>
|
||||
| Synchronize clipboards and paste⁵ | <kbd>MOD</kbd>+<kbd>v</kbd>
|
||||
| Inject computer clipboard text | <kbd>MOD</kbd>+<kbd>Shift</kbd>+<kbd>v</kbd>
|
||||
| Enable/disable FPS counter (on stdout) | <kbd>MOD</kbd>+<kbd>i</kbd>
|
||||
| Pinch-to-zoom | <kbd>Ctrl</kbd>+_click-and-move_
|
||||
| Drag & drop APK file | Install APK from computer
|
||||
| Drag & drop non-APK file | [Push file to device](control.md#push-file-to-device)
|
||||
|
||||
_¹Double-click on black borders to remove them._
|
||||
_²Right-click turns the screen on if it was off, presses BACK otherwise._
|
||||
_³4th and 5th mouse buttons, if your mouse has them._
|
||||
_⁴For react-native apps in development, `MENU` triggers development menu._
|
||||
_⁵Only on Android >= 7._
|
||||
|
||||
Shortcuts with repeated keys are executed by releasing and pressing the key a
|
||||
second time. For example, to execute "Expand settings panel":
|
||||
|
||||
1. Press and keep pressing <kbd>MOD</kbd>.
|
||||
2. Then double-press <kbd>n</kbd>.
|
||||
3. Finally, release <kbd>MOD</kbd>.
|
||||
|
||||
All <kbd>Ctrl</kbd>+_key_ shortcuts are forwarded to the device, so they are
|
||||
handled by the active application.
|
123
doc/tunnels.md
Normal file
123
doc/tunnels.md
Normal file
@ -0,0 +1,123 @@
|
||||
# Tunnels
|
||||
|
||||
Scrcpy is designed to mirror local Android devices. Tunnels make it possible to connect to
|
||||
a remote device (e.g. over the Internet).
|
||||
|
||||
To connect to a remote device, it is possible to connect a local `adb` client to
|
||||
a remote `adb` server (provided they use the same version of the _adb_
|
||||
protocol).
|
||||
|
||||
|
||||
## Remote ADB server
|
||||
|
||||
To connect to a remote _adb server_, make the server listen on all interfaces:
|
||||
|
||||
```bash
|
||||
adb kill-server
|
||||
adb -a nodaemon server start
|
||||
# keep this open
|
||||
```
|
||||
|
||||
**Warning: all communications between clients and the _adb server_ are
|
||||
unencrypted.**
|
||||
|
||||
Suppose that this server is accessible at 192.168.1.2. Then, from another
|
||||
terminal, run `scrcpy`:
|
||||
|
||||
```bash
|
||||
# in bash
|
||||
export ADB_SERVER_SOCKET=tcp:192.168.1.2:5037
|
||||
scrcpy --tunnel-host=192.168.1.2
|
||||
```
|
||||
|
||||
```cmd
|
||||
:: in cmd
|
||||
set ADB_SERVER_SOCKET=tcp:192.168.1.2:5037
|
||||
scrcpy --tunnel-host=192.168.1.2
|
||||
```
|
||||
|
||||
```powershell
|
||||
# in PowerShell
|
||||
$env:ADB_SERVER_SOCKET = 'tcp:192.168.1.2:5037'
|
||||
scrcpy --tunnel-host=192.168.1.2
|
||||
```
|
||||
|
||||
By default, `scrcpy` uses the local port used for `adb forward` tunnel
|
||||
establishment (typically `27183`, see `--port`). It is also possible to force a
|
||||
different tunnel port (it may be useful in more complex situations, when more
|
||||
redirections are involved):
|
||||
|
||||
```
|
||||
scrcpy --tunnel-port=1234
|
||||
```
|
||||
|
||||
|
||||
## SSH tunnel
|
||||
|
||||
To communicate with a remote _adb server_ securely, it is preferable to use an
|
||||
SSH tunnel.
|
||||
|
||||
First, make sure the _adb server_ is running on the remote computer:
|
||||
|
||||
```bash
|
||||
adb start-server
|
||||
```
|
||||
|
||||
Then, establish an SSH tunnel:
|
||||
|
||||
```bash
|
||||
# local 5038 --> remote 5037
|
||||
# local 27183 <-- remote 27183
|
||||
ssh -CN -L5038:localhost:5037 -R27183:localhost:27183 your_remote_computer
|
||||
# keep this open
|
||||
```
|
||||
|
||||
From another terminal, run `scrcpy`:
|
||||
|
||||
```bash
|
||||
# in bash
|
||||
export ADB_SERVER_SOCKET=tcp:localhost:5038
|
||||
scrcpy
|
||||
```
|
||||
|
||||
```cmd
|
||||
:: in cmd
|
||||
set ADB_SERVER_SOCKET=tcp:localhost:5038
|
||||
scrcpy
|
||||
```
|
||||
|
||||
```powershell
|
||||
# in PowerShell
|
||||
$env:ADB_SERVER_SOCKET = 'tcp:localhost:5038'
|
||||
scrcpy
|
||||
```
|
||||
|
||||
To avoid enabling remote port forwarding, you could force a forward connection
|
||||
instead (notice the `-L` instead of `-R`):
|
||||
|
||||
```bash
|
||||
# local 5038 --> remote 5037
|
||||
# local 27183 --> remote 27183
|
||||
ssh -CN -L5038:localhost:5037 -L27183:localhost:27183 your_remote_computer
|
||||
# keep this open
|
||||
```
|
||||
|
||||
From another terminal, run `scrcpy`:
|
||||
|
||||
```bash
|
||||
# in bash
|
||||
export ADB_SERVER_SOCKET=tcp:localhost:5038
|
||||
scrcpy --force-adb-forward
|
||||
```
|
||||
|
||||
```cmd
|
||||
:: in cmd
|
||||
set ADB_SERVER_SOCKET=tcp:localhost:5038
|
||||
scrcpy --force-adb-forward
|
||||
```
|
||||
|
||||
```powershell
|
||||
# in PowerShell
|
||||
$env:ADB_SERVER_SOCKET = 'tcp:localhost:5038'
|
||||
scrcpy --force-adb-forward
|
||||
```
|
65
doc/v4l2.md
Normal file
65
doc/v4l2.md
Normal file
@ -0,0 +1,65 @@
|
||||
# Video4Linux
|
||||
|
||||
On Linux, it is possible to send the video stream to a [v4l2] loopback device,
|
||||
so that the Android device can be opened like a webcam by any v4l2-capable tool.
|
||||
|
||||
[v4l2]: https://en.wikipedia.org/wiki/Video4Linux
|
||||
|
||||
The module `v4l2loopback` must be installed:
|
||||
|
||||
```bash
|
||||
sudo apt install v4l2loopback-dkms
|
||||
```
|
||||
|
||||
To create a v4l2 device:
|
||||
|
||||
```bash
|
||||
sudo modprobe v4l2loopback
|
||||
```
|
||||
|
||||
This will create a new video device in `/dev/videoN`, where `N` is an integer
|
||||
(more [options](https://github.com/umlaeute/v4l2loopback#options) are available
|
||||
to create several devices or devices with specific IDs).
|
||||
|
||||
To list the enabled devices:
|
||||
|
||||
```bash
|
||||
# requires v4l-utils package
|
||||
v4l2-ctl --list-devices
|
||||
|
||||
# simple but might be sufficient
|
||||
ls /dev/video*
|
||||
```
|
||||
|
||||
To start `scrcpy` using a v4l2 sink:
|
||||
|
||||
```bash
|
||||
scrcpy --v4l2-sink=/dev/videoN
|
||||
scrcpy --v4l2-sink=/dev/videoN --no-video-playback # disable playback window
|
||||
```
|
||||
|
||||
(replace `N` with the device ID, check with `ls /dev/video*`)
|
||||
|
||||
Once enabled, you can open your video stream with a v4l2-capable tool:
|
||||
|
||||
```bash
|
||||
ffplay -i /dev/videoN
|
||||
vlc v4l2:///dev/videoN # VLC might add some buffering delay
|
||||
```
|
||||
|
||||
For example, you could capture the video within [OBS] or within your video
|
||||
conference tool.
|
||||
|
||||
[OBS]: https://obsproject.com/
|
||||
|
||||
|
||||
## Buffering
|
||||
|
||||
By default, there is no video buffering, to get the lowest possible latency.
|
||||
|
||||
As for the [video display](video.md#buffering), it is possible to add
|
||||
buffering to delay the v4l2 stream:
|
||||
|
||||
```bash
|
||||
scrcpy --v4l2-buffer=300 # add 300ms buffering for v4l2 sink
|
||||
```
|
196
doc/video.md
Normal file
196
doc/video.md
Normal file
@ -0,0 +1,196 @@
|
||||
# Video
|
||||
|
||||
## Size
|
||||
|
||||
By default, scrcpy attempts to mirror at the Android device resolution.
|
||||
|
||||
It might be useful to mirror at a lower definition to increase performance. To
|
||||
limit both width and height to some maximum value (here 1024):
|
||||
|
||||
```bash
|
||||
scrcpy --max-size=1024
|
||||
scrcpy -m 1024 # short version
|
||||
```
|
||||
|
||||
The other dimension is computed so that the Android device aspect ratio is
|
||||
preserved. That way, a device in 1920×1080 will be mirrored at 1024×576.
|
||||
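(The height is scaled by the same ratio as the width: 1080 × 1024 / 1920 = 576.)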
|
||||
If encoding fails, scrcpy automatically tries again with a lower definition
|
||||
(unless `--no-downsize-on-error` is enabled).
|
||||
|
||||
|
||||
## Bit rate
|
||||
|
||||
The default video bit-rate is 8 Mbps. To change it:
|
||||
|
||||
```bash
|
||||
scrcpy --video-bit-rate=2M
|
||||
scrcpy --video-bit-rate=2000000 # equivalent
|
||||
scrcpy -b 2M # short version
|
||||
```
|
||||
|
||||
|
||||
## Frame rate
|
||||
|
||||
The capture frame rate can be limited:
|
||||
|
||||
```bash
|
||||
scrcpy --max-fps=15
|
||||
```
|
||||
|
||||
The actual capture frame rate may be printed to the console:
|
||||
|
||||
```
|
||||
scrcpy --print-fps
|
||||
```
|
||||
|
||||
It may also be enabled or disabled at anytime with <kbd>MOD</kbd>+<kbd>i</kbd>
|
||||
(see [shortcuts](shortcuts.md)).
|
||||
|
||||
The frame rate is intrinsically variable: a new frame is produced only when the
|
||||
screen content changes. For example, if you play a fullscreen video at 24fps on
|
||||
your device, you should not get more than 24 frames per second in scrcpy.
|
||||
|
||||
|
||||
## Codec
|
||||
|
||||
The video codec can be selected. The possible values are `h264` (default),
|
||||
`h265` and `av1`:
|
||||
|
||||
```bash
|
||||
scrcpy --video-codec=h264 # default
|
||||
scrcpy --video-codec=h265
|
||||
scrcpy --video-codec=av1
|
||||
```
|
||||
|
||||
H265 may provide better quality, but H264 should provide lower latency.
|
||||
AV1 encoders are not common on current Android devices.
|
||||
|
||||
Several encoders may be available on the device. They can be listed by:
|
||||
|
||||
```bash
|
||||
scrcpy --list-encoders
|
||||
```
|
||||
|
||||
Sometimes, the default encoder may have issues or even crash, so it is useful to
|
||||
try another one:
|
||||
|
||||
```bash
|
||||
scrcpy --video-codec=h264 --video-encoder='OMX.qcom.video.encoder.avc'
|
||||
```
|
||||
|
||||
For advanced usage, to pass arbitrary parameters to the [`MediaFormat`],
|
||||
check `--video-codec-options` in the manpage or in `scrcpy --help`.
|
||||
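
For instance (a hypothetical illustration; the exact syntax and supported keys
are described in the manpage), options are passed as comma-separated
`key=value` pairs:

```bash
scrcpy --video-codec-options=profile=1,level=2
```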
|
||||
[`MediaFormat`]: https://developer.android.com/reference/android/media/MediaFormat
|
||||
|
||||
|
||||
## Rotation
|
||||
|
||||
The rotation may be applied at 3 different levels:
|
||||
- The [shortcut](shortcuts.md) <kbd>MOD</kbd>+<kbd>r</kbd> requests the
|
||||
device to switch between portrait and landscape (the current running app may
|
||||
refuse, if it does not support the requested orientation).
|
||||
- `--lock-video-orientation` changes the mirroring orientation (the orientation
|
||||
of the video sent from the device to the computer). This affects the
|
||||
recording.
|
||||
- `--rotation` rotates only the window content. This only affects the display,
|
||||
not the recording. It may be changed dynamically at any time using the
|
||||
[shortcuts](shortcuts.md) <kbd>MOD</kbd>+<kbd>←</kbd> and
|
||||
<kbd>MOD</kbd>+<kbd>→</kbd>.
|
||||
|
||||
To lock the mirroring orientation:
|
||||
|
||||
```bash
|
||||
scrcpy --lock-video-orientation # initial (current) orientation
|
||||
scrcpy --lock-video-orientation=0 # natural orientation
|
||||
scrcpy --lock-video-orientation=1 # 90° counterclockwise
|
||||
scrcpy --lock-video-orientation=2 # 180°
|
||||
scrcpy --lock-video-orientation=3 # 90° clockwise
|
||||
```
|
||||
|
||||
To set an initial window rotation:
|
||||
|
||||
```bash
|
||||
scrcpy --rotation=0 # no rotation
|
||||
scrcpy --rotation=1 # 90 degrees counterclockwise
|
||||
scrcpy --rotation=2 # 180 degrees
|
||||
scrcpy --rotation=3 # 90 degrees clockwise
|
||||
```
|
||||
|
||||
## Crop
|
||||
|
||||
The device screen may be cropped to mirror only part of the screen.
|
||||
|
||||
This is useful, for example, to mirror only one eye of the Oculus Go:
|
||||
|
||||
```bash
|
||||
scrcpy --crop=1224:1440:0:0 # 1224x1440 at offset (0,0)
|
||||
```
|
||||
|
||||
The values are expressed in the device natural orientation (portrait for a
|
||||
phone, landscape for a tablet).
|
||||
|
||||
If `--max-size` is also specified, resizing is applied after cropping.
|
||||
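
For example, to combine both options (crop first, then downscale):

```bash
scrcpy --crop=1224:1440:0:0 --max-size=1024
```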
|
||||
|
||||
## Buffering
|
||||
|
||||
By default, there is no video buffering, to get the lowest possible latency.
|
||||
|
||||
Buffering can be added to delay the video stream and compensate for jitter to
|
||||
get a smoother playback (see [#2464]).
|
||||
|
||||
[#2464]: https://github.com/Genymobile/scrcpy/issues/2464
|
||||
|
||||
The configuration is available independently for the display,
|
||||
[v4l2 sinks](video.md#video4linux) and [audio](audio.md#buffering) playback.
|
||||
|
||||
```bash
|
||||
scrcpy --display-buffer=50 # add 50ms buffering for display
|
||||
scrcpy --v4l2-buffer=300 # add 300ms buffering for v4l2 sink
|
||||
scrcpy --audio-buffer=200 # set 200ms buffering for audio playback
|
||||
```
|
||||
|
||||
They can be applied simultaneously:
|
||||
|
||||
```bash
|
||||
scrcpy --display-buffer=50 --v4l2-buffer=300
|
||||
```
|
||||
|
||||
|
||||
## No playback
|
||||
|
||||
It is possible to capture an Android device without playing video or audio on
|
||||
the computer. This option is useful when [recording](recording.md) or when
|
||||
[v4l2](#video4linux) is enabled:
|
||||
|
||||
```bash
|
||||
scrcpy --v4l2-sink=/dev/video2 --no-playback
|
||||
scrcpy --record=file.mkv --no-playback
|
||||
# interrupt with Ctrl+C
|
||||
```
|
||||
|
||||
It is also possible to disable video and audio playback separately:
|
||||
|
||||
```bash
|
||||
# Send video to V4L2 sink without playing it, but keep audio playback
|
||||
scrcpy --v4l2-sink=/dev/video2 --no-video-playback
|
||||
|
||||
# Record both video and audio, but only play video
|
||||
scrcpy --record=file.mkv --no-audio-playback
|
||||
```
|
||||
|
||||
|
||||
## No video
|
||||
|
||||
To disable video forwarding completely, so that only audio is forwarded:
|
||||
|
||||
```
|
||||
scrcpy --no-video
|
||||
```
|
||||
|
||||
|
||||
## Video4Linux
|
||||
|
||||
See the dedicated [Video4Linux](v4l2.md) page.
|
55
doc/window.md
Normal file
55
doc/window.md
Normal file
@ -0,0 +1,55 @@
# Window

## Title

By default, the window title is the device model. It can be changed:

```bash
scrcpy --window-title='My device'
```

## Position and size

The initial window position and size may be specified:

```bash
scrcpy --window-x=100 --window-y=100 --window-width=800 --window-height=600
```

## Borderless

To disable window decorations:

```bash
scrcpy --window-borderless
```

## Always on top

To keep the window always on top:

```bash
scrcpy --always-on-top
```

## Fullscreen

The app may be started directly in fullscreen:

```bash
scrcpy --fullscreen
scrcpy -f  # short version
```

Fullscreen mode can then be toggled dynamically with <kbd>MOD</kbd>+<kbd>f</kbd>
(see [shortcuts](shortcuts.md)).

## Disable screensaver

By default, _scrcpy_ does not prevent the screensaver from running on the
computer. To disable it:

```bash
scrcpy --disable-screensaver
```

89
doc/windows.md
Normal file
89
doc/windows.md
Normal file
@ -0,0 +1,89 @@
# On Windows

## Install

Download the [latest release]:

- [`scrcpy-win64-v2.0.zip`][direct-win64] (64-bit)
  <sub>SHA-256: `ae4c8d37a496b43f8974ba8f07f708e22a9570ba0cddc3dc3a36edbccd4d2a20`</sub>
- [`scrcpy-win32-v2.0.zip`][direct-win32] (32-bit)
  <sub>SHA-256: `15d98c02cb0e0bbd84f8b5d54991e0f6925569b1286a86a40743944fcb1c2d8c`</sub>

[latest release]: https://github.com/Genymobile/scrcpy/releases/latest
[direct-win64]: https://github.com/Genymobile/scrcpy/releases/download/v2.0/scrcpy-win64-v2.0.zip
[direct-win32]: https://github.com/Genymobile/scrcpy/releases/download/v2.0/scrcpy-win32-v2.0.zip

and extract it.

Alternatively, you can install it from a package manager, like [Chocolatey]:

```bash
choco install scrcpy
choco install adb    # if you don't have it yet
```

or [Scoop]:

```bash
scoop install scrcpy
scoop install adb    # if you don't have it yet
```

[Chocolatey]: https://chocolatey.org/
[Scoop]: https://scoop.sh

_See [build.md](build.md) to build and install the app manually._


## Run

_Make sure that your device meets the [prerequisites](/README.md#prerequisites)._

Scrcpy is a command line application: it is mainly intended to be executed from
a terminal with command line arguments.

To open a terminal at the expected location, double-click on
`open_a_terminal_here.bat` in your scrcpy directory, then type your command. For
example, without arguments:

```bash
scrcpy
```

or with arguments (here to disable audio and record to `file.mkv`):

```bash
scrcpy --no-audio --record=file.mkv
```

Documentation for command line arguments is available:
 - `scrcpy --help`
 - on [github](/README.md)

To start scrcpy directly without opening a terminal, double-click on one of
these files:
 - `scrcpy-console.bat`: start with a terminal open (it will close when scrcpy
   terminates, unless an error occurs);
 - `scrcpy-noconsole.vbs`: start without a terminal (but you won't see any error
   message).

_Avoid double-clicking on `scrcpy.exe` directly: on error, the terminal would
close immediately and you would not have time to read any error message (this
executable is intended to be run from the terminal). Use `scrcpy-console.bat`
instead._

If you plan to always use the same arguments, create a file `myscrcpy.bat`
(enable [show file extensions] to avoid confusion) containing your command. For
example:

```bash
scrcpy --prefer-text --turn-screen-off --stay-awake
```

[show file extensions]: https://www.howtogeek.com/205086/beginner-how-to-make-windows-show-file-extensions/

Then just double-click on that file.

You could also edit (a copy of) `scrcpy-console.bat` or `scrcpy-noconsole.vbs`
to add some arguments.

@ -2,8 +2,8 @@
|
||||
set -e
|
||||
|
||||
BUILDDIR=build-auto
|
||||
PREBUILT_SERVER_URL=https://github.com/Genymobile/scrcpy/releases/download/v1.25/scrcpy-server-v1.25
|
||||
PREBUILT_SERVER_SHA256=ce0306c7bbd06ae72f6d06f7ec0ee33774995a65de71e0a83813ecb67aec9bdb
|
||||
PREBUILT_SERVER_URL=https://github.com/Genymobile/scrcpy/releases/download/v2.0/scrcpy-server-v2.0
|
||||
PREBUILT_SERVER_SHA256=9e241615f578cd690bb43311000debdecf6a9c50a7082b001952f18f6f21ddc2
|
||||
|
||||
echo "[scrcpy] Downloading prebuilt server..."
|
||||
wget "$PREBUILT_SERVER_URL" -O scrcpy-server
|
||||
|
@ -1,5 +1,5 @@
|
||||
project('scrcpy', 'c',
|
||||
version: '1.25',
|
||||
version: '2.0',
|
||||
meson_version: '>= 0.48',
|
||||
default_options: [
|
||||
'c_std=c11',
|
||||
|
34
release.mk
34
release.mk
@ -94,15 +94,14 @@ dist-win32: build-server build-win32
|
||||
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN32_TARGET_DIR)"
|
||||
cp app/data/icon.png "$(DIST)/$(WIN32_TARGET_DIR)"
|
||||
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN32_TARGET_DIR)"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avutil-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avcodec-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avformat-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/swresample-4.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/zlib1.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/adb.exe "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinUsbApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/SDL2-2.26.1/i686-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avutil-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avcodec-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avformat-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/swresample-4.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/adb.exe "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/AdbWinApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/AdbWinUsbApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/SDL2-2.28.0/i686-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/libusb-1.0.26/libusb-MinGW-Win32/bin/msys-usb-1.0.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
|
||||
|
||||
dist-win64: build-server build-win64
|
||||
@ -113,15 +112,14 @@ dist-win64: build-server build-win64
|
||||
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN64_TARGET_DIR)"
|
||||
cp app/data/icon.png "$(DIST)/$(WIN64_TARGET_DIR)"
|
||||
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN64_TARGET_DIR)"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avutil-58.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avcodec-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avformat-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/zlib1.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/adb.exe "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinUsbApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/SDL2-2.26.1/x86_64-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avutil-58.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avcodec-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avformat-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/adb.exe "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/AdbWinApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/platform-tools-34.0.1/AdbWinUsbApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/SDL2-2.28.0/x86_64-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
cp app/prebuilt-deps/data/libusb-1.0.26/libusb-MinGW-x64/bin/msys-usb-1.0.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
|
||||
|
||||
zip-win32: dist-win32
|
||||
|
@ -7,8 +7,8 @@ android {
|
||||
applicationId "com.genymobile.scrcpy"
|
||||
minSdkVersion 21
|
||||
targetSdkVersion 33
|
||||
versionCode 12500
|
||||
versionName "1.25"
|
||||
versionCode 20000
|
||||
versionName "2.0"
|
||||
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
|
||||
}
|
||||
buildTypes {
|
||||
|
@ -12,7 +12,7 @@
|
||||
set -e
|
||||
|
||||
SCRCPY_DEBUG=false
|
||||
SCRCPY_VERSION_NAME=1.25
|
||||
SCRCPY_VERSION_NAME=2.0
|
||||
|
||||
PLATFORM=${ANDROID_PLATFORM:-33}
|
||||
BUILD_TOOLS=${ANDROID_BUILD_TOOLS:-33.0.0}
|
||||
|
26
server/src/main/aidl/android/view/IDisplayFoldListener.aidl
Normal file
26
server/src/main/aidl/android/view/IDisplayFoldListener.aidl
Normal file
@ -0,0 +1,26 @@
|
||||
/*
|
||||
* Copyright (C) 2019 The Android Open Source Project
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package android.view;
|
||||
|
||||
/**
|
||||
* {@hide}
|
||||
*/
|
||||
oneway interface IDisplayFoldListener
|
||||
{
|
||||
/** Called when the foldedness of a display changes */
|
||||
void onDisplayFoldChanged(int displayId, boolean folded);
|
||||
}
|
@ -1,7 +1,16 @@
|
||||
package com.genymobile.scrcpy;
|
||||
|
||||
public interface AsyncProcessor {
|
||||
void start();
|
||||
interface TerminationListener {
|
||||
/**
|
||||
* Notify processor termination
|
||||
*
|
||||
* @param fatalError {@code true} if this must cause the termination of the whole scrcpy-server.
|
||||
*/
|
||||
void onTerminated(boolean fatalError);
|
||||
}
|
||||
|
||||
void start(TerminationListener listener);
|
||||
void stop();
|
||||
void join() throws InterruptedException;
|
||||
}
|
||||
|
@ -10,11 +10,9 @@ import android.media.AudioFormat;
|
||||
import android.media.AudioRecord;
|
||||
import android.media.AudioTimestamp;
|
||||
import android.media.MediaCodec;
|
||||
import android.media.MediaRecorder;
|
||||
import android.os.Build;
|
||||
import android.os.SystemClock;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.nio.ByteBuffer;
|
||||
|
||||
public final class AudioCapture {
|
||||
@ -22,22 +20,29 @@ public final class AudioCapture {
|
||||
public static final int SAMPLE_RATE = 48000;
|
||||
public static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
|
||||
public static final int CHANNELS = 2;
|
||||
public static final int FORMAT = AudioFormat.ENCODING_PCM_16BIT;
|
||||
public static final int CHANNEL_MASK = AudioFormat.CHANNEL_IN_LEFT | AudioFormat.CHANNEL_IN_RIGHT;
|
||||
public static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
|
||||
public static final int BYTES_PER_SAMPLE = 2;
|
||||
|
||||
private final int audioSource;
|
||||
|
||||
private AudioRecord recorder;
|
||||
|
||||
private final AudioTimestamp timestamp = new AudioTimestamp();
|
||||
private long previousPts = 0;
|
||||
private long nextPts = 0;
|
||||
|
||||
public AudioCapture(AudioSource audioSource) {
|
||||
this.audioSource = audioSource.value();
|
||||
}
|
||||
|
||||
public static int millisToBytes(int millis) {
|
||||
return SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * millis / 1000;
|
||||
}
|
||||
|
||||
private static AudioFormat createAudioFormat() {
|
||||
AudioFormat.Builder builder = new AudioFormat.Builder();
|
||||
builder.setEncoding(FORMAT);
|
||||
builder.setEncoding(ENCODING);
|
||||
builder.setSampleRate(SAMPLE_RATE);
|
||||
builder.setChannelMask(CHANNEL_CONFIG);
|
||||
return builder.build();
|
||||
@ -45,60 +50,80 @@ public final class AudioCapture {
|
||||
|
||||
@TargetApi(Build.VERSION_CODES.M)
|
||||
@SuppressLint({"WrongConstant", "MissingPermission"})
|
||||
private static AudioRecord createAudioRecord() {
|
||||
private static AudioRecord createAudioRecord(int audioSource) {
|
||||
AudioRecord.Builder builder = new AudioRecord.Builder();
|
||||
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
|
||||
// On older APIs, Workarounds.fillAppInfo() must be called beforehand
|
||||
builder.setContext(FakeContext.get());
|
||||
}
|
||||
builder.setAudioSource(MediaRecorder.AudioSource.REMOTE_SUBMIX);
|
||||
builder.setAudioSource(audioSource);
|
||||
builder.setAudioFormat(createAudioFormat());
|
||||
int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, FORMAT);
|
||||
int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, ENCODING);
|
||||
// This buffer size does not impact latency
|
||||
builder.setBufferSizeInBytes(8 * minBufferSize);
|
||||
return builder.build();
|
||||
}
|
||||
|
||||
private static void startWorkaroundAndroid11() {
|
||||
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
|
||||
// Android 11 requires Apps to be at foreground to record audio.
|
||||
// Normally, each App has its own user ID, so Android checks whether the requesting App has the user ID that's at the foreground.
|
||||
// But scrcpy server is NOT an App, it's a Java application started from Android shell, so it has the same user ID (2000) with Android
|
||||
// shell ("com.android.shell").
|
||||
// If there is an Activity from Android shell running at foreground, then the permission system will believe scrcpy is also in the
|
||||
// foreground.
|
||||
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
|
||||
Intent intent = new Intent(Intent.ACTION_MAIN);
|
||||
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
|
||||
intent.addCategory(Intent.CATEGORY_LAUNCHER);
|
||||
intent.setComponent(new ComponentName(FakeContext.PACKAGE_NAME, "com.android.shell.HeapDumpActivity"));
|
||||
ServiceManager.getActivityManager().startActivityAsUserWithFeature(intent);
|
||||
// Wait for activity to start
|
||||
SystemClock.sleep(150);
|
||||
}
|
||||
}
|
||||
// Android 11 requires Apps to be at foreground to record audio.
|
||||
// Normally, each App has its own user ID, so Android checks whether the requesting App has the user ID that's at the foreground.
|
||||
// But scrcpy server is NOT an App, it's a Java application started from Android shell, so it has the same user ID (2000) with Android
|
||||
// shell ("com.android.shell").
|
||||
// If there is an Activity from Android shell running at foreground, then the permission system will believe scrcpy is also in the
|
||||
// foreground.
|
||||
Intent intent = new Intent(Intent.ACTION_MAIN);
|
||||
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
|
||||
intent.addCategory(Intent.CATEGORY_LAUNCHER);
|
||||
intent.setComponent(new ComponentName(FakeContext.PACKAGE_NAME, "com.android.shell.HeapDumpActivity"));
|
||||
ServiceManager.getActivityManager().startActivityAsUserWithFeature(intent);
|
||||
}
|
||||
|
||||
private static void stopWorkaroundAndroid11() {
|
||||
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
|
||||
ServiceManager.getActivityManager().forceStopPackage(FakeContext.PACKAGE_NAME);
|
||||
ServiceManager.getActivityManager().forceStopPackage(FakeContext.PACKAGE_NAME);
|
||||
}
|
||||
|
||||
private void tryStartRecording(int attempts, int delayMs) throws AudioCaptureForegroundException {
|
||||
while (attempts-- > 0) {
|
||||
// Wait for activity to start
|
||||
SystemClock.sleep(delayMs);
|
||||
try {
|
||||
startRecording();
|
||||
return; // it worked
|
||||
} catch (UnsupportedOperationException e) {
|
||||
if (attempts == 0) {
|
||||
Ln.e("Failed to start audio capture");
|
||||
Ln.e("On Android 11, audio capture must be started in the foreground, make sure that the device is unlocked when starting "
|
||||
+ "scrcpy.");
|
||||
throw new AudioCaptureForegroundException();
|
||||
} else {
|
||||
Ln.d("Failed to start audio capture, retrying...");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public void start() throws AudioCaptureForegroundException {
|
||||
startWorkaroundAndroid11();
|
||||
private void startRecording() {
|
||||
try {
|
||||
recorder = createAudioRecord();
|
||||
recorder.startRecording();
|
||||
} catch (UnsupportedOperationException e) {
|
||||
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
|
||||
Ln.e("Failed to start audio capture");
|
||||
Ln.e("On Android 11, it is only possible to capture in foreground, make sure that the device is unlocked when starting scrcpy.");
|
||||
throw new AudioCaptureForegroundException();
|
||||
recorder = createAudioRecord(audioSource);
|
||||
} catch (NullPointerException e) {
|
||||
// Creating an AudioRecord using an AudioRecord.Builder does not work on Vivo phones:
|
||||
// - <https://github.com/Genymobile/scrcpy/issues/3805>
|
||||
// - <https://github.com/Genymobile/scrcpy/pull/3862>
|
||||
recorder = Workarounds.createAudioRecord(audioSource, SAMPLE_RATE, CHANNEL_CONFIG, CHANNELS, CHANNEL_MASK, ENCODING);
|
||||
}
|
||||
recorder.startRecording();
|
||||
}
|
||||
|
||||
public void start() throws AudioCaptureForegroundException {
|
||||
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
|
||||
startWorkaroundAndroid11();
|
||||
try {
|
||||
tryStartRecording(3, 100);
|
||||
} finally {
|
||||
stopWorkaroundAndroid11();
|
||||
}
|
||||
throw e;
|
||||
} finally {
|
||||
stopWorkaroundAndroid11();
|
||||
} else {
|
||||
startRecording();
|
||||
}
|
||||
}
|
||||
|
||||
@ -110,9 +135,9 @@ public final class AudioCapture {
|
||||
}
|
||||
|
||||
@TargetApi(Build.VERSION_CODES.N)
|
||||
public int read(ByteBuffer directBuffer, int size, MediaCodec.BufferInfo outBufferInfo) throws IOException {
|
||||
public int read(ByteBuffer directBuffer, int size, MediaCodec.BufferInfo outBufferInfo) {
|
||||
int r = recorder.read(directBuffer, size);
|
||||
if (r < 0) {
|
||||
if (r <= 0) {
|
||||
return r;
|
||||
}
|
||||
|
||||
|
@ -40,6 +40,7 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
private static final int READ_MS = 5; // milliseconds
|
||||
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
|
||||
|
||||
private final AudioCapture capture;
|
||||
private final Streamer streamer;
|
||||
private final int bitRate;
|
||||
private final List<CodecOption> codecOptions;
|
||||
@ -58,7 +59,8 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
|
||||
private boolean ended;
|
||||
|
||||
public AudioEncoder(Streamer streamer, int bitRate, List<CodecOption> codecOptions, String encoderName) {
|
||||
public AudioEncoder(AudioCapture capture, Streamer streamer, int bitRate, List<CodecOption> codecOptions, String encoderName) {
|
||||
this.capture = capture;
|
||||
this.streamer = streamer;
|
||||
this.bitRate = bitRate;
|
||||
this.codecOptions = codecOptions;
|
||||
@ -92,7 +94,7 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
InputTask task = inputTasks.take();
|
||||
ByteBuffer buffer = mediaCodec.getInputBuffer(task.index);
|
||||
int r = capture.read(buffer, READ_SIZE, bufferInfo);
|
||||
if (r < 0) {
|
||||
if (r <= 0) {
|
||||
throw new IOException("Could not read audio: " + r);
|
||||
}
|
||||
|
||||
@ -101,7 +103,7 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
}
|
||||
|
||||
private void outputThread(MediaCodec mediaCodec) throws IOException, InterruptedException {
|
||||
streamer.writeHeader();
|
||||
streamer.writeAudioHeader();
|
||||
|
||||
while (!Thread.currentThread().isInterrupted()) {
|
||||
OutputTask task = outputTasks.take();
|
||||
@ -114,21 +116,29 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
}
|
||||
}
|
||||
|
||||
public void start() {
|
||||
@Override
|
||||
public void start(TerminationListener listener) {
|
||||
thread = new Thread(() -> {
|
||||
boolean fatalError = false;
|
||||
try {
|
||||
encode();
|
||||
} catch (ConfigurationException | AudioCaptureForegroundException e) {
|
||||
} catch (ConfigurationException e) {
|
||||
// Do not print stack trace, a user-friendly error-message has already been logged
|
||||
fatalError = true;
|
||||
} catch (AudioCaptureForegroundException e) {
|
||||
// Do not print stack trace, a user-friendly error-message has already been logged
|
||||
} catch (IOException e) {
|
||||
Ln.e("Audio encoding error", e);
|
||||
fatalError = true;
|
||||
} finally {
|
||||
Ln.d("Audio encoder stopped");
|
||||
listener.onTerminated(fatalError);
|
||||
}
|
||||
});
|
||||
}, "audio-encoder");
|
||||
thread.start();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void stop() {
|
||||
if (thread != null) {
|
||||
// Just wake up the blocking wait from the thread, so that it properly releases all its resources and terminates
|
||||
@ -136,6 +146,7 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void join() throws InterruptedException {
|
||||
if (thread != null) {
|
||||
thread.join();
|
||||
@ -166,14 +177,13 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
}
|
||||
|
||||
MediaCodec mediaCodec = null;
|
||||
AudioCapture capture = new AudioCapture();
|
||||
|
||||
boolean mediaCodecStarted = false;
|
||||
try {
|
||||
Codec codec = streamer.getCodec();
|
||||
mediaCodec = createMediaCodec(codec, encoderName);
|
||||
|
||||
mediaCodecThread = new HandlerThread("AudioEncoder");
|
||||
mediaCodecThread = new HandlerThread("media-codec");
|
||||
mediaCodecThread.start();
|
||||
|
||||
MediaFormat format = createFormat(codec.getMimeType(), bitRate, codecOptions);
|
||||
@ -183,16 +193,15 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
capture.start();
|
||||
|
||||
final MediaCodec mediaCodecRef = mediaCodec;
|
||||
final AudioCapture captureRef = capture;
|
||||
inputThread = new Thread(() -> {
|
||||
try {
|
||||
inputThread(mediaCodecRef, captureRef);
|
||||
inputThread(mediaCodecRef, capture);
|
||||
} catch (IOException | InterruptedException e) {
|
||||
Ln.e("Audio capture error", e);
|
||||
} finally {
|
||||
end();
|
||||
}
|
||||
});
|
||||
}, "audio-in");
|
||||
|
||||
outputThread = new Thread(() -> {
|
||||
try {
|
||||
@ -207,7 +216,7 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
} finally {
|
||||
end();
|
||||
}
|
||||
});
|
||||
}, "audio-out");
|
||||
|
||||
mediaCodec.start();
|
||||
mediaCodecStarted = true;
|
||||
@ -271,13 +280,22 @@ public final class AudioEncoder implements AsyncProcessor {
|
||||
try {
|
||||
return MediaCodec.createByCodecName(encoderName);
|
||||
} catch (IllegalArgumentException e) {
|
||||
Ln.e("Encoder '" + encoderName + "' for " + codec.getName() + " not found\n" + LogUtils.buildAudioEncoderListMessage());
|
||||
Ln.e("Audio encoder '" + encoderName + "' for " + codec.getName() + " not found\n" + LogUtils.buildAudioEncoderListMessage());
|
||||
throw new ConfigurationException("Unknown encoder: " + encoderName);
|
||||
} catch (IOException e) {
|
||||
Ln.e("Could not create audio encoder '" + encoderName + "' for " + codec.getName() + "\n" + LogUtils.buildAudioEncoderListMessage());
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
MediaCodec mediaCodec = MediaCodec.createEncoderByType(codec.getMimeType());
|
||||
Ln.d("Using audio encoder: '" + mediaCodec.getName() + "'");
|
||||
return mediaCodec;
|
||||
|
||||
try {
|
||||
MediaCodec mediaCodec = MediaCodec.createEncoderByType(codec.getMimeType());
|
||||
Ln.d("Using audio encoder: '" + mediaCodec.getName() + "'");
|
||||
return mediaCodec;
|
||||
} catch (IOException | IllegalArgumentException e) {
|
||||
Ln.e("Could not create default audio encoder for " + codec.getName() + "\n" + LogUtils.buildAudioEncoderListMessage());
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
private class EncoderCallback extends MediaCodec.Callback {
|
||||
|
@ -1,12 +1,14 @@
|
||||
package com.genymobile.scrcpy;
|
||||
|
||||
import android.media.MediaCodec;
|
||||
import android.os.Build;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.nio.ByteBuffer;
|
||||
|
||||
public final class AudioRawRecorder implements AsyncProcessor {
|
||||
|
||||
private final AudioCapture capture;
|
||||
private final Streamer streamer;
|
||||
|
||||
private Thread thread;
|
||||
@ -14,19 +16,25 @@ public final class AudioRawRecorder implements AsyncProcessor {
|
||||
private static final int READ_MS = 5; // milliseconds
|
||||
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
|
||||
|
||||
public AudioRawRecorder(Streamer streamer) {
|
||||
public AudioRawRecorder(AudioCapture capture, Streamer streamer) {
|
||||
this.capture = capture;
|
||||
this.streamer = streamer;
|
||||
}
|
||||
|
||||
private void record() throws IOException, AudioCaptureForegroundException {
|
||||
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) {
|
||||
Ln.w("Audio disabled: it is not supported before Android 11");
|
||||
streamer.writeDisableStream(false);
|
||||
return;
|
||||
}
|
||||
|
||||
final ByteBuffer buffer = ByteBuffer.allocateDirect(READ_SIZE);
|
||||
final MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
|
||||
|
||||
AudioCapture capture = new AudioCapture();
|
||||
try {
|
||||
capture.start();
|
||||
|
||||
streamer.writeHeader();
|
||||
streamer.writeAudioHeader();
|
||||
while (!Thread.currentThread().isInterrupted()) {
|
||||
buffer.position(0);
|
||||
int r = capture.read(buffer, READ_SIZE, bufferInfo);
|
||||
@ -46,27 +54,33 @@ public final class AudioRawRecorder implements AsyncProcessor {
|
||||
}
|
||||
}
|
||||
|
||||
public void start() {
|
||||
@Override
|
||||
public void start(TerminationListener listener) {
|
||||
thread = new Thread(() -> {
|
||||
boolean fatalError = false;
|
||||
try {
|
||||
record();
|
||||
} catch (AudioCaptureForegroundException e) {
|
||||
// Do not print stack trace, a user-friendly error-message has already been logged
|
||||
} catch (IOException e) {
|
||||
Ln.e("Audio recording error", e);
|
||||
fatalError = true;
|
||||
} finally {
|
||||
Ln.d("Audio recorder stopped");
|
||||
listener.onTerminated(fatalError);
|
||||
}
|
||||
});
|
||||
}, "audio-raw");
|
||||
thread.start();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void stop() {
|
||||
if (thread != null) {
|
||||
thread.interrupt();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void join() throws InterruptedException {
|
||||
if (thread != null) {
|
||||
thread.join();
|
||||
|
30
server/src/main/java/com/genymobile/scrcpy/AudioSource.java
Normal file
30
server/src/main/java/com/genymobile/scrcpy/AudioSource.java
Normal file
@ -0,0 +1,30 @@
|
||||
package com.genymobile.scrcpy;
|
||||
|
||||
import android.media.MediaRecorder;
|
||||
|
||||
public enum AudioSource {
|
||||
OUTPUT("output", MediaRecorder.AudioSource.REMOTE_SUBMIX),
|
||||
MIC("mic", MediaRecorder.AudioSource.MIC);
|
||||
|
||||
private final String name;
|
||||
private final int value;
|
||||
|
||||
AudioSource(String name, int value) {
|
||||
this.name = name;
|
||||
this.value = value;
|
||||
}
|
||||
|
||||
int value() {
|
||||
return value;
|
||||
}
|
||||
|
||||
static AudioSource findByName(String name) {
|
||||
for (AudioSource audioSource : AudioSource.values()) {
|
||||
if (name.equals(audioSource.name)) {
|
||||
return audioSource;
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
}
|
@ -84,7 +84,8 @@ public class Controller implements AsyncProcessor {
|
||||
}
|
||||
}
|
||||
|
||||
public void start() {
|
||||
@Override
|
||||
public void start(TerminationListener listener) {
|
||||
thread = new Thread(() -> {
|
||||
try {
|
||||
control();
|
||||
@ -92,12 +93,14 @@ public class Controller implements AsyncProcessor {
|
||||
// this is expected on close
|
||||
} finally {
|
||||
Ln.d("Controller stopped");
|
||||
listener.onTerminated(true);
|
||||
}
|
||||
});
|
||||
}, "control-recv");
|
||||
thread.start();
|
||||
sender.start();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void stop() {
|
||||
if (thread != null) {
|
||||
thread.interrupt();
|
||||
@ -105,6 +108,7 @@ public class Controller implements AsyncProcessor {
|
||||
sender.stop();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void join() throws InterruptedException {
|
||||
if (thread != null) {
|
||||
thread.join();
|
||||
|
@ -41,7 +41,7 @@ public final class DesktopConnection implements Closeable {
|
||||
controlInputStream = null;
|
||||
controlOutputStream = null;
|
||||
}
|
||||
videoFd = videoSocket.getFileDescriptor();
|
||||
videoFd = videoSocket != null ? videoSocket.getFileDescriptor() : null;
|
||||
audioFd = audioSocket != null ? audioSocket.getFileDescriptor() : null;
|
||||
}
|
||||
|
||||
@ -60,29 +60,43 @@ public final class DesktopConnection implements Closeable {
|
||||
return SOCKET_NAME_PREFIX + String.format("_%08x", scid);
|
||||
}
|
||||
|
||||
public static DesktopConnection open(int scid, boolean tunnelForward, boolean audio, boolean control, boolean sendDummyByte) throws IOException {
|
||||
public static DesktopConnection open(int scid, boolean tunnelForward, boolean video, boolean audio, boolean control, boolean sendDummyByte)
|
||||
throws IOException {
|
||||
String socketName = getSocketName(scid);
|
||||
|
||||
LocalSocket firstSocket = null;
|
||||
|
||||
LocalSocket videoSocket = null;
|
||||
LocalSocket audioSocket = null;
|
||||
LocalSocket controlSocket = null;
|
||||
try {
|
||||
if (tunnelForward) {
|
||||
try (LocalServerSocket localServerSocket = new LocalServerSocket(socketName)) {
|
||||
videoSocket = localServerSocket.accept();
|
||||
if (sendDummyByte) {
|
||||
// send one byte so the client may read() to detect a connection error
|
||||
videoSocket.getOutputStream().write(0);
|
||||
if (video) {
|
||||
videoSocket = localServerSocket.accept();
|
||||
firstSocket = videoSocket;
|
||||
}
|
||||
if (audio) {
|
||||
audioSocket = localServerSocket.accept();
|
||||
if (firstSocket == null) {
|
||||
firstSocket = audioSocket;
|
||||
}
|
||||
}
|
||||
if (control) {
|
||||
controlSocket = localServerSocket.accept();
|
||||
if (firstSocket == null) {
|
||||
firstSocket = controlSocket;
|
||||
}
|
||||
}
|
||||
if (sendDummyByte) {
|
||||
// send one byte so the client may read() to detect a connection error
|
||||
firstSocket.getOutputStream().write(0);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
videoSocket = connect(socketName);
|
||||
if (video) {
|
||||
videoSocket = connect(socketName);
|
||||
}
|
||||
if (audio) {
|
||||
audioSocket = connect(socketName);
|
||||
}
|
||||
@ -106,10 +120,22 @@ public final class DesktopConnection implements Closeable {
|
||||
return new DesktopConnection(videoSocket, audioSocket, controlSocket);
|
||||
}
|
||||
|
||||
private LocalSocket getFirstSocket() {
|
||||
if (videoSocket != null) {
|
||||
return videoSocket;
|
||||
}
|
||||
if (audioSocket != null) {
|
||||
return audioSocket;
|
||||
}
|
||||
return controlSocket;
|
||||
}
|
||||
|
||||
public void close() throws IOException {
|
||||
videoSocket.shutdownInput();
|
||||
videoSocket.shutdownOutput();
|
||||
videoSocket.close();
|
||||
if (videoSocket != null) {
|
||||
videoSocket.shutdownInput();
|
||||
videoSocket.shutdownOutput();
|
||||
videoSocket.close();
|
||||
}
|
||||
if (audioSocket != null) {
|
||||
audioSocket.shutdownInput();
|
||||
audioSocket.shutdownOutput();
|
||||
@ -122,19 +148,16 @@ public final class DesktopConnection implements Closeable {
|
||||
}
|
||||
}
|
||||
|
||||
public void sendDeviceMeta(String deviceName, int width, int height) throws IOException {
|
||||
byte[] buffer = new byte[DEVICE_NAME_FIELD_LENGTH + 4];
|
||||
public void sendDeviceMeta(String deviceName) throws IOException {
|
||||
byte[] buffer = new byte[DEVICE_NAME_FIELD_LENGTH];
|
||||
|
||||
byte[] deviceNameBytes = deviceName.getBytes(StandardCharsets.UTF_8);
|
||||
int len = StringUtils.getUtf8TruncationIndex(deviceNameBytes, DEVICE_NAME_FIELD_LENGTH - 1);
|
||||
System.arraycopy(deviceNameBytes, 0, buffer, 0, len);
|
||||
// byte[] are always 0-initialized in java, no need to set '\0' explicitly
|
||||
|
||||
buffer[DEVICE_NAME_FIELD_LENGTH] = (byte) (width >> 8);
|
||||
buffer[DEVICE_NAME_FIELD_LENGTH + 1] = (byte) width;
|
||||
buffer[DEVICE_NAME_FIELD_LENGTH + 2] = (byte) (height >> 8);
|
||||
buffer[DEVICE_NAME_FIELD_LENGTH + 3] = (byte) height;
|
||||
IO.writeFully(videoFd, buffer, 0, buffer.length);
|
||||
FileDescriptor fd = getFirstSocket().getFileDescriptor();
|
||||
IO.writeFully(fd, buffer, 0, buffer.length);
|
||||
}
|
||||
|
||||
public FileDescriptor getVideoFd() {
|
||||
|
@ -12,6 +12,7 @@ import android.os.Build;
|
||||
import android.os.IBinder;
|
||||
import android.os.SystemClock;
|
||||
import android.view.IRotationWatcher;
|
||||
import android.view.IDisplayFoldListener;
|
||||
import android.view.InputDevice;
|
||||
import android.view.InputEvent;
|
||||
import android.view.KeyCharacterMap;
|
||||
@ -35,6 +36,10 @@ public final class Device {
|
||||
void onRotationChanged(int rotation);
|
||||
}
|
||||
|
||||
public interface FoldListener {
|
||||
void onFoldChanged(int displayId, boolean folded);
|
||||
}
|
||||
|
||||
public interface ClipboardListener {
|
||||
void onClipboardTextChanged(String text);
|
||||
}
|
||||
@ -46,6 +51,7 @@ public final class Device {
|
||||
|
||||
private ScreenInfo screenInfo;
|
||||
private RotationListener rotationListener;
|
||||
private FoldListener foldListener;
|
||||
private ClipboardListener clipboardListener;
|
||||
private final AtomicBoolean isSettingClipboard = new AtomicBoolean();
|
||||
|
||||
@ -93,6 +99,26 @@ public final class Device {
|
||||
}
|
||||
}, displayId);
|
||||
|
||||
ServiceManager.getWindowManager().registerDisplayFoldListener(new IDisplayFoldListener.Stub() {
|
||||
@Override
|
||||
public void onDisplayFoldChanged(int displayId, boolean folded) {
|
||||
synchronized (Device.this) {
|
||||
DisplayInfo displayInfo = ServiceManager.getDisplayManager().getDisplayInfo(displayId);
|
||||
if (displayInfo == null) {
|
||||
Ln.e("Display " + displayId + " not found\n" + LogUtils.buildDisplayListMessage());
|
||||
return;
|
||||
}
|
||||
|
||||
screenInfo = ScreenInfo.computeScreenInfo(displayInfo.getRotation(), displayInfo.getSize(), options.getCrop(),
|
||||
options.getMaxSize(), options.getLockVideoOrientation());
|
||||
// notify
|
||||
if (foldListener != null) {
|
||||
foldListener.onFoldChanged(displayId, folded);
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
if (options.getControl() && options.getClipboardAutosync()) {
|
||||
// If control and autosync are enabled, synchronize Android clipboard to the computer automatically
|
||||
ClipboardManager clipboardManager = ServiceManager.getClipboardManager();
|
||||
@ -224,6 +250,10 @@ public final class Device {
|
||||
this.rotationListener = rotationListener;
|
||||
}
|
||||
|
||||
public synchronized void setFoldListener(FoldListener foldlistener) {
|
||||
this.foldListener = foldlistener;
|
||||
}
|
||||
|
||||
public synchronized void setClipboardListener(ClipboardListener clipboardListener) {
|
||||
this.clipboardListener = clipboardListener;
|
||||
}
|
||||
@ -288,10 +318,7 @@ public final class Device {
|
||||
boolean allOk = true;
|
||||
for (long physicalDisplayId : physicalDisplayIds) {
|
||||
IBinder binder = SurfaceControl.getPhysicalDisplayToken(physicalDisplayId);
|
||||
boolean ok = SurfaceControl.setDisplayPowerMode(binder, mode);
|
||||
if (!ok) {
|
||||
allOk = false;
|
||||
}
|
||||
allOk &= SurfaceControl.setDisplayPowerMode(binder, mode);
|
||||
}
|
||||
return allOk;
|
||||
}
|
||||
|
@ -60,7 +60,7 @@ public final class DeviceMessageSender {
|
||||
} finally {
|
||||
Ln.d("Device message sender stopped");
|
||||
}
|
||||
});
|
||||
}, "control-send");
|
||||
thread.start();
|
||||
}
|
||||
|
||||
|
@ -2,11 +2,11 @@ package com.genymobile.scrcpy;
|
||||
|
||||
import android.annotation.TargetApi;
|
||||
import android.content.AttributionSource;
|
||||
import android.content.ContextWrapper;
|
||||
import android.content.MutableContextWrapper;
|
||||
import android.os.Build;
|
||||
import android.os.Process;
|
||||
|
||||
public final class FakeContext extends ContextWrapper {
|
||||
public final class FakeContext extends MutableContextWrapper {
|
||||
|
||||
public static final String PACKAGE_NAME = "com.android.shell";
|
||||
public static final int ROOT_UID = 0; // Like android.os.Process.ROOT_UID, but before API 29
|
||||
@ -38,4 +38,10 @@ public final class FakeContext extends ContextWrapper {
|
||||
builder.setPackageName(PACKAGE_NAME);
|
||||
return builder.build();
|
||||
}
|
||||
|
||||
// @Override to be added on SDK upgrade for Android 14
|
||||
@SuppressWarnings("unused")
|
||||
public int getDeviceId() {
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
@ -3,15 +3,18 @@ package com.genymobile.scrcpy;
|
||||
import android.graphics.Rect;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Locale;
|
||||
|
||||
public class Options {
|
||||
|
||||
private Ln.Level logLevel = Ln.Level.DEBUG;
|
||||
private int scid = -1; // 31-bit non-negative value, or -1
|
||||
private boolean video = true;
|
||||
private boolean audio = true;
|
||||
private int maxSize;
|
||||
private VideoCodec videoCodec = VideoCodec.H264;
|
||||
private AudioCodec audioCodec = AudioCodec.OPUS;
|
||||
private AudioSource audioSource = AudioSource.OUTPUT;
|
||||
private int videoBitRate = 8000000;
|
||||
private int audioBitRate = 128000;
|
||||
private int maxFps;
|
||||
@ -40,172 +43,96 @@ public class Options {
|
||||
private boolean sendDeviceMeta = true; // send device name and size
|
||||
private boolean sendFrameMeta = true; // send PTS so that the client may record properly
|
||||
private boolean sendDummyByte = true; // write a byte on start to detect connection issues
|
||||
private boolean sendCodecId = true; // write the codec ID (4 bytes) before the stream
|
||||
private boolean sendCodecMeta = true; // write the codec metadata before the stream
|
||||
|
||||
public Ln.Level getLogLevel() {
|
||||
return logLevel;
|
||||
}
|
||||
|
||||
public void setLogLevel(Ln.Level logLevel) {
|
||||
this.logLevel = logLevel;
|
||||
}
|
||||
|
||||
public int getScid() {
|
||||
return scid;
|
||||
}
|
||||
|
||||
public void setScid(int scid) {
|
||||
this.scid = scid;
|
||||
public boolean getVideo() {
|
||||
return video;
|
||||
}
|
||||
|
||||
public boolean getAudio() {
|
||||
return audio;
|
||||
}
|
||||
|
||||
public void setAudio(boolean audio) {
|
||||
this.audio = audio;
|
||||
}
|
||||
|
||||
public int getMaxSize() {
|
||||
return maxSize;
|
||||
}
|
||||
|
||||
public void setMaxSize(int maxSize) {
|
||||
this.maxSize = maxSize;
|
||||
}
|
||||
|
||||
public VideoCodec getVideoCodec() {
|
||||
return videoCodec;
|
||||
}
|
||||
|
||||
public void setVideoCodec(VideoCodec videoCodec) {
|
||||
this.videoCodec = videoCodec;
|
||||
}
|
||||
|
||||
public AudioCodec getAudioCodec() {
|
||||
return audioCodec;
|
||||
}
|
||||
|
||||
public void setAudioCodec(AudioCodec audioCodec) {
|
||||
this.audioCodec = audioCodec;
|
||||
public AudioSource getAudioSource() {
|
||||
return audioSource;
|
||||
}
|
||||
|
||||
public int getVideoBitRate() {
|
||||
return videoBitRate;
|
||||
}
|
||||
|
||||
public void setVideoBitRate(int videoBitRate) {
|
||||
this.videoBitRate = videoBitRate;
|
||||
}
|
||||
|
||||
public int getAudioBitRate() {
|
||||
return audioBitRate;
|
||||
}
|
||||
|
||||
public void setAudioBitRate(int audioBitRate) {
|
||||
this.audioBitRate = audioBitRate;
|
||||
}
|
||||
|
||||
public int getMaxFps() {
|
||||
return maxFps;
|
||||
}
|
||||
|
||||
public void setMaxFps(int maxFps) {
|
||||
this.maxFps = maxFps;
|
||||
}
|
||||
|
||||
public int getLockVideoOrientation() {
|
||||
return lockVideoOrientation;
|
||||
}
|
||||
|
||||
public void setLockVideoOrientation(int lockVideoOrientation) {
|
||||
this.lockVideoOrientation = lockVideoOrientation;
|
||||
}
|
||||
|
||||
public boolean isTunnelForward() {
|
||||
return tunnelForward;
|
||||
}
|
||||
|
||||
public void setTunnelForward(boolean tunnelForward) {
|
||||
this.tunnelForward = tunnelForward;
|
||||
}
|
||||
|
||||
public Rect getCrop() {
|
||||
return crop;
|
||||
}
|
||||
|
||||
public void setCrop(Rect crop) {
|
||||
this.crop = crop;
|
||||
}
|
||||
|
||||
public boolean getControl() {
|
||||
return control;
|
||||
}
|
||||
|
||||
public void setControl(boolean control) {
|
||||
this.control = control;
|
||||
}
|
||||
|
||||
public int getDisplayId() {
|
||||
return displayId;
|
||||
}
|
||||
|
||||
public void setDisplayId(int displayId) {
|
||||
this.displayId = displayId;
|
||||
}
|
||||
|
||||
public boolean getShowTouches() {
|
||||
return showTouches;
|
||||
}
|
||||
|
||||
public void setShowTouches(boolean showTouches) {
|
||||
this.showTouches = showTouches;
|
||||
}
|
||||
|
||||
public boolean getStayAwake() {
|
||||
return stayAwake;
|
||||
}
|
||||
|
||||
public void setStayAwake(boolean stayAwake) {
|
||||
this.stayAwake = stayAwake;
|
||||
}
|
||||
|
||||
public List<CodecOption> getVideoCodecOptions() {
|
||||
return videoCodecOptions;
|
||||
}
|
||||
|
||||
public void setVideoCodecOptions(List<CodecOption> videoCodecOptions) {
|
||||
this.videoCodecOptions = videoCodecOptions;
|
||||
}
|
||||
|
||||
public List<CodecOption> getAudioCodecOptions() {
|
||||
return audioCodecOptions;
|
||||
}
|
||||
|
||||
public void setAudioCodecOptions(List<CodecOption> audioCodecOptions) {
|
||||
this.audioCodecOptions = audioCodecOptions;
|
||||
}
|
||||
|
||||
public String getVideoEncoder() {
|
||||
return videoEncoder;
|
||||
}
|
||||
|
||||
public void setVideoEncoder(String videoEncoder) {
|
||||
this.videoEncoder = videoEncoder;
|
||||
}
|
||||
|
||||
public String getAudioEncoder() {
|
||||
return audioEncoder;
|
||||
}
|
||||
|
||||
public void setAudioEncoder(String audioEncoder) {
|
||||
this.audioEncoder = audioEncoder;
|
||||
}
|
||||
|
||||
public void setPowerOffScreenOnClose(boolean powerOffScreenOnClose) {
|
||||
this.powerOffScreenOnClose = powerOffScreenOnClose;
|
||||
}
|
||||
|
||||
public boolean getPowerOffScreenOnClose() {
|
||||
return this.powerOffScreenOnClose;
|
||||
}
|
||||
@ -214,79 +141,214 @@ public class Options {
|
||||
return clipboardAutosync;
|
||||
}
|
||||
|
||||
public void setClipboardAutosync(boolean clipboardAutosync) {
|
||||
this.clipboardAutosync = clipboardAutosync;
|
||||
}
|
||||
|
||||
public boolean getDownsizeOnError() {
|
||||
return downsizeOnError;
|
||||
}
|
||||
|
||||
public void setDownsizeOnError(boolean downsizeOnError) {
|
||||
this.downsizeOnError = downsizeOnError;
|
||||
}
|
||||
|
||||
public boolean getCleanup() {
|
||||
return cleanup;
|
||||
}
|
||||
|
||||
public void setCleanup(boolean cleanup) {
|
||||
this.cleanup = cleanup;
|
||||
}
|
||||
|
||||
public boolean getPowerOn() {
|
||||
return powerOn;
|
||||
}
|
||||
|
||||
public void setPowerOn(boolean powerOn) {
|
||||
this.powerOn = powerOn;
|
||||
}
|
||||
|
||||
public boolean getListEncoders() {
|
||||
return listEncoders;
|
||||
}
|
||||
|
||||
public void setListEncoders(boolean listEncoders) {
|
||||
this.listEncoders = listEncoders;
|
||||
}
|
||||
|
||||
public boolean getListDisplays() {
|
||||
return listDisplays;
|
||||
}
|
||||
|
||||
public void setListDisplays(boolean listDisplays) {
|
||||
this.listDisplays = listDisplays;
|
||||
}
|
||||
|
||||
public boolean getSendDeviceMeta() {
|
||||
return sendDeviceMeta;
|
||||
}
|
||||
|
||||
public void setSendDeviceMeta(boolean sendDeviceMeta) {
|
||||
this.sendDeviceMeta = sendDeviceMeta;
|
||||
}
|
||||
|
||||
public boolean getSendFrameMeta() {
|
||||
return sendFrameMeta;
|
||||
}
|
||||
|
||||
public void setSendFrameMeta(boolean sendFrameMeta) {
|
||||
this.sendFrameMeta = sendFrameMeta;
|
||||
}
|
||||
|
||||
public boolean getSendDummyByte() {
|
||||
return sendDummyByte;
|
||||
}
|
||||
|
||||
public void setSendDummyByte(boolean sendDummyByte) {
|
||||
this.sendDummyByte = sendDummyByte;
|
||||
public boolean getSendCodecMeta() {
|
||||
return sendCodecMeta;
|
||||
}
|
||||
|
||||
public boolean getSendCodecId() {
|
||||
return sendCodecId;
|
||||
@SuppressWarnings("MethodLength")
|
||||
public static Options parse(String... args) {
|
||||
if (args.length < 1) {
|
||||
throw new IllegalArgumentException("Missing client version");
|
||||
}
|
||||
|
||||
String clientVersion = args[0];
|
||||
if (!clientVersion.equals(BuildConfig.VERSION_NAME)) {
|
||||
throw new IllegalArgumentException(
|
||||
"The server version (" + BuildConfig.VERSION_NAME + ") does not match the client " + "(" + clientVersion + ")");
|
||||
}
|
||||
|
||||
Options options = new Options();
|
||||
|
||||
for (int i = 1; i < args.length; ++i) {
|
||||
String arg = args[i];
|
||||
int equalIndex = arg.indexOf('=');
|
||||
if (equalIndex == -1) {
|
||||
throw new IllegalArgumentException("Invalid key=value pair: \"" + arg + "\"");
|
||||
}
|
||||
String key = arg.substring(0, equalIndex);
|
||||
String value = arg.substring(equalIndex + 1);
|
||||
switch (key) {
|
||||
case "scid":
|
||||
int scid = Integer.parseInt(value, 0x10);
|
||||
if (scid < -1) {
|
||||
throw new IllegalArgumentException("scid may not be negative (except -1 for 'none'): " + scid);
|
||||
}
|
||||
options.scid = scid;
|
||||
break;
|
||||
case "log_level":
|
||||
options.logLevel = Ln.Level.valueOf(value.toUpperCase(Locale.ENGLISH));
|
||||
break;
|
||||
case "video":
|
||||
options.video = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "audio":
|
||||
options.audio = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "video_codec":
|
||||
VideoCodec videoCodec = VideoCodec.findByName(value);
|
||||
if (videoCodec == null) {
|
||||
throw new IllegalArgumentException("Video codec " + value + " not supported");
|
||||
}
|
||||
options.videoCodec = videoCodec;
|
||||
break;
|
||||
case "audio_codec":
|
||||
AudioCodec audioCodec = AudioCodec.findByName(value);
|
||||
if (audioCodec == null) {
|
||||
throw new IllegalArgumentException("Audio codec " + value + " not supported");
|
||||
}
|
||||
options.audioCodec = audioCodec;
|
||||
break;
|
||||
case "audio_source":
|
||||
AudioSource audioSource = AudioSource.findByName(value);
|
||||
if (audioSource == null) {
|
||||
throw new IllegalArgumentException("Audio source " + value + " not supported");
|
||||
}
|
||||
options.audioSource = audioSource;
|
||||
break;
|
||||
case "max_size":
|
||||
options.maxSize = Integer.parseInt(value) & ~7; // multiple of 8
|
||||
break;
|
||||
case "video_bit_rate":
|
||||
options.videoBitRate = Integer.parseInt(value);
|
||||
break;
|
||||
case "audio_bit_rate":
|
||||
options.audioBitRate = Integer.parseInt(value);
|
||||
break;
|
||||
case "max_fps":
|
||||
options.maxFps = Integer.parseInt(value);
|
||||
break;
|
||||
case "lock_video_orientation":
|
||||
options.lockVideoOrientation = Integer.parseInt(value);
|
||||
break;
|
||||
case "tunnel_forward":
|
||||
options.tunnelForward = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "crop":
|
||||
options.crop = parseCrop(value);
|
||||
break;
|
||||
case "control":
|
||||
options.control = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "display_id":
|
||||
options.displayId = Integer.parseInt(value);
|
||||
break;
|
||||
case "show_touches":
|
||||
options.showTouches = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "stay_awake":
|
||||
options.stayAwake = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "video_codec_options":
|
||||
options.videoCodecOptions = CodecOption.parse(value);
|
||||
break;
|
||||
case "audio_codec_options":
|
||||
options.audioCodecOptions = CodecOption.parse(value);
|
||||
break;
|
||||
case "video_encoder":
|
||||
if (!value.isEmpty()) {
|
||||
options.videoEncoder = value;
|
||||
}
|
||||
break;
|
||||
case "audio_encoder":
|
||||
if (!value.isEmpty()) {
|
||||
options.audioEncoder = value;
|
||||
}
|
||||
case "power_off_on_close":
|
||||
options.powerOffScreenOnClose = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "clipboard_autosync":
|
||||
options.clipboardAutosync = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "downsize_on_error":
|
||||
options.downsizeOnError = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "cleanup":
|
||||
options.cleanup = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "power_on":
|
||||
options.powerOn = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "list_encoders":
|
||||
options.listEncoders = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "list_displays":
|
||||
options.listDisplays = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "send_device_meta":
|
||||
options.sendDeviceMeta = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "send_frame_meta":
|
||||
options.sendFrameMeta = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "send_dummy_byte":
|
||||
options.sendDummyByte = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "send_codec_meta":
|
||||
options.sendCodecMeta = Boolean.parseBoolean(value);
|
||||
break;
|
||||
case "raw_stream":
|
||||
boolean rawStream = Boolean.parseBoolean(value);
|
||||
if (rawStream) {
|
||||
options.sendDeviceMeta = false;
|
||||
options.sendFrameMeta = false;
|
||||
options.sendDummyByte = false;
|
||||
options.sendCodecMeta = false;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
Ln.w("Unknown server option: " + key);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return options;
|
||||
}
|
||||
|
||||
public void setSendCodecId(boolean sendCodecId) {
|
||||
this.sendCodecId = sendCodecId;
|
||||
private static Rect parseCrop(String crop) {
|
||||
if (crop.isEmpty()) {
|
||||
return null;
|
||||
}
|
||||
// input format: "width:height:x:y"
|
||||
String[] tokens = crop.split(":");
|
||||
if (tokens.length != 4) {
|
||||
throw new IllegalArgumentException("Crop must contain 4 values separated by colons: \"" + crop + "\"");
|
||||
}
|
||||
int width = Integer.parseInt(tokens[0]);
|
||||
int height = Integer.parseInt(tokens[1]);
|
||||
int x = Integer.parseInt(tokens[2]);
|
||||
int y = Integer.parseInt(tokens[3]);
|
||||
return new Rect(x, y, x + width, y + height);
|
||||
}
|
||||
}
|
||||
|
@ -16,7 +16,7 @@ import java.nio.ByteBuffer;
|
||||
import java.util.List;
|
||||
import java.util.concurrent.atomic.AtomicBoolean;
|
||||
|
||||
public class ScreenEncoder implements Device.RotationListener {
|
||||
public class ScreenEncoder implements Device.RotationListener, Device.FoldListener, AsyncProcessor {
|
||||
|
||||
private static final int DEFAULT_I_FRAME_INTERVAL = 10; // seconds
|
||||
private static final int REPEAT_FRAME_DELAY_US = 100_000; // repeat after 100ms
|
||||
@ -26,7 +26,7 @@ public class ScreenEncoder implements Device.RotationListener {
|
||||
private static final int[] MAX_SIZE_FALLBACK = {2560, 1920, 1600, 1280, 1024, 800};
|
||||
private static final int MAX_CONSECUTIVE_ERRORS = 3;
|
||||
|
||||
private final AtomicBoolean rotationChanged = new AtomicBoolean();
|
||||
private final AtomicBoolean resetCapture = new AtomicBoolean();
|
||||
|
||||
private final Device device;
|
||||
private final Streamer streamer;
|
||||
@ -39,6 +39,9 @@ public class ScreenEncoder implements Device.RotationListener {
|
||||
private boolean firstFrameSent;
|
||||
private int consecutiveErrors;
|
||||
|
||||
private Thread thread;
|
||||
private final AtomicBoolean stopped = new AtomicBoolean();
|
||||
|
||||
public ScreenEncoder(Device device, Streamer streamer, int videoBitRate, int maxFps, List<CodecOption> codecOptions, String encoderName,
|
||||
boolean downsizeOnError) {
|
||||
this.device = device;
|
||||
@ -50,23 +53,29 @@ public class ScreenEncoder implements Device.RotationListener {
|
||||
this.downsizeOnError = downsizeOnError;
|
||||
}
|
||||
|
||||
    @Override
    public void onFoldChanged(int displayId, boolean folded) {
        resetCapture.set(true);
    }

    @Override
    public void onRotationChanged(int rotation) {
        rotationChanged.set(true);
        resetCapture.set(true);
    }

    public boolean consumeRotationChange() {
        return rotationChanged.getAndSet(false);
    private boolean consumeResetCapture() {
        return resetCapture.getAndSet(false);
    }

    public void streamScreen() throws IOException, ConfigurationException {
    private void streamScreen() throws IOException, ConfigurationException {
        Codec codec = streamer.getCodec();
        MediaCodec mediaCodec = createMediaCodec(codec, encoderName);
        MediaFormat format = createFormat(codec.getMimeType(), videoBitRate, maxFps, codecOptions);
        IBinder display = createDisplay();
        device.setRotationListener(this);
        device.setFoldListener(this);

        streamer.writeHeader();
        streamer.writeVideoHeader(device.getScreenInfo().getVideoSize());

        boolean alive;
        try {
@ -112,6 +121,7 @@ public class ScreenEncoder implements Device.RotationListener {
        } finally {
            mediaCodec.release();
            device.setRotationListener(null);
            device.setFoldListener(null);
            SurfaceControl.destroyDisplay(display);
        }
    }
@ -163,12 +173,17 @@ public class ScreenEncoder implements Device.RotationListener {

    private boolean encode(MediaCodec codec, Streamer streamer) throws IOException {
        boolean eof = false;
        boolean alive = true;
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

        while (!consumeRotationChange() && !eof) {
        while (!consumeResetCapture() && !eof) {
            if (stopped.get()) {
                alive = false;
                break;
            }
            int outputBufferId = codec.dequeueOutputBuffer(bufferInfo, -1);
            try {
                if (consumeRotationChange()) {
                if (consumeResetCapture()) {
                    // must restart encoding with new size
                    break;
                }
@ -193,7 +208,7 @@ public class ScreenEncoder implements Device.RotationListener {
            }
        }

        return !eof;
        return !eof && alive;
    }

    private static MediaCodec createMediaCodec(Codec codec, String encoderName) throws IOException, ConfigurationException {
@ -202,13 +217,22 @@ public class ScreenEncoder implements Device.RotationListener {
            try {
                return MediaCodec.createByCodecName(encoderName);
            } catch (IllegalArgumentException e) {
                Ln.e("Encoder '" + encoderName + "' for " + codec.getName() + " not found\n" + LogUtils.buildVideoEncoderListMessage());
                Ln.e("Video encoder '" + encoderName + "' for " + codec.getName() + " not found\n" + LogUtils.buildVideoEncoderListMessage());
                throw new ConfigurationException("Unknown encoder: " + encoderName);
            } catch (IOException e) {
                Ln.e("Could not create video encoder '" + encoderName + "' for " + codec.getName() + "\n" + LogUtils.buildVideoEncoderListMessage());
                throw e;
            }
        }
        MediaCodec mediaCodec = MediaCodec.createEncoderByType(codec.getMimeType());
        Ln.d("Using encoder: '" + mediaCodec.getName() + "'");
        return mediaCodec;

        try {
            MediaCodec mediaCodec = MediaCodec.createEncoderByType(codec.getMimeType());
            Ln.d("Using video encoder: '" + mediaCodec.getName() + "'");
            return mediaCodec;
        } catch (IOException | IllegalArgumentException e) {
            Ln.e("Could not create default video encoder for " + codec.getName() + "\n" + LogUtils.buildVideoEncoderListMessage());
            throw e;
        }
    }

    private static MediaFormat createFormat(String videoMimeType, int bitRate, int maxFps, List<CodecOption> codecOptions) {
@ -258,4 +282,38 @@ public class ScreenEncoder implements Device.RotationListener {
            SurfaceControl.closeTransaction();
        }
    }

    @Override
    public void start(TerminationListener listener) {
        thread = new Thread(() -> {
            try {
                streamScreen();
            } catch (ConfigurationException e) {
                // Do not print stack trace, a user-friendly error-message has already been logged
            } catch (IOException e) {
                // Broken pipe is expected on close, because the socket is closed by the client
                if (!IO.isBrokenPipe(e)) {
                    Ln.e("Video encoding error", e);
                }
            } finally {
                Ln.d("Screen streaming stopped");
                listener.onTerminated(true);
            }
        }, "video");
        thread.start();
    }

    @Override
    public void stop() {
        if (thread != null) {
            stopped.set(true);
        }
    }

    @Override
    public void join() throws InterruptedException {
        if (thread != null) {
            thread.join();
        }
    }
}
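The three `@Override` methods appended at the end of this hunk are what let `Server` drive the screen encoder like any other async processor: `start()` spawns a named thread that runs the blocking capture loop and always reports back through the termination callback, `stop()` only raises a flag that the loop polls at safe points (before `dequeueOutputBuffer()`), and `join()` waits for the thread to end. A minimal standalone sketch of that lifecycle, with hypothetical `Worker`/`TerminationListener` names rather than the scrcpy types:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical illustration of the start/stop/join lifecycle used above.
public class Worker {

    public interface TerminationListener {
        // mirrors the boolean passed to onTerminated() in the diff
        void onTerminated(boolean fatalError);
    }

    private Thread thread;
    private final AtomicBoolean stopped = new AtomicBoolean();

    public void start(TerminationListener listener) {
        thread = new Thread(() -> {
            boolean fatalError = false;
            try {
                run(); // the actual work, e.g. an encoding loop
            } catch (Exception e) {
                fatalError = true;
            } finally {
                // Always notify the owner, whatever the reason for stopping
                listener.onTerminated(fatalError);
            }
        }, "worker");
        thread.start();
    }

    private void run() throws InterruptedException {
        while (!stopped.get()) {
            Thread.sleep(10); // placeholder for real work
        }
    }

    public void stop() {
        if (thread != null) {
            stopped.set(true); // cooperative stop: the loop checks the flag
        }
    }

    public void join() throws InterruptedException {
        if (thread != null) {
            thread.join();
        }
    }
}
```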
@ -1,16 +1,43 @@
package com.genymobile.scrcpy;

import android.graphics.Rect;
import android.os.BatteryManager;
import android.os.Build;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public final class Server {

    private static class Completion {
        private int running;
        private boolean fatalError;

        Completion(int running) {
            this.running = running;
        }

        synchronized void addCompleted(boolean fatalError) {
            --running;
            if (fatalError) {
                this.fatalError = true;
            }
            if (running == 0 || this.fatalError) {
                notify();
            }
        }

        synchronized void await() {
            try {
                while (running > 0 && !fatalError) {
                    wait();
                }
            } catch (InterruptedException e) {
                // ignore
            }
        }
    }

    private Server() {
        // not instantiable
    }
@ -60,7 +87,7 @@ public final class Server {
    }

    private static void scrcpy(Options options) throws IOException, ConfigurationException {
        Ln.i("Device: " + Build.MANUFACTURER + " " + Build.MODEL + " (Android " + Build.VERSION.RELEASE + ")");
        Ln.i("Device: [" + Build.MANUFACTURER + "] " + Build.BRAND + " " + Build.MODEL + " (Android " + Build.VERSION.RELEASE + ")");
        final Device device = new Device(options);

        Thread initThread = startInitThread(options);
@ -68,36 +95,18 @@ public final class Server {
        int scid = options.getScid();
        boolean tunnelForward = options.isTunnelForward();
        boolean control = options.getControl();
        boolean video = options.getVideo();
        boolean audio = options.getAudio();
        boolean sendDummyByte = options.getSendDummyByte();

        Workarounds.prepareMainLooper();

        // Workarounds must be applied for Meizu phones:
        // - <https://github.com/Genymobile/scrcpy/issues/240>
        // - <https://github.com/Genymobile/scrcpy/issues/365>
        // - <https://github.com/Genymobile/scrcpy/issues/2656>
        //
        // But only apply when strictly necessary, since workarounds can cause other issues:
        // - <https://github.com/Genymobile/scrcpy/issues/940>
        // - <https://github.com/Genymobile/scrcpy/issues/994>
        boolean mustFillAppInfo = Build.BRAND.equalsIgnoreCase("meizu");

        // Before Android 11, audio is not supported.
        // Since Android 12, we can properly set a context on the AudioRecord.
        // Only on Android 11 we must fill app info for the AudioRecord to work.
        mustFillAppInfo |= audio && Build.VERSION.SDK_INT == Build.VERSION_CODES.R;

        if (mustFillAppInfo) {
            Workarounds.fillAppInfo();
        }
        Workarounds.apply(audio);

        List<AsyncProcessor> asyncProcessors = new ArrayList<>();

        try (DesktopConnection connection = DesktopConnection.open(scid, tunnelForward, audio, control, sendDummyByte)) {
        DesktopConnection connection = DesktopConnection.open(scid, tunnelForward, video, audio, control, sendDummyByte);
        try {
            if (options.getSendDeviceMeta()) {
                Size videoSize = device.getScreenInfo().getVideoSize();
                connection.sendDeviceMeta(Device.getDeviceName(), videoSize.getWidth(), videoSize.getHeight());
                connection.sendDeviceMeta(Device.getDeviceName());
            }

            if (control) {
@ -108,38 +117,35 @@ public final class Server {

            if (audio) {
                AudioCodec audioCodec = options.getAudioCodec();
                Streamer audioStreamer = new Streamer(connection.getAudioFd(), audioCodec, options.getSendCodecId(),
                        options.getSendFrameMeta());
                AudioCapture audioCapture = new AudioCapture(options.getAudioSource());
                Streamer audioStreamer = new Streamer(connection.getAudioFd(), audioCodec, options.getSendCodecMeta(), options.getSendFrameMeta());
                AsyncProcessor audioRecorder;
                if (audioCodec == AudioCodec.RAW) {
                    audioRecorder = new AudioRawRecorder(audioStreamer);
                    audioRecorder = new AudioRawRecorder(audioCapture, audioStreamer);
                } else {
                    audioRecorder = new AudioEncoder(audioStreamer, options.getAudioBitRate(), options.getAudioCodecOptions(),
                    audioRecorder = new AudioEncoder(audioCapture, audioStreamer, options.getAudioBitRate(), options.getAudioCodecOptions(),
                            options.getAudioEncoder());
                }
                asyncProcessors.add(audioRecorder);
            }

            Streamer videoStreamer = new Streamer(connection.getVideoFd(), options.getVideoCodec(), options.getSendCodecId(),
                    options.getSendFrameMeta());
            ScreenEncoder screenEncoder = new ScreenEncoder(device, videoStreamer, options.getVideoBitRate(), options.getMaxFps(),
                    options.getVideoCodecOptions(), options.getVideoEncoder(), options.getDownsizeOnError());
            if (video) {
                Streamer videoStreamer = new Streamer(connection.getVideoFd(), options.getVideoCodec(), options.getSendCodecMeta(),
                        options.getSendFrameMeta());
                ScreenEncoder screenEncoder = new ScreenEncoder(device, videoStreamer, options.getVideoBitRate(), options.getMaxFps(),
                        options.getVideoCodecOptions(), options.getVideoEncoder(), options.getDownsizeOnError());
                asyncProcessors.add(screenEncoder);
            }

            Completion completion = new Completion(asyncProcessors.size());
            for (AsyncProcessor asyncProcessor : asyncProcessors) {
                asyncProcessor.start();
                asyncProcessor.start((fatalError) -> {
                    completion.addCompleted(fatalError);
                });
            }

            try {
                // synchronous
                screenEncoder.streamScreen();
            } catch (IOException e) {
                // Broken pipe is expected on close, because the socket is closed by the client
                if (!IO.isBrokenPipe(e)) {
                    Ln.e("Video encoding error", e);
                }
            }
            completion.await();
        } finally {
            Ln.d("Screen streaming stopped");
            initThread.interrupt();
            for (AsyncProcessor asyncProcessor : asyncProcessors) {
                asyncProcessor.stop();
@ -153,212 +159,23 @@ public final class Server {
            } catch (InterruptedException e) {
                // ignore
            }

            connection.close();
        }
    }

    private static Thread startInitThread(final Options options) {
        Thread thread = new Thread(() -> initAndCleanUp(options));
        Thread thread = new Thread(() -> initAndCleanUp(options), "init-cleanup");
        thread.start();
        return thread;
    }

    @SuppressWarnings("MethodLength")
    private static Options createOptions(String... args) {
        if (args.length < 1) {
            throw new IllegalArgumentException("Missing client version");
        }

        String clientVersion = args[0];
        if (!clientVersion.equals(BuildConfig.VERSION_NAME)) {
            throw new IllegalArgumentException(
                    "The server version (" + BuildConfig.VERSION_NAME + ") does not match the client " + "(" + clientVersion + ")");
        }

        Options options = new Options();

        for (int i = 1; i < args.length; ++i) {
            String arg = args[i];
            int equalIndex = arg.indexOf('=');
            if (equalIndex == -1) {
                throw new IllegalArgumentException("Invalid key=value pair: \"" + arg + "\"");
            }
            String key = arg.substring(0, equalIndex);
            String value = arg.substring(equalIndex + 1);
            switch (key) {
                case "scid":
                    int scid = Integer.parseInt(value, 0x10);
                    if (scid < -1) {
                        throw new IllegalArgumentException("scid may not be negative (except -1 for 'none'): " + scid);
                    }
                    options.setScid(scid);
                    break;
                case "log_level":
                    Ln.Level level = Ln.Level.valueOf(value.toUpperCase(Locale.ENGLISH));
                    options.setLogLevel(level);
                    break;
                case "audio":
                    boolean audio = Boolean.parseBoolean(value);
                    options.setAudio(audio);
                    break;
                case "video_codec":
                    VideoCodec videoCodec = VideoCodec.findByName(value);
                    if (videoCodec == null) {
                        throw new IllegalArgumentException("Video codec " + value + " not supported");
                    }
                    options.setVideoCodec(videoCodec);
                    break;
                case "audio_codec":
                    AudioCodec audioCodec = AudioCodec.findByName(value);
                    if (audioCodec == null) {
                        throw new IllegalArgumentException("Audio codec " + value + " not supported");
                    }
                    options.setAudioCodec(audioCodec);
                    break;
                case "max_size":
                    int maxSize = Integer.parseInt(value) & ~7; // multiple of 8
                    options.setMaxSize(maxSize);
                    break;
                case "video_bit_rate":
                    int videoBitRate = Integer.parseInt(value);
                    options.setVideoBitRate(videoBitRate);
                    break;
                case "audio_bit_rate":
                    int audioBitRate = Integer.parseInt(value);
                    options.setAudioBitRate(audioBitRate);
                    break;
                case "max_fps":
                    int maxFps = Integer.parseInt(value);
                    options.setMaxFps(maxFps);
                    break;
                case "lock_video_orientation":
                    int lockVideoOrientation = Integer.parseInt(value);
                    options.setLockVideoOrientation(lockVideoOrientation);
                    break;
                case "tunnel_forward":
                    boolean tunnelForward = Boolean.parseBoolean(value);
                    options.setTunnelForward(tunnelForward);
                    break;
                case "crop":
                    Rect crop = parseCrop(value);
                    options.setCrop(crop);
                    break;
                case "control":
                    boolean control = Boolean.parseBoolean(value);
                    options.setControl(control);
                    break;
                case "display_id":
                    int displayId = Integer.parseInt(value);
                    options.setDisplayId(displayId);
                    break;
                case "show_touches":
                    boolean showTouches = Boolean.parseBoolean(value);
                    options.setShowTouches(showTouches);
                    break;
                case "stay_awake":
                    boolean stayAwake = Boolean.parseBoolean(value);
                    options.setStayAwake(stayAwake);
                    break;
                case "video_codec_options":
                    List<CodecOption> videoCodecOptions = CodecOption.parse(value);
                    options.setVideoCodecOptions(videoCodecOptions);
                    break;
                case "audio_codec_options":
                    List<CodecOption> audioCodecOptions = CodecOption.parse(value);
                    options.setAudioCodecOptions(audioCodecOptions);
                    break;
                case "video_encoder":
                    if (!value.isEmpty()) {
                        options.setVideoEncoder(value);
                    }
                    break;
                case "audio_encoder":
                    if (!value.isEmpty()) {
                        options.setAudioEncoder(value);
                    }
                case "power_off_on_close":
                    boolean powerOffScreenOnClose = Boolean.parseBoolean(value);
                    options.setPowerOffScreenOnClose(powerOffScreenOnClose);
                    break;
                case "clipboard_autosync":
                    boolean clipboardAutosync = Boolean.parseBoolean(value);
                    options.setClipboardAutosync(clipboardAutosync);
                    break;
                case "downsize_on_error":
                    boolean downsizeOnError = Boolean.parseBoolean(value);
                    options.setDownsizeOnError(downsizeOnError);
                    break;
                case "cleanup":
                    boolean cleanup = Boolean.parseBoolean(value);
                    options.setCleanup(cleanup);
                    break;
                case "power_on":
                    boolean powerOn = Boolean.parseBoolean(value);
                    options.setPowerOn(powerOn);
                    break;
                case "list_encoders":
                    boolean listEncoders = Boolean.parseBoolean(value);
                    options.setListEncoders(listEncoders);
                    break;
                case "list_displays":
                    boolean listDisplays = Boolean.parseBoolean(value);
                    options.setListDisplays(listDisplays);
                    break;
                case "send_device_meta":
                    boolean sendDeviceMeta = Boolean.parseBoolean(value);
                    options.setSendDeviceMeta(sendDeviceMeta);
                    break;
                case "send_frame_meta":
                    boolean sendFrameMeta = Boolean.parseBoolean(value);
                    options.setSendFrameMeta(sendFrameMeta);
                    break;
                case "send_dummy_byte":
                    boolean sendDummyByte = Boolean.parseBoolean(value);
                    options.setSendDummyByte(sendDummyByte);
                    break;
                case "send_codec_id":
                    boolean sendCodecId = Boolean.parseBoolean(value);
                    options.setSendCodecId(sendCodecId);
                    break;
                case "raw_video_stream":
                    boolean rawVideoStream = Boolean.parseBoolean(value);
                    if (rawVideoStream) {
                        options.setSendDeviceMeta(false);
                        options.setSendFrameMeta(false);
                        options.setSendDummyByte(false);
                        options.setSendCodecId(false);
                    }
                    break;
                default:
                    Ln.w("Unknown server option: " + key);
                    break;
            }
        }

        return options;
    }

    private static Rect parseCrop(String crop) {
        if (crop.isEmpty()) {
            return null;
        }
        // input format: "width:height:x:y"
        String[] tokens = crop.split(":");
        if (tokens.length != 4) {
            throw new IllegalArgumentException("Crop must contains 4 values separated by colons: \"" + crop + "\"");
        }
        int width = Integer.parseInt(tokens[0]);
        int height = Integer.parseInt(tokens[1]);
        int x = Integer.parseInt(tokens[2]);
        int y = Integer.parseInt(tokens[3]);
        return new Rect(x, y, x + width, y + height);
    }

    public static void main(String... args) throws Exception {
        Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
            Ln.e("Exception on thread " + t, e);
        });

        Options options = createOptions(args);
        Options options = Options.parse(args);

        Ln.initLogLevel(options.getLogLevel());
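With video, audio and control each running on their own thread, `scrcpy()` no longer blocks on `screenEncoder.streamScreen()`: it starts every `AsyncProcessor` with a callback and then parks on the `Completion` latch defined at the top of this file, which releases once every processor has terminated or as soon as one reports a fatal error. A self-contained sketch of that latch in isolation (illustrative demo code, not the scrcpy API):

```java
// Illustrative only: a "wait for all, or for the first fatal error" latch, as used above.
public class CompletionDemo {

    static class Completion {
        private int running;
        private boolean fatalError;

        Completion(int running) {
            this.running = running;
        }

        synchronized void addCompleted(boolean fatalError) {
            --running;
            if (fatalError) {
                this.fatalError = true;
            }
            if (running == 0 || this.fatalError) {
                notifyAll();
            }
        }

        synchronized void await() throws InterruptedException {
            while (running > 0 && !fatalError) {
                wait();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        Completion completion = new Completion(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    Thread.sleep(100L * (id + 1)); // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completion.addCompleted(false); // report normal termination
            }).start();
        }
        completion.await(); // returns once all workers are done (or one fails)
        System.out.println("all workers terminated");
    }
}
```

The version in the diff uses `notify()` and swallows `InterruptedException`, which is enough there because only the main `scrcpy()` thread ever waits on the latch; the sketch uses `notifyAll()` and propagates interruption to stay generally safe.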
@ -15,24 +15,23 @@ public final class Streamer {

    private final FileDescriptor fd;
    private final Codec codec;
    private final boolean sendCodecId;
    private final boolean sendCodecMeta;
    private final boolean sendFrameMeta;

    private final ByteBuffer headerBuffer = ByteBuffer.allocate(12);

    public Streamer(FileDescriptor fd, Codec codec, boolean sendCodecId, boolean sendFrameMeta) {
    public Streamer(FileDescriptor fd, Codec codec, boolean sendCodecMeta, boolean sendFrameMeta) {
        this.fd = fd;
        this.codec = codec;
        this.sendCodecId = sendCodecId;
        this.sendCodecMeta = sendCodecMeta;
        this.sendFrameMeta = sendFrameMeta;
    }

    public Codec getCodec() {
        return codec;
    }

    public void writeHeader() throws IOException {
        if (sendCodecId) {
    public void writeAudioHeader() throws IOException {
        if (sendCodecMeta) {
            ByteBuffer buffer = ByteBuffer.allocate(4);
            buffer.putInt(codec.getId());
            buffer.flip();
@ -40,6 +39,17 @@ public final class Streamer {
        }
    }

    public void writeVideoHeader(Size videoSize) throws IOException {
        if (sendCodecMeta) {
            ByteBuffer buffer = ByteBuffer.allocate(12);
            buffer.putInt(codec.getId());
            buffer.putInt(videoSize.getWidth());
            buffer.putInt(videoSize.getHeight());
            buffer.flip();
            IO.writeFully(fd, buffer);
        }
    }

    public void writeDisableStream(boolean error) throws IOException {
        // Writing a specific code as codec-id means that the device disables the stream
        // code 0: it explicitly disables the stream (because it could not capture audio), scrcpy should continue mirroring video only
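Splitting `writeHeader()` into `writeAudioHeader()` and `writeVideoHeader()` also pins down the wire format: when codec metadata is enabled, the video socket now starts with a 12-byte header (a 32-bit codec id, then the initial width and height, in big-endian order as per `ByteBuffer`'s default), while the audio socket keeps a 4-byte codec id only. A hedged sketch of how a receiving side might parse the video variant (helper names are illustrative, not part of scrcpy):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative parser for the 12-byte header written by writeVideoHeader() above
// (assumes codec metadata is enabled; otherwise the raw stream starts immediately).
final class VideoHeader {
    final int codecId;
    final int width;
    final int height;

    private VideoHeader(int codecId, int width, int height) {
        this.codecId = codecId;
        this.width = width;
        this.height = height;
    }

    static VideoHeader read(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        int codecId = dis.readInt(); // 4 bytes, big-endian
        int width = dis.readInt();   // 4 bytes
        int height = dis.readInt();  // 4 bytes
        return new VideoHeader(codecId, width, height);
    }

    @Override
    public String toString() {
        return String.format("codec=0x%08x size=%dx%d", codecId, width, height);
    }
}
```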
@ -1,21 +1,83 @@
package com.genymobile.scrcpy;

import android.annotation.SuppressLint;
import android.annotation.TargetApi;
import android.app.Application;
import android.content.AttributionSource;
import android.content.Context;
import android.content.ContextWrapper;
import android.content.pm.ApplicationInfo;
import android.media.AudioAttributes;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.os.Build;
import android.os.Looper;
import android.os.Parcel;

import java.lang.ref.WeakReference;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public final class Workarounds {

    private static Class<?> activityThreadClass;
    private static Object activityThread;

    private Workarounds() {
        // not instantiable
    }

    public static void apply(boolean audio) {
        Workarounds.prepareMainLooper();

        boolean mustFillAppInfo = false;
        boolean mustFillBaseContext = false;
        boolean mustFillAppContext = false;

        if (Build.BRAND.equalsIgnoreCase("meizu")) {
            // Workarounds must be applied for Meizu phones:
            // - <https://github.com/Genymobile/scrcpy/issues/240>
            // - <https://github.com/Genymobile/scrcpy/issues/365>
            // - <https://github.com/Genymobile/scrcpy/issues/2656>
            //
            // But only apply when strictly necessary, since workarounds can cause other issues:
            // - <https://github.com/Genymobile/scrcpy/issues/940>
            // - <https://github.com/Genymobile/scrcpy/issues/994>
            mustFillAppInfo = true;
        } else if (Build.BRAND.equalsIgnoreCase("honor")) {
            // More workarounds must be applied for Honor devices:
            // - <https://github.com/Genymobile/scrcpy/issues/4015>
            //
            // The system context must not be set for all devices, because it would cause other problems:
            // - <https://github.com/Genymobile/scrcpy/issues/4015#issuecomment-1595382142>
            // - <https://github.com/Genymobile/scrcpy/issues/3805#issuecomment-1596148031>
            mustFillAppInfo = true;
            mustFillBaseContext = true;
            mustFillAppContext = true;
        }

        if (audio && Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
            // Before Android 11, audio is not supported.
            // Since Android 12, we can properly set a context on the AudioRecord.
            // Only on Android 11 we must fill the application context for the AudioRecord to work.
            mustFillAppContext = true;
        }

        if (mustFillAppInfo) {
            Workarounds.fillAppInfo();
        }
        if (mustFillBaseContext) {
            Workarounds.fillBaseContext();
        }
        if (mustFillAppContext) {
            Workarounds.fillAppContext();
        }
    }

    @SuppressWarnings("deprecation")
    public static void prepareMainLooper() {
    private static void prepareMainLooper() {
        // Some devices internally create a Handler when creating an input Surface, causing an exception:
        //   "Can't create handler inside thread that has not called Looper.prepare()"
        // <https://github.com/Genymobile/scrcpy/issues/240>
@ -28,18 +90,25 @@ public final class Workarounds {
    }

    @SuppressLint("PrivateApi,DiscouragedPrivateApi")
    public static void fillAppInfo() {
        try {
    private static void fillActivityThread() throws Exception {
        if (activityThread == null) {
            // ActivityThread activityThread = new ActivityThread();
            Class<?> activityThreadClass = Class.forName("android.app.ActivityThread");
            activityThreadClass = Class.forName("android.app.ActivityThread");
            Constructor<?> activityThreadConstructor = activityThreadClass.getDeclaredConstructor();
            activityThreadConstructor.setAccessible(true);
            Object activityThread = activityThreadConstructor.newInstance();
            activityThread = activityThreadConstructor.newInstance();

            // ActivityThread.sCurrentActivityThread = activityThread;
            Field sCurrentActivityThreadField = activityThreadClass.getDeclaredField("sCurrentActivityThread");
            sCurrentActivityThreadField.setAccessible(true);
            sCurrentActivityThreadField.set(null, activityThread);
        }
    }

    @SuppressLint("PrivateApi,DiscouragedPrivateApi")
    private static void fillAppInfo() {
        try {
            fillActivityThread();

            // ActivityThread.AppBindData appBindData = new ActivityThread.AppBindData();
            Class<?> appBindDataClass = Class.forName("android.app.ActivityThread$AppBindData");
@ -59,6 +128,16 @@ public final class Workarounds {
            Field mBoundApplicationField = activityThreadClass.getDeclaredField("mBoundApplication");
            mBoundApplicationField.setAccessible(true);
            mBoundApplicationField.set(activityThread, appBindData);
        } catch (Throwable throwable) {
            // this is a workaround, so failing is not an error
            Ln.d("Could not fill app info: " + throwable.getMessage());
        }
    }

    @SuppressLint("PrivateApi,DiscouragedPrivateApi")
    private static void fillAppContext() {
        try {
            fillActivityThread();

            Application app = Application.class.newInstance();
            Field baseField = ContextWrapper.class.getDeclaredField("mBase");
@ -71,7 +150,156 @@ public final class Workarounds {
            mInitialApplicationField.set(activityThread, app);
        } catch (Throwable throwable) {
            // this is a workaround, so failing is not an error
            Ln.d("Could not fill app info: " + throwable.getMessage());
            Ln.d("Could not fill app context: " + throwable.getMessage());
        }
    }

    public static void fillBaseContext() {
        try {
            fillActivityThread();

            Method getSystemContextMethod = activityThreadClass.getDeclaredMethod("getSystemContext");
            Context context = (Context) getSystemContextMethod.invoke(activityThread);
            FakeContext.get().setBaseContext(context);
        } catch (Throwable throwable) {
            // this is a workaround, so failing is not an error
            Ln.d("Could not fill base context: " + throwable.getMessage());
        }
    }

    @TargetApi(Build.VERSION_CODES.R)
    @SuppressLint("WrongConstant,MissingPermission,BlockedPrivateApi,SoonBlockedPrivateApi,DiscouragedPrivateApi")
    public static AudioRecord createAudioRecord(int source, int sampleRate, int channelConfig, int channels, int channelMask, int encoding) {
        // Vivo (and maybe some other third-party ROMs) modified `AudioRecord`'s constructor, requiring `Context`s from real App environment.
        //
        // This method invokes the `AudioRecord(long nativeRecordInJavaObj)` constructor to create an empty `AudioRecord` instance, then uses
        // reflections to initialize it like the normal constructor do (or the `AudioRecord.Builder.build()` method do).
        // As a result, the modified code was not executed.
        try {
            // AudioRecord audioRecord = new AudioRecord(0L);
            Constructor<AudioRecord> audioRecordConstructor = AudioRecord.class.getDeclaredConstructor(long.class);
            audioRecordConstructor.setAccessible(true);
            AudioRecord audioRecord = audioRecordConstructor.newInstance(0L);

            // audioRecord.mRecordingState = RECORDSTATE_STOPPED;
            Field mRecordingStateField = AudioRecord.class.getDeclaredField("mRecordingState");
            mRecordingStateField.setAccessible(true);
            mRecordingStateField.set(audioRecord, AudioRecord.RECORDSTATE_STOPPED);

            Looper looper = Looper.myLooper();
            if (looper == null) {
                looper = Looper.getMainLooper();
            }

            // audioRecord.mInitializationLooper = looper;
            Field mInitializationLooperField = AudioRecord.class.getDeclaredField("mInitializationLooper");
            mInitializationLooperField.setAccessible(true);
            mInitializationLooperField.set(audioRecord, looper);

            // Create `AudioAttributes` with fixed capture preset
            int capturePreset = source;
            AudioAttributes.Builder audioAttributesBuilder = new AudioAttributes.Builder();
            Method setInternalCapturePresetMethod = AudioAttributes.Builder.class.getMethod("setInternalCapturePreset", int.class);
            setInternalCapturePresetMethod.invoke(audioAttributesBuilder, capturePreset);
            AudioAttributes attributes = audioAttributesBuilder.build();

            // audioRecord.mAudioAttributes = attributes;
            Field mAudioAttributesField = AudioRecord.class.getDeclaredField("mAudioAttributes");
            mAudioAttributesField.setAccessible(true);
            mAudioAttributesField.set(audioRecord, attributes);

            // audioRecord.audioParamCheck(capturePreset, sampleRate, encoding);
            Method audioParamCheckMethod = AudioRecord.class.getDeclaredMethod("audioParamCheck", int.class, int.class, int.class);
            audioParamCheckMethod.setAccessible(true);
            audioParamCheckMethod.invoke(audioRecord, capturePreset, sampleRate, encoding);

            // audioRecord.mChannelCount = channels
            Field mChannelCountField = AudioRecord.class.getDeclaredField("mChannelCount");
            mChannelCountField.setAccessible(true);
            mChannelCountField.set(audioRecord, channels);

            // audioRecord.mChannelMask = channelMask
            Field mChannelMaskField = AudioRecord.class.getDeclaredField("mChannelMask");
            mChannelMaskField.setAccessible(true);
            mChannelMaskField.set(audioRecord, channelMask);

            int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, encoding);
            int bufferSizeInBytes = minBufferSize * 8;

            // audioRecord.audioBuffSizeCheck(bufferSizeInBytes)
            Method audioBuffSizeCheckMethod = AudioRecord.class.getDeclaredMethod("audioBuffSizeCheck", int.class);
            audioBuffSizeCheckMethod.setAccessible(true);
            audioBuffSizeCheckMethod.invoke(audioRecord, bufferSizeInBytes);

            final int channelIndexMask = 0;

            int[] sampleRateArray = new int[]{sampleRate};
            int[] session = new int[]{AudioManager.AUDIO_SESSION_ID_GENERATE};

            int initResult;
            if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) {
                // private native final int native_setup(Object audiorecord_this,
                //     Object /*AudioAttributes*/ attributes,
                //     int[] sampleRate, int channelMask, int channelIndexMask, int audioFormat,
                //     int buffSizeInBytes, int[] sessionId, String opPackageName,
                //     long nativeRecordInJavaObj);
                Method nativeSetupMethod = AudioRecord.class.getDeclaredMethod("native_setup", Object.class, Object.class, int[].class, int.class,
                        int.class, int.class, int.class, int[].class, String.class, long.class);
                nativeSetupMethod.setAccessible(true);
                initResult = (int) nativeSetupMethod.invoke(audioRecord, new WeakReference<AudioRecord>(audioRecord), attributes, sampleRateArray,
                        channelMask, channelIndexMask, audioRecord.getAudioFormat(), bufferSizeInBytes, session, FakeContext.get().getOpPackageName(),
                        0L);
            } else {
                // Assume `context` is never `null`
                AttributionSource attributionSource = FakeContext.get().getAttributionSource();

                // Assume `attributionSource.getPackageName()` is never null

                // ScopedParcelState attributionSourceState = attributionSource.asScopedParcelState()
                Method asScopedParcelStateMethod = AttributionSource.class.getDeclaredMethod("asScopedParcelState");
                asScopedParcelStateMethod.setAccessible(true);

                try (AutoCloseable attributionSourceState = (AutoCloseable) asScopedParcelStateMethod.invoke(attributionSource)) {
                    Method getParcelMethod = attributionSourceState.getClass().getDeclaredMethod("getParcel");
                    Parcel attributionSourceParcel = (Parcel) getParcelMethod.invoke(attributionSourceState);

                    // private native int native_setup(Object audiorecordThis,
                    //     Object /*AudioAttributes*/ attributes,
                    //     int[] sampleRate, int channelMask, int channelIndexMask, int audioFormat,
                    //     int buffSizeInBytes, int[] sessionId, @NonNull Parcel attributionSource,
                    //     long nativeRecordInJavaObj, int maxSharedAudioHistoryMs);
                    Method nativeSetupMethod = AudioRecord.class.getDeclaredMethod("native_setup", Object.class, Object.class, int[].class, int.class,
                            int.class, int.class, int.class, int[].class, Parcel.class, long.class, int.class);
                    nativeSetupMethod.setAccessible(true);
                    initResult = (int) nativeSetupMethod.invoke(audioRecord, new WeakReference<AudioRecord>(audioRecord), attributes, sampleRateArray,
                            channelMask, channelIndexMask, audioRecord.getAudioFormat(), bufferSizeInBytes, session, attributionSourceParcel, 0L, 0);
                }
            }

            if (initResult != AudioRecord.SUCCESS) {
                Ln.e("Error code " + initResult + " when initializing native AudioRecord object.");
                throw new RuntimeException("Cannot create AudioRecord");
            }

            // mSampleRate = sampleRate[0]
            Field mSampleRateField = AudioRecord.class.getDeclaredField("mSampleRate");
            mSampleRateField.setAccessible(true);
            mSampleRateField.set(audioRecord, sampleRateArray[0]);

            // audioRecord.mSessionId = session[0]
            Field mSessionIdField = AudioRecord.class.getDeclaredField("mSessionId");
            mSessionIdField.setAccessible(true);
            mSessionIdField.set(audioRecord, session[0]);

            // audioRecord.mState = AudioRecord.STATE_INITIALIZED
            Field mStateField = AudioRecord.class.getDeclaredField("mState");
            mStateField.setAccessible(true);
            mStateField.set(audioRecord, AudioRecord.STATE_INITIALIZED);

            return audioRecord;
        } catch (Exception e) {
            Ln.e("Failed to invoke AudioRecord.<init>.", e);
            throw new RuntimeException("Cannot create AudioRecord");
        }
    }
}
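Almost everything in this file leans on the same two reflection moves: obtain a hidden constructor, field or method with `getDeclared...()`, call `setAccessible(true)`, then instantiate or assign as if the access modifier were not there, and treat any failure as non-fatal (the `catch (Throwable)` blocks above just log and continue). A generic, standalone illustration of that pattern on made-up demo types (not Android classes):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

// Generic illustration of the "construct privately, then fill private fields"
// pattern used by fillActivityThread()/fillAppInfo() above. Demo types only.
public class ReflectionFillDemo {

    static class Hidden {
        private Hidden() {
        }

        private String state = "uninitialized";
    }

    public static void main(String[] args) throws Exception {
        // Same move as the ActivityThread constructor call in the diff
        Constructor<Hidden> ctor = Hidden.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        Hidden instance = ctor.newInstance();

        // Same move as mBoundApplicationField.set(activityThread, appBindData)
        Field stateField = Hidden.class.getDeclaredField("state");
        stateField.setAccessible(true);
        stateField.set(instance, "filled by reflection");

        System.out.println(stateField.get(instance)); // prints "filled by reflection"
    }
}
```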
@ -16,9 +16,9 @@ public class ClipboardManager {
    private Method getPrimaryClipMethod;
    private Method setPrimaryClipMethod;
    private Method addPrimaryClipChangedListener;
    private boolean alternativeGetMethod;
    private boolean alternativeSetMethod;
    private boolean alternativeAddListenerMethod;
    private int getMethodVersion;
    private int setMethodVersion;
    private int addListenerMethodVersion;

    public ClipboardManager(IInterface manager) {
        this.manager = manager;
@ -31,9 +31,20 @@ public class ClipboardManager {
            } else {
                try {
                    getPrimaryClipMethod = manager.getClass().getMethod("getPrimaryClip", String.class, int.class);
                } catch (NoSuchMethodException e) {
                    getPrimaryClipMethod = manager.getClass().getMethod("getPrimaryClip", String.class, String.class, int.class);
                    alternativeGetMethod = true;
                    getMethodVersion = 0;
                } catch (NoSuchMethodException e1) {
                    try {
                        getPrimaryClipMethod = manager.getClass().getMethod("getPrimaryClip", String.class, String.class, int.class);
                        getMethodVersion = 1;
                    } catch (NoSuchMethodException e2) {
                        try {
                            getPrimaryClipMethod = manager.getClass().getMethod("getPrimaryClip", String.class, String.class, int.class, int.class);
                            getMethodVersion = 2;
                        } catch (NoSuchMethodException e3) {
                            getPrimaryClipMethod = manager.getClass().getMethod("getPrimaryClip", String.class, int.class, String.class);
                            getMethodVersion = 3;
                        }
                    }
                }
            }
        }
@ -47,41 +58,64 @@ public class ClipboardManager {
            } else {
                try {
                    setPrimaryClipMethod = manager.getClass().getMethod("setPrimaryClip", ClipData.class, String.class, int.class);
                } catch (NoSuchMethodException e) {
                    setPrimaryClipMethod = manager.getClass().getMethod("setPrimaryClip", ClipData.class, String.class, String.class, int.class);
                    alternativeSetMethod = true;
                    setMethodVersion = 0;
                } catch (NoSuchMethodException e1) {
                    try {
                        setPrimaryClipMethod = manager.getClass().getMethod("setPrimaryClip", ClipData.class, String.class, String.class, int.class);
                        setMethodVersion = 1;
                    } catch (NoSuchMethodException e2) {
                        setPrimaryClipMethod = manager.getClass()
                                .getMethod("setPrimaryClip", ClipData.class, String.class, String.class, int.class, int.class);
                        setMethodVersion = 2;
                    }
                }
            }
        }
        return setPrimaryClipMethod;
    }

    private static ClipData getPrimaryClip(Method method, boolean alternativeMethod, IInterface manager)
    private static ClipData getPrimaryClip(Method method, int methodVersion, IInterface manager)
            throws InvocationTargetException, IllegalAccessException {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
            return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME);
        }
        if (alternativeMethod) {
            return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);

        switch (methodVersion) {
            case 0:
                return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
            case 1:
                return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
            case 2:
                return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID, 0);
            default:
                return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID, null);
        }
        return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
    }

    private static void setPrimaryClip(Method method, boolean alternativeMethod, IInterface manager, ClipData clipData)
    private static void setPrimaryClip(Method method, int methodVersion, IInterface manager, ClipData clipData)
            throws InvocationTargetException, IllegalAccessException {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
            method.invoke(manager, clipData, FakeContext.PACKAGE_NAME);
        } else if (alternativeMethod) {
            method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
        } else {
            method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
            return;
        }

        switch (methodVersion) {
            case 0:
                method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
                break;
            case 1:
                method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
                break;
            default:
                method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID, 0);
                break;
        }
    }

    public CharSequence getText() {
        try {
            Method method = getGetPrimaryClipMethod();
            ClipData clipData = getPrimaryClip(method, alternativeGetMethod, manager);
            ClipData clipData = getPrimaryClip(method, getMethodVersion, manager);
            if (clipData == null || clipData.getItemCount() == 0) {
                return null;
            }
@ -96,7 +130,7 @@ public class ClipboardManager {
        try {
            Method method = getSetPrimaryClipMethod();
            ClipData clipData = ClipData.newPlainText(null, text);
            setPrimaryClip(method, alternativeSetMethod, manager, clipData);
            setPrimaryClip(method, setMethodVersion, manager, clipData);
            return true;
        } catch (InvocationTargetException | IllegalAccessException | NoSuchMethodException e) {
            Ln.e("Could not invoke method", e);
@ -104,14 +138,23 @@ public class ClipboardManager {
        }
    }

    private static void addPrimaryClipChangedListener(Method method, boolean alternativeMethod, IInterface manager,
    private static void addPrimaryClipChangedListener(Method method, int methodVersion, IInterface manager,
            IOnPrimaryClipChangedListener listener) throws InvocationTargetException, IllegalAccessException {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
            method.invoke(manager, listener, FakeContext.PACKAGE_NAME);
        } else if (alternativeMethod) {
            method.invoke(manager, listener, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
        } else {
            method.invoke(manager, listener, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
            return;
        }

        switch (methodVersion) {
            case 0:
                method.invoke(manager, listener, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
                break;
            case 1:
                method.invoke(manager, listener, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
                break;
            default:
                method.invoke(manager, listener, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID, 0);
                break;
        }
    }

@ -124,10 +167,19 @@ public class ClipboardManager {
            try {
                addPrimaryClipChangedListener = manager.getClass()
                        .getMethod("addPrimaryClipChangedListener", IOnPrimaryClipChangedListener.class, String.class, int.class);
            } catch (NoSuchMethodException e) {
                addPrimaryClipChangedListener = manager.getClass()
                        .getMethod("addPrimaryClipChangedListener", IOnPrimaryClipChangedListener.class, String.class, String.class, int.class);
                alternativeAddListenerMethod = true;
                addListenerMethodVersion = 0;
            } catch (NoSuchMethodException e1) {
                try {
                    addPrimaryClipChangedListener = manager.getClass()
                            .getMethod("addPrimaryClipChangedListener", IOnPrimaryClipChangedListener.class, String.class, String.class,
                                    int.class);
                    addListenerMethodVersion = 1;
                } catch (NoSuchMethodException e2) {
                    addPrimaryClipChangedListener = manager.getClass()
                            .getMethod("addPrimaryClipChangedListener", IOnPrimaryClipChangedListener.class, String.class, String.class,
                                    int.class, int.class);
                    addListenerMethodVersion = 2;
                }
            }
        }
    }
@ -137,7 +189,7 @@ public class ClipboardManager {
    public boolean addPrimaryClipChangedListener(IOnPrimaryClipChangedListener listener) {
        try {
            Method method = getAddPrimaryClipChangedListener();
            addPrimaryClipChangedListener(method, alternativeAddListenerMethod, manager, listener);
            addPrimaryClipChangedListener(method, addListenerMethodVersion, manager, listener);
            return true;
        } catch (InvocationTargetException | IllegalAccessException | NoSuchMethodException e) {
            Ln.e("Could not invoke method", e);
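Replacing the single `alternative*` booleans with small `*MethodVersion` integers is what allows a third and fourth `getPrimaryClip()` signature to be supported: the resolver remembers which overload it found, and every later `invoke()` switches on that version to pass the matching argument list. The same probe-and-remember idiom, reduced to a standalone sketch with made-up target methods (not the Android clipboard API):

```java
import java.lang.reflect.Method;

// Illustrative "probe overloads, remember which one matched" idiom,
// mirroring getGetPrimaryClipMethod() above. The target and signatures are hypothetical.
public class OverloadProbeDemo {

    private Method method;
    private int methodVersion = -1;

    void resolve(Class<?> target) throws NoSuchMethodException {
        try {
            method = target.getMethod("doIt", String.class, int.class);
            methodVersion = 0;
        } catch (NoSuchMethodException e0) {
            try {
                method = target.getMethod("doIt", String.class, String.class, int.class);
                methodVersion = 1;
            } catch (NoSuchMethodException e1) {
                method = target.getMethod("doIt", String.class, String.class, int.class, int.class);
                methodVersion = 2;
            }
        }
    }

    Object call(Object target, String pkg, int uid) throws Exception {
        // The remembered version selects the matching argument list.
        switch (methodVersion) {
            case 0:
                return method.invoke(target, pkg, uid);
            case 1:
                return method.invoke(target, pkg, null, uid);
            default:
                return method.invoke(target, pkg, null, uid, 0);
        }
    }

    // Hypothetical target exposing only the "version 1" overload.
    public static class Target {
        public String doIt(String pkg, String attributionTag, int uid) {
            return pkg + "/" + uid;
        }
    }

    public static void main(String[] args) throws Exception {
        OverloadProbeDemo demo = new OverloadProbeDemo();
        demo.resolve(Target.class);
        System.out.println(demo.call(new Target(), "com.android.shell", 0)); // com.android.shell/0
    }
}
```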
@ -14,13 +14,13 @@ public final class InputManager {
    public static final int INJECT_INPUT_EVENT_MODE_WAIT_FOR_RESULT = 1;
    public static final int INJECT_INPUT_EVENT_MODE_WAIT_FOR_FINISH = 2;

    private final android.hardware.input.InputManager manager;
    private final Object manager;
    private Method injectInputEventMethod;

    private static Method setDisplayIdMethod;
    private static Method setActionButtonMethod;

    public InputManager(android.hardware.input.InputManager manager) {
    public InputManager(Object manager) {
        this.manager = manager;
    }
@ -62,11 +62,21 @@ public final class ServiceManager {
        return displayManager;
    }

    public static Class<?> getInputManagerClass() {
        try {
            // Parts of the InputManager class have been moved to a new InputManagerGlobal class in Android 14 preview
            return Class.forName("android.hardware.input.InputManagerGlobal");
        } catch (ClassNotFoundException e) {
            return android.hardware.input.InputManager.class;
        }
    }

    public static InputManager getInputManager() {
        if (inputManager == null) {
            try {
                Method getInstanceMethod = android.hardware.input.InputManager.class.getDeclaredMethod("getInstance");
                android.hardware.input.InputManager im = (android.hardware.input.InputManager) getInstanceMethod.invoke(null);
                Class<?> inputManagerClass = getInputManagerClass();
                Method getInstanceMethod = inputManagerClass.getDeclaredMethod("getInstance");
                Object im = getInstanceMethod.invoke(null);
                inputManager = new InputManager(im);
            } catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
                throw new AssertionError(e);
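`getInputManagerClass()` is a small forward-compatibility probe: it prefers the `InputManagerGlobal` class that Android 14 introduces and falls back to the long-standing `android.hardware.input.InputManager`, which is also why `InputManager` now stores the instance as a plain `Object`. The probe-then-fallback idea in a trivially runnable form, using JDK classes purely as stand-ins:

```java
// Stand-in for getInputManagerClass(): prefer a newer class, fall back to an older one.
public final class ClassFallbackDemo {

    static Class<?> resolveImplementation() {
        try {
            // java.lang.Record only exists on newer JDKs (16+), playing the role of InputManagerGlobal here
            return Class.forName("java.lang.Record");
        } catch (ClassNotFoundException e) {
            // Older runtime: fall back to the class that has always been there
            return Object.class;
        }
    }

    public static void main(String[] args) {
        System.out.println("using " + resolveImplementation().getName());
    }
}
```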
@ -4,6 +4,7 @@ import com.genymobile.scrcpy.Ln;

import android.os.IInterface;
import android.view.IRotationWatcher;
import android.view.IDisplayFoldListener;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
@ -108,4 +109,13 @@ public final class WindowManager {
            throw new AssertionError(e);
        }
    }

    public void registerDisplayFoldListener(IDisplayFoldListener foldListener) {
        try {
            Class<?> cls = manager.getClass();
            cls.getMethod("registerDisplayFoldListener", IDisplayFoldListener.class).invoke(manager, foldListener);
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }
}
@ -12,7 +12,6 @@ import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ControlMessageReaderTest {

    @Test