Compare commits

..

80 Commits

Author SHA1 Message Date
fa6c5b5149 Upgrade FFmpeg (6.0) for Windows
Use the latest version (specifically built for scrcpy).

Refs <https://www.ffmpeg.org/download.html#release_6.0>
2023-02-28 12:35:08 +01:00
884997e854 Use minimal prebuilt FFmpeg for Windows
On the scrcpy-deps repo, I built FFmpeg 5.1.2 binaries for Windows with
only the features used by scrcpy.

For comparison, here are the DLL sizes for FFmpeg 5.1.2:
 - before: 89M
 - after: 4.7M

It also allows upgrading the old FFmpeg version (4.3.1) used for win32.

Refs <https://github.com/rom1v/scrcpy-deps>
Refs <https://github.com/Genymobile/scrcpy/issues/1753>
2023-02-28 12:35:08 +01:00
a0fa9967b8 Simplify libusb prebuilt scripts
In theory, include/ might be slightly different for win32 and win64
builds. Use each one separately to simplify.
2023-02-28 12:35:08 +01:00
260edc318f Add compat support for FFmpeg < 5.1
The new chlayout API has been introduced in FFmpeg 5.1. Use the old
channel_layout API on older versions.
2023-02-28 12:35:08 +01:00
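For illustration, a minimal sketch of such a compile-time switch (assuming the FFmpeg 5.1 cutoff corresponds to libavutil 57.28.100; the exact check used by scrcpy may differ):

```c
#include <libavcodec/avcodec.h>
#include <libavutil/version.h>

static int
codec_ctx_nb_channels(const AVCodecContext *ctx) {
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 28, 100)
    // FFmpeg >= 5.1: new chlayout API
    return ctx->ch_layout.nb_channels;
#else
    // FFmpeg < 5.1: legacy channel_layout/channels API
    return ctx->channels;
#endif
}
```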
e48098ec8d Add workaround to capture audio on Android 11
On Android 11, the capture can only be started when the running app is in
the foreground. But scrcpy is not an app: it is a Java application started
from the shell.

As a workaround, start an existing Android shell activity just to start the
capture, then close it immediately.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-28 12:35:08 +01:00
8fad02aafa Add audio player
Play the decoded audio using SDL.

The audio player frame sink receives the audio frames, resamples them and
writes them to a byte buffer (introduced by this commit).

In the SDL audio callback (called from an internal SDL thread), samples are
copied from this byte buffer to the SDL audio buffer.

The byte buffer is protected by SDL_LockAudioDevice(), but it has been
designed so that the producer and the consumer may write and read in
parallel, provided that they don't access the same slices of the ring
buffer.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-28 12:35:08 +01:00
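As an illustration of the callback side described above, a hypothetical sketch (the ring buffer, its fields and `audio_callback` are illustrative names, not the actual scrcpy code):

```c
#include <SDL2/SDL.h>
#include <stdint.h>
#include <string.h>

// SDL never runs the callback while the audio device lock is held elsewhere,
// so reading the consumer side of the ring buffer here is safe.
struct audio_ring {
    uint8_t data[65536]; // power of two, so unsigned modulo arithmetic is valid
    size_t head;         // written by the producer (frame sink), under the lock
    size_t tail;         // read by the consumer (this callback)
};

static void SDLCALL
audio_callback(void *userdata, Uint8 *stream, int len) {
    struct audio_ring *ring = userdata;
    size_t avail = (ring->head - ring->tail) % sizeof(ring->data);
    size_t copy = (size_t) len < avail ? (size_t) len : avail;
    for (size_t i = 0; i < copy; ++i)
        stream[i] = ring->data[(ring->tail + i) % sizeof(ring->data)];
    ring->tail = (ring->tail + copy) % sizeof(ring->data);
    if (copy < (size_t) len)
        memset(stream + copy, 0, (size_t) len - copy); // underrun: silence
}
```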
bb935764ae Add two-step write feature to bytebuf
If there is exactly one producer, then it can assume that the remaining
space in the buffer will only increase until it writes something.

This assumption may allow the producer to write to the buffer (up to a
known safe size) without any synchronization mechanism, so that different
parts of the buffer can be read and written in parallel.

The producer can then commit the write with the lock held, and update its
knowledge of the remaining safe empty space.
2023-02-28 12:35:08 +01:00
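A hypothetical sketch of the two-step pattern (illustrative names and a fixed power-of-two capacity, not the actual scrcpy bytebuf API):

```c
#include <stddef.h>
#include <stdint.h>

struct bytebuf {
    uint8_t data[4096]; // power of two, so unsigned modulo arithmetic is valid
    size_t head;        // producer index
    size_t tail;        // consumer index
    size_t safe_avail;  // empty space known to the (single) producer
};

// Step 1: without holding the lock, write at most the known-safe amount.
static size_t
bytebuf_prepare_write(struct bytebuf *buf, const uint8_t *src, size_t len) {
    size_t written = len < buf->safe_avail ? len : buf->safe_avail;
    for (size_t i = 0; i < written; ++i)
        buf->data[(buf->head + i) % sizeof(buf->data)] = src[i];
    buf->safe_avail -= written;
    return written;
}

// Step 2: with the lock held, publish the write and refresh the safe size.
static void
bytebuf_commit_write(struct bytebuf *buf, size_t written) {
    buf->head = (buf->head + written) % sizeof(buf->data);
    size_t used = (buf->head - buf->tail) % sizeof(buf->data);
    buf->safe_avail = sizeof(buf->data) - 1 - used;
}
```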
fe127af72e Introduce bytebuf util
Add a ring-buffer for bytes. It will be useful for buffering audio.
2023-02-28 12:35:08 +01:00
8e32d15e6c Pass AVCodecContext to frame sinks
Frame consumers may need details about the frame format.
2023-02-28 12:35:08 +01:00
7f2989e1d5 Add an audio decoder 2023-02-28 12:35:08 +01:00
da57902dd5 Give a name to decoder instances
This will be useful in logs.
2023-02-28 12:35:08 +01:00
28701090f6 Rename decoder to video_decoder 2023-02-28 12:35:08 +01:00
c50dc53bc2 Log display sizes in display list
This is more convenient than just the display id alone.
2023-02-28 12:35:08 +01:00
30b8429752 Add --list-device-displays 2023-02-28 12:35:08 +01:00
38e5dafba6 Move log message helpers to LogUtils
This class will also contain other log helpers.
2023-02-28 12:35:08 +01:00
d60a502485 Quit on audio configuration failure
When audio capture fails on the device, scrcpy continues mirroring the
video stream. This makes it possible to enable audio by default without
breaking mirroring on devices where it is not supported.

However, if an audio configuration error occurs (for example because the
user explicitly selected an unknown audio encoder), it must be treated as
an error and scrcpy must exit.
2023-02-28 12:35:08 +01:00
55f4c42f19 Add --list-encoders
Add an option to list the device encoders properly.
2023-02-28 12:35:08 +01:00
7a9eefb04a Move await_for_server() logs
Print the logs on the caller side. This will allow calling the function
in another context without printing the logs.
2023-02-28 12:35:08 +01:00
e39ad0f695 Add --audio-encoder
Similar to --video-encoder, but for audio.
2023-02-28 12:35:08 +01:00
377b6f57c5 Extract unknown encoder error message
This will allow reusing the same code for audio encoder selection.
2023-02-28 12:35:08 +01:00
aa8ed923f0 Add --audio-codec-options
Similar to --video-codec-options, but for audio.
2023-02-28 12:35:08 +01:00
13211f82a1 Extract application of codec options
This will allow reusing the same code for audio codec options.
2023-02-28 12:35:08 +01:00
7f50ed2458 Add support for AAC audio codec
Add option --audio-codec=aac.
2023-02-28 12:35:08 +01:00
9187472014 Add --audio-codec
Introduce the selection mechanism. Alternative codecs will be added
later.
2023-02-28 12:35:08 +01:00
a331c2c653 Add --audio-bit-rate
Add an option to configure the audio bit-rate.
2023-02-28 12:35:08 +01:00
f75dc2e477 Disable MethodLength checkstyle on createOptions()
This method will grow as needed to initialize options.
2023-02-28 12:35:08 +01:00
8a5be9e2a6 Rename --encoder to --video-encoder
This prepares the introduction of --audio-encoder.
2023-02-28 12:35:08 +01:00
705d69aaea Rename --codec-options to --video-codec-options
This prepares the introduction of --audio-codec-options.
2023-02-28 12:35:08 +01:00
ad51a2b411 Rename --bit-rate to --video-bit-rate
This prepares the introduction of --audio-bit-rate.
2023-02-28 12:35:08 +01:00
7581dc10d3 Rename --codec to --video-codec
This prepares the introduction of --audio-codec.
2023-02-28 12:35:08 +01:00
68cd396e1f Remove default bit-rate on client side
If no bit-rate is passed, let the server use the default value (8Mbps).

This avoids defining a default value on both sides, and passing the
default bit-rate as an argument when starting the server.
2023-02-28 12:35:08 +01:00
8dc1fd172a Record at least video packets on stop
If the recorder is stopped while it has not received any audio packet
yet, make sure the video stream is correctly recorded.
2023-02-28 12:35:08 +01:00
9c34c34e5d Disable audio before Android 11
The permission "android.permission.RECORD_AUDIO" has been added for
shell in Android 11.

Moreover, on lower versions, it may make the server segfault on the
device (this happened on a Nexus 5 running Android 6.0.1).

Refs <4feeee8891%5E%21/>
2023-02-28 12:35:08 +01:00
0abb268432 Disable audio on initialization error
By default, audio is enabled (--no-audio must be explicitly passed to
disable it).

However, some devices may not support audio capture (typically devices
running a version below Android 11, or Android 11 when the shell
application is not in the foreground on start).

In that case, make the server notify the client to dynamically disable
audio forwarding so that it does not wait indefinitely for an audio
stream.

Also disable audio on unknown codec or missing decoder on the
client-side, for the same reasons.
2023-02-28 12:35:08 +01:00
f2c65808fa Add record audio support
Make the recorder accept two input sources (video and audio), and mux
them into a single file.
2023-02-28 12:35:08 +01:00
93e86a5661 Rename video-specific variables in recorder
This paves the way to add audio-specific variables.
2023-02-28 12:35:08 +01:00
e614e19df4 Do not merge config audio packets
For video streams (at least H.264 and H.265), the config packet
containing SPS/PPS must be prepended to the next packet (the following
keyframe).

For audio streams (at least OPUS), config packets must not be merged.
2023-02-28 12:35:08 +01:00
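For illustration, a sketch of the video-side merge (an illustrative helper, not the actual scrcpy demuxer code):

```c
#include <libavcodec/avcodec.h>
#include <string.h>

// Prepend a pending config packet (SPS/PPS) to the next video packet before
// pushing it to the sinks. For audio, the config packet is forwarded as-is.
static int
prepend_config_packet(AVPacket *packet, const AVPacket *config) {
    int offset = config->size;
    int old_size = packet->size;
    int ret = av_grow_packet(packet, offset);
    if (ret < 0)
        return ret; // allocation failure
    memmove(packet->data + offset, packet->data, old_size);
    memcpy(packet->data, config->data, offset);
    return 0;
}
```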
e7c931139b Add an audio demuxer
Add a demuxer which will read the stream from the audio socket.
2023-02-28 12:35:08 +01:00
98ece15421 Give a name to demuxer instances
This will be useful in logs.
2023-02-28 12:35:08 +01:00
8050de011c Rename demuxer to video_demuxer
There will be another demuxer instance for audio.
2023-02-28 12:35:08 +01:00
714b01204a Extract OPUS extradata
For the OPUS codec, FFmpeg expects the raw extradata, but MediaCodec wraps
it in its own structure.

Fix the config packet to send only the raw extradata.
2023-02-28 12:35:08 +01:00
9e831005c4 Use a streamer to send the audio stream
Send each encoded audio packet using a streamer.
2023-02-28 12:35:08 +01:00
55eb874ed6 Encode recorded audio on the device
For now, the encoded packets are just logged into the console.
2023-02-28 12:35:08 +01:00
667882e9c4 Capture device audio
Create an AudioRecorder to capture the audio source REMOTE_SUBMIX.

For now, the captured packets are just logged into the console.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-28 12:35:08 +01:00
0e62580570 Add a new socket for audio stream
When audio is enabled, open a new socket to send the audio stream from
the device to the client.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-28 12:35:08 +01:00
ded19ca0f0 Add --no-audio option
Audio will be enabled by default (when supported). Add an option to
disable it.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-28 12:35:08 +01:00
ad683461d6 Use FakeContext for Application instance
This will expose the correct package name and UID to the application
context.
2023-02-28 12:35:08 +01:00
504793b5c9 Use shell package name for workarounds
For consistency.
2023-02-28 12:35:08 +01:00
00e88acfaa Use ROOT_UID from FakeContext
Remove USER_ID from ServiceManager, and replace it with a constant in
FakeContext.

This is the same as android.os.Process.ROOT_UID, but that constant was
only introduced in API 29.
2023-02-28 12:35:08 +01:00
7c0ee70261 Use PACKAGE_NAME from FakeContext
Remove duplicated constant.
2023-02-28 12:35:08 +01:00
2a4eec702d Use AttributionSource from FakeContext
FakeContext already provides an AttributionSource instance.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-28 12:35:08 +01:00
1e113feb59 Add a fake Android Context
Since scrcpy-server is not an Android application (it's a Java
executable), it has no Context.

Some features will require a Context instance to get the package name
and the UID. Add a FakeContext for this purpose.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-28 12:35:08 +01:00
08da34835c Improve error message for unknown encoder
The provided encoder name depends on the selected codec. Improve the
error message and the suggestions.
2023-02-28 12:35:08 +01:00
7b3a39bdc7 Rename "codec" variable to "mediaCodec"
This will allow using "codec" for the Codec type.
2023-02-28 12:35:08 +01:00
5586335276 Make streamer independent of codec type
Rename VideoStreamer to Streamer, and extract a Codec interface which
will also support audio codecs.
2023-02-28 12:35:08 +01:00
8bb63cf14c Pass all args to ScreenEncoder constructor
There is no good reason to pass some of them in the constructor and others
as parameters of the streamScreen() method.
2023-02-28 12:35:08 +01:00
5b01457364 Move screen encoder initialization
This prepares further refactors.
2023-02-28 12:35:08 +01:00
010da4df59 Write streamer header from ScreenEncoder
The screen encoder is responsible for writing data to the video streamer.
2023-02-28 12:35:08 +01:00
de332e3e96 Use VideoStreamer directly from ScreenEncoder
The Callbacks interface notifies about new packets. In addition, the
screen encoder will need to write headers on start.

We could add an onStart() function, but for simplicity, just remove the
interface (which brings no value) and call the streamer directly.

Refs 87972e2022
2023-02-28 12:35:08 +01:00
e679e3a966 Simplify error handling on socket creation
On any error, all previously opened sockets must be closed.

Handle these errors in a single catch-block. Currently, there are only 2
sockets, but this will simplify things even more once more sockets are added.

Note: this commit is better displayed with --ignore-space-change (-b).
2023-02-28 12:35:08 +01:00
97ae0a2d13 Reorder initialization
Initialize components in the pipeline order: demuxer first, decoder and
recorder second.
2023-02-28 12:35:08 +01:00
f1a4349834 Refactor recorder logic
Process the initial config packet (necessary to write the header)
separately.
2023-02-28 12:35:08 +01:00
a95bfe4f01 Move last packet recording
Write the last packet at the end.
2023-02-28 12:35:08 +01:00
84f1792c6f Add start() function for recorder
For consistency with the other components, do not start the internal
thread from an init() function.
2023-02-28 12:35:08 +01:00
317a5e93bb Open recording file from the recorder thread
The recorder opened the target file from the packet sink open()
callback, called by the demuxer. Only then was the recorder thread
started.

One golden rule for the recorder is to never block the demuxer for I/O,
because that would impact mirroring. This rule is respected when recording
packets, but not for the initial recorder opening.

Therefore, start the recorder thread from sc_recorder_init(), open the
file immediately from the recorder thread, then make it wait for the
stream to start (on packet sink open()).

Now that the recorder can report errors directly (rather than making the
demuxer call fail), it is possible to report a file opening error even
before the packet sink is open.
2023-02-28 12:35:08 +01:00
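A minimal sketch of this thread ordering, with hypothetical names (the real recorder uses scrcpy's own thread wrappers, muxer and error reporting):

```c
#include <SDL2/SDL.h>
#include <stdbool.h>
#include <stdio.h>

struct recorder {
    const char *filename;
    SDL_mutex *mutex;
    SDL_cond *stream_cond;
    bool stream_started; // set by the packet sink open() callback
    bool stopped;
};

static int
run_recorder(void *data) {
    struct recorder *recorder = data;

    // Open the target file immediately, so that I/O errors are reported
    // without waiting for the demuxer to push the first packet.
    FILE *file = fopen(recorder->filename, "wb");
    if (!file)
        return -1;

    // Then wait for the stream to start before writing anything.
    SDL_LockMutex(recorder->mutex);
    while (!recorder->stream_started && !recorder->stopped)
        SDL_CondWait(recorder->stream_cond, recorder->mutex);
    SDL_UnlockMutex(recorder->mutex);

    // ... consume queued packets and write them to the file ...

    fclose(file);
    return 0;
}
```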
6b669d2dba Inline packet_sink impl in recorder
Remove useless wrappers.
2023-02-28 12:35:08 +01:00
21dd946edc Initialize recorder fields from init()
The recorder has two initialization phases: one to initialize the
concrete recorder object, and one to open its packet_sink trait.

Initialize mutex and condvar as part of the object initialization.

If there were several packet_sink traits, the mutex and condvar would
still be initialized only once.
2023-02-28 12:35:08 +01:00
669cbc7457 Report recorder errors
Stop scrcpy on recorder errors.

It was previously indirectly stopped by the demuxer, which failed to
push packets to a recorder in error. Report it directly instead:
 - it avoids to wait for the next demuxer call;
 - it will allow to open the target file from a separate thread and stop
   immediately on any I/O error.
2023-02-28 12:35:08 +01:00
d3adda176b Move previous packet to a local variable
It is only used from run_recorder().
2023-02-28 12:35:08 +01:00
e7fa099be4 Move pts_origin to a local variable
It is only used from run_recorder().
2023-02-28 12:35:08 +01:00
7ec2c7e232 Change PTS origin type from uint64_t to int64_t
It is initialized from AVPacket.pts, which is an int64_t.
2023-02-28 12:35:08 +01:00
932e698cdd Fix --encoder documentation
Mention that it depends on the codec provided by --codec (which is not
necessarily H264 anymore).
2023-02-28 12:35:08 +01:00
ca5b962377 Do not print stacktraces when unnecessary
User-friendly error messages are printed on specific configuration
exceptions. In that case, do not print the stacktrace.

Also handle the user-friendly error message directly where the error
occurs, and print multiline messages in a single log call, to avoid
confusing interleaving.
2023-02-28 12:35:08 +01:00
9233f1990e Fix --no-clipboard-autosync bash completion
Fix typo.
2023-02-28 12:35:08 +01:00
17a486d763 Split server stop() and join()
For consistency with the other components, call stop() and join()
separately.

This allows stopping all components, then joining them all.
2023-02-28 12:35:08 +01:00
65e9206b3f Print FFmpeg logs
FFmpeg logs are redirected to a specific SDL log category.

Initialize the log level for this category so that they are printed as expected.
2023-02-28 12:35:08 +01:00
bf7c0f2c33 Move FFmpeg callback initialization
Configure FFmpeg log redirection on start from a log helper.
2023-02-28 12:35:08 +01:00
dc585f033e Silence lint warning about constant in API 29
MediaFormat.MIMETYPE_VIDEO_AV1 was added in API 29, but it is not a
problem to inline the constant on older versions.
2023-02-28 12:35:08 +01:00
b67ea5173c Remove manifest package name
As reported by gradle:

> Setting the namespace via a source AndroidManifest.xml's package
> attribute is deprecated.
>
> Please instead set the namespace (or testNamespace) in the module's
> build.gradle file, as described here:
> https://developer.android.com/studio/build/configure-app-module#set-namespace
2023-02-28 12:35:08 +01:00
cf9718bee4 Upgrade gradle build tools to 7.4.0
Plugin version 7.4.0.
Gradle version 7.5.

Refs <https://developer.android.com/studio/releases/gradle-plugin#updating-gradle>
2023-02-28 12:35:08 +01:00
155 changed files with 4641 additions and 8706 deletions

1
.gitignore vendored
View File

@ -7,4 +7,3 @@ build/
.gradle/
/x/
local.properties
/scrcpy-server

View File

@ -2,15 +2,56 @@
Here are the instructions to build _scrcpy_ (client and server).
If you just want to build and install the latest release, follow the simplified
process described in [doc/linux.md](linux.md).
## Simple
If you just want to install the latest release from `master`, follow this
simplified process.
First, you need to install the required packages:
```bash
# for Debian/Ubuntu
sudo apt install ffmpeg libsdl2-2.0-0 adb wget \
gcc git pkg-config meson ninja-build libsdl2-dev \
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
libusb-1.0-0 libusb-1.0-0-dev
```
Then clone the repo and execute the installation script
([source](install_release.sh)):
```bash
git clone https://github.com/Genymobile/scrcpy
cd scrcpy
./install_release.sh
```
When a new release is out, update the repo and reinstall:
```bash
git pull
./install_release.sh
```
To uninstall:
```bash
sudo ninja -Cbuild-auto uninstall
```
## Branches
There are two main branches:
- `master`: contains the latest release. It is the home page of the project on
GitHub.
- `dev`: the current development branch. Every commit present in `dev` will be
### `master`
The `master` branch concerns the latest release, and is the home page of the
project on GitHub.
### `dev`
`dev` is the current development branch. Every commit present in `dev` will be
in the next release.
If you want to contribute code, please base your commits on the latest `dev`
@ -28,8 +69,6 @@ the following files to a directory accessible from your `PATH`:
- `AdbWinApi.dll`
- `AdbWinUsbApi.dll`
It is also available in scrcpy releases.
The client requires [FFmpeg] and [LibSDL2]. Just follow the instructions.
[adb]: https://developer.android.com/studio/command-line/adb.html
@ -55,7 +94,7 @@ sudo apt install ffmpeg libsdl2-2.0-0 adb libusb-1.0-0
# client build dependencies
sudo apt install gcc git pkg-config meson ninja-build libsdl2-dev \
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
libswresample-dev libusb-1.0-0-dev
libusb-1.0-0-dev
# server build dependencies
sudo apt install openjdk-11-jdk
@ -77,7 +116,7 @@ pip3 install meson
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
# client build dependencies
sudo dnf install SDL2-devel ffms2-devel libusb1-devel meson gcc make
sudo dnf install SDL2-devel ffms2-devel libusb-devel meson gcc make
# server build dependencies
sudo dnf install java-devel
@ -233,10 +272,10 @@ install` must be run as root)._
#### Option 2: Use prebuilt server
- [`scrcpy-server-v2.1.1`][direct-scrcpy-server]
<sub>SHA-256: `9558db6c56743a1dc03b38f59801fb40e91cc891f8fc0c89e5b0b067761f148e`</sub>
- [`scrcpy-server-v1.25`][direct-scrcpy-server]
<sub>SHA-256: `ce0306c7bbd06ae72f6d06f7ec0ee33774995a65de71e0a83813ecb67aec9bdb`</sub>
[direct-scrcpy-server]: https://github.com/Genymobile/scrcpy/releases/download/v2.1.1/scrcpy-server-v2.1.1
[direct-scrcpy-server]: https://github.com/Genymobile/scrcpy/releases/download/v1.25/scrcpy-server-v1.25
Download the prebuilt server somewhere, and specify its path during the Meson
configuration:
@ -275,8 +314,7 @@ This installs several files:
- `/usr/local/share/zsh/site-functions/_scrcpy` (zsh completion)
- `/usr/local/share/bash-completion/completions/scrcpy` (bash completion)
You can then run `scrcpy`.
You can then [run](README.md#run) `scrcpy`.
### Uninstall

309
DEVELOP.md Normal file
View File

@ -0,0 +1,309 @@
# scrcpy for developers
## Overview
This application is composed of two parts:
- the server (`scrcpy-server`), to be executed on the device,
- the client (the `scrcpy` binary), executed on the host computer.
The client is responsible for pushing the server to the device and starting
its execution.
Once the client and the server are connected to each other, the server initially
sends device information (name and initial screen dimensions), then starts to
send a raw H.264 video stream of the device screen. The client decodes the video
frames and displays them as soon as possible, without buffering, to minimize
latency. The client is not aware of the device rotation (which is handled by the
server); it just knows the dimensions of the video frames.
The client captures relevant keyboard and mouse events, which it transmits to the
server, which injects them into the device.
## Server
### Privileges
Capturing the screen requires some privileges, which are granted to `shell`.
The server is a Java application (with a [`public static void main(String...
args)`][main] method), compiled against the Android framework, and executed as
`shell` on the Android device.
[main]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/Server.java#L123
To run such a Java application, the classes must be [_dexed_][dex] (typically,
to `classes.dex`). If `my.package.MainClass` is the main class, compiled to
`classes.dex`, pushed to the device in `/data/local/tmp`, then it can be run
with:
adb shell CLASSPATH=/data/local/tmp/classes.dex \
app_process / my.package.MainClass
_The path `/data/local/tmp` is a good candidate to push the server, since it's
readable and writable by `shell`, but not world-writable, so a malicious
application may not replace the server just before the client executes it._
Instead of a raw _dex_ file, `app_process` accepts a _jar_ containing
`classes.dex` (e.g. an [APK]). For simplicity, and to benefit from the gradle
build system, the server is built to an (unsigned) APK (renamed to
`scrcpy-server`).
[dex]: https://en.wikipedia.org/wiki/Dalvik_(software)
[apk]: https://en.wikipedia.org/wiki/Android_application_package
### Hidden methods
Although the server is compiled against the Android framework, [hidden] methods
and classes are not directly accessible (and they may differ from one Android
version to another).
They can be called using reflection though. The communication with hidden
components is provided by [_wrapper_ classes][wrappers] and [aidl].
[hidden]: https://stackoverflow.com/a/31908373/1987178
[wrappers]: https://github.com/Genymobile/scrcpy/tree/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/wrappers
[aidl]: https://github.com/Genymobile/scrcpy/tree/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/aidl/android/view
### Threading
The server uses 3 threads:
- the **main** thread, encoding and streaming the video to the client;
- the **controller** thread, listening for _control messages_ (typically,
keyboard and mouse events) from the client;
- the **receiver** thread (managed by the controller), sending _device messages_
to the client (currently, it is only used to send the device clipboard
content).
Since the video encoding is typically performed in hardware, there would be no
benefit in encoding and streaming in two different threads.
### Screen video encoding
The encoding is managed by [`ScreenEncoder`].
The video is encoded using the [`MediaCodec`] API. The codec takes its input
from a [surface] associated with the display, and writes the resulting H.264
stream to the provided output stream (the socket connected to the client).
[`ScreenEncoder`]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java
[`MediaCodec`]: https://developer.android.com/reference/android/media/MediaCodec.html
[surface]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L68-L69
On device [rotation], the codec, surface and display are reinitialized, and a
new video stream is produced.
New frames are produced only when changes occur on the surface. This is good
because it avoids sending unnecessary frames, but there are drawbacks:
- it does not send any frame on start if the device screen does not change,
- after fast motion changes, the last frame may have poor quality.
Both problems are [solved][repeat] by the flag
[`KEY_REPEAT_PREVIOUS_FRAME_AFTER`][repeat-flag].
[rotation]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L90
[repeat]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/ScreenEncoder.java#L147-L148
[repeat-flag]: https://developer.android.com/reference/android/media/MediaFormat.html#KEY_REPEAT_PREVIOUS_FRAME_AFTER
### Input events injection
_Control messages_ are received from the client by the [`Controller`] (run in a
separate thread). There are several types of input events:
- keycode (cf [`KeyEvent`]),
- text (special characters may not be handled by keycodes directly),
- mouse motion/click,
- mouse scroll,
- other commands (e.g. to switch the screen on or to copy the clipboard).
Some of them need to inject input events into the system. To do so, they use the
_hidden_ method [`InputManager.injectInputEvent`] (exposed by our
[`InputManager` wrapper][inject-wrapper]).
[`Controller`]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/Controller.java#L81
[`KeyEvent`]: https://developer.android.com/reference/android/view/KeyEvent.html
[`MotionEvent`]: https://developer.android.com/reference/android/view/MotionEvent.html
[`InputManager.injectInputEvent`]: https://android.googlesource.com/platform/frameworks/base/+/oreo-release/core/java/android/hardware/input/InputManager.java#857
[inject-wrapper]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/server/src/main/java/com/genymobile/scrcpy/wrappers/InputManager.java#L27
## Client
The client relies on [SDL], which provides a cross-platform API for UI, input
events, threading, etc.
The video stream is decoded by [libav] (FFmpeg).
[SDL]: https://www.libsdl.org
[libav]: https://www.libav.org/
### Initialization
On startup, in addition to _libav_ and _SDL_ initialization, the client must
push and start the server on the device, and open two sockets (one for the video
stream, one for control) so that they may communicate.
Note that the client-server roles are expressed at the application level:
- the server _serves_ the video stream and handles requests from the client,
- the client _controls_ the device through the server.
However, the roles are reversed at the network level:
- the client opens a server socket and listens on a port before starting the
server,
- the server connects to the client.
This role inversion guarantees that the connection will not fail due to race
conditions, and avoids polling.
_(Note that over TCP/IP, the roles are not reversed, due to a bug in `adb
reverse`. See commit [1038bad] and [issue #5].)_
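A simplified, POSIX-only sketch of this listen-then-accept ordering (assumed names; the actual scrcpy networking code is abstracted for portability):

```c
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

// The client listens before starting the server on the device, so the server
// can connect back (through "adb reverse") without polling or racing.
static int
listen_for_server(uint16_t port) {
    int server_socket = socket(AF_INET, SOCK_STREAM, 0);
    if (server_socket == -1)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // adb reverse targets localhost
    addr.sin_port = htons(port);

    if (bind(server_socket, (struct sockaddr *) &addr, sizeof(addr)) == -1
            || listen(server_socket, 1) == -1) {
        close(server_socket);
        return -1;
    }

    // ... run "adb reverse" and start the server on the device here ...

    int device_socket = accept(server_socket, NULL, NULL);
    close(server_socket);
    return device_socket;
}
```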
Once the server is connected, it sends the device information (name and initial
screen dimensions). Thus, the client may initialize the window and renderer
before the first frame is available.
To minimize startup time, SDL initialization is performed while listening for
the connection from the server (see commit [90a46b4]).
[1038bad]: https://github.com/Genymobile/scrcpy/commit/1038bad3850f18717a048a4d5c0f8110e54ee172
[issue #5]: https://github.com/Genymobile/scrcpy/issues/5
[90a46b4]: https://github.com/Genymobile/scrcpy/commit/90a46b4c45637d083e877020d85ade52a9a5fa8e
### Threading
The client uses 4 threads:
- the **main** thread, executing the SDL event loop,
- the **stream** thread, receiving the video and used for decoding and
recording,
- the **controller** thread, sending _control messages_ to the server,
- the **receiver** thread (managed by the controller), receiving _device
messages_ from the server.
In addition, another thread can be started if necessary to handle APK
installation or file push requests (via drag&drop on the main window) or to
print the framerate regularly in the console.
### Stream
The video [stream] is received from the socket (connected to the server on the
device) in a separate thread.
If a [decoder] is present (i.e. `--no-display` is not set), then it uses _libav_
to decode the H.264 stream from the socket, and notifies the main thread when a
new frame is available.
There are two [frames][video_buffer] simultaneously in memory:
- the **decoding** frame, written by the decoder from the decoder thread,
- the **rendering** frame, rendered in a texture from the main thread.
When a new decoded frame is available, the decoder _swaps_ the decoding and
rendering frames (with proper synchronization). Thus, it immediately starts
decoding a new frame while the main thread renders the last one.
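A minimal sketch of such a swap (illustrative; the real video buffer also tracks whether a new frame is pending):

```c
#include <SDL2/SDL.h>
#include <libavutil/frame.h>

// The decoder and the renderer each keep a pointer to their own AVFrame;
// exchanging the pointers under a mutex lets both threads keep working on
// distinct frames.
static void
swap_frames(AVFrame **decoding_frame, AVFrame **rendering_frame,
            SDL_mutex *mutex) {
    SDL_LockMutex(mutex);
    AVFrame *tmp = *decoding_frame;
    *decoding_frame = *rendering_frame;
    *rendering_frame = tmp;
    SDL_UnlockMutex(mutex);
}
```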
If a [recorder] is present (i.e. `--record` is enabled), then it muxes the raw
H.264 packets into the output video file.
[stream]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/stream.h
[decoder]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/decoder.h
[video_buffer]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/video_buffer.h
[recorder]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/recorder.h
```
                                    +----------+      +----------+
                               ---> | decoder  | ---> |  screen  |
            +---------+       /     +----------+      +----------+
socket ---> | stream  | ------
            +---------+       \     +----------+
                               ---> | recorder |
                                    +----------+
```
### Controller
The [controller] is responsible for sending _control messages_ to the device. It
runs in a separate thread, to avoid I/O on the main thread.
On an SDL event, received on the main thread, the [input manager][inputmanager]
creates appropriate [_control messages_][controlmsg]. It is responsible for
converting SDL events to Android events (using [convert]). It pushes the
_control messages_ to a queue held by the controller. On its own thread, the
controller takes messages from the queue, serializes them and sends them to the
device (see the sketch below).
[controller]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/controller.h
[controlmsg]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/control_msg.h
[inputmanager]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/input_manager.h
[convert]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/convert.h
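A hypothetical sketch of the queue hand-off (illustrative names and sizes, not the actual controller API):

```c
#include <SDL2/SDL.h>
#include <stdbool.h>
#include <string.h>

// The input manager (main thread) pushes serialized control messages; the
// controller thread waits on the condition variable, pops and sends them.
struct msg_queue {
    SDL_mutex *mutex;
    SDL_cond *cond;
    unsigned char msgs[64][256]; // fixed-size slots for serialized messages
    size_t lengths[64];
    size_t count;
};

static bool
msg_queue_push(struct msg_queue *queue, const unsigned char *data, size_t len) {
    bool pushed = false;
    SDL_LockMutex(queue->mutex);
    if (queue->count < 64 && len <= sizeof(queue->msgs[0])) {
        memcpy(queue->msgs[queue->count], data, len);
        queue->lengths[queue->count] = len;
        ++queue->count;
        SDL_CondSignal(queue->cond); // wake up the controller thread
        pushed = true;
    }
    SDL_UnlockMutex(queue->mutex);
    return pushed;
}
```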
### UI and event loop
Initialization, input events and rendering are all [managed][scrcpy] in the main
thread.
Events are handled in the [event loop], which either updates the [screen] or
delegates to the [input manager][inputmanager].
[scrcpy]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/scrcpy.c
[event loop]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/scrcpy.c#L201
[screen]: https://github.com/Genymobile/scrcpy/blob/ffe0417228fb78ab45b7ee4e202fc06fc8875bf3/app/src/screen.h
## Hack
For more details, go read the code!
If you find a bug, or have an awesome idea to implement, please discuss and
contribute ;-)
### Debug the server
The server is pushed to the device by the client on startup.
To debug it, enable the server debugger during configuration:
```bash
meson setup x -Dserver_debugger=true
# or, if x is already configured
meson configure x -Dserver_debugger=true
```
If your device runs Android 8 or below, set the `server_debugger_method` to
`old` in addition:
```bash
meson setup x -Dserver_debugger=true -Dserver_debugger_method=old
# or, if x is already configured
meson configure x -Dserver_debugger=true -Dserver_debugger_method=old
```
Then recompile.
When you start scrcpy, it will start a debugger on port 5005 on the device.
Redirect that port to the computer:
```bash
adb forward tcp:5005 tcp:5005
```
In Android Studio, _Run_ > _Debug_ > _Edit configurations..._ On the left, click on
`+`, _Remote_, and fill the form:
- Host: `localhost`
- Port: `5005`
Then click on _Debug_.

149
FAQ.md
View File

@ -7,7 +7,7 @@ Here are the common reported problems and their status.
If you encounter any error, the first step is to upgrade to the latest version.
## `adb` and USB issues
## `adb` issues
`scrcpy` executes `adb` commands to initialize the connection with the device. If
`adb` fails, then scrcpy will not work.
@ -133,21 +133,6 @@ Try with another USB cable or plug it into another USB port. See [#281] and
[#283]: https://github.com/Genymobile/scrcpy/issues/283
## HID/OTG issues on Windows
On Windows, if `scrcpy --otg` (or `--hid-keyboard`/`--hid-mouse`) results in:
> ERROR: Could not find any USB device
(or if only unrelated USB devices are detected), there might be drivers issues.
Please read [#3654], in particular [this comment][#3654-comment1] and [the next
one][#3654-comment2].
[#3654]: https://github.com/Genymobile/scrcpy/issues/3654
[#3654-comment1]: https://github.com/Genymobile/scrcpy/issues/3654#issuecomment-1369278232
[#3654-comment2]: https://github.com/Genymobile/scrcpy/issues/3654#issuecomment-1369295011
## Control issues
@ -159,8 +144,6 @@ In developer options, enable:
> **USB debugging (Security settings)**
> _Allow granting permissions and simulating input via USB debugging_
Rebooting the device is necessary once this option is set.
[simulating input]: https://github.com/Genymobile/scrcpy/issues/70#issuecomment-373286323
@ -170,16 +153,43 @@ The default text injection method is [limited to ASCII characters][text-input].
A trick also allows injecting some [accented characters][accented-characters],
but that's all. See [#37].
It is also possible to simulate a [physical keyboard][hid] (HID).
Since scrcpy v1.20 on Linux, it is possible to simulate a [physical
keyboard][hid] (HID).
[text-input]: https://github.com/Genymobile/scrcpy/issues?q=is%3Aopen+is%3Aissue+label%3Aunicode
[accented-characters]: https://blog.rom1v.com/2018/03/introducing-scrcpy/#handle-accented-characters
[#37]: https://github.com/Genymobile/scrcpy/issues/37
[hid]: doc/hid-otg.md
[hid]: README.md#physical-keyboard-simulation-hid
## Client issues
### The quality is low
If the definition of your client window is smaller than that of your device
screen, then you might get poor quality, especially visible on text (see [#40]).
[#40]: https://github.com/Genymobile/scrcpy/issues/40
This problem should be fixed in scrcpy v1.22: **update to the latest version**.
On older versions, you must configure the [scaling behavior]:
> `scrcpy.exe` > Properties > Compatibility > Change high DPI settings >
> Override high DPI scaling behavior > Scaling performed by: _Application_.
[scaling behavior]: https://github.com/Genymobile/scrcpy/issues/40#issuecomment-424466723
Also, to improve downscaling quality, trilinear filtering is enabled
automatically if the renderer is OpenGL and if it supports mipmapping.
On Windows, you might want to force OpenGL to enable mipmapping:
```
scrcpy --render-driver=opengl
```
### Issue with Wayland
By default, SDL uses x11 on Linux. The [video driver] can be changed via the
@ -214,15 +224,102 @@ As a workaround, [disable "Block compositing"][kwin].
### Exception
If you get any exception related to `MediaCodec`:
There may be many reasons. One common cause is that the hardware encoder of your
device is not able to encode at the given definition:
> ```
> ERROR: Exception on thread Thread[main,5,main]
> android.media.MediaCodec$CodecException: Error 0xfffffc0e
> ...
> Exit due to uncaughtException in main thread:
> ERROR: Could not open video stream
> INFO: Initial texture: 1080x2336
> ```
or
> ```
> ERROR: Exception on thread Thread[main,5,main]
> java.lang.IllegalStateException
> at android.media.MediaCodec.native_dequeueOutputBuffer(Native Method)
> ```
Just try with a lower definition:
```
ERROR: Exception on thread Thread[main,5,main]
java.lang.IllegalStateException
at android.media.MediaCodec.native_dequeueOutputBuffer(Native Method)
scrcpy -m 1920
scrcpy -m 1024
scrcpy -m 800
```
then try with another [encoder](doc/video.md#codec).
Since scrcpy v1.22, scrcpy automatically tries again with a lower definition
before failing. This behavior can be disabled with `--no-downsize-on-error`.
You could also try another [encoder](README.md#encoder).
If you encounter this exception on Android 12, then just upgrade to scrcpy >=
1.18 (see [#2129]):
```
> ERROR: Exception on thread Thread[main,5,main]
java.lang.AssertionError: java.lang.reflect.InvocationTargetException
at com.genymobile.scrcpy.wrappers.SurfaceControl.setDisplaySurface(SurfaceControl.java:75)
...
Caused by: java.lang.reflect.InvocationTargetException
at java.lang.reflect.Method.invoke(Native Method)
at com.genymobile.scrcpy.wrappers.SurfaceControl.setDisplaySurface(SurfaceControl.java:73)
... 7 more
Caused by: java.lang.IllegalArgumentException: displayToken must not be null
at android.view.SurfaceControl$Transaction.setDisplaySurface(SurfaceControl.java:3067)
at android.view.SurfaceControl.setDisplaySurface(SurfaceControl.java:2147)
... 9 more
```
[#2129]: https://github.com/Genymobile/scrcpy/issues/2129
## Command line on Windows
Since v1.22, a "shortcut" has been added to directly open a terminal in the
scrcpy directory. Double-click on `open_a_terminal_here.bat`, then type your
command. For example:
```
scrcpy --record file.mkv
```
You could also open a terminal and go to the scrcpy folder manually:
1. Press <kbd>Windows</kbd>+<kbd>r</kbd>, this opens a dialog box.
2. Type `cmd` and press <kbd>Enter</kbd>, this opens a terminal.
3. Go to your _scrcpy_ directory, by typing (adapt the path):
```bat
cd C:\Users\user\Downloads\scrcpy-win64-xxx
```
and press <kbd>Enter</kbd>
4. Type your command. For example:
```bat
scrcpy --record file.mkv
```
If you plan to always use the same arguments, create a file `myscrcpy.bat`
(enable [show file extensions] to avoid confusion) in the `scrcpy` directory,
containing your command. For example:
```bat
scrcpy --prefer-text --turn-screen-off --stay-awake
```
Then just double-click on that file.
You could also edit (a copy of) `scrcpy-console.bat` or `scrcpy-noconsole.vbs`
to add some arguments.
[show file extensions]: https://www.howtogeek.com/205086/beginner-how-to-make-windows-show-file-extensions/
## Translations
@ -231,4 +328,4 @@ Translations of this FAQ in other languages are available in the [wiki].
[wiki]: https://github.com/Genymobile/scrcpy/wiki
Only this FAQ file is guaranteed to be up-to-date.
Only this README file is guaranteed to be up-to-date.

View File

@ -188,7 +188,7 @@
identification within third-party archives.
Copyright (C) 2018 Genymobile
Copyright (C) 2018-2023 Romain Vimont
Copyright (C) 2018-2022 Romain Vimont
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

1226
README.md

File diff suppressed because it is too large

View File

@ -3,84 +3,64 @@ _scrcpy() {
local opts="
--always-on-top
--audio-bit-rate=
--audio-buffer=
--audio-codec=
--audio-codec-options=
--audio-encoder=
--audio-source=
--audio-output-buffer=
-b --video-bit-rate=
--camera-ar=
--camera-id=
--camera-facing=
--camera-fps=
--camera-high-speed
--camera-size=
--crop=
-d --select-usb
--disable-screensaver
--display-id=
--display=
--display-buffer=
-e --select-tcpip
-f --fullscreen
--force-adb-forward
--forward-all-clicks
-h --help
--kill-adb-on-close
-f --fullscreen
-K --hid-keyboard
-h --help
--legacy-paste
--list-camera-sizes
--list-cameras
--list-displays
--list-encoders
--lock-video-orientation
--lock-video-orientation=
-m --max-size=
-M --hid-mouse
--max-fps=
-n --no-control
-N --no-playback
-M --hid-mouse
-m --max-size=
--no-audio
--no-audio-playback
--no-cleanup
--no-clipboard-autosync
--no-downsize-on-error
-n --no-control
-N --no-display
--no-key-repeat
--no-mipmaps
--no-power-on
--no-video
--no-video-playback
--otg
-p --port=
--pause-on-exit
--pause-on-exit=
--power-off-on-close
--prefer-text
--print-fps
--push-target=
-r --record=
--raw-key-events
-r --record=
--record-format=
--render-driver=
--require-audio
--rotation=
-s --serial=
-S --turn-screen-off
--shortcut-mod=
-S --turn-screen-off
-t --show-touches
--tcpip
--tcpip=
--time-limit=
--tunnel-host=
--tunnel-port=
--v4l2-buffer=
--v4l2-sink=
-v --version
-V --verbosity=
-v --version
--video-codec=
--video-codec-options=
--video-encoder=
--video-source=
-w --stay-awake
--window-borderless
--window-title=
@ -97,29 +77,13 @@ _scrcpy() {
return
;;
--audio-codec)
COMPREPLY=($(compgen -W 'opus aac raw' -- "$cur"))
return
;;
--video-source)
COMPREPLY=($(compgen -W 'display camera' -- "$cur"))
return
;;
--audio-source)
COMPREPLY=($(compgen -W 'output mic' -- "$cur"))
return
;;
--camera-facing)
COMPREPLY=($(compgen -W 'front back external' -- "$cur"))
COMPREPLY=($(compgen -W 'opus aac' -- "$cur"))
return
;;
--lock-video-orientation)
COMPREPLY=($(compgen -W 'unlocked initial 0 1 2 3' -- "$cur"))
return
;;
--pause-on-exit)
COMPREPLY=($(compgen -W 'true false if-error' -- "$cur"))
return
;;
-r|--record)
COMPREPLY=($(compgen -f -- "$cur"))
return
@ -150,30 +114,20 @@ _scrcpy() {
COMPREPLY=($(compgen -W "$("${ADB:-adb}" devices | awk '$2 == "device" {print $1}')" -- ${cur}))
return
;;
--audio-bit-rate \
|--audio-buffer \
|-b|--video-bit-rate \
|--audio-codec-options \
|--audio-encoder \
|--audio-output-buffer \
|--camera-ar \
|--camera-id \
|--camera-fps \
|--camera-size \
-b|--video-bit-rate \
|--codec-options \
|--crop \
|--display-id \
|--display \
|--display-buffer \
|--encoder \
|--max-fps \
|-m|--max-size \
|-p|--port \
|--push-target \
|--rotation \
|--tunnel-host \
|--tunnel-port \
|--v4l2-buffer \
|--v4l2-sink \
|--video-codec-options \
|--video-encoder \
|--tcpip \
|--window-*)
# Option accepting an argument, but nothing to auto-complete

View File

@ -1,2 +1,4 @@
@echo off
scrcpy.exe --pause-on-exit=if-error %*
scrcpy.exe %*
:: if the exit code is >= 1, then pause
if errorlevel 1 pause

View File

@ -5,7 +5,7 @@ Comment=Display and control your Android device
# For some users, the PATH or ADB environment variables are set from the shell
# startup file, like .bashrc or .zshrc… Run an interactive shell to get
# environment correctly initialized.
Exec=/bin/sh -c "\"\\$SHELL\" -i -c scrcpy --pause-on-exit=if-error"
Exec=/bin/bash --norc --noprofile -i -c '"$SHELL" -i -c scrcpy || read -p "Press any key to quit..."'
Icon=scrcpy
Terminal=true
Type=Application

View File

@ -5,7 +5,7 @@ Comment=Display and control your Android device
# For some users, the PATH or ADB environment variables are set from the shell
# startup file, like .bashrc or .zshrc… Run an interactive shell to get
# environment correctly initialized.
Exec=/bin/sh -c "\"\\$SHELL\" -i -c scrcpy"
Exec=/bin/sh -c '"$SHELL" -i -c scrcpy'
Icon=scrcpy
Terminal=false
Type=Application

View File

@ -10,81 +10,62 @@ local arguments
arguments=(
'--always-on-top[Make scrcpy window always on top \(above other windows\)]'
'--audio-bit-rate=[Encode the audio at the given bit-rate]'
'--audio-buffer=[Configure the audio buffering delay (in milliseconds)]'
'--audio-codec=[Select the audio codec]:codec:(opus aac raw)'
'--audio-codec=[Select the audio codec]:codec:(opus aac)'
'--audio-codec-options=[Set a list of comma-separated key\:type=value options for the device audio encoder]'
'--audio-encoder=[Use a specific MediaCodec audio encoder]'
'--audio-source=[Select the audio source]:source:(output mic)'
'--audio-output-buffer=[Configure the size of the SDL audio output buffer (in milliseconds)]'
{-b,--video-bit-rate=}'[Encode the video at the given bit-rate]'
'--camera-ar=[Select the camera size by its aspect ratio]'
'--camera-high-speed=[Enable high-speed camera capture mode]'
'--camera-id=[Specify the camera id to mirror]'
'--camera-facing=[Select the device camera by its facing direction]:facing:(front back external)'
'--camera-fps=[Specify the camera capture frame rate]'
'--camera-size=[Specify an explicit camera capture size]'
'--crop=[\[width\:height\:x\:y\] Crop the device screen on the server]'
{-d,--select-usb}'[Use USB device]'
'--disable-screensaver[Disable screensaver while scrcpy is running]'
'--display-id=[Specify the display id to mirror]'
'--display=[Specify the display id to mirror]'
'--display-buffer=[Add a buffering delay \(in milliseconds\) before displaying]'
{-e,--select-tcpip}'[Use TCP/IP device]'
{-f,--fullscreen}'[Start in fullscreen]'
'--force-adb-forward[Do not attempt to use \"adb reverse\" to connect to the device]'
'--forward-all-clicks[Forward clicks to device]'
{-h,--help}'[Print the help]'
'--kill-adb-on-close[Kill adb when scrcpy terminates]'
{-f,--fullscreen}'[Start in fullscreen]'
{-K,--hid-keyboard}'[Simulate a physical keyboard by using HID over AOAv2]'
{-h,--help}'[Print the help]'
'--legacy-paste[Inject computer clipboard text as a sequence of key events on Ctrl+v]'
'--list-camera-sizes[List the valid camera capture sizes]'
'--list-cameras[List cameras available on the device]'
'--list-displays[List displays available on the device]'
'--list-encoders[List video and audio encoders available on the device]'
'--lock-video-orientation=[Lock video orientation]:orientation:(unlocked initial 0 1 2 3)'
{-m,--max-size=}'[Limit both the width and height of the video to value]'
{-M,--hid-mouse}'[Simulate a physical mouse by using HID over AOAv2]'
'--max-fps=[Limit the frame rate of screen capture]'
{-n,--no-control}'[Disable device control \(mirror the device in read only\)]'
{-N,--no-playback}'[Disable video and audio playback]'
{-M,--hid-mouse}'[Simulate a physical mouse by using HID over AOAv2]'
{-m,--max-size=}'[Limit both the width and height of the video to value]'
'--no-audio[Disable audio forwarding]'
'--no-audio-playback[Disable audio playback]'
'--no-cleanup[Disable device cleanup actions on exit]'
'--no-clipboard-autosync[Disable automatic clipboard synchronization]'
'--no-downsize-on-error[Disable lowering definition on MediaCodec error]'
{-n,--no-control}'[Disable device control \(mirror the device in read only\)]'
{-N,--no-display}'[Do not display device \(during screen recording or when V4L2 sink is enabled\)]'
'--no-key-repeat[Do not forward repeated key events when a key is held down]'
'--no-mipmaps[Disable the generation of mipmaps]'
'--no-power-on[Do not power on the device on start]'
'--no-video[Disable video forwarding]'
'--no-video-playback[Disable video playback]'
'--otg[Run in OTG mode \(simulating physical keyboard and mouse\)]'
{-p,--port=}'[\[port\[\:port\]\] Set the TCP port \(range\) used by the client to listen]'
'--pause-on-exit=[Make scrcpy pause before exiting]:mode:(true false if-error)'
'--power-off-on-close[Turn the device screen off when closing scrcpy]'
'--prefer-text[Inject alpha characters and space as text events instead of key events]'
'--print-fps[Start FPS counter, to print frame logs to the console]'
'--push-target=[Set the target directory for pushing files to the device by drag and drop]'
{-r,--record=}'[Record screen to file]:record file:_files'
'--raw-key-events[Inject key events for all input keys, and ignore text events]'
{-r,--record=}'[Record screen to file]:record file:_files'
'--record-format=[Force recording format]:format:(mp4 mkv)'
'--render-driver=[Request SDL to use the given render driver]:driver name:(direct3d opengl opengles2 opengles metal software)'
'--require-audio=[Make scrcpy fail if audio is enabled but does not work]'
'--rotation=[Set the initial display rotation]:rotation values:(0 1 2 3)'
{-s,--serial=}'[The device serial number \(mandatory for multiple devices only\)]:serial:($("${ADB-adb}" devices | awk '\''$2 == "device" {print $1}'\''))'
{-S,--turn-screen-off}'[Turn the device screen off immediately]'
'--shortcut-mod=[\[key1,key2+key3,...\] Specify the modifiers to use for scrcpy shortcuts]:shortcut mod:(lctrl rctrl lalt ralt lsuper rsuper)'
{-S,--turn-screen-off}'[Turn the device screen off immediately]'
{-t,--show-touches}'[Show physical touches]'
'--tcpip[\(optional \[ip\:port\]\) Configure and connect the device over TCP/IP]'
'--time-limit=[Set the maximum mirroring time, in seconds]'
'--tunnel-host=[Set the IP address of the adb tunnel to reach the scrcpy server]'
'--tunnel-port=[Set the TCP port of the adb tunnel to reach the scrcpy server]'
'--v4l2-buffer=[Add a buffering delay \(in milliseconds\) before pushing frames]'
'--v4l2-sink=[\[\/dev\/videoN\] Output to v4l2loopback device]'
{-v,--version}'[Print the version of scrcpy]'
{-V,--verbosity=}'[Set the log level]:verbosity:(verbose debug info warn error)'
{-v,--version}'[Print the version of scrcpy]'
'--video-codec=[Select the video codec]:codec:(h264 h265 av1)'
'--video-codec-options=[Set a list of comma-separated key\:type=value options for the device video encoder]'
'--video-encoder=[Use a specific MediaCodec video encoder]'
'--video-source=[Select the video source]:source:(display camera)'
{-w,--stay-awake}'[Keep the device on while scrcpy is running, when the device is plugged in]'
'--window-borderless[Disable window decorations \(display borderless window\)]'
'--window-title=[Set a custom window title]'

View File

@ -11,10 +11,8 @@ src = [
'src/control_msg.c',
'src/controller.c',
'src/decoder.c',
'src/delay_buffer.c',
'src/demuxer.c',
'src/device_msg.c',
'src/display.c',
'src/icon.c',
'src/file_pusher.c',
'src/fps_counter.c',
@ -31,8 +29,7 @@ src = [
'src/screen.c',
'src/server.c',
'src/version.c',
'src/trait/frame_source.c',
'src/trait/packet_source.c',
'src/video_buffer.c',
'src/util/acksync.c',
'src/util/average.c',
'src/util/bytebuf.c',
@ -40,7 +37,6 @@ src = [
'src/util/intmap.c',
'src/util/intr.c',
'src/util/log.c',
'src/util/memory.c',
'src/util/net.c',
'src/util/net_intr.c',
'src/util/process.c',
@ -51,7 +47,6 @@ src = [
'src/util/term.c',
'src/util/thread.c',
'src/util/tick.c',
'src/util/timeout.c',
]
conf = configuration_data()
@ -178,7 +173,6 @@ check_functions = [
'vasprintf',
'nrand48',
'jrand48',
'reallocarray',
]
foreach f : check_functions
@ -269,6 +263,9 @@ if get_option('buildtype') == 'debug'
'tests/test_bytebuf.c',
'src/util/bytebuf.c',
]],
['test_cbuf', [
'tests/test_cbuf.c',
]],
['test_cli', [
'tests/test_cli.c',
'src/cli.c',
@ -279,6 +276,10 @@ if get_option('buildtype') == 'debug'
'src/util/strbuf.c',
'src/util/term.c',
]],
['test_clock', [
'tests/test_clock.c',
'src/clock.c',
]],
['test_control_msg_serialize', [
'tests/test_control_msg_serialize.c',
'src/control_msg.c',
@ -289,6 +290,9 @@ if get_option('buildtype') == 'debug'
'tests/test_device_msg_deserialize.c',
'src/device_msg.c',
]],
['test_queue', [
'tests/test_queue.c',
]],
['test_strbuf', [
'tests/test_strbuf.c',
'src/util/strbuf.c',
@ -298,18 +302,13 @@ if get_option('buildtype') == 'debug'
'src/util/str.c',
'src/util/strbuf.c',
]],
['test_vecdeque', [
'tests/test_vecdeque.c',
'src/util/memory.c',
]],
['test_vector', [
'tests/test_vector.c',
]],
]
foreach t : tests
sources = t[1] + ['src/compat.c']
exe = executable(t[0], sources,
exe = executable(t[0], t[1],
include_directories: src_dir,
dependencies: dependencies,
c_args: ['-DSDL_MAIN_HANDLED', '-DSC_TEST'])

View File

@ -6,10 +6,10 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
DEP_DIR=platform-tools-34.0.5
DEP_DIR=platform-tools-33.0.3
FILENAME=platform-tools_r34.0.5-windows.zip
SHA256SUM=3f8320152704377de150418a3c4c9d07d16d80a6c0d0d8f7289c22c499e33571
FILENAME=platform-tools_r33.0.3-windows.zip
SHA256SUM=1e59afd40a74c5c0eab0a9fad3f0faf8a674267106e0b19921be9f67081808c2
if [[ -d "$DEP_DIR" ]]
then

View File

@ -6,11 +6,11 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
VERSION=6.0-scrcpy-4
VERSION=6.0-scrcpy
DEP_DIR="ffmpeg-$VERSION"
FILENAME="$DEP_DIR".7z
SHA256SUM=39274b321491ce83e76cab5d24e7cbe3f402d3ccf382f739b13be5651c146b60
SHA256SUM=f3956295b4325a84aada05447ba3f314fbed96697811666d495de4de40d59f98
if [[ -d "$DEP_DIR" ]]
then

View File

@ -6,10 +6,10 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
DEP_DIR=SDL2-2.28.4
DEP_DIR=SDL2-2.26.1
FILENAME=SDL2-devel-2.28.4-mingw.tar.gz
SHA256SUM=779d091072cf97291f80030f5232d97aa3d48ab0f2c14fe0b9d9a33c593cdc35
FILENAME=SDL2-devel-2.26.1-mingw.tar.gz
SHA256SUM=aa43e1531a89551f9f9e14b27953a81d4ac946a9e574b5813cd0f2b36e83cc1c
if [[ -d "$DEP_DIR" ]]
then

View File

@ -13,7 +13,7 @@ BEGIN
VALUE "LegalCopyright", "Romain Vimont, Genymobile"
VALUE "OriginalFilename", "scrcpy.exe"
VALUE "ProductName", "scrcpy"
VALUE "ProductVersion", "v2.2"
VALUE "ProductVersion", "1.25"
END
END
BLOCK "VarFileInfo"

View File

@ -21,94 +21,28 @@ Make scrcpy window always on top (above other windows).
.TP
.BI "\-\-audio\-bit\-rate " value
Encode the audio at the given bit rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
Encode the audio at the given bit\-rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
Default is 128K (128000).
.TP
.BI "\-\-audio\-buffer ms
Configure the audio buffering delay (in milliseconds).
Lower values decrease the latency, but increase the likelihood of buffer underrun (causing audio glitches).
Default is 50.
Default is 196K (196000).
.TP
.BI "\-\-audio\-codec " name
Select an audio codec (opus, aac or raw).
Select an audio codec (opus or aac).
Default is opus.
.TP
.BI "\-\-audio\-codec\-options " key\fR[:\fItype\fR]=\fIvalue\fR[,...]
Set a list of comma-separated key:type=value options for the device audio encoder.
The possible values for 'type' are 'int' (default), 'long', 'float' and 'string'.
The list of possible codec options is available in the Android documentation:
<https://d.android.com/reference/android/media/MediaFormat>
.TP
.BI "\-\-audio\-encoder " name
Use a specific MediaCodec audio encoder (depending on the codec provided by \fB\-\-audio\-codec\fR).
The available encoders can be listed by \fB\-\-list\-encoders\fR.
.TP
.BI "\-\-audio\-source " source
Select the audio source (output or mic).
Default is output.
.TP
.BI "\-\-audio\-output\-buffer ms
Configure the size of the SDL audio output buffer (in milliseconds).
If you get "robotic" audio playback, you should test with a higher value (10). Do not change this setting otherwise.
Default is 5.
The available encoders can be listed by \-\-list\-encoders.
.TP
.BI "\-b, \-\-video\-bit\-rate " value
Encode the video at the given bit rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
Encode the video at the given bit\-rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
Default is 8M (8000000).
.TP
.BI "\-\-camera\-ar " ar
Select the camera size by its aspect ratio (+/- 10%).
Possible values are "sensor" (use the camera sensor aspect ratio), "\fInum\fR:\fIden\fR" (e.g. "4:3") and "\fIvalue\fR" (e.g. "1.6").
.TP
.B \-\-camera\-high\-speed
Enable high-speed camera capture mode.
This mode is restricted to specific resolutions and frame rates, listed by \fB\-\-list\-camera\-sizes\fR.
.TP
.BI "\-\-camera\-id " id
Specify the device camera id to mirror.
The available camera ids can be listed by \fB\-\-list\-cameras\fR.
.TP
.BI "\-\-camera\-facing " facing
Select the device camera by its facing direction.
Possible values are "front", "back" and "external".
.TP
.BI "\-\-camera\-fps " fps
Specify the camera capture frame rate.
If not specified, Android's default frame rate (30 fps) is used.
.TP
.BI "\-\-camera\-size " width\fRx\fIheight
Specify an explicit camera capture size.
.TP
.BI "\-\-crop " width\fR:\fIheight\fR:\fIx\fR:\fIy
Crop the device screen on the server.
@ -128,10 +62,10 @@ Also see \fB\-e\fR (\fB\-\-select\-tcpip\fR).
Disable screensaver while scrcpy is running.
.TP
.BI "\-\-display\-id " id
.BI "\-\-display " id
Specify the device display id to mirror.
The available display ids can be listed by \fB\-\-list\-displays\fR.
The available display ids can be listed by \-\-list\-displays.
Default is 0.
@ -147,10 +81,6 @@ Use TCP/IP device (if there is exactly one, like adb -e).
Also see \fB\-d\fR (\fB\-\-select\-usb\fR).
.TP
.B \-f, \-\-fullscreen
Start in fullscreen.
.TP
.B \-\-force\-adb\-forward
Do not attempt to use "adb reverse" to connect to the device.
@ -160,12 +90,12 @@ Do not attempt to use "adb reverse" to connect to the device.
By default, right-click triggers BACK (or POWER on) and middle-click triggers HOME. This option disables these shortcuts and forwards the clicks to the device instead.
.TP
.B \-h, \-\-help
Print this help.
.B \-f, \-\-fullscreen
Start in fullscreen.
.TP
.B \-\-kill\-adb\-on\-close
Kill adb when scrcpy terminates.
.B \-h, \-\-help
Print this help.
.TP
.B \-K, \-\-hid\-keyboard
@ -189,14 +119,6 @@ Inject computer clipboard text as a sequence of key events on Ctrl+v (like MOD+S
This is a workaround for some devices not behaving as expected when setting the device clipboard programmatically.
.TP
.B \-\-list\-camera\-sizes
List the valid camera capture sizes.
.TP
.B \-\-list\-cameras
List cameras available on the device.
.TP
.B \-\-list\-encoders
List video and audio encoders available on the device.
@ -213,6 +135,10 @@ Default is "unlocked".
Passing the option without argument is equivalent to passing "initial".
.TP
.BI "\-\-max\-fps " value
Limit the framerate of screen capture (officially supported since Android 10, but may work on earlier versions).
.TP
.BI "\-m, \-\-max\-size " value
Limit both the width and height of the video to \fIvalue\fR. The other dimension is computed so that the device aspect\-ratio is preserved.
@ -231,26 +157,6 @@ It may only work over USB.
Also see \fB\-\-hid\-keyboard\fR.
.TP
.BI "\-\-max\-fps " value
Limit the framerate of screen capture (officially supported since Android 10, but may work on earlier versions).
.TP
.B \-n, \-\-no\-control
Disable device control (mirror the device in read\-only).
.TP
.B \-N, \-\-no\-playback
Disable video and audio playback on the computer (equivalent to \fB\-\-no\-video\-playback \-\-no\-audio\-playback\fR).
.TP
.B \-\-no\-audio
Disable audio forwarding.
.TP
.B \-\-no\-audio\-playback
Disable audio playback on the computer.
.TP
.B \-\-no\-cleanup
By default, scrcpy removes the server binary from the device and restores the device state (show touches, stay awake and power mode) on exit.
@ -269,6 +175,14 @@ By default, on MediaCodec error, scrcpy automatically tries again with a lower d
This option disables this behavior.
.TP
.B \-n, \-\-no\-control
Disable device control (mirror the device in read\-only).
.TP
.B \-N, \-\-no\-display
Do not display device (only when screen recording is enabled).
.TP
.B \-\-no\-key\-repeat
Do not forward repeated key events when a key is held down.
@ -281,14 +195,6 @@ If the renderer is OpenGL 3.0+ or OpenGL ES 2.0+, then mipmaps are automatically
.B \-\-no\-power\-on
Do not power on the device on start.
.TP
.B \-\-no\-video
Disable video forwarding.
.TP
.B \-\-no\-video\-playback
Disable video playback on the computer.
.TP
.B \-\-otg
Run in OTG mode: simulate physical keyboard and mouse, as if the computer keyboard and mouse were plugged directly to the device via an OTG cable.
@ -309,16 +215,6 @@ Set the TCP port (range) used by the client to listen.
Default is 27183:27199.
.TP
\fB\-\-pause\-on\-exit\fR[=\fImode\fR]
Configure pause on exit. Possible values are "true" (always pause on exit), "false" (never pause on exit) and "if-error" (pause only if an error occurred).
This is useful to prevent the terminal window from automatically closing, so that error messages can be read.
Default is "false".
Passing the option without argument is equivalent to passing "true".
.TP
.B \-\-power\-off\-on\-close
Turn the device screen off when closing scrcpy.
@ -340,6 +236,10 @@ Set the target directory for pushing files to the device by drag & drop. It is p
Default is "/sdcard/Download/".
.TP
.B \-\-raw\-key\-events
Inject key events for all input keys, and ignore text events.
.TP
.BI "\-r, \-\-record " file
Record screen to
@ -349,10 +249,6 @@ The format is determined by the
.B \-\-record\-format
option if set, or by the file extension (.mp4 or .mkv).
.TP
.B \-\-raw\-key\-events
Inject key events for all input keys, and ignore text events.
.TP
.BI "\-\-record\-format " format
Force recording format (either mp4 or mkv).
@ -363,11 +259,8 @@ Request SDL to use the given render driver (this is just a hint).
Supported names are currently "direct3d", "opengl", "opengles2", "opengles", "metal" and "software".
<https://wiki.libsdl.org/SDL_HINT_RENDER_DRIVER>
.TP
.B \-\-require\-audio
By default, scrcpy mirrors only the video if audio capture fails on the device. This option makes scrcpy fail if audio is enabled but does not work.
.UR https://wiki.libsdl.org/SDL_HINT_RENDER_DRIVER
.UE
.TP
.BI "\-\-rotation " value
@ -377,10 +270,6 @@ Set the initial display rotation. Possibles values are 0, 1, 2 and 3. Each incre
.BI "\-s, \-\-serial " number
The device serial number. Mandatory only if several devices are connected to adb.
.TP
.B \-S, \-\-turn\-screen\-off
Turn the device screen off immediately.
.TP
.BI "\-\-shortcut\-mod " key\fR[+...]][,...]
Specify the modifiers to use for scrcpy shortcuts. Possible keys are "lctrl", "rctrl", "lalt", "ralt", "lsuper" and "rsuper".
@ -391,12 +280,6 @@ For example, to use either LCtrl+LAlt or LSuper for scrcpy shortcuts, pass "lctr
Default is "lalt,lsuper" (left-Alt or left-Super).
.TP
.B \-t, \-\-show\-touches
Enable "show touches" on start, restore the initial value on exit.
It only shows physical touches (not clicks from scrcpy).
.TP
.BI "\-\-tcpip\fR[=\fIip\fR[:\fIport\fR]]
Configure and reconnect the device over TCP/IP.
@ -406,31 +289,27 @@ If a destination address is provided, then scrcpy connects to this address befor
If no destination address is provided, then scrcpy attempts to find the IP address and adb port of the current device (typically connected over USB), enables TCP/IP mode if necessary, then connects to this address before starting.
.TP
.BI "\-\-time\-limit " seconds
Set the maximum mirroring time, in seconds.
.B \-S, \-\-turn\-screen\-off
Turn the device screen off immediately.
.TP
.B \-t, \-\-show\-touches
Enable "show touches" on start, restore the initial value on exit.
It only shows physical touches (not clicks from scrcpy).
.TP
.BI "\-\-tunnel\-host " ip
Set the IP address of the adb tunnel to reach the scrcpy server. This option automatically enables \fB\-\-force\-adb\-forward\fR.
Set the IP address of the adb tunnel to reach the scrcpy server. This option automatically enables --force-adb-forward.
Default is localhost.
.TP
.BI "\-\-tunnel\-port " port
Set the TCP port of the adb tunnel to reach the scrcpy server. This option automatically enables \fB\-\-force\-adb\-forward\fR.
Set the TCP port of the adb tunnel to reach the scrcpy server. This option automatically enables --force-adb-forward.
Default is 0 (not forced): the local port used for establishing the tunnel will be used.
.TP
.B \-v, \-\-version
Print the version of scrcpy.
.TP
.BI "\-V, \-\-verbosity " value
Set the log level ("verbose", "debug", "info", "warn" or "error").
Default is "info" for release builds, "debug" for debug builds.
.TP
.BI "\-\-v4l2-sink " /dev/videoN
Output to v4l2loopback device.
@ -445,6 +324,16 @@ This option is similar to \fB\-\-display\-buffer\fR, but specific to V4L2 sink.
Default is 0 (no buffering).
.TP
.BI "\-V, \-\-verbosity " value
Set the log level ("verbose", "debug", "info", "warn" or "error").
Default is "info" for release builds, "debug" for debug builds.
.TP
.B \-v, \-\-version
Print the version of scrcpy.
.TP
.BI "\-\-video\-codec " name
Select a video codec (h264, h265 or av1).
@ -457,23 +346,15 @@ Set a list of comma-separated key:type=value options for the device video encode
The possible values for 'type' are 'int' (default), 'long', 'float' and 'string'.
The list of possible codec options is available in the Android documentation:
<https://d.android.com/reference/android/media/MediaFormat>
The list of possible codec options is available in the Android documentation
.UR https://d.android.com/reference/android/media/MediaFormat
.UE .
.TP
.BI "\-\-video\-encoder " name
Use a specific MediaCodec video encoder (depending on the codec provided by \fB\-\-video\-codec\fR).
The available encoders can be listed by \fB\-\-list\-encoders\fR.
.TP
.BI "\-\-video\-source " source
Select the video source (display or camera).
Camera mirroring requires Android 12+.
Default is display.
The available encoders can be listed by \-\-list\-encoders.
.TP
.B \-w, \-\-stay-awake
@ -635,7 +516,7 @@ Path to adb.
.TP
.B ANDROID_SERIAL
Device serial to use if no selector (\fB-s\fR, \fB-d\fR, \fB-e\fR or \fB\-\-tcpip=\fIaddr\fR) is specified.
Device serial to use if no selector (-s, -d, -e or --tcpip=<addr>) is specified.
.TP
.B SCRCPY_ICON_PATH
@ -658,14 +539,23 @@ for the Debian Project (and may be used by others).
.SH "REPORTING BUGS"
Report bugs to <https://github.com/Genymobile/scrcpy/issues>.
Report bugs to
.UR https://github.com/Genymobile/scrcpy/issues
.UE .
.SH COPYRIGHT
Copyright \(co 2018 Genymobile <https://www.genymobile.com>
Copyright \(co 2018 Genymobile
.UR https://www.genymobile.com
Genymobile
.UE
Copyright \(co 2018\-2023 Romain Vimont <rom@rom1v.com>
Copyright \(co 2018\-2022
.MT rom@rom1v.com
Romain Vimont
.ME
Licensed under the Apache License, Version 2.0.
.SH WWW
<https://github.com/Genymobile/scrcpy>
.UR https://github.com/Genymobile/scrcpy
.UE

View File

@ -70,7 +70,7 @@ argv_to_string(const char *const *argv, char *buf, size_t bufsize) {
}
static void
show_adb_installation_msg(void) {
show_adb_installation_msg() {
#ifndef __WINDOWS__
static const struct {
const char *binary;
@ -218,16 +218,8 @@ sc_adb_forward(struct sc_intr *intr, const char *serial, uint16_t local_port,
const char *device_socket_name, unsigned flags) {
char local[4 + 5 + 1]; // tcp:PORT
char remote[108 + 14 + 1]; // localabstract:NAME
int r = snprintf(local, sizeof(local), "tcp:%" PRIu16, local_port);
assert(r >= 0 && (size_t) r < sizeof(local));
r = snprintf(remote, sizeof(remote), "localabstract:%s",
device_socket_name);
if (r < 0 || (size_t) r >= sizeof(remote)) {
LOGE("Could not write socket name");
return false;
}
sprintf(local, "tcp:%" PRIu16, local_port);
snprintf(remote, sizeof(remote), "localabstract:%s", device_socket_name);
assert(serial);
const char *const argv[] =
@ -241,9 +233,7 @@ bool
sc_adb_forward_remove(struct sc_intr *intr, const char *serial,
uint16_t local_port, unsigned flags) {
char local[4 + 5 + 1]; // tcp:PORT
int r = snprintf(local, sizeof(local), "tcp:%" PRIu16, local_port);
assert(r >= 0 && (size_t) r < sizeof(local));
(void) r;
sprintf(local, "tcp:%" PRIu16, local_port);
assert(serial);
const char *const argv[] =
@ -259,16 +249,8 @@ sc_adb_reverse(struct sc_intr *intr, const char *serial,
unsigned flags) {
char local[4 + 5 + 1]; // tcp:PORT
char remote[108 + 14 + 1]; // localabstract:NAME
int r = snprintf(local, sizeof(local), "tcp:%" PRIu16, local_port);
assert(r >= 0 && (size_t) r < sizeof(local));
r = snprintf(remote, sizeof(remote), "localabstract:%s",
device_socket_name);
if (r < 0 || (size_t) r >= sizeof(remote)) {
LOGE("Could not write socket name");
return false;
}
sprintf(local, "tcp:%" PRIu16, local_port);
snprintf(remote, sizeof(remote), "localabstract:%s", device_socket_name);
assert(serial);
const char *const argv[] =
SC_ADB_COMMAND("-s", serial, "reverse", remote, local);
@ -281,12 +263,7 @@ bool
sc_adb_reverse_remove(struct sc_intr *intr, const char *serial,
const char *device_socket_name, unsigned flags) {
char remote[108 + 14 + 1]; // localabstract:NAME
int r = snprintf(remote, sizeof(remote), "localabstract:%s",
device_socket_name);
if (r < 0 || (size_t) r >= sizeof(remote)) {
LOGE("Device socket name too long");
return false;
}
snprintf(remote, sizeof(remote), "localabstract:%s", device_socket_name);
assert(serial);
const char *const argv[] =
@ -356,9 +333,7 @@ bool
sc_adb_tcpip(struct sc_intr *intr, const char *serial, uint16_t port,
unsigned flags) {
char port_string[5 + 1];
int r = snprintf(port_string, sizeof(port_string), "%" PRIu16, port);
assert(r >= 0 && (size_t) r < sizeof(port_string));
(void) r;
sprintf(port_string, "%" PRIu16, port);
assert(serial);
const char *const argv[] =
@ -653,8 +628,8 @@ sc_adb_select_device(struct sc_intr *intr,
return false;
}
LOGI("ADB device found:");
sc_adb_devices_log(SC_LOG_LEVEL_INFO, vec.data, vec.size);
LOGD("ADB device found:");
sc_adb_devices_log(SC_LOG_LEVEL_DEBUG, vec.data, vec.size);
// Move device into out_device (do not destroy device)
sc_adb_device_move(out_device, device);

View File

@ -7,7 +7,7 @@
#include "util/log.h"
#include "util/str.h"
static bool
bool
sc_adb_parse_device(char *line, struct sc_adb_device *device) {
// One device line looks like:
// "0123456789abcdef device usb:2-1 product:MyProduct model:MyModel "
@ -204,7 +204,6 @@ sc_adb_parse_device_ip(char *str) {
while (str[idx_line] != '\0') {
char *line = &str[idx_line];
size_t len = strcspn(line, "\n");
bool is_last_line = line[len] == '\0';
// The same, but without any trailing '\r'
size_t line_len = sc_str_remove_trailing_cr(line, len);
@ -216,12 +215,12 @@ sc_adb_parse_device_ip(char *str) {
return ip;
}
if (is_last_line) {
break;
}
idx_line += len;
if (str[idx_line] != '\0') {
// The next line starts after the '\n'
idx_line += len + 1;
++idx_line;
}
}
return NULL;

View File

@ -1,127 +1,93 @@
#include "audio_player.h"
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
#include "util/log.h"
#define SC_AUDIO_PLAYER_NDEBUG // comment to debug
/**
* Real-time audio player with configurable latency
*
* As input, the player regularly receives AVFrames of decoded audio samples.
* As output, an SDL callback regularly requests audio samples to be played.
* In the middle, an audio buffer stores the samples produced but not consumed
* yet.
*
* The goal of the player is to feed the audio output with a latency as low as
* possible while avoiding buffer underrun (i.e. not being able to provide
* samples when requested).
*
* The player aims to feed the audio output with as little latency as possible
* while avoiding buffer underrun. To achieve this, it attempts to maintain the
* average buffering (the number of samples present in the buffer) around a
* target value. If this target buffering is too low, then buffer underrun will
* occur frequently. If it is too high, then latency will become unacceptable.
* This target value is configured using the scrcpy option --audio-buffer.
*
* The player cannot adjust the sample input rate (it receives samples produced
* in real-time) or the sample output rate (it must provide samples as
* requested by the audio output callback). Therefore, it may only apply
* compensation by resampling (converting _m_ input samples to _n_ output
* samples).
*
* The compensation itself is applied by libswresample (FFmpeg). It is
* configured using swr_set_compensation(). An important work for the player
* is to estimate the compensation value regularly and apply it.
*
* The estimated buffering level is the result of averaging the "natural"
* buffering (samples are produced and consumed by blocks, so it must be
* smoothed), and making instant adjustments resulting from its own actions
* (explicit compensation and silence insertion on underflow), which are not
* smoothed.
*
* Buffer underflow events can occur when packets arrive too late. In that case,
* the player inserts silence. Once the packets finally arrive (late), one
* strategy could be to drop the samples that were replaced by silence, in
* order to keep a minimal latency. However, dropping samples in case of buffer
* underflow is inadvisable, as it would temporarily increase the underflow
* even more and cause very noticeable audio glitches.
*
* Therefore, the player doesn't drop any sample on underflow. The compensation
* mechanism will absorb the delay introduced by the inserted silence.
*/
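/*
* A worked example of the compensation described above (illustrative only,
* assuming the default --audio-buffer of 50 ms and a 48000 Hz sample rate):
* the target buffering is 50 * 48000 / 1000 = 2400 samples. If the smoothed
* average buffering is measured at 2640 samples, then diff = 2400 - 2640 =
* -240. The compensation distance is 4 * 48000 = 192000 samples and the rate
* limit is 192000 / 50 = 3840 samples, so the clamp does not apply:
* swr_set_compensation() is called with diff = -240 over 192000 samples,
* i.e. roughly 240 samples are removed over the next 4 seconds (and the
* value is recomputed after 1 second).
*/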
/** Downcast frame_sink to sc_audio_player */
#define DOWNCAST(SINK) container_of(SINK, struct sc_audio_player, frame_sink)
#define SC_AV_SAMPLE_FMT AV_SAMPLE_FMT_FLT
#define SC_SDL_SAMPLE_FMT AUDIO_F32
#define TO_BYTES(SAMPLES) sc_audiobuf_to_bytes(&ap->buf, (SAMPLES))
#define TO_SAMPLES(BYTES) sc_audiobuf_to_samples(&ap->buf, (BYTES))
#define SC_AUDIO_OUTPUT_BUFFER_SAMPLES 480 // 10ms at 48000Hz
static void SDLCALL
// The target number of buffered samples between the producer and the consumer.
// This value is directly used for compensation.
#define SC_TARGET_BUFFERED_SAMPLES (3 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES)
// If the consumer is too late, skip samples to keep at most this value
#define SC_BUFFERED_SAMPLES_THRESHOLD 2400 // 50ms at 48000Hz
// Use a ring-buffer of 1 second (at 48000Hz) between the producer and the
// consumer. It is too big, but it guarantees that the producer and the consumer
// will be able to access it in parallel without locking.
#define SC_BYTEBUF_SIZE_IN_SAMPLES 48000
void
sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
struct sc_audio_player *ap = userdata;
// This callback is called with the lock used by SDL_AudioDeviceLock(), so
// the audiobuf is protected
// the bytebuf is protected
assert(len_int > 0);
size_t len = len_int;
uint32_t count = TO_SAMPLES(len);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] SDL callback requests %" PRIu32 " samples", count);
LOGD("[Audio] SDL callback requests %" SC_PRIsizet " samples",
len / (ap->nb_channels * ap->out_bytes_per_sample));
#endif
uint32_t buffered_samples = sc_audiobuf_can_read(&ap->buf);
if (!ap->played) {
// Part of the buffering is handled by inserting initial silence. The
// remaining samples (the margin) will be handled by compensation.
uint32_t margin = 30 * ap->sample_rate / 1000; // 30ms
if (buffered_samples + margin < ap->target_buffering) {
LOGV("[Audio] Inserting initial buffering silence: %" PRIu32
" samples", count);
// Delay playback starting to reach the target buffering. Fill the
// whole buffer with silence (len is small compared to the
// arbitrary margin value).
memset(stream, 0, len);
return;
}
size_t read = sc_bytebuf_read_remaining(&ap->buf);
size_t max_buffered_bytes = SC_BUFFERED_SAMPLES_THRESHOLD
* ap->nb_channels * ap->out_bytes_per_sample;
if (read > max_buffered_bytes + len) {
size_t skip = read - (max_buffered_bytes + len);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffered samples threshold exceeded: %" SC_PRIsizet
" bytes, skipping %" SC_PRIsizet " bytes", read, skip);
#endif
// After this callback, exactly max_buffered_bytes will remain
sc_bytebuf_skip(&ap->buf, skip);
read = max_buffered_bytes + len;
}
uint32_t read = MIN(buffered_samples, count);
// Number of buffered samples (may be negative on underflow)
float buffered_samples = ((float) read - len_int)
/ (ap->nb_channels * ap->out_bytes_per_sample);
sc_average_push(&ap->avg_buffered_samples, buffered_samples);
if (read) {
sc_audiobuf_read(&ap->buf, stream, read);
if (read > len) {
read = len;
}
sc_bytebuf_read(&ap->buf, stream, read);
}
if (read < count) {
uint32_t silence = count - read;
// Insert silence. In theory, the inserted silent samples replace the
// missing real samples, which will arrive later, so they should be
// dropped to keep the latency minimal. However, this would cause very
// audible glitches, so let the clock compensation restore the target
// latency.
LOGD("[Audio] Buffer underflow, inserting silence: %" PRIu32 " samples",
silence);
memset(stream + TO_BYTES(read), 0, TO_BYTES(silence));
if (ap->received) {
// Inserting additional samples immediately increases buffering
ap->underflow += silence;
if (read < len) {
// Insert silence
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffer underflow, inserting silence: %" SC_PRIsizet
" bytes", len - read);
#endif
memset(stream + read, 0, len - read);
}
}
ap->played = true;
static size_t
sc_audio_player_get_buf_size(struct sc_audio_player *ap, size_t samples) {
assert(ap->nb_channels);
assert(ap->out_bytes_per_sample);
return samples * ap->nb_channels * ap->out_bytes_per_sample;
}
static uint8_t *
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, uint32_t min_samples) {
size_t min_buf_size = TO_BYTES(min_samples);
if (min_buf_size > ap->swr_buf_alloc_size) {
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, size_t min_samples) {
size_t min_buf_size = sc_audio_player_get_buf_size(ap, min_samples);
if (min_buf_size < ap->swr_buf_alloc_size) {
size_t new_size = min_buf_size + 4096;
uint8_t *buf = realloc(ap->swr_buf, new_size);
if (!buf) {
@ -136,188 +102,6 @@ sc_audio_player_get_swr_buf(struct sc_audio_player *ap, uint32_t min_samples) {
return ap->swr_buf;
}
static bool
sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
const AVFrame *frame) {
struct sc_audio_player *ap = DOWNCAST(sink);
SwrContext *swr_ctx = ap->swr_ctx;
int64_t swr_delay = swr_get_delay(swr_ctx, ap->sample_rate);
// No need to av_rescale_rnd(), input and output sample rates are the same.
// Add more space (256) for clock compensation.
int dst_nb_samples = swr_delay + frame->nb_samples + 256;
uint8_t *swr_buf = sc_audio_player_get_swr_buf(ap, dst_nb_samples);
if (!swr_buf) {
return false;
}
int ret = swr_convert(swr_ctx, &swr_buf, dst_nb_samples,
(const uint8_t **) frame->data, frame->nb_samples);
if (ret < 0) {
LOGE("Resampling failed: %d", ret);
return false;
}
// swr_convert() returns the number of samples which would have been
// written if the buffer was big enough.
uint32_t samples_written = MIN(ret, dst_nb_samples);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] %" PRIu32 " samples written to buffer", samples_written);
#endif
// Since this function is the only writer, the current available space is
// at least the previous available space. In practice, it should almost
// always be possible to write without lock.
bool lockless_write = samples_written <= ap->previous_can_write;
if (lockless_write) {
sc_audiobuf_prepare_write(&ap->buf, swr_buf, samples_written);
}
SDL_LockAudioDevice(ap->device);
uint32_t buffered_samples = sc_audiobuf_can_read(&ap->buf);
if (lockless_write) {
sc_audiobuf_commit_write(&ap->buf, samples_written);
} else {
uint32_t can_write = sc_audiobuf_can_write(&ap->buf);
if (samples_written > can_write) {
// Entering this branch is very unlikely, the audio buffer is
// allocated with a size sufficient to store 1 second more than the
// target buffering. If this happens, though, we have to skip old
// samples.
uint32_t cap = sc_audiobuf_capacity(&ap->buf);
if (samples_written > cap) {
// Very very unlikely: a single resampled frame should never
// exceed the audio buffer size (or something is very wrong).
// Ignore the first bytes in swr_buf
swr_buf += TO_BYTES(samples_written - cap);
// This change in samples_written will impact the
// instant_compensation below
samples_written = cap;
}
assert(samples_written >= can_write);
if (samples_written > can_write) {
uint32_t skip_samples = samples_written - can_write;
assert(buffered_samples >= skip_samples);
sc_audiobuf_skip(&ap->buf, skip_samples);
buffered_samples -= skip_samples;
if (ap->played) {
// Dropping input samples instantly decreases buffering
ap->avg_buffering.avg -= skip_samples;
}
}
// It should remain exactly the expected size to write the new
// samples.
assert(sc_audiobuf_can_write(&ap->buf) == samples_written);
}
sc_audiobuf_write(&ap->buf, swr_buf, samples_written);
}
buffered_samples += samples_written;
assert(buffered_samples == sc_audiobuf_can_read(&ap->buf));
// Read with lock held, to be used after unlocking
bool played = ap->played;
uint32_t underflow = ap->underflow;
if (played) {
uint32_t max_buffered_samples = ap->target_buffering
+ 12 * ap->output_buffer
+ ap->target_buffering / 10;
if (buffered_samples > max_buffered_samples) {
uint32_t skip_samples = buffered_samples - max_buffered_samples;
sc_audiobuf_skip(&ap->buf, skip_samples);
LOGD("[Audio] Buffering threshold exceeded, skipping %" PRIu32
" samples", skip_samples);
}
// reset (the current value was copied to a local variable)
ap->underflow = 0;
} else {
// SDL playback not started yet, do not accumulate more than
// max_initial_buffering samples, this would cause unnecessary delay
// (and glitches to compensate) on start.
uint32_t max_initial_buffering = ap->target_buffering
+ 2 * ap->output_buffer;
if (buffered_samples > max_initial_buffering) {
uint32_t skip_samples = buffered_samples - max_initial_buffering;
sc_audiobuf_skip(&ap->buf, skip_samples);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Playback not started, skipping %" PRIu32 " samples",
skip_samples);
#endif
}
}
ap->previous_can_write = sc_audiobuf_can_write(&ap->buf);
ap->received = true;
SDL_UnlockAudioDevice(ap->device);
if (played) {
// Number of samples added (or removed, if negative) for compensation
int32_t instant_compensation =
(int32_t) samples_written - frame->nb_samples;
int32_t inserted_silence = (int32_t) underflow;
// The compensation must apply instantly, it must not be smoothed
ap->avg_buffering.avg += instant_compensation + inserted_silence;
// However, the buffering level must be smoothed
sc_average_push(&ap->avg_buffering, buffered_samples);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] buffered_samples=%" PRIu32 " avg_buffering=%f",
buffered_samples, sc_average_get(&ap->avg_buffering));
#endif
ap->samples_since_resync += samples_written;
if (ap->samples_since_resync >= ap->sample_rate) {
// Recompute compensation every second
ap->samples_since_resync = 0;
float avg = sc_average_get(&ap->avg_buffering);
int diff = ap->target_buffering - avg;
if (abs(diff) < (int) ap->sample_rate / 1000) {
// Do not compensate for less than 1ms, the error is just noise
diff = 0;
} else if (diff < 0 && buffered_samples < ap->target_buffering) {
// Do not accelerate if the instant buffering level is below
// the average, this would increase underflow
diff = 0;
}
// Compensate the diff over 4 seconds (but will be recomputed after
// 1 second)
int distance = 4 * ap->sample_rate;
// Limit compensation rate to 2%
int abs_max_diff = distance / 50;
diff = CLAMP(diff, -abs_max_diff, abs_max_diff);
LOGV("[Audio] Buffering: target=%" PRIu32 " avg=%f cur=%" PRIu32
" compensation=%d", ap->target_buffering, avg,
buffered_samples, diff);
if (diff != ap->compensation) {
int ret = swr_set_compensation(swr_ctx, diff, distance);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
} else {
ap->compensation = diff;
}
}
}
}
return true;
}
static bool
sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
@ -331,28 +115,11 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
unsigned nb_channels = tmp;
#endif
assert(ctx->sample_rate > 0);
assert(!av_sample_fmt_is_planar(SC_AV_SAMPLE_FMT));
int out_bytes_per_sample = av_get_bytes_per_sample(SC_AV_SAMPLE_FMT);
assert(out_bytes_per_sample > 0);
ap->sample_rate = ctx->sample_rate;
ap->nb_channels = nb_channels;
ap->out_bytes_per_sample = out_bytes_per_sample;
ap->target_buffering = ap->target_buffering_delay * ap->sample_rate
/ SC_TICK_FREQ;
uint64_t aout_samples = ap->output_buffer_duration * ap->sample_rate
/ SC_TICK_FREQ;
assert(aout_samples <= 0xFFFF);
ap->output_buffer = (uint16_t) aout_samples;
SDL_AudioSpec desired = {
.freq = ctx->sample_rate,
.format = SC_SDL_SAMPLE_FMT,
.channels = nb_channels,
.samples = aout_samples,
.samples = SC_AUDIO_OUTPUT_BUFFER_SAMPLES,
.callback = sc_audio_player_sdl_callback,
.userdata = ap,
};
@ -371,6 +138,12 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
}
ap->swr_ctx = swr_ctx;
assert(ctx->sample_rate > 0);
assert(!av_sample_fmt_is_planar(SC_AV_SAMPLE_FMT));
int out_bytes_per_sample = av_get_bytes_per_sample(SC_AV_SAMPLE_FMT);
assert(out_bytes_per_sample > 0);
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
av_opt_set_chlayout(swr_ctx, "in_chlayout", &ctx->ch_layout, 0);
av_opt_set_chlayout(swr_ctx, "out_chlayout", &ctx->ch_layout, 0);
@ -393,52 +166,37 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
goto error_free_swr_ctx;
}
// Use a ring-buffer of the target buffering size plus 1 second between the
// producer and the consumer. It's too big on purpose, to guarantee that
// the producer and the consumer will be able to access it in parallel
// without locking.
size_t audiobuf_samples = ap->target_buffering + ap->sample_rate;
ap->sample_rate = ctx->sample_rate;
ap->nb_channels = nb_channels;
ap->out_bytes_per_sample = out_bytes_per_sample;
size_t sample_size = ap->nb_channels * ap->out_bytes_per_sample;
bool ok = sc_audiobuf_init(&ap->buf, sample_size, audiobuf_samples);
size_t bytebuf_size =
sc_audio_player_get_buf_size(ap, SC_BYTEBUF_SIZE_IN_SAMPLES);
bool ok = sc_bytebuf_init(&ap->buf, bytebuf_size);
if (!ok) {
goto error_free_swr_ctx;
}
size_t initial_swr_buf_size = TO_BYTES(4096);
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
size_t initial_swr_buf_size = sc_audio_player_get_buf_size(ap, 4096);
ap->swr_buf = malloc(initial_swr_buf_size);
if (!ap->swr_buf) {
LOG_OOM();
goto error_destroy_audiobuf;
goto error_destroy_bytebuf;
}
ap->swr_buf_alloc_size = initial_swr_buf_size;
ap->previous_can_write = sc_audiobuf_can_write(&ap->buf);
// Samples are produced and consumed by blocks, so the buffering must be
// smoothed to get a relatively stable value.
sc_average_init(&ap->avg_buffering, 32);
sc_average_init(&ap->avg_buffered_samples, 32);
ap->samples_since_resync = 0;
ap->received = false;
ap->played = false;
ap->underflow = 0;
ap->compensation = 0;
// The thread calling open() is the thread calling push(), which fills the
// audio buffer consumed by the SDL audio thread.
ok = sc_thread_set_priority(SC_THREAD_PRIORITY_TIME_CRITICAL);
if (!ok) {
ok = sc_thread_set_priority(SC_THREAD_PRIORITY_HIGH);
(void) ok; // We don't care if it worked, at least we tried
}
SDL_PauseAudioDevice(ap->device, 0);
return true;
error_destroy_audiobuf:
sc_audiobuf_destroy(&ap->buf);
error_destroy_bytebuf:
sc_bytebuf_destroy(&ap->buf);
error_free_swr_ctx:
swr_free(&ap->swr_ctx);
error_close_audio_device:
@ -456,16 +214,86 @@ sc_audio_player_frame_sink_close(struct sc_frame_sink *sink) {
SDL_CloseAudioDevice(ap->device);
free(ap->swr_buf);
sc_audiobuf_destroy(&ap->buf);
sc_bytebuf_destroy(&ap->buf);
swr_free(&ap->swr_ctx);
}
void
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering,
sc_tick output_buffer_duration) {
ap->target_buffering_delay = target_buffering;
ap->output_buffer_duration = output_buffer_duration;
static bool
sc_audio_player_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
struct sc_audio_player *ap = DOWNCAST(sink);
SwrContext *swr_ctx = ap->swr_ctx;
int64_t delay = swr_get_delay(swr_ctx, ap->sample_rate);
// No need to av_rescale_rnd(), input and output sample rates are the same
int dst_nb_samples = delay + frame->nb_samples;
uint8_t *swr_buf = sc_audio_player_get_swr_buf(ap, frame->nb_samples);
if (!swr_buf) {
return false;
}
int ret = swr_convert(swr_ctx, &swr_buf, dst_nb_samples,
(const uint8_t **) frame->data, frame->nb_samples);
if (ret < 0) {
LOGE("Resampling failed: %d", ret);
return false;
}
size_t samples_written = ret;
size_t swr_buf_size = sc_audio_player_get_buf_size(ap, samples_written);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] %" SC_PRIsizet " samples written to buffer", samples_written);
#endif
// It should almost always be possible to write without lock
bool can_write_without_lock = swr_buf_size <= ap->safe_empty_buffer;
if (can_write_without_lock) {
sc_bytebuf_prepare_write(&ap->buf, swr_buf, swr_buf_size);
}
SDL_LockAudioDevice(ap->device);
if (can_write_without_lock) {
sc_bytebuf_commit_write(&ap->buf, swr_buf_size);
} else {
sc_bytebuf_write(&ap->buf, swr_buf, swr_buf_size);
}
// The next time, at least the current empty space will remain available
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
// Read the value written by the SDL thread under lock
float avg;
bool has_avg = sc_average_get(&ap->avg_buffered_samples, &avg);
SDL_UnlockAudioDevice(ap->device);
if (has_avg) {
ap->samples_since_resync += samples_written;
if (ap->samples_since_resync >= ap->sample_rate) {
// Resync every second
ap->samples_since_resync = 0;
int diff = SC_TARGET_BUFFERED_SAMPLES - avg;
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] Average buffered samples = %f, compensation %d",
avg, diff);
#endif
// Compensate the diff over 3 seconds (but will be recomputed after
// 1 second)
int ret = swr_set_compensation(swr_ctx, diff, 3 * ap->sample_rate);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
}
}
}
return true;
}
void
sc_audio_player_init(struct sc_audio_player *ap) {
static const struct sc_frame_sink_ops ops = {
.open = sc_audio_player_frame_sink_open,
.close = sc_audio_player_frame_sink_close,

View File

@ -5,10 +5,9 @@
#include <stdbool.h>
#include "trait/frame_sink.h"
#include <util/audiobuf.h>
#include <util/average.h>
#include <util/bytebuf.h>
#include <util/thread.h>
#include <util/tick.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
@ -19,61 +18,27 @@ struct sc_audio_player {
SDL_AudioDeviceID device;
// The target buffering between the producer and the consumer. This value
// is directly used for compensation.
// Since audio capture and/or encoding on the device typically produce
// blocks of 960 samples (20ms) or 1024 samples (~21.3ms), this target
// value should be higher.
sc_tick target_buffering_delay;
uint32_t target_buffering; // in samples
// protected by SDL_AudioDeviceLock()
struct sc_bytebuf buf;
// Number of bytes which could be written without locking
size_t safe_empty_buffer;
// SDL audio output buffer size.
sc_tick output_buffer_duration;
uint16_t output_buffer;
// Audio buffer to communicate between the receiver and the SDL audio
// callback (protected by SDL_AudioDeviceLock())
struct sc_audiobuf buf;
// The previous empty space in the buffer (only used by the receiver
// thread)
uint32_t previous_can_write;
// Resampler (only used from the receiver thread)
struct SwrContext *swr_ctx;
// The sample rate is the same for input and output
unsigned sample_rate;
// The number of channels is the same for input and output
unsigned nb_channels;
// The number of bytes per sample for a single channel
unsigned out_bytes_per_sample;
// Target buffer for resampling (only used by the receiver thread)
// Target buffer for resampling
uint8_t *swr_buf;
size_t swr_buf_alloc_size;
// Number of buffered samples (may be negative on underflow) (only used by
// the receiver thread)
struct sc_average avg_buffering;
// Count the number of samples to trigger a compensation update regularly
// (only used by the receiver thread)
uint32_t samples_since_resync;
// Number of silence samples inserted since the last received packet
// (protected by SDL_AudioDeviceLock())
uint32_t underflow;
// Current applied compensation value (only used by the receiver thread)
int compensation;
// Set to true the first time a sample is received (protected by
// SDL_AudioDeviceLock())
bool received;
// Set to true the first time the SDL callback is called (protected by
// SDL_AudioDeviceLock())
bool played;
// Number of buffered samples (may be negative on underflow)
struct sc_average avg_buffered_samples;
unsigned samples_since_resync;
const struct sc_audio_player_callbacks *cbs;
void *cbs_userdata;
@ -84,7 +49,6 @@ struct sc_audio_player_callbacks {
};
void
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering,
sc_tick audio_output_buffer);
sc_audio_player_init(struct sc_audio_player *ap);
#endif

File diff suppressed because it is too large

View File

@ -7,17 +7,10 @@
#include "options.h"
enum sc_pause_on_exit {
SC_PAUSE_ON_EXIT_TRUE,
SC_PAUSE_ON_EXIT_FALSE,
SC_PAUSE_ON_EXIT_IF_ERROR,
};
struct scrcpy_cli_args {
struct scrcpy_options opts;
bool help;
bool version;
enum sc_pause_on_exit pause_on_exit;
};
void

View File

@ -1,36 +1,111 @@
#include "clock.h"
#include <assert.h>
#include "util/log.h"
#define SC_CLOCK_NDEBUG // comment to debug
#define SC_CLOCK_RANGE 32
void
sc_clock_init(struct sc_clock *clock) {
clock->range = 0;
clock->offset = 0;
clock->count = 0;
clock->head = 0;
clock->left_sum.system = 0;
clock->left_sum.stream = 0;
clock->right_sum.system = 0;
clock->right_sum.stream = 0;
}
// Estimate the affine function f(stream) = slope * stream + offset
static void
sc_clock_estimate(struct sc_clock *clock,
double *out_slope, sc_tick *out_offset) {
assert(clock->count > 1); // two points are necessary
struct sc_clock_point left_avg = {
.system = clock->left_sum.system / (clock->count / 2),
.stream = clock->left_sum.stream / (clock->count / 2),
};
struct sc_clock_point right_avg = {
.system = clock->right_sum.system / ((clock->count + 1) / 2),
.stream = clock->right_sum.stream / ((clock->count + 1) / 2),
};
double slope = (double) (right_avg.system - left_avg.system)
/ (right_avg.stream - left_avg.stream);
if (clock->count < SC_CLOCK_RANGE) {
/* The first frames are typically received and decoded with more delay
* than the others, causing a wrong slope estimation on start. To
* compensate, assume an initial slope of 1, then progressively use the
* estimated slope. */
slope = (clock->count * slope + (SC_CLOCK_RANGE - clock->count))
/ SC_CLOCK_RANGE;
}
struct sc_clock_point global_avg = {
.system = (clock->left_sum.system + clock->right_sum.system)
/ clock->count,
.stream = (clock->left_sum.stream + clock->right_sum.stream)
/ clock->count,
};
sc_tick offset = global_avg.system - (sc_tick) (global_avg.stream * slope);
*out_slope = slope;
*out_offset = offset;
}
void
sc_clock_update(struct sc_clock *clock, sc_tick system, sc_tick stream) {
if (clock->range < SC_CLOCK_RANGE) {
++clock->range;
struct sc_clock_point *point = &clock->points[clock->head];
if (clock->count == SC_CLOCK_RANGE || clock->count & 1) {
// One point passes from the right sum to the left sum
unsigned mid;
if (clock->count == SC_CLOCK_RANGE) {
mid = (clock->head + SC_CLOCK_RANGE / 2) % SC_CLOCK_RANGE;
} else {
// Only for the first frames
mid = clock->count / 2;
}
sc_tick offset = system - stream;
clock->offset = ((clock->range - 1) * clock->offset + offset)
/ clock->range;
struct sc_clock_point *mid_point = &clock->points[mid];
clock->left_sum.system += mid_point->system;
clock->left_sum.stream += mid_point->stream;
clock->right_sum.system -= mid_point->system;
clock->right_sum.stream -= mid_point->stream;
}
if (clock->count == SC_CLOCK_RANGE) {
// The current point overwrites the previous value in the circular
// array, update the left sum accordingly
clock->left_sum.system -= point->system;
clock->left_sum.stream -= point->stream;
} else {
++clock->count;
}
point->system = system;
point->stream = stream;
clock->right_sum.system += system;
clock->right_sum.stream += stream;
clock->head = (clock->head + 1) % SC_CLOCK_RANGE;
if (clock->count > 1) {
// Update estimation
sc_clock_estimate(clock, &clock->slope, &clock->offset);
#ifndef SC_CLOCK_NDEBUG
LOGD("Clock estimation: pts + %" PRItick, clock->offset);
LOGD("Clock estimation: %f * pts + %" PRItick,
clock->slope, clock->offset);
#endif
}
}
sc_tick
sc_clock_to_system_time(struct sc_clock *clock, sc_tick stream) {
assert(clock->range); // sc_clock_update() must have been called
return stream + clock->offset;
assert(clock->count > 1); // sc_clock_update() must have been called
return (sc_tick) (stream * clock->slope) + clock->offset;
}

View File

@ -3,8 +3,13 @@
#include "common.h"
#include <assert.h>
#include "util/tick.h"
#define SC_CLOCK_RANGE 32
static_assert(!(SC_CLOCK_RANGE & 1), "SC_CLOCK_RANGE must be even");
struct sc_clock_point {
sc_tick system;
sc_tick stream;
@ -16,18 +21,40 @@ struct sc_clock_point {
*
* f(stream) = slope * stream + offset
*
* Theoretically, the slope encodes the drift between the device clock and the
* computer clock. It is expected to be very close to 1.
* To that end, it stores the SC_CLOCK_RANGE last clock points (the timestamps
* of a frame expressed both in stream time and system time) in a circular
* array.
*
* Since the clock is used to estimate very close points in the future (which
* are reestimated on every clock update, see delay_buffer), the error caused
* by clock drift is totally negligible, so it is better to assume that the
* slope is 1 than to estimate it (the estimation error would be larger).
* To estimate the slope, it splits the last SC_CLOCK_RANGE points into two
* sets of SC_CLOCK_RANGE/2 points, and computes their centroid ("average
* point"). The slope of the estimated affine function is that of the line
* passing through these two points.
*
* Therefore, only the offset is estimated.
* To estimate the offset, it computes the centroid of all the SC_CLOCK_RANGE
* points. The resulting affine function passes through this centroid.
*
* With a circular array, the rolling sums (and average) are quick to compute.
* In practice, the estimation is stable and the evolution is smooth.
*/
struct sc_clock {
unsigned range;
// Circular array
struct sc_clock_point points[SC_CLOCK_RANGE];
// Number of points in the array (count <= SC_CLOCK_RANGE)
unsigned count;
// Index of the next point to write
unsigned head;
// Sum of the first count/2 points
struct sc_clock_point left_sum;
// Sum of the last (count+1)/2 points
struct sc_clock_point right_sum;
// Estimated slope and offset
// (computed on sc_clock_update(), used by sc_clock_to_system_time())
double slope;
sc_tick offset;
};

View File

@ -5,8 +5,8 @@
#include "compat.h"
#define ARRAY_LEN(a) (sizeof(a) / sizeof(a[0]))
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define MAX(X,Y) ((X) > (Y) ? (X) : (Y))
#define MIN(X,Y) (X) < (Y) ? (X) : (Y)
#define MAX(X,Y) (X) > (Y) ? (X) : (Y)
#define CLAMP(V,X,Y) MIN( MAX((V),(X)), (Y) )
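// Illustration (not part of the original header): the outer parentheses
// matter because of operator precedence at the call site. With the
// unparenthesized form, 2 * MIN(3, 4) expands to 2 * (3) < (4) ? (3) : (4),
// which groups as ((2 * 3) < 4) ? 3 : 4 and evaluates to 4. With the
// parenthesized form, it expands to 2 * ((3) < (4) ? (3) : (4)) and
// evaluates to 6, as expected.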
#define container_of(ptr, type, member) \

View File

@ -3,9 +3,6 @@
#include "config.h"
#include <assert.h>
#ifndef HAVE_REALLOCARRAY
# include <errno.h>
#endif
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
@ -96,15 +93,5 @@ long jrand48(unsigned short xsubi[3]) {
return v.i;
}
#endif
#endif
#ifndef HAVE_REALLOCARRAY
void *reallocarray(void *ptr, size_t nmemb, size_t size) {
size_t bytes;
if (__builtin_mul_overflow(nmemb, size, &bytes)) {
errno = ENOMEM;
return NULL;
}
return realloc(ptr, bytes);
}
#endif

View File

@ -25,12 +25,6 @@
# define SCRCPY_LAVF_REQUIRES_REGISTER_ALL
#endif
// Not documented in ffmpeg/doc/APIchanges, but AV_CODEC_ID_AV1 has been added
// by FFmpeg commit d42809f9835a4e9e5c7c63210abb09ad0ef19cfb (included in tag
// n3.3).
#if LIBAVFORMAT_VERSION_INT >= AV_VERSION_INT(57, 89, 100)
# define SCRCPY_LAVC_HAS_AV1
#endif
// In ffmpeg/doc/APIchanges:
// 2018-01-28 - ea3672b7d6 - lavf 58.7.100 - avformat.h
@ -60,10 +54,6 @@
# define SCRCPY_SDL_HAS_HINT_VIDEO_X11_NET_WM_BYPASS_COMPOSITOR
#endif
#if SDL_VERSION_ATLEAST(2, 0, 16)
# define SCRCPY_SDL_HAS_THREAD_PRIORITY_TIME_CRITICAL
#endif
#ifndef HAVE_STRDUP
char *strdup(const char *s);
#endif
@ -84,8 +74,4 @@ long nrand48(unsigned short xsubi[3]);
long jrand48(unsigned short xsubi[3]);
#endif
#ifndef HAVE_REALLOCARRAY
void *reallocarray(void *ptr, size_t nmemb, size_t size);
#endif
#endif

View File

@ -4,28 +4,19 @@
#include "util/log.h"
#define SC_CONTROL_MSG_QUEUE_MAX 64
bool
sc_controller_init(struct sc_controller *controller, sc_socket control_socket,
struct sc_acksync *acksync) {
sc_vecdeque_init(&controller->queue);
cbuf_init(&controller->queue);
bool ok = sc_vecdeque_reserve(&controller->queue, SC_CONTROL_MSG_QUEUE_MAX);
bool ok = sc_receiver_init(&controller->receiver, control_socket, acksync);
if (!ok) {
return false;
}
ok = sc_receiver_init(&controller->receiver, control_socket, acksync);
if (!ok) {
sc_vecdeque_destroy(&controller->queue);
return false;
}
ok = sc_mutex_init(&controller->mutex);
if (!ok) {
sc_receiver_destroy(&controller->receiver);
sc_vecdeque_destroy(&controller->queue);
return false;
}
@ -33,7 +24,6 @@ sc_controller_init(struct sc_controller *controller, sc_socket control_socket,
if (!ok) {
sc_receiver_destroy(&controller->receiver);
sc_mutex_destroy(&controller->mutex);
sc_vecdeque_destroy(&controller->queue);
return false;
}
@ -48,12 +38,10 @@ sc_controller_destroy(struct sc_controller *controller) {
sc_cond_destroy(&controller->msg_cond);
sc_mutex_destroy(&controller->mutex);
while (!sc_vecdeque_is_empty(&controller->queue)) {
struct sc_control_msg *msg = sc_vecdeque_popref(&controller->queue);
assert(msg);
sc_control_msg_destroy(msg);
struct sc_control_msg msg;
while (cbuf_take(&controller->queue, &msg)) {
sc_control_msg_destroy(&msg);
}
sc_vecdeque_destroy(&controller->queue);
sc_receiver_destroy(&controller->receiver);
}
@ -66,19 +54,13 @@ sc_controller_push_msg(struct sc_controller *controller,
}
sc_mutex_lock(&controller->mutex);
bool full = sc_vecdeque_is_full(&controller->queue);
if (!full) {
bool was_empty = sc_vecdeque_is_empty(&controller->queue);
sc_vecdeque_push_noresize(&controller->queue, *msg);
bool was_empty = cbuf_is_empty(&controller->queue);
bool res = cbuf_push(&controller->queue, *msg);
if (was_empty) {
sc_cond_signal(&controller->msg_cond);
}
}
// Otherwise (if the queue is full), the msg is discarded
sc_mutex_unlock(&controller->mutex);
return !full;
return res;
}
static bool
@ -100,8 +82,7 @@ run_controller(void *data) {
for (;;) {
sc_mutex_lock(&controller->mutex);
while (!controller->stopped
&& sc_vecdeque_is_empty(&controller->queue)) {
while (!controller->stopped && cbuf_is_empty(&controller->queue)) {
sc_cond_wait(&controller->msg_cond, &controller->mutex);
}
if (controller->stopped) {
@ -109,9 +90,10 @@ run_controller(void *data) {
sc_mutex_unlock(&controller->mutex);
break;
}
assert(!sc_vecdeque_is_empty(&controller->queue));
struct sc_control_msg msg = sc_vecdeque_pop(&controller->queue);
struct sc_control_msg msg;
bool non_empty = cbuf_take(&controller->queue, &msg);
assert(non_empty);
(void) non_empty;
sc_mutex_unlock(&controller->mutex);
bool ok = process_msg(controller, &msg);

View File

@ -8,11 +8,11 @@
#include "control_msg.h"
#include "receiver.h"
#include "util/acksync.h"
#include "util/cbuf.h"
#include "util/net.h"
#include "util/thread.h"
#include "util/vecdeque.h"
struct sc_control_msg_queue SC_VECDEQUE(struct sc_control_msg);
struct sc_control_msg_queue CBUF(struct sc_control_msg, 64);
struct sc_controller {
sc_socket control_socket;

View File

@ -5,34 +5,106 @@
#include <libavutil/channel_layout.h>
#include "events.h"
#include "video_buffer.h"
#include "trait/frame_sink.h"
#include "util/log.h"
/** Downcast packet_sink to decoder */
#define DOWNCAST(SINK) container_of(SINK, struct sc_decoder, packet_sink)
static void
sc_decoder_close_first_sinks(struct sc_decoder *decoder, unsigned count) {
while (count) {
struct sc_frame_sink *sink = decoder->sinks[--count];
sink->ops->close(sink);
}
}
static inline void
sc_decoder_close_sinks(struct sc_decoder *decoder) {
sc_decoder_close_first_sinks(decoder, decoder->sink_count);
}
static bool
sc_decoder_open(struct sc_decoder *decoder, AVCodecContext *ctx) {
decoder->frame = av_frame_alloc();
if (!decoder->frame) {
sc_decoder_open_sinks(struct sc_decoder *decoder, const AVCodecContext *ctx) {
for (unsigned i = 0; i < decoder->sink_count; ++i) {
struct sc_frame_sink *sink = decoder->sinks[i];
if (!sink->ops->open(sink, ctx)) {
sc_decoder_close_first_sinks(decoder, i);
return false;
}
}
return true;
}
static bool
sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
decoder->codec_ctx = avcodec_alloc_context3(codec);
if (!decoder->codec_ctx) {
LOG_OOM();
return false;
}
if (!sc_frame_source_sinks_open(&decoder->frame_source, ctx)) {
av_frame_free(&decoder->frame);
decoder->codec_ctx->flags |= AV_CODEC_FLAG_LOW_DELAY;
if (codec->type == AVMEDIA_TYPE_VIDEO) {
// Hardcoded video properties
decoder->codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
} else {
// Hardcoded audio properties
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
decoder->codec_ctx->ch_layout =
(AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO;
#else
decoder->codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
decoder->codec_ctx->channels = 2;
#endif
decoder->codec_ctx->sample_rate = 48000;
}
if (avcodec_open2(decoder->codec_ctx, codec, NULL) < 0) {
LOGE("Decoder '%s': could not open codec", decoder->name);
avcodec_free_context(&decoder->codec_ctx);
return false;
}
decoder->ctx = ctx;
decoder->frame = av_frame_alloc();
if (!decoder->frame) {
LOG_OOM();
avcodec_close(decoder->codec_ctx);
avcodec_free_context(&decoder->codec_ctx);
return false;
}
if (!sc_decoder_open_sinks(decoder, decoder->codec_ctx)) {
av_frame_free(&decoder->frame);
avcodec_close(decoder->codec_ctx);
avcodec_free_context(&decoder->codec_ctx);
return false;
}
return true;
}
static void
sc_decoder_close(struct sc_decoder *decoder) {
sc_frame_source_sinks_close(&decoder->frame_source);
sc_decoder_close_sinks(decoder);
av_frame_free(&decoder->frame);
avcodec_close(decoder->codec_ctx);
avcodec_free_context(&decoder->codec_ctx);
}
static bool
push_frame_to_sinks(struct sc_decoder *decoder, const AVFrame *frame) {
for (unsigned i = 0; i < decoder->sink_count; ++i) {
struct sc_frame_sink *sink = decoder->sinks[i];
if (!sink->ops->push(sink, frame)) {
return false;
}
}
return true;
}
static bool
@ -43,42 +115,33 @@ sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
return true;
}
int ret = avcodec_send_packet(decoder->ctx, packet);
int ret = avcodec_send_packet(decoder->codec_ctx, packet);
if (ret < 0 && ret != AVERROR(EAGAIN)) {
LOGE("Decoder '%s': could not send video packet: %d",
decoder->name, ret);
return false;
}
ret = avcodec_receive_frame(decoder->codec_ctx, decoder->frame);
if (!ret) {
// a frame was received
bool ok = push_frame_to_sinks(decoder, decoder->frame);
// A lost frame should not make the whole pipeline fail. The error, if
// any, is already logged.
(void) ok;
for (;;) {
ret = avcodec_receive_frame(decoder->ctx, decoder->frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
}
if (ret) {
av_frame_unref(decoder->frame);
} else if (ret != AVERROR(EAGAIN)) {
LOGE("Decoder '%s', could not receive video frame: %d",
decoder->name, ret);
return false;
}
// a frame was received
bool ok = sc_frame_source_sinks_push(&decoder->frame_source,
decoder->frame);
av_frame_unref(decoder->frame);
if (!ok) {
// Error already logged
return false;
}
}
return true;
}
static bool
sc_decoder_packet_sink_open(struct sc_packet_sink *sink, AVCodecContext *ctx) {
sc_decoder_packet_sink_open(struct sc_packet_sink *sink, const AVCodec *codec) {
struct sc_decoder *decoder = DOWNCAST(sink);
return sc_decoder_open(decoder, ctx);
return sc_decoder_open(decoder, codec);
}
static void
@ -97,7 +160,7 @@ sc_decoder_packet_sink_push(struct sc_packet_sink *sink,
void
sc_decoder_init(struct sc_decoder *decoder, const char *name) {
decoder->name = name; // statically allocated
sc_frame_source_init(&decoder->frame_source);
decoder->sink_count = 0;
static const struct sc_packet_sink_ops ops = {
.open = sc_decoder_packet_sink_open,
@ -107,3 +170,11 @@ sc_decoder_init(struct sc_decoder *decoder, const char *name) {
decoder->packet_sink.ops = &ops;
}
void
sc_decoder_add_sink(struct sc_decoder *decoder, struct sc_frame_sink *sink) {
assert(decoder->sink_count < SC_DECODER_MAX_SINKS);
assert(sink);
assert(sink->ops);
decoder->sinks[decoder->sink_count++] = sink;
}

View File

@ -3,20 +3,23 @@
#include "common.h"
#include "trait/frame_source.h"
#include "trait/packet_sink.h"
#include <stdbool.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#define SC_DECODER_MAX_SINKS 2
struct sc_decoder {
struct sc_packet_sink packet_sink; // packet sink trait
struct sc_frame_source frame_source; // frame source trait
const char *name; // must be statically allocated (e.g. a string literal)
AVCodecContext *ctx;
struct sc_frame_sink *sinks[SC_DECODER_MAX_SINKS];
unsigned sink_count;
AVCodecContext *codec_ctx;
AVFrame *frame;
};
@ -24,4 +27,7 @@ struct sc_decoder {
void
sc_decoder_init(struct sc_decoder *decoder, const char *name);
void
sc_decoder_add_sink(struct sc_decoder *decoder, struct sc_frame_sink *sink);
#endif

View File

@ -1,244 +0,0 @@
#include "delay_buffer.h"
#include <assert.h>
#include <stdlib.h>
#include <libavutil/avutil.h>
#include <libavformat/avformat.h>
#include "util/log.h"
#define SC_BUFFERING_NDEBUG // comment to debug
/** Downcast frame_sink to sc_delay_buffer */
#define DOWNCAST(SINK) container_of(SINK, struct sc_delay_buffer, frame_sink)
static bool
sc_delayed_frame_init(struct sc_delayed_frame *dframe, const AVFrame *frame) {
dframe->frame = av_frame_alloc();
if (!dframe->frame) {
LOG_OOM();
return false;
}
if (av_frame_ref(dframe->frame, frame)) {
LOG_OOM();
av_frame_free(&dframe->frame);
return false;
}
return true;
}
static void
sc_delayed_frame_destroy(struct sc_delayed_frame *dframe) {
av_frame_unref(dframe->frame);
av_frame_free(&dframe->frame);
}
static int
run_buffering(void *data) {
struct sc_delay_buffer *db = data;
assert(db->delay > 0);
for (;;) {
sc_mutex_lock(&db->mutex);
while (!db->stopped && sc_vecdeque_is_empty(&db->queue)) {
sc_cond_wait(&db->queue_cond, &db->mutex);
}
if (db->stopped) {
sc_mutex_unlock(&db->mutex);
goto stopped;
}
struct sc_delayed_frame dframe = sc_vecdeque_pop(&db->queue);
sc_tick max_deadline = sc_tick_now() + db->delay;
// PTS (written by the server) are expressed in microseconds
sc_tick pts = SC_TICK_FROM_US(dframe.frame->pts);
bool timed_out = false;
while (!db->stopped && !timed_out) {
sc_tick deadline = sc_clock_to_system_time(&db->clock, pts)
+ db->delay;
if (deadline > max_deadline) {
deadline = max_deadline;
}
timed_out =
!sc_cond_timedwait(&db->wait_cond, &db->mutex, deadline);
}
bool stopped = db->stopped;
sc_mutex_unlock(&db->mutex);
if (stopped) {
sc_delayed_frame_destroy(&dframe);
goto stopped;
}
#ifndef SC_BUFFERING_NDEBUG
LOGD("Buffering: %" PRItick ";%" PRItick ";%" PRItick,
pts, dframe.push_date, sc_tick_now());
#endif
bool ok = sc_frame_source_sinks_push(&db->frame_source, dframe.frame);
sc_delayed_frame_destroy(&dframe);
if (!ok) {
LOGE("Delayed frame could not be pushed, stopping");
sc_mutex_lock(&db->mutex);
// Prevent any new frame from being pushed
db->stopped = true;
sc_mutex_unlock(&db->mutex);
goto stopped;
}
}
stopped:
assert(db->stopped);
// Flush queue
while (!sc_vecdeque_is_empty(&db->queue)) {
struct sc_delayed_frame *dframe = sc_vecdeque_popref(&db->queue);
sc_delayed_frame_destroy(dframe);
}
LOGD("Buffering thread ended");
return 0;
}
static bool
sc_delay_buffer_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
struct sc_delay_buffer *db = DOWNCAST(sink);
(void) ctx;
bool ok = sc_mutex_init(&db->mutex);
if (!ok) {
return false;
}
ok = sc_cond_init(&db->queue_cond);
if (!ok) {
goto error_destroy_mutex;
}
ok = sc_cond_init(&db->wait_cond);
if (!ok) {
goto error_destroy_queue_cond;
}
sc_clock_init(&db->clock);
sc_vecdeque_init(&db->queue);
if (!sc_frame_source_sinks_open(&db->frame_source, ctx)) {
goto error_destroy_wait_cond;
}
ok = sc_thread_create(&db->thread, run_buffering, "scrcpy-dbuf", db);
if (!ok) {
LOGE("Could not start buffering thread");
goto error_close_sinks;
}
return true;
error_close_sinks:
sc_frame_source_sinks_close(&db->frame_source);
error_destroy_wait_cond:
sc_cond_destroy(&db->wait_cond);
error_destroy_queue_cond:
sc_cond_destroy(&db->queue_cond);
error_destroy_mutex:
sc_mutex_destroy(&db->mutex);
return false;
}
static void
sc_delay_buffer_frame_sink_close(struct sc_frame_sink *sink) {
struct sc_delay_buffer *db = DOWNCAST(sink);
sc_mutex_lock(&db->mutex);
db->stopped = true;
sc_cond_signal(&db->queue_cond);
sc_cond_signal(&db->wait_cond);
sc_mutex_unlock(&db->mutex);
sc_thread_join(&db->thread, NULL);
sc_frame_source_sinks_close(&db->frame_source);
sc_cond_destroy(&db->wait_cond);
sc_cond_destroy(&db->queue_cond);
sc_mutex_destroy(&db->mutex);
}
static bool
sc_delay_buffer_frame_sink_push(struct sc_frame_sink *sink,
const AVFrame *frame) {
struct sc_delay_buffer *db = DOWNCAST(sink);
sc_mutex_lock(&db->mutex);
if (db->stopped) {
sc_mutex_unlock(&db->mutex);
return false;
}
sc_tick pts = SC_TICK_FROM_US(frame->pts);
sc_clock_update(&db->clock, sc_tick_now(), pts);
sc_cond_signal(&db->wait_cond);
if (db->first_frame_asap && db->clock.range == 1) {
sc_mutex_unlock(&db->mutex);
return sc_frame_source_sinks_push(&db->frame_source, frame);
}
struct sc_delayed_frame dframe;
bool ok = sc_delayed_frame_init(&dframe, frame);
if (!ok) {
sc_mutex_unlock(&db->mutex);
return false;
}
#ifndef SC_BUFFERING_NDEBUG
dframe.push_date = sc_tick_now();
#endif
ok = sc_vecdeque_push(&db->queue, dframe);
if (!ok) {
sc_mutex_unlock(&db->mutex);
LOG_OOM();
return false;
}
sc_cond_signal(&db->queue_cond);
sc_mutex_unlock(&db->mutex);
return true;
}
void
sc_delay_buffer_init(struct sc_delay_buffer *db, sc_tick delay,
bool first_frame_asap) {
assert(delay > 0);
db->delay = delay;
db->first_frame_asap = first_frame_asap;
sc_frame_source_init(&db->frame_source);
static const struct sc_frame_sink_ops ops = {
.open = sc_delay_buffer_frame_sink_open,
.close = sc_delay_buffer_frame_sink_close,
.push = sc_delay_buffer_frame_sink_push,
};
db->frame_sink.ops = &ops;
}

View File

@ -1,60 +0,0 @@
#ifndef SC_DELAY_BUFFER_H
#define SC_DELAY_BUFFER_H
#include "common.h"
#include <stdbool.h>
#include "clock.h"
#include "trait/frame_source.h"
#include "trait/frame_sink.h"
#include "util/thread.h"
#include "util/tick.h"
#include "util/vecdeque.h"
// forward declarations
typedef struct AVFrame AVFrame;
struct sc_delayed_frame {
AVFrame *frame;
#ifndef NDEBUG
sc_tick push_date;
#endif
};
struct sc_delayed_frame_queue SC_VECDEQUE(struct sc_delayed_frame);
struct sc_delay_buffer {
struct sc_frame_source frame_source; // frame source trait
struct sc_frame_sink frame_sink; // frame sink trait
sc_tick delay;
bool first_frame_asap;
sc_thread thread;
sc_mutex mutex;
sc_cond queue_cond;
sc_cond wait_cond;
struct sc_clock clock;
struct sc_delayed_frame_queue queue;
bool stopped;
};
struct sc_delay_buffer_callbacks {
bool (*on_new_frame)(struct sc_delay_buffer *db, const AVFrame *frame,
void *userdata);
};
/**
* Initialize a delay buffer.
*
* \param delay a (strictly) positive delay
* \param first_frame_asap if true, do not delay the first frame (useful for
* a video stream).
*/
void
sc_delay_buffer_init(struct sc_delay_buffer *db, sc_tick delay,
bool first_frame_asap);
#endif

View File

@ -1,7 +1,6 @@
#include "demuxer.h"
#include <assert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/time.h>
#include <unistd.h>
@ -26,25 +25,17 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
#define SC_CODEC_ID_AV1 UINT32_C(0x00617631) // "av1" in ASCII
#define SC_CODEC_ID_OPUS UINT32_C(0x6f707573) // "opus" in ASCII
#define SC_CODEC_ID_AAC UINT32_C(0x00616163) // "aac" in ASCII
#define SC_CODEC_ID_RAW UINT32_C(0x00726177) // "raw" in ASCII
switch (codec_id) {
case SC_CODEC_ID_H264:
return AV_CODEC_ID_H264;
case SC_CODEC_ID_H265:
return AV_CODEC_ID_HEVC;
case SC_CODEC_ID_AV1:
#ifdef SCRCPY_LAVC_HAS_AV1
return AV_CODEC_ID_AV1;
#else
LOGE("AV1 not supported by this FFmpeg version");
return AV_CODEC_ID_NONE;
#endif
case SC_CODEC_ID_OPUS:
return AV_CODEC_ID_OPUS;
case SC_CODEC_ID_AAC:
return AV_CODEC_ID_AAC;
case SC_CODEC_ID_RAW:
return AV_CODEC_ID_PCM_S16LE;
default:
LOGE("Unknown codec id 0x%08" PRIx32, codec_id);
return AV_CODEC_ID_NONE;
@ -63,24 +54,11 @@ sc_demuxer_recv_codec_id(struct sc_demuxer *demuxer, uint32_t *codec_id) {
return true;
}
static bool
sc_demuxer_recv_video_size(struct sc_demuxer *demuxer, uint32_t *width,
uint32_t *height) {
uint8_t data[8];
ssize_t r = net_recv_all(demuxer->socket, data, 8);
if (r < 8) {
return false;
}
*width = sc_read32be(data);
*height = sc_read32be(data + 4);
return true;
}
static bool
sc_demuxer_recv_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
// The video and audio streams contain a sequence of raw packets (as
// provided by MediaCodec), each prefixed with a "meta" header.
// The video stream contains raw packets, without time information. When we
// record, we retrieve the timestamps separately, from a "meta" header
// added by the server before each raw packet.
//
// The "meta" header length is 12 bytes:
// [. . . . . . . .|. . . .]. . . . . . . . . . . . . . . ...
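For reference, a sketch of how such a 12-byte header could be decoded, assuming (from the layout above and the surrounding code) that the first 8 bytes carry the PTS and packet flags in big-endian order and the last 4 bytes carry the length of the raw packet that follows; sc_read64be() is assumed to exist alongside the sc_read32be() helper already used in this file:

    uint8_t header[12];
    ssize_t r = net_recv_all(demuxer->socket, header, 12);
    if (r < 12) {
        return false; // connection error or end of stream
    }
    uint64_t pts_and_flags = sc_read64be(header);  // PTS + config/key-frame flags
    uint32_t len = sc_read32be(header + 8);        // size of the packet payload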
@ -134,26 +112,86 @@ sc_demuxer_recv_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
return true;
}
static bool
push_packet_to_sinks(struct sc_demuxer *demuxer, const AVPacket *packet) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (!sink->ops->push(sink, packet)) {
return false;
}
}
return true;
}
static bool
sc_demuxer_push_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
bool ok = push_packet_to_sinks(demuxer, packet);
if (!ok) {
LOGE("Demuxer '%s': could not process packet", demuxer->name);
return false;
}
return true;
}
static void
sc_demuxer_close_first_sinks(struct sc_demuxer *demuxer, unsigned count) {
while (count) {
struct sc_packet_sink *sink = demuxer->sinks[--count];
sink->ops->close(sink);
}
}
static inline void
sc_demuxer_close_sinks(struct sc_demuxer *demuxer) {
sc_demuxer_close_first_sinks(demuxer, demuxer->sink_count);
}
static bool
sc_demuxer_open_sinks(struct sc_demuxer *demuxer, const AVCodec *codec) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (!sink->ops->open(sink, codec)) {
sc_demuxer_close_first_sinks(demuxer, i);
return false;
}
}
return true;
}
static void
sc_demuxer_disable_sinks(struct sc_demuxer *demuxer) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (sink->ops->disable) {
sink->ops->disable(sink);
}
}
}
static int
run_demuxer(void *data) {
struct sc_demuxer *demuxer = data;
// Flag to report end-of-stream (i.e. device disconnected)
enum sc_demuxer_status status = SC_DEMUXER_STATUS_ERROR;
bool eos = false;
uint32_t raw_codec_id;
bool ok = sc_demuxer_recv_codec_id(demuxer, &raw_codec_id);
if (!ok) {
LOGE("Demuxer '%s': stream disabled due to connection error",
demuxer->name);
eos = true;
goto end;
}
if (raw_codec_id == 0) {
LOGW("Demuxer '%s': stream explicitly disabled by the device",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
status = SC_DEMUXER_STATUS_DISABLED;
sc_demuxer_disable_sinks(demuxer);
eos = true;
goto end;
}
@ -167,7 +205,7 @@ run_demuxer(void *data) {
if (codec_id == AV_CODEC_ID_NONE) {
LOGE("Demuxer '%s': stream disabled due to unsupported codec",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
sc_demuxer_disable_sinks(demuxer);
goto end;
}
@ -175,49 +213,14 @@ run_demuxer(void *data) {
if (!codec) {
LOGE("Demuxer '%s': stream disabled due to missing decoder",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
sc_demuxer_disable_sinks(demuxer);
goto end;
}
AVCodecContext *codec_ctx = avcodec_alloc_context3(codec);
if (!codec_ctx) {
LOG_OOM();
if (!sc_demuxer_open_sinks(demuxer, codec)) {
goto end;
}
codec_ctx->flags |= AV_CODEC_FLAG_LOW_DELAY;
if (codec->type == AVMEDIA_TYPE_VIDEO) {
uint32_t width;
uint32_t height;
ok = sc_demuxer_recv_video_size(demuxer, &width, &height);
if (!ok) {
goto finally_free_context;
}
codec_ctx->width = width;
codec_ctx->height = height;
codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
} else {
// Hardcoded audio properties
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
codec_ctx->ch_layout = (AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO;
#else
codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
codec_ctx->channels = 2;
#endif
codec_ctx->sample_rate = 48000;
}
if (avcodec_open2(codec_ctx, codec, NULL) < 0) {
LOGE("Demuxer '%s': could not open codec", demuxer->name);
goto finally_free_context;
}
if (!sc_packet_source_sinks_open(&demuxer->packet_source, codec_ctx)) {
goto finally_free_context;
}
// Config packets must be merged with the next non-config packet only for
// video streams
bool must_merge_config_packet = codec->type == AVMEDIA_TYPE_VIDEO;
@ -238,7 +241,7 @@ run_demuxer(void *data) {
bool ok = sc_demuxer_recv_packet(demuxer, packet);
if (!ok) {
// end of stream
status = SC_DEMUXER_STATUS_EOS;
eos = true;
break;
}
@ -251,10 +254,10 @@ run_demuxer(void *data) {
}
}
ok = sc_packet_source_sinks_push(&demuxer->packet_source, packet);
ok = sc_demuxer_push_packet(demuxer, packet);
av_packet_unref(packet);
if (!ok) {
// The sink already logged its concrete error
// cannot process packet (error already logged)
break;
}
}
@ -267,12 +270,9 @@ run_demuxer(void *data) {
av_packet_free(&packet);
finally_close_sinks:
sc_packet_source_sinks_close(&demuxer->packet_source);
finally_free_context:
// This also calls avcodec_close() internally
avcodec_free_context(&codec_ctx);
sc_demuxer_close_sinks(demuxer);
end:
demuxer->cbs->on_ended(demuxer, status, demuxer->cbs_userdata);
demuxer->cbs->on_ended(demuxer, eos, demuxer->cbs_userdata);
return 0;
}
@ -284,7 +284,7 @@ sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
demuxer->name = name; // statically allocated
demuxer->socket = socket;
sc_packet_source_init(&demuxer->packet_source);
demuxer->sink_count = 0;
assert(cbs && cbs->on_ended);
@ -292,6 +292,14 @@ sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
demuxer->cbs_userdata = cbs_userdata;
}
void
sc_demuxer_add_sink(struct sc_demuxer *demuxer, struct sc_packet_sink *sink) {
assert(demuxer->sink_count < SC_DEMUXER_MAX_SINKS);
assert(sink);
assert(sink->ops);
demuxer->sinks[demuxer->sink_count++] = sink;
}
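A hypothetical caller-side sketch of this sink registration (the field names video_decoder.packet_sink and recorder.video_packet_sink are taken from other files in this compare; the surrounding variables are illustrative):

    sc_demuxer_init(&s->video_demuxer, "video", s->server.video_socket,
                    &video_demuxer_cbs, NULL);
    sc_demuxer_add_sink(&s->video_demuxer, &s->video_decoder.packet_sink);
    if (record) {
        sc_demuxer_add_sink(&s->video_demuxer, &s->recorder.video_packet_sink);
    }
    sc_demuxer_start(&s->video_demuxer);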
bool
sc_demuxer_start(struct sc_demuxer *demuxer) {
LOGD("Demuxer '%s': starting thread", demuxer->name);

View File

@ -8,32 +8,27 @@
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include "trait/packet_source.h"
#include "trait/packet_sink.h"
#include "util/net.h"
#include "util/thread.h"
struct sc_demuxer {
struct sc_packet_source packet_source; // packet source trait
#define SC_DEMUXER_MAX_SINKS 2
struct sc_demuxer {
const char *name; // must be statically allocated (e.g. a string literal)
sc_socket socket;
sc_thread thread;
struct sc_packet_sink *sinks[SC_DEMUXER_MAX_SINKS];
unsigned sink_count;
const struct sc_demuxer_callbacks *cbs;
void *cbs_userdata;
};
enum sc_demuxer_status {
SC_DEMUXER_STATUS_EOS,
SC_DEMUXER_STATUS_DISABLED,
SC_DEMUXER_STATUS_ERROR,
};
struct sc_demuxer_callbacks {
void (*on_ended)(struct sc_demuxer *demuxer, enum sc_demuxer_status,
void *userdata);
void (*on_ended)(struct sc_demuxer *demuxer, bool eos, void *userdata);
};
// The name must be statically allocated (e.g. a string literal)
@ -41,6 +36,9 @@ void
sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
const struct sc_demuxer_callbacks *cbs, void *cbs_userdata);
void
sc_demuxer_add_sink(struct sc_demuxer *demuxer, struct sc_packet_sink *sink);
bool
sc_demuxer_start(struct sc_demuxer *demuxer);

View File

@ -1,285 +0,0 @@
#include "display.h"
#include <assert.h>
#include "util/log.h"
bool
sc_display_init(struct sc_display *display, SDL_Window *window, bool mipmaps) {
display->renderer =
SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
if (!display->renderer) {
LOGE("Could not create renderer: %s", SDL_GetError());
return false;
}
SDL_RendererInfo renderer_info;
int r = SDL_GetRendererInfo(display->renderer, &renderer_info);
const char *renderer_name = r ? NULL : renderer_info.name;
LOGI("Renderer: %s", renderer_name ? renderer_name : "(unknown)");
display->mipmaps = false;
// starts with "opengl"
bool use_opengl = renderer_name && !strncmp(renderer_name, "opengl", 6);
if (use_opengl) {
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
// Persuade macOS to give us something better than OpenGL 2.1.
// If we create a Core Profile context, we get the best OpenGL version.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
SDL_GL_CONTEXT_PROFILE_CORE);
LOGD("Creating OpenGL Core Profile context");
display->gl_context = SDL_GL_CreateContext(window);
if (!display->gl_context) {
LOGE("Could not create OpenGL context: %s", SDL_GetError());
SDL_DestroyRenderer(display->renderer);
return false;
}
#endif
struct sc_opengl *gl = &display->gl;
sc_opengl_init(gl);
LOGI("OpenGL version: %s", gl->version);
if (mipmaps) {
bool supports_mipmaps =
sc_opengl_version_at_least(gl, 3, 0, /* OpenGL 3.0+ */
2, 0 /* OpenGL ES 2.0+ */);
if (supports_mipmaps) {
LOGI("Trilinear filtering enabled");
display->mipmaps = true;
} else {
LOGW("Trilinear filtering disabled "
"(OpenGL 3.0+ or ES 2.0+ required)");
}
} else {
LOGI("Trilinear filtering disabled");
}
} else if (mipmaps) {
LOGD("Trilinear filtering disabled (not an OpenGL renderer");
}
display->pending.flags = 0;
display->pending.frame = NULL;
return true;
}
void
sc_display_destroy(struct sc_display *display) {
if (display->pending.frame) {
av_frame_free(&display->pending.frame);
}
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
SDL_GL_DeleteContext(display->gl_context);
#endif
if (display->texture) {
SDL_DestroyTexture(display->texture);
}
SDL_DestroyRenderer(display->renderer);
}
static SDL_Texture *
sc_display_create_texture(struct sc_display *display,
struct sc_size size) {
SDL_Renderer *renderer = display->renderer;
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
SDL_TEXTUREACCESS_STREAMING,
size.width, size.height);
if (!texture) {
LOGD("Could not create texture: %s", SDL_GetError());
return NULL;
}
if (display->mipmaps) {
struct sc_opengl *gl = &display->gl;
SDL_GL_BindTexture(texture, NULL, NULL);
// Enable trilinear filtering for downscaling
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
gl->TexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.f);
SDL_GL_UnbindTexture(texture);
}
return texture;
}
static inline void
sc_display_set_pending_size(struct sc_display *display, struct sc_size size) {
assert(!display->texture);
display->pending.size = size;
display->pending.flags |= SC_DISPLAY_PENDING_FLAG_SIZE;
}
static bool
sc_display_set_pending_frame(struct sc_display *display, const AVFrame *frame) {
if (!display->pending.frame) {
display->pending.frame = av_frame_alloc();
if (!display->pending.frame) {
LOG_OOM();
return false;
}
}
int r = av_frame_ref(display->pending.frame, frame);
if (r) {
LOGE("Could not ref frame: %d", r);
return false;
}
display->pending.flags |= SC_DISPLAY_PENDING_FLAG_FRAME;
return true;
}
static bool
sc_display_apply_pending(struct sc_display *display) {
if (display->pending.flags & SC_DISPLAY_PENDING_FLAG_SIZE) {
assert(!display->texture);
display->texture =
sc_display_create_texture(display, display->pending.size);
if (!display->texture) {
return false;
}
display->pending.flags &= ~SC_DISPLAY_PENDING_FLAG_SIZE;
}
if (display->pending.flags & SC_DISPLAY_PENDING_FLAG_FRAME) {
assert(display->pending.frame);
bool ok = sc_display_update_texture(display, display->pending.frame);
if (!ok) {
return false;
}
av_frame_unref(display->pending.frame);
display->pending.flags &= ~SC_DISPLAY_PENDING_FLAG_FRAME;
}
return true;
}
static bool
sc_display_set_texture_size_internal(struct sc_display *display,
struct sc_size size) {
assert(size.width && size.height);
if (display->texture) {
SDL_DestroyTexture(display->texture);
}
display->texture = sc_display_create_texture(display, size);
if (!display->texture) {
return false;
}
LOGI("Texture: %" PRIu16 "x%" PRIu16, size.width, size.height);
return true;
}
enum sc_display_result
sc_display_set_texture_size(struct sc_display *display, struct sc_size size) {
bool ok = sc_display_set_texture_size_internal(display, size);
if (!ok) {
sc_display_set_pending_size(display, size);
return SC_DISPLAY_RESULT_PENDING;
}
return SC_DISPLAY_RESULT_OK;
}
static bool
sc_display_update_texture_internal(struct sc_display *display,
const AVFrame *frame) {
int ret = SDL_UpdateYUVTexture(display->texture, NULL,
frame->data[0], frame->linesize[0],
frame->data[1], frame->linesize[1],
frame->data[2], frame->linesize[2]);
if (ret) {
LOGD("Could not update texture: %s", SDL_GetError());
return false;
}
if (display->mipmaps) {
SDL_GL_BindTexture(display->texture, NULL, NULL);
display->gl.GenerateMipmap(GL_TEXTURE_2D);
SDL_GL_UnbindTexture(display->texture);
}
return true;
}
enum sc_display_result
sc_display_update_texture(struct sc_display *display, const AVFrame *frame) {
bool ok = sc_display_update_texture_internal(display, frame);
if (!ok) {
ok = sc_display_set_pending_frame(display, frame);
if (!ok) {
LOGE("Could not set pending frame");
return SC_DISPLAY_RESULT_ERROR;
}
return SC_DISPLAY_RESULT_PENDING;
}
return SC_DISPLAY_RESULT_OK;
}
enum sc_display_result
sc_display_render(struct sc_display *display, const SDL_Rect *geometry,
unsigned rotation) {
SDL_RenderClear(display->renderer);
if (display->pending.flags) {
bool ok = sc_display_apply_pending(display);
if (!ok) {
return SC_DISPLAY_RESULT_PENDING;
}
}
SDL_Renderer *renderer = display->renderer;
SDL_Texture *texture = display->texture;
if (rotation == 0) {
int ret = SDL_RenderCopy(renderer, texture, NULL, geometry);
if (ret) {
LOGE("Could not render texture: %s", SDL_GetError());
return SC_DISPLAY_RESULT_ERROR;
}
} else {
// rotation in RenderCopyEx() is clockwise, while screen->rotation is
// counterclockwise (to be consistent with --lock-video-orientation)
int cw_rotation = (4 - rotation) % 4;
double angle = 90 * cw_rotation;
const SDL_Rect *dstrect = NULL;
SDL_Rect rect;
if (rotation & 1) {
rect.x = geometry->x + (geometry->w - geometry->h) / 2;
rect.y = geometry->y + (geometry->h - geometry->w) / 2;
rect.w = geometry->h;
rect.h = geometry->w;
dstrect = &rect;
} else {
assert(rotation == 2);
dstrect = geometry;
}
int ret = SDL_RenderCopyEx(renderer, texture, NULL, dstrect, angle,
NULL, 0);
if (ret) {
LOGE("Could not render texture: %s", SDL_GetError());
return SC_DISPLAY_RESULT_ERROR;
}
}
SDL_RenderPresent(display->renderer);
return SC_DISPLAY_RESULT_OK;
}
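To make the clockwise/counterclockwise conversion above concrete: a user rotation of 1 (90° counterclockwise) gives cw_rotation = (4 - 1) % 4 = 3, hence angle = 270° clockwise, which is the same physical orientation; rotation = 2 gives 180°; rotation = 0 never reaches SDL_RenderCopyEx() since it takes the plain SDL_RenderCopy() path. The width/height swap in the (rotation & 1) branch keeps the rotated image centered inside the original destination rectangle when the frame turns by a quarter turn.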

View File

@ -1,59 +0,0 @@
#ifndef SC_DISPLAY_H
#define SC_DISPLAY_H
#include "common.h"
#include <stdbool.h>
#include <libavformat/avformat.h>
#include <SDL2/SDL.h>
#include "coords.h"
#include "opengl.h"
#ifdef __APPLE__
# define SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
#endif
struct sc_display {
SDL_Renderer *renderer;
SDL_Texture *texture;
struct sc_opengl gl;
#ifdef SC_DISPLAY_FORCE_OPENGL_CORE_PROFILE
SDL_GLContext *gl_context;
#endif
bool mipmaps;
struct {
#define SC_DISPLAY_PENDING_FLAG_SIZE 1
#define SC_DISPLAY_PENDING_FLAG_FRAME 2
int8_t flags;
struct sc_size size;
AVFrame *frame;
} pending;
};
enum sc_display_result {
SC_DISPLAY_RESULT_OK,
SC_DISPLAY_RESULT_PENDING,
SC_DISPLAY_RESULT_ERROR,
};
bool
sc_display_init(struct sc_display *display, SDL_Window *window, bool mipmaps);
void
sc_display_destroy(struct sc_display *display);
enum sc_display_result
sc_display_set_texture_size(struct sc_display *display, struct sc_size size);
enum sc_display_result
sc_display_update_texture(struct sc_display *display, const AVFrame *frame);
enum sc_display_result
sc_display_render(struct sc_display *display, const SDL_Rect *geometry,
unsigned rotation);
#endif

View File

@ -5,5 +5,3 @@
#define SC_EVENT_USB_DEVICE_DISCONNECTED (SDL_USEREVENT + 4)
#define SC_EVENT_DEMUXER_ERROR (SDL_USEREVENT + 5)
#define SC_EVENT_RECORDER_ERROR (SDL_USEREVENT + 6)
#define SC_EVENT_SCREEN_INIT_SIZE (SDL_USEREVENT + 7)
#define SC_EVENT_TIME_LIMIT_REACHED (SDL_USEREVENT + 8)

View File

@ -19,7 +19,7 @@ sc_file_pusher_init(struct sc_file_pusher *fp, const char *serial,
const char *push_target) {
assert(serial);
sc_vecdeque_init(&fp->queue);
cbuf_init(&fp->queue);
bool ok = sc_mutex_init(&fp->mutex);
if (!ok) {
@ -65,10 +65,9 @@ sc_file_pusher_destroy(struct sc_file_pusher *fp) {
sc_intr_destroy(&fp->intr);
free(fp->serial);
while (!sc_vecdeque_is_empty(&fp->queue)) {
struct sc_file_pusher_request *req = sc_vecdeque_popref(&fp->queue);
assert(req);
sc_file_pusher_request_destroy(req);
struct sc_file_pusher_request req;
while (cbuf_take(&fp->queue, &req)) {
sc_file_pusher_request_destroy(&req);
}
}
@ -92,20 +91,13 @@ sc_file_pusher_request(struct sc_file_pusher *fp,
};
sc_mutex_lock(&fp->mutex);
bool was_empty = sc_vecdeque_is_empty(&fp->queue);
bool res = sc_vecdeque_push(&fp->queue, req);
if (!res) {
LOG_OOM();
sc_mutex_unlock(&fp->mutex);
return false;
}
bool was_empty = cbuf_is_empty(&fp->queue);
bool res = cbuf_push(&fp->queue, req);
if (was_empty) {
sc_cond_signal(&fp->event_cond);
}
sc_mutex_unlock(&fp->mutex);
return true;
return res;
}
static int
@ -121,7 +113,7 @@ run_file_pusher(void *data) {
for (;;) {
sc_mutex_lock(&fp->mutex);
while (!fp->stopped && sc_vecdeque_is_empty(&fp->queue)) {
while (!fp->stopped && cbuf_is_empty(&fp->queue)) {
sc_cond_wait(&fp->event_cond, &fp->mutex);
}
if (fp->stopped) {
@ -129,9 +121,10 @@ run_file_pusher(void *data) {
sc_mutex_unlock(&fp->mutex);
break;
}
assert(!sc_vecdeque_is_empty(&fp->queue));
struct sc_file_pusher_request req = sc_vecdeque_pop(&fp->queue);
struct sc_file_pusher_request req;
bool non_empty = cbuf_take(&fp->queue, &req);
assert(non_empty);
(void) non_empty;
sc_mutex_unlock(&fp->mutex);
if (req.action == SC_FILE_PUSHER_ACTION_INSTALL_APK) {
@ -172,18 +165,14 @@ sc_file_pusher_start(struct sc_file_pusher *fp) {
void
sc_file_pusher_stop(struct sc_file_pusher *fp) {
if (fp->initialized) {
sc_mutex_lock(&fp->mutex);
fp->stopped = true;
sc_cond_signal(&fp->event_cond);
sc_intr_interrupt(&fp->intr);
sc_mutex_unlock(&fp->mutex);
}
}
void
sc_file_pusher_join(struct sc_file_pusher *fp) {
if (fp->initialized) {
sc_thread_join(&fp->thread, NULL);
}
}

View File

@ -5,9 +5,9 @@
#include <stdbool.h>
#include "util/intr.h"
#include "util/cbuf.h"
#include "util/thread.h"
#include "util/vecdeque.h"
#include "util/intr.h"
enum sc_file_pusher_action {
SC_FILE_PUSHER_ACTION_INSTALL_APK,
@ -19,7 +19,7 @@ struct sc_file_pusher_request {
char *file;
};
struct sc_file_pusher_request_queue SC_VECDEQUE(struct sc_file_pusher_request);
struct sc_file_pusher_request_queue CBUF(struct sc_file_pusher_request, 16);
struct sc_file_pusher {
char *serial;

View File

@ -96,7 +96,6 @@ run_fps_counter(void *data) {
bool
sc_fps_counter_start(struct sc_fps_counter *counter) {
sc_mutex_lock(&counter->mutex);
counter->interrupted = false;
counter->next_timestamp = sc_tick_now() + SC_FPS_COUNTER_INTERVAL;
counter->nr_rendered = 0;
counter->nr_skipped = 0;

View File

@ -271,7 +271,7 @@ error:
}
SDL_Surface *
scrcpy_icon_load(void) {
scrcpy_icon_load() {
char *icon_path = get_icon_path();
if (!icon_path) {
return NULL;

View File

@ -797,8 +797,7 @@ sc_input_manager_process_file(struct sc_input_manager *im,
}
void
sc_input_manager_handle_event(struct sc_input_manager *im,
const SDL_Event *event) {
sc_input_manager_handle_event(struct sc_input_manager *im, SDL_Event *event) {
bool control = im->controller;
switch (event->type) {
case SDL_TEXTINPUT:

View File

@ -61,7 +61,6 @@ sc_input_manager_init(struct sc_input_manager *im,
const struct sc_input_manager_params *params);
void
sc_input_manager_handle_event(struct sc_input_manager *im,
const SDL_Event *event);
sc_input_manager_handle_event(struct sc_input_manager *im, SDL_Event *event);
#endif

View File

@ -23,7 +23,7 @@
#include "util/str.h"
#endif
static int
int
main_scrcpy(int argc, char *argv[]) {
#ifdef _WIN32
// disable buffering, we want logs immediately
@ -39,32 +39,26 @@ main_scrcpy(int argc, char *argv[]) {
.opts = scrcpy_options_default,
.help = false,
.version = false,
.pause_on_exit = SC_PAUSE_ON_EXIT_FALSE,
};
#ifndef NDEBUG
args.opts.log_level = SC_LOG_LEVEL_DEBUG;
#endif
enum scrcpy_exit_code ret;
if (!scrcpy_parse_args(&args, argc, argv)) {
ret = SCRCPY_EXIT_FAILURE;
goto end;
return SCRCPY_EXIT_FAILURE;
}
sc_set_log_level(args.opts.log_level);
if (args.help) {
scrcpy_print_usage(argv[0]);
ret = SCRCPY_EXIT_SUCCESS;
goto end;
return SCRCPY_EXIT_SUCCESS;
}
if (args.version) {
scrcpy_print_version();
ret = SCRCPY_EXIT_SUCCESS;
goto end;
return SCRCPY_EXIT_SUCCESS;
}
#ifdef SCRCPY_LAVF_REQUIRES_REGISTER_ALL
@ -78,26 +72,18 @@ main_scrcpy(int argc, char *argv[]) {
#endif
if (!net_init()) {
ret = SCRCPY_EXIT_FAILURE;
goto end;
return SCRCPY_EXIT_FAILURE;
}
sc_log_configure();
#ifdef HAVE_USB
ret = args.opts.otg ? scrcpy_otg(&args.opts) : scrcpy(&args.opts);
enum scrcpy_exit_code ret = args.opts.otg ? scrcpy_otg(&args.opts)
: scrcpy(&args.opts);
#else
ret = scrcpy(&args.opts);
enum scrcpy_exit_code ret = scrcpy(&args.opts);
#endif
end:
if (args.pause_on_exit == SC_PAUSE_ON_EXIT_TRUE ||
(args.pause_on_exit == SC_PAUSE_ON_EXIT_IF_ERROR &&
ret != SCRCPY_EXIT_SUCCESS)) {
printf("Press Enter to continue...\n");
getchar();
}
return ret;
}

View File

@ -11,19 +11,15 @@ const struct scrcpy_options scrcpy_options_default = {
.audio_codec_options = NULL,
.video_encoder = NULL,
.audio_encoder = NULL,
.camera_id = NULL,
.camera_size = NULL,
.camera_ar = NULL,
.camera_fps = 0,
#ifdef HAVE_V4L2
.v4l2_device = NULL,
#endif
.log_level = SC_LOG_LEVEL_INFO,
.video_codec = SC_CODEC_H264,
.audio_codec = SC_CODEC_OPUS,
.video_source = SC_VIDEO_SOURCE_DISPLAY,
.audio_source = SC_AUDIO_SOURCE_AUTO,
.record_format = SC_RECORD_FORMAT_AUTO,
.keyboard_input_mode = SC_KEYBOARD_INPUT_MODE_INJECT,
.mouse_input_mode = SC_MOUSE_INPUT_MODE_INJECT,
.camera_facing = SC_CAMERA_FACING_ANY,
.port_range = {
.first = DEFAULT_LOCAL_PORT_RANGE_FIRST,
.last = DEFAULT_LOCAL_PORT_RANGE_LAST,
@ -46,13 +42,7 @@ const struct scrcpy_options scrcpy_options_default = {
.window_height = 0,
.display_id = 0,
.display_buffer = 0,
.audio_buffer = SC_TICK_FROM_MS(50),
.audio_output_buffer = SC_TICK_FROM_MS(5),
.time_limit = 0,
#ifdef HAVE_V4L2
.v4l2_device = NULL,
.v4l2_buffer = 0,
#endif
#ifdef HAVE_USB
.otg = false,
#endif
@ -60,8 +50,7 @@ const struct scrcpy_options scrcpy_options_default = {
.fullscreen = false,
.always_on_top = false,
.control = true,
.video_playback = true,
.audio_playback = true,
.display = true,
.turn_screen_off = false,
.key_inject_mode = SC_KEY_INJECT_MODE_MIXED,
.window_borderless = false,
@ -82,10 +71,7 @@ const struct scrcpy_options scrcpy_options_default = {
.cleanup = true,
.start_fps_counter = false,
.power_on = true,
.video = true,
.audio = true,
.require_audio = false,
.kill_adb_on_close = false,
.camera_high_speed = false,
.list = 0,
.list_encoders = false,
.list_displays = false,
};

View File

@ -21,45 +21,14 @@ enum sc_record_format {
SC_RECORD_FORMAT_AUTO,
SC_RECORD_FORMAT_MP4,
SC_RECORD_FORMAT_MKV,
SC_RECORD_FORMAT_M4A,
SC_RECORD_FORMAT_MKA,
SC_RECORD_FORMAT_OPUS,
SC_RECORD_FORMAT_AAC,
};
static inline bool
sc_record_format_is_audio_only(enum sc_record_format fmt) {
return fmt == SC_RECORD_FORMAT_M4A
|| fmt == SC_RECORD_FORMAT_MKA
|| fmt == SC_RECORD_FORMAT_OPUS
|| fmt == SC_RECORD_FORMAT_AAC;
}
enum sc_codec {
SC_CODEC_H264,
SC_CODEC_H265,
SC_CODEC_AV1,
SC_CODEC_OPUS,
SC_CODEC_AAC,
SC_CODEC_RAW,
};
enum sc_video_source {
SC_VIDEO_SOURCE_DISPLAY,
SC_VIDEO_SOURCE_CAMERA,
};
enum sc_audio_source {
SC_AUDIO_SOURCE_AUTO, // OUTPUT for video DISPLAY, MIC for video CAMERA
SC_AUDIO_SOURCE_OUTPUT,
SC_AUDIO_SOURCE_MIC,
};
enum sc_camera_facing {
SC_CAMERA_FACING_ANY,
SC_CAMERA_FACING_FRONT,
SC_CAMERA_FACING_BACK,
SC_CAMERA_FACING_EXTERNAL,
};
enum sc_lock_video_orientation {
@ -130,19 +99,15 @@ struct scrcpy_options {
const char *audio_codec_options;
const char *video_encoder;
const char *audio_encoder;
const char *camera_id;
const char *camera_size;
const char *camera_ar;
uint16_t camera_fps;
#ifdef HAVE_V4L2
const char *v4l2_device;
#endif
enum sc_log_level log_level;
enum sc_codec video_codec;
enum sc_codec audio_codec;
enum sc_video_source video_source;
enum sc_audio_source audio_source;
enum sc_record_format record_format;
enum sc_keyboard_input_mode keyboard_input_mode;
enum sc_mouse_input_mode mouse_input_mode;
enum sc_camera_facing camera_facing;
struct sc_port_range port_range;
uint32_t tunnel_host;
uint16_t tunnel_port;
@ -159,13 +124,7 @@ struct scrcpy_options {
uint16_t window_height;
uint32_t display_id;
sc_tick display_buffer;
sc_tick audio_buffer;
sc_tick audio_output_buffer;
sc_tick time_limit;
#ifdef HAVE_V4L2
const char *v4l2_device;
sc_tick v4l2_buffer;
#endif
#ifdef HAVE_USB
bool otg;
#endif
@ -173,8 +132,7 @@ struct scrcpy_options {
bool fullscreen;
bool always_on_top;
bool control;
bool video_playback;
bool audio_playback;
bool display;
bool turn_screen_off;
enum sc_key_inject_mode key_inject_mode;
bool window_borderless;
@ -195,16 +153,9 @@ struct scrcpy_options {
bool cleanup;
bool start_fps_counter;
bool power_on;
bool video;
bool audio;
bool require_audio;
bool kill_adb_on_close;
bool camera_high_speed;
#define SC_OPTION_LIST_ENCODERS 0x1
#define SC_OPTION_LIST_DISPLAYS 0x2
#define SC_OPTION_LIST_CAMERAS 0x4
#define SC_OPTION_LIST_CAMERA_SIZES 0x8
uint8_t list;
bool list_encoders;
bool list_displays;
};
extern const struct scrcpy_options scrcpy_options_default;

View File

@ -33,44 +33,50 @@ find_muxer(const char *name) {
return oformat;
}
static AVPacket *
sc_recorder_packet_ref(const AVPacket *packet) {
AVPacket *p = av_packet_alloc();
if (!p) {
static struct sc_record_packet *
sc_record_packet_new(const AVPacket *packet) {
struct sc_record_packet *rec = malloc(sizeof(*rec));
if (!rec) {
LOG_OOM();
return NULL;
}
if (av_packet_ref(p, packet)) {
av_packet_free(&p);
rec->packet = av_packet_alloc();
if (!rec->packet) {
LOG_OOM();
free(rec);
return NULL;
}
return p;
if (av_packet_ref(rec->packet, packet)) {
av_packet_free(&rec->packet);
free(rec);
return NULL;
}
return rec;
}
static void
sc_record_packet_delete(struct sc_record_packet *rec) {
av_packet_free(&rec->packet);
free(rec);
}
static void
sc_recorder_queue_clear(struct sc_recorder_queue *queue) {
while (!sc_vecdeque_is_empty(queue)) {
AVPacket *p = sc_vecdeque_pop(queue);
av_packet_free(&p);
while (!sc_queue_is_empty(queue)) {
struct sc_record_packet *rec;
sc_queue_take(queue, next, &rec);
sc_record_packet_delete(rec);
}
}
static const char *
sc_recorder_get_format_name(enum sc_record_format format) {
switch (format) {
case SC_RECORD_FORMAT_MP4:
case SC_RECORD_FORMAT_M4A:
case SC_RECORD_FORMAT_AAC:
return "mp4";
case SC_RECORD_FORMAT_MKV:
case SC_RECORD_FORMAT_MKA:
return "matroska";
case SC_RECORD_FORMAT_OPUS:
return "opus";
default:
return NULL;
case SC_RECORD_FORMAT_MP4: return "mp4";
case SC_RECORD_FORMAT_MKV: return "matroska";
default: return NULL;
}
}
@ -96,30 +102,23 @@ sc_recorder_rescale_packet(AVStream *stream, AVPacket *packet) {
}
static bool
sc_recorder_write_stream(struct sc_recorder *recorder,
struct sc_recorder_stream *st, AVPacket *packet) {
AVStream *stream = recorder->ctx->streams[st->index];
sc_recorder_write_stream(struct sc_recorder *recorder, int stream_index,
AVPacket *packet) {
AVStream *stream = recorder->ctx->streams[stream_index];
sc_recorder_rescale_packet(stream, packet);
if (st->last_pts != AV_NOPTS_VALUE && packet->pts <= st->last_pts) {
LOGW("Fixing PTS non monotonically increasing in stream %d "
"(%" PRIi64 " >= %" PRIi64 ")",
st->index, st->last_pts, packet->pts);
packet->pts = ++st->last_pts;
packet->dts = packet->pts;
} else {
st->last_pts = packet->pts;
}
return av_interleaved_write_frame(recorder->ctx, packet) >= 0;
}
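A concrete example of the PTS fix-up above (illustrative numbers): if st->last_pts is 100000 and the next packet arrives with pts 100000 or lower, the packet is rewritten to pts = dts = 100001 so that the muxer never sees a non-increasing timestamp; otherwise last_pts simply tracks the incoming pts.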
static inline bool
sc_recorder_write_video(struct sc_recorder *recorder, AVPacket *packet) {
return sc_recorder_write_stream(recorder, &recorder->video_stream, packet);
return sc_recorder_write_stream(recorder, recorder->video_stream_index,
packet);
}
static inline bool
sc_recorder_write_audio(struct sc_recorder *recorder, AVPacket *packet) {
return sc_recorder_write_stream(recorder, &recorder->audio_stream, packet);
return sc_recorder_write_stream(recorder, recorder->audio_stream_index,
packet);
}
static bool
@ -165,14 +164,80 @@ sc_recorder_close_output_file(struct sc_recorder *recorder) {
avformat_free_context(recorder->ctx);
}
static bool
sc_recorder_wait_video_stream(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
while (!recorder->video_codec && !recorder->stopped) {
sc_cond_wait(&recorder->stream_cond, &recorder->mutex);
}
const AVCodec *codec = recorder->video_codec;
sc_mutex_unlock(&recorder->mutex);
if (codec) {
AVStream *stream = avformat_new_stream(recorder->ctx, codec);
if (!stream) {
return false;
}
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
stream->codecpar->codec_id = codec->id;
stream->codecpar->format = AV_PIX_FMT_YUV420P;
stream->codecpar->width = recorder->declared_frame_size.width;
stream->codecpar->height = recorder->declared_frame_size.height;
recorder->video_stream_index = stream->index;
}
return true;
}
static bool
sc_recorder_wait_audio_stream(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
while (!recorder->audio_codec && !recorder->audio_disabled
&& !recorder->stopped) {
sc_cond_wait(&recorder->stream_cond, &recorder->mutex);
}
if (recorder->audio_disabled) {
// Reset audio flag. From there, the recorder thread may access this
// flag without any mutex.
recorder->audio = false;
}
const AVCodec *codec = recorder->audio_codec;
sc_mutex_unlock(&recorder->mutex);
if (codec) {
AVStream *stream = avformat_new_stream(recorder->ctx, codec);
if (!stream) {
return false;
}
stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
stream->codecpar->codec_id = codec->id;
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
stream->codecpar->ch_layout.nb_channels = 2;
#else
stream->codecpar->channel_layout = AV_CH_LAYOUT_STEREO;
stream->codecpar->channels = 2;
#endif
stream->codecpar->sample_rate = 48000;
recorder->audio_stream_index = stream->index;
}
return true;
}
static inline bool
sc_recorder_has_empty_queues(struct sc_recorder *recorder) {
if (recorder->video && sc_vecdeque_is_empty(&recorder->video_queue)) {
if (sc_queue_is_empty(&recorder->video_queue)) {
// The video queue is empty
return true;
}
if (recorder->audio && sc_vecdeque_is_empty(&recorder->audio_queue)) {
if (recorder->audio && sc_queue_is_empty(&recorder->audio_queue)) {
// The audio queue is empty (when audio is enabled)
return true;
}
@ -185,68 +250,59 @@ static bool
sc_recorder_process_header(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
while (!recorder->stopped &&
((recorder->video && !recorder->video_init)
|| (recorder->audio && !recorder->audio_init)
|| sc_recorder_has_empty_queues(recorder))) {
sc_cond_wait(&recorder->cond, &recorder->mutex);
while (!recorder->stopped && sc_recorder_has_empty_queues(recorder)) {
sc_cond_wait(&recorder->queue_cond, &recorder->mutex);
}
if (recorder->video && sc_vecdeque_is_empty(&recorder->video_queue)) {
assert(recorder->stopped);
if (recorder->stopped && sc_queue_is_empty(&recorder->video_queue)) {
// If the recorder is stopped, don't process anything unless there is
// at least one video packet
sc_mutex_unlock(&recorder->mutex);
return false;
}
AVPacket *video_pkt = NULL;
if (!sc_vecdeque_is_empty(&recorder->video_queue)) {
assert(recorder->video);
video_pkt = sc_vecdeque_pop(&recorder->video_queue);
}
struct sc_record_packet *video_pkt;
sc_queue_take(&recorder->video_queue, next, &video_pkt);
AVPacket *audio_pkt = NULL;
if (!sc_vecdeque_is_empty(&recorder->audio_queue)) {
struct sc_record_packet *audio_pkt = NULL;
if (!sc_queue_is_empty(&recorder->audio_queue)) {
assert(recorder->audio);
audio_pkt = sc_vecdeque_pop(&recorder->audio_queue);
sc_queue_take(&recorder->audio_queue, next, &audio_pkt);
}
sc_mutex_unlock(&recorder->mutex);
int ret = false;
if (video_pkt) {
if (video_pkt->pts != AV_NOPTS_VALUE) {
if (video_pkt->packet->pts != AV_NOPTS_VALUE) {
LOGE("The first video packet is not a config packet");
goto end;
}
assert(recorder->video_stream.index >= 0);
assert(recorder->video_stream_index >= 0);
AVStream *video_stream =
recorder->ctx->streams[recorder->video_stream.index];
bool ok = sc_recorder_set_extradata(video_stream, video_pkt);
recorder->ctx->streams[recorder->video_stream_index];
bool ok = sc_recorder_set_extradata(video_stream, video_pkt->packet);
if (!ok) {
goto end;
}
}
if (audio_pkt) {
if (audio_pkt->pts != AV_NOPTS_VALUE) {
if (audio_pkt->packet->pts != AV_NOPTS_VALUE) {
LOGE("The first audio packet is not a config packet");
goto end;
}
assert(recorder->audio_stream.index >= 0);
assert(recorder->audio_stream_index >= 0);
AVStream *audio_stream =
recorder->ctx->streams[recorder->audio_stream.index];
bool ok = sc_recorder_set_extradata(audio_stream, audio_pkt);
recorder->ctx->streams[recorder->audio_stream_index];
ok = sc_recorder_set_extradata(audio_stream, audio_pkt->packet);
if (!ok) {
goto end;
}
}
bool ok = avformat_write_header(recorder->ctx, NULL) >= 0;
ok = avformat_write_header(recorder->ctx, NULL) >= 0;
if (!ok) {
LOGE("Failed to write header to %s", recorder->filename);
goto end;
@ -255,11 +311,9 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
ret = true;
end:
if (video_pkt) {
av_packet_free(&video_pkt);
}
sc_record_packet_delete(video_pkt);
if (audio_pkt) {
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
}
return ret;
@ -274,12 +328,12 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
return false;
}
AVPacket *video_pkt = NULL;
AVPacket *audio_pkt = NULL;
struct sc_record_packet *video_pkt = NULL;
struct sc_record_packet *audio_pkt = NULL;
// We can write a video packet only once we received the next one so that
// we can set its duration (next_pts - current_pts)
AVPacket *video_pkt_previous = NULL;
struct sc_record_packet *video_pkt_previous = NULL;
bool error = false;
@ -287,43 +341,37 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
while (!recorder->stopped) {
if (recorder->video && !video_pkt &&
!sc_vecdeque_is_empty(&recorder->video_queue)) {
if (!video_pkt && !sc_queue_is_empty(&recorder->video_queue)) {
// A new packet may be assigned to video_pkt and be processed
break;
}
if (recorder->audio && !audio_pkt
&& !sc_vecdeque_is_empty(&recorder->audio_queue)) {
&& !sc_queue_is_empty(&recorder->audio_queue)) {
// A new packet may be assigned to audio_pkt and be processed
break;
}
sc_cond_wait(&recorder->cond, &recorder->mutex);
sc_cond_wait(&recorder->queue_cond, &recorder->mutex);
}
// If stopped is set, continue to process the remaining events (to
// finish the recording) before actually stopping.
// If there is no video, then the video_queue will remain empty forever
// and video_pkt will always be NULL.
assert(recorder->video || (!video_pkt
&& sc_vecdeque_is_empty(&recorder->video_queue)));
// If there is no audio, then the audio_queue will remain empty forever
// and audio_pkt will always be NULL.
assert(recorder->audio || (!audio_pkt
&& sc_vecdeque_is_empty(&recorder->audio_queue)));
assert(recorder->audio
|| (!audio_pkt && sc_queue_is_empty(&recorder->audio_queue)));
if (!video_pkt && !sc_vecdeque_is_empty(&recorder->video_queue)) {
video_pkt = sc_vecdeque_pop(&recorder->video_queue);
if (!video_pkt && !sc_queue_is_empty(&recorder->video_queue)) {
sc_queue_take(&recorder->video_queue, next, &video_pkt);
}
if (!audio_pkt && !sc_vecdeque_is_empty(&recorder->audio_queue)) {
audio_pkt = sc_vecdeque_pop(&recorder->audio_queue);
if (!audio_pkt && !sc_queue_is_empty(&recorder->audio_queue)) {
sc_queue_take(&recorder->audio_queue, next, &audio_pkt);
}
if (recorder->stopped && !video_pkt && !audio_pkt) {
assert(sc_vecdeque_is_empty(&recorder->video_queue));
assert(sc_vecdeque_is_empty(&recorder->audio_queue));
assert(sc_queue_is_empty(&recorder->video_queue));
assert(sc_queue_is_empty(&recorder->audio_queue));
sc_mutex_unlock(&recorder->mutex);
break;
}
@ -335,35 +383,38 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
// Ignore further config packets (e.g. on device orientation
// change). The next non-config packet will have the config packet
// data prepended.
if (video_pkt && video_pkt->pts == AV_NOPTS_VALUE) {
av_packet_free(&video_pkt);
if (video_pkt && video_pkt->packet->pts == AV_NOPTS_VALUE) {
sc_record_packet_delete(video_pkt);
video_pkt = NULL;
}
if (audio_pkt && audio_pkt->pts == AV_NOPTS_VALUE) {
av_packet_free(&audio_pkt);
if (audio_pkt && audio_pkt->packet->pts == AV_NOPTS_VALUE) {
sc_record_packet_delete(audio_pkt);
audio_pkt = NULL;
}
if (pts_origin == AV_NOPTS_VALUE) {
if (!recorder->audio) {
assert(video_pkt);
pts_origin = video_pkt->pts;
} else if (!recorder->video) {
assert(audio_pkt);
pts_origin = audio_pkt->pts;
pts_origin = video_pkt->packet->pts;
} else if (video_pkt && audio_pkt) {
pts_origin = MIN(video_pkt->pts, audio_pkt->pts);
pts_origin =
MIN(video_pkt->packet->pts, audio_pkt->packet->pts);
} else if (recorder->stopped) {
if (video_pkt) {
// The recorder is stopped without audio, record the video
// packets
pts_origin = video_pkt->pts;
pts_origin = video_pkt->packet->pts;
} else {
// Fail if there is no video
error = true;
goto end;
}
// If the recorder is stopped while one of the streams has no
// packets, then we must avoid a live-loop and correctly record
// the stream having packets.
pts_origin = video_pkt ? video_pkt->packet->pts
: audio_pkt->packet->pts;
} else {
// We need both video and audio packets to initialize pts_origin
continue;
@ -373,16 +424,17 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
assert(pts_origin != AV_NOPTS_VALUE);
if (video_pkt) {
video_pkt->pts -= pts_origin;
video_pkt->dts = video_pkt->pts;
video_pkt->packet->pts -= pts_origin;
video_pkt->packet->dts = video_pkt->packet->pts;
if (video_pkt_previous) {
// we now know the duration of the previous packet
video_pkt_previous->duration = video_pkt->pts
- video_pkt_previous->pts;
video_pkt_previous->packet->duration =
video_pkt->packet->pts - video_pkt_previous->packet->pts;
bool ok = sc_recorder_write_video(recorder, video_pkt_previous);
av_packet_free(&video_pkt_previous);
bool ok = sc_recorder_write_video(recorder,
video_pkt_previous->packet);
sc_record_packet_delete(video_pkt_previous);
if (!ok) {
LOGE("Could not record video packet");
error = true;
@ -395,34 +447,34 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
}
if (audio_pkt) {
audio_pkt->pts -= pts_origin;
audio_pkt->dts = audio_pkt->pts;
audio_pkt->packet->pts -= pts_origin;
audio_pkt->packet->dts = audio_pkt->packet->pts;
bool ok = sc_recorder_write_audio(recorder, audio_pkt);
bool ok = sc_recorder_write_audio(recorder, audio_pkt->packet);
if (!ok) {
LOGE("Could not record audio packet");
error = true;
goto end;
}
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
audio_pkt = NULL;
}
}
// Write the last video packet
AVPacket *last = video_pkt_previous;
struct sc_record_packet *last = video_pkt_previous;
if (last) {
// assign an arbitrary duration to the last packet
last->duration = 100000;
bool ok = sc_recorder_write_video(recorder, last);
last->packet->duration = 100000;
bool ok = sc_recorder_write_video(recorder, last->packet);
if (!ok) {
// failing to write the last frame is not very serious, no
// future frame may depend on it, so the resulting file
// will still be valid
LOGW("Could not record last packet");
}
av_packet_free(&last);
sc_record_packet_delete(last);
}
int ret = av_write_trailer(recorder->ctx);
@ -433,10 +485,10 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
end:
if (video_pkt) {
av_packet_free(&video_pkt);
sc_record_packet_delete(video_pkt);
}
if (audio_pkt) {
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
}
return !error;
@ -449,6 +501,22 @@ sc_recorder_record(struct sc_recorder *recorder) {
return false;
}
ok = sc_recorder_wait_video_stream(recorder);
if (!ok) {
sc_recorder_close_output_file(recorder);
return false;
}
if (recorder->audio) {
ok = sc_recorder_wait_audio_stream(recorder);
if (!ok) {
sc_recorder_close_output_file(recorder);
return false;
}
}
// If recorder->stopped, process any queued packet anyway
ok = sc_recorder_process_packets(recorder);
sc_recorder_close_output_file(recorder);
return ok;
@ -458,10 +526,6 @@ static int
run_recorder(void *data) {
struct sc_recorder *recorder = data;
// Recording is a background task
bool ok = sc_thread_set_priority(SC_THREAD_PRIORITY_LOW);
(void) ok; // We don't care if it worked
bool success = sc_recorder_record(recorder);
sc_mutex_lock(&recorder->mutex);
@ -469,7 +533,6 @@ run_recorder(void *data) {
recorder->stopped = true;
// Discard pending packets
sc_recorder_queue_clear(&recorder->video_queue);
sc_recorder_queue_clear(&recorder->audio_queue);
sc_mutex_unlock(&recorder->mutex);
if (success) {
@ -489,10 +552,9 @@ run_recorder(void *data) {
static bool
sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
AVCodecContext *ctx) {
const AVCodec *codec) {
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
// only written from this thread, no need to lock
assert(!recorder->video_init);
assert(codec);
sc_mutex_lock(&recorder->mutex);
if (recorder->stopped) {
@ -500,22 +562,8 @@ sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
return false;
}
AVStream *stream = avformat_new_stream(recorder->ctx, ctx->codec);
if (!stream) {
sc_mutex_unlock(&recorder->mutex);
return false;
}
int r = avcodec_parameters_from_context(stream->codecpar, ctx);
if (r < 0) {
sc_mutex_unlock(&recorder->mutex);
return false;
}
recorder->video_stream.index = stream->index;
recorder->video_init = true;
sc_cond_signal(&recorder->cond);
recorder->video_codec = codec;
sc_cond_signal(&recorder->stream_cond);
sc_mutex_unlock(&recorder->mutex);
return true;
@ -524,13 +572,11 @@ sc_recorder_video_packet_sink_open(struct sc_packet_sink *sink,
static void
sc_recorder_video_packet_sink_close(struct sc_packet_sink *sink) {
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
// only written from this thread, no need to lock
assert(recorder->video_init);
sc_mutex_lock(&recorder->mutex);
// EOS also stops the recorder
recorder->stopped = true;
sc_cond_signal(&recorder->cond);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
}
@ -538,8 +584,6 @@ static bool
sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
const AVPacket *packet) {
struct sc_recorder *recorder = DOWNCAST_VIDEO(sink);
// only written from this thread, no need to lock
assert(recorder->video_init);
sc_mutex_lock(&recorder->mutex);
@ -549,23 +593,17 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
return false;
}
AVPacket *rec = sc_recorder_packet_ref(packet);
struct sc_record_packet *rec = sc_record_packet_new(packet);
if (!rec) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->stream_index = recorder->video_stream.index;
rec->packet->stream_index = 0;
bool ok = sc_vecdeque_push(&recorder->video_queue, rec);
if (!ok) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
sc_cond_signal(&recorder->cond);
sc_queue_push(&recorder->video_queue, next, rec);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
return true;
@ -573,30 +611,16 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
static bool
sc_recorder_audio_packet_sink_open(struct sc_packet_sink *sink,
AVCodecContext *ctx) {
const AVCodec *codec) {
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
assert(recorder->audio);
// only written from this thread, no need to lock
assert(!recorder->audio_init);
assert(!recorder->audio_disabled);
assert(codec);
sc_mutex_lock(&recorder->mutex);
AVStream *stream = avformat_new_stream(recorder->ctx, ctx->codec);
if (!stream) {
sc_mutex_unlock(&recorder->mutex);
return false;
}
int r = avcodec_parameters_from_context(stream->codecpar, ctx);
if (r < 0) {
sc_mutex_unlock(&recorder->mutex);
return false;
}
recorder->audio_stream.index = stream->index;
recorder->audio_init = true;
sc_cond_signal(&recorder->cond);
recorder->audio_codec = codec;
sc_cond_signal(&recorder->stream_cond);
sc_mutex_unlock(&recorder->mutex);
return true;
@ -607,12 +631,12 @@ sc_recorder_audio_packet_sink_close(struct sc_packet_sink *sink) {
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
assert(recorder->audio);
// only written from this thread, no need to lock
assert(recorder->audio_init);
assert(!recorder->audio_disabled);
sc_mutex_lock(&recorder->mutex);
// EOS also stops the recorder
recorder->stopped = true;
sc_cond_signal(&recorder->cond);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
}
@ -622,7 +646,7 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
assert(recorder->audio);
// only written from this thread, no need to lock
assert(recorder->audio_init);
assert(!recorder->audio_disabled);
sc_mutex_lock(&recorder->mutex);
@ -632,23 +656,17 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
return false;
}
AVPacket *rec = sc_recorder_packet_ref(packet);
struct sc_record_packet *rec = sc_record_packet_new(packet);
if (!rec) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->stream_index = recorder->audio_stream.index;
rec->packet->stream_index = 1;
bool ok = sc_vecdeque_push(&recorder->audio_queue, rec);
if (!ok) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
sc_cond_signal(&recorder->cond);
sc_queue_push(&recorder->audio_queue, next, rec);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
return true;
@ -659,26 +677,21 @@ sc_recorder_audio_packet_sink_disable(struct sc_packet_sink *sink) {
struct sc_recorder *recorder = DOWNCAST_AUDIO(sink);
assert(recorder->audio);
// only written from this thread, no need to lock
assert(!recorder->audio_init);
assert(!recorder->audio_disabled);
assert(!recorder->audio_codec);
LOGW("Audio stream recording disabled");
sc_mutex_lock(&recorder->mutex);
recorder->audio = false;
recorder->audio_init = true;
sc_cond_signal(&recorder->cond);
recorder->audio_disabled = true;
sc_cond_signal(&recorder->stream_cond);
sc_mutex_unlock(&recorder->mutex);
}
static void
sc_recorder_stream_init(struct sc_recorder_stream *stream) {
stream->index = -1;
stream->last_pts = AV_NOPTS_VALUE;
}
bool
sc_recorder_init(struct sc_recorder *recorder, const char *filename,
enum sc_record_format format, bool video, bool audio,
enum sc_record_format format, bool audio,
struct sc_size declared_frame_size,
const struct sc_recorder_callbacks *cbs, void *cbs_userdata) {
recorder->filename = strdup(filename);
if (!recorder->filename) {
@ -691,32 +704,36 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
goto error_free_filename;
}
ok = sc_cond_init(&recorder->cond);
ok = sc_cond_init(&recorder->queue_cond);
if (!ok) {
goto error_mutex_destroy;
}
assert(video || audio);
recorder->video = video;
ok = sc_cond_init(&recorder->stream_cond);
if (!ok) {
goto error_queue_cond_destroy;
}
recorder->audio = audio;
sc_vecdeque_init(&recorder->video_queue);
sc_vecdeque_init(&recorder->audio_queue);
sc_queue_init(&recorder->video_queue);
sc_queue_init(&recorder->audio_queue);
recorder->stopped = false;
recorder->video_init = false;
recorder->audio_init = false;
recorder->video_codec = NULL;
recorder->audio_codec = NULL;
recorder->audio_disabled = false;
sc_recorder_stream_init(&recorder->video_stream);
sc_recorder_stream_init(&recorder->audio_stream);
recorder->video_stream_index = -1;
recorder->audio_stream_index = -1;
recorder->format = format;
recorder->declared_frame_size = declared_frame_size;
assert(cbs && cbs->on_ended);
recorder->cbs = cbs;
recorder->cbs_userdata = cbs_userdata;
if (video) {
static const struct sc_packet_sink_ops video_ops = {
.open = sc_recorder_video_packet_sink_open,
.close = sc_recorder_video_packet_sink_close,
@ -724,7 +741,6 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
};
recorder->video_packet_sink.ops = &video_ops;
}
if (audio) {
static const struct sc_packet_sink_ops audio_ops = {
@ -739,6 +755,8 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
return true;
error_queue_cond_destroy:
sc_cond_destroy(&recorder->queue_cond);
error_mutex_destroy:
sc_mutex_destroy(&recorder->mutex);
error_free_filename:
@ -763,7 +781,8 @@ void
sc_recorder_stop(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
recorder->stopped = true;
sc_cond_signal(&recorder->cond);
sc_cond_signal(&recorder->queue_cond);
sc_cond_signal(&recorder->stream_cond);
sc_mutex_unlock(&recorder->mutex);
}
@ -774,7 +793,8 @@ sc_recorder_join(struct sc_recorder *recorder) {
void
sc_recorder_destroy(struct sc_recorder *recorder) {
sc_cond_destroy(&recorder->cond);
sc_cond_destroy(&recorder->stream_cond);
sc_cond_destroy(&recorder->queue_cond);
sc_mutex_destroy(&recorder->mutex);
free(recorder->filename);
}

View File

@ -9,16 +9,16 @@
#include "coords.h"
#include "options.h"
#include "trait/packet_sink.h"
#include "util/queue.h"
#include "util/thread.h"
#include "util/vecdeque.h"
struct sc_recorder_queue SC_VECDEQUE(AVPacket *);
struct sc_recorder_stream {
int index;
int64_t last_pts;
struct sc_record_packet {
AVPacket *packet;
struct sc_record_packet *next;
};
struct sc_recorder_queue SC_QUEUE(struct sc_record_packet);
struct sc_recorder {
struct sc_packet_sink video_packet_sink;
struct sc_packet_sink audio_packet_sink;
@ -32,26 +32,30 @@ struct sc_recorder {
* may access it without data races.
*/
bool audio;
bool video;
char *filename;
enum sc_record_format format;
AVFormatContext *ctx;
struct sc_size declared_frame_size;
sc_thread thread;
sc_mutex mutex;
sc_cond cond;
sc_cond queue_cond;
// set on sc_recorder_stop(), packet_sink close or recording failure
bool stopped;
struct sc_recorder_queue video_queue;
struct sc_recorder_queue audio_queue;
// wake up the recorder thread once the video or audio codec is known
bool video_init;
bool audio_init;
sc_cond stream_cond;
const AVCodec *video_codec;
const AVCodec *audio_codec;
// Instead of providing an audio_codec, the demuxer may notify that the
// stream is disabled if the device could not capture audio
bool audio_disabled;
struct sc_recorder_stream video_stream;
struct sc_recorder_stream audio_stream;
int video_stream_index;
int audio_stream_index;
const struct sc_recorder_callbacks *cbs;
void *cbs_userdata;
@ -64,7 +68,8 @@ struct sc_recorder_callbacks {
bool
sc_recorder_init(struct sc_recorder *recorder, const char *filename,
enum sc_record_format format, bool video, bool audio,
enum sc_record_format format, bool audio,
struct sc_size declared_frame_size,
const struct sc_recorder_callbacks *cbs, void *cbs_userdata);
bool

View File

@ -16,7 +16,6 @@
#include "audio_player.h"
#include "controller.h"
#include "decoder.h"
#include "delay_buffer.h"
#include "demuxer.h"
#include "events.h"
#include "file_pusher.h"
@ -35,7 +34,6 @@
#include "util/log.h"
#include "util/net.h"
#include "util/rand.h"
#include "util/timeout.h"
#ifdef HAVE_V4L2
# include "v4l2_sink.h"
#endif
@ -49,10 +47,8 @@ struct scrcpy {
struct sc_decoder video_decoder;
struct sc_decoder audio_decoder;
struct sc_recorder recorder;
struct sc_delay_buffer display_buffer;
#ifdef HAVE_V4L2
struct sc_v4l2_sink v4l2_sink;
struct sc_delay_buffer v4l2_buffer;
#endif
struct sc_controller controller;
struct sc_file_pusher file_pusher;
@ -74,7 +70,6 @@ struct scrcpy {
struct sc_hid_mouse mouse_hid;
#endif
};
struct sc_timeout timeout;
};
static inline void
@ -90,7 +85,7 @@ push_event(uint32_t type, const char *name) {
#define PUSH_EVENT(TYPE) push_event(TYPE, # TYPE)
#ifdef _WIN32
static BOOL WINAPI windows_ctrl_handler(DWORD ctrl_type) {
BOOL WINAPI windows_ctrl_handler(DWORD ctrl_type) {
if (ctrl_type == CTRL_C_EVENT) {
PUSH_EVENT(SDL_QUIT);
return TRUE;
@ -139,7 +134,7 @@ sdl_set_hints(const char *render_driver) {
}
static void
sdl_configure(bool video_playback, bool disable_screensaver) {
sdl_configure(bool display, bool disable_screensaver) {
#ifdef _WIN32
// Clean up properly on Ctrl+C on Windows
bool ok = SetConsoleCtrlHandler(windows_ctrl_handler, TRUE);
@ -148,7 +143,7 @@ sdl_configure(bool video_playback, bool disable_screensaver) {
}
#endif // _WIN32
if (!video_playback) {
if (!display) {
return;
}
@ -173,16 +168,11 @@ event_loop(struct scrcpy *s) {
case SC_EVENT_RECORDER_ERROR:
LOGE("Recorder error");
return SCRCPY_EXIT_FAILURE;
case SC_EVENT_TIME_LIMIT_REACHED:
LOGI("Time limit reached");
return SCRCPY_EXIT_SUCCESS;
case SDL_QUIT:
LOGD("User requested to quit");
return SCRCPY_EXIT_SUCCESS;
default:
if (!sc_screen_handle_event(&s->screen, &event)) {
return SCRCPY_EXIT_FAILURE;
}
sc_screen_handle_event(&s->screen, &event);
break;
}
}
@ -228,15 +218,12 @@ sc_recorder_on_ended(struct sc_recorder *recorder, bool success,
}
static void
sc_video_demuxer_on_ended(struct sc_demuxer *demuxer,
enum sc_demuxer_status status, void *userdata) {
sc_video_demuxer_on_ended(struct sc_demuxer *demuxer, bool eos,
void *userdata) {
(void) demuxer;
(void) userdata;
// The device may not decide to disable the video
assert(status != SC_DEMUXER_STATUS_DISABLED);
if (status == SC_DEMUXER_STATUS_EOS) {
if (eos) {
PUSH_EVENT(SC_EVENT_DEVICE_DISCONNECTED);
} else {
PUSH_EVENT(SC_EVENT_DEMUXER_ERROR);
@ -244,19 +231,20 @@ sc_video_demuxer_on_ended(struct sc_demuxer *demuxer,
}
static void
sc_audio_demuxer_on_ended(struct sc_demuxer *demuxer,
enum sc_demuxer_status status, void *userdata) {
sc_audio_demuxer_on_ended(struct sc_demuxer *demuxer, bool eos,
void *userdata) {
(void) demuxer;
(void) userdata;
const struct scrcpy_options *options = userdata;
// Contrary to the video demuxer, keep mirroring if only the audio fails.
// 'eos' is true on end-of-stream, including when audio capture is not
// possible on the device (so that scrcpy continues to mirror video without
// failing).
// However, if an audio configuration failure occurs (for example the user
// explicitly selected an unknown audio encoder), 'eos' is false and scrcpy
// must exit.
// Contrary to the video demuxer, keep mirroring if only the audio fails
// (unless --require-audio is set).
if (status == SC_DEMUXER_STATUS_EOS) {
PUSH_EVENT(SC_EVENT_DEVICE_DISCONNECTED);
} else if (status == SC_DEMUXER_STATUS_ERROR
|| (status == SC_DEMUXER_STATUS_DISABLED
&& options->require_audio)) {
if (!eos) {
PUSH_EVENT(SC_EVENT_DEMUXER_ERROR);
}
}
@ -287,17 +275,9 @@ sc_server_on_disconnected(struct sc_server *server, void *userdata) {
// event
}
static void
sc_timeout_on_timeout(struct sc_timeout *timeout, void *userdata) {
(void) timeout;
(void) userdata;
PUSH_EVENT(SC_EVENT_TIME_LIMIT_REACHED);
}
// Generate a scrcpy id to differentiate multiple running scrcpy instances
static uint32_t
scrcpy_generate_scid(void) {
scrcpy_generate_scid() {
struct sc_rand rand;
sc_rand_init(&rand);
// Only use 31 bits to avoid issues with signed values on the Java-side
@ -336,8 +316,6 @@ scrcpy(struct scrcpy_options *options) {
bool controller_initialized = false;
bool controller_started = false;
bool screen_initialized = false;
bool timeout_initialized = false;
bool timeout_started = false;
struct sc_acksync *acksync = NULL;
@ -351,9 +329,6 @@ scrcpy(struct scrcpy_options *options) {
.log_level = options->log_level,
.video_codec = options->video_codec,
.audio_codec = options->audio_codec,
.video_source = options->video_source,
.audio_source = options->audio_source,
.camera_facing = options->camera_facing,
.crop = options->crop,
.port_range = options->port_range,
.tunnel_host = options->tunnel_host,
@ -365,7 +340,6 @@ scrcpy(struct scrcpy_options *options) {
.lock_video_orientation = options->lock_video_orientation,
.control = options->control,
.display_id = options->display_id,
.video = options->video,
.audio = options->audio,
.show_touches = options->show_touches,
.stay_awake = options->stay_awake,
@ -373,10 +347,6 @@ scrcpy(struct scrcpy_options *options) {
.audio_codec_options = options->audio_codec_options,
.video_encoder = options->video_encoder,
.audio_encoder = options->audio_encoder,
.camera_id = options->camera_id,
.camera_size = options->camera_size,
.camera_ar = options->camera_ar,
.camera_fps = options->camera_fps,
.force_adb_forward = options->force_adb_forward,
.power_off_on_close = options->power_off_on_close,
.clipboard_autosync = options->clipboard_autosync,
@ -385,9 +355,8 @@ scrcpy(struct scrcpy_options *options) {
.tcpip_dst = options->tcpip_dst,
.cleanup = options->cleanup,
.power_on = options->power_on,
.kill_adb_on_close = options->kill_adb_on_close,
.camera_high_speed = options->camera_high_speed,
.list = options->list,
.list_encoders = options->list_encoders,
.list_displays = options->list_displays,
};
static const struct sc_server_callbacks cbs = {
@ -405,32 +374,30 @@ scrcpy(struct scrcpy_options *options) {
server_started = true;
if (options->list) {
if (options->list_encoders || options->list_displays) {
bool ok = await_for_server(NULL);
ret = ok ? SCRCPY_EXIT_SUCCESS : SCRCPY_EXIT_FAILURE;
goto end;
}
// playback implies capture
assert(!options->video_playback || options->video);
assert(!options->audio_playback || options->audio);
if (options->video_playback) {
if (options->display) {
sdl_set_hints(options->render_driver);
}
// Initialize SDL video in addition if display is enabled
if (options->display) {
if (SDL_Init(SDL_INIT_VIDEO)) {
LOGE("Could not initialize SDL video: %s", SDL_GetError());
goto end;
}
}
if (options->audio_playback) {
if (SDL_Init(SDL_INIT_AUDIO)) {
if (options->audio && SDL_Init(SDL_INIT_AUDIO)) {
LOGE("Could not initialize SDL audio: %s", SDL_GetError());
goto end;
}
}
sdl_configure(options->video_playback, options->disable_screensaver);
sdl_configure(options->display, options->disable_screensaver);
// Await for server without blocking Ctrl+C handling
bool connected;
@ -456,7 +423,7 @@ scrcpy(struct scrcpy_options *options) {
struct sc_file_pusher *fp = NULL;
if (options->video_playback && options->control) {
if (options->display && options->control) {
if (!sc_file_pusher_init(&s->file_pusher, serial,
options->push_target)) {
goto end;
@ -465,36 +432,32 @@ scrcpy(struct scrcpy_options *options) {
file_pusher_initialized = true;
}
if (options->video) {
static const struct sc_demuxer_callbacks video_demuxer_cbs = {
.on_ended = sc_video_demuxer_on_ended,
};
sc_demuxer_init(&s->video_demuxer, "video", s->server.video_socket,
&video_demuxer_cbs, NULL);
}
if (options->audio) {
static const struct sc_demuxer_callbacks audio_demuxer_cbs = {
.on_ended = sc_audio_demuxer_on_ended,
};
sc_demuxer_init(&s->audio_demuxer, "audio", s->server.audio_socket,
&audio_demuxer_cbs, options);
&audio_demuxer_cbs, NULL);
}
bool needs_video_decoder = options->video_playback;
bool needs_audio_decoder = options->audio_playback;
bool needs_video_decoder = options->display;
bool needs_audio_decoder = options->audio && options->display;
#ifdef HAVE_V4L2
needs_video_decoder |= !!options->v4l2_device;
#endif
if (needs_video_decoder) {
sc_decoder_init(&s->video_decoder, "video");
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
&s->video_decoder.packet_sink);
sc_demuxer_add_sink(&s->video_demuxer, &s->video_decoder.packet_sink);
}
if (needs_audio_decoder) {
sc_decoder_init(&s->audio_decoder, "audio");
sc_packet_source_add_sink(&s->audio_demuxer.packet_source,
&s->audio_decoder.packet_sink);
sc_demuxer_add_sink(&s->audio_demuxer, &s->audio_decoder.packet_sink);
}
if (options->record_filename) {
@ -502,8 +465,8 @@ scrcpy(struct scrcpy_options *options) {
.on_ended = sc_recorder_on_ended,
};
if (!sc_recorder_init(&s->recorder, options->record_filename,
options->record_format, options->video,
options->audio, &recorder_cbs, NULL)) {
options->record_format, options->audio,
info->frame_size, &recorder_cbs, NULL)) {
goto end;
}
recorder_initialized = true;
@ -513,12 +476,9 @@ scrcpy(struct scrcpy_options *options) {
}
recorder_started = true;
if (options->video) {
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
&s->recorder.video_packet_sink);
}
sc_demuxer_add_sink(&s->video_demuxer, &s->recorder.video_packet_sink);
if (options->audio) {
sc_packet_source_add_sink(&s->audio_demuxer.packet_source,
sc_demuxer_add_sink(&s->audio_demuxer,
&s->recorder.audio_packet_sink);
}
}
@ -662,12 +622,23 @@ aoa_hid_end:
}
controller_started = true;
controller = &s->controller;
if (options->turn_screen_off) {
struct sc_control_msg msg;
msg.type = SC_CONTROL_MSG_TYPE_SET_SCREEN_POWER_MODE;
msg.set_screen_power_mode.mode = SC_SCREEN_POWER_MODE_OFF;
if (!sc_controller_push_msg(&s->controller, &msg)) {
LOGW("Could not request 'set screen power mode'");
}
}
}
// There is a controller if and only if control is enabled
assert(options->control == !!controller);
if (options->video_playback) {
if (options->display) {
const char *window_title =
options->window_title ? options->window_title : info->device_name;
@ -681,6 +652,7 @@ aoa_hid_end:
.clipboard_autosync = options->clipboard_autosync,
.shortcut_mods = &options->shortcut_mods,
.window_title = window_title,
.frame_size = info->frame_size,
.always_on_top = options->always_on_top,
.window_x = options->window_x,
.window_y = options->window_y,
@ -691,59 +663,41 @@ aoa_hid_end:
.mipmaps = options->mipmaps,
.fullscreen = options->fullscreen,
.start_fps_counter = options->start_fps_counter,
.buffering_time = options->display_buffer,
};
struct sc_frame_source *src = &s->video_decoder.frame_source;
if (options->display_buffer) {
sc_delay_buffer_init(&s->display_buffer, options->display_buffer,
true);
sc_frame_source_add_sink(src, &s->display_buffer.frame_sink);
src = &s->display_buffer.frame_source;
}
if (!sc_screen_init(&s->screen, &screen_params)) {
goto end;
}
screen_initialized = true;
sc_frame_source_add_sink(src, &s->screen.frame_sink);
}
sc_decoder_add_sink(&s->video_decoder, &s->screen.frame_sink);
if (options->audio_playback) {
sc_audio_player_init(&s->audio_player, options->audio_buffer,
options->audio_output_buffer);
sc_frame_source_add_sink(&s->audio_decoder.frame_source,
&s->audio_player.frame_sink);
if (options->audio) {
sc_audio_player_init(&s->audio_player);
sc_decoder_add_sink(&s->audio_decoder, &s->audio_player.frame_sink);
}
}
#ifdef HAVE_V4L2
if (options->v4l2_device) {
if (!sc_v4l2_sink_init(&s->v4l2_sink, options->v4l2_device)) {
if (!sc_v4l2_sink_init(&s->v4l2_sink, options->v4l2_device,
info->frame_size, options->v4l2_buffer)) {
goto end;
}
struct sc_frame_source *src = &s->video_decoder.frame_source;
if (options->v4l2_buffer) {
sc_delay_buffer_init(&s->v4l2_buffer, options->v4l2_buffer, true);
sc_frame_source_add_sink(src, &s->v4l2_buffer.frame_sink);
src = &s->v4l2_buffer.frame_source;
}
sc_frame_source_add_sink(src, &s->v4l2_sink.frame_sink);
sc_decoder_add_sink(&s->video_decoder, &s->v4l2_sink.frame_sink);
v4l2_sink_initialized = true;
}
#endif
// Now that the header values have been consumed, the socket(s) will
// receive the stream(s). Start the demuxer(s).
if (options->video) {
// now we consumed the header values, the socket receives the video stream
// start the video demuxer
if (!sc_demuxer_start(&s->video_demuxer)) {
goto end;
}
video_demuxer_started = true;
}
if (options->audio) {
if (!sc_demuxer_start(&s->audio_demuxer)) {
@ -752,39 +706,6 @@ aoa_hid_end:
audio_demuxer_started = true;
}
// If the device screen is to be turned off, send the control message after
// everything is set up
if (options->control && options->turn_screen_off) {
struct sc_control_msg msg;
msg.type = SC_CONTROL_MSG_TYPE_SET_SCREEN_POWER_MODE;
msg.set_screen_power_mode.mode = SC_SCREEN_POWER_MODE_OFF;
if (!sc_controller_push_msg(&s->controller, &msg)) {
LOGW("Could not request 'set screen power mode'");
}
}
if (options->time_limit) {
bool ok = sc_timeout_init(&s->timeout);
if (!ok) {
goto end;
}
timeout_initialized = true;
sc_tick deadline = sc_tick_now() + options->time_limit;
static const struct sc_timeout_callbacks cbs = {
.on_timeout = sc_timeout_on_timeout,
};
ok = sc_timeout_start(&s->timeout, deadline, &cbs, NULL);
if (!ok) {
goto end;
}
timeout_started = true;
}
ret = event_loop(s);
LOGD("quit...");
@ -793,10 +714,6 @@ aoa_hid_end:
sc_screen_hide_window(&s->screen);
end:
if (timeout_started) {
sc_timeout_stop(&s->timeout);
}
// The demuxer is not stopped explicitly, because it will stop by itself on
// end-of-stream
#ifdef HAVE_USB
@ -832,13 +749,6 @@ end:
sc_server_stop(&s->server);
}
if (timeout_started) {
sc_timeout_join(&s->timeout);
}
if (timeout_initialized) {
sc_timeout_destroy(&s->timeout);
}
// now that the sockets are shutdown, the demuxer and controller are
// interrupted, we can join them
if (video_demuxer_started) {

View File

@ -7,6 +7,7 @@
#include "events.h"
#include "icon.h"
#include "options.h"
#include "video_buffer.h"
#include "util/log.h"
#define DISPLAY_MARGINS 96
@ -56,7 +57,6 @@ static void
set_window_size(struct sc_screen *screen, struct sc_size new_size) {
assert(!screen->fullscreen);
assert(!screen->maximized);
assert(!screen->minimized);
SDL_SetWindowSize(screen->window, new_size.width, new_size.height);
}
@ -240,6 +240,33 @@ sc_screen_update_content_rect(struct sc_screen *screen) {
}
}
static inline SDL_Texture *
create_texture(struct sc_screen *screen) {
SDL_Renderer *renderer = screen->renderer;
struct sc_size size = screen->frame_size;
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
SDL_TEXTUREACCESS_STREAMING,
size.width, size.height);
if (!texture) {
return NULL;
}
if (screen->mipmaps) {
struct sc_opengl *gl = &screen->gl;
SDL_GL_BindTexture(texture, NULL, NULL);
// Enable trilinear filtering for downscaling
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
gl->TexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.f);
SDL_GL_UnbindTexture(texture);
}
return texture;
}
// render the texture to the renderer
//
// Set the update_content_rect flag if the window or content size may have
@ -250,11 +277,35 @@ sc_screen_render(struct sc_screen *screen, bool update_content_rect) {
sc_screen_update_content_rect(screen);
}
enum sc_display_result res =
sc_display_render(&screen->display, &screen->rect, screen->rotation);
(void) res; // any error already logged
SDL_RenderClear(screen->renderer);
if (screen->rotation == 0) {
SDL_RenderCopy(screen->renderer, screen->texture, NULL, &screen->rect);
} else {
// rotation in RenderCopyEx() is clockwise, while screen->rotation is
// counterclockwise (to be consistent with --lock-video-orientation)
int cw_rotation = (4 - screen->rotation) % 4;
double angle = 90 * cw_rotation;
SDL_Rect *dstrect = NULL;
SDL_Rect rect;
if (screen->rotation & 1) {
rect.x = screen->rect.x + (screen->rect.w - screen->rect.h) / 2;
rect.y = screen->rect.y + (screen->rect.h - screen->rect.w) / 2;
rect.w = screen->rect.h;
rect.h = screen->rect.w;
dstrect = &rect;
} else {
assert(screen->rotation == 2);
dstrect = &screen->rect;
}
SDL_RenderCopyEx(screen->renderer, screen->texture, NULL, dstrect,
angle, NULL, 0);
}
SDL_RenderPresent(screen->renderer);
}
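A note on the rotation mapping above: screen->rotation counts counterclockwise quarter-turns (to stay consistent with --lock-video-orientation), while SDL_RenderCopyEx() expects a clockwise angle in degrees, hence cw_rotation = (4 - rotation) % 4. A minimal standalone sketch of that mapping (illustration only, not part of this diff):

    #include <stdio.h>

    int main(void) {
        // rotation counts counterclockwise quarter-turns (0..3);
        // SDL_RenderCopyEx() expects a clockwise angle in degrees
        for (unsigned rotation = 0; rotation < 4; ++rotation) {
            unsigned cw_rotation = (4 - rotation) % 4;
            double angle = 90 * cw_rotation;
            printf("rotation=%u (CCW) -> angle=%g degrees (CW)\n", rotation, angle);
        }
        // prints: 0 -> 0, 1 -> 270, 2 -> 180, 3 -> 90
        return 0;
    }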
#if defined(__APPLE__) || defined(__WINDOWS__)
# define CONTINUOUS_RESIZING_WORKAROUND
#endif
@ -285,25 +336,7 @@ sc_screen_frame_sink_open(struct sc_frame_sink *sink,
(void) ctx;
struct sc_screen *screen = DOWNCAST(sink);
assert(ctx->width > 0 && ctx->width <= 0xFFFF);
assert(ctx->height > 0 && ctx->height <= 0xFFFF);
// screen->frame_size is never used before the event is pushed, and the
// event acts as a memory barrier so it is safe without mutex
screen->frame_size.width = ctx->width;
screen->frame_size.height = ctx->height;
static SDL_Event event = {
.type = SC_EVENT_SCREEN_INIT_SIZE,
};
// Post the event on the UI thread (the texture must be created from there)
int ret = SDL_PushEvent(&event);
if (ret < 0) {
LOGW("Could not post init size event: %s", SDL_GetError());
return false;
}
(void) screen;
#ifndef NDEBUG
screen->open = true;
#endif
@ -326,18 +359,30 @@ sc_screen_frame_sink_close(struct sc_frame_sink *sink) {
static bool
sc_screen_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
struct sc_screen *screen = DOWNCAST(sink);
bool previous_skipped;
bool ok = sc_frame_buffer_push(&screen->fb, frame, &previous_skipped);
if (!ok) {
return false;
return sc_video_buffer_push(&screen->vb, frame);
}
static void
sc_video_buffer_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata) {
(void) vb;
struct sc_screen *screen = userdata;
// event_failed implies previous_skipped (the previous frame may not have
// been consumed if the event was not sent)
assert(!screen->event_failed || previous_skipped);
bool need_new_event;
if (previous_skipped) {
sc_fps_counter_add_skipped_frame(&screen->fps_counter);
// The SC_EVENT_NEW_FRAME triggered for the previous frame will consume
// this new frame instead
// this new frame instead, unless the previous event failed
need_new_event = screen->event_failed;
} else {
need_new_event = true;
}
if (need_new_event) {
static SDL_Event new_frame_event = {
.type = SC_EVENT_NEW_FRAME,
};
@ -346,11 +391,11 @@ sc_screen_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
int ret = SDL_PushEvent(&new_frame_event);
if (ret < 0) {
LOGW("Could not post new frame event: %s", SDL_GetError());
return false;
screen->event_failed = true;
} else {
screen->event_failed = false;
}
}
return true;
}
bool
@ -360,7 +405,7 @@ sc_screen_init(struct sc_screen *screen,
screen->has_frame = false;
screen->fullscreen = false;
screen->maximized = false;
screen->minimized = false;
screen->event_failed = false;
screen->mouse_capture_key_pressed = 0;
screen->req.x = params->window_x;
@ -370,19 +415,33 @@ sc_screen_init(struct sc_screen *screen,
screen->req.fullscreen = params->fullscreen;
screen->req.start_fps_counter = params->start_fps_counter;
bool ok = sc_frame_buffer_init(&screen->fb);
static const struct sc_video_buffer_callbacks cbs = {
.on_new_frame = sc_video_buffer_on_new_frame,
};
bool ok = sc_video_buffer_init(&screen->vb, params->buffering_time, &cbs,
screen);
if (!ok) {
return false;
}
if (!sc_fps_counter_init(&screen->fps_counter)) {
goto error_destroy_frame_buffer;
ok = sc_video_buffer_start(&screen->vb);
if (!ok) {
goto error_destroy_video_buffer;
}
if (!sc_fps_counter_init(&screen->fps_counter)) {
goto error_stop_and_join_video_buffer;
}
screen->frame_size = params->frame_size;
screen->rotation = params->rotation;
if (screen->rotation) {
LOGI("Initial display rotation set to %u", screen->rotation);
}
struct sc_size content_size =
get_rotated_size(screen->frame_size, screen->rotation);
screen->content_size = content_size;
uint32_t window_flags = SDL_WINDOW_HIDDEN
| SDL_WINDOW_RESIZABLE
@ -402,11 +461,46 @@ sc_screen_init(struct sc_screen *screen,
goto error_destroy_fps_counter;
}
ok = sc_display_init(&screen->display, screen->window, params->mipmaps);
if (!ok) {
screen->renderer = SDL_CreateRenderer(screen->window, -1,
SDL_RENDERER_ACCELERATED);
if (!screen->renderer) {
LOGE("Could not create renderer: %s", SDL_GetError());
goto error_destroy_window;
}
SDL_RendererInfo renderer_info;
int r = SDL_GetRendererInfo(screen->renderer, &renderer_info);
const char *renderer_name = r ? NULL : renderer_info.name;
LOGI("Renderer: %s", renderer_name ? renderer_name : "(unknown)");
screen->mipmaps = false;
// starts with "opengl"
bool use_opengl = renderer_name && !strncmp(renderer_name, "opengl", 6);
if (use_opengl) {
struct sc_opengl *gl = &screen->gl;
sc_opengl_init(gl);
LOGI("OpenGL version: %s", gl->version);
if (params->mipmaps) {
bool supports_mipmaps =
sc_opengl_version_at_least(gl, 3, 0, /* OpenGL 3.0+ */
2, 0 /* OpenGL ES 2.0+ */);
if (supports_mipmaps) {
LOGI("Trilinear filtering enabled");
screen->mipmaps = true;
} else {
LOGW("Trilinear filtering disabled "
"(OpenGL 3.0+ or ES 2.0+ required)");
}
} else {
LOGI("Trilinear filtering disabled");
}
} else if (params->mipmaps) {
LOGD("Trilinear filtering disabled (not an OpenGL renderer)");
}
SDL_Surface *icon = scrcpy_icon_load();
if (icon) {
SDL_SetWindowIcon(screen->window, icon);
@ -415,10 +509,18 @@ sc_screen_init(struct sc_screen *screen,
LOGW("Could not load icon");
}
LOGI("Initial texture: %" PRIu16 "x%" PRIu16, params->frame_size.width,
params->frame_size.height);
screen->texture = create_texture(screen);
if (!screen->texture) {
LOGE("Could not create texture: %s", SDL_GetError());
goto error_destroy_renderer;
}
screen->frame = av_frame_alloc();
if (!screen->frame) {
LOG_OOM();
goto error_destroy_display;
goto error_destroy_texture;
}
struct sc_input_manager_params im_params = {
@ -453,14 +555,19 @@ sc_screen_init(struct sc_screen *screen,
return true;
error_destroy_display:
sc_display_destroy(&screen->display);
error_destroy_texture:
SDL_DestroyTexture(screen->texture);
error_destroy_renderer:
SDL_DestroyRenderer(screen->renderer);
error_destroy_window:
SDL_DestroyWindow(screen->window);
error_destroy_fps_counter:
sc_fps_counter_destroy(&screen->fps_counter);
error_destroy_frame_buffer:
sc_frame_buffer_destroy(&screen->fb);
error_stop_and_join_video_buffer:
sc_video_buffer_stop(&screen->vb);
sc_video_buffer_join(&screen->vb);
error_destroy_video_buffer:
sc_video_buffer_destroy(&screen->vb);
return false;
}
@ -488,7 +595,6 @@ sc_screen_show_initial_window(struct sc_screen *screen) {
}
SDL_ShowWindow(screen->window);
sc_screen_update_content_rect(screen);
}
void
@ -498,11 +604,13 @@ sc_screen_hide_window(struct sc_screen *screen) {
void
sc_screen_interrupt(struct sc_screen *screen) {
sc_video_buffer_stop(&screen->vb);
sc_fps_counter_interrupt(&screen->fps_counter);
}
void
sc_screen_join(struct sc_screen *screen) {
sc_video_buffer_join(&screen->vb);
sc_fps_counter_join(&screen->fps_counter);
}
@ -511,11 +619,12 @@ sc_screen_destroy(struct sc_screen *screen) {
#ifndef NDEBUG
assert(!screen->open);
#endif
sc_display_destroy(&screen->display);
av_frame_free(&screen->frame);
SDL_DestroyTexture(screen->texture);
SDL_DestroyRenderer(screen->renderer);
SDL_DestroyWindow(screen->window);
sc_fps_counter_destroy(&screen->fps_counter);
sc_frame_buffer_destroy(&screen->fb);
sc_video_buffer_destroy(&screen->vb);
}
static void
@ -534,11 +643,11 @@ resize_for_content(struct sc_screen *screen, struct sc_size old_content_size,
static void
set_content_size(struct sc_screen *screen, struct sc_size new_content_size) {
if (!screen->fullscreen && !screen->maximized && !screen->minimized) {
if (!screen->fullscreen && !screen->maximized) {
resize_for_content(screen, screen->content_size, new_content_size);
} else if (!screen->resize_pending) {
// Store the windowed size to be able to compute the optimal size once
// fullscreen/maximized/minimized are disabled
// fullscreen and maximized are disabled
screen->windowed_content_size = screen->content_size;
screen->resize_pending = true;
}
@ -550,7 +659,6 @@ static void
apply_pending_resize(struct sc_screen *screen) {
assert(!screen->fullscreen);
assert(!screen->maximized);
assert(!screen->minimized);
if (screen->resize_pending) {
resize_for_content(screen, screen->windowed_content_size,
screen->content_size);
@ -576,31 +684,14 @@ sc_screen_set_rotation(struct sc_screen *screen, unsigned rotation) {
sc_screen_render(screen, true);
}
static bool
sc_screen_init_size(struct sc_screen *screen) {
// Before first frame
assert(!screen->has_frame);
// The requested size is passed via screen->frame_size
struct sc_size content_size =
get_rotated_size(screen->frame_size, screen->rotation);
screen->content_size = content_size;
enum sc_display_result res =
sc_display_set_texture_size(&screen->display, screen->frame_size);
return res != SC_DISPLAY_RESULT_ERROR;
}
// recreate the texture and resize the window if the frame size has changed
static enum sc_display_result
static bool
prepare_for_frame(struct sc_screen *screen, struct sc_size new_frame_size) {
if (screen->frame_size.width == new_frame_size.width
&& screen->frame_size.height == new_frame_size.height) {
return SC_DISPLAY_RESULT_OK;
}
if (screen->frame_size.width != new_frame_size.width
|| screen->frame_size.height != new_frame_size.height) {
// frame dimension changed, destroy texture
SDL_DestroyTexture(screen->texture);
// frame dimension changed
screen->frame_size = new_frame_size;
struct sc_size new_content_size =
@ -609,35 +700,46 @@ prepare_for_frame(struct sc_screen *screen, struct sc_size new_frame_size) {
sc_screen_update_content_rect(screen);
return sc_display_set_texture_size(&screen->display, screen->frame_size);
LOGI("New texture: %" PRIu16 "x%" PRIu16,
screen->frame_size.width, screen->frame_size.height);
screen->texture = create_texture(screen);
if (!screen->texture) {
LOGE("Could not create texture: %s", SDL_GetError());
return false;
}
}
return true;
}
// write the frame into the texture
static void
update_texture(struct sc_screen *screen, const AVFrame *frame) {
SDL_UpdateYUVTexture(screen->texture, NULL,
frame->data[0], frame->linesize[0],
frame->data[1], frame->linesize[1],
frame->data[2], frame->linesize[2]);
if (screen->mipmaps) {
SDL_GL_BindTexture(screen->texture, NULL, NULL);
screen->gl.GenerateMipmap(GL_TEXTURE_2D);
SDL_GL_UnbindTexture(screen->texture);
}
}
static bool
sc_screen_update_frame(struct sc_screen *screen) {
av_frame_unref(screen->frame);
sc_frame_buffer_consume(&screen->fb, screen->frame);
sc_video_buffer_consume(&screen->vb, screen->frame);
AVFrame *frame = screen->frame;
sc_fps_counter_add_rendered_frame(&screen->fps_counter);
struct sc_size new_frame_size = {frame->width, frame->height};
enum sc_display_result res = prepare_for_frame(screen, new_frame_size);
if (res == SC_DISPLAY_RESULT_ERROR) {
if (!prepare_for_frame(screen, new_frame_size)) {
return false;
}
if (res == SC_DISPLAY_RESULT_PENDING) {
// Not an error, but do not continue
return true;
}
res = sc_display_update_texture(&screen->display, frame);
if (res == SC_DISPLAY_RESULT_ERROR) {
return false;
}
if (res == SC_DISPLAY_RESULT_PENDING) {
// Not an error, but do not continue
return true;
}
update_texture(screen, frame);
if (!screen->has_frame) {
screen->has_frame = true;
@ -663,7 +765,7 @@ sc_screen_switch_fullscreen(struct sc_screen *screen) {
}
screen->fullscreen = !screen->fullscreen;
if (!screen->fullscreen && !screen->maximized && !screen->minimized) {
if (!screen->fullscreen && !screen->maximized) {
apply_pending_resize(screen);
}
@ -673,7 +775,7 @@ sc_screen_switch_fullscreen(struct sc_screen *screen) {
void
sc_screen_resize_to_fit(struct sc_screen *screen) {
if (screen->fullscreen || screen->maximized || screen->minimized) {
if (screen->fullscreen || screen->maximized) {
return;
}
@ -697,7 +799,7 @@ sc_screen_resize_to_fit(struct sc_screen *screen) {
void
sc_screen_resize_to_pixel_perfect(struct sc_screen *screen) {
if (screen->fullscreen || screen->minimized) {
if (screen->fullscreen) {
return;
}
@ -717,32 +819,22 @@ sc_screen_is_mouse_capture_key(SDL_Keycode key) {
return key == SDLK_LALT || key == SDLK_LGUI || key == SDLK_RGUI;
}
bool
sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
void
sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
bool relative_mode = sc_screen_is_relative_mode(screen);
switch (event->type) {
case SC_EVENT_SCREEN_INIT_SIZE: {
// The initial size is passed via screen->frame_size
bool ok = sc_screen_init_size(screen);
if (!ok) {
LOGE("Could not initialize screen size");
return false;
}
return true;
}
case SC_EVENT_NEW_FRAME: {
bool ok = sc_screen_update_frame(screen);
if (!ok) {
LOGE("Frame update failed\n");
return false;
LOGW("Frame update failed\n");
}
return true;
return;
}
case SDL_WINDOWEVENT:
if (!screen->has_frame) {
// Do nothing
return true;
return;
}
switch (event->window.event) {
case SDL_WINDOWEVENT_EXPOSED:
@ -754,9 +846,6 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
case SDL_WINDOWEVENT_MAXIMIZED:
screen->maximized = true;
break;
case SDL_WINDOWEVENT_MINIMIZED:
screen->minimized = true;
break;
case SDL_WINDOWEVENT_RESTORED:
if (screen->fullscreen) {
// On Windows, in maximized+fullscreen, disabling
@ -767,7 +856,6 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
break;
}
screen->maximized = false;
screen->minimized = false;
apply_pending_resize(screen);
sc_screen_render(screen, true);
break;
@ -777,7 +865,7 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
}
break;
}
return true;
return;
case SDL_KEYDOWN:
if (relative_mode) {
SDL_Keycode key = event->key.keysym.sym;
@ -790,7 +878,7 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
screen->mouse_capture_key_pressed = 0;
}
// Mouse capture keys are never forwarded to the device
return true;
return;
}
}
break;
@ -806,7 +894,7 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
sc_screen_toggle_mouse_capture(screen);
}
// Mouse capture keys are never forwarded to the device
return true;
return;
}
}
break;
@ -816,7 +904,7 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
if (relative_mode && !sc_screen_get_mouse_capture(screen)) {
// Do not forward to input manager, the mouse will be captured
// on SDL_MOUSEBUTTONUP
return true;
return;
}
break;
case SDL_FINGERMOTION:
@ -825,19 +913,18 @@ sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event) {
if (relative_mode) {
// Touch events are not compatible with relative mode
// (coordinates are not relative)
return true;
return;
}
break;
case SDL_MOUSEBUTTONUP:
if (relative_mode && !sc_screen_get_mouse_capture(screen)) {
sc_screen_set_mouse_capture(screen, true);
return true;
return;
}
break;
}
sc_input_manager_handle_event(&screen->im, event);
return true;
}
struct sc_point
@ -849,8 +936,6 @@ sc_screen_convert_drawable_to_frame_coords(struct sc_screen *screen,
int32_t w = screen->content_size.width;
int32_t h = screen->content_size.height;
// screen->rect must be initialized to avoid a division by zero
assert(screen->rect.w && screen->rect.h);
x = (int64_t) (x - screen->rect.x) * w / screen->rect.w;
y = (int64_t) (y - screen->rect.y) * h / screen->rect.h;

View File

@ -9,14 +9,13 @@
#include "controller.h"
#include "coords.h"
#include "display.h"
#include "fps_counter.h"
#include "frame_buffer.h"
#include "input_manager.h"
#include "opengl.h"
#include "trait/key_processor.h"
#include "trait/frame_sink.h"
#include "trait/mouse_processor.h"
#include "video_buffer.h"
struct sc_screen {
struct sc_frame_sink frame_sink; // frame sink trait
@ -25,9 +24,8 @@ struct sc_screen {
bool open; // track the open/close state to assert correct behavior
#endif
struct sc_display display;
struct sc_input_manager im;
struct sc_frame_buffer fb;
struct sc_video_buffer vb;
struct sc_fps_counter fps_counter;
// The initial requested window properties
@ -41,6 +39,9 @@ struct sc_screen {
} req;
SDL_Window *window;
SDL_Renderer *renderer;
SDL_Texture *texture;
struct sc_opengl gl;
struct sc_size frame_size;
struct sc_size content_size; // rotated frame_size
@ -56,7 +57,9 @@ struct sc_screen {
bool has_frame;
bool fullscreen;
bool maximized;
bool minimized;
bool mipmaps;
bool event_failed; // in case SDL_PushEvent() returned an error
// To enable/disable mouse capture, a mouse capture key (LALT, LGUI or
// RGUI) must be pressed. This variable tracks the pressed capture key.
@ -77,6 +80,7 @@ struct sc_screen_params {
const struct sc_shortcut_mods *shortcut_mods;
const char *window_title;
struct sc_size frame_size;
bool always_on_top;
int16_t window_x; // accepts SC_WINDOW_POSITION_UNDEFINED
@ -91,6 +95,8 @@ struct sc_screen_params {
bool fullscreen;
bool start_fps_counter;
sc_tick buffering_time;
};
// initialize screen, create window, renderer and texture (window is hidden)
@ -134,9 +140,8 @@ void
sc_screen_set_rotation(struct sc_screen *screen, unsigned rotation);
// react to SDL events
// If this function returns false, scrcpy must exit with an error.
bool
sc_screen_handle_event(struct sc_screen *screen, const SDL_Event *event);
void
sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event);
// convert point from window coordinates to frame coordinates
// x and y are expressed in pixels

View File

@ -76,8 +76,6 @@ sc_server_params_destroy(struct sc_server_params *params) {
free((char *) params->video_encoder);
free((char *) params->audio_encoder);
free((char *) params->tcpip_dst);
free((char *) params->camera_id);
free((char *) params->camera_ar);
}
static bool
@ -88,15 +86,14 @@ sc_server_params_copy(struct sc_server_params *dst,
// The params reference user-allocated memory, so we must copy them to
// handle them from another thread
#define COPY(FIELD) do { \
#define COPY(FIELD) \
dst->FIELD = NULL; \
if (src->FIELD) { \
dst->FIELD = strdup(src->FIELD); \
if (!dst->FIELD) { \
goto error; \
} \
} \
} while(0)
}
COPY(req_serial);
COPY(crop);
@ -105,8 +102,6 @@ sc_server_params_copy(struct sc_server_params *dst,
COPY(video_encoder);
COPY(audio_encoder);
COPY(tcpip_dst);
COPY(camera_id);
COPY(camera_ar);
#undef COPY
return true;
@ -178,22 +173,6 @@ sc_server_get_codec_name(enum sc_codec codec) {
return "opus";
case SC_CODEC_AAC:
return "aac";
case SC_CODEC_RAW:
return "raw";
default:
return NULL;
}
}
static const char *
sc_server_get_camera_facing_name(enum sc_camera_facing camera_facing) {
switch (camera_facing) {
case SC_CAMERA_FACING_FRONT:
return "front";
case SC_CAMERA_FACING_BACK:
return "back";
case SC_CAMERA_FACING_EXTERNAL:
return "external";
default:
return NULL;
}
@ -234,27 +213,23 @@ execute_server(struct sc_server *server,
cmd[count++] = SCRCPY_VERSION;
unsigned dyn_idx = count; // from there, the strings are allocated
#define ADD_PARAM(fmt, ...) do { \
char *p; \
#define ADD_PARAM(fmt, ...) { \
char *p = (char *) &cmd[count]; \
if (asprintf(&p, fmt, ## __VA_ARGS__) == -1) { \
goto end; \
} \
cmd[count++] = p; \
} while(0)
}
ADD_PARAM("scid=%08x", params->scid);
ADD_PARAM("log_level=%s", log_level_to_server_string(params->log_level));
if (!params->video) {
ADD_PARAM("video=false");
}
if (params->video_bit_rate) {
ADD_PARAM("video_bit_rate=%" PRIu32, params->video_bit_rate);
}
if (!params->audio) {
ADD_PARAM("audio=false");
}
if (params->audio_bit_rate) {
} else if (params->audio_bit_rate) {
ADD_PARAM("audio_bit_rate=%" PRIu32, params->audio_bit_rate);
}
if (params->video_codec != SC_CODEC_H264) {
@ -265,13 +240,6 @@ execute_server(struct sc_server *server,
ADD_PARAM("audio_codec=%s",
sc_server_get_codec_name(params->audio_codec));
}
if (params->video_source != SC_VIDEO_SOURCE_DISPLAY) {
assert(params->video_source == SC_VIDEO_SOURCE_CAMERA);
ADD_PARAM("video_source=camera");
}
if (params->audio_source == SC_AUDIO_SOURCE_MIC) {
ADD_PARAM("audio_source=mic");
}
if (params->max_size) {
ADD_PARAM("max_size=%" PRIu16, params->max_size);
}
@ -295,25 +263,6 @@ execute_server(struct sc_server *server,
if (params->display_id) {
ADD_PARAM("display_id=%" PRIu32, params->display_id);
}
if (params->camera_id) {
ADD_PARAM("camera_id=%s", params->camera_id);
}
if (params->camera_size) {
ADD_PARAM("camera_size=%s", params->camera_size);
}
if (params->camera_facing != SC_CAMERA_FACING_ANY) {
ADD_PARAM("camera_facing=%s",
sc_server_get_camera_facing_name(params->camera_facing));
}
if (params->camera_ar) {
ADD_PARAM("camera_ar=%s", params->camera_ar);
}
if (params->camera_fps) {
ADD_PARAM("camera_fps=%" PRIu16, params->camera_fps);
}
if (params->camera_high_speed) {
ADD_PARAM("camera_high_speed=true");
}
if (params->show_touches) {
ADD_PARAM("show_touches=true");
}
@ -351,18 +300,12 @@ execute_server(struct sc_server *server,
// By default, power_on is true
ADD_PARAM("power_on=false");
}
if (params->list & SC_OPTION_LIST_ENCODERS) {
if (params->list_encoders) {
ADD_PARAM("list_encoders=true");
}
if (params->list & SC_OPTION_LIST_DISPLAYS) {
if (params->list_displays) {
ADD_PARAM("list_displays=true");
}
if (params->list & SC_OPTION_LIST_CAMERAS) {
ADD_PARAM("list_cameras=true");
}
if (params->list & SC_OPTION_LIST_CAMERA_SIZES) {
ADD_PARAM("list_camera_sizes=true");
}
#undef ADD_PARAM
@ -496,9 +439,9 @@ sc_server_init(struct sc_server *server, const struct sc_server_params *params,
static bool
device_read_info(struct sc_intr *intr, sc_socket device_socket,
struct sc_server_info *info) {
unsigned char buf[SC_DEVICE_NAME_FIELD_LENGTH];
unsigned char buf[SC_DEVICE_NAME_FIELD_LENGTH + 4];
ssize_t r = net_recv_all_intr(intr, device_socket, buf, sizeof(buf));
if (r < SC_DEVICE_NAME_FIELD_LENGTH) {
if (r < SC_DEVICE_NAME_FIELD_LENGTH + 4) {
LOGE("Could not retrieve device information");
return false;
}
@ -506,6 +449,9 @@ device_read_info(struct sc_intr *intr, sc_socket device_socket,
buf[SC_DEVICE_NAME_FIELD_LENGTH - 1] = '\0';
memcpy(info->device_name, (char *) buf, sizeof(info->device_name));
unsigned char *fields = &buf[SC_DEVICE_NAME_FIELD_LENGTH];
info->frame_size.width = sc_read16be(fields);
info->frame_size.height = sc_read16be(&fields[2]);
return true;
}
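For clarity: in this older handshake, the device appends its initial frame size to the 64-byte device name as two 16-bit values, decoded above with sc_read16be() (big-endian, as the name suggests). A standalone equivalent of that decoding, using a made-up example payload:

    #include <stdint.h>
    #include <stdio.h>

    // Equivalent of sc_read16be(): read a 16-bit big-endian value
    static uint16_t read16be(const unsigned char *buf) {
        return (uint16_t) ((buf[0] << 8) | buf[1]);
    }

    int main(void) {
        // hypothetical trailing bytes after the device name: width, height
        const unsigned char fields[4] = {0x04, 0x38, 0x07, 0x80};
        printf("%u x %u\n", read16be(fields), read16be(&fields[2])); // 1080 x 1920
        return 0;
    }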
@ -518,7 +464,6 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
const char *serial = server->serial;
assert(serial);
bool video = server->params.video;
bool audio = server->params.audio;
bool control = server->params.control;
@ -526,13 +471,10 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
sc_socket audio_socket = SC_SOCKET_NONE;
sc_socket control_socket = SC_SOCKET_NONE;
if (!tunnel->forward) {
if (video) {
video_socket =
net_accept_intr(&server->intr, tunnel->server_socket);
video_socket = net_accept_intr(&server->intr, tunnel->server_socket);
if (video_socket == SC_SOCKET_NONE) {
goto fail;
}
}
if (audio) {
audio_socket =
@ -562,36 +504,27 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
unsigned attempts = 100;
sc_tick delay = SC_TICK_FROM_MS(100);
sc_socket first_socket = connect_to_server(server, attempts, delay,
tunnel_host, tunnel_port);
if (first_socket == SC_SOCKET_NONE) {
video_socket = connect_to_server(server, attempts, delay, tunnel_host,
tunnel_port);
if (video_socket == SC_SOCKET_NONE) {
goto fail;
}
if (video) {
video_socket = first_socket;
}
if (audio) {
if (!video) {
audio_socket = first_socket;
} else {
audio_socket = net_socket();
if (audio_socket == SC_SOCKET_NONE) {
goto fail;
}
bool ok = net_connect_intr(&server->intr, audio_socket,
tunnel_host, tunnel_port);
bool ok = net_connect_intr(&server->intr, audio_socket, tunnel_host,
tunnel_port);
if (!ok) {
goto fail;
}
}
}
if (control) {
if (!video && !audio) {
control_socket = first_socket;
} else {
// we know that the device is listening, we don't need several
// attempts
control_socket = net_socket();
if (control_socket == SC_SOCKET_NONE) {
goto fail;
@ -603,23 +536,18 @@ sc_server_connect_to(struct sc_server *server, struct sc_server_info *info) {
}
}
}
}
// we don't need the adb tunnel anymore
sc_adb_tunnel_close(tunnel, &server->intr, serial,
server->device_socket_name);
sc_socket first_socket = video ? video_socket
: audio ? audio_socket
: control_socket;
// The sockets will be closed on stop if device_read_info() fails
bool ok = device_read_info(&server->intr, first_socket, info);
bool ok = device_read_info(&server->intr, video_socket, info);
if (!ok) {
goto fail;
}
assert(!video || video_socket != SC_SOCKET_NONE);
assert(video_socket != SC_SOCKET_NONE);
assert(!audio || audio_socket != SC_SOCKET_NONE);
assert(!control || control_socket != SC_SOCKET_NONE);
@ -841,15 +769,6 @@ sc_server_configure_tcpip_unknown_address(struct sc_server *server,
return sc_server_connect_to_tcpip(server, ip_port);
}
static void
sc_server_kill_adb_if_requested(struct sc_server *server) {
if (server->params.kill_adb_on_close) {
LOGI("Killing adb server...");
unsigned flags = SC_ADB_NO_STDOUT | SC_ADB_NO_STDERR | SC_ADB_NO_LOGERR;
sc_adb_kill_server(&server->intr, flags);
}
}
static int
run_server(void *data) {
struct sc_server *server = data;
@ -861,7 +780,7 @@ run_server(void *data) {
// is parsed, so it is not output)
bool ok = sc_adb_start_server(&server->intr, 0);
if (!ok) {
LOGE("Could not start adb server");
LOGE("Could not start adb daemon");
goto error_connection_failed;
}
@ -942,7 +861,7 @@ run_server(void *data) {
// If --list-* is passed, then the server just prints the requested data
// and exits.
if (params->list) {
if (params->list_encoders || params->list_displays) {
sc_pid pid = execute_server(server, params);
if (pid == SC_PROCESS_NONE) {
goto error_connection_failed;
@ -1012,11 +931,8 @@ run_server(void *data) {
sc_mutex_unlock(&server->mutex);
// Interrupt sockets to wake up socket blocking calls on the server
if (server->video_socket != SC_SOCKET_NONE) {
// There is no video_socket if --no-video is set
assert(server->video_socket != SC_SOCKET_NONE);
net_interrupt(server->video_socket);
}
if (server->audio_socket != SC_SOCKET_NONE) {
// There is no audio_socket if --no-audio is set
@ -1049,12 +965,9 @@ run_server(void *data) {
sc_process_close(pid);
sc_server_kill_adb_if_requested(server);
return 0;
error_connection_failed:
sc_server_kill_adb_if_requested(server);
server->cbs->on_connection_failed(server, server->cbs_userdata);
return -1;
}

View File

@ -18,6 +18,7 @@
#define SC_DEVICE_NAME_FIELD_LENGTH 64
struct sc_server_info {
char device_name[SC_DEVICE_NAME_FIELD_LENGTH];
struct sc_size frame_size;
};
struct sc_server_params {
@ -26,18 +27,11 @@ struct sc_server_params {
enum sc_log_level log_level;
enum sc_codec video_codec;
enum sc_codec audio_codec;
enum sc_video_source video_source;
enum sc_audio_source audio_source;
enum sc_camera_facing camera_facing;
const char *crop;
const char *video_codec_options;
const char *audio_codec_options;
const char *video_encoder;
const char *audio_encoder;
const char *camera_id;
const char *camera_size;
const char *camera_ar;
uint16_t camera_fps;
struct sc_port_range port_range;
uint32_t tunnel_host;
uint16_t tunnel_port;
@ -48,7 +42,6 @@ struct sc_server_params {
int8_t lock_video_orientation;
bool control;
uint32_t display_id;
bool video;
bool audio;
bool show_touches;
bool stay_awake;
@ -62,9 +55,8 @@ struct sc_server_params {
bool select_tcpip;
bool cleanup;
bool power_on;
bool kill_adb_on_close;
bool camera_high_speed;
uint8_t list;
bool list_encoders;
bool list_displays;
};
struct sc_server {

View File

@ -7,6 +7,8 @@
#include <stdbool.h>
#include <libavcodec/avcodec.h>
typedef struct AVFrame AVFrame;
/**
* Frame sink trait.
*
@ -17,7 +19,6 @@ struct sc_frame_sink {
};
struct sc_frame_sink_ops {
/* The codec context is valid until the sink is closed */
bool (*open)(struct sc_frame_sink *sink, const AVCodecContext *ctx);
void (*close)(struct sc_frame_sink *sink);
bool (*push)(struct sc_frame_sink *sink, const AVFrame *frame);

View File

@ -1,59 +0,0 @@
#include "frame_source.h"
void
sc_frame_source_init(struct sc_frame_source *source) {
source->sink_count = 0;
}
void
sc_frame_source_add_sink(struct sc_frame_source *source,
struct sc_frame_sink *sink) {
assert(source->sink_count < SC_FRAME_SOURCE_MAX_SINKS);
assert(sink);
assert(sink->ops);
source->sinks[source->sink_count++] = sink;
}
static void
sc_frame_source_sinks_close_firsts(struct sc_frame_source *source,
unsigned count) {
while (count) {
struct sc_frame_sink *sink = source->sinks[--count];
sink->ops->close(sink);
}
}
bool
sc_frame_source_sinks_open(struct sc_frame_source *source,
const AVCodecContext *ctx) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_frame_sink *sink = source->sinks[i];
if (!sink->ops->open(sink, ctx)) {
sc_frame_source_sinks_close_firsts(source, i);
return false;
}
}
return true;
}
void
sc_frame_source_sinks_close(struct sc_frame_source *source) {
assert(source->sink_count);
sc_frame_source_sinks_close_firsts(source, source->sink_count);
}
bool
sc_frame_source_sinks_push(struct sc_frame_source *source,
const AVFrame *frame) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_frame_sink *sink = source->sinks[i];
if (!sink->ops->push(sink, frame)) {
return false;
}
}
return true;
}

View File

@ -1,38 +0,0 @@
#ifndef SC_FRAME_SOURCE_H
#define SC_FRAME_SOURCE_H
#include "common.h"
#include "frame_sink.h"
#define SC_FRAME_SOURCE_MAX_SINKS 2
/**
* Frame source trait
*
* Component able to send AVFrames should implement this trait.
*/
struct sc_frame_source {
struct sc_frame_sink *sinks[SC_FRAME_SOURCE_MAX_SINKS];
unsigned sink_count;
};
void
sc_frame_source_init(struct sc_frame_source *source);
void
sc_frame_source_add_sink(struct sc_frame_source *source,
struct sc_frame_sink *sink);
bool
sc_frame_source_sinks_open(struct sc_frame_source *source,
const AVCodecContext *ctx);
void
sc_frame_source_sinks_close(struct sc_frame_source *source);
bool
sc_frame_source_sinks_push(struct sc_frame_source *source,
const AVFrame *frame);
#endif

View File

@ -5,7 +5,9 @@
#include <assert.h>
#include <stdbool.h>
#include <libavcodec/avcodec.h>
typedef struct AVCodec AVCodec;
typedef struct AVPacket AVPacket;
/**
* Packet sink trait.
@ -17,8 +19,8 @@ struct sc_packet_sink {
};
struct sc_packet_sink_ops {
/* The codec context is valid until the sink is closed */
bool (*open)(struct sc_packet_sink *sink, AVCodecContext *ctx);
/* The codec instance is static, it is valid until the end of the program */
bool (*open)(struct sc_packet_sink *sink, const AVCodec *codec);
void (*close)(struct sc_packet_sink *sink);
bool (*push)(struct sc_packet_sink *sink, const AVPacket *packet);

View File

@ -1,70 +0,0 @@
#include "packet_source.h"
void
sc_packet_source_init(struct sc_packet_source *source) {
source->sink_count = 0;
}
void
sc_packet_source_add_sink(struct sc_packet_source *source,
struct sc_packet_sink *sink) {
assert(source->sink_count < SC_PACKET_SOURCE_MAX_SINKS);
assert(sink);
assert(sink->ops);
source->sinks[source->sink_count++] = sink;
}
static void
sc_packet_source_sinks_close_firsts(struct sc_packet_source *source,
unsigned count) {
while (count) {
struct sc_packet_sink *sink = source->sinks[--count];
sink->ops->close(sink);
}
}
bool
sc_packet_source_sinks_open(struct sc_packet_source *source,
AVCodecContext *ctx) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (!sink->ops->open(sink, ctx)) {
sc_packet_source_sinks_close_firsts(source, i);
return false;
}
}
return true;
}
void
sc_packet_source_sinks_close(struct sc_packet_source *source) {
assert(source->sink_count);
sc_packet_source_sinks_close_firsts(source, source->sink_count);
}
bool
sc_packet_source_sinks_push(struct sc_packet_source *source,
const AVPacket *packet) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (!sink->ops->push(sink, packet)) {
return false;
}
}
return true;
}
void
sc_packet_source_sinks_disable(struct sc_packet_source *source) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (sink->ops->disable) {
sink->ops->disable(sink);
}
}
}

View File

@ -1,41 +0,0 @@
#ifndef SC_PACKET_SOURCE_H
#define SC_PACKET_SOURCE_H
#include "common.h"
#include "packet_sink.h"
#define SC_PACKET_SOURCE_MAX_SINKS 2
/**
* Packet source trait
*
* Component able to send AVPackets should implement this trait.
*/
struct sc_packet_source {
struct sc_packet_sink *sinks[SC_PACKET_SOURCE_MAX_SINKS];
unsigned sink_count;
};
void
sc_packet_source_init(struct sc_packet_source *source);
void
sc_packet_source_add_sink(struct sc_packet_source *source,
struct sc_packet_sink *sink);
bool
sc_packet_source_sinks_open(struct sc_packet_source *source,
AVCodecContext *ctx);
void
sc_packet_source_sinks_close(struct sc_packet_source *source);
bool
sc_packet_source_sinks_push(struct sc_packet_source *source,
const AVPacket *packet);
void
sc_packet_source_sinks_disable(struct sc_packet_source *source);
#endif

View File

@ -14,8 +14,6 @@
#define DEFAULT_TIMEOUT 1000
#define SC_HID_EVENT_QUEUE_MAX 64
static void
sc_hid_event_log(const struct sc_hid_event *event) {
// HID Event: [00] FF FF FF FF...
@ -50,20 +48,14 @@ sc_hid_event_destroy(struct sc_hid_event *hid_event) {
bool
sc_aoa_init(struct sc_aoa *aoa, struct sc_usb *usb,
struct sc_acksync *acksync) {
sc_vecdeque_init(&aoa->queue);
if (!sc_vecdeque_reserve(&aoa->queue, SC_HID_EVENT_QUEUE_MAX)) {
return false;
}
cbuf_init(&aoa->queue);
if (!sc_mutex_init(&aoa->mutex)) {
sc_vecdeque_destroy(&aoa->queue);
return false;
}
if (!sc_cond_init(&aoa->event_cond)) {
sc_mutex_destroy(&aoa->mutex);
sc_vecdeque_destroy(&aoa->queue);
return false;
}
@ -77,10 +69,9 @@ sc_aoa_init(struct sc_aoa *aoa, struct sc_usb *usb,
void
sc_aoa_destroy(struct sc_aoa *aoa) {
// Destroy remaining events
while (!sc_vecdeque_is_empty(&aoa->queue)) {
struct sc_hid_event *event = sc_vecdeque_popref(&aoa->queue);
assert(event);
sc_hid_event_destroy(event);
struct sc_hid_event event;
while (cbuf_take(&aoa->queue, &event)) {
sc_hid_event_destroy(&event);
}
sc_cond_destroy(&aoa->event_cond);
@ -221,19 +212,13 @@ sc_aoa_push_hid_event(struct sc_aoa *aoa, const struct sc_hid_event *event) {
}
sc_mutex_lock(&aoa->mutex);
bool full = sc_vecdeque_is_full(&aoa->queue);
if (!full) {
bool was_empty = sc_vecdeque_is_empty(&aoa->queue);
sc_vecdeque_push_noresize(&aoa->queue, *event);
bool was_empty = cbuf_is_empty(&aoa->queue);
bool res = cbuf_push(&aoa->queue, *event);
if (was_empty) {
sc_cond_signal(&aoa->event_cond);
}
}
// Otherwise (if the queue is full), the event is discarded
sc_mutex_unlock(&aoa->mutex);
return !full;
return res;
}
static int
@ -242,7 +227,7 @@ run_aoa_thread(void *data) {
for (;;) {
sc_mutex_lock(&aoa->mutex);
while (!aoa->stopped && sc_vecdeque_is_empty(&aoa->queue)) {
while (!aoa->stopped && cbuf_is_empty(&aoa->queue)) {
sc_cond_wait(&aoa->event_cond, &aoa->mutex);
}
if (aoa->stopped) {
@ -250,9 +235,11 @@ run_aoa_thread(void *data) {
sc_mutex_unlock(&aoa->mutex);
break;
}
struct sc_hid_event event;
bool non_empty = cbuf_take(&aoa->queue, &event);
assert(non_empty);
(void) non_empty;
assert(!sc_vecdeque_is_empty(&aoa->queue));
struct sc_hid_event event = sc_vecdeque_pop(&aoa->queue);
uint64_t ack_to_wait = event.ack_to_wait;
sc_mutex_unlock(&aoa->mutex);

View File

@ -8,9 +8,9 @@
#include "usb.h"
#include "util/acksync.h"
#include "util/cbuf.h"
#include "util/thread.h"
#include "util/tick.h"
#include "util/vecdeque.h"
struct sc_hid_event {
uint16_t accessory_id;
@ -27,7 +27,7 @@ sc_hid_event_init(struct sc_hid_event *hid_event, uint16_t accessory_id,
void
sc_hid_event_destroy(struct sc_hid_event *hid_event);
struct sc_hid_event_queue SC_VECDEQUE(struct sc_hid_event);
struct sc_hid_event_queue CBUF(struct sc_hid_event, 64);
struct sc_aoa {
struct sc_usb *usb;

View File

@ -27,8 +27,7 @@
// keyboard support, though OS could support more keys via modifying the report
// desc. 6 should be enough for scrcpy.
#define HID_KEYBOARD_MAX_KEYS 6
#define HID_KEYBOARD_EVENT_SIZE \
(HID_KEYBOARD_INDEX_KEYS + HID_KEYBOARD_MAX_KEYS)
#define HID_KEYBOARD_EVENT_SIZE (2 + HID_KEYBOARD_MAX_KEYS)
#define HID_RESERVED 0x00
#define HID_ERROR_ROLL_OVER 0x01

View File

@ -83,7 +83,7 @@ scrcpy_otg(struct scrcpy_options *options) {
#ifdef _WIN32
// On Windows, only one process could open a USB device
// <https://github.com/Genymobile/scrcpy/issues/2773>
LOGI("Killing adb server (if any)...");
LOGI("Killing adb daemon (if any)...");
unsigned flags = SC_ADB_NO_STDOUT | SC_ADB_NO_STDERR | SC_ADB_NO_LOGERR;
// uninterruptible (intr == NULL), but in practice it's very quick
sc_adb_kill_server(NULL, flags);
@ -105,6 +105,10 @@ scrcpy_otg(struct scrcpy_options *options) {
usb_device_initialized = true;
LOGI("USB device: %s (%04x:%04x) %s %s", usb_device.serial,
(unsigned) usb_device.vid, (unsigned) usb_device.pid,
usb_device.manufacturer, usb_device.product);
ok = sc_usb_connect(&s->usb, usb_device.device, &cbs, NULL);
if (!ok) {
goto end;

View File

@ -93,7 +93,7 @@ sc_usb_device_move(struct sc_usb_device *dst, struct sc_usb_device *src) {
src->product = NULL;
}
static void
void
sc_usb_devices_destroy(struct sc_vec_usb_devices *usb_devices) {
for (size_t i = 0; i < usb_devices->size; ++i) {
sc_usb_device_destroy(&usb_devices->data[i]);
@ -213,8 +213,8 @@ sc_usb_select_device(struct sc_usb *usb, const char *serial,
assert(sel_count == 1); // sel_idx is valid only if sel_count == 1
struct sc_usb_device *device = &vec.data[sel_idx];
LOGI("USB device found:");
sc_usb_devices_log(SC_LOG_LEVEL_INFO, vec.data, vec.size);
LOGD("USB device found:");
sc_usb_devices_log(SC_LOG_LEVEL_DEBUG, vec.data, vec.size);
// Move device into out_device (do not destroy device)
sc_usb_device_move(out_device, device);

View File

@ -1,94 +0,0 @@
#ifndef SC_AUDIOBUF_H
#define SC_AUDIOBUF_H
#include "common.h"
#include <stdbool.h>
#include <stdint.h>
#include "util/bytebuf.h"
/**
* Wrapper around bytebuf to read and write samples
*
* Each sample takes sample_size bytes.
*/
struct sc_audiobuf {
struct sc_bytebuf buf;
size_t sample_size;
};
static inline uint32_t
sc_audiobuf_to_samples(struct sc_audiobuf *buf, size_t bytes) {
assert(bytes % buf->sample_size == 0);
return bytes / buf->sample_size;
}
static inline size_t
sc_audiobuf_to_bytes(struct sc_audiobuf *buf, uint32_t samples) {
return samples * buf->sample_size;
}
static inline bool
sc_audiobuf_init(struct sc_audiobuf *buf, size_t sample_size,
uint32_t capacity) {
buf->sample_size = sample_size;
return sc_bytebuf_init(&buf->buf, capacity * sample_size + 1);
}
static inline void
sc_audiobuf_read(struct sc_audiobuf *buf, uint8_t *to, uint32_t samples) {
size_t bytes = sc_audiobuf_to_bytes(buf, samples);
sc_bytebuf_read(&buf->buf, to, bytes);
}
static inline void
sc_audiobuf_skip(struct sc_audiobuf *buf, uint32_t samples) {
size_t bytes = sc_audiobuf_to_bytes(buf, samples);
sc_bytebuf_skip(&buf->buf, bytes);
}
static inline void
sc_audiobuf_write(struct sc_audiobuf *buf, const uint8_t *from,
uint32_t samples) {
size_t bytes = sc_audiobuf_to_bytes(buf, samples);
sc_bytebuf_write(&buf->buf, from, bytes);
}
static inline void
sc_audiobuf_prepare_write(struct sc_audiobuf *buf, const uint8_t *from,
uint32_t samples) {
size_t bytes = sc_audiobuf_to_bytes(buf, samples);
sc_bytebuf_prepare_write(&buf->buf, from, bytes);
}
static inline void
sc_audiobuf_commit_write(struct sc_audiobuf *buf, uint32_t samples) {
size_t bytes = sc_audiobuf_to_bytes(buf, samples);
sc_bytebuf_commit_write(&buf->buf, bytes);
}
static inline uint32_t
sc_audiobuf_can_read(struct sc_audiobuf *buf) {
size_t bytes = sc_bytebuf_can_read(&buf->buf);
return sc_audiobuf_to_samples(buf, bytes);
}
static inline uint32_t
sc_audiobuf_can_write(struct sc_audiobuf *buf) {
size_t bytes = sc_bytebuf_can_write(&buf->buf);
return sc_audiobuf_to_samples(buf, bytes);
}
static inline uint32_t
sc_audiobuf_capacity(struct sc_audiobuf *buf) {
size_t bytes = sc_bytebuf_capacity(&buf->buf);
return sc_audiobuf_to_samples(buf, bytes);
}
static inline void
sc_audiobuf_destroy(struct sc_audiobuf *buf) {
sc_bytebuf_destroy(&buf->buf);
}
#endif

View File

@ -19,8 +19,8 @@ sc_average_push(struct sc_average *avg, float value) {
avg->avg = ((avg->count - 1) * avg->avg + value) / avg->count;
}
float
sc_average_get(struct sc_average *avg) {
assert(avg->count);
return avg->avg;
bool
sc_average_get(struct sc_average *avg, float *value) {
*value = avg->avg;
return avg->count;
}

View File

@ -22,19 +22,15 @@ struct sc_average {
void
sc_average_init(struct sc_average *avg, unsigned range);
/**
* Push a new value to update the "rolling" average
*/
/* Push a new value to update the "rolling" average */
void
sc_average_push(struct sc_average *avg, float value);
/**
* Get the current average value
/* Get the current average value (if available)
*
* It is an error to call this function if sc_average_push() has not been
* called at least once.
* An average is available if sc_average_push() has been called at least once.
*/
float
sc_average_get(struct sc_average *avg);
bool
sc_average_get(struct sc_average *avg, float *value);
#endif
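The update in sc_average_push() in average.c above, avg = ((count - 1) * avg + value) / count, is the standard incremental mean recurrence (the "range" capping of the rolling average is not visible in this hunk). A tiny standalone illustration of that recurrence, independent of the sc_average API:

    #include <stdio.h>

    int main(void) {
        // Incremental mean: avg_n = ((n - 1) * avg_{n-1} + x_n) / n
        const float values[] = {30.f, 60.f, 90.f};
        float avg = 0.f;
        unsigned count = 0;
        for (unsigned i = 0; i < 3; ++i) {
            ++count;
            avg = ((count - 1) * avg + values[i]) / count;
            printf("after %u values: avg = %g\n", count, avg);
        }
        // prints 30, then 45, then 60
        return 0;
    }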

View File

@ -9,6 +9,7 @@
bool
sc_bytebuf_init(struct sc_bytebuf *buf, size_t alloc_size) {
assert(alloc_size);
// sufficient, but use more for alignment.
buf->data = malloc(alloc_size);
if (!buf->data) {
LOG_OOM();
@ -30,7 +31,7 @@ sc_bytebuf_destroy(struct sc_bytebuf *buf) {
void
sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
assert(len);
assert(len <= sc_bytebuf_can_read(buf));
assert(sc_bytebuf_read_remaining(buf) >= len);
assert(buf->tail != buf->head); // the buffer could not be empty
size_t right_limit = buf->tail < buf->head ? buf->head : buf->alloc_size;
@ -50,40 +51,42 @@ sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
void
sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len) {
assert(len);
assert(len <= sc_bytebuf_can_read(buf));
assert(sc_bytebuf_read_remaining(buf) >= len);
assert(buf->tail != buf->head); // the buffer could not be empty
buf->tail = (buf->tail + len) % buf->alloc_size;
}
static inline void
sc_bytebuf_write_step0(struct sc_bytebuf *buf, const uint8_t *from,
size_t len) {
size_t right_len = buf->alloc_size - buf->head;
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len) {
assert(len);
size_t max_len = buf->alloc_size - 1;
if (len >= max_len) {
// Copy only the right-most bytes
memcpy(buf->data, from + len - max_len, max_len);
buf->tail = 0;
buf->head = max_len;
return;
}
size_t right_limit = buf->head < buf->tail ? buf->tail : buf->alloc_size;
size_t right_len = right_limit - buf->head;
if (len < right_len) {
right_len = len;
}
memcpy(buf->data + buf->head, from, right_len);
if (len > right_len) {
memcpy(buf->data, from + right_len, len - right_len);
}
}
static inline void
sc_bytebuf_write_step1(struct sc_bytebuf *buf, size_t len) {
size_t empty_space = sc_bytebuf_write_remaining(buf);
if (len > empty_space) {
buf->tail = (buf->tail + len - empty_space) % buf->alloc_size;
}
buf->head = (buf->head + len) % buf->alloc_size;
}
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len) {
assert(len);
assert(len <= sc_bytebuf_can_write(buf));
sc_bytebuf_write_step0(buf, from, len);
sc_bytebuf_write_step1(buf, len);
}
void
sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
size_t len) {
@ -94,11 +97,20 @@ sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
// be called with lock held).
assert(len < buf->alloc_size - 1);
sc_bytebuf_write_step0(buf, from, len);
size_t right_len = buf->alloc_size - buf->head;
if (len < right_len) {
right_len = len;
}
memcpy(buf->data + buf->head, from, right_len);
if (len > right_len) {
memcpy(buf->data, from + right_len, len - right_len);
}
}
void
sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len) {
assert(len <= sc_bytebuf_can_write(buf));
sc_bytebuf_write_step1(buf, len);
assert(len <= sc_bytebuf_write_remaining(buf));
buf->head = (buf->head + len) % buf->alloc_size;
}

View File

@ -11,10 +11,10 @@ struct sc_bytebuf {
// The actual capacity is (allocated - 1) so that head == tail is
// non-ambiguous
size_t alloc_size;
size_t head; // writer cursor
size_t tail; // reader cursor
size_t head;
size_t tail;
// empty: tail == head
// full: ((tail + 1) % alloc_size) == head
// full: (tail + 1) % allocated == head
};
bool
@ -23,10 +23,10 @@ sc_bytebuf_init(struct sc_bytebuf *buf, size_t alloc_size);
/**
* Copy from the bytebuf to a user-provided array
*
* The caller must check that len <= sc_bytebuf_read_available() (it is an
* error to attempt to read more bytes than available).
* The caller must check that len <= buf->len (it is an error to attempt to read
* more bytes than available).
*
* This function is guaranteed not to write to buf->head.
* This function is guaranteed to not change the head.
*/
void
sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len);
@ -34,13 +34,13 @@ sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len);
/**
* Drop len bytes from the buffer
*
* The caller must check that len <= sc_bytebuf_read_available() (it is an
* error to attempt to skip more bytes than available).
* The caller must check that len <= buf->len (it is an error to attempt to skip
* more bytes than available).
*
* This function is guaranteed not to write to buf->head.
* This function is guaranteed to not change the head.
*
* It is equivalent to calling sc_bytebuf_read() into some array and discarding the
* array (but this function is more efficient since there is no copy).
* array (but more efficient since there is no copy).
*/
void
sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len);
@ -48,10 +48,9 @@ sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len);
/**
* Copy the user-provided array to the bytebuf
*
* The caller must check that len <= sc_bytebuf_write_available() (it is an
* error to write more bytes than the remaining available space).
*
* This function is guaranteed not to write to buf->tail.
* The length of the input array is not restricted:
* if len >= sc_bytebuf_write_remaining(buf), then the excessive input bytes
* will overwrite the oldest bytes in the buffer.
*/
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len);
@ -59,16 +58,14 @@ sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len);
/**
* Copy the user-provided array to the bytebuf, but do not advance the cursor
*
* The caller must check that len <= sc_bytebuf_write_available() (it is an
* error to write more bytes than the remaining available space).
* The caller must check that len <= buf->len (it is an error to attempt to
* write more bytes than available).
*
* After this function is called, the write must be committed with
* sc_bytebuf_commit_write().
*
* The purpose of this mechanism is to acquire a lock only to commit the write,
* but not to perform the actual copy.
*
* This function is guaranteed not to access buf->tail.
*/
void
sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
@ -86,28 +83,21 @@ sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len);
* It is an error to read more bytes than available.
*/
static inline size_t
sc_bytebuf_can_read(struct sc_bytebuf *buf) {
sc_bytebuf_read_remaining(struct sc_bytebuf *buf) {
return (buf->alloc_size + buf->head - buf->tail) % buf->alloc_size;
}
/**
* Return the number of bytes which can be written
* Return the number of bytes which can be written without overwriting
*
* It is an error to write more bytes than available.
* It is not an error to write more bytes than the available space, but this
* would overwrite the oldest bytes in the buffer.
*/
static inline size_t
sc_bytebuf_can_write(struct sc_bytebuf *buf) {
sc_bytebuf_write_remaining(struct sc_bytebuf *buf) {
return (buf->alloc_size + buf->tail - buf->head - 1) % buf->alloc_size;
}
/**
* Return the actual capacity of the buffer (can_read() + can_write())
*/
static inline size_t
sc_bytebuf_capacity(struct sc_bytebuf *buf) {
return buf->alloc_size - 1;
}
void
sc_bytebuf_destroy(struct sc_bytebuf *buf);
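A minimal sketch of the two-step write documented above for `sc_bytebuf_prepare_write()` and `sc_bytebuf_commit_write()`, assuming a single producer; the `producer_write` helper and its `sc_mutex` parameter are illustrative, not part of this header. The copy runs without the lock, and only the commit (which publishes the bytes by moving the head) is done with the lock held:

```c
#include "util/bytebuf.h"
#include "util/thread.h"

// Sketch only: with a single producer, the consumer can only increase the
// empty space, so copying without the lock is safe as long as len stays
// within the safe size observed at the previous commit (len < alloc_size - 1).
static void
producer_write(struct sc_bytebuf *buf, sc_mutex *lock,
               const uint8_t *data, size_t len) {
    // Copy the bytes without any synchronization
    sc_bytebuf_prepare_write(buf, data, len);

    // Only the cursor update needs the lock
    sc_mutex_lock(lock);
    sc_bytebuf_commit_write(buf, len);
    sc_mutex_unlock(lock);
}
```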

52
app/src/util/cbuf.h Normal file
View File

@ -0,0 +1,52 @@
// generic circular buffer (bounded queue) implementation
#ifndef SC_CBUF_H
#define SC_CBUF_H
#include "common.h"
#include <stdbool.h>
#include <unistd.h>
// To define a circular buffer type of 20 ints:
// struct cbuf_int CBUF(int, 20);
//
// data has length CAP + 1 to distinguish empty vs full.
#define CBUF(TYPE, CAP) { \
TYPE data[(CAP) + 1]; \
size_t head; \
size_t tail; \
}
#define cbuf_size_(PCBUF) \
(sizeof((PCBUF)->data) / sizeof(*(PCBUF)->data))
#define cbuf_is_empty(PCBUF) \
((PCBUF)->head == (PCBUF)->tail)
#define cbuf_is_full(PCBUF) \
(((PCBUF)->head + 1) % cbuf_size_(PCBUF) == (PCBUF)->tail)
#define cbuf_init(PCBUF) \
(void) ((PCBUF)->head = (PCBUF)->tail = 0)
#define cbuf_push(PCBUF, ITEM) \
({ \
bool ok = !cbuf_is_full(PCBUF); \
if (ok) { \
(PCBUF)->data[(PCBUF)->head] = (ITEM); \
(PCBUF)->head = ((PCBUF)->head + 1) % cbuf_size_(PCBUF); \
} \
ok; \
})
#define cbuf_take(PCBUF, PITEM) \
({ \
bool ok = !cbuf_is_empty(PCBUF); \
if (ok) { \
*(PITEM) = (PCBUF)->data[(PCBUF)->tail]; \
(PCBUF)->tail = ((PCBUF)->tail + 1) % cbuf_size_(PCBUF); \
} \
ok; \
})
#endif

View File

@ -125,30 +125,8 @@ sc_av_log_callback(void *avcl, int level, const char *fmt, va_list vl) {
free(local_fmt);
}
static const char *const sc_sdl_log_priority_names[SDL_NUM_LOG_PRIORITIES] = {
[SDL_LOG_PRIORITY_VERBOSE] = "VERBOSE",
[SDL_LOG_PRIORITY_DEBUG] = "DEBUG",
[SDL_LOG_PRIORITY_INFO] = "INFO",
[SDL_LOG_PRIORITY_WARN] = "WARN",
[SDL_LOG_PRIORITY_ERROR] = "ERROR",
[SDL_LOG_PRIORITY_CRITICAL] = "CRITICAL",
};
static void SDLCALL
sc_sdl_log_print(void *userdata, int category, SDL_LogPriority priority,
const char *message) {
(void) userdata;
(void) category;
FILE *out = priority < SDL_LOG_PRIORITY_WARN ? stdout : stderr;
assert(priority < SDL_NUM_LOG_PRIORITIES);
const char *prio_name = sc_sdl_log_priority_names[priority];
fprintf(out, "%s: %s\n", prio_name, message);
}
void
sc_log_configure(void) {
SDL_LogSetOutputFunction(sc_sdl_log_print, NULL);
sc_log_configure() {
// Redirect FFmpeg logs to SDL logs
av_log_set_callback(sc_av_log_callback);
}

View File

@ -36,6 +36,6 @@ sc_log_windows_error(const char *prefix, int error);
#endif
void
sc_log_configure(void);
sc_log_configure();
#endif

View File

@ -1,14 +0,0 @@
#include "memory.h"
#include <stdlib.h>
#include <errno.h>
void *
sc_allocarray(size_t nmemb, size_t size) {
size_t bytes;
if (__builtin_mul_overflow(nmemb, size, &bytes)) {
errno = ENOMEM;
return NULL;
}
return malloc(bytes);
}

View File

@ -1,15 +0,0 @@
#ifndef SC_MEMORY_H
#define SC_MEMORY_H
#include <stddef.h>
/**
* Allocate an array of `nmemb` items of `size` bytes each
*
* Like calloc(), but without initialization.
* Like reallocarray(), but without reallocation.
*/
void *
sc_allocarray(size_t nmemb, size_t size);
#endif

77
app/src/util/queue.h Normal file
View File

@ -0,0 +1,77 @@
// generic intrusive FIFO queue
#ifndef SC_QUEUE_H
#define SC_QUEUE_H
#include "common.h"
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
// To define a queue type of "struct foo":
// struct queue_foo QUEUE(struct foo);
#define SC_QUEUE(TYPE) { \
TYPE *first; \
TYPE *last; \
}
#define sc_queue_init(PQ) \
(void) ((PQ)->first = (PQ)->last = NULL)
#define sc_queue_is_empty(PQ) \
!(PQ)->first
// NEXTFIELD is the field in the ITEM type used for intrusive linked-list
//
// For example:
// struct foo {
// int value;
// struct foo *next;
// };
//
// // define the type "struct my_queue"
// struct my_queue SC_QUEUE(struct foo);
//
// struct my_queue queue;
// sc_queue_init(&queue);
//
// struct foo v1 = { .value = 42 };
// struct foo v2 = { .value = 27 };
//
// sc_queue_push(&queue, next, v1);
// sc_queue_push(&queue, next, v2);
//
// struct foo *foo;
// sc_queue_take(&queue, next, &foo);
// assert(foo->value == 42);
// sc_queue_take(&queue, next, &foo);
// assert(foo->value == 27);
// assert(sc_queue_is_empty(&queue));
//
// push a new item into the queue
#define sc_queue_push(PQ, NEXTFIELD, ITEM) \
(void) ({ \
(ITEM)->NEXTFIELD = NULL; \
if (sc_queue_is_empty(PQ)) { \
(PQ)->first = (PQ)->last = (ITEM); \
} else { \
(PQ)->last->NEXTFIELD = (ITEM); \
(PQ)->last = (ITEM); \
} \
})
// take the next item and remove it from the queue (the queue must not be empty)
// the result is stored in *(PITEM)
// (without typeof(), we could not store a local variable having the correct
// type so that we can "return" it)
#define sc_queue_take(PQ, NEXTFIELD, PITEM) \
(void) ({ \
assert(!sc_queue_is_empty(PQ)); \
*(PITEM) = (PQ)->first; \
(PQ)->first = (PQ)->first->NEXTFIELD; \
})
// no need to update (PQ)->last if the queue is left empty:
// (PQ)->last is undefined if !(PQ)->first anyway
#endif

View File

@ -23,39 +23,6 @@ sc_thread_create(sc_thread *thread, sc_thread_fn fn, const char *name,
return true;
}
static SDL_ThreadPriority
to_sdl_thread_priority(enum sc_thread_priority priority) {
switch (priority) {
case SC_THREAD_PRIORITY_TIME_CRITICAL:
#ifdef SCRCPY_SDL_HAS_THREAD_PRIORITY_TIME_CRITICAL
return SDL_THREAD_PRIORITY_TIME_CRITICAL;
#else
// fall through
#endif
case SC_THREAD_PRIORITY_HIGH:
return SDL_THREAD_PRIORITY_HIGH;
case SC_THREAD_PRIORITY_NORMAL:
return SDL_THREAD_PRIORITY_NORMAL;
case SC_THREAD_PRIORITY_LOW:
return SDL_THREAD_PRIORITY_LOW;
default:
assert(!"Unknown thread priority");
return 0;
}
}
bool
sc_thread_set_priority(enum sc_thread_priority priority) {
SDL_ThreadPriority sdl_priority = to_sdl_thread_priority(priority);
int r = SDL_SetThreadPriority(sdl_priority);
if (r) {
LOGD("Could not set thread priority: %s", SDL_GetError());
return false;
}
return true;
}
void
sc_thread_join(sc_thread *thread, int *status) {
SDL_WaitThread(thread->thread, status);

View File

@ -21,13 +21,6 @@ typedef struct sc_thread {
SDL_Thread *thread;
} sc_thread;
enum sc_thread_priority {
SC_THREAD_PRIORITY_LOW,
SC_THREAD_PRIORITY_NORMAL,
SC_THREAD_PRIORITY_HIGH,
SC_THREAD_PRIORITY_TIME_CRITICAL,
};
typedef struct sc_mutex {
SDL_mutex *mutex;
#ifndef NDEBUG
@ -46,9 +39,6 @@ sc_thread_create(sc_thread *thread, sc_thread_fn fn, const char *name,
void
sc_thread_join(sc_thread *thread, int *status);
bool
sc_thread_set_priority(enum sc_thread_priority priority);
bool
sc_mutex_init(sc_mutex *mutex);

View File

@ -1,77 +0,0 @@
#include "timeout.h"
#include <assert.h>
#include "log.h"
bool
sc_timeout_init(struct sc_timeout *timeout) {
bool ok = sc_mutex_init(&timeout->mutex);
if (!ok) {
return false;
}
ok = sc_cond_init(&timeout->cond);
if (!ok) {
return false;
}
timeout->stopped = false;
return true;
}
static int
run_timeout(void *data) {
struct sc_timeout *timeout = data;
sc_tick deadline = timeout->deadline;
sc_mutex_lock(&timeout->mutex);
bool timed_out = false;
while (!timeout->stopped && !timed_out) {
timed_out = !sc_cond_timedwait(&timeout->cond, &timeout->mutex,
deadline);
}
sc_mutex_unlock(&timeout->mutex);
timeout->cbs->on_timeout(timeout, timeout->cbs_userdata);
return 0;
}
bool
sc_timeout_start(struct sc_timeout *timeout, sc_tick deadline,
const struct sc_timeout_callbacks *cbs, void *cbs_userdata) {
bool ok = sc_thread_create(&timeout->thread, run_timeout, "scrcpy-timeout",
timeout);
if (!ok) {
LOGE("Timeout: could not start thread");
return false;
}
timeout->deadline = deadline;
assert(cbs && cbs->on_timeout);
timeout->cbs = cbs;
timeout->cbs_userdata = cbs_userdata;
return true;
}
void
sc_timeout_stop(struct sc_timeout *timeout) {
sc_mutex_lock(&timeout->mutex);
timeout->stopped = true;
sc_mutex_unlock(&timeout->mutex);
}
void
sc_timeout_join(struct sc_timeout *timeout) {
sc_thread_join(&timeout->thread, NULL);
}
void
sc_timeout_destroy(struct sc_timeout *timeout) {
sc_mutex_destroy(&timeout->mutex);
sc_cond_destroy(&timeout->cond);
}

View File

@ -1,43 +0,0 @@
#ifndef SC_TIMEOUT_H
#define SC_TIMEOUT_H
#include "common.h"
#include <stdbool.h>
#include "thread.h"
#include "tick.h"
struct sc_timeout {
sc_thread thread;
sc_tick deadline;
sc_mutex mutex;
sc_cond cond;
bool stopped;
const struct sc_timeout_callbacks *cbs;
void *cbs_userdata;
};
struct sc_timeout_callbacks {
void (*on_timeout)(struct sc_timeout *timeout, void *userdata);
};
bool
sc_timeout_init(struct sc_timeout *timeout);
void
sc_timeout_destroy(struct sc_timeout *timeout);
bool
sc_timeout_start(struct sc_timeout *timeout, sc_tick deadline,
const struct sc_timeout_callbacks *cbs, void *cbs_userdata);
void
sc_timeout_stop(struct sc_timeout *timeout);
void
sc_timeout_join(struct sc_timeout *timeout);
#endif

View File

@ -1,379 +0,0 @@
#ifndef SC_VECDEQUE_H
#define SC_VECDEQUE_H
#include "common.h"
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "util/memory.h"
/**
* A double-ended queue implemented with a growable ring buffer.
*
* Inspired from the Rust VecDeque type:
* <https://doc.rust-lang.org/std/collections/struct.VecDeque.html>
*/
/**
* VecDeque struct body
*
* A VecDeque is a dynamic ring-buffer, managed by the sc_vecdeque_* helpers.
*
* It is generic over the type of its items, so it is implemented via macros.
*
* To use a VecDeque, a new type must be defined:
*
* struct vecdeque_int SC_VECDEQUE(int);
*
* The struct may be anonymous:
*
* struct SC_VECDEQUE(const char *) names;
*
* Functions and macros having name ending with '_' are private.
*/
#define SC_VECDEQUE(type) { \
size_t cap; \
size_t origin; \
size_t size; \
type *data; \
}
/**
* Static initializer for a VecDeque
*/
#define SC_VECDEQUE_INITIALIZER { 0, 0, 0, NULL }
/**
* Initialize an empty VecDeque
*/
#define sc_vecdeque_init(pv) \
({ \
(pv)->cap = 0; \
(pv)->origin = 0; \
(pv)->size = 0; \
(pv)->data = NULL; \
})
/**
* Destroy a VecDeque
*/
#define sc_vecdeque_destroy(pv) \
free((pv)->data)
/**
* Clear a VecDeque
*
* Remove all items.
*/
#define sc_vecdeque_clear(pv) \
(void) ({ \
sc_vecdeque_destroy(pv); \
sc_vecdeque_init(pv); \
})
/**
* Returns the content size
*/
#define sc_vecdeque_size(pv) \
(pv)->size
/**
* Return whether the VecDeque is empty (i.e. its size is 0)
*/
#define sc_vecdeque_is_empty(pv) \
((pv)->size == 0)
/**
* Return whether the VecDeque is full
*
* A VecDeque is full when its size equals its current capacity. However, it
* does not prevent pushing a new item (with sc_vecdeque_push()), since this
* will increase its capacity.
*/
#define sc_vecdeque_is_full(pv) \
((pv)->size == (pv)->cap)
/**
* The minimal allocation size, in number of items
*
* Private.
*/
#define SC_VECDEQUE_MINCAP_ ((size_t) 10)
/**
* The maximal allocation size, in number of items
*
* Use SIZE_MAX/2 to fit in ssize_t, and so that cap*1.5 does not overflow.
*
* Private.
*/
#define sc_vecdeque_max_cap_(pv) (SIZE_MAX / 2 / sizeof(*(pv)->data))
/**
* Realloc the internal array to a specific capacity
*
* On reallocation success, update the VecDeque capacity (`*pcap`) and origin
* (`*porigin`), and return the reallocated data.
*
* On reallocation failure, return NULL without any change.
*
* Private.
*
* \param ptr the current `data` field of the SC_VECDEQUE to realloc
* \param newcap the requested capacity, in number of items
* \param item_size the size of one item (the generic type is unknown from this
* function)
* \param pcap a pointer to the `cap` field of the SC_VECDEQUE [IN/OUT]
* \param porigin a pointer to pv->origin [IN/OUT]
* \param size the `size` field of the SC_VECDEQUE
* \return the new array to assign to the `data` field of the SC_VECDEQUE (if
* not NULL)
*/
static inline void *
sc_vecdeque_reallocdata_(void *ptr, size_t newcap, size_t item_size,
size_t *pcap, size_t *porigin, size_t size) {
size_t oldcap = *pcap;
size_t oldorigin = *porigin;
assert(newcap > oldcap); // Could only grow
if (oldorigin + size <= oldcap) {
// The current content will stay in place, just realloc
//
// As an example, here is the content of a ring-buffer (oldcap=10)
// before the realloc:
//
// _ _ 2 3 4 5 6 7 _ _
// ^
// origin
//
// It is resized (newcap=15), e.g. with sc_vecdeque_reserve():
//
// _ _ 2 3 4 5 6 7 _ _ _ _ _ _ _
// ^
// origin
void *newptr = reallocarray(ptr, newcap, item_size);
if (!newptr) {
return NULL;
}
*pcap = newcap;
return newptr;
}
// Copy the current content to the new array
//
// As an example, here is the content of a ring-buffer (oldcap=10) before
// the realloc:
//
// 5 6 7 _ _ 0 1 2 3 4
// ^
// origin
//
// It is resized (newcap=15), e.g. with sc_vecdeque_reserve():
//
// 0 1 2 3 4 5 6 7 _ _ _ _ _ _ _
// ^
// origin
assert(size);
void *newptr = sc_allocarray(newcap, item_size);
if (!newptr) {
return NULL;
}
size_t right_len = MIN(size, oldcap - oldorigin);
assert(right_len);
memcpy(newptr, (char *) ptr + (oldorigin * item_size), right_len * item_size);
if (size > right_len) {
memcpy((char *) newptr + (right_len * item_size), ptr,
(size - right_len) * item_size);
}
free(ptr);
*pcap = newcap;
*porigin = 0;
return newptr;
}
/**
* Macro to realloc the internal data to a new capacity
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_realloc_(pv, newcap) \
({ \
void *p = sc_vecdeque_reallocdata_((pv)->data, newcap, \
sizeof(*(pv)->data), &(pv)->cap, \
&(pv)->origin, (pv)->size); \
if (p) { \
(pv)->data = p; \
} \
(bool) p; \
});
static inline size_t
sc_vecdeque_growsize_(size_t value)
{
/* integer multiplication by 1.5 */
return value + (value >> 1);
}
/**
* Increase the capacity of the VecDeque to at least `mincap`
*
* \param pv a pointer to the VecDeque
* \param mincap (`size_t`) the requested capacity
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_reserve(pv, mincap) \
({ \
assert(mincap <= sc_vecdeque_max_cap_(pv)); \
bool ok; \
/* avoid allocating tiny arrays (< SC_VECDEQUE_MINCAP_) */ \
size_t mincap_ = MAX(mincap, SC_VECDEQUE_MINCAP_); \
if (mincap_ <= (pv)->cap) { \
/* nothing to do */ \
ok = true; \
} else if (mincap_ <= sc_vecdeque_max_cap_(pv)) { \
/* not too big */ \
size_t newsize = sc_vecdeque_growsize_((pv)->cap); \
newsize = CLAMP(newsize, mincap_, sc_vecdeque_max_cap_(pv)); \
ok = sc_vecdeque_realloc_(pv, newsize); \
} else { \
ok = false; \
} \
ok; \
})
/**
* Automatically grow the VecDeque capacity
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_grow_(pv) \
({ \
bool ok; \
if ((pv)->cap < sc_vecdeque_max_cap_(pv)) { \
size_t newsize = sc_vecdeque_growsize_((pv)->cap); \
newsize = CLAMP(newsize, SC_VECDEQUE_MINCAP_, \
sc_vecdeque_max_cap_(pv)); \
ok = sc_vecdeque_realloc_(pv, newsize); \
} else { \
ok = false; \
} \
ok; \
})
/**
* Grow the VecDeque capacity if it is full
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_grow_if_needed_(pv) \
(!sc_vecdeque_is_full(pv) || sc_vecdeque_grow_(pv))
/**
* Push an uninitialized item, and return a pointer to it
*
* It does not attempt to resize the VecDeque. It is an error to call this function
* if the VecDeque is full.
*
* This function cannot fail. It returns a valid non-NULL pointer to the
* uninitialized item just pushed.
*/
#define sc_vecdeque_push_hole_noresize(pv) \
({ \
assert(!sc_vecdeque_is_full(pv)); \
++(pv)->size; \
&(pv)->data[((pv)->origin + (pv)->size - 1) % (pv)->cap]; \
})
/**
* Push an uninitialized item, and return a pointer to it
*
* If the VecDeque is full, it is resized.
*
* This function returns either a valid non-NULL pointer to the uninitialized
* item just pushed, or NULL on reallocation failure.
*/
#define sc_vecdeque_push_hole(pv) \
(sc_vecdeque_grow_if_needed_(pv) ? \
sc_vecdeque_push_hole_noresize(pv) : NULL)
/**
* Push an item
*
* It does not attempt to resize the VecDeque. It is an error to call this function
* if the VecDeque is full.
*
* This function cannot fail.
*/
#define sc_vecdeque_push_noresize(pv, item) \
(void) ({ \
assert(!sc_vecdeque_is_full(pv)); \
++(pv)->size; \
(pv)->data[((pv)->origin + (pv)->size - 1) % (pv)->cap] = item; \
})
/**
* Push an item
*
* If the VecDeque is full, it is resized.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_push(pv, item) \
({ \
bool ok = sc_vecdeque_grow_if_needed_(pv); \
if (ok) { \
sc_vecdeque_push_noresize(pv, item); \
} \
ok; \
})
/**
* Pop an item and return a pointer to it (still in the VecDeque)
*
* Returning a pointer allows the caller to destroy it in place without copy
* (especially if the item type is big).
*
* It is an error to call this function if the VecDeque is empty.
*/
#define sc_vecdeque_popref(pv) \
({ \
assert(!sc_vecdeque_is_empty(pv)); \
size_t pos = (pv)->origin; \
(pv)->origin = ((pv)->origin + 1) % (pv)->cap; \
--(pv)->size; \
&(pv)->data[pos]; \
})
/**
* Pop an item and return it
*
* It is an error to call this function if the VecDeque is empty.
*/
#define sc_vecdeque_pop(pv) \
(*sc_vecdeque_popref(pv))
#endif

View File

@ -118,7 +118,7 @@ static inline void *
sc_vector_reallocdata_(void *ptr, size_t count, size_t size,
size_t *restrict pcap, size_t *restrict psize)
{
void *p = reallocarray(ptr, count, size);
void *p = realloc(ptr, count * size);
if (!p) {
return NULL;
}

View File

@ -126,7 +126,7 @@ run_v4l2_sink(void *data) {
vs->has_frame = false;
sc_mutex_unlock(&vs->mutex);
sc_frame_buffer_consume(&vs->fb, vs->frame);
sc_video_buffer_consume(&vs->vb, vs->frame);
bool ok = encode_and_write_frame(vs, vs->frame);
av_frame_unref(vs->frame);
@ -141,19 +141,42 @@ run_v4l2_sink(void *data) {
return 0;
}
static void
sc_video_buffer_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata) {
(void) vb;
struct sc_v4l2_sink *vs = userdata;
if (!previous_skipped) {
sc_mutex_lock(&vs->mutex);
vs->has_frame = true;
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
}
}
static bool
sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
assert(ctx->pix_fmt == AV_PIX_FMT_YUV420P);
(void) ctx;
bool ok = sc_frame_buffer_init(&vs->fb);
static const struct sc_video_buffer_callbacks cbs = {
.on_new_frame = sc_video_buffer_on_new_frame,
};
bool ok = sc_video_buffer_init(&vs->vb, vs->buffering_time, &cbs, vs);
if (!ok) {
return false;
}
ok = sc_video_buffer_start(&vs->vb);
if (!ok) {
goto error_video_buffer_destroy;
}
ok = sc_mutex_init(&vs->mutex);
if (!ok) {
goto error_frame_buffer_destroy;
goto error_video_buffer_stop_and_join;
}
ok = sc_cond_init(&vs->cond);
@ -205,13 +228,11 @@ sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
goto error_avformat_free_context;
}
int r = avcodec_parameters_from_context(ostream->codecpar, ctx);
if (r < 0) {
goto error_avformat_free_context;
}
// The codec is from the v4l2 encoder, not from the decoder
ostream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
ostream->codecpar->codec_id = encoder->id;
ostream->codecpar->format = AV_PIX_FMT_YUV420P;
ostream->codecpar->width = vs->frame_size.width;
ostream->codecpar->height = vs->frame_size.height;
int ret = avio_open(&vs->format_ctx->pb, vs->device_name, AVIO_FLAG_WRITE);
if (ret < 0) {
@ -226,8 +247,8 @@ sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
goto error_avio_close;
}
vs->encoder_ctx->width = ctx->width;
vs->encoder_ctx->height = ctx->height;
vs->encoder_ctx->width = vs->frame_size.width;
vs->encoder_ctx->height = vs->frame_size.height;
vs->encoder_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
vs->encoder_ctx->time_base.num = 1;
vs->encoder_ctx->time_base.den = 1;
@ -280,8 +301,11 @@ error_cond_destroy:
sc_cond_destroy(&vs->cond);
error_mutex_destroy:
sc_mutex_destroy(&vs->mutex);
error_frame_buffer_destroy:
sc_frame_buffer_destroy(&vs->fb);
error_video_buffer_stop_and_join:
sc_video_buffer_stop(&vs->vb);
sc_video_buffer_join(&vs->vb);
error_video_buffer_destroy:
sc_video_buffer_destroy(&vs->vb);
return false;
}
@ -293,7 +317,10 @@ sc_v4l2_sink_close(struct sc_v4l2_sink *vs) {
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
sc_video_buffer_stop(&vs->vb);
sc_thread_join(&vs->thread, NULL);
sc_video_buffer_join(&vs->vb);
av_packet_free(&vs->packet);
av_frame_free(&vs->frame);
@ -303,25 +330,12 @@ sc_v4l2_sink_close(struct sc_v4l2_sink *vs) {
avformat_free_context(vs->format_ctx);
sc_cond_destroy(&vs->cond);
sc_mutex_destroy(&vs->mutex);
sc_frame_buffer_destroy(&vs->fb);
sc_video_buffer_destroy(&vs->vb);
}
static bool
sc_v4l2_sink_push(struct sc_v4l2_sink *vs, const AVFrame *frame) {
bool previous_skipped;
bool ok = sc_frame_buffer_push(&vs->fb, frame, &previous_skipped);
if (!ok) {
return false;
}
if (!previous_skipped) {
sc_mutex_lock(&vs->mutex);
vs->has_frame = true;
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
}
return true;
return sc_video_buffer_push(&vs->vb, frame);
}
static bool
@ -343,13 +357,17 @@ sc_v4l2_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
}
bool
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name) {
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
struct sc_size frame_size, sc_tick buffering_time) {
vs->device_name = strdup(device_name);
if (!vs->device_name) {
LOGE("Could not strdup v4l2 device name");
return false;
}
vs->frame_size = frame_size;
vs->buffering_time = buffering_time;
static const struct sc_frame_sink_ops ops = {
.open = sc_v4l2_frame_sink_open,
.close = sc_v4l2_frame_sink_close,

View File

@ -8,17 +8,19 @@
#include "coords.h"
#include "trait/frame_sink.h"
#include "frame_buffer.h"
#include "video_buffer.h"
#include "util/tick.h"
struct sc_v4l2_sink {
struct sc_frame_sink frame_sink; // frame sink trait
struct sc_frame_buffer fb;
struct sc_video_buffer vb;
AVFormatContext *format_ctx;
AVCodecContext *encoder_ctx;
char *device_name;
struct sc_size frame_size;
sc_tick buffering_time;
sc_thread thread;
sc_mutex mutex;
@ -32,7 +34,8 @@ struct sc_v4l2_sink {
};
bool
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name);
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
struct sc_size frame_size, sc_tick buffering_time);
void
sc_v4l2_sink_destroy(struct sc_v4l2_sink *vs);
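A hedged example of the new `sc_v4l2_sink_init()` signature; the device path, frame size and buffering time below are placeholder values for illustration, not defaults defined by scrcpy:

```c
#include "v4l2_sink.h"

static bool
open_v4l2_sink_example(void) {
    struct sc_v4l2_sink vs;
    struct sc_size frame_size = { .width = 1920, .height = 1080 };
    // A buffering time of 0 disables the extra buffering thread
    return sc_v4l2_sink_init(&vs, "/dev/video2", frame_size, 0);
}
```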

254
app/src/video_buffer.c Normal file
View File

@ -0,0 +1,254 @@
#include "video_buffer.h"
#include <assert.h>
#include <stdlib.h>
#include <libavutil/avutil.h>
#include <libavformat/avformat.h>
#include "util/log.h"
#define SC_BUFFERING_NDEBUG // comment to debug
static struct sc_video_buffer_frame *
sc_video_buffer_frame_new(const AVFrame *frame) {
struct sc_video_buffer_frame *vb_frame = malloc(sizeof(*vb_frame));
if (!vb_frame) {
LOG_OOM();
return NULL;
}
vb_frame->frame = av_frame_alloc();
if (!vb_frame->frame) {
LOG_OOM();
free(vb_frame);
return NULL;
}
if (av_frame_ref(vb_frame->frame, frame)) {
av_frame_free(&vb_frame->frame);
free(vb_frame);
return NULL;
}
return vb_frame;
}
static void
sc_video_buffer_frame_delete(struct sc_video_buffer_frame *vb_frame) {
av_frame_unref(vb_frame->frame);
av_frame_free(&vb_frame->frame);
free(vb_frame);
}
static bool
sc_video_buffer_offer(struct sc_video_buffer *vb, const AVFrame *frame) {
bool previous_skipped;
bool ok = sc_frame_buffer_push(&vb->fb, frame, &previous_skipped);
if (!ok) {
return false;
}
vb->cbs->on_new_frame(vb, previous_skipped, vb->cbs_userdata);
return true;
}
static int
run_buffering(void *data) {
struct sc_video_buffer *vb = data;
assert(vb->buffering_time > 0);
for (;;) {
sc_mutex_lock(&vb->b.mutex);
while (!vb->b.stopped && sc_queue_is_empty(&vb->b.queue)) {
sc_cond_wait(&vb->b.queue_cond, &vb->b.mutex);
}
if (vb->b.stopped) {
sc_mutex_unlock(&vb->b.mutex);
goto stopped;
}
struct sc_video_buffer_frame *vb_frame;
sc_queue_take(&vb->b.queue, next, &vb_frame);
sc_tick max_deadline = sc_tick_now() + vb->buffering_time;
// PTS (written by the server) are expressed in microseconds
sc_tick pts = SC_TICK_FROM_US(vb_frame->frame->pts);
bool timed_out = false;
while (!vb->b.stopped && !timed_out) {
sc_tick deadline = sc_clock_to_system_time(&vb->b.clock, pts)
+ vb->buffering_time;
if (deadline > max_deadline) {
deadline = max_deadline;
}
timed_out =
!sc_cond_timedwait(&vb->b.wait_cond, &vb->b.mutex, deadline);
}
if (vb->b.stopped) {
sc_video_buffer_frame_delete(vb_frame);
sc_mutex_unlock(&vb->b.mutex);
goto stopped;
}
sc_mutex_unlock(&vb->b.mutex);
#ifndef SC_BUFFERING_NDEBUG
LOGD("Buffering: %" PRItick ";%" PRItick ";%" PRItick,
pts, vb_frame->push_date, sc_tick_now());
#endif
sc_video_buffer_offer(vb, vb_frame->frame);
sc_video_buffer_frame_delete(vb_frame);
}
stopped:
// Flush queue
while (!sc_queue_is_empty(&vb->b.queue)) {
struct sc_video_buffer_frame *vb_frame;
sc_queue_take(&vb->b.queue, next, &vb_frame);
sc_video_buffer_frame_delete(vb_frame);
}
LOGD("Buffering thread ended");
return 0;
}
bool
sc_video_buffer_init(struct sc_video_buffer *vb, sc_tick buffering_time,
const struct sc_video_buffer_callbacks *cbs,
void *cbs_userdata) {
bool ok = sc_frame_buffer_init(&vb->fb);
if (!ok) {
return false;
}
assert(buffering_time >= 0);
if (buffering_time) {
ok = sc_mutex_init(&vb->b.mutex);
if (!ok) {
sc_frame_buffer_destroy(&vb->fb);
return false;
}
ok = sc_cond_init(&vb->b.queue_cond);
if (!ok) {
sc_mutex_destroy(&vb->b.mutex);
sc_frame_buffer_destroy(&vb->fb);
return false;
}
ok = sc_cond_init(&vb->b.wait_cond);
if (!ok) {
sc_cond_destroy(&vb->b.queue_cond);
sc_mutex_destroy(&vb->b.mutex);
sc_frame_buffer_destroy(&vb->fb);
return false;
}
sc_clock_init(&vb->b.clock);
sc_queue_init(&vb->b.queue);
}
assert(cbs);
assert(cbs->on_new_frame);
vb->buffering_time = buffering_time;
vb->cbs = cbs;
vb->cbs_userdata = cbs_userdata;
return true;
}
bool
sc_video_buffer_start(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
bool ok =
sc_thread_create(&vb->b.thread, run_buffering, "scrcpy-vbuf", vb);
if (!ok) {
LOGE("Could not start buffering thread");
return false;
}
}
return true;
}
void
sc_video_buffer_stop(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
sc_mutex_lock(&vb->b.mutex);
vb->b.stopped = true;
sc_cond_signal(&vb->b.queue_cond);
sc_cond_signal(&vb->b.wait_cond);
sc_mutex_unlock(&vb->b.mutex);
}
}
void
sc_video_buffer_join(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
sc_thread_join(&vb->b.thread, NULL);
}
}
void
sc_video_buffer_destroy(struct sc_video_buffer *vb) {
sc_frame_buffer_destroy(&vb->fb);
if (vb->buffering_time) {
sc_cond_destroy(&vb->b.wait_cond);
sc_cond_destroy(&vb->b.queue_cond);
sc_mutex_destroy(&vb->b.mutex);
}
}
bool
sc_video_buffer_push(struct sc_video_buffer *vb, const AVFrame *frame) {
if (!vb->buffering_time) {
// No buffering
return sc_video_buffer_offer(vb, frame);
}
sc_mutex_lock(&vb->b.mutex);
sc_tick pts = SC_TICK_FROM_US(frame->pts);
sc_clock_update(&vb->b.clock, sc_tick_now(), pts);
sc_cond_signal(&vb->b.wait_cond);
if (vb->b.clock.count == 1) {
sc_mutex_unlock(&vb->b.mutex);
// First frame, offer it immediately, for two reasons:
// - not to delay the opening of the scrcpy window
// - the buffering estimation needs at least two clock points, so it
// could not handle the first frame
return sc_video_buffer_offer(vb, frame);
}
struct sc_video_buffer_frame *vb_frame = sc_video_buffer_frame_new(frame);
if (!vb_frame) {
sc_mutex_unlock(&vb->b.mutex);
LOG_OOM();
return false;
}
#ifndef SC_BUFFERING_NDEBUG
vb_frame->push_date = sc_tick_now();
#endif
sc_queue_push(&vb->b.queue, next, vb_frame);
sc_cond_signal(&vb->b.queue_cond);
sc_mutex_unlock(&vb->b.mutex);
return true;
}
void
sc_video_buffer_consume(struct sc_video_buffer *vb, AVFrame *dst) {
sc_frame_buffer_consume(&vb->fb, dst);
}

76
app/src/video_buffer.h Normal file
View File

@ -0,0 +1,76 @@
#ifndef SC_VIDEO_BUFFER_H
#define SC_VIDEO_BUFFER_H
#include "common.h"
#include <stdbool.h>
#include "clock.h"
#include "frame_buffer.h"
#include "util/queue.h"
#include "util/thread.h"
#include "util/tick.h"
// forward declarations
typedef struct AVFrame AVFrame;
struct sc_video_buffer_frame {
AVFrame *frame;
struct sc_video_buffer_frame *next;
#ifndef NDEBUG
sc_tick push_date;
#endif
};
struct sc_video_buffer_frame_queue SC_QUEUE(struct sc_video_buffer_frame);
struct sc_video_buffer {
struct sc_frame_buffer fb;
sc_tick buffering_time;
// only if buffering_time > 0
struct {
sc_thread thread;
sc_mutex mutex;
sc_cond queue_cond;
sc_cond wait_cond;
struct sc_clock clock;
struct sc_video_buffer_frame_queue queue;
bool stopped;
} b; // buffering
const struct sc_video_buffer_callbacks *cbs;
void *cbs_userdata;
};
struct sc_video_buffer_callbacks {
void (*on_new_frame)(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata);
};
bool
sc_video_buffer_init(struct sc_video_buffer *vb, sc_tick buffering_time,
const struct sc_video_buffer_callbacks *cbs,
void *cbs_userdata);
bool
sc_video_buffer_start(struct sc_video_buffer *vb);
void
sc_video_buffer_stop(struct sc_video_buffer *vb);
void
sc_video_buffer_join(struct sc_video_buffer *vb);
void
sc_video_buffer_destroy(struct sc_video_buffer *vb);
bool
sc_video_buffer_push(struct sc_video_buffer *vb, const AVFrame *frame);
void
sc_video_buffer_consume(struct sc_video_buffer *vb, AVFrame *dst);
#endif
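A minimal usage sketch of this API, loosely modeled on the v4l2 sink above; the callback body, the 50 ms buffering value and the function names are illustrative assumptions, not part of the header:

```c
#include "video_buffer.h"
#include "util/tick.h"

// Hypothetical callback: a real sink would signal a consumer thread here
static void
on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
             void *userdata) {
    (void) vb;
    (void) previous_skipped;
    (void) userdata;
}

static bool
video_buffer_example(const AVFrame *frame, AVFrame *dst) {
    static const struct sc_video_buffer_callbacks cbs = {
        .on_new_frame = on_new_frame,
    };

    struct sc_video_buffer vb;
    // 50 ms of buffering; a buffering_time of 0 disables the buffering thread
    if (!sc_video_buffer_init(&vb, SC_TICK_FROM_US(50000), &cbs, NULL)) {
        return false;
    }
    if (!sc_video_buffer_start(&vb)) {
        sc_video_buffer_destroy(&vb);
        return false;
    }

    // The first pushed frame is offered immediately (see video_buffer.c
    // above), so on_new_frame() has already run when push() returns
    sc_video_buffer_push(&vb, frame);
    sc_video_buffer_consume(&vb, dst);

    sc_video_buffer_stop(&vb);
    sc_video_buffer_join(&vb);
    sc_video_buffer_destroy(&vb);
    return true;
}
```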

View File

@ -217,18 +217,6 @@ static void test_get_ip_multiline_second_ok(void) {
free(ip);
}
static void test_get_ip_multiline_second_ok_without_cr(void) {
char ip_route[] = "10.0.0.0/24 dev rmnet proto kernel scope link src "
"10.0.0.3\n"
"192.168.1.0/24 dev wlan0 proto kernel scope link src "
"192.168.1.3\n";
char *ip = sc_adb_parse_device_ip(ip_route);
assert(ip);
assert(!strcmp(ip, "192.168.1.3"));
free(ip);
}
static void test_get_ip_no_wlan(void) {
char ip_route[] = "192.168.1.0/24 dev rmnet proto kernel scope link src "
"192.168.12.34\r\r\n";
@ -271,7 +259,6 @@ int main(int argc, char *argv[]) {
test_get_ip_single_line_with_trailing_space();
test_get_ip_multiline_first_ok();
test_get_ip_multiline_second_ok();
test_get_ip_multiline_second_ok_without_cr();
test_get_ip_no_wlan();
test_get_ip_no_wlan_without_eol();
test_get_ip_truncated();

View File

@ -5,7 +5,7 @@
#include "util/bytebuf.h"
static void test_bytebuf_simple(void) {
void test_bytebuf_simple(void) {
struct sc_bytebuf buf;
uint8_t data[20];
@ -13,28 +13,28 @@ static void test_bytebuf_simple(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello", sizeof("hello") - 1);
assert(sc_bytebuf_can_read(&buf) == 5);
assert(sc_bytebuf_read_remaining(&buf) == 5);
sc_bytebuf_read(&buf, data, 4);
assert(!strncmp((char *) data, "hell", 4));
sc_bytebuf_write(&buf, (uint8_t *) " world", sizeof(" world") - 1);
assert(sc_bytebuf_can_read(&buf) == 7);
assert(sc_bytebuf_read_remaining(&buf) == 7);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_can_read(&buf) == 8);
assert(sc_bytebuf_read_remaining(&buf) == 8);
sc_bytebuf_read(&buf, &data[4], 8);
assert(sc_bytebuf_can_read(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_can_read(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
static void test_bytebuf_boundaries(void) {
void test_bytebuf_boundaries(void) {
struct sc_bytebuf buf;
uint8_t data[20];
@ -42,36 +42,63 @@ static void test_bytebuf_boundaries(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 6);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 18);
assert(sc_bytebuf_read_remaining(&buf) == 18);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "hello hel", 9));
assert(sc_bytebuf_can_read(&buf) == 9);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
assert(sc_bytebuf_can_read(&buf) == 14);
assert(sc_bytebuf_read_remaining(&buf) == 14);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_can_read(&buf) == 15);
assert(sc_bytebuf_read_remaining(&buf) == 15);
sc_bytebuf_skip(&buf, 3);
assert(sc_bytebuf_can_read(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_read(&buf, data, 12);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_can_read(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
static void test_bytebuf_two_steps_write(void) {
void test_bytebuf_overwrite(void) {
struct sc_bytebuf buf;
uint8_t data[10];
bool ok = sc_bytebuf_init(&buf, 10); // so actual capacity is 9
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "abcdef", sizeof("abcdef") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "lo abcdef", 9));
sc_bytebuf_write(&buf, (uint8_t *) "a very big buffer",
sizeof("a very big buffer") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "ig buffer", 9));
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
void test_bytebuf_two_steps_write(void) {
struct sc_bytebuf buf;
uint8_t data[20];
@ -79,37 +106,37 @@ static void test_bytebuf_two_steps_write(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 6);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_prepare_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 12); // write not committed yet
assert(sc_bytebuf_read_remaining(&buf) == 12); // write not committed yet
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "hello hel", 3));
assert(sc_bytebuf_can_read(&buf) == 3);
assert(sc_bytebuf_read_remaining(&buf) == 3);
sc_bytebuf_commit_write(&buf, sizeof("hello ") - 1);
assert(sc_bytebuf_can_read(&buf) == 9);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_prepare_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
assert(sc_bytebuf_can_read(&buf) == 9); // write not committed yet
assert(sc_bytebuf_read_remaining(&buf) == 9); // write not committed yet
sc_bytebuf_commit_write(&buf, sizeof("world") - 1);
assert(sc_bytebuf_can_read(&buf) == 14);
assert(sc_bytebuf_read_remaining(&buf) == 14);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_can_read(&buf) == 15);
assert(sc_bytebuf_read_remaining(&buf) == 15);
sc_bytebuf_skip(&buf, 3);
assert(sc_bytebuf_can_read(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_read(&buf, data, 12);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_can_read(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
@ -120,6 +147,7 @@ int main(int argc, char *argv[]) {
test_bytebuf_simple();
test_bytebuf_boundaries();
test_bytebuf_overwrite();
test_bytebuf_two_steps_write();
return 0;

78
app/tests/test_cbuf.c Normal file
View File

@ -0,0 +1,78 @@
#include "common.h"
#include <assert.h>
#include <string.h>
#include "util/cbuf.h"
struct int_queue CBUF(int, 32);
static void test_cbuf_empty(void) {
struct int_queue queue;
cbuf_init(&queue);
assert(cbuf_is_empty(&queue));
bool push_ok = cbuf_push(&queue, 42);
assert(push_ok);
assert(!cbuf_is_empty(&queue));
int item;
bool take_ok = cbuf_take(&queue, &item);
assert(take_ok);
assert(cbuf_is_empty(&queue));
bool take_empty_ok = cbuf_take(&queue, &item);
assert(!take_empty_ok); // the queue is empty
}
static void test_cbuf_full(void) {
struct int_queue queue;
cbuf_init(&queue);
assert(!cbuf_is_full(&queue));
// fill the queue
for (int i = 0; i < 32; ++i) {
bool ok = cbuf_push(&queue, i);
assert(ok);
}
bool ok = cbuf_push(&queue, 42);
assert(!ok); // the queue is full
int item;
bool take_ok = cbuf_take(&queue, &item);
assert(take_ok);
assert(!cbuf_is_full(&queue));
}
static void test_cbuf_push_take(void) {
struct int_queue queue;
cbuf_init(&queue);
bool push1_ok = cbuf_push(&queue, 42);
assert(push1_ok);
bool push2_ok = cbuf_push(&queue, 35);
assert(push2_ok);
int item;
bool take1_ok = cbuf_take(&queue, &item);
assert(take1_ok);
assert(item == 42);
bool take2_ok = cbuf_take(&queue, &item);
assert(take2_ok);
assert(item == 35);
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_cbuf_empty();
test_cbuf_full();
test_cbuf_push_take();
return 0;
}

View File

@ -53,7 +53,7 @@ static void test_options(void) {
"--max-size", "1024",
"--lock-video-orientation=2", // optional arguments require '='
// "--no-control" is not compatible with "--turn-screen-off"
// "--no-playback" is not compatible with "--fulscreen"
// "--no-display" is not compatible with "--fulscreen"
"--port", "1234:1236",
"--push-target", "/sdcard/Movies",
"--record", "file",
@ -108,8 +108,8 @@ static void test_options2(void) {
char *argv[] = {
"scrcpy",
"--no-control",
"--no-playback",
"--record", "file.mp4", // cannot enable --no-playback without recording
"--no-display",
"--record", "file.mp4", // cannot enable --no-display without recording
};
bool ok = scrcpy_parse_args(&args, ARRAY_LEN(argv), argv);
@ -117,8 +117,7 @@ static void test_options2(void) {
const struct scrcpy_options *opts = &args.opts;
assert(!opts->control);
assert(!opts->video_playback);
assert(!opts->audio_playback);
assert(!opts->display);
assert(!strcmp(opts->record_filename, "file.mp4"));
assert(opts->record_format == SC_RECORD_FORMAT_MP4);
}

79
app/tests/test_clock.c Normal file
View File

@ -0,0 +1,79 @@
#include "common.h"
#include <assert.h>
#include "clock.h"
void test_small_rolling_sum(void) {
struct sc_clock clock;
sc_clock_init(&clock);
assert(clock.count == 0);
assert(clock.left_sum.system == 0);
assert(clock.left_sum.stream == 0);
assert(clock.right_sum.system == 0);
assert(clock.right_sum.stream == 0);
sc_clock_update(&clock, 2, 3);
assert(clock.count == 1);
assert(clock.left_sum.system == 0);
assert(clock.left_sum.stream == 0);
assert(clock.right_sum.system == 2);
assert(clock.right_sum.stream == 3);
sc_clock_update(&clock, 10, 20);
assert(clock.count == 2);
assert(clock.left_sum.system == 2);
assert(clock.left_sum.stream == 3);
assert(clock.right_sum.system == 10);
assert(clock.right_sum.stream == 20);
sc_clock_update(&clock, 40, 80);
assert(clock.count == 3);
assert(clock.left_sum.system == 2);
assert(clock.left_sum.stream == 3);
assert(clock.right_sum.system == 50);
assert(clock.right_sum.stream == 100);
sc_clock_update(&clock, 400, 800);
assert(clock.count == 4);
assert(clock.left_sum.system == 12);
assert(clock.left_sum.stream == 23);
assert(clock.right_sum.system == 440);
assert(clock.right_sum.stream == 880);
}
void test_large_rolling_sum(void) {
const unsigned half_range = SC_CLOCK_RANGE / 2;
struct sc_clock clock1;
sc_clock_init(&clock1);
for (unsigned i = 0; i < 5 * half_range; ++i) {
sc_clock_update(&clock1, i, 2 * i + 1);
}
struct sc_clock clock2;
sc_clock_init(&clock2);
for (unsigned i = 3 * half_range; i < 5 * half_range; ++i) {
sc_clock_update(&clock2, i, 2 * i + 1);
}
assert(clock1.count == SC_CLOCK_RANGE);
assert(clock2.count == SC_CLOCK_RANGE);
// The values before the last SC_CLOCK_RANGE points in clock1 should have
// no impact
assert(clock1.left_sum.system == clock2.left_sum.system);
assert(clock1.left_sum.stream == clock2.left_sum.stream);
assert(clock1.right_sum.system == clock2.right_sum.system);
assert(clock1.right_sum.stream == clock2.right_sum.stream);
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_small_rolling_sum();
test_large_rolling_sum();
return 0;
}

43
app/tests/test_queue.c Normal file
View File

@ -0,0 +1,43 @@
#include "common.h"
#include <assert.h>
#include "util/queue.h"
struct foo {
int value;
struct foo *next;
};
static void test_queue(void) {
struct my_queue SC_QUEUE(struct foo) queue;
sc_queue_init(&queue);
assert(sc_queue_is_empty(&queue));
struct foo v1 = { .value = 42 };
struct foo v2 = { .value = 27 };
sc_queue_push(&queue, next, &v1);
sc_queue_push(&queue, next, &v2);
struct foo *foo;
assert(!sc_queue_is_empty(&queue));
sc_queue_take(&queue, next, &foo);
assert(foo->value == 42);
assert(!sc_queue_is_empty(&queue));
sc_queue_take(&queue, next, &foo);
assert(foo->value == 27);
assert(sc_queue_is_empty(&queue));
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_queue();
return 0;
}

View File

@ -269,25 +269,21 @@ static void test_parse_integer_with_suffix(void) {
char buf[32];
int r = snprintf(buf, sizeof(buf), "%ldk", LONG_MAX / 2000);
assert(r >= 0 && (size_t) r < sizeof(buf));
sprintf(buf, "%ldk", LONG_MAX / 2000);
ok = sc_str_parse_integer_with_suffix(buf, &value);
assert(ok);
assert(value == LONG_MAX / 2000 * 1000);
r = snprintf(buf, sizeof(buf), "%ldm", LONG_MAX / 2000);
assert(r >= 0 && (size_t) r < sizeof(buf));
sprintf(buf, "%ldm", LONG_MAX / 2000);
ok = sc_str_parse_integer_with_suffix(buf, &value);
assert(!ok);
r = snprintf(buf, sizeof(buf), "%ldk", LONG_MIN / 2000);
assert(r >= 0 && (size_t) r < sizeof(buf));
sprintf(buf, "%ldk", LONG_MIN / 2000);
ok = sc_str_parse_integer_with_suffix(buf, &value);
assert(ok);
assert(value == LONG_MIN / 2000 * 1000);
r = snprintf(buf, sizeof(buf), "%ldm", LONG_MIN / 2000);
assert(r >= 0 && (size_t) r < sizeof(buf));
sprintf(buf, "%ldm", LONG_MIN / 2000);
ok = sc_str_parse_integer_with_suffix(buf, &value);
assert(!ok);
}
@ -362,7 +358,7 @@ static void test_index_of_column(void) {
assert(sc_str_index_of_column(" a bc d", 1, " ") == 2);
}
static void test_remove_trailing_cr(void) {
static void test_remove_trailing_cr() {
char s[] = "abc\r";
sc_str_remove_trailing_cr(s, sizeof(s) - 1);
assert(!strcmp(s, "abc"));

View File

@ -1,197 +0,0 @@
#include "common.h"
#include <assert.h>
#include "util/vecdeque.h"
#define pr(pv) \
({ \
fprintf(stderr, "cap=%lu origin=%lu size=%lu\n", (pv)->cap, (pv)->origin, (pv)->size); \
for (size_t i = 0; i < (pv)->cap; ++i) \
fprintf(stderr, "%d ", (pv)->data[i]); \
fprintf(stderr, "\n"); \
})
static void test_vecdeque_push_pop(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
assert(sc_vecdeque_is_empty(&vdq));
assert(sc_vecdeque_size(&vdq) == 0);
bool ok = sc_vecdeque_push(&vdq, 5);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 1);
ok = sc_vecdeque_push(&vdq, 12);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 2);
int v = sc_vecdeque_pop(&vdq);
assert(v == 5);
assert(sc_vecdeque_size(&vdq) == 1);
ok = sc_vecdeque_push(&vdq, 7);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 2);
int *p = sc_vecdeque_popref(&vdq);
assert(p);
assert(*p == 12);
assert(sc_vecdeque_size(&vdq) == 1);
v = sc_vecdeque_pop(&vdq);
assert(v == 7);
assert(sc_vecdeque_size(&vdq) == 0);
assert(sc_vecdeque_is_empty(&vdq));
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_reserve(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (size_t i = 0; i < 20; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 20);
// It is now full
for (int i = 0; i < 5; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 15);
for (int i = 20; i < 25; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 20);
assert(vdq.cap == 20);
// Now, the content wraps around the ring buffer:
// 20 21 22 23 24 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
// ^
// origin
// It is now full, let's reserve some space
ok = sc_vecdeque_reserve(&vdq, 30);
assert(ok);
assert(vdq.cap == 30);
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 0; i < 20; ++i) {
// We should retrieve the items we inserted in order
int v = sc_vecdeque_pop(&vdq);
assert(v == i + 5);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_grow(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (int i = 0; i < 500; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 500);
for (int i = 0; i < 100; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 400);
for (int i = 500; i < 1000; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 900);
for (int i = 100; i < 1000; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_push_hole(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (int i = 0; i < 20; ++i) {
int *p = sc_vecdeque_push_hole(&vdq);
assert(p);
*p = i * 10;
}
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 0; i < 10; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i * 10);
}
assert(sc_vecdeque_size(&vdq) == 10);
for (int i = 20; i < 30; ++i) {
int *p = sc_vecdeque_push_hole(&vdq);
assert(p);
*p = i * 10;
}
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 10; i < 30; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i * 10);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_vecdeque_push_pop();
test_vecdeque_reserve();
test_vecdeque_grow();
test_vecdeque_push_hole();
return 0;
}

View File

@ -187,7 +187,7 @@ static void test_vector_index_of(void) {
sc_vector_destroy(&vec);
}
static void test_vector_grow(void) {
static void test_vector_grow() {
struct SC_VECTOR(int) vec = SC_VECTOR_INITIALIZER;
bool ok;

View File

@ -16,6 +16,6 @@ cpu = 'i686'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win32'
prebuilt_sdl2 = 'SDL2-2.28.4/i686-w64-mingw32'
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy/win32'
prebuilt_sdl2 = 'SDL2-2.26.1/i686-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-Win32'

View File

@ -16,6 +16,6 @@ cpu = 'x86_64'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win64'
prebuilt_sdl2 = 'SDL2-2.28.4/x86_64-w64-mingw32'
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy/win64'
prebuilt_sdl2 = 'SDL2-2.26.1/x86_64-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-x64'

View File

@ -1,146 +0,0 @@
# Audio
Audio forwarding is supported for devices with Android 11 or higher, and it is
enabled by default:
- For **Android 12 or newer**, it works out-of-the-box.
- For **Android 11**, you'll need to ensure that the device screen is unlocked
when starting scrcpy. A fake popup will briefly appear to make the system
think that the shell app is in the foreground. Without this, audio capture
will fail.
- For **Android 10 or earlier**, audio cannot be captured and is automatically
disabled.
If audio capture fails, mirroring continues with video only (since audio is
enabled by default, scrcpy must not fail just because audio is unavailable),
unless `--require-audio` is set.
## No audio
To disable audio:
```
scrcpy --no-audio
```
To disable only the audio playback, see [no playback](video.md#no-playback).
## Audio only
To play audio only, disable the video:
```bash
scrcpy --no-video
# interrupt with Ctrl+C
```
Without video, the audio latency is typically not critical, so it may be worth
adding [buffering](#buffering) to minimize glitches:
```
scrcpy --no-video --audio-buffer=200
```
## Source
By default, the device audio output is forwarded.
It is possible to capture the device microphone instead:
```
scrcpy --audio-source=mic
```
For example, to use the device as a dictaphone and record a capture directly on
the computer:
```
scrcpy --audio-source=mic --no-video --no-playback --record=file.opus
```
## Codec
The audio codec can be selected. The possible values are `opus` (default), `aac`
and `raw` (uncompressed PCM 16-bit LE):
```bash
scrcpy --audio-codec=opus # default
scrcpy --audio-codec=aac
scrcpy --audio-codec=raw
```
In particular, if you get the following error:
> Failed to initialize audio/opus, error 0xfffffffe
then your device has no Opus encoder: try `scrcpy --audio-codec=aac`.
For advanced usage, to pass arbitrary parameters to the [`MediaFormat`],
check `--audio-codec-options` in the manpage or in `scrcpy --help`.
[`MediaFormat`]: https://developer.android.com/reference/android/media/MediaFormat
## Encoder
Several encoders may be available on the device. They can be listed by:
```bash
scrcpy --list-encoders
```
To select a specific encoder:
```bash
scrcpy --audio-codec=opus --audio-encoder='c2.android.opus.encoder'
```
## Bit rate
The default audio bit rate is 128Kbps. To change it:
```bash
scrcpy --audio-bit-rate=64K
scrcpy --audio-bit-rate=64000 # equivalent
```
_This parameter does not apply to RAW audio codec (`--audio-codec=raw`)._
## Buffering
Audio buffering is unavoidable. It must be kept small enough so that the latency
is acceptable, but large enough to minimize buffer underrun (causing audio
glitches).
The default buffer size is set to 50ms. It can be adjusted:
```bash
scrcpy --audio-buffer=40 # smaller than default
scrcpy --audio-buffer=100 # higher than default
```
Note that this option changes the _target_ buffering. This target might not be
reached, typically when buffer underflows are frequent.
If you don't interact with the device (to watch a video for example), a higher
latency (for both [video](video.md#buffering) and audio) might be preferable to
avoid glitches and smooth the playback:
```
scrcpy --display-buffer=200 --audio-buffer=200
```
It is also possible to configure another audio buffer (the audio output buffer),
by default set to 5ms. Don't change it, unless you get some [robotic and glitchy
sound][#3793]:
```bash
# Only if absolutely necessary
scrcpy --audio-output-buffer=10
```
[#3793]: https://github.com/Genymobile/scrcpy/issues/3793

View File

@ -1,150 +0,0 @@
# Camera
Camera mirroring is supported for devices with Android 12 or higher.
To capture the camera instead of the device screen:
```
scrcpy --video-source=camera
```
By default, it automatically switches [audio source](audio.md#source) to
microphone (as if `--audio-source=mic` were also passed).
```bash
scrcpy --video-source=display # default is --audio-source=output
scrcpy --video-source=camera # default is --audio-source=mic
scrcpy --video-source=display --audio-source=mic # force display AND microphone
scrcpy --video-source=camera --audio-source=output # force camera AND device audio output
```
## List
To list the cameras available (with their declared valid sizes and frame rates):
```
scrcpy --list-cameras
scrcpy --list-camera-sizes
```
_Note that the sizes and frame rates are declarative. They are not accurate on
all devices: some declared sizes are not actually supported, while some
supported sizes are not declared._
## Selection
It is possible to pass an explicit camera id (as listed by `--list-cameras`):
```
scrcpy --video-source=camera --camera-id=0
```
Alternatively, the camera may be selected automatically:
```bash
scrcpy --video-source=camera # use the first camera
scrcpy --video-source=camera --camera-facing=front # use the first front camera
scrcpy --video-source=camera --camera-facing=back # use the first back camera
scrcpy --video-source=camera --camera-facing=external # use the first external camera
```
If `--camera-id` is specified, then `--camera-facing` is forbidden (the id
already determines the camera):
```bash
scrcpy --video-source=camera --camera-id=0 --camera-facing=front # error
```
### Size selection
It is possible to pass an explicit camera size:
```
scrcpy --video-source=camera --camera-size=1920x1080
```
The given size may be listed among the declared valid sizes
(`--list-camera-sizes`), but may also be anything else (some devices support
arbitrary sizes):
```
scrcpy --video-source=camera --camera-size=1840x444
```
Alternatively, a declared valid size (among the ones listed by
`--list-camera-sizes`) may be selected automatically.
Two constraints are supported:
- `-m`/`--max-size` (already used for display mirroring), for example `-m1920`;
- `--camera-ar` to specify an aspect ratio (`<num>:<den>`, `<value>` or
`sensor`).
Some examples:
```bash
scrcpy --video-source=camera # use the greatest width and the greatest associated height
scrcpy --video-source=camera -m1920 # use the greatest width not above 1920 and the greatest associated height
scrcpy --video-source=camera --camera-ar=4:3 # use the greatest size with an aspect ratio of 4:3 (+/- 10%)
scrcpy --video-source=camera --camera-ar=1.6 # use the greatest size with an aspect ratio of 1.6 (+/- 10%)
scrcpy --video-source=camera --camera-ar=sensor # use the greatest size with the aspect ratio of the camera sensor (+/- 10%)
scrcpy --video-source=camera -m1920 --camera-ar=16:9 # use the greatest width not above 1920 and the closest to 16:9 aspect ratio
```
If `--camera-size` is specified, then `-m`/`--max-size` and `--camera-ar` are
forbidden (the size is determined by the value given explicitly):
```bash
scrcpy --video-source=camera --camera-size=1920x1080 -m3000 # error
```
## Frame rate
By default, the camera is captured at Android's default frame rate (30 fps).
To configure a different frame rate:
```
scrcpy --video-source=camera --camera-fps=60
```
## High speed capture
The Android camera API also supports a [high speed capture mode][high speed].
This mode is restricted to specific resolutions and frame rates, listed by
`--list-camera-sizes`.
```
scrcpy --video-source=camera --camera-size=1920x1080 --camera-fps=240
```
[high speed]: https://developer.android.com/reference/android/hardware/camera2/CameraConstrainedHighSpeedCaptureSession
## Brace expansion tip
All camera options start with `--camera-`, so if your shell supports it, you can
benefit from [brace expansion] (for example, it is supported by _bash_ and _zsh_):
```bash
scrcpy --video-source=camera --camera-{facing=back,ar=16:9,high-speed,fps=120}
```
This will be expanded as:
```bash
scrcpy --video-source=camera --camera-facing=back --camera-ar=16:9 --camera-high-speed --camera-fps=120
```
[brace expansion]: https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html
## Webcam
Combined with the [V4L2](v4l2.md) feature on Linux, the Android device camera
may be used as a webcam on the computer.

Some files were not shown because too many files have changed in this diff