Remove all ro.hwui.* tuning props and instead
calculate them from the screen resolution.
Or just hardcode them to what all devices
were hardcoding them to anyway.
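A minimal sketch of the resolution-derived approach (the helper name and scale factors below are illustrative assumptions, not the values chosen by this change):

  // Illustrative only: derive cache budgets from the display size instead of
  // reading ro.hwui.* tuning properties. The factors are hypothetical.
  #include <cstddef>
  #include <cstdint>

  struct CacheSizes {
      size_t layerCacheBytes;
      size_t renderBufferCacheBytes;
  };

  CacheSizes computeFromResolution(uint32_t width, uint32_t height) {
      const size_t screenBytes = size_t(width) * height * 4;  // RGBA8888 screen
      CacheSizes sizes;
      sizes.layerCacheBytes        = screenBytes * 4;  // a few full-screen layers
      sizes.renderBufferCacheBytes = screenBytes * 2;
      return sizes;
  }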
Bug: 63741221
Test: Check cache size results on sailfish
Change-Id: I8b0d210572a246f4fefb076935cf5156a70c274c
Merged-In: I8b0d210572a246f4fefb076935cf5156a70c274c
(cherry picked from commit 8dc02f99d09130ace2ee738c2e689db1b3f33181)
To work around a deadlock caused by BufferQueue locks,
we force RenderThread over to async mode, which
we enable via eglSwapInterval(0).
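For reference, the swap interval call itself is a one-liner on the current EGL display (error handling omitted; this sketches the mechanism, not the exact call site):

  #include <EGL/egl.h>

  // Put swaps into async mode: eglSwapBuffers() no longer throttles to vsync,
  // which keeps RenderThread from blocking on the BufferQueue.
  void enableAsyncSwaps(EGLDisplay display) {
      eglSwapInterval(display, 0);
  }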
Bug: 38372997
Test: steps in the bug
Change-Id: Ia305f73abbdd64ab0c25d1f7d32792cc6295a0ce
Added RENDERTHREAD_* test macros, which run these tests against
specific backends:
RENDERTHREAD_TESTS - Runs OpenGL, SkiaGL, and SkiaVulkan
RENDERTHREAD_SKIA_TESTS - Runs SkiaGL and SkiaVulkan
RENDERTHREAD_OPENGL_TESTS - Runs OpenGL
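A sketch of how such a macro can fan one test body out across backends (gtest-based; the enum and dispatch shown here are assumptions for illustration, not the exact macros added):

  #include <gtest/gtest.h>

  // Hypothetical backend enum; the real tests dispatch through RenderThread.
  enum class RenderPipelineType { OpenGL, SkiaGL, SkiaVulkan };

  // Expand one test body into one gtest per requested backend.
  #define RENDERTHREAD_SKIA_TESTS_SKETCH(name) \
      static void name##_body(RenderPipelineType pipeline); \
      TEST(RenderThread, name##_SkiaGL) { name##_body(RenderPipelineType::SkiaGL); } \
      TEST(RenderThread, name##_SkiaVulkan) { name##_body(RenderPipelineType::SkiaVulkan); } \
      static void name##_body(RenderPipelineType pipeline)

  // Usage: RENDERTHREAD_SKIA_TESTS_SKETCH(DrawsFrame) { /* per-backend body */ }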
Test: manual running of hwui_unit_tests
Change-Id: Ia7420ee7a38803a15e2d58394d14b38cae8208d3
NOTE: Linear blending is currently disabled in this CL as the
feature is still a work in progress
Android currently performs all blending (any kind of linear math
on colors really) on gamma-encoded colors. Since Android assumes
that the default color space is sRGB, all bitmaps and colors
are encoded with the sRGB Opto-Electronic Conversion Function
(OECF, which can be approximated with a power function). Since
the power curve is not linear, our linear math is incorrect.
The result is that we generate colors that tend to be too dark;
this affects blending but also anti-aliasing, gradients, blurs,
etc.
The solution is to convert gamma-encoded colors back to linear
space before doing any math on them, using the sRGB Electro-Optical
Conversion Function (EOCF). This is achieved in different
ways in different parts of the pipeline:
- Using hardware conversions when sampling from OpenGL textures
or writing into OpenGL frame buffers
- Using software conversion functions, to translate app-supplied
colors to and from sRGB
- Using Skia's color spaces
Any type of processing on colors must roughly follow these steps:
[sRGB input]->EOCF->[linear data]->[processing]->OECF->[sRGB output]
For the sRGB color space, the conversion functions are defined as
follows:
OECF(linear) :=
linear <= 0.0031308 ? linear * 12.92 : (pow(linear, 1/2.4) * 1.055) - 0.055
EOCF(srgb) :=
srgb <= 0.04045 ? srgb / 12.92 : pow((srgb + 0.055) / 1.055, 2.4)
The EOCF is simply the inverse of the OECF.
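Written out in C++ directly from the definitions above (these are the standard sRGB transfer functions, shown here for illustration rather than copied from this change):

  #include <cmath>

  // sRGB OECF: linear -> gamma-encoded.
  float OECF_sRGB(float linear) {
      return linear <= 0.0031308f
              ? linear * 12.92f
              : std::pow(linear, 1.0f / 2.4f) * 1.055f - 0.055f;
  }

  // sRGB EOCF: gamma-encoded -> linear.
  float EOCF_sRGB(float srgb) {
      return srgb <= 0.04045f
              ? srgb / 12.92f
              : std::pow((srgb + 0.055f) / 1.055f, 2.4f);
  }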
While it is highly recommended to use the exact sRGB conversion
functions everywhere possible, it is sometimes useful or beneficial
to rely on approximations:
- pow(x,2.2) and pow(x,1/2.2)
- x^2 and sqrt(x)
The latter is particularly useful in fragment shaders (for instance
to apply dithering in sRGB space), especially if the sqrt() can be
replaced with an inversesqrt().
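The cheap approximation pair, again in C++ for illustration (a fragment shader would use the same x*x / sqrt(x) trick, folding the sqrt() into an inversesqrt() where one is already needed):

  #include <cmath>

  // Crude gamma-2.0 approximation, much cheaper than the exact sRGB curves.
  float approxEOCF(float srgb)   { return srgb * srgb; }        // ~EOCF
  float approxOECF(float linear) { return std::sqrt(linear); }  // ~OECF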
Here is a fairly exhaustive list of modifications implemented
in this CL:
- Set TARGET_ENABLE_LINEAR_BLENDING := false in BoardConfig.mk
to disable linear blending. This is only for GLES 2.0 GPUs
with no hardware sRGB support. This flag is currently assumed
to be false (see note above)
- sRGB writes are disabled when entering a functor (WebView).
This will need to be fixed at some point
- Skia bitmaps are created with the sRGB color space
- Bitmaps using a 565 config are expanded to 888
- Linear blending is disabled when entering a functor
- External textures are not properly sampled (see below)
- Gradients are interpolated in linear space
- Texture-based dithering was replaced with analytical dithering
- Dithering is done in the quantization color space, which is
why we must do EOCF(OECF(color)+dither)
- Text is now gamma corrected differently depending on the luminance
of the source pixel. The assumption is that a bright pixel will be
blended on a dark background and the other way around. The source
alpha is gamma corrected to thicken dark on bright and thin
bright on dark to match the intended design of fonts. This also
matches the behavior of popular design/drawing applications
- Removed the asset atlas. It did not contain anything useful and
could not be sampled in sRGB without a yet-to-be-defined GL
extension
- The last column of color matrices is converted to linear space
because its values are added to linear colors
Missing features:
- Resource qualifier?
- Regeneration of golden images for automated tests
- Handle alpha8/grey8 properly
- Disable sRGB write for layers with external textures
Test: Manual testing while work in progress
Bug: 29940137
Change-Id: I6a07b15ab49b554377cd33a36b6d9971a15e9a0b
The maximum length of a system property name is 31 bytes;
debug.hwui.enable_partial_updates is 33 bytes.
Change-Id: Idb1b1a00294dd29f84530e8aee1f685094d0096f
HWUI calculates the texture size as w*h*bpp. In some cases the
calculated path cache entry is small, but the actual memory allocated
by the driver is 4K/8K/16K, much bigger than what HWUI calculates.
Example: for a 5*65 alpha texture, HWUI thinks it is 5*65*1 = 325 bytes,
but the driver allocates 8K. An app can allocate up to 32M of path
textures, which actually consumes 32M*(8*1024/325) = 806M of memory.
Here we limit the number of path textures in the cache to 256, which
should be a pretty generous global limit.
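A sketch of the kind of guard this implies; the 256 limit is the one stated above, while the names are illustrative assumptions:

  #include <cstddef>

  // The driver's real allocation can dwarf the w*h*bpp estimate (e.g. a 5x65
  // A8 texture is counted as 325 bytes but may cost 8K), so cap the entry count.
  constexpr size_t kMaxPathTextureCount = 256;

  bool canCachePathTexture(size_t currentTextureCount) {
      return currentTextureCount < kMaxPathTextureCount;
  }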
Change-Id: I890819b73bb0b7f63e96bc3d9d0ff9469c16838c
Add a system property, debug.hwui.default_renderer, which allows
setting the rendering mode to OpenGL (default), Skia OpenGL, or Vulkan.
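A sketch of reading the property with libcutils' property_get() and mapping it to a pipeline choice; the enum and the accepted value strings are assumptions for illustration:

  #include <cutils/properties.h>
  #include <cstring>

  enum class RenderPipelineType { OpenGL, SkiaGL, SkiaVulkan };

  // Fall back to OpenGL when the property is unset or unrecognized.
  RenderPipelineType getDefaultRenderPipeline() {
      char value[PROPERTY_VALUE_MAX];
      property_get("debug.hwui.default_renderer", value, "opengl");
      if (strcmp(value, "skiagl") == 0) return RenderPipelineType::SkiaGL;
      if (strcmp(value, "skiavk") == 0) return RenderPipelineType::SkiaVulkan;
      return RenderPipelineType::OpenGL;
  }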
Change-Id: I8bca5bacc5108f77437e340ac61f2d8db8cc4c39
Adds googlebench output format support
Adds offscreen rendering for >60fps benchmarking
Adds 'all' alias to run all registered TestScenes
Change-Id: I2579e40f2f4c941bfbd90c75efbee384c08a116b
Bug: 26912651
By setting debug.hwui.filter_test_overhead to true, HWUI's
JankTracker will attempt to filter out the overhead caused
by the event injection that automated testing uses.
Change-Id: I75c8dc5e7798e06e3009baf396108507c7240eec
bug:17478770
This removes a lot of redundant property query code, and puts the
queries all in one place, so defining them automatically will be simpler
in the future.
Change-Id: I0428550e6081f07bc6554ffdf73b22284325abb8
Bug: 21753739
Includes a revert of 13d1b4ab10fbee5e81a2ba1ac59cfae1e51d3ef0
as that only supported EGL_EXT_buffer_age
Change-Id: Ia86a47d19e3355c067934d7764c330b640c6958d
bug:19967854
Separate properties from Caches into a static, RenderThread-only class.
Also rewrites the means for Java to set properties to correctly handle
threading, and adds an override for profile bars so that SystemUI doesn't
clutter the screen with them.
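A rough sketch of the shape this takes, assuming hypothetical member names (a static holder queried on RenderThread, plus an override so profile bars can be suppressed for SystemUI):

  // Illustrative sketch only; the real fields and query mechanism differ.
  enum class ProfileType { None, Bars, Console };

  class Properties {
  public:
      static ProfileType getProfileType() {
          return sProfileOverridden ? ProfileType::None : sProfileType;
      }
      static void overrideProfileType(bool overridden) { sProfileOverridden = overridden; }

  private:
      static inline ProfileType sProfileType = ProfileType::None;  // normally loaded from a debug property
      static inline bool sProfileOverridden = false;               // set to hide bars in SystemUI
  };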
Change-Id: I6e21a96065f52b9ecc49d1a126244804ba106fa9
Changes generated with clang-modernize.
Additionally, fixed some struct-vs-class usage to make clang happy.
Change-Id: Ic6ef2427401ff1e794d26f21f7b44868fc75fb72
Bug: 17479800
The FBO cache is very expensive and no longer necessary; disable
it by simply setting its size to 0.
Change-Id: I664616f262c8339919e1d20baaafa5de2b628d7e
This reduces state changes when we draw 9-patches and text together,
which happens *a lot*. Also disable the NV profiling extension by
default since it doesn't play nice with display list deferrals.
To enable it, set debug.hwui.nv_profiling to true.
Change-Id: I518b44b7d294e5def10c78911ceb9f01ae401609
The counter can be enabled by setting the system property called
debug.hwui.overdraw to the string "count". If the string is set
to "show", overdraw will be highlighted on screen instead of
printing out a simple counter.
Change-Id: I9a9c970d54bffab43138bbb7682f6c04bc2c40bd
When the Android runtime starts, the system preloads a series of assets
in the Zygote process. These assets are shared across all processes.
Unfortunately, each one of these assets is later uploaded in its own
OpenGL texture, once per process. This wastes memory and generates
unnecessary OpenGL state changes.
This CL introduces an asset server that provides an atlas to all processes.
Note: bitmaps used by Skia shaders are *not* sampled from the atlas.
It's an uncommon use case and would require extra texture transforms
in the GL shaders.
WHAT IS THE ASSETS ATLAS
The "assets atlas" is a single, shareable graphic buffer that contains
all the system's preloaded bitmap drawables (this includes 9-patches.)
The atlas is made of two distinct objects: the graphic buffer that
contains the actual pixels and the map which indicates where each
preloaded bitmap can be found in the atlas (essentially a pair of
x and y coordinates.)
HOW IS THE ASSETS ATLAS GENERATED
Because we need to support a wide variety of devices and because it
is easy to change the list of preloaded drawables, the atlas is
generated at runtime, during the startup phase of the system process.
There are several steps that lead to the atlas generation:
1. If the device is booting for the first time, or if the device was
updated, we need to find the best atlas configuration. To do so,
the atlas service tries a number of width, height and algorithm
variations that allow us to pack as many assets as possible while
using as little memory as possible. Once a best configuration is found,
it gets written to disk in /data/system/framework_atlas
2. Given a best configuration (algorithm variant, dimensions and
number of bitmaps that can be packed in the atlas), the atlas service
packs all the preloaded bitmaps into a single graphic buffer object.
3. The packing is done using Skia in a temporary native bitmap. The
Skia bitmap is then copied into the graphic buffer using OpenGL ES
to benefit from texture swizzling.
HOW PROCESSES USE THE ATLAS
Whenever a process' hardware renderer initializes its EGL context,
it queries the atlas service for the graphic buffer and the map.
It is important to remember that both the context and the map will
be valid for the lifetime of the hardware renderer (if the system
process goes down, all apps get killed as well.)
Every time the hardware renderer needs to render a bitmap, it first
checks whether the bitmap can be found in the assets atlas. When
the bitmap is part of the atlas, texture coordinates are remapped
appropriately before rendering.
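A sketch of that lookup-and-remap step, with hypothetical types for the map entry (the real map also carries 9-patch information and is delivered through the atlas service):

  #include <cstdint>
  #include <unordered_map>

  // Hypothetical atlas map entry: where a preloaded bitmap lives in the buffer.
  struct AtlasEntry {
      uint32_t x, y;             // top-left corner inside the atlas
      uint32_t width, height;
  };

  // Remap a bitmap-local UV into atlas UVs if the bitmap was packed;
  // return false so the caller falls back to the bitmap's own texture.
  bool remapToAtlas(const std::unordered_map<uint64_t, AtlasEntry>& map,
                    uint64_t bitmapId, float u, float v,
                    uint32_t atlasWidth, uint32_t atlasHeight,
                    float* outU, float* outV) {
      auto it = map.find(bitmapId);
      if (it == map.end()) return false;
      const AtlasEntry& e = it->second;
      *outU = (e.x + u * e.width) / float(atlasWidth);
      *outV = (e.y + v * e.height) / float(atlasHeight);
      return true;
  }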
Change-Id: I8eaecf53e7f6a33d90da3d0047c5ceec89ea3af0
PBOs (Pixel Buffer Objects) can be used on OpenGL ES 3.0 to perform
asynchronous texture uploads to free up the CPU. This change does not
enable the use of PBOs unless a specific property is set (Adreno drivers
have issues with PBOs at the moment, Mali drivers work just fine.)
This change also cleans up Font/FontRenderer a little bit and improves
the performance of drop shadow generation by using memcpy() instead of
a manual byte-by-byte copy.
On GL ES 2.0 devices, or when PBOs are disabled, a PixelBuffer instance
behaves like a simple byte array. The extra APIs introduced for PBOs
(map/unmap and bind/unbind) are pretty much no-ops for CPU pixel
buffers and won't introduce any significant overhead.
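A minimal sketch of the GLES 3.0 path, assuming a previously created PBO and texture (the CPU fallback keeps the bytes in a plain array and passes a client pointer to glTexSubImage2D instead):

  #include <GLES3/gl3.h>
  #include <cstdint>
  #include <cstring>

  // Stage pixels in a PBO so the driver can upload them asynchronously
  // instead of stalling the CPU inside glTexSubImage2D.
  void uploadViaPbo(GLuint pbo, GLuint texture, const uint8_t* pixels,
                    GLsizei width, GLsizei height) {
      const GLsizeiptr size = GLsizeiptr(width) * height;  // GL_ALPHA, 1 byte/pixel

      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
      glBufferData(GL_PIXEL_UNPACK_BUFFER, size, nullptr, GL_DYNAMIC_DRAW);
      void* mapped = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                      GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
      memcpy(mapped, pixels, size);
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

      glBindTexture(GL_TEXTURE_2D, texture);
      // With a PBO bound, the data argument is an offset into the buffer.
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                      GL_ALPHA, GL_UNSIGNED_BYTE, nullptr);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
  }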
This change also fixes a bug with text drop shadows: if the drop
shadow is larger than the max texture size, the renderer would leave
the GL context in a bad state and generate 0x501 errors. This change
simply skips drop shadows if they are too large.
Change-Id: I2700aadb0c6093431dc5dee3d587d689190c4e23
When several CacheTextures are used to draw text, sort the
draw batches by texture ID to minimize state changes in the
driver.
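The idea in a nutshell, using a hypothetical batch type (real draw batches carry far more state than a texture id):

  #include <algorithm>
  #include <vector>

  struct TextBatch {
      unsigned int textureId;  // GL texture name of the CacheTexture it samples
      // ... vertices, glyph ranges, etc.
  };

  // Group batches that sample the same texture so the driver sees fewer
  // texture bind transitions while the text is flushed.
  void sortByTexture(std::vector<TextBatch>& batches) {
      std::stable_sort(batches.begin(), batches.end(),
                       [](const TextBatch& a, const TextBatch& b) {
                           return a.textureId < b.textureId;
                       });
  }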
This change also tweaks the font cache size and renames
a property that was too long to be set using setprop.
Change-Id: I0a36dfffe58c9e75dd7384592d3343c192d042b1
This change will greatly simplify the multi-threading of all
shape types.
This change also uses PathTessellator to render convex paths.
Change-Id: I4e65bc95c9d24ecae2183b72204de5c2dfb6ada4