Currently, font renderers free some texture caches when
memory is trimmed. This change goes further by also freeing the
large-glyph caches for all font renderers. These caches are
only allocated as needed, but continue to consume large amounts of
memory (CPU and GPU) after that allocation. Deallocating this memory
on a trim operation should prevent background apps that have drawn
large glyphs from holding onto it.
Change-Id: Id7a3ab49b244e036b442d87252fb40aeca8fdb26
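As a rough illustration only (the type and method names below, such as LargeGlyphCache and onMemoryTrim, are assumptions rather than the actual hwui code), a trim handler along these lines would release both the CPU and GPU side of a lazily allocated large-glyph cache:

    #include <GLES2/gl2.h>
    #include <cstdint>
    #include <memory>
    #include <vector>

    struct LargeGlyphCache {
        std::vector<uint8_t> cpuPixels;  // CPU-side staging buffer
        GLuint texture = 0;              // GL texture name, 0 if never uploaded
    };

    class FontRenderer {
    public:
        void onMemoryTrim() {
            for (auto& cache : mLargeCaches) {
                if (cache->texture) {
                    // Return the GPU memory held by this cache.
                    glDeleteTextures(1, &cache->texture);
                }
            }
            // Dropping the objects frees the CPU buffers; the caches are
            // re-created lazily the next time a large glyph is drawn.
            mLargeCaches.clear();
        }

    private:
        std::vector<std::unique_ptr<LargeGlyphCache>> mLargeCaches;
    };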
Two issues remained after a recent change to support
glyph caching from multiple textures:
- GPU memory for all textures was being allocated up front. It is now
allocated lazily, only when a texture is first needed.
- filtering (applied when a rendered object is transformed) was ignoring
the new multiple-texture structure. Filtering is now applied correctly
whenever we switch textures.
Change-Id: I5c8eb8d46c73cd01782a353fc79b11cacc2146ab
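The two fixes can be sketched as follows (assumed names such as CacheTexture, allocateIfNeeded and bindWithFilter; this is not the actual hwui code): GPU storage is created only when a cache texture is first used, and the filter mode is re-applied whenever a texture with a different mode is bound, since filtering is per texture object.

    #include <GLES2/gl2.h>

    struct CacheTexture {
        GLuint id = 0;        // 0 means GPU storage has not been allocated yet
        GLsizei width = 1024;
        GLsizei height = 512;
        bool linearFilter = false;
    };

    static void allocateIfNeeded(CacheTexture& t) {
        if (t.id) return;  // lazy: allocate GPU memory only on first use
        glGenTextures(1, &t.id);
        glBindTexture(GL_TEXTURE_2D, t.id);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, t.width, t.height, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }

    static void bindWithFilter(CacheTexture& t, bool linearFilter) {
        allocateIfNeeded(t);
        glBindTexture(GL_TEXTURE_2D, t.id);
        // Filter state lives in the texture object, so it must be set again
        // whenever we switch to a texture whose mode does not match.
        if (t.linearFilter != linearFilter) {
            const GLint mode = linearFilter ? GL_LINEAR : GL_NEAREST;
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, mode);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mode);
            t.linearFilter = linearFilter;
        }
    }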
Currently, madvise(MADV_REMOVE) is called after deallocation.
Another thread might allocate (and even write to) the same region between
the deallocation and the madvise() call, in which case that thread will fail
to read back what it has written. So, call deallocate() after madvise(MADV_REMOVE).
Bug: 5654596
Change-Id: I26f36cd6013de499090768a0ddc68206a4a68219
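The ordering fix amounts to the following sketch (region_free is a hypothetical stand-in for the real allocator hook): the pages are purged while the region is still exclusively owned, and only then handed back where another thread may reuse them.

    #include <sys/mman.h>
    #include <cstddef>

    void region_free(void* start, size_t length);  // assumed allocator hook

    void release_region(void* start, size_t length) {
        // 1. Purge the backing pages while we still exclusively own the region,
        //    so no concurrent writer can lose data to a late madvise().
        madvise(start, length, MADV_REMOVE);
        // 2. Only then return the address range to the allocator, where another
        //    thread may immediately reuse it.
        region_free(start, length);
    }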
Some GPU architectures could not handle the previous implementation
of our glyph cache. Frequent uploads would cause memory problems in the GPU
and eventually a crash. The solution is to move to a system of several
smaller caches instead of one monolithic cache for all glyphs.
Change-Id: I0fc7a323360940d16d5a33eeb33abfab194c5920
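The idea can be sketched like this (CacheTexture and pickCacheForHeight are assumed names, not the actual implementation): glyphs are binned by height into several small cache textures, so each upload touches a much smaller allocation than a single monolithic cache would.

    #include <array>

    struct CacheTexture {
        int maxGlyphHeight;  // tallest glyph this cache accepts
        int width, height;   // texture dimensions
    };

    // Several small caches rather than one monolithic texture.
    static std::array<CacheTexture, 4> sCaches = {{
        { 32,  1024, 256 },   // small glyphs
        { 64,  1024, 256 },
        { 128, 1024, 512 },
        { 256, 1024, 512 },   // large glyphs
    }};

    static CacheTexture* pickCacheForHeight(int glyphHeight) {
        for (auto& cache : sCaches) {
            if (glyphHeight <= cache.maxGlyphHeight) {
                return &cache;
            }
        }
        return nullptr;  // too large to cache; caller draws the glyph directly
    }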
This optimization, along with the previous one, lets us render an
application like Gmail using only 30% of the GL commands
previously required.
Change-Id: Ifee63edaf495e04490b5abd5433bb9a07bc327a8
I don't know who's to blame, SGX or Tegra2, but one of those two GPUs is not
following the OpenGL ES 2.0 spec.
Change-Id: I2624e0efbc9c57d571c55c8b440a5e43f08a54f2
- add the ability to set the vsync delivery rate; when the rate is
set to N>1 (i.e. receive every Nth vsync), the SurfaceFlinger process is
still woken up for every vsync, but clients only see every Nth event
(see the sketch below).
- add the concept of one-shot vsync events, with a callback
to request the next one. Currently the callback is a binder IPC.
Change-Id: I09f71df0b0ba0d88ed997645e2e2497d553c9a1b
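A client-side usage sketch of the two notions described above (the interface and method names, setVsyncRate and requestNextVsync, are assumptions based on this description, not a confirmed API):

    #include <cstdint>

    struct VsyncConnection {
        // Deliver only every Nth vsync to this client; the SurfaceFlinger
        // process itself still wakes up on every vsync.
        virtual void setVsyncRate(uint32_t n) = 0;
        // One-shot: request exactly the next vsync event. Per the change
        // above, this request currently travels over binder.
        virtual void requestNextVsync() = 0;
        virtual ~VsyncConnection() = default;
    };

    void example(VsyncConnection& conn) {
        conn.setVsyncRate(2);     // continuously receive every 2nd vsync
        conn.requestNextVsync();  // or pull a single event on demand
    }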
Bug #5706056
A newly introduced optimization relied on the display list renderer
to properly measure text in order to perform fast clipping. The paint used
to measure text needs to have anti-aliasing and glyph ID encoding set to
return correct results. Unfortunately, these properties were set by
the GL renderer, not by the display list renderer. This change
simply sets the properties in the display list renderer instead.
This change also improves the error message printed out when the
application attempts to use a bitmap larger than the max texture
size.
Change-Id: I4d84e1c7d194aed9ad476f69434eaa2c8f3836a8
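The paint setup boils down to something like the sketch below, written against the SkPaint API the text pipeline is built on (the surrounding renderer code is assumed, not the actual change): without these two properties, the measured bounds used for fast clipping do not match what the GL renderer later draws.

    #include <SkPaint.h>

    static void setupPaintForMeasurement(SkPaint* paint) {
        // Match the flags the GL renderer uses when it rasterizes the glyphs.
        paint->setAntiAlias(true);
        paint->setTextEncoding(SkPaint::kGlyphID_TextEncoding);
    }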
This change adds a compile-time option for SurfaceTexture to use the
EGL_KHR_fence_sync extension to synchronize access to Gralloc buffers.
Bug: 5122031
Change-Id: I7e973a358631fff5308acf377581b811911fe790
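In rough terms the synchronization follows the standard EGL_KHR_fence_sync pattern sketched below (this is the general extension usage, not the exact SurfaceTexture code): a fence is inserted after the GL commands that read the Gralloc buffer, and the buffer is not handed back for writing until the fence signals.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    bool waitForBufferRelease(EGLDisplay dpy) {
        // Insert a fence into the command stream of the current context.
        EGLSyncKHR sync = eglCreateSyncKHR(dpy, EGL_SYNC_FENCE_KHR, nullptr);
        if (sync == EGL_NO_SYNC_KHR) return false;

        // Flush and wait until the GPU has passed the fence, i.e. it is done
        // reading the Gralloc buffer, before the producer writes to it again.
        EGLint result = eglClientWaitSyncKHR(dpy, sync,
                                             EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
                                             EGL_FOREVER_KHR);
        eglDestroySyncKHR(dpy, sync);
        return result == EGL_CONDITION_SATISFIED_KHR;
    }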
Use gui/DisplayEvent to receive the events. Events are
dispatched through a unix pipe, so the API is compatible
with utils/Looper. See gui/DisplayEvent.h for more info.
Bug: 1475048
Change-Id: Ia720f64d1b950328b47b22c6a86042e481d35f09
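Because the events arrive over a pipe, the receiver's file descriptor can simply be registered with a utils/Looper. The sketch below assumes a receiver class in the spirit of gui/DisplayEvent.h exposing getFd() and a drain method; the exact names are assumptions.

    #include <utils/Looper.h>

    using android::Looper;
    using android::sp;

    // Assumed interface: hands out the pipe fd and decodes pending events.
    class DisplayEventReceiverSketch {
    public:
        int getFd() const;
        void drainEvents();
    };

    static int handleDisplayEvent(int /*fd*/, int /*events*/, void* data) {
        static_cast<DisplayEventReceiverSketch*>(data)->drainEvents();
        return 1;  // keep the fd registered with the Looper
    }

    void registerWithLooper(const sp<Looper>& looper,
                            DisplayEventReceiverSketch* receiver) {
        looper->addFd(receiver->getFd(), 0, Looper::EVENT_INPUT,
                      handleDisplayEvent, receiver);
    }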
Revert "Add support for sending VSYNC events to the framework"
This reverts commit f3918c5bd4bc9f02f74da42995564150ca2dd382.
Change-Id: I998e3e1aa3fa310829ae973b64fe11b01f6f468f
* changes:
Add support for sending VSYNC events to the framework
BitTube::read now handles EAGAIN
split ComposerService out of SurfaceComposerClient.h
rewrite SF's message loop on top of Looper