android_frameworks_base/libs/hwui/renderstate/OffscreenBufferPool.cpp
Romain Guy 253f2c213f Linear blending, step 1
NOTE: Linear blending is currently disabled in this CL as the
      feature is still a work in progress

Android currently performs all blending (any kind of linear math
on colors really) on gamma-encoded colors. Since Android assumes
that the default color space is sRGB, all bitmaps and colors
are encoded with the sRGB Opto-Electronic Conversion Function
(OECF, which can be approximated with a power function). Since
the power curve is not linear, our linear math is incorrect.
The result is that we generate colors that tend to be too dark;
this affects blending but also anti-aliasing, gradients, blurs,
etc.

The solution is to convert gamma-encoded colors back to linear
space before doing any math on them, using the sRGB Electro-Optical
Conversion Function (EOCF). This is achieved in different
ways in different parts of the pipeline:

- Using hardware conversions when sampling from OpenGL textures
  or writing into OpenGL frame buffers
- Using software conversion functions, to translate app-supplied
  colors to and from sRGB
- Using Skia's color spaces

Any type of processing on colors must roughly follow these steps:

[sRGB input]->EOCF->[linear data]->[processing]->OECF->[sRGB output]

For the sRGB color space, the conversion functions are defined as
follows:

OECF(linear) :=
linear <= 0.0031308 ? linear * 12.92 : (pow(linear, 1/2.4) * 1.055) - 0.055

EOCF(srgb) :=
srgb <= 0.04045 ? srgb / 12.92 : pow((srgb + 0.055) / 1.055, 2.4)

The EOCF is simply the inverse of the OECF.
While it is highly recommended to use the exact sRGB conversion
functions everywhere possible, it is sometimes useful or beneficial
to rely on approximations:

- pow(x,2.2) and pow(x,1/2.2)
- x^2 and sqrt(x)

The latter is particularly useful in fragment shaders (for instance
to apply dithering in sRGB space), especially if the sqrt() can be
replaced with an inversesqrt().

Here is a fairly exhaustive list of modifications implemented
in this CL:

- Set TARGET_ENABLE_LINEAR_BLENDING := false in BoardConfig.mk
  to disable linear blending. This is only for GLES 2.0 GPUs
  with no hardware sRGB support. This flag is currently assumed
  to be false (see note above)
- sRGB writes are disabled when entering a functor (WebView).
  This will need to be fixed at some point
- Skia bitmaps are created with the sRGB color space
- Bitmaps using a 565 config are expanded to 888
- Linear blending is disabled when entering a functor
- External textures are not properly sampled (see below)
- Gradients are interpolated in linear space
- Texture-based dithering was replaced with analytical dithering
- Dithering is done in the quantization color space, which is
  why we must do EOCF(OECF(color)+dither)
- Text is now gamma corrected differently depending on the luminance
  of the source pixel. The assumption is that a bright pixel will be
  blended on a dark background and the other way around. The source
  alpha is gamma corrected to thicken dark on bright and thin
  bright on dark to match the intended design of fonts. This also
  matches the behavior of popular design/drawing applications
- Removed the asset atlas. It did not contain anything useful and
  could not be sampled in sRGB without a yet-to-be-defined GL
  extension
- The last column of color matrices is converted to linear space
  because its values are added to linear colors

Missing features:
- Resource qualifier?
- Regeneration of golden images for automated tests
- Handle alpha8/grey8 properly
- Disable sRGB write for layers with external textures

Test: Manual testing while work in progress
Bug: 29940137

Change-Id: I6a07b15ab49b554377cd33a36b6d9971a15e9a0b
2016-10-11 17:47:58 -07:00


/*
 * Copyright (C) 2015 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include "OffscreenBufferPool.h"
#include "Caches.h"
#include "Properties.h"
#include "renderstate/RenderState.h"
#include "utils/FatVector.h"
#include "utils/TraceUtils.h"
#include <utils/Color.h>
#include <utils/Log.h>
#include <GLES2/gl2.h>
namespace android {
namespace uirenderer {
////////////////////////////////////////////////////////////////////////////////
// OffscreenBuffer
////////////////////////////////////////////////////////////////////////////////
OffscreenBuffer::OffscreenBuffer(RenderState& renderState, Caches& caches,
        uint32_t viewportWidth, uint32_t viewportHeight)
        : GpuMemoryTracker(GpuObjectType::OffscreenBuffer)
        , renderState(renderState)
        , viewportWidth(viewportWidth)
        , viewportHeight(viewportHeight)
        , texture(caches) {
    uint32_t width = computeIdealDimension(viewportWidth);
    uint32_t height = computeIdealDimension(viewportHeight);
    ATRACE_FORMAT("Allocate %ux%u HW Layer", width, height);
    caches.textureState().activateTexture(0);
    texture.resize(width, height, caches.rgbaInternalFormat(), GL_RGBA);
    texture.blend = true;
    texture.setWrap(GL_CLAMP_TO_EDGE);
    // not setting filter on texture, since it's set when drawing, based on transform
}

Rect OffscreenBuffer::getTextureCoordinates() {
    const float texX = 1.0f / static_cast<float>(texture.width());
    const float texY = 1.0f / static_cast<float>(texture.height());
    return Rect(0, viewportHeight * texY, viewportWidth * texX, 0);
}

void OffscreenBuffer::dirty(Rect dirtyArea) {
    dirtyArea.doIntersect(0, 0, viewportWidth, viewportHeight);
    if (!dirtyArea.isEmpty()) {
        region.orSelf(android::Rect(dirtyArea.left, dirtyArea.top,
                dirtyArea.right, dirtyArea.bottom));
    }
}

void OffscreenBuffer::updateMeshFromRegion() {
    // avoid T-junctions as they cause artifacts in between the resultant
    // geometry when complex transforms occur.
    // TODO: generate the safeRegion only if necessary based on drawing transform
    Region safeRegion = Region::createTJunctionFreeRegion(region);

    size_t count;
    const android::Rect* rects = safeRegion.getArray(&count);
    const float texX = 1.0f / float(texture.width());
    const float texY = 1.0f / float(texture.height());

    FatVector<TextureVertex, 64> meshVector(count * 4); // uses heap if more than 64 vertices needed
    TextureVertex* mesh = &meshVector[0];
    for (size_t i = 0; i < count; i++) {
        const android::Rect* r = &rects[i];

        const float u1 = r->left * texX;
        const float v1 = (viewportHeight - r->top) * texY;
        const float u2 = r->right * texX;
        const float v2 = (viewportHeight - r->bottom) * texY;

        TextureVertex::set(mesh++, r->left, r->top, u1, v1);
        TextureVertex::set(mesh++, r->right, r->top, u2, v1);
        TextureVertex::set(mesh++, r->left, r->bottom, u1, v2);
        TextureVertex::set(mesh++, r->right, r->bottom, u2, v2);
    }
    elementCount = count * 6;
    renderState.meshState().genOrUpdateMeshBuffer(&vbo,
            sizeof(TextureVertex) * count * 4,
            &meshVector[0],
            GL_DYNAMIC_DRAW); // TODO: GL_STATIC_DRAW if savelayer
}

uint32_t OffscreenBuffer::computeIdealDimension(uint32_t dimension) {
    return uint32_t(ceilf(dimension / float(LAYER_SIZE)) * LAYER_SIZE);
}

OffscreenBuffer::~OffscreenBuffer() {
    ATRACE_FORMAT("Destroy %ux%u HW Layer", texture.width(), texture.height());
    texture.deleteTexture();
    renderState.meshState().deleteMeshBuffer(vbo);
    elementCount = 0;
    vbo = 0;
}
///////////////////////////////////////////////////////////////////////////////
// OffscreenBufferPool
///////////////////////////////////////////////////////////////////////////////
OffscreenBufferPool::OffscreenBufferPool()
        : mMaxSize(Properties::layerPoolSize) {
}

OffscreenBufferPool::~OffscreenBufferPool() {
    clear(); // TODO: unique_ptr?
}

int OffscreenBufferPool::Entry::compare(const Entry& lhs, const Entry& rhs) {
    int deltaInt = int(lhs.width) - int(rhs.width);
    if (deltaInt != 0) return deltaInt;

    return int(lhs.height) - int(rhs.height);
}

void OffscreenBufferPool::clear() {
    for (auto& entry : mPool) {
        delete entry.layer;
    }
    mPool.clear();
    mSize = 0;
}

OffscreenBuffer* OffscreenBufferPool::get(RenderState& renderState,
        const uint32_t width, const uint32_t height) {
    OffscreenBuffer* layer = nullptr;

    Entry entry(width, height);
    auto iter = mPool.find(entry);

    if (iter != mPool.end()) {
        entry = *iter;
        mPool.erase(iter);

        layer = entry.layer;
        layer->viewportWidth = width;
        layer->viewportHeight = height;
        mSize -= layer->getSizeInBytes();
    } else {
        layer = new OffscreenBuffer(renderState, Caches::getInstance(), width, height);
    }

    return layer;
}

OffscreenBuffer* OffscreenBufferPool::resize(OffscreenBuffer* layer,
        const uint32_t width, const uint32_t height) {
    RenderState& renderState = layer->renderState;
    if (layer->texture.width() == OffscreenBuffer::computeIdealDimension(width)
            && layer->texture.height() == OffscreenBuffer::computeIdealDimension(height)) {
        // resize in place
        layer->viewportWidth = width;
        layer->viewportHeight = height;

        // entire area will be repainted (and may be smaller) so clear usage region
        layer->region.clear();
        return layer;
    }

    putOrDelete(layer);
    return get(renderState, width, height);
}

void OffscreenBufferPool::dump() {
    for (auto entry : mPool) {
        ALOGD("  Layer size %dx%d", entry.width, entry.height);
    }
}

void OffscreenBufferPool::putOrDelete(OffscreenBuffer* layer) {
    const uint32_t size = layer->getSizeInBytes();
    // Don't even try to cache a layer that's bigger than the cache
    if (size < mMaxSize) {
        // TODO: Use an LRU
        while (mSize + size > mMaxSize) {
            OffscreenBuffer* victim = mPool.begin()->layer;
            mSize -= victim->getSizeInBytes();
            delete victim;
            mPool.erase(mPool.begin());
        }

        // clear region, since it's no longer valid
        layer->region.clear();
        Entry entry(layer);
        mPool.insert(entry);
        mSize += size;
    } else {
        delete layer;
    }
}
}; // namespace uirenderer
}; // namespace android