Fix an issue with buffer lifecycle management that could cause audio
pops on timed outputs. There were two issues at work here.
1) During trim operations for the queued timed audio data, buffers
were being trimmed based on their starting PTS instead of the PTS at
which the chunk of audio data actually ended. This meant that if a
chunk of audio data was very large (larger than the mixer lead time),
the buffer at the head of the queue could become eligible for trimming
before its data had been completely mixed into the output stream, even
though the output stream was fully buffered and in no danger of
underflow (see the first sketch after this list).
2) The implementation of getNextBuffer and releaseBuffer for timed
audio tracks did not keep any reference to the data that it handed out
to the mixer. The original architecture here seems to have expected a
ring buffer design, but timed audio tracks use a packet-based design.
Pieces of packets are handed out to the mixer, which frequently holds
onto a chunk of data across two mix operations, using the first part
of the chunk to finish one mix buffer and the end of the chunk to
start the next. If the buffer the mixer was holding a piece of got
trimmed before the start of the next mix operation, it would return to
its heap and could be filled with arbitrary data by the time it
actually got mixed. On debug builds, buffers appear to be zeroed out
as they go back to the heap, causing obvious pops in presentation (see
the second sketch below).
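For reference, the end-of-chunk trim check called for by issue 1 looks
roughly like the sketch below. The names, the microsecond PTS units,
and the sampleRate parameter are hypothetical, not the actual track
code:

    #include <cstdint>

    struct TimedChunk {
        int64_t  startPts;    // presentation time of the first frame
        uint32_t frameCount;  // frames of audio in this chunk
    };

    // A chunk may be trimmed only once the PTS of its *last* frame has
    // passed the trim threshold. Comparing startPts alone lets a large,
    // partially mixed chunk be reclaimed too early.
    static bool isTrimEligible(const TimedChunk& chunk,
                               uint32_t sampleRate,
                               int64_t trimThresholdPts /* usec */) {
        int64_t durationPts =
            (static_cast<int64_t>(chunk.frameCount) * 1000000LL) / sampleRate;
        return (chunk.startPts + durationPts) < trimThresholdPts;
    }
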
This change addresses both issues. Trim operations are now based on
the ending presentation time of a chunk of audio, not the start. Also,
when the head of the queue is in flight to the mixer, it can no longer
be trimmed immediately; it is merely flagged for trim and is actually
released when the mixer finally calls releaseBuffer.
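The second half of the change can be pictured with the sketch below.
All names are hypothetical and std::shared_ptr stands in for whatever
strong reference the real code keeps: getNextBuffer pins the head of
the queue, a trim of an in-flight head only sets a flag, and
releaseBuffer applies the deferred trim.

    #include <deque>
    #include <memory>

    struct TimedChunk { /* one queued packet of timed audio */ };

    class TimedQueueSketch {
        std::deque<std::shared_ptr<TimedChunk>> mQueue;
        std::shared_ptr<TimedChunk> mInFlight;   // piece lent to the mixer
        bool mTrimHeadOnRelease = false;         // deferred trim flag

    public:
        // Mixer asks for data: pin the head so a trim cannot free it
        // while the mixer still holds a pointer into it.
        std::shared_ptr<TimedChunk> getNextBuffer() {
            if (mQueue.empty()) return nullptr;
            mInFlight = mQueue.front();
            return mInFlight;
        }

        // Trim path: if the head is in flight, only set a flag;
        // otherwise it is safe to drop it immediately.
        void trimHead() {
            if (mQueue.empty()) return;
            if (mQueue.front() == mInFlight) {
                mTrimHeadOnRelease = true;
            } else {
                mQueue.pop_front();
            }
        }

        // Mixer is done with the piece it was holding: apply any
        // deferred trim, then drop our reference.
        void releaseBuffer() {
            if (mTrimHeadOnRelease && !mQueue.empty()) {
                mQueue.pop_front();
                mTrimHeadOnRelease = false;
            }
            mInFlight.reset();
        }
    };
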
Signed-off-by: John Grossman <johngro@google.com>
Change-Id: Ia1ba08cb9dea35a698723ab2d9bcbf804f1682fe