* commit '628ba035b3c0636826f3eee3ae78da628cc1a8c8':
AsyncChannel to support remote death notification and post a disconnect message to the source handler.
* commit '1612e29826dfe55f8deca27374046c5931ce5335':
AsyncChannel to support remote death notification and post a disconnect message to the source handler.
Fixes the following problems with the safe headphone volume warning message:
- Do not display the warning dialog when the screen is off.
- Use the same 3 second timeout as for the volume slider to dismiss the dialog.
- Do not dismiss the warning dialog when touching outside of the slider window
but inside the warning window.
- Disable the volume slider when the warning message is displayed.
- When setting the volume directly (touching the volume slider) while the warning
is displayed, save the requested volume and apply it if acknowledged by the user.
- Do not display the warning message when restoring safe volume after 20h of
cumulative listening.
Bug 7658641.
Change-Id: Ib3d1315193a433dad918aa5df78fa071062b2394
The behavior of mLinkUp was previously changed so that it only
represents the link state. This change fixes
interfaceLinkStateChanged() so that it triggers a reconnect based
only on the present link state.
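A minimal sketch of the corrected handling (field and helper names here are
hypothetical, not the actual EthernetDataTracker code):

    public void interfaceLinkStateChanged(String iface, boolean up) {
        if (!mIface.equals(iface)) return;  // ignore events for other interfaces
        mLinkUp = up;                       // mirror only the raw link state
        if (up) {
            reconnect();                    // hypothetical helper: restart the connection
        } else {
            disconnect();                   // hypothetical helper: tear the connection down
        }
    }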
Change-Id: I950e04e1f5b5019d59d3b9530f369f8b8f13134a
Signed-off-by: Adam Hampson <ahampson@google.com>
If you install a lockscreen widget app on a secondary user, the lockscreen fails to find it.
There were several places where the correct context and userId were required under the
covers - AppWidgetHost, AppWidgetHostView and RemoteViewsAdapter.
Set the user id in the required places and use it to query the package information.
Bug: 7662835
Change-Id: Ife482c8ab2a2e601650b7cfe2660e88d3b8f2050
If a rotation occurred while the electron beam surface was showing,
the surface may have appeared in the wrong orientation. We fix this
problem by adjusting the transformation matrix of the electron beam
surface according to the display orientation whenever a display
transaction occurs.
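As an illustration, one way to express the per-rotation transform (a sketch only;
the actual sign conventions and position offsets depend on how the electron beam
surface was captured):

    import android.view.Surface;

    // Returns {dsdx, dtdx, dsdy, dtdy} to counter-rotate the surface so it
    // appears upright for the given display rotation.
    static float[] matrixForRotation(int rotation) {
        switch (rotation) {
            case Surface.ROTATION_90:  return new float[] { 0, -1,  1,  0 };
            case Surface.ROTATION_180: return new float[] {-1,  0,  0, -1 };
            case Surface.ROTATION_270: return new float[] { 0,  1, -1,  0 };
            default:                   return new float[] { 1,  0,  0,  1 };
        }
    }

The resulting values would be applied to the surface inside the display transaction,
together with a matching position offset so the surface still covers the display.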
The rotation itself is allowed to proceed but it is not visible
to the user. We must let this happen so that the lock screen
is correctly oriented when the screen is turned back on.
Note that the electron beam surface serves two purposes.
First, it is used to play the screen off animation.
When the animation is finished, the surface remains visible but is
solid black. Then we turn the screen off.
Second, when we turn the screen back on we leave the electron beam
surface showing until the window manager is ready to show the
new content. This prevents the user from seeing a flash of the
old content while the screen is being turned on. When everything is
ready, we dismiss the electron beam.
It's important for the electron beam to remain visible for
the entire duration from just before the screen is turned off until
after the screen is turned on and is ready to be seen. This is
why we cannot fix the bug by deferring rotation or otherwise
getting in the way of the window manager doing what it needs
to do to get the screen ready when the screen is turned on again.
Bug: 7479740
Change-Id: I2fcf35114ad9b2e00fdfc67793be6df62c8dc4c3
Apps detaching/attaching large subtrees would waste a few milliseconds
dealing with dirty display lists. This change removes the need to do
ArrayList.remove() on every detachedFromWindow().
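Roughly, the pattern is to stop doing per-view removal and instead sweep the list
in bulk later (a sketch with hypothetical names, not the actual rendering code):

    import java.util.ArrayList;

    class DirtyDisplayLists {
        private final ArrayList<Object> mDirty = new ArrayList<Object>();

        void onDetachedFromWindow(Object displayList) {
            // Intentionally no mDirty.remove(displayList): detaching a large
            // subtree no longer pays an O(n) removal per view.
        }

        void flush() {
            for (Object dl : mDirty) {
                // rebuild or discard the display list as needed
            }
            mDirty.clear();  // drop everything in one bulk operation
        }
    }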
Change-Id: Icee72516c40d48ff0fd9d6f3128589f99bf61428
Sometimes a client needs to hold onto an accessibility node info and
this info may become stale. The client has to be able to
request a refresh of the info. This change adds a refresh call to
AccessibilityNodeInfo.
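A usage sketch of the new call from a client's perspective (assuming refresh()
reports whether the info could be re-synced with its source view):

    import android.view.accessibility.AccessibilityNodeInfo;

    void clickIfStillValid(AccessibilityNodeInfo info) {
        if (info.refresh()) {
            // The info now reflects the current state of the source view.
            info.performAction(AccessibilityNodeInfo.ACTION_CLICK);
        } else {
            // The source view is gone; the held info is stale for good.
            info.recycle();
        }
    }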
bug:6711796
Change-Id: I580a9a5d9fd1f705ea0a2cf4d3ff65543714c9c3
There are two things going on here:
(1) For secondary users, sometimes theme information such as whether
the window is full screen opaque was not being retrieved, so the window
manager didn't know that it could hide the windows behind the app.
This would just be a performance problem, except that:
(2) There appear to be a number of applications that declare that they
are full screen opaque, when in fact they are not. Instead they are
using window surfaces with an alpha channel, and setting some pixels
in their window to a non-opaque alpha level. This will allow you to
see whatever is behind the app. If the system happens to completely
remove the windows behind the app, and somebody is filling the frame
buffer with black, then you will see what the app intends -- those
parts of its UI blended with black. If one of those cases doesn't
hold (and though we have never guaranteed they would, in practice this
is generally what happens), then you will see something else.
At any rate, if only for performance reasons, we need to
fix issue #1.
It turns out what is happening here is that the AttributeCache used
by the activity manager and window manager to retrieve theme and other
information about applications has not yet been updated for multi-user.
One of the things we retrieve from this is the theme information telling
the window manager whether an application's window should be treated
as full screen opaque, allowing it to hide any windows behind it. In
the current implementation, the AttributeCache always retrieves this
information about the application as the primary user (user 0).
So, if an application is installed for a secondary user but not for the
primary user, then when the AttributeCache tries to retrieve the requested
information for it, the application appears not installed from the primary
user's perspective, and the information cannot be retrieved.
The change here makes AttributeCache multi-user aware, keeping all of its
data separately per-user, and requiring that callers now provide the user
they want to retrieve information for. Activity manager and window manager
are updated to be able to pass in the user when needed. This required some
fiddling of the window manager to have that information available -- in
particular it needs to be associated with the AppWindowToken.
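The per-user separation amounts to keying the cache by userId first (a rough
sketch; the real AttributeCache entries and locking are more involved):

    import android.util.SparseArray;
    import java.util.HashMap;

    class PerUserCache {
        // userId -> (packageName -> cached package/theme data)
        private final SparseArray<HashMap<String, Object>> mPackagesPerUser =
                new SparseArray<HashMap<String, Object>>();

        Object getEntry(int userId, String packageName) {
            HashMap<String, Object> packages = mPackagesPerUser.get(userId);
            if (packages == null) {
                packages = new HashMap<String, Object>();
                mPackagesPerUser.put(userId, packages);
            }
            // Callers must now say which user they are asking about.
            return packages.get(packageName);
        }
    }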
Change-Id: I4b50b4b3a41bab9d4689e61f3584778e451343c8
Bug: 7660973
RemoteViewsAdapter will now store the userId as part of the cache key
when caching remote views to optimize for orientation changes.
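In effect the cache key becomes a (filter, widgetId, userId) triple, something
along these lines (a sketch; field names are illustrative):

    import android.content.Intent;

    final class RemoteViewsCacheKey {
        final Intent.FilterComparison filter;
        final int widgetId;
        final int userId;

        RemoteViewsCacheKey(Intent.FilterComparison filter, int widgetId, int userId) {
            this.filter = filter;
            this.widgetId = widgetId;
            this.userId = userId;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof RemoteViewsCacheKey)) return false;
            RemoteViewsCacheKey k = (RemoteViewsCacheKey) o;
            return k.filter.equals(filter) && k.widgetId == widgetId && k.userId == userId;
        }

        @Override public int hashCode() {
            return filter.hashCode() ^ (widgetId << 2) ^ (userId << 10);
        }
    }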
Change-Id: I7c4e52b3995d4f56ebfa35aa9516327e182ad892
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it dilutes
the window manager interface and allows code that is deeply coupled with the window
manager to reside outside of it. Also the observer callbacks were IPCs which cannot
be called with the window manager's lock held. To avoid that the window manager had
to post messages requesting notification of interested parties, which makes the code
consuming the callbacks run asynchronously from the window manager. This causes timing
issues and adds unnecessary complexity.
Now the magnification logic is split into two halves. The first half tracks the
magnified portion of the screen and acts as the policy deciding which windows can be
magnified; it is part of the window manager. This part exposes higher level APIs
allowing interested parties with the right permissions to control the magnification
of a given display. The APIs also allow a client to register for callbacks on
interesting changes such as a resize of the magnified region, etc. This part serves
as a mediator between magnification controllers and the window manager.
The second half is a controller that drives the magnification state based on touch
interactions. It also presents a highlight when magnified to indicate the magnified
portion of the screen. The controller is responsible for automatically zooming out
when the user context changes - rotation, a new activity. The controller also auto
pans if a dialog appears that does not intersect the magnified frame.
bug:7410464
2. By design screen magnification and touch exploration work separately and together. If
magnification is enabled the user sees a larger version of the widgets and a sub section
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when magnification state changes we have to clear these caches since the
bounds of the cached infos no longer reflect the screen content which just got smaller
or larger.
This patch propagates not only the window scale as before but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds match what the user sees, not what the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axis. Also
recomputed is the isVisibleToUser property of the reported info since in a magnified
state the user sees a subset of the window content and the views not in the magnified
viewport should be reported as not visible to the user.
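The adjustment itself is conceptually scale-then-pan plus a viewport test (a sketch
with assumed sign conventions, not the exact framework code):

    import android.graphics.Rect;
    import android.graphics.RectF;

    class MagnificationAdjuster {
        // Map bounds-in-screen through the magnification scale and X/Y pan.
        static void adjustBounds(Rect bounds, float scale, float panX, float panY) {
            RectF scaled = new RectF(bounds);
            scaled.left *= scale;
            scaled.top *= scale;
            scaled.right *= scale;
            scaled.bottom *= scale;
            scaled.offset(panX, panY);
            scaled.round(bounds);
        }

        // A view outside the magnified viewport is reported as not visible.
        static boolean isVisibleToUser(Rect adjustedBounds, Rect magnifiedViewport) {
            return Rect.intersects(adjustedBounds, magnifiedViewport);
        }
    }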
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
An earlier fix to allow requestLayout() to be called during layout
didn't handle some of the requests properly, leaving some nodes
stranded with layout requests that didn't propagate all the way
up the hierarchy. The fix is to do, on the second layout pass, exactly
what we do in a normal layout pass: run measure, then layout.
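In other words, the second pass is handled like a regular traversal rather than a
layout-only pass, along these lines (much simplified, with hypothetical plumbing
around the real ViewRootImpl logic):

    import android.view.View;

    void runSecondLayoutPass(View root, int widthSpec, int heightSpec) {
        // Same order as a normal pass: measure first, then layout, so views
        // that called requestLayout() during layout are fully handled.
        root.measure(widthSpec, heightSpec);
        root.layout(0, 0, root.getMeasuredWidth(), root.getMeasuredHeight());
    }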
Issue #7657033 Checkboxes not being updated immediately in Settings
Change-Id: I90be3992d3441e8f43629d26c8386c81a7c31482