On eng builds we have an event consistency verifier to log any
inconsistent event stream states due to mishandling of intercepted
events by an accessibility service. On non-eng builds this verifier
is null, and a null check was missing.
bug:8616711
Change-Id: Ib083a405dfa8340025090a65e50155eb10526a90
Now that we have gestures which are detected by the system and
interpreted by an accessibility service, there is inconsistent
behavior between using the gestures and using the keyboard. Some devices
have both. Therefore, an accessibility service should be able to
interpret keys in addition to gestures to provide a consistent user
experience. Now an accessibility service can expose shortcuts for
each gestural action.
This change adds APIs for an accessibility service to observe and
intercept at will key events before they are dispatched to the
rest of the system. The service can return true or false from its
onKeyEvent to either consume the event or to let it be delivered
to the rest of the system. However, the service will *not* be
able to inject key events or modify the observed ones.
Previous ideas of allowing the service to say it "tracks" an event,
so that the event is not delivered to the system until a subsequent
event is marked as either "handled" or "not handled", will not work.
If the service tracks a key but no other key is pressed, this key is
essentially not delivered to the app, and at a potentially much later
point the stashed event would be delivered in a completely different
context. The correct way of implementing shortcuts is a combination
of modifier keys plus some other key or key sequence. Key events
already contain information about which modifier keys are down, and
the service can track them as well.
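For illustration, a service could use the new callback roughly as in the
sketch below; the key combination, the global action, and the assumption
that the service has requested key event filtering in its configuration
are all made up for the example.

    import android.accessibilityservice.AccessibilityService;
    import android.view.KeyEvent;
    import android.view.accessibility.AccessibilityEvent;

    public class ShortcutObserverService extends AccessibilityService {
        @Override
        protected boolean onKeyEvent(KeyEvent event) {
            // Treat ALT + H as a shortcut for "go home" (purely illustrative).
            // Returning true consumes the event so it is not delivered to the
            // rest of the system; returning false lets it be delivered as usual.
            if (event.getAction() == KeyEvent.ACTION_DOWN
                    && event.isAltPressed()
                    && event.getKeyCode() == KeyEvent.KEYCODE_H) {
                performGlobalAction(GLOBAL_ACTION_HOME);
                return true;
            }
            return false;
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) { /* not used here */ }

        @Override
        public void onInterrupt() { /* not used here */ }
    }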
bug:8088812
Change-Id: I81ba9a7de9f19ca6662661f27fdc852323e38c00
If no accessibility services are enabled, we disable the
firing of accessibility events to save resources. When the last
such service was disabled the system was not unbinding from it. As
a result the user was seeing the touch exploration enable
dialog even though the service that requested it was disabled.
Also, the system stayed bound to one service that was not used.
bug:8439191
Change-Id: I6f37f2573a815bfb29870298aa0abbb1fa105588
UiAutomation registers a fake accessibility service to introspect
the screen. Upon the death of the shell process that started an
instrumentation in which a UiAutomation resides, the connection
between the UiAutomation and the system is kept alive, allowing
the instrumentation to introspect the screen even after the death
of the shell process.
bug:8285905
Change-Id: I1a16d78abbea032116c4baed175cfc0d5dedbf0c
If an accessibility service is connected but was already removed
from the list of connecting services, we get an NPE since the
call to set the service connection is made over a null
remote interface. Note that service connection is asynchronous.
bug:8229877
Change-Id: I7b05f219dd0c1da6286ee4ec90b1ef310908545d
When an accessibility service connects we get a callback in
which we either add the service, if this service is in the list
of connecting services (we still want the service to connect),
or we unbind and clear the state, if the service is no longer in
the list of connecting services (we do not want this service to
connect because something changed between the bind request and
the connection callback).
The problem is that when the service connects and is not in
the list of connecting services, we called the clean-up code
before the connection was complete. However, the clean-up code
expects fully configured services. Now we fully connect the
service and, in case there is a problem, disconnect it.
bug:8232627
Change-Id: I939e544e31ffc1406035265a012c180f2ca95d7c
On user switch the transient state of the old user was not cleared,
which means that when we switch back to this user the operational
state, such as which event types were dispatched, what state was sent
to local managers, etc., is stale. This leads to semi-updated state
and broken behavior. Now, when the user becomes inactive, we clear
all transient state; it will be recreated when the user becomes
active again.
bug:8196652
Change-Id: Ie9e0d712b6d567e5074b328f1bb87afaa5395c06
The UI test automation service was not removed from the list of
enabled and installed services, where it was explicitly added on
registration. This was leaving the accessibility manager service
in an inconsistent state.
bug:8185435
Change-Id: Ice17cdef361fe98ce34f8dd01ec11dbad6c4d0c2
1. The accessibility manager service updates its internal state
based on which settings are enabled, what accessibility services
are installed and what features are requested by the enabled
services. It was trying to do the minimal amount of work to
react to contextual changes like these, which resulted in missed
cases and complex code. Now there is a single method that reads
the contextual information and a single method that reacts to
contextual changes. This makes the code much easier to maintain.
2. The accessibility manager service was not updating its internal
state when the features requested by accessibility services change.
It was relying on changing system settings and reacting to the
settings change. This is problematic since the internal state is
not updated atomically, which leads to race condition bugs. For
example, if touch exploration is enabled and a service requests
that it be disabled, the internal state is not updated but a
request for a settings change is made. Now, while the settings
change is propagating, another request from the same service
comes in to enable touch exploration, but the system incorrectly
thinks touch exploration is still enabled. In the end the feature is
disabled even though it was requested.
3. Fixed a potential NPE if the accessibility input filter's event
handler was nullified between processing two event batches.
4. Fixed a bug where, if magnification is enabled, it does not work
on the settings screen since the magnified bounds are not pushed
from the window manager to the accessibility manager.
Change-Id: Idf629a06480e12f0d88372762df6c024fe0d7856
We use an input filter to manipulate the event stream in accessibility
mode. Some input events, i.e. touch and hover events, are delivered
to a touch explorer, if touch exploration is enabled, and to a magnifier,
if screen magnification is enabled. It is possible that, at the moment
one of these features is enabled, we are in the middle of a touch or
hover gesture. The touch explorer and screen magnifier expect to receive
an event stream that starts with an event that denotes the stream start.
This change ensures that hover or touch events are dispatched to the
touch explorer and the magnifier only after the start of the first
well-formed hover or touch sequence.
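The gating idea is roughly the following; the helper below is hypothetical
and stands in for the logic inside the accessibility input filter.

    import android.view.MotionEvent;

    /** Hypothetical gate that forwards events only after a well-formed sequence start. */
    class EventStreamGate {
        private boolean mInSequence;

        /** Returns true if the event may be dispatched to the transformations. */
        boolean shouldDispatch(MotionEvent event) {
            final int action = event.getActionMasked();
            if (action == MotionEvent.ACTION_DOWN
                    || action == MotionEvent.ACTION_HOVER_ENTER) {
                // A new touch or hover sequence starts here.
                mInSequence = true;
            }
            final boolean dispatch = mInSequence;
            if (action == MotionEvent.ACTION_UP
                    || action == MotionEvent.ACTION_CANCEL
                    || action == MotionEvent.ACTION_HOVER_EXIT) {
                // The sequence is over; wait for the next well-formed start.
                mInSequence = false;
            }
            return dispatch;
        }
    }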
Change-Id: I8cd0ad8e1844c59fd55cf1dfacfb79af6a8916df
Currently text editing is pretty hard (certain operations even
impossible) for a blind person. To address the issue this change
adds APIs that enable an accessibility service to perform basic
text editing operations such as copy, paste, cut, set selection,
extend selection while moving at a given granularity.
The new APIs enable an accessibility service to expose a gesture
driven efficient text editing facility.
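A minimal sketch of the intended usage from a service, assuming action and
argument constants on AccessibilityNodeInfo along the lines shown below
(treat the exact names as assumptions):

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    class TextEditingExample {
        // Select the given range in a text node and copy it to the clipboard.
        static void selectAndCopy(AccessibilityNodeInfo text, int start, int end) {
            Bundle args = new Bundle();
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_START_INT, start);
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_END_INT, end);
            text.performAction(AccessibilityNodeInfo.ACTION_SET_SELECTION, args);
            text.performAction(AccessibilityNodeInfo.ACTION_COPY);
        }
    }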
bug:8098384
Change-Id: I82b200138a3fdf4c0c316b774fc08a096ced29d0
Currently we have an "enhance web accessibility" setting that has to be
enabled to make sure web content is accessible. We added the setting to
get user consent because we are injecting a JavaScript-based screen reader
pulled from the Google infrastructure. However, many users do not know
that and (as expected) do not read the user documentation, resulting in
criticism for lacking accessibility support in WebViews with JavaScript
enabled (Browser, Gmail, etc.).
To smooth the user experience, "enhance web accessibility" is now a
feature an accessibility plug-in can request, similarly to explore by
touch. Now a user does not need to know that she has to explicitly
enable the setting and web accessibility will work out-of-the-box.
Before, we were showing a dialog when a plug-in tried to put the device
into touch exploration mode. However, now that we have one more feature
a plug-in can request, showing two dialogs (assuming a plug-in wants both
features) would mean that a user potentially has to deal with three
dialogs, one for enabling the service and one for each feature. We
could merge the dialogs, but the user would still have to poke two dialogs.
It seems that the permission mechanism is a perfect fit for getting
user permission for an app to do something, in this case to enable
an accessibility feature. We need a separate permission for explore
by touch and enhance web accessibility since the former changes the
interaction model and the latter injects JavaScript in web pages. It
is critical to get user consent for the script injection part so we
need a well-documented permission rather than a vague umbrella permission
for poking accessibility features. To allow better grouping of the
accessibility permissions this patch adds a permission group as well.
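Conceptually, the gate on the manager side could look like the sketch below;
the permission string is an assumption used only for illustration.

    import android.content.pm.PackageManager;

    class FeaturePermissionCheck {
        // Honor a service's request for touch exploration only if its package
        // holds the dedicated permission (permission name assumed for illustration).
        static boolean canRequestTouchExploration(PackageManager pm, String servicePackage) {
            return pm.checkPermission(
                    "android.permission.CAN_REQUEST_TOUCH_EXPLORATION_MODE",
                    servicePackage) == PackageManager.PERMISSION_GRANTED;
        }
    }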
bug:8089372
Change-Id: Ic125514c34f191aea0416a469e4b3481ab3200b9
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span across application
boundaries. UI automation APIs are encapsulated in the UiAutomation object
that is provided by an Instrumentation object. It is initialized by the
system and can be used for both introspecting the screen and performing
interactions simulating a user. UI tests are normal instrumentation tests
and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a shell
command, the shell program launching the tests creates a delegate object and
passes it as an argument to the started instrumentation. This delegate
allows the APK that runs the tests to access some privileged operations
protected by signature-level permissions which are explicitly granted
to the shell user.
The UiAutomation object also supports running tests in the legacy way
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working while the new ones should be implemented
using the new APIs. The UiAutomation object exposes lower level APIs which
allow simulation of arbitrary user interactions and writing complete UI test
cases. Clients, such as UiAutomator, are encouraged to implement higher-
level APIs which minimize development effort and can be used as a helper
library by the test developer.
The benefit of this change is decoupling UiAutomator from the system
since the former was calling hidden APIs which required it to be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. Also, UiAutomator was creating
additional API surface in the system image. Another benefit of the new
design is that now test cases have access to a context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third parties can develop their own higher-level test APIs on top
of the lower-level ones exposed by UiAutomation.
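A minimal example of the intended usage from an instrumentation test; the
accessor and introspection method names reflect my reading of the new
surface, so treat the snippet as a sketch rather than the definitive API.

    import android.app.UiAutomation;
    import android.test.InstrumentationTestCase;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class ScreenIntrospectionTest extends InstrumentationTestCase {
        public void testActiveWindowHasContent() {
            // The UiAutomation object is provided by the Instrumentation.
            UiAutomation uiAutomation = getInstrumentation().getUiAutomation();
            // Introspect the screen through the accessibility APIs.
            AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();
            assertNotNull(root);
        }
    }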
bug:8028258
This change also adds the fully qualified resource name of the view's
id to the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is relatively more stable
compared to the generated id number, which may change from one build to
another. This API facilitates reusing the already defined ids for UI
automation.
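For example, a test could look up a node directly by its fully qualified id;
the lookup method name and the id string below are illustrative assumptions.

    import java.util.List;
    import android.app.UiAutomation;
    import android.view.accessibility.AccessibilityNodeInfo;

    class FindByIdExample {
        // Find nodes by the fully qualified resource name of their view id.
        static List<AccessibilityNodeInfo> findSubmitButtons(UiAutomation uiAutomation) {
            return uiAutomation.getRootInActiveWindow()
                    .findAccessibilityNodeInfosByViewId("com.example.app:id/submit");
        }
    }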
bug:7678973
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
1. This patch takes care of the case where a magnified window is covering an unmagnified
one. One example is a dialog that covers the IME window.
bug:7634430
2. Ensuring that the UI automator tool can connect and correctly dump the screen.
bug:7694696
3. Removed the partial implementation for multi-display magnification. It adds
unnecessary complexity since it cannot be implemented without support for
input from multiple screens. We will revisit when necessary.
4. Moved the magnified border window to be a surface in the window manager.
5. Moved the mediator APIs onto the window manager and the policy methods onto the
WindowManagerPolicy.
6. Implemented batch event processing for the accessibility input filter.
Change-Id: I4ebf68b94fb07201e124794f69611ece388ec116
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it diluted
the window manager interface and allowed code that is deeply coupled with the window
manager to reside outside of it. Also, the observer callbacks were IPCs which cannot
be called with the window manager's lock held. To avoid that, the window manager had
to post messages requesting notification of interested parties, which made the code
consuming the callbacks run asynchronously from the window manager. This caused timing
issues and added unnecessary complexity.
Now the magnification logic is split into two halves. The first half is responsible
for tracking the magnified portion of the screen and serves as a policy for which windows
can be magnified; it is a part of the window manager. This part exposes higher-level APIs
allowing interested parties with the right permissions to control the magnification
of a given display. The APIs also allow a client to be registered for callbacks on
interesting changes, such as a resize of the magnified region, etc. This part serves
as a mediator between magnification controllers and the window manager.
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
indicate the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, a new activity. The controller
also auto pans if a dialog appears and does not intersect the magnified frame.
bug:7410464
2. By design screen magnification and touch exploration work separately and together. If
magnification is enabled the user sees a larger version of the widgets and a subsection
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when magnification state changes we have to clear these caches since the
bounds of the cached infos no longer reflect the screen content which just got smaller
or larger.
This patch propagates not only the window scale as before but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds are the same as what the user sees, not what the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axis. Also
recomputed is the isVisibleToUser property of the reported info since in a magnified
state the user sees a subset of the window content and the views not in the magnified
viewport should be reported as not visible to the user.
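The adjustment itself is roughly the following; the helper and the way scale
and pan are delivered to it are simplifications for illustration only.

    import android.graphics.Rect;

    class MagnificationBoundsAdjuster {
        // Map a node's bounds from what the app reports to what the user actually
        // sees under magnification: scale the coordinates and apply the X/Y pan.
        static void adjust(Rect bounds, float scale, float panX, float panY) {
            bounds.left   = (int) (bounds.left   * scale + panX);
            bounds.top    = (int) (bounds.top    * scale + panY);
            bounds.right  = (int) (bounds.right  * scale + panX);
            bounds.bottom = (int) (bounds.bottom * scale + panY);
        }
    }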
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
1. In touch exploration mode the system clicks in the center of the
accessibility focus rectangle. However, if this rectangle is only
partially shown on the window or on the screen the system may not
be able to perform the click, if the accessibility focus center
is not on the screen, or may click on the wrong window, if the accessibility
focus center is outside of the window.
This change clips the rectangle to the window bounds and the
display bounds. This ensures no clicks are sent to the wrong
window and no clicks are sent outside of the screen.
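The clipping step amounts to the following minimal sketch; the surrounding
plumbing is omitted and the helper is hypothetical.

    import android.graphics.Rect;

    class ClickBoundsClipper {
        // Clip the accessibility focus bounds to the window and then to the display,
        // so a click at the center of the remaining rectangle cannot land on the
        // wrong window or outside the screen. Returns false if nothing is left.
        static boolean clip(Rect focusBounds, Rect windowBounds, Rect displayBounds) {
            return focusBounds.intersect(windowBounds)
                    && focusBounds.intersect(displayBounds);
        }
    }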
bug:7453839
Change-Id: I79f98971e7ebcbb391c37284467dc76076172c5f
1. The accessibility layer has a back door for UI test automation code running
from the shell to attach. The unregister code does an incorrect identity check,
and as a result the registered UI test automation service is not disconnected
until its process is killed. The fix is super safe and simple.
bug:7409261
Change-Id: I4b1da18be6c5619dadd4a58fca6724529bc59dea
1. We cache some events to see if the user wants to trigger magnification. If no
magnification is triggered we inject these events with adjusted time and down
time to prevent subsequent transformations from being confused by stale events.
After the cached events, which always include a down, are injected we need to also
update the down time of all subsequent non-cached events.
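The adjustment boils down to rebuilding each stashed event with shifted times,
roughly as below (single-pointer case only, for illustration).

    import android.view.MotionEvent;

    class EventTimeAdjuster {
        // Rebuild a cached single-pointer event with its down time and event time
        // shifted by the interval it spent in the cache, so later transformations
        // (e.g. tap detection) do not see an artificially long delay.
        static MotionEvent withAdjustedTimes(MotionEvent cached, long offsetMillis) {
            return MotionEvent.obtain(
                    cached.getDownTime() + offsetMillis,
                    cached.getEventTime() + offsetMillis,
                    cached.getAction(),
                    cached.getX(),
                    cached.getY(),
                    cached.getMetaState());
        }
    }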
bug:7379388
Change-Id: I41d8b831cc1016a0ee8f9c5ef5f42eb60a6f64d9
1. When magnified, if a dialog that pops up is wider than the screen we pan to its upper
left corner. We now show the upper right corner if the locale direction is RTL.
2. Keyguard dialogs are not centered since they are used as a sign to recompute the
magnified area, but an unnecessary else statement prevents such dialogs from being
properly shown via a pan.
bug:7374331
Change-Id: I285e46b822a29f0082c502cb642f9da32dabfe6a
1. In the magnifier we are caching the touch events until we figure
out whether the user is triple tapping to enable magnification.
If the user is not trying to engage magnification we deliver the
stashed events. However, these events are stale and the subsequent
transformations such as the touch explorer get confused when trying
to detect a tap since the delay is longer than the tap slop.
This change compensates for the time the events were cached
before sending them to the next transformation in the chain.
bug:7362365
Change-Id: Idd8539ffed7ba4892c5a916bd34910fd2ef50f75
1. Sometimes unlocking the device when the IME is up and triple tapping on the keyboard
toggles screen magnification. The core reason is that when the keyguard window is
shown we hide all other windows, and when it is hidden we show these windows. We did
not notify the screen magnifier about windows being shown and hidden. Also, when the
windows are shown we may reassign layers to put the IME or the wallpaper in the
right Z order. The screen magnifier is now notified upon such layer reassignment
since window layers are used when computing the magnified region.
bug:7351531
Change-Id: I0931f4ba6cfa565d8eb1e3c432268ba1818feea6
1. We auto pan when certain types of windows pop up to make sure the user
knows about the context change. This does not happen, however, for
a fragment dialog since its window type is not in the list of ones
we auto pan for. Updating the window type list.
bug:7332090
Change-Id: I9b097c57df929d2e4e807a948c3a0540f4092a76
1. If a bad magnification scale is persisted, i.e. it is
not between the min and max, the screen magnifier gets
into a bad state which even a reboot does not fix since
the scale is persisted in settings.
This change ensures that only valid scales are persisted.
In general we should not even attempt to persist a bad value,
but at this point this is the safest change.
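The guard amounts to clamping before the write to settings; the limits below
are placeholders for the real min and max.

    class ScalePersister {
        static final float MIN_SCALE = 1.0f;  // illustrative lower bound
        static final float MAX_SCALE = 5.0f;  // illustrative upper bound

        // Clamp the requested scale into the valid range before persisting it.
        static float clampForPersisting(float requestedScale) {
            return Math.max(MIN_SCALE, Math.min(MAX_SCALE, requestedScale));
        }
    }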
bug:7288239
Change-Id: I3e9c7c091772fa64128ab8403c2127ce65cb94b8
1. The active window is the one that the user touches or the one
that has input focus. We recognize the user touching a window
by the received accessibility hover events and the user not
touching the screen by a call from the touch explorer. It is
possible that the user touches a window that does not have
input focus; as soon as he lifts his finger the active one
becomes the window that has input focus, but then we get
the hover accessibility events from the touched window, which
incorrectly change the active window back to the touched one.
Note that at this point the user is not touching the screen.
bug:7298484
Change-Id: Ife035a798a6e68133f9220eeeabdfcd35a431b56
1. We are showing a warning dialog if the user enables an accessibility
service that requests explore by touch. This dialog was shown only
for the owner but should be shown for the current user.
bug:7304437
Change-Id: I692b5112df16405e6d2e4890aafbfde79981f973
1. The reason is that the screen magnifier computes that the whole
screen is not magnifiable. The miscalculation was caused by
an incorrect assumption that the non-magnified area is only at
the bottom. In fact, on a phone in landscape the non-magnified
area is both on the right and at the bottom. This change adds
a correct algorithm for computing the magnified region (a sketch
of the idea follows this list).
2. Increasing the delay for computing the magnified area when the
keyguard goes away to allow all windows hidden by the keyguard
to be shown. On rare occasions the previous delay was not long
enough, resulting in a state where the keyboard is considered
a part of the magnified region.
3. Removed some dead code.
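The sketch referenced in item 1: start from the whole screen and subtract
every non-magnifiable area, wherever it happens to be. The helper and its
inputs are simplified for illustration.

    import android.graphics.Rect;
    import android.graphics.Region;

    class MagnifiedRegionComputer {
        // Start from the whole screen and subtract every area covered by a window
        // that must not be magnified; what remains is the magnified region. On a
        // phone in landscape this correctly handles strips on the right and bottom.
        static Region compute(Rect screen, Iterable<Rect> nonMagnifiableAreas) {
            Region magnified = new Region(screen);
            for (Rect area : nonMagnifiableAreas) {
                magnified.op(area, Region.Op.DIFFERENCE);
            }
            return magnified;
        }
    }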
bug:7293097
Change-Id: Ic5ff91977df8bcf4afd77071685c3eb20555d4f3
1. The active window is the one the user is touching or the one
that has input focus. It has to be made current immediately
after the user has stopped touching the screen because if the
user types with the IME he should get feedback for the
letter typed in the text view which is in the input-focused
window. Note that we always deliver hover accessibility events
(they are a result of the user touching the screen), so changing
the active window before all hover accessibility events from
the touched window are delivered is fine.
bug:7296890
Change-Id: I1ae87c8419e2f19bd8eb68de084c7117c66894bc
Created a new flag that indicates that a window should be shown
to all users. For the flag to be valid the owner of the window
must have system permissions.
Also separated system window types into those that show to all
users (e.g. StatusBar, Keyguard, ...) and those that appear only
to the owning user (e.g. Drag, ANR, TOAST, ...). Those that appear
only to their owner can override their default behavior using
the new flag (e.g. LowBattery).
Fixes bug 7211965.
Change-Id: I1fdca25d57b7b523f0c7f8bceb819af656c388d4
1. The initial implementation was not sending the gesture start and end
events until the user had moved more than a given slop and did not
do it faster than a given velocity. However, there is the case where,
if the user did not move or just tapped on the screen, an exploration
occurs. The system was not sending the exploration start and end
events for the latter case.
2. The delayed command for long press was not canceled when the pointer
moved more than the slop distance.
bug:7282811
Change-Id: I7d98470cd4d9ea9b2519326e5e550ff68b040747
1. Accessibility events for changes in the content of a given window, such as
click, focus, etc., are dispatched to clients only if they come from the
active window.
Events for changes in the state of a window, such as a window getting input
focus or a notification appearing, are always dispatched. The notification events
do not contain a source, so a client cannot introspect the notification area
(unless the user explicitly touches it, which generates hover events). The
events for a window getting input focus change the active window, so they
have to be dispatched.
Events that are a result of the user touching the screen, such as hover
enter, first touch, etc., should always be dispatched.
bug:7282006
Change-Id: I96b79189f8571285175d9660a22394cc84f39559