1. The up event for a long press was not properly adjusted, as the
long-pressing finger may not be on top of the accessibility
focused item.
2. There was a scenario where a two-finger swipe leads to a crash:
one finger moves, a second finger goes down without moving, the
first finger goes up, and then the second finger moves. All of
this has to happen before we decide that the user is touch
exploring. This sequence is very hard to hit, which is why we
could not easily reproduce the crash.
3. We use the angle of the two-finger vector to determine whether
the user is dragging. However, in some cases we were waiting
unnecessarily long before performing the check, and as a result
the notification shade on Manta was not expandable.
bug:11341530
bug:11189225
Change-Id: Ieea39783444a1c20581f8addfd518d1c11485099
Specifically, ignore any flags that alter the visibility or
transparency of the navigation bar.
BUG: 11082573
Change-Id: I17264dc55a1c6c3cb9b9cf92d5121799cecee5b8
The scale gesture detector has a new behavior that makes onScale
callbacks during a swipe after a double tap. Screen magnification
is triggered by a triple tap, so if the user triple-taps and
holds to do a temporary magnification and then tries to change the
zoom level with another finger, erroneous results are obtained.
ScaleGestureDetector uses an API level check for the new
behavior, but the ScreenMagnifier is a platform feature. We now
explicitly ask for the old behavior.
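A minimal sketch of opting out, assuming the public
ScaleGestureDetector.setQuickScaleEnabled API; the platform fix may use
a different internal mechanism:

    import android.content.Context;
    import android.view.ScaleGestureDetector;

    class MagnifierGestureSetup {
        static ScaleGestureDetector create(Context context,
                ScaleGestureDetector.OnScaleGestureListener listener) {
            ScaleGestureDetector detector =
                    new ScaleGestureDetector(context, listener);
            // Explicitly request the old behavior: no onScale callbacks
            // for a double-tap-and-swipe ("quick scale") gesture.
            detector.setQuickScaleEnabled(false);
            return detector;
        }
    }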
bug:11033376
Change-Id: I0dfb14dd3abcaa34ad1f40447c631b4203797378
1. The logic for finding the active pointer was incorrect. The code was
iterating over all pointer ids and taking the minimum, i.e. the pointer
that first went down. The problem was that the down time for pointers
that are not down was also considered (it is set to zero), so sometimes
we would take as the first pointer that went down a pointer that is not
down at all. Now we iterate only over the pointers that are down (see
the sketch after this list).
2. The batched events while waiting to see if the user is exploring or
gesturing were added even if we were already in the touch exploration
state, at which point we do not have to batch. As a result we ended up
with leftovers from a previous gesture when handling the delayed events,
and crashed.
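A minimal sketch of the corrected scan from item 1; all names here are
illustrative, not the actual touch explorer internals:

    // Find the pointer that went down first among the pointers that
    // are currently down; skip the rest, whose down time is zero.
    static int findPrimaryActivePointerId(boolean[] pointerDown,
            long[] pointerDownTime, int pointerCount) {
        int primaryId = -1; // invalid pointer id
        long minDownTime = Long.MAX_VALUE;
        for (int id = 0; id < pointerCount; id++) {
            if (!pointerDown[id]) {
                continue; // not down, so its down time is meaningless
            }
            if (pointerDownTime[id] < minDownTime) {
                minDownTime = pointerDownTime[id];
                primaryId = id;
            }
        }
        return primaryId;
    }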
bug:10312546
Change-Id: I4728541ac12e4da4577d22e4314101dd169a52fb
1. Removed the inactive pointer filtering which was not reporting pointers
to the apps if they did not travel a minimal distance. This prohibited
development of apps with innovative interaction models, such as using
the screen as a virtual Braille keyboard.
2. We need the first pointer to travel some distance, or a minimal amount
of time to pass, before deciding whether the user is exploring or
performing a gesture. In this period we were dropping events, which was
preventing innovative interfaces such as gesture-based typing since we
were chopping off a significant portion of the data.
Change-Id: I5c1aa98d14c83f356a9c59c93f4dc1f970c0faca
We were allowing the system and the shell user to use the
screen introspection APIs, but the root user was not able to do so.
This change enables the root user to also use these APIs. Note that
we usually allow the root user to access privileged functionality,
similarly to the shell and system users.
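A minimal sketch of such a check; Process.SYSTEM_UID is public API,
while the shell and root UID constants are platform-internal, so
literal values are used here:

    import android.os.Binder;
    import android.os.Process;

    static boolean isCallerAllowed() {
        final int uid = Binder.getCallingUid();
        return uid == Process.SYSTEM_UID
                || uid == 2000  // shell
                || uid == 0;    // root
    }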
bug:8877685
Change-Id: Ie4008339e864b835bd3a2d5e06b042e4431c5270
Initially the current user in the accessibility manager service is the
owner. This is correct since the system should be able to respond to
queries immediately and their result depends on the current user. However,
the system was calling the user switch callback with the current user,
which is the same as the one we initialized with. Switching the user
clears the state of the old user, which in this case is the current
one. Hence, we were losing state for the current user. This behavior was
masked by the fact that accidentally no events in the system were
fired before the first user switch call.
Losing the current user state puts the manager service in an inconsistent
state and it binds to accessibility services more than once. As a result
the accessibility layer starts to misbehave, rendering the device useless
to a blind user.
Now we ignore user switch callbacks if the new user is the same
as the current one. Since we can no longer initialize on the first user
switch, this change adds an explicit system ready method called from
the system server at the right moment.
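A minimal sketch of the two fixes; the names are illustrative of
AccessibilityManagerService, not verbatim:

    final class UserSwitchHandling {
        private final Object mLock = new Object();
        private int mCurrentUserId; // initialized to the owner user

        void switchUser(int userId) {
            synchronized (mLock) {
                if (mCurrentUserId == userId) {
                    return; // ignore switches to the current user
                }
                // ... clear the old user's transient state and rebind
                // accessibility services for the new user ...
                mCurrentUserId = userId;
            }
        }

        // Called explicitly by the system server instead of relying
        // on the first user switch callback for initialization.
        void systemReady() {
            synchronized (mLock) {
                // ... one-time initialization for the current user ...
            }
        }
    }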
bug:9496697
Change-Id: Icb39e929ea44e6c0360aba7ddc12f941ca2c9f98
This change adds several traits and properties to AccessibilityNodeInfo,
aiming to allow better description of native Android components to
accessibility services, as well as mapping web content to a native
Android node info tree.
Change-Id: I36b893cbaa6213c9d02d805e9dc36b6d792b4961
For UI test automation purposes we register a fake accessibility service
and suspend all other services. When the UI automation service is
unregistered, we restore the suspended ones. Since the UI automation
service is fake and incomplete, for example it has no resolve info, it
should not be reported to clients as being installed or enabled.
bug:8871034
Change-Id: I66792cd028159c1652d3c8a2982164821282ab24
Touch exploration and enhanced web accessibility can be toggled at
run time. However, the code that updates the state of these features
was not doing that properly. In particular, it did not write to the
settings if a feature got disabled. Now the logic is much cleaner:
if there is a service that can request a feature and does request it,
and the feature is not enabled, we enable it; otherwise the feature is
disabled.
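A minimal sketch of the resolve logic; all names are illustrative, not
the actual manager service members:

    interface FeatureService {
        boolean canRequestTouchExploration(); // declared capability
        boolean requestsTouchExploration();   // runtime flag
    }

    final class TouchExplorationResolver {
        private boolean mTouchExplorationEnabled;

        void resolve(Iterable<FeatureService> boundServices) {
            boolean requested = false;
            for (FeatureService service : boundServices) {
                if (service.canRequestTouchExploration()
                        && service.requestsTouchExploration()) {
                    requested = true;
                    break;
                }
            }
            if (requested != mTouchExplorationEnabled) {
                mTouchExplorationEnabled = requested;
                // Persist the new value to settings, including the
                // disable case the old code missed.
            }
        }
    }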
bug:8790771
Change-Id: I218dfa12fd02220c94940b54f42bed578811a794
1. When a service dies we clear its state and remove it from the bound
services, waiting for a new onServiceConnected call in which to initialize
and add the service. The problem is that after clearing and removing a
dead service there is a call to onUserStateChangedLocked which will end up
rebinding to the service, so we get multiple onServiceConnected calls, as
a result of which we add the service twice and it becomes a mess. Note
that every time the service dies we end up being bound to it twice as
many times - a royal mess! The onUserStateChangedLocked call is not even
needed since we clear and remove the service, and this method will be
called when the service is recreated.
2. When a service dies and is recreated by the system, we were not adding
it properly since we regarded only services that we had bound to and were
waiting for the connection. Now we also regard services which died and
are recreated.
bug:8796109
Change-Id: I5ec60c67bd3b057446bb8d90b48511c35d45289d
1. The helper query bridge service did not have the new capability
to query the screen content.
2. Fixed the build.
bug:8633951
Change-Id: Ief6a3387793710a83b83c18cc6c53b51dcda9bdf
We have APIs that allow an accessibility service to filter key events. The
service has to declare the capability to filter key events in its
manifest and then it can set a flag to toggle the feature at runtime. The
problem was that even if no accessibility service was handling key events,
these events were not fed back to the input system.
This change adds a new feature flag in the accessibility input filter that
is set only if at least one service can and wants to filter key events. If
the feature flag is set, then the filter will deliver events to services
and, if they are not handled, to the system. This change also cleans up
the logic for updating the input filter.
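For reference, a minimal sketch of how a service toggles the feature at
runtime, assuming it declared the corresponding capability in its
manifest; these are the public AccessibilityService APIs:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;

    final class KeyFilterToggle {
        static void setKeyFilteringEnabled(AccessibilityService service,
                boolean enabled) {
            AccessibilityServiceInfo info = service.getServiceInfo();
            if (enabled) {
                info.flags |=
                        AccessibilityServiceInfo.FLAG_REQUEST_FILTER_KEY_EVENTS;
            } else {
                info.flags &=
                        ~AccessibilityServiceInfo.FLAG_REQUEST_FILTER_KEY_EVENTS;
            }
            service.setServiceInfo(info);
        }
    }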
bug:8713422
Change-Id: I4bc0c1348676569d1b76e9024708d1ed43ceb26a
Since the capability to enable touch exploration is dynamically granted by
the user for apps targeting a pre-JellybeanMR2 API level, we have to
properly update the accessibility service info for that service and also
avoid caching copies of the service info.
bug:8633951
Change-Id: I83dd1c852706ec55d40cda7209ad842889fb970a
1. The UiAutomation#executeAndWaitForEvent method was invoking the passed
runnable while holding the lock, which may lead to a deadlock. For
example, a runnable that calls getActivity() gets us into such a
state.
2. UI automation services did not get all capabilities such a
service can have. Now a UI test service gets all of them.
3. When UiAutomation was waiting for an event fired as a result of a
performed action, it was checking whether the received event time
is strictly before the time of executing the command that should
fire the event. However, if the execution is fast enough, i.e.
takes less than one millisecond, then the event time and the execution
time are the same. This was leading to a missed signal in rare
cases (see the sketch after this list).
4. AccessibilityNodeInfoCache was not clearing the relevant state
for an accessibility focus clearing event.
5. Accessibility text traversal in TextView was partially using the
text and partially the content description - broken. Now we use the
text for a text view and the content description for other views. In
other words, we use the most precise text we have.
6. AccessibilityManagerService was not granting the capabilities of a
UiAutomation service - plainly wrong.
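A minimal sketch of the timing fix in item 3 (names illustrative):

    // Old, buggy check: eventTimeMillis > executionStartTimeMillis.
    // Both timestamps can fall within the same millisecond, so the
    // comparison has to be inclusive.
    static boolean isEventFresh(long eventTimeMillis,
            long executionStartTimeMillis) {
        return eventTimeMillis >= executionStartTimeMillis;
    }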
CTS change: https://googleplex-android-review.googlesource.com/#/c/300693/
bug:8695422
bug:8657560
Change-Id: I9afc5c3c69eb51f1c01930959232f44681b15e86
Accessibility services can perform special operations such as retrieving
the screen content, enabling explore by touch, etc. To ensure the user
is aware that the service will perform special operations, we were using
permissions. However, the special operations cannot be performed unless
the service is actually enabled by the user, and it is at this point that
we want to notify the user about the service capabilities.
This change adds capability attributes to the accessibility service's
meta-data XML file. The service has to declare the capability, and when
it is enabled we show the user the capabilities in the warning dialog.
bug:8633951
Change-Id: Id3442dc71dad018e606888afdc40834682fdb037
This is a regression in which the input filter of the accessibility
manager service is not set if magnification is enabled but accessibility
is not - i.e. no accessibility services are enabled. Fixed the logic to
install the input filter when magnification is on even if no services
are enabled, in addition to when services are enabled.
bug:8652765
Change-Id: Ia73e1064035f95ba0f246f4cabcc42d58c12a11f
When something that affects the state of accessibility in the system
changes, we run a resolve method that reloads all relevant information
and, if it changed, we call a method that makes everything right. One of
the interesting properties we read is the installed accessibility
services. We were using equals to figure out whether these services have
changed, but this is not correct since AccessibilityServiceInfo does not
use all internal members for equals, and using all members is not
feasible since some of these internal members do not support equals
properly, for example ResolveInfo.
Therefore, when a package is reinstalled we remove all installed services
from the list of ones we know about, which forces them to be reloaded,
thus capturing the current state of the reinstalled package.
bug:8621960
Change-Id: Ie1ef4bf1036d8d6e033cd9528ea2292ce24e5320
It is possible that an accessibility service's package was force stopped,
during the handling of which the death recipient is unlinked, yet we
still get a call to binderDied since the call was made before we unlinked
but was waiting on the lock we held during the force stop handling. Added
a check whether the service is already disconnected and, if so, we do
nothing.
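A minimal sketch of the guard, illustrative of the fix rather than the
verbatim platform code:

    import android.os.IBinder;

    final class ServiceDeathRecipient implements IBinder.DeathRecipient {
        private final Object mLock = new Object();
        private Object mServiceInterface; // null once disconnected

        @Override
        public void binderDied() {
            synchronized (mLock) {
                if (mServiceInterface == null) {
                    return; // already disconnected; nothing to clean up
                }
                mServiceInterface = null;
                // ... clear the remaining state of the dead service ...
            }
        }
    }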
bug:8600388
Change-Id: I4a9ca305b9863d986b930a7c1ec8f9006b16a333
On eng builds we have an event consistency verifier to log any
inconsistent event stream states due to mishandling of intercepted
events by an accessibility service. On non-eng builds this verifier
is null, and a null check was lacking.
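A minimal sketch of the added guard; InputEventConsistencyVerifier is a
platform-internal class, so the call below is illustrative:

    void verifyEvent(MotionEvent event) {
        // The verifier is created only on eng builds and is null
        // otherwise, so it must be null-checked before every use.
        if (mEventVerifier != null) {
            mEventVerifier.onTouchEvent(event, 0 /* nestingLevel */);
        }
    }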
bug:8616711
Change-Id: Ib083a405dfa8340025090a65e50155eb10526a90
If the connected service is not entirely set up when calling the method
for handling a change in the current user state, we get a potential NPE
since the management method may have discarded the service, thus
nullifying the connection to it. Now the service is fully configured
before calling the state change management method.
bug:8600489
Change-Id: Ib0bf7c6d575e15c620da419d43ece22f4187fd34
Now that we have gestures which are detected by the system and
interpreted by an accessibility service, there is inconsistent
behavior between using gestures and the keyboard. Some devices
have both. Therefore, an accessibility service should be able to
interpret keys in addition to gestures to provide a consistent user
experience. Now an accessibility service can expose shortcuts for
each gestural action.
This change adds APIs for an accessibility service to observe and,
at will, intercept key events before they are dispatched to the
rest of the system. The service can return true or false from its
onKeyEvent callback to either consume the event or let it be
delivered to the rest of the system. However, the service is *not*
able to inject key events or modify the observed ones.
Previous ideas of allowing the service to say it "tracks" an event,
so the latter is not delivered to the system until a subsequent
event is either "handled" or "not handled", will not work. If the
service tracks a key but no other key is pressed, this key is
essentially not delivered to the app, and at a potentially much
later point this stashed event would be delivered in maybe a
completely different context. The correct way of implementing
shortcuts is a combination of modifier keys plus some other key or
key sequence. Key events already contain information about which
modifier keys are down, and the service can track them as well.
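A minimal sketch from the service's side, using the public onKeyEvent
API; the Alt+N shortcut is just an example:

    import android.accessibilityservice.AccessibilityService;
    import android.view.KeyEvent;
    import android.view.accessibility.AccessibilityEvent;

    public class ShortcutService extends AccessibilityService {
        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) { }

        @Override
        public void onInterrupt() { }

        @Override
        protected boolean onKeyEvent(KeyEvent event) {
            // Example shortcut: Alt+N stands in for a gestural action.
            if (event.getAction() == KeyEvent.ACTION_DOWN
                    && event.isAltPressed()
                    && event.getKeyCode() == KeyEvent.KEYCODE_N) {
                // ... perform the action ...
                return true;  // consumed; not delivered to the system
            }
            return false;     // not handled; delivered normally
        }
    }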
bug:8088812
Change-Id: I81ba9a7de9f19ca6662661f27fdc852323e38c00
If no accessibility services are enabled, we disable the
accessibility event firing to save resources. When the last
such service was disabled, the system was not unbinding from it. As
a result the user was seeing the touch exploration enable
dialog when the service that requested it was disabled. Also,
there was one service the system remained bound to that was not used.
bug:8439191
Change-Id: I6f37f2573a815bfb29870298aa0abbb1fa105588
UiAutomation registers a fake accessibility service to introspect
the screen. Upon the death of the shell process that started an
instrumentation in which a UiAutomation resides, the connection
between the UiAutomation and the system is kept alive, allowing
the instrumentation to introspect the screen even after the death
of the shell process.
bug:8285905
Change-Id: I1a16d78abbea032116c4baed175cfc0d5dedbf0c
If an accessibility service is connected but already removed
from the list of connecting services, we get an NPE since the
call to set the service connection is made on a null
remote interface. Note that service connection is asynchronous.
bug:8229877
Change-Id: I7b05f219dd0c1da6286ee4ec90b1ef310908545d
When an accessibility service connects, we get a callback in
which we either add the service, if it is in the list
of connecting services (we still want the service to connect),
or we unbind and clear the state, if the service is no longer in
the list of connecting services (we do not want this service to
connect because something changed between the bind request and
the connection callback).
The problem is that when the service connects and it is not in
the list of connecting services, we called the clean-up code
before the connection was complete. However, the clean-up code
expects fully configured services. Now we fully connect the
service and, in case there is a problem, disconnect it.
bug:8232627
Change-Id: I939e544e31ffc1406035265a012c180f2ca95d7c
On user switch the transient state of the old user was not cleared,
which means that when we switch back to this user, operational
state such as which event types were dispatched and what state was
sent to local managers is stale. This leads to semi-updated state
and broken behavior. Now if the user becomes inactive, we clear
all transient state, which will be recreated when the user becomes
active again.
bug:8196652
Change-Id: Ie9e0d712b6d567e5074b328f1bb87afaa5395c06
The UI test automation service was not removed from the lists of
enabled and installed services, where it was explicitly added on
registration. This was leaving the accessibility manager service
in an inconsistent state.
bug:8185435
Change-Id: Ice17cdef361fe98ce34f8dd01ec11dbad6c4d0c2
1. The accessibility manager service updates its internal state
based on which settings are enabled, which accessibility services
are installed, and which features are requested by the enabled
services. It was trying to do the minimal amount of work to
react to contextual changes like these, which resulted in missed
cases and complex code. Now there is a single method that reads
the contextual information and a single method that reacts to
contextual changes (see the sketch after this list). This makes
the code much easier to maintain.
2. The accessibility manager service was not updating its internal
state when the features requested by accessibility services changed.
It was relying on changing system settings and reacting to the
settings change. This is problematic since the internal state is
not updated atomically, which leads to race condition bugs. For
example, if touch exploration is enabled and a service requests
that it be disabled, the internal state will not be updated but a
request for a settings change will be made. Now while the settings
change is propagating, another request from the same service
comes to enable touch exploration, but the system incorrectly
thinks touch exploration is enabled. In the end the feature is
disabled even though it was requested.
3. Fixed a potential NPE if the accessibility input filter's event
handler was nullified between processing two event batches.
4. Fixed a bug where, if magnification is enabled, it does not work
on the settings screen since the magnified bounds are not pushed
from the window manager to the accessibility manager.
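A minimal sketch of the single-read / single-react pattern from item 1;
all names are illustrative, not the actual manager service code:

    final class StateResolver {
        static final class Config {
            boolean touchExplorationRequested;
            boolean displayMagnificationEnabled;

            boolean sameAs(Config other) {
                return other != null
                        && touchExplorationRequested
                                == other.touchExplorationRequested
                        && displayMagnificationEnabled
                                == other.displayMagnificationEnabled;
            }
        }

        private final Object mLock = new Object();
        private Config mCurrent;

        // Called whenever settings, installed services, or requested
        // features may have changed.
        void onContextChanged() {
            synchronized (mLock) {
                Config latest = readConfigurationLocked();
                if (!latest.sameAs(mCurrent)) {
                    mCurrent = latest;
                    applyConfigurationLocked(latest);
                }
            }
        }

        private Config readConfigurationLocked() {
            Config config = new Config();
            // ... read all contextual state in one place ...
            return config;
        }

        private void applyConfigurationLocked(Config config) {
            // ... react to the freshly read state in one place ...
        }
    }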
Change-Id: Idf629a06480e12f0d88372762df6c024fe0d7856
We use an input filter to manipulate the event stream in accessibility
mode. Some input events, i.e. touch and hover events, are delivered
to a touch explorer, if touch exploration is enabled, and to a magnifier,
if screen magnification is enabled. It is possible that at the moment
one of these features is enabled we are in the middle of a touch or
hover gesture. The touch explorer and screen magnifier expect to receive
an event stream that starts with an event that denotes the start of the
stream. This change ensures that hover or touch events are dispatched to
the touch explorer and the magnifier only after the start of the first
well-formed hover or touch sequence (see the sketch below).
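A minimal sketch of the gating, illustrative of the input filter rather
than the verbatim code:

    import android.view.MotionEvent;

    final class TouchStreamGate {
        private boolean mStreamStarted;

        // Returns true if the event may be dispatched downstream to
        // the touch explorer and the magnifier.
        boolean shouldDispatch(MotionEvent event) {
            if (!mStreamStarted) {
                if (event.getActionMasked() != MotionEvent.ACTION_DOWN) {
                    return false; // still inside a gesture begun earlier
                }
                mStreamStarted = true;
            }
            return true;
        }
    }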
Change-Id: I8cd0ad8e1844c59fd55cf1dfacfb79af6a8916df