The window manager now keeps track of the overscan of
each display, with an API to set it. The overscan affects
how it positions windows on the display. There is a new set
of APIs for windows to say they would like to be laid out
into the overscan region, a call into the window manager to
set the overscan region for a display, and a new concept of
display settings that the window manager stores persistently.
Also added a new "wm" command, moving the window manager
specific commands from the "am" command over to it and adding
a new one to set the overscan region.
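A minimal sketch of the window-side request, assuming it is exposed
as a WindowManager.LayoutParams flag (the flag name is my reading of
the new API, not spelled out in this message):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.WindowManager;

    public class OverscanActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Ask to be laid out into the display's overscan region
            // instead of being inset away from it.
            getWindow().addFlags(
                    WindowManager.LayoutParams.FLAG_LAYOUT_IN_OVERSCAN);
        }
    }

The new "wm" subcommand would then presumably be driven along the
lines of "adb shell wm overscan <left>,<top>,<right>,<bottom>".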
Change-Id: Id2c8092db64fd0a982274fedac7658d82f30f9ff
Since fsblkcnt_t is 8 bytes, provide methods to access the larger
values instead of casting to int. This would start being an issue
around 8 TB filesystems (2^31 blocks at a typical 4 KB block size).
Also add convenience methods to calculate values in bytes.
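A minimal sketch of the consumer side, assuming the additions land on
android.os.StatFs as long-returning variants plus byte-based
convenience methods:

    import android.os.Environment;
    import android.os.StatFs;

    public class DiskSpace {
        public static void dump() {
            StatFs stat = new StatFs(Environment.getDataDirectory().getPath());
            // Long accessors avoid the int truncation; the byte-based
            // variants fold in the block size for convenience.
            System.out.println("blocks=" + stat.getBlockCountLong()
                    + " totalBytes=" + stat.getTotalBytes()
                    + " availableBytes=" + stat.getAvailableBytes());
        }
    }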
Change-Id: Ib924425d8d6d82785466f611ca71800cc1e952b6
It appears that com.android.internal.util.Predicate is exposed in the
public APIs even though it lives in an internal package. Leaking the
Predicate APIs was a mistake, and while we cannot fix that, this
change adds a legitimate public filter interface.
bug:8183223
Change-Id: I3e2c0ef685d7a832630aaa3ec2e8eae3fb058289
1. An accessibility service may set a flag to request a compressed
view of the node tree representing the screen. The compressed
state does not contain nodes that do not react to user actions
and do not draw content, i.e. they are dumb view managers. This
compressed hierarchy was very beneficial to the test team, and
this change exposes the APIs. The compression has to be
configurable since old tests are written against the uncompressed
view tree. Basically we do not hide the fact that UiAutomation
is simply an accessibility service with some other useful APIs.
(A sketch of both points follows after this list.)
bug:8051095
2. An accessibility service can perform global actions such as
opening notifications, opening recent apps, etc. These are also
needed for UI testing since there is no other way to do this via
the existing UiAutomation APIs. As above, we do not hide the fact
that UiAutomation is simply an accessibility service with some
other useful APIs.
bug:8117582
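A minimal sketch of both points from a UI test, assuming compression
maps onto the existing FLAG_INCLUDE_NOT_IMPORTANT_VIEWS and that the
global actions reuse the AccessibilityService constants:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.app.UiAutomation;
    import android.test.InstrumentationTestCase;

    public class GlobalActionsTest extends InstrumentationTestCase {
        public void testUncompressedTreeAndNotifications() {
            UiAutomation uiAutomation = getInstrumentation().getUiAutomation();

            // 1. Opt out of compression: also report nodes that neither
            // react to user actions nor draw content.
            AccessibilityServiceInfo info = uiAutomation.getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_INCLUDE_NOT_IMPORTANT_VIEWS;
            uiAutomation.setServiceInfo(info);

            // 2. Perform a global action, e.g. open the notifications.
            assertTrue(uiAutomation.performGlobalAction(
                    AccessibilityService.GLOBAL_ACTION_NOTIFICATIONS));
        }
    }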
Change-Id: I7b6e24b5f7a973fdada0cc199cff9f882b10720b
Deprecate transport layer statistics, leaving only the summarized
network layer statistics.
Improve documentation to be clear about layers where measurements
occur, and their behavior since boot. Under the hood, move to using
xt_qtaguid UID statistics.
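A minimal sketch of reading the remaining per-UID counters, which are
network-layer and accumulate since boot:

    import android.net.TrafficStats;
    import android.os.Process;

    public class UidTraffic {
        public static String dump() {
            int uid = Process.myUid();
            // Network-layer byte counts for this UID since boot.
            long rx = TrafficStats.getUidRxBytes(uid);
            long tx = TrafficStats.getUidTxBytes(uid);
            return "uid=" + uid + " rxBytes=" + rx + " txBytes=" + tx;
        }
    }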
Bug: 6818637, 7013662
Change-Id: I9f26992e5fcdebd88c671e5765bd91229e7b0016
We support text traversal at a granularity over non-text views with
a content description, hence we should support setting the cursor
position in such views.
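A minimal sketch, assuming the cursor is set through the existing
ACTION_SET_SELECTION arguments on AccessibilityNodeInfo, with a
collapsed selection (start == end) acting as the cursor position:

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class CursorHelper {
        public static boolean setCursor(AccessibilityNodeInfo node, int position) {
            Bundle args = new Bundle();
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_START_INT,
                    position);
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_END_INT,
                    position);
            return node.performAction(
                    AccessibilityNodeInfo.ACTION_SET_SELECTION, args);
        }
    }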
bug:8134469
Change-Id: I4dba225b0ade795b7a20c201fb906ae7146c065d
Add "mirrorForRtl" property for ProgressBar (default is "false") and
use it accordingly to the following RTL rules:
- time still goes from left to right
- clocks still rotate clockwise
Change-Id: Ib91ce6ab341aa6097c0f43b13703174a2ee9ec70
Currently text editing is pretty hard (certain operations even
impossible) for a blind person. To address the issue this change
adds APIs that enable an accessibility service to perform basic
text editing operations such as copy, paste, cut, set selection,
extend selection while moving at a given granularity.
The new APIs enable an accessibility service to expose an efficient,
gesture-driven text editing facility.
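A minimal sketch of the service side, assuming the operations are
exposed as AccessibilityNodeInfo actions following the existing
action-constant convention:

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class EditOps {
        // Select [start, end) in an editable node, then cut the selection.
        public static boolean cutRange(AccessibilityNodeInfo node,
                int start, int end) {
            Bundle args = new Bundle();
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_START_INT,
                    start);
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_END_INT,
                    end);
            return node.performAction(
                            AccessibilityNodeInfo.ACTION_SET_SELECTION, args)
                    && node.performAction(AccessibilityNodeInfo.ACTION_CUT);
        }

        // Paste the clipboard content at the current cursor position.
        public static boolean paste(AccessibilityNodeInfo node) {
            return node.performAction(AccessibilityNodeInfo.ACTION_PASTE);
        }
    }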
bug:8098384
Change-Id: I82b200138a3fdf4c0c316b774fc08a096ced29d0
Second changeset; the first one was committed too hastily.
The TTS voice-data related API was originally written with
one engine in mind (the pico sVox TTS). It exposes implementation
details that should be private to the engine implementation.
- Deprecating fields of ACTION_CHECK_TTS_DATA results that were
used by sVox language packs to find out the location of voice data.
Those fields are TTS engine implementation details and should be
private:
EXTRA_VOICE_DATA_ROOT_DIRECTORY
EXTRA_VOICE_DATA_FILES
EXTRA_VOICE_DATA_FILES_INFO
- Deprecating fields of the ACTION_CHECK_TTS_DATA request that
provide unnecessary functionality (it can easily be done on the
client side):
EXTRA_CHECK_VOICE_DATA_FOR
- Deprecating some of the return codes of ACTION_CHECK_TTS_DATA; they
are specific to sVox pico voice data and in all cases can be replaced
by the CHECK_VOICE_DATA_FAIL result code:
CHECK_VOICE_DATA_BAD_DATA
CHECK_VOICE_DATA_MISSING_DATA
CHECK_VOICE_DATA_MISSING_VOLUME
- Changing the semantics of the ACTION_TTS_DATA_INSTALLED intent. It's
now more generic and covers any change to the set of available voice
data: not only adding languages but also removing them should trigger
the broadcast, and adding or removing features of an existing locale
(like embedded synthesis) should be signaled by the broadcast as well.
- Deprecating its EXTRA_TTS_DATA_INSTALLED result field - clients
should discover the change by running the ACTION_CHECK_TTS_DATA
intent (see the sketch below).
- Making the GetSampleText intent public again - it's used by most TTS
engines to provide unique demonstration data.
- Deprecating TextToSpeech.OnUtteranceCompletedListener - it was
replaced by UtteranceProgressListener in API level 15, but no one put
a deprecation tag on it.
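A minimal sketch of the client-side flow this implies: react to the
now-generic broadcast by re-running the voice data check instead of
reading the deprecated extras:

    import android.app.Activity;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.speech.tts.TextToSpeech;

    public class TtsCheckActivity extends Activity {
        private static final int REQUEST_CHECK_TTS_DATA = 1;

        private final BroadcastReceiver mVoiceDataChanged =
                new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // Voice data was added or removed; re-run the check.
                startActivityForResult(
                        new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA),
                        REQUEST_CHECK_TTS_DATA);
            }
        };

        @Override
        protected void onResume() {
            super.onResume();
            registerReceiver(mVoiceDataChanged, new IntentFilter(
                    TextToSpeech.Engine.ACTION_TTS_DATA_INSTALLED));
        }

        @Override
        protected void onPause() {
            super.onPause();
            unregisterReceiver(mVoiceDataChanged);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode,
                Intent data) {
            if (requestCode == REQUEST_CHECK_TTS_DATA) {
                boolean available = resultCode
                        == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS;
                // React to the updated availability here.
            }
        }
    }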
Change-Id: Ia58af7f218dc1568570712f435782d2003260e82
Currently we have an "enhance web accessibility" setting that has to
be enabled to make sure web content is accessible. We added the
setting to get user consent because we are injecting a
JavaScript-based screen reader pulled from the Google infrastructure.
However, many users do not know that and (as expected) do not read
the user documentation, resulting in criticism for lacking
accessibility support in WebViews with JavaScript enabled (Browser,
Gmail, etc.).
To smooth the user experience, "enhance web accessibility" is now a
feature an accessibility plug-in can request, similar to explore by
touch. A user no longer needs to know that she has to explicitly
enable the setting, and web accessibility works out of the box.
Before, we were showing a dialog when a plug-in tried to put the
device into touch exploration mode. However, now that we have one
more feature a plug-in can request, showing two dialogs (assuming a
plug-in wants both features) would mean that a user potentially has
to deal with three dialogs: one for enabling the service and one for
each feature. We could merge the dialogs, but the user would still
have to poke through two of them.
The permission mechanism is a perfect fit for getting user consent
for an app to do something, in this case to enable an accessibility
feature. We need a separate permission for each of explore by touch
and enhance web accessibility, since the former changes the
interaction model and the latter injects JavaScript into web pages.
It is critical to get user consent for the script injection part, so
we need a well-documented permission rather than a vague umbrella
permission for poking accessibility features. To allow better
grouping of the accessibility permissions, this patch adds a
permission group as well.
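A minimal sketch of the plug-in side, assuming both features are
requested via AccessibilityServiceInfo flags and that each is backed
by one of the new permissions in the service's manifest:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.view.accessibility.AccessibilityEvent;

    public class WebAwareScreenReader extends AccessibilityService {
        @Override
        protected void onServiceConnected() {
            AccessibilityServiceInfo info = getServiceInfo();
            // Request both features; user consent is gathered through
            // the corresponding permissions rather than extra dialogs.
            info.flags |= AccessibilityServiceInfo
                            .FLAG_REQUEST_TOUCH_EXPLORATION_MODE
                    | AccessibilityServiceInfo
                            .FLAG_REQUEST_ENHANCED_WEB_ACCESSIBILITY;
            setServiceInfo(info);
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) { }

        @Override
        public void onInterrupt() { }
    }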
bug:8089372
Change-Id: Ic125514c34f191aea0416a469e4b3481ab3200b9
Also add new ops for calendar and Wi-Fi scans, finish
implementing rejection of content provider calls, fix
issues with rejecting location calls, and fix a bug in the
new pm call to retrieve apps with permissions.
Change-Id: I29d9f8600bfbbf6561abf6d491907e2bbf6af417
When launching an assist, there is a new API allowing the
current foreground activity/application to provide additional,
arbitrary contextual information that is stuffed into the
assist intent before it is launched.
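A minimal sketch of the provider side, assuming the hook is an
Activity callback handed the bundle that gets stuffed into the assist
intent (the extra key is a hypothetical app-defined name):

    import android.app.Activity;
    import android.os.Bundle;

    public class PlayerActivity extends Activity {
        @Override
        public void onProvideAssistData(Bundle data) {
            super.onProvideAssistData(data);
            // Arbitrary contextual state for the assistant.
            data.putString("com.example.player.CURRENT_TRACK", "track-42");
        }
    }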
Change-Id: I0b2a6f5a266dc42cc0175327fa76774f814af3b4
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span application
boundaries. The UI automation APIs are encapsulated in the UiAutomation
object that is provided by an Instrumentation object. It is initialized
by the system and can be used for both introspecting the screen and
performing interactions that simulate a user. UI tests are normal
instrumentation tests and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a
shell command, the shell program launching the tests creates a delegate
object and passes it as an argument to the started instrumentation.
This delegate allows the APK that runs the tests to access some
privileged operations protected by signature-level permissions which
are explicitly granted to the shell user.
The UiAutomation object also supports running tests in the legacy way
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working while the new ones should be implemented
using the new APIs. The UiAutomation object exposes lower level APIs which
allow simulation of arbitrary user interactions and writing complete UI test
cases. Clients, such as UiAutomator, are encouraged to implement higher-
level APIs which minimize development effort and can be used as a helper
library by the test developer.
The benefit of this change is decoupling UiAutomator from the system,
since the former was calling hidden APIs, which required that it be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. UiAutomator was also creating
additional API surface in the system image. Another benefit of the new
design is that test cases now have access to a Context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third parties can develop their own higher-level test APIs on top
of the lower-level ones exposed by UiAutomation.
bug:8028258
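A minimal sketch of a normal instrumentation test driving the
lower-level APIs, assuming UiAutomation exposes synchronous input
event injection (the coordinates are arbitrary):

    import android.app.UiAutomation;
    import android.os.SystemClock;
    import android.test.InstrumentationTestCase;
    import android.view.InputDevice;
    import android.view.MotionEvent;

    public class TapTest extends InstrumentationTestCase {
        public void testTap() {
            UiAutomation uiAutomation = getInstrumentation().getUiAutomation();
            long now = SystemClock.uptimeMillis();

            MotionEvent down = MotionEvent.obtain(
                    now, now, MotionEvent.ACTION_DOWN, 200, 300, 0);
            down.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            assertTrue(uiAutomation.injectInputEvent(down, true /* sync */));

            MotionEvent up = MotionEvent.obtain(
                    now, now, MotionEvent.ACTION_UP, 200, 300, 0);
            up.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            assertTrue(uiAutomation.injectInputEvent(up, true /* sync */));
        }
    }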
This change also adds the fully qualified resource name of the view's
id to the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is more stable than
the generated id number, which may change from one build to
another. This API facilitates reusing the already defined ids for UI
automation.
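A minimal sketch of the id-based lookup, assuming the flag lives on
AccessibilityServiceInfo and the lookup on AccessibilityNodeInfo:

    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.app.UiAutomation;
    import android.view.accessibility.AccessibilityNodeInfo;
    import java.util.List;

    public class ViewIdLookup {
        public static List<AccessibilityNodeInfo> findByViewId(
                UiAutomation uiAutomation, String fullyQualifiedId) {
            // Ask for view id resource names to be reported in emitted
            // node infos.
            AccessibilityServiceInfo info = uiAutomation.getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_REPORT_VIEW_IDS;
            uiAutomation.setServiceInfo(info);

            // Look up nodes by the stable resource name, e.g.
            // "com.example.app:id/submit_button" (hypothetical id).
            AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();
            return root.findAccessibilityNodeInfosByViewId(fullyQualifiedId);
        }
    }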
bug:7678973
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
The disabled state allows you to make an app disabled
except for whatever parts of the system still want to
provide access to it, automatically enabling it
if the user wants to use it.
Currently the input method manager service is the only
part of the system that supports this, so you can put
an IME in this state and it will generally look disabled,
but it will still be available in the IME list and, once
selected, will be switched to the enabled state.
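A minimal sketch, assuming the state is applied through the existing
PackageManager enabled-setting API (the constant name is my reading
of the new state):

    import android.content.Context;
    import android.content.pm.PackageManager;

    public class ImeDisabler {
        public static void disableUntilUsed(Context context, String imePackage) {
            // The IME looks disabled but stays listed, and flips back
            // to enabled once the user selects it.
            context.getPackageManager().setApplicationEnabledSetting(
                    imePackage,
                    PackageManager.COMPONENT_ENABLED_STATE_DISABLED_UNTIL_USED,
                    0 /* flags */);
        }
    }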
Change-Id: I77f01c70610d82ce9070d4aabbadec8ae2cff2a3
LocationManagerService now annotates incoming Location objects that
have come from mock location providers. The new isFromMockProvider()
method can be called on any Location to determine whether the
provider that supplied the Location was a mock location provider.
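A minimal sketch of the consumer side:

    import android.location.Location;
    import android.location.LocationListener;
    import android.os.Bundle;

    public class MockAwareListener implements LocationListener {
        @Override
        public void onLocationChanged(Location location) {
            if (location.isFromMockProvider()) {
                // Ignore fixes supplied by a mock location provider.
                return;
            }
            // ... consume the trusted fix ...
        }

        @Override public void onStatusChanged(String provider, int status,
                Bundle extras) { }
        @Override public void onProviderEnabled(String provider) { }
        @Override public void onProviderDisabled(String provider) { }
    }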
Bug: 6813235
Change-Id: Ib5140e93ea427f2e0b0036151047f87a02b4d23a
Also add a MockContentResolver constructor that takes a Context, and
move to a singleton ActivityThread, since there is only one inside
each process. This makes ActivityThread accessible from threads like
InstrumentationThread.
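A minimal sketch using the Context-taking constructor (the authority
string and the stub provider are illustrative):

    import android.content.Context;
    import android.test.mock.MockContentProvider;
    import android.test.mock.MockContentResolver;

    public class TestResolvers {
        public static MockContentResolver create(Context context) {
            // The resolver now carries a real Context.
            MockContentResolver resolver = new MockContentResolver(context);
            resolver.addProvider("com.example.provider",
                    new MockContentProvider(context) { });
            return resolver;
        }
    }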
Change-Id: Ib8b18f1b9bba8820ff412d782a43511066eabf24
Add assignContactFromEmail(String, boolean, Bundle)
and assignContactFromPhone(String, boolean, Bundle),
which allow the caller to provide a bundle of extras to
pre-populate the ContactEditorFragment if no contact
is found with the requested email address or phone number.
Bug: 7038382
Change-Id: Ib77fa484e1c39cb60d7acc27efe3a3fcf3fee62f