This is in preparation for moving keyguard into its own process.
Moved keyguard source and resources into new .apk.
Got basic test app working. Still need to implement MockPatternUtils
and a means to pass it into KeyguardService via a local binder interface.
Added new ACCESS_KEYGUARD_SECURE_STORAGE permission.
Temporarily disabled USER_PRESENT broadcast.
Remove unintentional whitespace changes in PhoneWindowManager, etc.
Checkpoint basic working version.
Move to systemui process.
Synchronize with TOT.
Sync with recent user API changes.
Fix bug with returning interface instead of stub for IKeyguardResult. Create KeyguardServiceDelegate to allow
for a runtime-selectable local or remote interface (sketched below).
More keyguard crash robustness.
Keyguard crash recovery working. Currently fails safe (locked).
Fix selector view which was still using frameworks resources.
Remove more references to internal framework variables. Use aliases for those that should move but
currently have dependencies.
Allow runtime switching between service and local mode.
Fix layout issue on tablets where orientation was reading the incorrect constant
from the framework. Remove more framework dependencies.
Fix PIN keyboard input.
Remove unnecessary copy of orientation attrs.
Remove unused user selector widget and attempt to get multi user working again.
Fix multi-user avatar icon by grabbing it from UserManager rather than reading it directly, since
keyguard no longer has access to it.
Merge with AppWidget userId changes in master.
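For illustration, a minimal sketch of the runtime-selectable delegate idea (the
interface and class names below are assumed for the example, not the actual
framework code):

    public class KeyguardServiceDelegate {
        // Minimal surface the policy needs; the real interface is larger.
        interface Keyguard {
            boolean isShowing();
        }

        private final Keyguard mImpl;

        KeyguardServiceDelegate(boolean useRemoteService, Keyguard local, Keyguard remote) {
            // Runtime switch between the in-process and the service-backed keyguard.
            mImpl = useRemoteService ? remote : local;
        }

        public boolean isShowing() {
            try {
                return mImpl.isShowing();
            } catch (RuntimeException e) {
                // If the remote keyguard crashed, fail safe and report locked.
                return true;
            }
        }
    }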
Change-Id: I254d6fc6423ae40f6d7fef50aead4caa701e5ad2
Because ISms.aidl imports PendingIntent, we couldn't easily
make opt/telephony part of the PDK. So this change moves
ISms.aidl and SmsRawData.*, which ISms.aidl also imports,
back to frameworks/base.
Change-Id: Ia64c6e771d5a292d9bfebb413a43f3745df55c85
# Via Android Git Automerger (4) and others
* commit '25f97435302d8468afeb4ade9f00d5243b393082':
Doc change: remove htmlified samples from docs build for now. Samples are still downloadable through SDK Manager.
# Via Android Git Automerger (4) and Scott Main (1)
* commit '9dbf24797b82f4c70a75051050f32e53d1c35fe6':
docs: switch devsite doc build to use templates-sdk
# Via Android Git Automerger (4) and Scott Main (1)
* commit '0dd750349004579ca3356a155eb8a86994a45df2':
docs: add hdf bool for devsite, used to change aspects of the templates
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span application
boundaries. UI automation APIs are encapsulated in the UiAutomation object
that is provided by an Instrumentation object. It is initialized by the
system and can be used both for introspecting the screen and for performing
interactions that simulate a user. UI tests are normal instrumentation tests
and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a shell
command, the shell program launching the tests creates a delegate object and
passes it as an argument to the started instrumentation. This delegate
allows the APK that runs the tests to access some privileged operations
protected by signature-level permissions which are explicitly granted
to the shell user.
The UiAutomation object also supports running tests in the legacy way,
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working, while new ones should be implemented
using the new APIs. The UiAutomation object exposes lower-level APIs which
allow simulation of arbitrary user interactions and writing complete UI test
cases. Clients, such as UiAutomator, are encouraged to implement higher-
level APIs which minimize development effort and can be used as a helper
library by the test developer.
The benefit of this change is decoupling UiAutomator from the system,
since the former was calling hidden APIs, which required it to be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. Also, UiAutomator was creating
additional API surface in the system image. Another benefit of the new
design is that test cases now have access to a context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third parties can develop their own higher-level test APIs on top
of the lower-level ones exposed by UiAutomation.
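For illustration, a minimal sketch of how a test could use these APIs (the test
class name, coordinates, and tap target below are made up for the example):

    import android.app.Instrumentation;
    import android.app.UiAutomation;
    import android.os.SystemClock;
    import android.test.InstrumentationTestCase;
    import android.view.InputDevice;
    import android.view.MotionEvent;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class ExampleUiTest extends InstrumentationTestCase {
        public void testTapAtPoint() {
            Instrumentation instrumentation = getInstrumentation();
            UiAutomation uiAutomation = instrumentation.getUiAutomation();

            // Introspect the screen through the accessibility APIs.
            AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();
            // ... walk the node tree to find the target and its coordinates ...

            // Inject a synthesized tap as if performed by the user.
            long downTime = SystemClock.uptimeMillis();
            MotionEvent down = MotionEvent.obtain(downTime, downTime,
                    MotionEvent.ACTION_DOWN, 100 /* x */, 200 /* y */, 0);
            down.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            uiAutomation.injectInputEvent(down, true /* sync */);

            MotionEvent up = MotionEvent.obtain(downTime, SystemClock.uptimeMillis(),
                    MotionEvent.ACTION_UP, 100 /* x */, 200 /* y */, 0);
            up.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            uiAutomation.injectInputEvent(up, true /* sync */);

            down.recycle();
            up.recycle();
        }
    }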
bug:8028258
This change also adds the fully qualified resource name of the view's
id in the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is relatively more stable
than the generated id number, which may change from one build to
another. This API facilitates reusing the already defined ids for UI
automation.
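For illustration, a sketch of an accessibility service that opts into view id
reporting and looks a node up by its resource name (the service class and the
resource name are made up for the example):

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.view.accessibility.AccessibilityEvent;
    import android.view.accessibility.AccessibilityNodeInfo;
    import java.util.List;

    public class ExampleAccessibilityService extends AccessibilityService {
        @Override
        protected void onServiceConnected() {
            // Request that node infos carry the fully qualified view id resource name.
            AccessibilityServiceInfo info = getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_REPORT_VIEW_IDS;
            setServiceInfo(info);
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {
            AccessibilityNodeInfo root = getRootInActiveWindow();
            if (root == null) {
                return;
            }
            // Look nodes up by the stable resource name instead of the generated int id.
            List<AccessibilityNodeInfo> matches =
                    root.findAccessibilityNodeInfosByViewId("com.example.app:id/login_button");
            if (!matches.isEmpty()) {
                String idName = matches.get(0).getViewIdResourceName();
                // ... drive the automation using idName / the matched node ...
            }
        }

        @Override
        public void onInterrupt() {
            // No-op for this sketch.
        }
    }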
bug:7678973
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
Initial implementation, tracking use of the vibrator, GPS,
and location reports.
Also includes an update to battery stats to also keep track of
vibrator usage (since I had to be in the vibrator code anyway
to instrument it).
The service itself is only half-done. Currently there is no API to
retrieve the data (which, once there, will allow us to show you
which apps are currently causing the GPS to run and who has
recently accessed your location), it doesn't persist its data
like it should, and there is no way to tell it to reject app requests
for various operations.
But hey, it's a start!
Change-Id: I05b8d76cc4a4f7f37bc758c1701f51f9e0550e15
1. This patch takes care of the case where a magnified window is covering an unmagnified
one. One example is a dialog that covers the IME window.
bug:7634430
2. Ensuring that the UI automator tool can connect and correctly dump the screen.
bug:7694696
3. Removed the partial implementation for multi display magnification. It adds
unnecessary complexity since it cannot be implemented without support for
input from multiple screens. We will revisit when necessary.
4. Moved the magnified border window to be a surface in the window manager.
5. Moved the mediator APIs to the window manager and the policy methods to
WindowManagerPolicy.
6. Implemented batch event processing for the accessibility input filter.
Change-Id: I4ebf68b94fb07201e124794f69611ece388ec116
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it diluted
the window manager interface and allowed code that is deeply coupled with the window
manager to reside outside of it. Also, the observer callbacks were IPCs, which cannot
be called with the window manager's lock held. To avoid that, the window manager had
to post messages requesting notification of interested parties, which made the code
consuming the callbacks run asynchronously of the window manager. This caused timing
issues and added unnecessary complexity.
Now the magnification logic is split in two halves. The first half is responsible
for tracking the magnified portion of the screen and serves as a policy for which
windows can be magnified; it is part of the window manager. This part exposes higher-level APIs
allowing interested parties with the right permissions to control the magnification
of a given display. The APIs also allow a client to be registered for callbacks on
interesting changes, such as a resize of the magnified region, etc. This part serves
as a mediator between magnification controllers and the window manager.
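For illustration, a hypothetical sketch of the shape of such a mediator surface
(the interface and method names are made up for the example; they are not the
actual window manager API):

    import android.graphics.Rect;

    public interface MagnificationMediator {
        // Apply scale and X/Y pan to a display; callers need the right permission.
        void setMagnificationSpec(int displayId, float scale, float offsetX, float offsetY);

        // Callbacks on interesting changes, e.g. the magnified region being resized.
        interface Callbacks {
            void onMagnifiedRegionChanged(Rect magnifiedRegion);
            void onRotationChanged(int rotation);
            void onUserContextChanged();  // e.g. a new activity started
        }

        void registerCallbacks(int displayId, Callbacks callbacks);
    }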
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
suggest the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, a new activity. The controller
also auto pans if a dialog appears and it does not intersect the magnified frame.
bug:7410464
2. By design screen magnification and touch exploration work separately and together. If
magnification is enabled the user sees a larger version of the widgets and a subsection
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when magnification state changes we have to clear these caches since the
bounds of the cached infos no longer reflect the screen content which just got smaller
or larger.
This patch propagates not only the window scale as before, but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds are what the user sees, not what the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axes. Also
recomputed is the isVisibleToUser property of the reported info, since in a magnified
state the user sees a subset of the window content and the views not in the magnified
viewport should be reported as not visible to the user.
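For illustration, a sketch of the kind of adjustment described above (the class
and method names are made up for the example; the actual framework code differs):

    import android.graphics.Rect;

    final class MagnificationAdjuster {
        // Transform bounds from app coordinates into the magnified coordinates the
        // user actually sees, and report whether the node stays inside the viewport.
        static boolean adjustBounds(Rect boundsInScreen, float scale,
                float panX, float panY, Rect magnifiedViewport) {
            // Zoom the content about the origin, then translate by the current pan.
            boundsInScreen.left = (int) (boundsInScreen.left * scale + panX);
            boundsInScreen.top = (int) (boundsInScreen.top * scale + panY);
            boundsInScreen.right = (int) (boundsInScreen.right * scale + panX);
            boundsInScreen.bottom = (int) (boundsInScreen.bottom * scale + panY);
            // Views outside the magnified viewport are not visible to the user
            // (this feeds the recomputed isVisibleToUser property).
            return Rect.intersects(boundsInScreen, magnifiedViewport);
        }
    }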
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
Currently doc-comment-check is the single longest-running step of
the make process. It usually takes 300-400s to finish on my Z600.
What's worse, it's usually the last straggler build job.
Not running it by default can save a lot of build time.
Bug: 7253452
Change-Id: Idc868197b59e42c6b583c66f13a0e6a1bc8d5d4e