Because ISms.aidl imports PendingIntent, we couldn't easily
make opt/telephony part of the PDK. So this change moves
ISms.aidl and SmsRawData.*, which ISms.aidl also imports,
back to frameworks/base.
Change-Id: Ia64c6e771d5a292d9bfebb413a43f3745df55c85
# Via Android Git Automerger (4) and others
* commit '25f97435302d8468afeb4ade9f00d5243b393082':
Doc change: remove htmlified samples from docs build for now. Samples are still downloadable through SDK Manager.
# Via Android Git Automerger (4) and Scott Main (1)
* commit '9dbf24797b82f4c70a75051050f32e53d1c35fe6':
docs: switch devsite doc build to use templates-sdk
# Via Android Git Automerger (4) and Scott Main (1)
* commit '0dd750349004579ca3356a155eb8a86994a45df2':
docs: add hdf bool for devsite, used to change aspects of the templates
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span across application
boundaries. UI automation APIs are encapsulated in the UiAutomation object
that is provided by an Instrumentation object. It is initialized by the
system and can be used for both introspecting the screen and performing
interactions simulating a user. UI tests are normal instrumentation tests
and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a shell
command, the shell program launching the tests creates a delegate object and
passes it as an argument to the started instrumentation. This delegate
allows the APK that runs the tests to access some privileged operations
protected by signature level permissions which are explicitly granted
to the shell user.
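A minimal sketch of what such a test might look like, assuming the UiAutomation
APIs described above (the test class, method name, and key choice are illustrative):

    import android.app.Instrumentation;
    import android.app.UiAutomation;
    import android.os.SystemClock;
    import android.test.InstrumentationTestCase;
    import android.view.KeyEvent;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class ExampleUiTest extends InstrumentationTestCase {
        public void testPressBackAndIntrospect() {
            Instrumentation instrumentation = getInstrumentation();
            UiAutomation uiAutomation = instrumentation.getUiAutomation();

            // Simulate a user pressing BACK by injecting key events.
            long now = SystemClock.uptimeMillis();
            uiAutomation.injectInputEvent(
                    new KeyEvent(now, now, KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_BACK, 0),
                    true /* sync */);
            uiAutomation.injectInputEvent(
                    new KeyEvent(now, now, KeyEvent.ACTION_UP, KeyEvent.KEYCODE_BACK, 0),
                    true /* sync */);

            // Introspect the screen through the accessibility node tree.
            AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();
            assertNotNull(root);
        }
    }
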
The UiAutomation object also supports running tests in the legacy way
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working while the new ones should be implemented
using the new APIs. The UiAutomation object exposes lower level APIs which
allow simulation of arbitrary user interactions and writing complete UI test
cases. Clients, such as UiAutomator, are encouraged to implement higher-
level APIs which minimize development effort and can be used as a helper
library by the test developer.
The benefit of this change is decoupling UiAutomator from the system
since the former was calling hidden APIs which required that it be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. Also UiAutomator was creating
additional API surface in the system image. Another benefit of the new
design is that now test cases have access to a context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third-parties can develop their own higher level test APIs on top
of the lower level ones exposed by UiAutomation.
bug:8028258
This change also adds the fully qualified resource name of the view's
id in the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is relatively more stable
compared to the generated id number which may change from one build to
another. This API facilitates reusing the already defined ids for UI
automation.
bug:7678973
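A small sketch of the by-id lookup, assuming the report-view-ids flag and the
lookup API mentioned above; the package and id names are made up:

    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.app.UiAutomation;
    import android.view.accessibility.AccessibilityNodeInfo;
    import java.util.List;

    final class ViewIdLookupExample {
        static void dumpMatches(UiAutomation uiAutomation) {
            // Ask the system to include view id resource names in node infos.
            AccessibilityServiceInfo info = uiAutomation.getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_REPORT_VIEW_IDS;
            uiAutomation.setServiceInfo(info);

            AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();
            List<AccessibilityNodeInfo> matches =
                    root.findAccessibilityNodeInfosByViewId("com.example.app:id/submit_button");
            for (AccessibilityNodeInfo node : matches) {
                // The fully qualified name ("package:id/name") is more stable
                // across builds than the generated integer id.
                String idName = node.getViewIdResourceName();
            }
        }
    }
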
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
Initial implementation, tracking use of the vibrator, GPS,
and location reports.
Also includes an update to battery stats to also keep track of
vibrator usage (since I had to be in the vibrator code anyway
to instrument it).
The service itself is only half-done. Currently there is no API to
retrieve the data (which, once there, will allow us to show you
which apps are currently causing the GPS to run and who has
recently accessed your location), it doesn't persist its data
like it should, and there is no way to tell it to reject app requests
for various operations.
But hey, it's a start!
Change-Id: I05b8d76cc4a4f7f37bc758c1701f51f9e0550e15
1. This patch takes care of the case where a magnified window is covering an unmagnified
one. One example is a dialog that covers the IME window.
bug:7634430
2. Ensuring that the UI automator tool can connect and correctly dump the screen.
bug:7694696
3. Removed the partial implementation for multi display magnification. It adds
unnecessary complexity since it cannot be implemented without support for
input from multiple screens. We will revisit when necessary.
4. Moved the magnified border window to be a surface in the window manager.
5. Moved the mediator APIs onto the window manager and the policy methods onto the
WindowManagerPolicy.
6. Implemented batch event processing for the accessibility input filter.
Change-Id: I4ebf68b94fb07201e124794f69611ece388ec116
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it dilutes
the window manager interface and allows code that is deeply coupled with the window
manager to reside outside of it. Also the observer callbacks were IPCs which cannot
be called with the window manager's lock held. To avoid that the window manager had
to post messages requesting notification of interested parties, which makes the code
consuming the callbacks run asynchronously from the window manager. This causes timing
issues and adds unnecessary complexity.
Now the magnification logic is split into two halves. The first half is responsible
for tracking the magnified portion of the screen and serving as a policy for which
windows can be magnified; it is a part of the window manager. This part exposes higher
level APIs allowing interested parties with the right permissions to control the
magnification of a given display. The APIs also allow a client to be registered for
callbacks on interesting changes such as resize of the magnified region, etc. This part
serves as a mediator between magnification controllers and the window manager.
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
suggest the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, new activity. The controller
also auto pans if a dialog appears and it does not intersect the magnified frame.
bug:7410464
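A hypothetical illustration of the mediator surface described in point 1; the real
window manager interfaces are internal and may differ, and all names here are made up:

    import android.graphics.Rect;
    import android.graphics.Region;

    // Hypothetical mediator between magnification controllers and the window manager.
    interface MagnificationMediator {
        /** Set the magnification spec (scale plus X/Y offsets) for a display. */
        void setMagnificationSpec(int displayId, float scale, float offsetX, float offsetY);

        /** Register for callbacks on interesting changes for a display. */
        void registerCallbacks(int displayId, Callbacks callbacks);

        interface Callbacks {
            /** The portion of the screen that may be magnified has been resized. */
            void onMagnifiedRegionChanged(Region magnifiedRegion);

            /** The user context changed (rotation, new activity); consider zooming out. */
            void onUserContextChanged();

            /** A rectangle was requested to be visible on the screen; consider panning. */
            void onRectangleOnScreenRequested(Rect rectangle);
        }
    }
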
2. By design screen magnification and touch exploration work separately and together. If
magnification is enabled the user sees a larger version of the widgets and a sub section
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when magnification state changes we have to clear these caches since the
bounds of the cached infos no longer reflect the screen content which just got smaller
or larger.
This patch propagates not only the window scale as before but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds are the same as what the user sees, not what the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axis. Also
recomputed is the isVisibleToUser property of the reported info since in a magnified
state the user sees a subset of the window content and the views not in the magnified
viewport should be reported as not visible to the user.
bug:7344059
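A minimal sketch of the bounds adjustment described above: the client-side code scales
the node bounds and applies the X/Y pan so the reported bounds match what the user
sees; the helper name and exact arithmetic are illustrative only:

    import android.graphics.Rect;

    final class MagnifiedBoundsAdjuster {
        /**
         * Adjust node bounds from app coordinates to what the user sees on a
         * magnified screen, and report whether the node is visible to the user.
         */
        static boolean adjustBounds(Rect bounds, float scale, float panX, float panY,
                Rect magnifiedFrame) {
            bounds.set(
                    (int) (bounds.left * scale + panX),
                    (int) (bounds.top * scale + panY),
                    (int) (bounds.right * scale + panX),
                    (int) (bounds.bottom * scale + panY));
            // Views outside the magnified viewport are reported as not visible to the user.
            return Rect.intersects(bounds, magnifiedFrame);
        }
    }
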
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
Currently doc-comment-check is the single longest-running process during
the make process. It usually takes 300-400s to finish on my Z600.
What's worse, it's usually the last straggler build job.
Not running this by default can save significant build time.
Bug: 7253452
Change-Id: Idc868197b59e42c6b583c66f13a0e6a1bc8d5d4e
- New public APIs to find out when a user goes to the foreground,
background, and is first initializing (see the sketch after this list).
- New activity manager callback to be involved in the user switch
process, allowing other services to let it know when it is safe
to stop freezing the screen.
- Wallpaper service now implements this to handle its user switch,
telling the activity manager when it is done. (Currently this is
only handling the old wallpaper going away; we need a little more
work to correctly wait for the new wallpaper to get added.)
- Lock screen now implements the callback to do its user switch. It
also now locks itself when this happens, instead of relying on
some other entity making sure it is locked.
- Pre-boot broadcasts now go to all users.
- WallpaperManager now has an API to find out if a named wallpaper is
in use by any users.
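A sketch of how an app might observe the new foreground/background transitions,
assuming they are delivered as the ACTION_USER_FOREGROUND and ACTION_USER_BACKGROUND
broadcasts; the helper class is illustrative:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;

    final class UserSwitchObserver extends BroadcastReceiver {
        static void register(Context context) {
            IntentFilter filter = new IntentFilter();
            filter.addAction(Intent.ACTION_USER_FOREGROUND);
            filter.addAction(Intent.ACTION_USER_BACKGROUND);
            context.registerReceiver(new UserSwitchObserver(), filter);
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_USER_FOREGROUND.equals(intent.getAction())) {
                // The current user just came to the foreground.
            } else if (Intent.ACTION_USER_BACKGROUND.equals(intent.getAction())) {
                // The current user is going to the background.
            }
        }
    }
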
Change-Id: I27877aef1d82126c0a1428c3d1861619ee5f8653
This change is the initial check in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low vision user to efficiently use an Android device.
Interaction model:
1. Triple tap toggles permanent screen magnification, magnifying
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. Triple tapping when magnified would
clear magnification and leave the screen in an unmagnified state.
2. Triple tap and hold would magnify the screen if not magnified and enable
viewport dragging mode until the finger goes up. One can think of this
mode as a way to move the magnified viewport since the area around the
moving finger will be magnified to fit the screen. For example, if the
screen was not magnified and the user triple taps and holds, the screen
would magnify and the viewport will follow the user's finger. When the
finger goes up the screen will zoom back out. If the same user interaction
is performed when the screen is magnified, the viewport movement will
be the same but when the finger goes up the screen will stay magnified.
In other words, the initial magnified state is sticky.
3. Pinching with any number of additional fingers when viewport dragging
is enabled, i.e. the user triple tapped and holds, would adjust the
magnification scale which will become the current default magnification
scale. The next time the user magnifies the same magnification scale
would be used.
4. When in a permanent magnified state the user can use two or more fingers
to pan the viewport. Note that in this mode the content is panned as
opposed to the viewport dragging mode in which the viewport is moved.
5. When in a permanent magnified state the user can use three or more
fingers to change the magnification scale which will become the current
default magnification scale. The next time the user magnifies the same
magnification scale would be used.
6. The magnification scale will be persisted in settings and in the cloud.
Note: Since two fingers are used to pan the content in a permanently magnified
state no other two finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning since in a permanently magnified state this would be the dominant
action to be performed.
Design:
1. The window manager exposes APIs for setting an accessibility transformation
which is a scale and offsets for the X and Y axes (see the sketch after this
list). The window manager queries the window policy for which windows will not
be magnified. For example, the IME windows and the navigation bar are not
magnified, including windows that are attached to them.
2. The accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request each
of these features or both. The behavior of the features is not changed
based on the fact that another one is enabled.
3. The screen magnifier keeps a viewport of the content that is magnified
which is surrounded by a glow in a magnified state. Interactions outside
of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling magnified state. The viewport
is updated on screen rotation and on window transitions. For example,
when the IME pops up the viewport shrinks.
4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, is laid out in the
screen coordinates with width and height matching those of the screen.
When the magnified region changes the root view of the window draws the
highlight but the size of the window does not change - unless a rotation
happens. All changes in the viewport size or showing or hiding it are
animated.
5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.
6. The magnification is handled by a magnification controller that
keeps track of the current transformation applied to the screen
content and the desired one. If these two are not the same it is the
responsibility of the magnification controller to reconcile them by
potentially animating the transition from one to the other.
7. A display content observer watches for window transitions, screen
rotations, and when a rectangle on the screen has been requested. This
class is responsible for handling interesting state changes such
as changing the viewport bounds on IME pop up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.
8. To implement viewport updates the window manager was updated with APIs
to watch for window transitions and when a rectangle has been requested
on the screen. These APIs are protected by a signature level permission.
Also a parcelable and poolable window info class has been added with
APIs for getting the window info given the window token. This enables
getting some useful information about a window. These APIs are also
signature protected.
bug:6795382
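An illustrative sketch of the accessibility transformation from design point 1
(a uniform scale plus X and Y offsets); the class is hypothetical and the real
window manager API is not shown:

    import android.graphics.PointF;

    final class MagnificationTransform {
        final float scale;
        final float offsetX;
        final float offsetY;

        MagnificationTransform(float scale, float offsetX, float offsetY) {
            this.scale = scale;
            this.offsetX = offsetX;
            this.offsetY = offsetY;
        }

        /** Map a point from unmagnified screen coordinates to magnified screen coordinates. */
        PointF toMagnified(float x, float y) {
            return new PointF(x * scale + offsetX, y * scale + offsetY);
        }

        /** Inverse mapping, e.g. for routing touch input back to the app. */
        PointF toUnmagnified(float x, float y) {
            return new PointF((x - offsetX) / scale, (y - offsetY) / scale);
        }
    }
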
Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00
Split the DisplayManager into two parts. One part is bound
to a Context and takes care of Display compatibility and
caching Display objects on behalf of the Context. The other
part is global and takes care of communicating with the
DisplayManagerService, handling callbacks, and caching
DisplayInfo objects on behalf of the process.
Implemented support for enumerating Displays and getting
callbacks when displays are added, removed or changed.
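A small sketch of the enumeration and callback support described above, using the
context-bound DisplayManager; the helper class name is made up:

    import android.content.Context;
    import android.hardware.display.DisplayManager;
    import android.view.Display;

    final class DisplayWatcher implements DisplayManager.DisplayListener {
        static void start(Context context) {
            DisplayManager dm =
                    (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
            // Enumerate the displays currently known to this process.
            for (Display display : dm.getDisplays()) {
                // display.getDisplayId(), display.getName(), ...
            }
            // Get callbacks when displays are added, removed, or changed.
            dm.registerDisplayListener(new DisplayWatcher(), null /* handler */);
        }

        @Override public void onDisplayAdded(int displayId) { }
        @Override public void onDisplayRemoved(int displayId) { }
        @Override public void onDisplayChanged(int displayId) { }
    }
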
Elaborated the roles of DisplayManagerService, DisplayAdapter,
and DisplayDevice. We now support having multiple display
adapters registered, each of which can register multiple display
devices and configure them dynamically.
Added an OverlayDisplayAdapter which is used to simulate
secondary displays by means of overlay windows. Different
configurations of overlays can be selected using a new
setting in the Developer Settings panel. The overlays can
be repositioned and resized by the user for convenience.
At the moment, all displays are mirrors of display 0 and
no display transformations are applied. This will be improved
in future patches.
Refactored the way that the window manager creates its threads.
The OverlayDisplayAdapter needs to be able to use hardware
acceleration so it must share the same UI thread as the Keyguard
and window manager policy. We now handle this explicitly as
part of starting up the system server. This puts us in a
better position to consider how we might want to share (or not
share) Loopers among components.
Overlay displays are disabled when in safe mode or in only-core
mode to reduce the number of dependencies started in these modes.
Change-Id: Ic2a661d5448dde01b095ab150697cb6791d69bb5
The activity manager now keeps track of which users are running.
Initially, only user 0 is running.
When you switch to another user, that user is started so it is
running. It is only at this point that BOOT_COMPLETED is sent
for that user and it is allowed to execute anything.
You can stop any user except user 0, which brings it back to the
same state as when you first boot the device. This is also used
to be able to more cleanly delete a user, by first stopping it
before removing its data.
There is a new broadcast ACTION_USER_STOPPED sent when a user is
stopped; system services need to handle this like they currently
handle ACTION_PACKAGE_RESTARTED when individual packages are
restarted.
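A hypothetical sketch of how a service might handle the new broadcast, analogous to
its ACTION_PACKAGE_RESTARTED handling; whether the action is public API and its exact
delivery are assumptions made for illustration only:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;

    final class StoppedUserCleanup extends BroadcastReceiver {
        static void register(Context context) {
            // Action string assumed from the ACTION_USER_STOPPED constant named above.
            context.registerReceiver(new StoppedUserCleanup(),
                    new IntentFilter("android.intent.action.USER_STOPPED"));
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            // Drop per-user caches, pending work, or open handles for the stopped
            // user, just as services do for restarted packages.
        }
    }
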
Change-Id: I89adbd7cbaf4a0bb72ea201385f93477f40a4119