...with protection flag PROTECTION_FLAG_DEVELOPMENT
Bring back the old grant/revoke code for development permissions.
Also some more dumpsys output to help debugging.
And new dumpsys command for checking a permission.
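For reference, development permissions are the kind toggled with the
existing pm commands, e.g. (READ_LOGS is just one example of a
development-protected permission):

    adb shell pm grant com.example.app android.permission.READ_LOGS
    adb shell pm revoke com.example.app android.permission.READ_LOGS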
Change-Id: I6e27e62a9ca5ec1ecc0f102714a448ea02f0f41c
Add new Activity.isVoiceInteractionRoot() API that an activity can use
to determine whether it is the root activity of a voice interaction
session started by the user's designated voice interaction service.
This is a special new API that apps must explicitly check, because as
with visual activities the model behind an activity should usually be
that it accomplishes its task by interacting with the user (implicitly
getting their approval) rather than trusting that whoever invoked it
is telling it to do what the user wants. In the voice world, however,
there are some cases where quick interactions should allow immediate
execution without further user involvement, so this API allows for that
without opening up security holes to other applications.
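For illustration (not part of this change), a minimal sketch of the
intended pattern; performRequestedAction() is a placeholder:

    import android.app.Activity;
    import android.os.Bundle;

    public class QuickCommandActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            if (isVoiceInteraction() && isVoiceInteractionRoot()) {
                // Started directly by the user's designated voice
                // interaction service: safe to act immediately.
                performRequestedAction();
                finish();
            }
            // Otherwise fall through and confirm with the user first.
        }

        private void performRequestedAction() { /* app-specific */ }
    }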
Change-Id: Ie02d2458f16cb0b12af825641bcf8beaf086931b
The main change here is to not allow the dialog to go into its "focus
on the last app the user selected" mode when running in voice
interaction mode; instead it always gives a simple list.
This also fixes some problems with cleaning up active commands when
an activity finishes and not forcing the current session to go away
when the screen is turned off.
Also added some debug help, having Activity print the state of the
voice interactor.
Change-Id: Ifebee9c74d78398a730a280bb4970f47789dadf5
...context and/or screenshot
Added new API to find out what contextual data has been globally disabled.
Also updated various documentation to make it clear what kind of contextual
data you will get (and when it will be null).
Also added a new Activity.showAssist() API because... well, I was already
in there, it was easy to do, it is safe, and maybe people will build cool
things with it.
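A minimal sketch of the new call (the argument key/value here are
placeholders):

    import android.app.Activity;
    import android.os.Bundle;

    public class ExampleActivity extends Activity {
        void launchAssist() {
            Bundle args = new Bundle();
            args.putString("example_key", "example_value");
            // Returns false if no assistant is available.
            boolean shown = showAssist(args);
        }
    }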
Change-Id: Ia553d6bcdd098dc0fce4b9237fbfaca9652fc74b
New APIs allow the voice interaction service to set/retrieve a filter
for which of the show flags are allowed.
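A sketch of a service applying such a filter, written against the
names these APIs ended up with publicly:

    import android.service.voice.VoiceInteractionService;
    import android.service.voice.VoiceInteractionSession;

    public class ExampleInteractionService extends VoiceInteractionService {
        @Override
        public void onReady() {
            super.onReady();
            // Disallow screenshots for sessions started by this
            // service, while still allowing assist structure/content.
            setDisabledShowContext(
                    VoiceInteractionSession.SHOW_WITH_SCREENSHOT);
            int disabled = getDisabledShowContext(); // read it back
        }
    }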
Change-Id: I588cbe55afee0548ad3afa22d3a7d3bc43cb54a6
TextView is now much smarter about the text it reports, limiting it
to what is visible (plus a bit more). Also add a facility for it to
report where the lines of text are, both as offsets in the text string
and their baselines on screen.
Part of fixing issue #22328792: Fix scalability issues in AssistStructure
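For illustration, a custom view can report the same line geometry
(the offsets and baselines below are made-up values):

    import android.content.Context;
    import android.view.View;
    import android.view.ViewStructure;

    public class LineAwareView extends View {
        public LineAwareView(Context context) { super(context); }

        @Override
        public void onProvideStructure(ViewStructure structure) {
            super.onProvideStructure(structure);
            structure.setText("First line\nSecond line");
            // Character offset of each visible line in the reported
            // text, plus each line's on-screen baseline in pixels.
            structure.setTextLines(new int[] {0, 11}, new int[] {32, 64});
        }
    }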
Change-Id: Idddb8c3a3331355f381e2d4af06d520fe7c7ce8e
Just forgot to add the request to the active set.
Also eradicate a bunch of old cruft that has been replaced
by the final APIs, and improve the voice interaction test to
sit fully on top of the final APIs, including a test for
command requests.
Change-Id: Ieff7a6165ebf2a4c5fb80c1ebd020511a2ae63ee
...into account when calculating the position information
Actually what we need here is the full transformation matrix, if it
is available. And that means actually computing the location of
views on the screen requires doing this all through transformations,
so the AssistVisualizer has been changed to do this (while still
also keeping the old mechanism for comparison to verify that things
are working correctly).
Also added new properties for elevation and alpha.
And optimized the parcelling of AssistStructure to not write things
that aren't needed; this reduces the parcelled size by about half.
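A sketch of populating the new properties from a custom view (the
framework fills these in automatically for ordinary views):

    import android.content.Context;
    import android.view.View;
    import android.view.ViewStructure;

    public class TransformedView extends View {
        public TransformedView(Context context) { super(context); }

        @Override
        public void onProvideStructure(ViewStructure structure) {
            super.onProvideStructure(structure);
            // Report the full transformation matrix rather than a
            // simple position, plus the new elevation/alpha properties.
            structure.setTransformation(getMatrix());
            structure.setElevation(getElevation());
            structure.setAlpha(getAlpha());
        }
    }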
Change-Id: I50b0dd2e6599c74701a5d188617a3eff64b07d03
Add new APIs to associate a Request with a name, get all active
requests, and get active request by name.
Also add a new Activity.onProvideReferrer() which will allow
applications to propagate referrer information to the assistant
(and other apps they launch) in a consistent way.
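A sketch of the new calls; the "checkout" name and the referrer URI
are placeholder values:

    import android.app.Activity;
    import android.app.VoiceInteractor;
    import android.net.Uri;

    public class ReferrerActivity extends Activity {
        void trackRequest(VoiceInteractor.Request request) {
            VoiceInteractor interactor = getVoiceInteractor();
            if (interactor == null) return; // not a voice interaction
            interactor.submitRequest(request, "checkout");
            VoiceInteractor.Request byName =
                    interactor.getActiveRequest("checkout");
            VoiceInteractor.Request[] all = interactor.getActiveRequests();
        }

        @Override
        public Uri onProvideReferrer() {
            // Propagated to the assistant (and other launched apps).
            return Uri.parse("android-app://com.example.app");
        }
    }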
Change-Id: I4ef74b5ed07447da9303a74a1bdf42e4966df363
The framework API that takes a CharSequence as a prompt is now
deprecated, so update the test code to use the new API.
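A sketch of the migration (the prompt strings are placeholders):

    import android.app.VoiceInteractor;
    import android.os.Bundle;

    class PromptExample {
        static VoiceInteractor.ConfirmationRequest makeRequest() {
            // Deprecated: new ConfirmationRequest("Confirm order?", null)
            // New: wrap the text in a Prompt, which can carry several
            // spoken variations plus a separate visual string.
            VoiceInteractor.Prompt prompt = new VoiceInteractor.Prompt(
                    new CharSequence[] {"Confirm your order?",
                            "Shall I place the order?"},
                    "Confirm order?");
            return new VoiceInteractor.ConfirmationRequest(prompt,
                    new Bundle()) {
                @Override
                public void onConfirmationResult(boolean confirmed,
                        Bundle result) {
                    // handle the result (placeholder)
                }
            };
        }
    }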
Bug: 21695917
Change-Id: Ic4b7afa6c547a9885a900ed092910f3c6bdd1dd4
Start moving Assist* stuff to android.app.assist.
Clean up some more of the VoiceInteractionSession APIs.
Clearly document that finish() is not the same as hide(),
always call hide() instead, and fix the finish() path to
also always do a hide to make sure everything is cleaned
up correctly.
Change-Id: I962d4069fcb34fe89547a95d395ae1b9fa3b4148
Also rework how we transfer AssistContent and AssistStructure
to the assistant, so they are delivered as completely separate
objects rather than being kludgily bundled into the assist
data.
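A sketch of the receiving side, using the final class names:

    import android.app.assist.AssistContent;
    import android.app.assist.AssistStructure;
    import android.content.Context;
    import android.os.Bundle;
    import android.service.voice.VoiceInteractionSession;

    public class ExampleSession extends VoiceInteractionSession {
        public ExampleSession(Context context) { super(context); }

        @Override
        public void onHandleAssist(Bundle data,
                AssistStructure structure, AssistContent content) {
            // data: the classic assist bundle; structure and content
            // now arrive as separate objects (either may be null).
        }
    }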
Change-Id: Ib40cc3b152bafeb358fd3adec564a7dda3a0dd1d
Fix the various view assist related APIs.
Also remove the blockAssist view attribute, and instead use
the window's FLAG_SECURE to drive blocking of the entire
hierarchy (which is semantically correct, and will protect
existing apps that have already indicated they need it).
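So an app that needs blocking simply keeps doing what it already does
for secure windows:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.WindowManager;

    public class SecureActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // FLAG_SECURE now also blocks assist data for the entire
            // window hierarchy, not just screenshots.
            getWindow().addFlags(WindowManager.LayoutParams.FLAG_SECURE);
        }
    }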
Change-Id: I6beebc86b202809cba0a356cae9607d8d0fb5e78
Issue #19912529: VI: VoiceInteractor callback ClassCastException
Fix to use correct argument.
Issue #19912636: VI: Documentation for VoiceInteractionSession.onBackPressed
Added documentation.
Issue #19912703: VI: VoiceInteractionSession NPE on Abort Request
Maybe fix this -- don't crash if there is no active session.
Issue #19953731: VI: Add value index to...
...android.app.VoiceInteractor.PickOptionRequest.Option
There is now an optional index integer that can be associated with
every Option object.
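A sketch against the final API (labels and index values are
placeholders):

    import android.app.VoiceInteractor;
    import android.os.Bundle;

    class OptionsExample {
        static VoiceInteractor.PickOptionRequest makeRequest() {
            VoiceInteractor.PickOptionRequest.Option[] options = {
                    // The new constructor takes an index that comes
                    // back with the selected Option.
                    new VoiceInteractor.PickOptionRequest.Option("Small", 0),
                    new VoiceInteractor.PickOptionRequest.Option("Large", 1),
            };
            return new VoiceInteractor.PickOptionRequest(
                    new VoiceInteractor.Prompt("Which size?"), options,
                    new Bundle()) {
                @Override
                public void onPickOptionResult(boolean finished,
                        Option[] selections, Bundle result) {
                    if (finished && selections.length == 1) {
                        int index = selections[0].getIndex();
                        // dispatch on index (placeholder)
                    }
                }
            };
        }
    }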
Issue #19912635: VI: Behavior of startActivity when in voice...
...interaction is unexpected
We now forcibly finish the current voice interaction task whenever
another activity takes focus from it.
Issue #20066569: Add API to request heap dumps
New ActivityManager API to set the pss limit to generate heap
dumps.
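A sketch of the new call (the 256MB threshold is an arbitrary
example):

    import android.app.ActivityManager;
    import android.content.Context;

    class HeapWatchExample {
        static void watch(Context context) {
            ActivityManager am =
                    context.getSystemService(ActivityManager.class);
            // Capture a heap dump of this process if its pss crosses
            // the limit; the dump is then reported via
            // ActivityManager.ACTION_REPORT_HEAP_LIMIT.
            am.setWatchHeapLimit(256L * 1024 * 1024);
        }
    }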
Also added app ops for assist receiving structure and screenshot
data, so that we can track when it does these things.
Change-Id: I688d4ff8f0bd0b8b9e3390a32375b4bb7875c1a1
New APIs on ViewAssistStructure allow the app to request to
build a sub-tree asynchronously and indicate when it is done
with that. The overall AssistStructure is now only flattened
and transferred on-demand, when the app receiving it requests
its data -- and at that point we can wait for any asynchronous
building to complete.
New AsyncStructure view is a very simple example of using this
to asynchronously build a child view.
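A sketch of the pattern, written against the final ViewStructure
names (ViewAssistStructure was later folded into ViewStructure):

    import android.content.Context;
    import android.view.View;
    import android.view.ViewStructure;

    public class AsyncChildView extends View {
        public AsyncChildView(Context context) { super(context); }

        @Override
        public void onProvideVirtualStructure(ViewStructure structure) {
            structure.setChildCount(1);
            final ViewStructure child = structure.asyncNewChild(0);
            new Thread(new Runnable() {
                @Override
                public void run() {
                    // Built off the UI thread; flattening waits for
                    // asyncCommit() when the receiver asks for data.
                    child.setText("asynchronously built node");
                    child.asyncCommit();
                }
            }).start();
        }
    }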
Change-Id: I14f9199bee64915ad3dc80b2190916ec874308af
Instead of collecting all of the data directly in AssistStructure,
we now have a dispatch mechanism down the hierarchy to do so.
While doing this, also added the ability to automatically collect
assist data from AccessibilityNodeProviders attached to views
(so now we see all of the data in for example Calendar).
This is a first step needed towards being able to asynchronously
populate assist data.
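For illustration, a view with virtual children can now surface them
for assist much as it does for accessibility; a minimal sketch:

    import android.content.Context;
    import android.view.View;
    import android.view.ViewStructure;

    public class VirtualItemsView extends View {
        public VirtualItemsView(Context context) { super(context); }

        @Override
        public void onProvideVirtualStructure(ViewStructure structure) {
            structure.setChildCount(1);
            ViewStructure row = structure.newChild(0);
            row.setText("virtual row 0"); // placeholder content
        }
    }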
Change-Id: I59ee1ea104ca8207bad8df7a38195d93da1adea7
Add view ID information to the assist structure.
Also rework the API to simplify how it works by removing
the ViewNode wrapper around ViewNodeImpl -- these are now
just the same thing. And then add complexity by introducing
a formal WindowNode object that contains the top-level window
information (so I can add in some more window-specific info
in the future).
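A sketch of walking the reworked structure, using the final class
names:

    import android.app.assist.AssistStructure;
    import android.util.Log;

    class StructureWalker {
        static void dump(AssistStructure structure) {
            // Top-level windows are now formal WindowNode objects.
            for (int i = 0; i < structure.getWindowNodeCount(); i++) {
                AssistStructure.WindowNode window =
                        structure.getWindowNodeAt(i);
                dumpNode(window.getRootViewNode(), 0);
            }
        }

        static void dumpNode(AssistStructure.ViewNode node, int depth) {
            // View ID information is now part of the structure.
            Log.d("StructureWalker", depth + ": id=" + node.getIdEntry()
                    + " text=" + node.getText());
            for (int i = 0; i < node.getChildCount(); i++) {
                dumpNode(node.getChildAt(i), depth + 1);
            }
        }
    }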
Change-Id: I5d525cf61ab6a73193e5cceb4c09d2d21cc27bae
New flag you pass in to startSession() to say you want it,
new callback on VoiceInteractionSession to receive it.
Change-Id: I61fdcfdee41a60d46036a2ef16681a9b4181115a
Also add API for voice interaction service to control
whether the system should hold a wake lock while it is
working with an activity (and actually *do* hold a wake
lock while doing so, duh!).
And while in there, clean up the launching wake lock to
correctly give blame to the app that is launching.
Change-Id: I7cc4d566b80f59fe0a9ac51ae9bbb7188a01f433
Optimize parceling of AssistData (which is now renamed to
AssistStructure) by pooling duplicated class name strings.
Change text associated with a view node to a CharSequence,
so styling information comes along.
Include global text attributes -- size, colors, etc.
Introduce a new AssistContent structure, which allows us
to propagate information about the intent and data the
activity is looking at. This further allows us to propagate
permission grants, so the assistant can dig in to that data.
The default implementation propagates the base intent of an
activity, so if for example you bring up the assistant while
doing a share the assistant itself has the same information
and access that was given to the share activity (so it could
for example share it in another way if it wanted to).
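A sketch of what the assistant side can do with this, using the
final class names:

    import android.app.assist.AssistContent;
    import android.content.Intent;
    import android.net.Uri;

    class ContentExample {
        static void inspect(AssistContent content) {
            // By default this is the activity's base intent, carrying
            // any permission grants the activity itself received.
            Intent intent = content.getIntent();
            if (intent != null) {
                Uri data = intent.getData(); // readable via the grants
            }
        }
    }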
Did some optimization of loading PersistableBundle from xml,
to avoid duplicating hash maps and such.
Changed how we dispatch ACTION_ASSIST to no longer include
the more detailed AssistStructure (and new AssistContent)
data when launching; now the example code that intercepts
that needs to be sure to ask for assist data when it starts
its session. This is more like it will finally be, and allows
us to get to the UI more quickly.
Change-Id: I88420a55761bf48d34ce3013e81bd96a0e087637
We now have a formal concept of the session being shown and
hidden, with it being able to continue running while hidden
as long as there is enough RAM.
This changes the flow that a VoiceInteractionSession will
see: onCreate() is when it is first created, onCreateContentView()
is when its UI first needs to be built, onShow() is called each
time it needs to be shown and has the arguments given when the
show request was made (which has been renamed from startSession to
showSession), and then onHide() will be called when the UI is
no longer shown.
The methods show() and hide() now allow a VoiceInteractionSession
subclass to control when it is shown and hidden, working with the
shown state being maintained by the system.
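A minimal sketch of the new flow:

    import android.content.Context;
    import android.os.Bundle;
    import android.service.voice.VoiceInteractionSession;
    import android.view.View;

    public class LifecycleSession extends VoiceInteractionSession {
        public LifecycleSession(Context context) { super(context); }

        @Override
        public View onCreateContentView() {
            // Called when the UI first needs to be built.
            return new View(getContext());
        }

        @Override
        public void onShow(Bundle args, int showFlags) {
            // Called on each showSession(), with the show arguments.
        }

        @Override
        public void onHide() {
            // UI no longer shown; the session itself keeps running
            // (while there is enough RAM) until it is finished.
        }
    }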
Change-Id: Ic4a430ec7e8bf76a5441fd0425e2932806170fcc
Can switch from a pure overlay at the top of the screen,
to interactive mode with the voice UI drawing at the bottom
and pushing its target activity up like an IME.
Add mechanism to get assist data to the voice interaction UI.
Add some basic visualization of the assist data, outlining
where on the screen we have text.
Add a test ACTION_ASSIST handler, which can propagate the
assist data it gets to the voice interaction session so
you can see what kind of data we are getting from different
apps.
Change-Id: I18312fe1601d7926d1fb96a817638d60f6263771
- Make Callback an abstract class
- Split manage intents into 3 different methods
- Remove RECOGNITION_FLAGS_NONE
Bug: 17255602
Change-Id: I1329f889bb2ab35938f42d2ecfe755d2b17ec542
Tighten the API by taking in a locale rather than a string tag.
Tighten the checks when reading the enrollment metadata, bail out if any
attribute is missing or invalid.
Add missing recycle call for a TypedArray
Stop recognition when sound model(s) change. This is needed during
un-enrollment/re-enrollment.
Bug: 17187528
Bug: 17205230
Change-Id: Idb00b51ef8c4ea0a8f8993decea582223181fa3d
Internally we pause the recognition when:
- a phone call is active/off-hook/ringing
- or some other application grabs the microphone
We auto-resume when the condition that caused the pause reverses.
Both of these events are reported to the client via callbacks, so it
can choose to indicate in its UI that recognition is paused.
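A sketch of a client callback reacting to the new events (the other
required overrides are left empty):

    import android.service.voice.AlwaysOnHotwordDetector;

    class DetectorCallback extends AlwaysOnHotwordDetector.Callback {
        @Override public void onAvailabilityChanged(int status) { }
        @Override public void onDetected(
                AlwaysOnHotwordDetector.EventPayload eventPayload) { }
        @Override public void onError() { }

        @Override
        public void onRecognitionPaused() {
            // E.g. a phone call became active or another app grabbed
            // the microphone; reflect the pause in the UI.
        }

        @Override
        public void onRecognitionResumed() {
            // The pausing condition reversed; recognition resumed.
        }
    }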
Bug: 16515468
Bug: 16740806
Bug: 16514535
Change-Id: Ib274d68522c8cf37d42402c875b16159957657f0
Also annotate the flags with @IntDef to make things clearer and safer
Add more debug logging
Revert to start/stop being synchronous since telephony and microphone will
need to be handled internally.
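The @IntDef pattern in question, sketched with the framework-internal
annotation (apps would use the support library equivalent):

    import android.annotation.IntDef;
    import android.service.voice.AlwaysOnHotwordDetector;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    class FlagsExample {
        @Retention(RetentionPolicy.SOURCE)
        @IntDef(flag = true, value = {
                AlwaysOnHotwordDetector.RECOGNITION_FLAG_CAPTURE_TRIGGER_AUDIO,
                AlwaysOnHotwordDetector.RECOGNITION_FLAG_ALLOW_MULTIPLE_TRIGGERS,
        })
        @interface RecognitionFlags {}

        // Callers now get compile-time checking of the flag values:
        void startRecognition(@RecognitionFlags int recognitionFlags) {
        }
    }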
Bug: 16731586
Bug: 16514535
Bug: 16549061
Change-Id: I83695d52e9547269c95d443e4d921c9238b7401e
Don't call back for a detector being marked invalid, because that
happens when someone else obtains a detector or the VIS shuts down;
in either case we don't want a loop where two entities keep creating
new detectors and invalidating each other.
Don't call back on an invalid detector for availability change,
detected, started, or stopped; only propagate errors.
This helps with cases where a callback for the previous VIS could get
called and then crash because it tries to make calls without being
the current VIS.
In the new scheme of things, if the VIS changes, or the current VIS
obtains a new AlwaysOnHotwordDetector, the previous one is shut down,
internally marked as invalid, and all calls to it fail with an
IllegalStateException.
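A sketch of the client-side consequence (the recovery path suggested
in the comment is not part of this change):

    import android.service.voice.AlwaysOnHotwordDetector;

    class DetectorUser {
        static boolean startSafely(AlwaysOnHotwordDetector detector) {
            try {
                return detector.startRecognition(
                        AlwaysOnHotwordDetector
                                .RECOGNITION_FLAG_ALLOW_MULTIPLE_TRIGGERS);
            } catch (IllegalStateException e) {
                // The detector was invalidated (the VIS changed or a
                // newer detector was obtained); create a fresh one via
                // VoiceInteractionService.createAlwaysOnHotwordDetector().
                return false;
            }
        }
    }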
Bug: 16629417
Change-Id: I74417bf76ba80916ebc21b042c18b3467857733e
- This is needed for telephony and audio integration, which should happen
via async callbacks that'll end up starting/stopping recognition.
E.g. if a startRecognition happens during a phone call, onDetectionStarted
will get called once the phone call ends.
For now, transient stoppages due to internal reasons will not be
propagated back to the client.
Bug: 16514535
Bug: 16515468
Change-Id: I1b2b8edd28f5c5e67c453f66c23e1a67a626114e
This makes the list-sound-models operation async; it also helps with
the case where a detector is invalidated, so the client doesn't have
to keep checking the state.
Synchronize DatabaseHelper methods on its instance so that other
VoiceInteractionManagerService calls aren't blocked on db writes/reads.
It's still possible for the list operation to be blocked on update and
vice-versa.
Change-Id: Ib8ec4ac5056b62d443038560ce31d0641b4627b0
Add a callback for when any sound model change happens. This helps the
VIS re-check the availability and either enroll the user or start/stop
recognition.
Also shut down any active recognition when the VIS dies, or a different
hotword detector instance is obtained from it.
Change-Id: I03f94e78c6ee307afe822a84aebc7e74c64de7b4