Add model management API skeleton to VoiceInteractionManagerService
Add an "interactor" for all always-on APIs
- The VoiceInteractionService gets an interactor for the given
  keyphrase and locale.
- It can then check availability on this interactor and call methods
  to start and stop recognition.
- Add a common class to deal with SoundTrigger APIs
- Clean up the keyphrase representation:
  We now have separate representations for the keyphrase metadata and
  for a keyphrase being used in recognition.
  This will also make it easy to handle custom keyphrases in the
  future, and it ensures that within the framework we rely on the ID
  of the KeyphraseInfo rather than comparing the text every time.
Add a callback for the AlwaysOnHotwordDetector.
This callback is passed in by the VoiceInteractionService and is used
to notify it of recognition events.
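A minimal sketch of the intended flow, to make the above concrete. The
names follow the shape of the AlwaysOnHotwordDetector API as it later
stabilized; treat the exact signatures, states, and flags here as
illustrative rather than as the definitive API of this change:

    import java.util.Locale;
    import android.service.voice.AlwaysOnHotwordDetector;
    import android.service.voice.VoiceInteractionService;

    public class MyInteractionService extends VoiceInteractionService {
        private AlwaysOnHotwordDetector mDetector;

        private final AlwaysOnHotwordDetector.Callback mCallback =
                new AlwaysOnHotwordDetector.Callback() {
            @Override
            public void onAvailabilityChanged(int status) {
                // Start listening once the keyphrase is enrolled.
                if (status == AlwaysOnHotwordDetector.STATE_KEYPHRASE_ENROLLED) {
                    mDetector.startRecognition(
                            AlwaysOnHotwordDetector.RECOGNITION_FLAG_NONE);
                }
            }

            @Override
            public void onDetected(AlwaysOnHotwordDetector.EventPayload payload) {
                // Recognition event delivered back to the service.
            }

            @Override public void onError() { }
            @Override public void onRecognitionPaused() { }
            @Override public void onRecognitionResumed() { }
        };

        @Override
        public void onReady() {
            super.onReady();
            // Get the "interactor"/detector for a keyphrase and locale.
            mDetector = createAlwaysOnHotwordDetector(
                    "Hello There", Locale.US, mCallback);
        }
    }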
Change-Id: I26252298773024f53a10cdd2af4404a4e6d74aae
class Keyphrase: replaced the number of users with a list of user IDs.
class RecognitionEvent: added the capture preamble duration.
class KeyphraseRecognitionEvent: added the keyphrase ID and an explicit
list of user ID/confidence level pairs.
startRecognition(): now takes a RecognitionConfig specifying the list
of keyphrases to listen for and, for each keyphrase, the recognition
mode, the users, and the minimum confidence level for each user.
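To make the reshaped types concrete, here is a rough sketch of the data
they carry; these are illustrative shapes, not the exact framework
definitions:

    // Illustrative data shapes only, not the real SoundTrigger classes.
    class Keyphrase {
        int id;                 // stable keyphrase identifier
        String text;
        int[] users;            // was: just a count of users
    }

    class ConfidenceLevel {
        int userId;
        int confidenceLevel;    // minimum confidence for this user
    }

    class KeyphraseRecognitionExtra {
        int keyphraseId;                       // newly added
        int recognitionModes;                  // per-keyphrase mode
        ConfidenceLevel[] confidenceLevels;    // explicit user/level pairs
    }

    class RecognitionConfig {
        KeyphraseRecognitionExtra[] keyphrases; // what to listen for
    }

    // startRecognition() now takes the whole config at once:
    // startRecognition(int soundModelHandle, RecognitionConfig config);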
Bug: 12378680.
Change-Id: I57036cc83b5d9e90512b8bb3f951af3a4a74c0ce
- We use a DB to store and persist the sound models.
- The DB is used by SoundTriggerModelManager, a service that lists,
  deletes, and registers new models; the enrollment client will use it
  to enroll and unenroll users. A sketch follows below.
- This requires the unique identifiers for the sound model & keyphrases
  to be present in the respective data structures.
This is at a very early stage, so please point out any concerns.
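A rough sketch of what the persistence layer could look like, assuming
a plain SQLite table; the table and column names here are placeholders,
not the real schema:

    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    class SoundModelDbHelper extends SQLiteOpenHelper {
        private static final String CREATE_TABLE =
                "CREATE TABLE sound_model ("
                + "model_id TEXT PRIMARY KEY,"  // unique sound model ID
                + "keyphrase_id INTEGER,"       // unique keyphrase ID
                + "data BLOB)";                 // opaque model data

        SoundModelDbHelper(Context context) {
            super(context, "sound_model.db", null /* factory */,
                    1 /* version */);
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL(CREATE_TABLE);
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion,
                int newVersion) {
            db.execSQL("DROP TABLE IF EXISTS sound_model");
            onCreate(db);
        }
    }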
Change-Id: I82962895bf326167458f20e6ba995295551de025
New window layer that voice interaction service windows go into.
Includes a new voice-specific content rectangle that voice activities
are placed into.
Add specific animations for this layer, sliding down from
the top (though this can be customized by the voice interaction
service).
Also add the concept of activities running for voice interaction
services, for the purpose of adjusting the animation used for them:
again sliding from the top, but not (yet?) customizable by the voice
interaction service.
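For illustration, the default enter animation is the kind of thing one
could express as a simple translate from above the screen; this sketch
is app-level Java, whereas the real change defines the animation inside
the framework:

    import android.view.animation.Animation;
    import android.view.animation.TranslateAnimation;

    class VoiceAnimations {
        // Slide down from one full height above to the settled position.
        static Animation makeEnterAnimation() {
            Animation a = new TranslateAnimation(
                    Animation.RELATIVE_TO_SELF, 0f,   // from X
                    Animation.RELATIVE_TO_SELF, 0f,   // to X
                    Animation.RELATIVE_TO_SELF, -1f,  // from Y: above screen
                    Animation.RELATIVE_TO_SELF, 0f);  // to Y: final position
            a.setDuration(250);                       // placeholder duration
            return a;
        }
    }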
Change-Id: Ic9e0e8c843c2e2972d6abb4087dce0019326155d
This makes VoiceInteractionSession a more first-class concept. The
flow is now that a VoiceInteractionService calls startSession() when
it wants to begin a session. This results in a new
VoiceInteractionSession, created via the VoiceInteractionSessionService
containing it, and the session at that point can decide what to do. It
can now show UI, and it is what has access to the startVoiceActivity
API.
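A minimal sketch of the new flow from the session side; onShow() and
the activity class are illustrative, not necessarily the exact hooks in
this change:

    import android.content.Intent;
    import android.os.Bundle;
    import android.service.voice.VoiceInteractionSession;
    import android.service.voice.VoiceInteractionSessionService;

    public class MySessionService extends VoiceInteractionSessionService {
        @Override
        public VoiceInteractionSession onNewSession(Bundle args) {
            // The session owns the UI and the startVoiceActivity() API.
            return new VoiceInteractionSession(this) {
                @Override
                public void onShow(Bundle showArgs, int showFlags) {
                    super.onShow(showArgs, showFlags);
                    // MyVoiceActivity is a hypothetical voice activity.
                    startVoiceActivity(new Intent(getContext(),
                            MyVoiceActivity.class));
                }
            };
        }
    }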
Change-Id: Ie2b85b3020ef1206d3f44b335b128d064e8f9935
On the app side, requests are now composed by subclassing the various
types of Request objects.
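For example, an app might compose a confirmation request along these
lines; the class and method names follow the VoiceInteractor API this
became, and ConfirmOrder is a hypothetical subclass:

    import android.app.VoiceInteractor;
    import android.os.Bundle;

    class ConfirmOrder extends VoiceInteractor.ConfirmationRequest {
        ConfirmOrder(CharSequence prompt) {
            super(prompt, null /* extras */);
        }

        @Override
        public void onConfirmationResult(boolean confirmed, Bundle result) {
            // React to the user's spoken confirmation here.
        }
    }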
On the service side, starting a voice interaction session involves
starting another service that will then manage the session. This moves
the service design much closer to what we want, where the long-running
main service is very tiny and all the heavyweight, transient session
work happens elsewhere, in another process.
Change-Id: I46c074c6fe27b6c1cf2583c6d216aed1de2f1143
This gives a basic working implementation of a persistently running
service that can start a voice interaction when it wants, with the
target activity (or activities) able to go through the protocol to
interact with it. It may even work when the screen is off, by putting
the activity manager in the correct state to act like the screen is on.
Includes a sample app that is a voice interaction service and also has
an activity it can launch.
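A sketch of the activity side of that protocol, assuming the
VoiceInteractor shape described above; the prompt and request type are
illustrative:

    import android.app.Activity;
    import android.app.VoiceInteractor;
    import android.os.Bundle;

    public class MyVoiceActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            VoiceInteractor interactor = getVoiceInteractor();
            if (interactor != null) {  // null when not launched for voice
                interactor.submitRequest(
                        new VoiceInteractor.ConfirmationRequest(
                                "Should I place the order?",
                                null /* extras */) {
                    @Override
                    public void onConfirmationResult(boolean confirmed,
                            Bundle result) {
                        // Continue or finish based on the confirmation.
                    }
                });
            }
        }
    }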
Now that I have this initial implementation, I think I
want to rework some aspects of the API.
Change-Id: I7646d0af8fb4ac768c63a18fe3de43f8091f60e9