Previously, fdopen was called on a file descriptor owned by a unique_fd
without releasing it. This leads to a double close, since both fclose and
the unique_fd destructor try to close the same fd.
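A minimal sketch of the intended ownership handoff, assuming android-base's
unique_fd; the function name is illustrative, not the actual statsd call site:

    #include <android-base/unique_fd.h>
    #include <cstdio>

    // release() hands the descriptor to the FILE*, so only fclose() closes it.
    // Using fd.get() instead would leave both fclose() and the unique_fd
    // destructor closing the same fd (the double close described above).
    FILE* openAsFile(android::base::unique_fd fd) {
        FILE* f = fdopen(fd.release(), "r");
        return f;  // Caller owns the FILE* and must fclose() it.
    }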
Bug: http://b/113880863
Test: treehugger
Change-Id: I6f6f48d304861b5e4d7efee0d3ad0e30178a95a4
Statsd hashes raw strings (using its own hashing function) to reduce the
upload data size when there are duplicate strings in the report. In the cloud,
the clearcut translator then backfills the strings.
For a few droidfood users, we found the translator was unable to do that.
While debugging the root cause, we first decided to provide an option to
disable the hashing from the cloud.
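For context, a rough sketch of the dedup scheme (std::hash stands in for
statsd's own hashing function, and the names are illustrative): each string
value is replaced by a 64-bit hash in the report, and the hash-to-string map
is uploaded so the translator can backfill the originals. The new option
simply skips this substitution and writes the raw strings.

    #include <cstdint>
    #include <functional>
    #include <map>
    #include <string>

    // Stand-in for statsd's own hashing function.
    uint64_t HashString(const std::string& s) {
        return static_cast<uint64_t>(std::hash<std::string>{}(s));
    }

    // Uploaded alongside the report so the cloud side can backfill strings.
    std::map<uint64_t, std::string> stringPool;

    uint64_t InternString(const std::string& s) {
        const uint64_t h = HashString(s);
        stringPool[h] = s;
        return h;
    }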
Test: statsd unit test, CTS test, tested manually
BUG: b/79943763
Change-Id: If0359c8cf3f3cf83a2938db9ebf95ea7906f0b0c
Pulled value metrics with conditions had a subtle bug that caused
us to leave the condition on even when it should have been false.
Bug: 79778783
Test: Added unit-test and verified on marlin-eng.
Change-Id: I31f34791118319b3471f7a6ea8a024e2d511cfe7
Right now in value metrics, if a later pull produces a smaller number
than the previous one, we use the absolute value of the current value.
This is not correct for some atoms, as listed in the CL, which should
just take 0.
For some other atoms, this is an unexpected error and we should just dump
the stale data.
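A hedged sketch of the intended handling (the policy enum and names are
assumptions, not the real statsd types): the diff between consecutive pulls
is reported, a decrease is clamped to 0 for atoms where that is expected,
and otherwise the stale bucket is dropped.

    #include <cstdint>
    #include <optional>

    enum class OnDecrease { kZero, kDropStale };  // Assumed per-atom policy.

    std::optional<int64_t> ComputeDiff(int64_t prev, int64_t curr,
                                       OnDecrease policy) {
        if (curr >= prev) return curr - prev;       // Normal case: report the delta.
        if (policy == OnDecrease::kZero) return 0;  // Expected reset: take 0.
        return std::nullopt;                        // Unexpected decrease: dump stale data.
    }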
Test: manual test
Bug: 79265262
Change-Id: I59fbfd96cbb57be22cd8d21cb57a7c60ca6856ee
Reports written to disk don't contain the strings used, which makes the
report unusable if those strings never show up again. We should always
include the strings, so this option is removed entirely.
Also, we hard-coded the wrong number of fields when pulling
ModemActivityInfo. There are actually 10 fields, not 6.
Bug: 79601503
Test: Tested unit-tests pass on marlin-eng.
Change-Id: I6834b096ced77418a9cc2ddd79b08d1c9c447fae
We noticed devices uploading a sizable number of bytes for the uidmap even
when the device is running an empty config, so there are no actual metrics
to report. This change hardcodes logic to skip the inclusion of the
uidmap if there are exactly 0 metrics.
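The gate itself is simple; a minimal sketch with an illustrative name:

    #include <cstddef>

    // Append the uidmap section only when the report carries at least one metric.
    bool ShouldIncludeUidMap(size_t numMetricsInReport) {
        return numMetricsInReport > 0;
    }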
Bug: 79381210
Test: Tested unit-tests on marlin-eng
Change-Id: I96348235341a7faf15ff57d4d1eccac635a3a999
We observed that a single ConfigMetricsReportList can exceed the safe size
for the binder transaction buffer, since we only check the size of the
metrics currently in progress but also return the previous reports stored
on disk.
This change attempts to send another ConfigMetricsReportList as soon as
possible if there's already a report on disk.
Also fixes a bug when trying to trigger a data fetch before the client
has registered the corresponding dataFetchOperation.
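A rough sketch of the splitting decision; the byte limit and names are
assumptions rather than the actual statsd constants:

    #include <cstddef>

    constexpr size_t kMaxBytesPerReport = 512 * 1024;  // Assumed safe binder payload size.

    // If reports already sit on disk, return them now and request another fetch
    // as soon as possible instead of bundling them with the in-progress metrics
    // into one oversized ConfigMetricsReportList.
    bool ShouldRequestFollowUpFetch(size_t onDiskReportBytes, size_t inProgressBytes) {
        return onDiskReportBytes > 0 ||
               (onDiskReportBytes + inProgressBytes) > kMaxBytesPerReport;
    }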
Bug: 79201869
Test: Tested manually on marlin-eng
Change-Id: I2d3677162804a27e7a7a95d482d80c46bd994a67
It only works on eng builds. All of the code is behind a build flag, so it
will be stripped out of production builds.
Bug: 78239479
Test: manual
Change-Id: I20ee51822d18e6c77ca324a5327712cbed09593e
1. Hash the strings in metric dimensions.
2. Optimize the timestamp encoding in buckets: use the bucket number for
   full buckets and milliseconds for partial buckets (see the sketch after
   this list).
3. Encode the dimension path per metric and avoid deduping it across
   dimensions.
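An illustrative sketch of item 2; field and parameter names are assumptions,
not the real proto fields:

    #include <cstdint>

    struct BucketInfo {
        int64_t bucketNum = 0;    // Set for full buckets: smaller on the wire.
        int64_t startMillis = 0;  // Set only for partial buckets.
        int64_t endMillis = 0;
    };

    BucketInfo EncodeBucket(int64_t startNs, int64_t endNs, int64_t bucketSizeNs,
                            int64_t firstBucketStartNs) {
        BucketInfo info;
        if (endNs - startNs == bucketSizeNs) {
            // Full bucket: one small bucket number replaces two 64-bit timestamps.
            info.bucketNum = (startNs - firstBucketStartNs) / bucketSizeNs;
        } else {
            // Partial bucket (e.g. around an app upgrade): keep explicit millis.
            info.startMillis = startNs / 1000000;
            info.endMillis = endNs / 1000000;
        }
        return info;
    }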
Test: statsd test
Change-Id: I18f69654de85edb21a9c835c73edead756295e05
BUG: b/77813755
+ The socket listener is behind a flag. It's disabled until we get sepolicy changes in.
+ Data parsing code is from logd, because we use the same format.
+ Removed Davey from JankTracker because it violates our new sepolicy
Test: manually
Bug: 78239479
Change-Id: Ib17729fbc362cdb13385f780e2d636a95adf9bc3
+ Reuse the log_event_list from liblog. StatsLog's binary format remains unchanged.
+ Copied the socket write code from liblog, including the retry logic (outlined in the sketch after this list).
+ Added build flags to control the StatsLog channel (logd, statsd, or both for debugging).
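The retry idea, in outline only; the real code is copied from liblog, so
treat this as a generic sketch rather than the actual implementation:

    #include <cerrno>
    #include <unistd.h>

    ssize_t WriteWithRetry(int fd, const void* buf, size_t len) {
        for (int attempt = 0; attempt < 10; ++attempt) {
            ssize_t ret = write(fd, buf, len);
            if (ret >= 0) return ret;
            if (errno == EINTR) continue;                     // Interrupted: retry immediately.
            if (errno == EAGAIN) { usleep(5000); continue; }  // Socket busy: brief back-off.
            break;                                            // Other errors: give up.
        }
        return -1;
    }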
Bug: 78239479
Test: locally tested and saw logs being written to statsd
Change-Id: I7b1f0069ead00bbf3c29e4bd5b7f363a7ce26abe
We noticed that some of the pulled metrics produce a large amount of data,
and during app upgrades we form partial buckets that represent small
periods of time but require many bytes of data. We now have an option to
drop buckets that are too short. Note that we still have to pull the data
to keep the metrics for the next bucket correct. We include a new field in
the value and gauge metric outputs so that it's easy to tell when a bucket
was dropped.
We also drop the partial buckets from anomaly detection, since we should
compute anomalies from the same data that is reported.
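A minimal sketch of the drop rule; the threshold is an assumption, not the
real minimum bucket size:

    #include <cstdint>

    constexpr int64_t kMinPartialBucketSizeNs = 5LL * 60 * 1000000000LL;  // Assumed 5 minutes.

    // The pull itself still happens so the next bucket's diff base stays correct;
    // only the report marks this bucket as dropped instead of emitting values.
    bool ShouldDropPartialBucket(int64_t bucketStartNs, int64_t bucketEndNs) {
        return (bucketEndNs - bucketStartNs) < kMinPartialBucketSizeNs;
    }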
Test: Added unit-tests for value and gauge metrics.
Bug: 77925710
Change-Id: Ic370496377c6afd380e02278a6c1ed8b521a2731