The idea here is to have a per-endpoint mapping to the key of interest that we want to use as a metric.
That might also be used in the storage.py module.
The commented-out list is there because we might need more information than just one bare key.
As I'm already unsure about the initial implementation, I'm going with a simple one first.
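A minimal sketch of what that mapping could look like, purely illustrative (the endpoint names, the keys, and the mapping's name are assumptions, not the actual ones):

```python
# Hypothetical per-endpoint mapping: for each RPC endpoint, the key of the
# returned summary dict that we want to expose as a metric.
ENDPOINT_METRIC_KEYS = {
    "content_add": "content:add",
    "origin_visit_add": "origin_visit:add",
    # A list could replace the bare key if a single value turns out to be
    # insufficient, e.g.:
    # "content_add": ["content:add", "content:add:bytes"],
}
```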
Implementation detail to get access to the data before any Response wrapping done by encode_data.
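A rough sketch of that point, assuming a hypothetical call_endpoint wrapper and record_metric helper (only encode_data comes from the existing code):

```python
def record_metric(key, count):
    # hypothetical helper; could be a direct statsd call instead
    ...

def call_endpoint(endpoint, func, *args, **kwargs):
    result = func(*args, **kwargs)
    key = ENDPOINT_METRIC_KEYS.get(endpoint)      # mapping sketched above
    if key is not None and isinstance(result, dict):
        # the raw result is still a plain dict here, so the metric value can
        # be read before encode_data wraps it into a Response
        record_metric(key, result.get(key, 0))
    return encode_data(result)
```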
Again, this relates to the discussion about gRPC [1], which could make our life easier.
Heads up: this will need to be rebased on D1329 when it lands.
This will need adaptation once the snapshot_add endpoint takes a list of snapshots as input (instead of a single one).
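A sketch of that adaptation, under the assumption that the summary dict reports how many snapshots were actually added (the key name and inner call are illustrative):

```python
def snapshot_add(snapshots):
    summary = storage.snapshot_add(snapshots)     # hypothetical inner call
    # one call may now add several snapshots, so report the whole batch
    # (or the summary count) rather than a hardcoded 1
    record_metric("snapshot:add", summary.get("snapshot:add", len(snapshots)))
    return summary
```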
Then fill in the pattern when using the metric that carries the unit.
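For instance, with a hypothetical pattern-based metric name (the pattern and the unit are illustrative):

```python
# Hypothetical pattern containing a {unit} placeholder; it is only filled in
# when the metric actually carries a unit (e.g. bytes).
UNIT_METRIC_PATTERN = "objects_added_{unit}_total"

def unit_metric_name(unit):
    return UNIT_METRIC_PATTERN.format(unit=unit)

# unit_metric_name("bytes") -> "objects_added_bytes_total"
```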
I'm not entirely convinced by the send_metrics indirection as introduced; using the statsd API directly would be much clearer.
To make it more explicit:
- its name should be singular (it sends a single metric);
- all its callers should use keyword arguments;
- it would probably make sense to move the dict key parsing / metric name handling inside that function, which will help if we ever end up adding an "extrinsic" metric that uses a unit instead of a count (see the sketch after this list).
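A sketch of the suggested shape, with the key parsing moved inside and keyword-only arguments; the import path, the counter name, and the statsd tags signature are assumptions:

```python
import logging

from swh.core.statsd import statsd  # assumed import path for the statsd client

logger = logging.getLogger(__name__)

OPERATIONS_METRIC = "swh_storage_operations_total"  # assumed constant name


def send_metric(*, metric, count, method_name):
    """Send a single metric, parsing the "<object_type>:<operation>" key here
    instead of at every call site."""
    if count <= 0:
        return False
    if metric.count(":") != 1:
        logger.warning("Invalid metric name: %s", metric)
        return False
    object_type, operation = metric.split(":")
    statsd.increment(
        OPERATIONS_METRIC,
        count,
        tags={
            "object_type": object_type,
            "operation": operation,
            "method": method_name,
        },
    )
    return True
```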
If we're making metric names constants, this should happen for all of them so that they are defined in the same place.