Add exponential histogram support to CloudWatch PMD Exporter #1677
Conversation
Force-pushed from 97c4271 to 995902e
	return d.AddEntryWithUnit(value, weight, "")
}

func (d *ExpHistogramDistribution) AddDistribution(other *ExpHistogramDistribution) {
Are we trying to match the Distribution interface? It doesn't look like this would satisfy the interface as it is now. We would have to make the Distribution interface generic.
Yeah, you're right, it doesn't satisfy the interface. Initially I was trying to, but it ended up not working out because the Distribution interface is specific to HistogramDataPoint. In hindsight, I don't think it's even possible to combine exponential histograms and classic histograms anyway, so it's not really necessary at all. I already had this code, though, and didn't think it was worth refactoring everything to make it fit together nicely, so I left it as is for now.
}

// Assume function pointer is valid.
ad.expHistDistribution = exph.NewExpHistogramDistribution()
ad.expHistDistribution.ConvertFromOtel(dp, unit)
nit: If we store the unit in the MetricDatum already, why do we need to store it in the distribution?
This is based off of how it currently works for the regular histograms. The unit is given to the distribution so that it can check that similar distributions are being combined. If the unit of the distribution does not match the unit of the incoming distribution it is combined with, a debug log is printed.
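The unit check described here might look roughly like the following. This is a hedged sketch with illustrative type, field, and log text; it is not the agent's actual code, and whether the real implementation skips the merge or proceeds after logging is not stated in this thread.

```go
package main

import "fmt"

// dist is an illustrative distribution carrying a unit, mirroring the
// behavior described above for the regular histogram distributions.
type dist struct {
	unit        string
	sampleCount uint64
	sum         float64
}

// addDistribution merges other into d. On a unit mismatch this sketch
// prints a debug-style line and skips the merge.
func (d *dist) addDistribution(other *dist) {
	if d.unit != other.unit {
		fmt.Printf("D! mismatched units %q vs %q, not combining\n", d.unit, other.unit)
		return
	}
	d.sampleCount += other.sampleCount
	d.sum += other.sum
}

func main() {
	a := &dist{unit: "Milliseconds", sampleCount: 3, sum: 30}
	b := &dist{unit: "Milliseconds", sampleCount: 2, sum: 12}
	a.addDistribution(b)
	fmt.Println(a.sampleCount, a.sum) // 5 42
}
```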
	return datums
}

func (c *CloudWatch) buildMetricDataumExph(metric *aggregationDatum, dimensionsList [][]*cloudwatch.Dimension) []*cloudwatch.MetricDatum {
nit: Typo
- func (c *CloudWatch) buildMetricDataumExph(metric *aggregationDatum, dimensionsList [][]*cloudwatch.Dimension) []*cloudwatch.MetricDatum {
+ func (c *CloudWatch) buildMetricDatumExph(metric *aggregationDatum, dimensionsList [][]*cloudwatch.Dimension) []*cloudwatch.MetricDatum {
This PR was marked stale due to lack of activity.
Force-pushed from 995902e to 936253b
Force-pushed from 07375ab to 5362dc8
@@ -4,6 +4,10 @@ go 1.24.4

replace github.com/influxdata/telegraf => github.com/aws/telegraf v0.10.2-0.20250113150713-a2dfaa4cdf6d

replace collectd.org v0.4.0 => github.com/collectd/go-collectd v0.4.0
are these dependencies of cumulativetodeltaprocessor? I don't see go-collectd or clock used in the new code
I was hitting some issues with downloading the collectd dependency after clearing my local go cache and I needed to redirect to github to pick it up. I'm not exactly sure what happened but it looks like collectd.org stopped vending their package via collectd.org. The issue may have been resolved by now though, so I can try again.
../../../.gvm/pkgsets/go1.22.7/global/pkg/mod/github.com/aws/[email protected]/plugins/parsers/collectd/parser.go:8:2: unrecognized import path "collectd.org": https fetch: Get "https://collectd.org/?go-get=1": dial tcp: lookup collectd.org on 10.4.4.10:53: read udp 10.169.109.191:52627->10.4.4.10:53: i/o timeout
func (d *ExpHistogramDistribution) Size() int {
	size := len(d.negativeBuckets) + len(d.positiveBuckets)
	if d.zeroCount > 0 {
what is zeroCount? the number of datapoints with 0?
Yes, pretty much. OTLP exponential histograms split the data into three sections: negative values, zero values, and positive values. The positive and negative sections are each defined separately by a series of buckets and counts. The zero values don't have any buckets, so it's just a counter stored in the histogram structure. Side note: the definition of "0" in OTLP exponential histograms is loose; datapoints with a magnitude less than the configurable "zero threshold" are treated as 0.
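The three-section layout described in this reply can be sketched roughly as follows. This is illustrative, not the PR's code: the OTLP data model defines base = 2^(2^-scale), with positive bucket index i covering (base^i, base^(i+1)]; the floor-based index below misplaces values that land exactly on a bucket boundary, a subtlety omitted here for brevity.

```go
package main

import (
	"fmt"
	"math"
)

// expHist is a hypothetical stand-in for an OTLP exponential histogram:
// negative buckets, a zero counter, and positive buckets.
type expHist struct {
	scale         int32
	zeroThreshold float64
	zeroCount     uint64
	positive      map[int32]uint64 // bucket index -> count
	negative      map[int32]uint64
}

func (h *expHist) record(v float64) {
	// "0" is loose: any magnitude within the zero threshold counts as zero.
	if math.Abs(v) <= h.zeroThreshold {
		h.zeroCount++
		return
	}
	// base = 2^(2^-scale); index ~= floor(log_base(|v|)).
	// Exact powers of base need boundary handling, omitted here.
	base := math.Pow(2, math.Pow(2, -float64(h.scale)))
	idx := int32(math.Floor(math.Log(math.Abs(v)) / math.Log(base)))
	if v > 0 {
		h.positive[idx]++
	} else {
		h.negative[idx]++
	}
}

func main() {
	h := &expHist{scale: 0, zeroThreshold: 1e-6,
		positive: map[int32]uint64{}, negative: map[int32]uint64{}}
	for _, v := range []float64{5.0, -5.0, 0.0} {
		h.record(v)
	}
	// At scale 0 (base 2), 5 and -5 land in bucket index 2, i.e. (4, 8].
	fmt.Println(h.zeroCount, h.positive, h.negative) // 1 map[2:1] map[2:1]
}
```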
}

func (d *ExpHistogramDistribution) Resize(_ int) []*ExpHistogramDistribution {
	// TODO: split data points into separate PMD requests if the number of buckets exceeds the API limit
what happens if we exceed the API limit?
Based on the API documentation, I believe the PMD request will be rejected with error code 400 InvalidParameterValue, though I haven't tried.
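One way the TODO could eventually be addressed is to chunk the parallel values/counts arrays so each chunk fits in one MetricDatum. This is a sketch under the assumption that the limiting factor is the per-datum array length in PutMetricData; the function name and chunk representation are hypothetical, not the PR's implementation.

```go
package main

import "fmt"

// chunkValuesCounts splits parallel values/counts slices into chunks of
// at most maxSize entries, so each chunk can go into its own MetricDatum
// instead of the whole request being rejected. Illustrative only.
func chunkValuesCounts(values, counts []float64, maxSize int) (chunks [][2][]float64) {
	for i := 0; i < len(values); i += maxSize {
		end := i + maxSize
		if end > len(values) {
			end = len(values)
		}
		chunks = append(chunks, [2][]float64{values[i:end], counts[i:end]})
	}
	return chunks
}

func main() {
	values := []float64{1, 2, 3, 4, 5}
	counts := []float64{10, 20, 30, 40, 50}
	// 5 entries with a max of 2 per datum -> 3 chunks.
	fmt.Println(len(chunkValuesCounts(values, counts, 2))) // 3
}
```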
// ValuesAndCounts outputs two arrays representing the midpoints of each exponential histogram bucket and the
// counter of datapoints within the corresponding exponential histogram buckets
func (d *ExpHistogramDistribution) ValuesAndCounts() ([]float64, []float64) {
nit: we can make the name of the function more descriptive. Maybe GetMidpointsAndCounts?
This naming scheme was based off of the existing function in the Distribution interface. I think ValuesAndCounts is a fairly descriptive name as that's what is actually pushed to CloudWatch in the PMD request (an array of values and an array of counts).
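For context, a bucket midpoint can be derived from the scale and bucket index alone. A hedged sketch, assuming the OTLP bucket convention where bucket i covers (base^i, base^(i+1)] with base = 2^(2^-scale); the function name is illustrative and this is not the PR's ValuesAndCounts implementation:

```go
package main

import (
	"fmt"
	"math"
)

// bucketMidpoint returns the midpoint of the exponential-histogram
// bucket `index` at the given scale, where base = 2^(2^-scale) and
// bucket `index` spans (base^index, base^(index+1)].
func bucketMidpoint(scale, index int32) float64 {
	base := math.Pow(2, math.Pow(2, -float64(scale)))
	lower := math.Pow(base, float64(index))
	upper := lower * base
	return (lower + upper) / 2
}

func main() {
	// scale 0 => base 2; bucket 2 spans (4, 8], so the midpoint is 6.
	fmt.Println(bucketMidpoint(0, 2)) // 6
}
```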
metric/distribution/exph/exph.go (outdated)
d.min = min(d.min, value)
d.max = max(d.max, value)

if math.Abs(value) > d.zeroThreshold {
so if my zero threshold is 1, then both values 2 and -2 will be counted in the zero bucket?
Good catch, this logic is incorrect. In fact, this function is leftover work from trying to adhere to the Distribution interface and is neither used nor needed. I'll remove this function entirely.
metric/distribution/exph/exph.go (outdated)
if math.Abs(value) > d.zeroThreshold {
	d.zeroCount += uint64(weight)
} else if value > d.zeroThreshold {
would this ever happen? Wouldn't we always hit the first case?
Incorrect logic, but will be removing this function.
		distList = resize(metric.distribution, c.config.MaxValuesPerDatum)
	}
	datums = c.buildMetricDatumDist(metric, dimensionsList)
} else if metric.expHistDistribution != nil {
is there ever a chance we will have both distribution and expHistDistribution not nil? Probably shouldn't happen...
No (at least they shouldn't...). The agent converts the different OTLP datatypes into aggregationDatums in ConvertOtelMetrics. The existing ConvertOtelHistogramDataPoints sets the distribution field, and the new ConvertOtelExponentialHistogramDataPoints sets the expHistDistribution field.
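The mutual exclusivity described in this reply can be pictured as a simple type dispatch. This is a simplified sketch with hypothetical stand-in types; the real conversion lives in ConvertOtelMetrics and the per-datapoint converters.

```go
package main

import "fmt"

// Simplified stand-ins for the two OTLP histogram datapoint kinds.
type histogramDP struct{}
type expHistogramDP struct{}

// aggregationDatum mirrors the idea that exactly one of the two
// distribution fields is populated per datum.
type aggregationDatum struct {
	distribution        *histogramDP
	expHistDistribution *expHistogramDP
}

// convert routes each datapoint type to the matching field, so the two
// fields can never both be set for a single datum.
func convert(dp interface{}) aggregationDatum {
	var ad aggregationDatum
	switch v := dp.(type) {
	case *histogramDP:
		ad.distribution = v
	case *expHistogramDP:
		ad.expHistDistribution = v
	}
	return ad
}

func main() {
	ad := convert(&expHistogramDP{})
	fmt.Println(ad.distribution == nil, ad.expHistDistribution != nil) // true true
}
```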
not quite complete. need more unit tests
* Move OTLP implementation to separate file * Simplify map key sorting
Force-pushed from 5362dc8 to d71cfd4
@@ -78,34 +77,6 @@ func setNewDistributionFunc(maxValuesPerDatumLimit int) {
	}
}

func resize(dist distribution.Distribution, listMaxSize int) (distList []distribution.Distribution) {
refactored as functions on each distribution
Description of the issue
The CloudWatch/PMD exporter currently drops all exponential histogram metrics.
Description of changes
Note
See companion PR for updating cumulativetodelta processor: amazon-contributing/opentelemetry-collector-contrib#331
License
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Tests
Note
See companion PR for integration test: aws/amazon-cloudwatch-agent-test#558
Integration test run: https://github.com/aws/amazon-cloudwatch-agent/actions/runs/16371811763
Histogram test: https://github.com/aws/amazon-cloudwatch-agent/actions/runs/16371811763/job/46261959442
Requirements
Before committing the code, please do the following steps:
make fmt and make fmt-sh
make lint