💥 Replace OTel Prometheus Exporter #942
Conversation
task_token: self.info.task_token.clone(),
details,
})
if !self.info.is_local {
Should we log if trying to heartbeat from LA? Seems like something we wanna signal if it's not something users should be doing. Maybe `dbg_panic`?
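For illustration, a minimal, self-contained sketch of that suggestion, assuming a `dbg_panic`-style helper that panics in debug builds and only logs in release builds; the macro and function here are stand-ins, not the crate's actual code:

```rust
// Illustrative sketch only: a dbg_panic-style helper used to flag heartbeats
// coming from local activities. Names and the macro are stand-ins.
macro_rules! dbg_panic {
    ($($arg:tt)*) => {
        if cfg!(debug_assertions) {
            panic!($($arg)*);
        } else {
            eprintln!($($arg)*);
        }
    };
}

fn record_heartbeat(is_local: bool) {
    if !is_local {
        // normal path: forward the heartbeat details to the server-bound task
    } else {
        // local activities have no server-side heartbeat, so reaching this
        // branch indicates a caller bug worth surfacing loudly in debug builds
        dbg_panic!("tried to record a heartbeat for a local activity");
    }
}

fn main() {
    record_heartbeat(false);
    // record_heartbeat(true) would panic under debug_assertions, log otherwise
}
```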
LGTM. I think before merging we might want a draft PR in Python or .NET (or something) that updates the core submodule to this branch/commit and demonstrates the changes needed as a result. Maybe both; unsure, since they do different things with metrics (Python uses the metric buffer, but .NET uses the traits directly).
@@ -31,7 +32,7 @@ parking_lot = "0.12"
 slotmap = "1.0"
 thiserror = { workspace = true }
 tokio = "1.1"
-tonic = { workspace = true, features = ["tls", "tls-roots"] }
+tonic = { workspace = true, features = ["tls-ring", "tls-native-roots"] }
Is this related to the changes for this PR? Any concerns changing this?
It's part of the reason for all this, so that we could upgrade Tonic. The new feature flags should be the exact equivalent of what was there previously; they just got renamed.
@@ -71,7 +69,7 @@ pub enum ConfigError {
     InvalidConfig(String),

     #[error("Configuration loading error: {0}")]
-    LoadError(anyhow::Error),
+    LoadError(Box<dyn std::error::Error>),
This also seems unrelated?
It's just something I didn't catch in the review for envconfig - I don't want `anyhow` in the "public" core API.
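For lang-bridge or caller code wondering what this means in practice, a minimal sketch, with made-up names rather than the real envconfig code: internal code can keep using `anyhow` while the public variant only exposes `Box<dyn std::error::Error>`, since `anyhow::Error` converts into that boxed type.

```rust
// Hypothetical sketch: internal code still uses anyhow, the public error enum
// exposes only std error types. Names here are illustrative.
use std::error::Error;

#[derive(Debug)]
enum ConfigError {
    LoadError(Box<dyn Error>),
}

fn load_from_env() -> anyhow::Result<String> {
    // stand-in for whatever the config loader actually does internally
    std::env::var("TEMPORAL_ADDRESS").map_err(anyhow::Error::from)
}

fn public_load() -> Result<String, ConfigError> {
    // anyhow::Error converts into Box<dyn std::error::Error>, so the anyhow
    // type never appears in the public signature
    load_from_env().map_err(|e| ConfigError::LoadError(e.into()))
}

fn main() {
    match public_load() {
        Ok(addr) => println!("loaded {addr}"),
        Err(ConfigError::LoadError(e)) => eprintln!("load failed: {e}"),
    }
}
```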
if let Ok(c) = vector.get_metric_with(&labels.as_prom_labels()) {
    Ok(Box::new(CorePromCounter(c)))
} else {
    Err(self.label_mismatch_err(attributes).into())
Curious, is this error really possible to hit from a user/caller POV, since you create the vector before using it?
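For context on when that branch could fire, here's a small sketch of the underlying prometheus-crate behavior; whether a caller can actually reach it presumably depends on whether the attributes handed to the metric can carry label names other than the ones the vector was created with.

```rust
// Sketch of the prometheus crate behavior being surfaced here: a label map
// whose names don't match the vec's declared labels is rejected client side.
use std::collections::HashMap;
use prometheus::{IntCounterVec, Opts};

fn main() -> prometheus::Result<()> {
    let vec = IntCounterVec::new(Opts::new("requests_total", "example counter"), &["service"])?;

    // label names match what the vec was declared with: Ok
    assert!(vec.get_metric_with(&HashMap::from([("service", "frontend")])).is_ok());

    // different label names: Err, which is what would bubble up as the
    // label-mismatch error above
    assert!(vec.get_metric_with(&HashMap::from([("caller", "frontend")])).is_err());
    Ok(())
}
```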
This PR makes some significant alterations to the metrics abstractions and replaces the OTel Prometheus exporter with a custom one, since theirs is no longer maintained. The custom one is essentially a copy-paste of Prom's own first-party lib, except with some modifications made to allow for the registration of metrics with overlapping labels (this is fully permitted by Prom for ingestion, mind you, just prevented client side. They have their reasons for that, which are legitimate, but taking that option away would require fully breaking our API).
Breaking

* `Arc<dyn XXX>` metrics are now concrete types.
* `with_attributes` methods now return Results; there's just no good way around this. Dealing with it in lang bridges should be fairly easy, though, since the possibility of throwing something generally existed already.

New

* Adds `adds`/`records` methods (as opposed to just `add`) which can be used (after `with_attributes`, typically) to record without passing labels again. This is more efficient for the prom backend. They needed a different name to avoid turbofish disambiguation nonsense. (A rough sketch of the pattern follows this list.)
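Roughly, the usage pattern looks like the sketch below. The trait and type names are stand-ins rather than the crate's real API; the point is only that attributes get resolved once and subsequent recordings skip the per-call label handling.

```rust
// Illustrative stand-in types, not the crate's real metrics API: with_attributes
// resolves labels once, and adds records against that bound handle afterwards.
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

struct Attributes(Vec<(String, String)>);

struct Counter {
    value: Arc<AtomicU64>,
}

struct BoundCounter {
    value: Arc<AtomicU64>,
    _attributes: Attributes, // resolved once, reused for every record
}

impl Counter {
    // analogous to with_attributes: can fail (e.g. on a label mismatch),
    // hence the Result in the new API
    fn with_attributes(&self, attributes: Attributes) -> Result<BoundCounter, String> {
        Ok(BoundCounter { value: self.value.clone(), _attributes: attributes })
    }

    // analogous to the old add: labels supplied on every call
    fn add(&self, delta: u64, _attributes: &Attributes) {
        self.value.fetch_add(delta, Ordering::Relaxed);
    }
}

impl BoundCounter {
    // analogous to adds: hot path with no per-call label handling
    fn adds(&self, delta: u64) {
        self.value.fetch_add(delta, Ordering::Relaxed);
    }
}

fn main() -> Result<(), String> {
    let counter = Counter { value: Arc::new(AtomicU64::new(0)) };
    let labels = Attributes(vec![("workflow_type".into(), "example".into())]);
    counter.add(1, &labels);
    let bound = counter.with_attributes(labels)?;
    bound.adds(1);
    Ok(())
}
```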
I did some testing to ensure performance isn't materially different here, and it's not in real-world situations (e.g. the `workflow_load` integration test shows no difference in overall runtime). That said, the specific bench I made does show that if you record a bajillion prom metrics as fast as you can from different threads, lock contention slows things down by as much as 50%. At the same time, OTel's implementation of what actually happens when you scrape prom metrics is to basically create every single metric on demand from scratch and fill in the data, which is definitely slower than what happens now, so actual scraping should be quite a bit cheaper. TL;DR: I don't have any reason to believe this would have any real impact for anyone unless they're using custom metrics with the prom exporter and are absolutely hammering away.

If further perf improvements are desired in the future, we could add a `bind_with_schema` kind of API which would bind a metric to a certain set of labels, and allow recording on that metric with different label values without going through a lock.
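Purely speculative, but such an API might look something like the sketch below; every name here is invented and nothing like this exists in the crate today.

```rust
// Speculative sketch only: fix the label *names* up front so per-call recording
// only needs label values, without locking over a full label map.
struct LabelSchema {
    names: Vec<&'static str>,
}

struct SchemaBoundCounter {
    schema: LabelSchema,
}

impl SchemaBoundCounter {
    fn bind_with_schema(names: Vec<&'static str>) -> Self {
        SchemaBoundCounter { schema: LabelSchema { names } }
    }

    // record with label values only; a real implementation would index into
    // pre-resolved child metrics rather than taking a registry lock
    fn add(&self, delta: u64, values: &[&str]) {
        assert_eq!(values.len(), self.schema.names.len());
        let _ = delta;
    }
}

fn main() {
    let counter = SchemaBoundCounter::bind_with_schema(vec!["workflow_type"]);
    counter.add(1, &["example"]);
}
```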
Closes #908
Closes #882