The archive-staging thanos sidecar service logs show that it is pushing metrics to the
object store [1].
We can confirm that the metrics are present in the configured bucket [2] [3].
However, when querying the thanos query service, which is configured to also query that
new thanos gateway service [4], no metrics show up.
For example, executing a query on the metric swh_loader_git_total in the thanos query ui
returns no data, and the thanos query service logs a warning [5]. Whereas the same query
against the thanos sidecar ui outputs:
swh_loader_git_total{cluster_name="archive-staging",domain="staging",endpoint="http",environment="staging",has_parent_origins="False",has_parent_snapshot="False",has_previous_snapshot="False",incremental_enabled="True",infrastructure="kubernetes",instance="10.42.3.41:9102",job="prometheus-statsd-exporter",namespace="swh",pod="prometheus-statsd-exporter-5b9f5c7d54-kqfb4",prometheus="cattle-monitoring-system/rancher-monitoring-prometheus",prometheus_replica="prometheus-rancher-monitoring-prometheus-0",service="prometheus-statsd-exporter",visit_type="git"}
Note: both uis required some local plumbing to access (an ssh tunnel for the thanos query
ui itself, a port-forward of the thanos-sidecar service).
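For reference, that plumbing looked roughly like the following (the local ports, the remote
query ui port and the resource names are assumptions, to be adapted to the actual setup):

# ssh tunnel to the thanos query ui running on the thanos host
ssh -L 19192:localhost:10902 thanos.internal.admin.swh.network
# port-forward to reach the thanos-sidecar http port (10902 per the sidecar logs in [1])
kubectl -n cattle-monitoring-system port-forward \
    pod/prometheus-rancher-monitoring-prometheus-0 19193:10902

Then browse http://localhost:19192 (query) and http://localhost:19193 (sidecar).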
[1]
k9s logs: cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 (container: thanos-sidecar)

level=info ts=2022-09-02T11:59:13.590765601Z caller=main.go:98 msg="Tracing will be disabled"
level=info ts=2022-09-02T11:59:13.593213886Z caller=options.go:23 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2022-09-02T11:59:13.595118268Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-09-02T11:59:13.74677224Z caller=azure.go:97 msg="Azure blob container successfully created" address=https://swhthanosmetrics.blob.core.windows.net/metrics-sesi-rocquencourt-rancher-staging-0
level=info ts=2022-09-02T11:59:13.74813846Z caller=sidecar.go:291 msg="starting sidecar"
level=info ts=2022-09-02T11:59:13.749088305Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-09-02T11:59:13.74921074Z caller=http.go:58 service=http/server component=sidecar msg="listening for requests and metrics" address=:10902
level=info ts=2022-09-02T11:59:13.749903018Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-09-02T11:59:13.750502606Z caller=reloader.go:183 component=reloader msg="nothing to be watched"
level=info ts=2022-09-02T11:59:13.751772359Z caller=grpc.go:116 service=gRPC/server component=sidecar msg="listening for serving gRPC" address=:10901
level=info ts=2022-09-02T11:59:13.794189236Z caller=sidecar.go:155 msg="successfully loaded prometheus external labels" external_labels="{cluster_name=\"archive-staging\", domain=\"staging\", environment=\"staging\", infrastructure=\"k
level=info ts=2022-09-02T11:59:13.794382711Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-09-02T12:05:15.870739714Z caller=shipper.go:334 msg="upload new block" id=01GBZ1E9Y09X83Q03WWKVREVPW
level=info ts=2022-09-02T13:00:15.877389925Z caller=shipper.go:334 msg="upload new block" id=01GBZ4JTTMVDD5JVF0H56Z6PTA
level=info ts=2022-09-02T15:00:15.854415997Z caller=shipper.go:334 msg="upload new block" id=01GBZBEJ2KSG0XE9AQV743K9G3
level=info ts=2022-09-02T17:00:15.854036031Z caller=shipper.go:334 msg="upload new block" id=01GBZJA9AHD6ZN9HAKZDEA1HMN
level=info ts=2022-09-02T19:00:15.866057392Z caller=shipper.go:334 msg="upload new block" id=01GBZS60JM4K9H40KJ1N6BZNRK
level=info ts=2022-09-02T21:00:15.860152187Z caller=shipper.go:334 msg="upload new block" id=01GC001QV1RPQRAAT48WC2V4SK
level=info ts=2022-09-02T23:00:15.86444472Z caller=shipper.go:334 msg="upload new block" id=01GC06XF2MZCN2NXGZDZYP6T14
level=info ts=2022-09-03T01:00:15.882246784Z caller=shipper.go:334 msg="upload new block" id=01GC0DS6AK1RT7ZHR3T8CKTASP
level=info ts=2022-09-03T03:00:15.861155072Z caller=shipper.go:334 msg="upload new block" id=01GC0MMXNFTBAPZA3HPPHKWSG3
level=info ts=2022-09-03T05:00:15.860189648Z caller=shipper.go:334 msg="upload new block" id=01GC0VGMTKFHT17HY1BJ77BW5H
level=info ts=2022-09-03T07:00:15.850736988Z caller=shipper.go:334 msg="upload new block" id=01GC12CC2PGFJ13RDY3PHSN037
level=info ts=2022-09-03T09:00:15.860120739Z caller=shipper.go:334 msg="upload new block" id=01GC1983AM2PJAN1NFAA9DH7P5
level=info ts=2022-09-03T11:00:15.849640953Z caller=shipper.go:334 msg="upload new block" id=01GC1G3TJJSK8JYX3M9DEJJC94
level=info ts=2022-09-03T13:00:15.88503945Z caller=shipper.go:334 msg="upload new block" id=01GC1PZHTJFN419V89XGRX800E
level=info ts=2022-09-03T15:00:15.860553464Z caller=shipper.go:334 msg="upload new block" id=01GC1XV92MY7D3F8F53ZRZNDRR
level=info ts=2022-09-03T17:00:15.849441549Z caller=shipper.go:334 msg="upload new block" id=01GC24Q0AJY8PZ1NDY5ZBECF6A
level=info ts=2022-09-03T19:00:15.869791071Z caller=shipper.go:334 msg="upload new block" id=01GC2BJQJNHHDGTGBHCNC053MX
level=info ts=2022-09-03T21:00:15.884713292Z caller=shipper.go:334 msg="upload new block" id=01GC2JEETKM7AV9X5SA2JZ7P83
Stream closed EOF for cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 (thanos-sidecar)
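For the record, the same upload history can also be tailed without k9s, assuming the pod and
container names visible in the capture above:

kubectl -n cattle-monitoring-system logs prometheus-rancher-monitoring-prometheus-0 \
    -c thanos-sidecar --since=72h | grep "upload new block"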
[2]
/ $ thanos tools bucket ls --objstore.config-file=/tmp/thanos.yaml
level=info ts=2022-09-05T14:49:02.487682495Z caller=main.go:98 msg="Tracing will be disabled"
level=info ts=2022-09-05T14:49:02.487978101Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-09-05T14:49:04.726860894Z caller=fetcher.go:458 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.020801216s cached=38 returned=38 partial=0
01GC2JEETKM7AV9X5SA2JZ7P83 01GC371MJHR333HQ2VGSP2GCAV 01GC6VRZTR6W628EXE2025QTEX 01GBZJA9AHD6ZN9HAKZDEA1HMN
01GC1983AM2PJAN1NFAA9DH7P5 01GBZ4JTTMVDD5JVF0H56Z6PTA 01GC5JJMAQ0J63QRNAN2SK33WX 01GC3MS32J5F47YXQWHBY9V0DJ
01GBZS60JM4K9H40KJ1N6BZNRK 01GC2BJQJNHHDGTGBHCNC053MX 01GC001QV1RPQRAAT48WC2V4SK 01GC3VMTAP9A1WA9SSXVRJKPNC
01GC60A2TPKYEAPRERV539GG2B 01GC42GHK2YFS7X4E8J47QX2NQ 01GC4G802PY3RXTAPK2JT330P1 01GC1G3TJJSK8JYX3M9DEJJC94
01GC675T2JFD3C7C9YBWMVFXVT 01GC54V5TJZ7SYJAG72NC55A7X 01GC12CC2PGFJ13RDY3PHSN037 01GC5SEBJPWZG5KFEZN31DXXD4
01GC6MX8JN1GN8AB4CX5TXEFNQ 01GC06XF2MZCN2NXGZDZYP6T14 01GC5BPX2J9Z4EPHJ957YH635P 01GC4Q3QAJMPGFQMMYZZMSW67J
01GBZBEJ2KSG0XE9AQV743K9G3 01GC305XANNG2BBRGWHF4SHN31 01GC1PZHTJFN419V89XGRX800E 01GC2SA62HN8YYQ72NT1BWQFD0
01GC4XZEJKGB553CX7PDH07X2R 01GC6E1HAMWNC32218DAK2N4DX 01GC0MMXNFTBAPZA3HPPHKWSG3 01GC0VGMTKFHT17HY1BJ77BW5H
01GC3DXBTK69GKXWJ2H68EBCTY 01GC49C8TG8F473X7NXNS3QW2J 01GC0DS6AK1RT7ZHR3T8CKTASP 01GBZ1E9Y09X83Q03WWKVREVPW
01GC24Q0AJY8PZ1NDY5ZBECF6A 01GC1XV92MY7D3F8F53ZRZNDRR
level=info ts=2022-09-05T14:49:04.727532948Z caller=tools_bucket.go:261 msg="ls done" objects=38
level=info ts=2022-09-05T14:49:04.728402625Z caller=main.go:160 msg=exiting
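For context, the /tmp/thanos.yaml passed to the tools above is an Azure objstore configuration
along these lines (a sketch: the storage account and container names come from the sidecar logs
in [1], the key is redacted, and the real file may carry extra options):

/ $ cat /tmp/thanos.yaml
type: AZURE
config:
  storage_account: swhthanosmetrics
  storage_account_key: <redacted>
  container: metrics-sesi-rocquencourt-rancher-staging-0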
[3]
/ $ thanos tools bucket inspect --objstore.config-file=/tmp/thanos.yaml
level=info ts=2022-09-05T14:49:59.959075102Z caller=main.go:98 msg="Tracing will be disabled"
level=info ts=2022-09-05T14:49:59.959336249Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-09-05T14:50:02.351808718Z caller=fetcher.go:458 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.168055407s cached=38 returned=38 partial=0

All 38 blocks have identical values for the LABELS, COMP-LEVEL, COMP-FAILED, RESOLUTION and SOURCE columns, shown once here instead of repeated per row:
LABELS: cluster_name=archive-staging,domain=staging,environment=staging,infrastructure=kubernetes,prometheus=cattle-monitoring-system/rancher-monitoring-prometheus,prometheus_replica=prometheus-rancher-monitoring-prometheus-0
COMP-LEVEL: 1 | COMP-FAILED: false | RESOLUTION: 0s | SOURCE: sidecar

| ULID                       | FROM                | UNTIL               | RANGE        | UNTIL-DOWN  | #SERIES | #SAMPLES   | #CHUNKS |
|----------------------------|---------------------|---------------------|--------------|-------------|---------|------------|---------|
| 01GBZ1E9Y09X83Q03WWKVREVPW | 02-09-2022 09:05:06 | 02-09-2022 10:00:00 | 54m53.821s   | 39h5m6.179s | 80,920  | 8,911,563  | 81,678  |
| 01GBZ4JTTMVDD5JVF0H56Z6PTA | 02-09-2022 10:00:00 | 02-09-2022 12:00:00 | 1h59m59.82s  | 38h0m0.18s  | 79,315  | 19,527,959 | 162,699 |
| 01GBZBEJ2KSG0XE9AQV743K9G3 | 02-09-2022 12:00:00 | 02-09-2022 14:00:00 | 1h59m59.979s | 38h0m0.021s | 79,240  | 19,522,744 | 162,625 |
| 01GBZJA9AHD6ZN9HAKZDEA1HMN | 02-09-2022 14:00:00 | 02-09-2022 16:00:00 | 1h59m59.976s | 38h0m0.024s | 83,377  | 19,731,900 | 166,837 |
| 01GBZS60JM4K9H40KJ1N6BZNRK | 02-09-2022 16:00:00 | 02-09-2022 18:00:00 | 1h59m59.908s | 38h0m0.092s | 83,090  | 20,185,341 | 169,370 |
| 01GC001QV1RPQRAAT48WC2V4SK | 02-09-2022 18:00:00 | 02-09-2022 20:00:00 | 1h59m59.981s | 38h0m0.019s | 81,071  | 20,126,889 | 167,638 |
| 01GC06XF2MZCN2NXGZDZYP6T14 | 02-09-2022 20:00:00 | 02-09-2022 22:00:00 | 1h59m59.966s | 38h0m0.034s | 81,142  | 20,127,893 | 167,715 |
| 01GC0DS6AK1RT7ZHR3T8CKTASP | 02-09-2022 22:00:00 | 03-09-2022 00:00:00 | 1h59m59.97s  | 38h0m0.03s  | 81,112  | 20,132,497 | 167,715 |
| 01GC0MMXNFTBAPZA3HPPHKWSG3 | 03-09-2022 00:00:00 | 03-09-2022 02:00:00 | 2h0m0s       | 38h0m0s     | 81,057  | 20,132,322 | 167,730 |
| 01GC0VGMTKFHT17HY1BJ77BW5H | 03-09-2022 02:00:00 | 03-09-2022 04:00:00 | 1h59m59.977s | 38h0m0.023s | 81,130  | 20,131,717 | 167,801 |
| 01GC12CC2PGFJ13RDY3PHSN037 | 03-09-2022 04:00:00 | 03-09-2022 06:00:00 | 1h59m59.863s | 38h0m0.137s | 81,092  | 20,131,121 | 167,694 |
| 01GC1983AM2PJAN1NFAA9DH7P5 | 03-09-2022 06:00:00 | 03-09-2022 08:00:00 | 1h59m59.99s  | 38h0m0.01s  | 81,093  | 20,131,472 | 167,693 |
| 01GC1G3TJJSK8JYX3M9DEJJC94 | 03-09-2022 08:00:00 | 03-09-2022 10:00:00 | 1h59m59.969s | 38h0m0.031s | 81,091  | 20,132,572 | 167,691 |
| 01GC1PZHTJFN419V89XGRX800E | 03-09-2022 10:00:00 | 03-09-2022 12:00:00 | 1h59m59.898s | 38h0m0.102s | 81,458  | 20,140,768 | 168,131 |
| 01GC1XV92MY7D3F8F53ZRZNDRR | 03-09-2022 12:00:00 | 03-09-2022 14:00:00 | 1h59m59.884s | 38h0m0.116s | 82,186  | 20,275,212 | 169,108 |
| 01GC24Q0AJY8PZ1NDY5ZBECF6A | 03-09-2022 14:00:00 | 03-09-2022 16:00:00 | 1h59m59.982s | 38h0m0.018s | 82,270  | 20,296,417 | 169,593 |
| 01GC2BJQJNHHDGTGBHCNC053MX | 03-09-2022 16:00:00 | 03-09-2022 18:00:00 | 1h59m59.954s | 38h0m0.046s | 81,243  | 20,227,640 | 168,494 |
| 01GC2JEETKM7AV9X5SA2JZ7P83 | 03-09-2022 18:00:00 | 03-09-2022 20:00:00 | 1h59m59.972s | 38h0m0.028s | 81,245  | 20,230,932 | 168,585 |
| 01GC2SA62HN8YYQ72NT1BWQFD0 | 03-09-2022 20:00:00 | 03-09-2022 22:00:00 | 1h59m59.974s | 38h0m0.026s | 81,242  | 20,231,448 | 168,545 |
| 01GC305XANNG2BBRGWHF4SHN31 | 03-09-2022 22:00:00 | 04-09-2022 00:00:00 | 1h59m59.863s | 38h0m0.137s | 81,255  | 20,229,427 | 168,598 |
| 01GC371MJHR333HQ2VGSP2GCAV | 04-09-2022 00:00:00 | 04-09-2022 02:00:00 | 1h59m59.965s | 38h0m0.035s | 81,543  | 20,233,964 | 168,892 |
| 01GC3DXBTK69GKXWJ2H68EBCTY | 04-09-2022 02:00:00 | 04-09-2022 04:00:00 | 1h59m59.708s | 38h0m0.292s | 81,282  | 20,229,958 | 168,627 |
| 01GC3MS32J5F47YXQWHBY9V0DJ | 04-09-2022 04:00:00 | 04-09-2022 06:00:00 | 1h59m59.9s   | 38h0m0.1s   | 81,315  | 20,231,860 | 168,655 |
| 01GC3VMTAP9A1WA9SSXVRJKPNC | 04-09-2022 06:00:00 | 04-09-2022 08:00:00 | 1h59m59.9s   | 38h0m0.1s   | 81,263  | 20,230,929 | 168,609 |
| 01GC42GHK2YFS7X4E8J47QX2NQ | 04-09-2022 08:00:00 | 04-09-2022 10:00:00 | 1h59m59.999s | 38h0m0.001s | 81,248  | 20,232,463 | 168,595 |
| 01GC49C8TG8F473X7NXNS3QW2J | 04-09-2022 10:00:00 | 04-09-2022 12:00:00 | 1h59m59.715s | 38h0m0.285s | 81,253  | 20,231,504 | 168,609 |
| 01GC4G802PY3RXTAPK2JT330P1 | 04-09-2022 12:00:00 | 04-09-2022 14:00:00 | 1h59m59.926s | 38h0m0.074s | 82,325  | 20,356,757 | 169,781 |
| 01GC4Q3QAJMPGFQMMYZZMSW67J | 04-09-2022 14:00:00 | 04-09-2022 16:00:00 | 1h59m59.984s | 38h0m0.016s | 81,322  | 20,304,796 | 169,168 |
| 01GC4XZEJKGB553CX7PDH07X2R | 04-09-2022 16:00:00 | 04-09-2022 18:00:00 | 1h59m59.969s | 38h0m0.031s | 81,314  | 20,302,723 | 169,163 |
| 01GC54V5TJZ7SYJAG72NC55A7X | 04-09-2022 18:00:00 | 04-09-2022 20:00:00 | 1h59m59.752s | 38h0m0.248s | 81,316  | 20,303,905 | 169,163 |
| 01GC5BPX2J9Z4EPHJ957YH635P | 04-09-2022 20:00:00 | 04-09-2022 22:00:00 | 1h59m59.996s | 38h0m0.004s | 81,394  | 20,303,600 | 169,249 |
| 01GC5JJMAQ0J63QRNAN2SK33WX | 04-09-2022 22:00:00 | 05-09-2022 00:00:00 | 1h59m59.907s | 38h0m0.093s | 81,383  | 20,304,463 | 169,238 |
| 01GC5SEBJPWZG5KFEZN31DXXD4 | 05-09-2022 00:00:00 | 05-09-2022 02:00:00 | 1h59m59.971s | 38h0m0.029s | 81,384  | 20,304,130 | 169,237 |
| 01GC60A2TPKYEAPRERV539GG2B | 05-09-2022 02:00:00 | 05-09-2022 04:00:00 | 1h59m59.796s | 38h0m0.204s | 81,426  | 20,303,163 | 168,582 |
| 01GC675T2JFD3C7C9YBWMVFXVT | 05-09-2022 04:00:00 | 05-09-2022 06:00:00 | 1h59m59.937s | 38h0m0.063s | 81,423  | 20,303,382 | 169,280 |
| 01GC6E1HAMWNC32218DAK2N4DX | 05-09-2022 06:00:00 | 05-09-2022 08:00:00 | 1h59m59.972s | 38h0m0.028s | 81,317  | 20,300,978 | 168,496 |
| 01GC6MX8JN1GN8AB4CX5TXEFNQ | 05-09-2022 08:00:00 | 05-09-2022 10:00:00 | 1h59m59.795s | 38h0m0.205s | 81,316  | 20,301,146 | 168,498 |
| 01GC6VRZTR6W628EXE2025QTEX | 05-09-2022 10:00:00 | 05-09-2022 12:00:00 | 1h59m59.812s | 38h0m0.188s | 81,361  | 20,303,621 | 169,217 |

level=info ts=2022-09-05T14:50:02.457335636Z caller=main.go:160 msg=exiting
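For reference, the new thanos gateway exposed on thanos.internal.admin.swh.network:19095 is
presumably a thanos store instance pointed at that same bucket, along these lines (a sketch:
the file paths and the http port are assumptions):

thanos store \
    --objstore.config-file=/etc/thanos/objstore-archive-staging.yaml \
    --data-dir=/var/lib/thanos/store-archive-staging \
    --grpc-address=0.0.0.0:19095 \
    --http-address=0.0.0.0:19096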
[4]
root@thanos:~# cat /etc/thanos/query-sd.yaml
---
- targets:
  - mmca-thanos.softwareheritage.org:443
  - pergamon.internal.softwareheritage.org:19090
  - thanos.internal.admin.swh.network:19093  # <- thanos gateway service: historical data from pergamon
  - thanos.internal.admin.swh.network:19094  # <- thanos gateway service: mmca
  - thanos.internal.admin.swh.network:19095  # <- thanos gateway service: new archive-staging cluster
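The thanos query service picks those targets up through file-based service discovery, presumably
with something like the following (a sketch, the actual flags on the thanos host may differ):

thanos query \
    --http-address=0.0.0.0:10902 \
    --store.sd-files=/etc/thanos/query-sd.yaml \
    --store.sd-interval=5m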
[5]
Sep 05 14:56:12 thanos thanos[1196354]: level=warn ts=2022-09-05T14:56:12.077043049Z caller=proxy.go:338 component=proxy
  request="min_time:1662389472041 max_time:1662389772041 matchers:<name:\"__name__\" value:\"swh_loader_git_total\" > aggregates:COUNT aggregates:SUM partial_response_disabled:true "
  err="No StoreAPIs matched for this query"
  stores="store Addr: mmca-thanos.softwareheritage.org:443 LabelSets: {environment=\"production\", replica=\"0\", tenant=\"mmca\"} Mint: 1662372007232 Maxt: 9223372036854775807 filtered out: __address__ mmca-thanos.softwareheritage.org:443 does not match debug store metadata matchers: [[__address__=\"thanos.internal.admin.swh.network:19095\"]];
  store Addr: pergamon.internal.softwareheritage.org:19090 LabelSets: {replica=\"0\", tenant=\"historical-data\"} Mint: 1630627200000 Maxt: 9223372036854775807 filtered out: __address__ pergamon.internal.softwareheritage.org:19090 does not match debug store metadata matchers: [[__address__=\"thanos.internal.admin.swh.network:19095\"]];
  store Addr: thanos.internal.admin.swh.network:19094 LabelSets: {environment=\"production\", replica=\"0\", tenant=\"mmca\"} Mint: 1659520677281 Maxt: 1662379200000 filtered out: does not have data within this time period: [1662389472041,1662389772041]. Store time ranges: [1659520677281,1662379200000];
  store Addr: thanos.internal.admin.swh.network:19095 LabelSets: {cluster_name=\"archive-staging\", domain=\"staging\", environment=\"staging\", infrastructure=\"kubernetes\", prometheus=\"cattle-monitoring-system/rancher-monitoring-prometheus\", prometheus_replica=\"prometheus-rancher-monitoring-prometheus-0\"} Mint: 1662109506179 Maxt: 1662379200000 filtered out: does not have data within this time period: [1662389472041,1662389772041]. Store time ranges: [1662109506179,1662379200000];
  store Addr: thanos.internal.admin.swh.network:19093 LabelSets: {replica=\"0\", tenant=\"historical-data\"} Mint: 1625378400000 Maxt: 1662379200000 filtered out: does not have data within this time period: [1662389472041,1662389772041]. Store time ranges: [1625378400000,1662379200000]"
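For what it's worth, converting the millisecond timestamps from that warning (dropping the last
three digits) shows the query window lies entirely after the Maxt advertised by the archive-staging
gateway on :19095, hence the "does not have data within this time period" filtering, presumably
because the gateway only serves blocks already uploaded to the bucket:

$ date -u -d @1662389472   # query min_time -> Mon Sep  5 14:51:12 UTC 2022
$ date -u -d @1662389772   # query max_time -> Mon Sep  5 14:56:12 UTC 2022
$ date -u -d @1662379200   # :19095 Maxt    -> Mon Sep  5 12:00:00 UTC 2022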