All Stories
Sep 3 2021
Build is green
Build is green
rebase
rebase
rebase
I discussed this option with Kat from Data Current (in charge of the SWH stories interface), and she was wondering whether the highlighted text will be centered.
It seems from your example that this will be the case.
- Adapt according to review:
- Distinguish correctly between labels and values
- Make last_update a timestamp
- Drop '_metrics' from the name as prometheus will add it itself
- Keep name and instance_name as distinguished labels
- db1.staging: Activate metrics for staging scheduler db
- Drop the secondary cluster as it's an unnecessary filter
- Make the db name a regexp to be compatible with both prod and staging dbs (see the regexp sketch just after this list)
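Purely as an illustration of the last point (the database names and the place where the regexp would be used are assumptions, not taken from the actual configuration), a single pattern can match both the production and staging scheduler databases:

import re

# Hypothetical database names, for illustration only.
DB_NAME_RE = re.compile(r"^(softwareheritage|swh)-scheduler(-staging)?$")

assert DB_NAME_RE.match("softwareheritage-scheduler")   # assumed prod name
assert DB_NAME_RE.match("swh-scheduler-staging")         # assumed staging name
assert not DB_NAME_RE.match("softwareheritage-web")      # other dbs stay excluded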
This is really good!
@anlambert we discussed this task this morning with @ardumont and @vlorentz.
I want to start working on improving the deposit admin view to open it up to deposit clients.
Would you have time during September to help with this task? (It is a roadmap task, btw.)
We should keep lister_name and lister_instance as separate labels. Labels don't cost anything as they're stored once per time series (and it would allow us to generate aggregated metrics for all instances of a given lister).
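A minimal sketch of that label layout with the Python prometheus_client library (the metric name and example values below are assumptions for illustration, not the actual exporter code):

from prometheus_client import Gauge

# Keeping lister_name and lister_instance as separate labels preserves the
# per-instance detail while still allowing aggregation across instances.
LISTED_ORIGINS = Gauge(
    "swh_lister_listed_origins",      # hypothetical metric name
    "Number of origins found by a lister",
    labelnames=["lister_name", "lister_instance"],
)

# Example values only; a real exporter would read them from the database.
LISTED_ORIGINS.labels(lister_name="gitlab", lister_instance="gitlab.com").set(1234)
LISTED_ORIGINS.labels(lister_name="gitlab", lister_instance="salsa.debian.org").set(567)

With separate labels, a query such as sum by (lister_name) (swh_lister_listed_origins) still yields the aggregate over all instances of a given lister.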
Easiest option would be to add a link to the API endpoint.
To discuss with @rdicosmo
This is possible with the main API:
I've opened a task to try and monitor this pattern [1].
I actually requested https://github.com/tue-alga/CoordinatedSchematization, which did not start for over 45 minutes or so (which is unusual), then noticed that the one above had also been in "scheduled" state for hours, and decided to open a task here.
Thanks for the quick help!
For some unknown reason, that particular origin was scheduled but I don't see any log about that run.
I triggered it again and it finished almost immediately (uneventful).
(was looking into it)
I'd say this one qualifies (in UTC+2):
@mdidas what are the repositories you requested?
- puppet configuration deployed in staging
- read index updated with this script:
#!/bin/bash
Format query
The lag recovered in ~12 hours.
The content of the index looks good (I just cherry-picked a couple of origins).
Sep 2 2021
Also, I just realized I should have mentioned this earlier, @KShivendu.
To reproduce the issue, there is no need for any loader or anything else, just ipython in your venv:
Possibly a composition of a requests adapter [1] and urllib.request.urlretrieve [2] could be enough instead.
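A minimal sketch of that composition, assuming the goal is simply to fetch both http(s):// and ftp:// URLs through one helper (the function name, retry settings and chunk size are made up for illustration, not taken from the loader code):

import urllib.parse
import urllib.request

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def download(url: str, dest: str) -> str:
    """Fetch url into the local file dest (illustrative sketch only)."""
    if urllib.parse.urlparse(url).scheme == "ftp":
        # requests has no FTP support, but urllib.request handles ftp:// URLs.
        urllib.request.urlretrieve(url, dest)
        return dest
    # http(s):// goes through a requests session with a retrying adapter.
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1))
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    with session.get(url, stream=True) as response:
        response.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in response.iter_content(chunk_size=64 * 1024):
                f.write(chunk)
    return dest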
Looks good to me.
17:17 <+ardumont> just out of curiosity, not that i'm on that right now
17:20 <+ardumont> on an elastic node, we'd get
17:20 <+ardumont> ardumont@esnode1:~% apt-cache show journalbeat | grep Version | awk '{print $2}' | sort -V -r | head -1
17:20 <+ardumont> 7.14.1
17:20 <+ardumont> we'd have some choice
17:22 <+olasd> ardumont: no objections. the main concern is that the upstream mapping from journalctl fields to elasticsearch documents has changed, so we'd need to adapt the (overall, very few) filters we have
17:22 <+olasd> and probably some dashboard churn
This could possibly be done using https://pypi.org/project/requests-ftp/
In T3489#69769, @moranegg wrote: @anlambert: Looks very good!
Can you show an example of a code fragment where you view a content with a few lines marked up?
Build is green
undo change to dir tarball name
I found a couple of issues while testing, see inline comments.