Grrr, come on, I already opened one... T2746...
Nov 9 2020
Nov 4 2020
Almost, one last range eee... to fff... to run [1]
Nov 3 2020
ETA: it should be roughly done during the night, tomorrow morning for sure (currently at the range between 888... and 999...).
Nov 2 2020
Running on belvedere in a root tmux session btw (the first delete query passed [1], I triggered the rest in that tmux session):
Now on to your initial hints about ranges which I missed the first time...
(Thanks ;)
In the current situation of the replica, you can just drop the table from the publication while you're doing the removal.
Ack, thanks.
In the current situation of the replica, you can just drop the table from the publication while you're doing the removal.
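For illustration, dropping and later re-adding the table from the publication would look roughly like this (the publication, table and host names below are made up; they are not in this excerpt):

# on the primary, before running the big delete
psql -c "ALTER PUBLICATION some_publication DROP TABLE some_table;"
# ... run the delete ...
# once done, add the table back and refresh the subscription on the replica
psql -c "ALTER PUBLICATION some_publication ADD TABLE some_table;"
psql -h some-replica -c "ALTER SUBSCRIPTION some_subscription REFRESH PUBLICATION;"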
Instead of a LIKE query, which is not indexed, you should use a ranged query, which will be able to use an index instead (id >= 'swh:1:snp:' and id < 'swh:1:snp;').
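As a sketch of the difference (the table name is hypothetical; only the predicate comes from the advice above):

# a prefix LIKE typically cannot use the default btree index here, so it scans the whole table
psql -c "DELETE FROM some_table WHERE id LIKE 'swh:1:snp:%';"
# the half-open range over the same prefix can use the index: ';' is the character right after ':' in ASCII,
# so the range [ 'swh:1:snp:', 'swh:1:snp;' ) covers exactly the ids starting with 'swh:1:snp:'
psql -c "DELETE FROM some_table WHERE id >= 'swh:1:snp:' AND id < 'swh:1:snp;';"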
What's suggested by the psql prompt seems to be on point. As far as my understanding goes, [2] from [1] would actually write into the replication logs the rows to be deleted from the replica.
@moranegg the table grew over 500GB over a couple of months.
What's the problem in the ERMDS?
Can you be more precise in the title? It seems it's a formatting issue for NPM or PyPI packages... but I'm not sure.
It's actually less happy from the main db though:
Sep 22 2020
Sep 3 2020
Sep 1 2020
Nov 12 2019
Rebase to latest production branch
Jul 16 2019
Grafanalib dashboards moved to swh-grafana-dashboards in rTGRAee5d3074bf58.
Jul 5 2019
Dec 1 2017
Nov 28 2017
As for the icinga2 issue, this was fixed with:
There was a duplicate node definition in puppetdb for a worker03.euwest.azure.euwest.azure.internal.softwareheritage.org node. The fqdn must have been set wrong at some point during the deployment.
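For reference, clearing such a stale record from puppetdb usually looks like the following (a generic sketch, not necessarily the exact command that was used here):

# deactivate the bogus node so it stops showing up in puppetdb (and in anything exported from it)
puppet node deactivate worker03.euwest.azure.euwest.azure.internal.softwareheritage.org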
Nov 23 2017
Oh yeah, also for icinga, we are possibly quite late there:
Nov 22 2017
Sep 18 2017
Sep 12 2017
Sep 7 2017
May 13 2016
Mar 10 2016
Mar 5 2016
Feb 29 2016
Feb 26 2016
This now works with proper dependencies.
Feb 25 2016
Feb 23 2016
I think that in the medium term we should get rid of flower and replicate the useful functionality through the scheduler interface.
Feb 22 2016
Feb 19 2016
Feb 4 2016
We now have a backup of all the contents that were stored on uffizi at the end of our first batch import.
Jan 24 2016
Full benchmark data are available at: https://intranet.softwareheritage.org/index.php?title=User:StefanoZacchiroli/Disk_array_benchmark
For completeness, here are the slowdown benchmarks for prado's SSD disks (bottom line: the slowdown seems to be present there too, but "only" of the order of 20% or so):
Should be fixed.
Jan 23 2016
Jan 22 2016
The main bottleneck turned out to be seek time, which for 1.6B files really adds up.
By looking at bonnie++ output and doing some math, we have concluded that transfer slowness is essentially dominated by seek time.
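Back-of-the-envelope, assuming an average seek of ~8 ms on spinning disks: 1.6e9 files × 8 ms ≈ 1.3e7 s, i.e. roughly 150 days of pure seek time for a single sequential reader, so seeks dwarf the actual transfer time.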
Jan 21 2016
Here are some bonnie++ tests on both uffizi and banco. They seem consistent with the fact that reads from the object storage on uffizi are much slower (by a factor of about 3) than on banco, but further investigation is needed.
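For anyone wanting to reproduce this kind of test, a typical bonnie++ invocation looks like the following (the path and sizes are placeholders, not the exact parameters used on uffizi/banco):

# run against the object storage mount point; -u sets the user to run as (required when started as root),
# -s is the test file size in MB (should exceed RAM), -n the number of small files in multiples of 1024
bonnie++ -u root -d /srv/storage/objects -s 32768 -n 128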
This is back on hold now, as we discovered that the read performance on uffizi from the object store is not as good as it should be.
PostgreSQL has now been updated to 9.5 (and split into three clusters).
pgbouncer is now listening on port 5432, and postgres 9.4 on port 5439.
Database cluster initialization and credentials sync (-g: dump only global objects, i.e. roles and tablespaces):
pg_dumpall -g -p <old db port> | psql -p <new db port>
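For instance, with illustrative port numbers (neither is necessarily what was used here):

# copy global objects (roles and tablespaces) only, no data, from the old cluster to the new one
pg_dumpall -g -p 5439 | psql -p 5433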