The munin query listing the object counts in the archive overestimates them, as it relies on the PostgreSQL tuple count instead of an actual count query. The error propagates to the website, since we publish munin's data there.
We're running a massive reindexing of all contents to add a new hash, and we're likely to do that again in the future. For each of these updates, PostgreSQL creates a new tuple and marks the old one as dead once the transaction completes. Dead tuples are only reclaimed when vacuum runs, which only happens every so often, so the tuple count stays inflated in the meantime.
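For reference, the dead-tuple buildup between vacuum runs can be watched through pg_stat_user_tables. A minimal sketch (the database name and the content table name are placeholders, not the actual plugin code):

```python
import psycopg2

# Placeholders: adjust the DSN and table name for the actual database.
DSN = "dbname=softwareheritage"
TABLE = "content"

conn = psycopg2.connect(DSN)
with conn, conn.cursor() as cur:
    # n_dead_tup counts old tuple versions not yet reclaimed by vacuum.
    cur.execute(
        """
        SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
        FROM pg_stat_user_tables
        WHERE relname = %s
        """,
        (TABLE,),
    )
    live, dead, last_vacuum, last_autovacuum = cur.fetchone()
    print(f"{TABLE}: live={live} dead={dead} "
          f"last_vacuum={last_vacuum} last_autovacuum={last_autovacuum}")
conn.close()
```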
Maybe using the pg_stat_user_tables statistics view (which tracks live and dead tuple counts per table) instead of pg_class would give better data.
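Something like the following could compare the current figure (pg_class.reltuples) with the pg_stat_user_tables estimate (n_live_tup). This is only a sketch with a placeholder database and table name, not the actual munin plugin:

```python
import psycopg2

# Sketch only; DSN and table name are placeholders.
QUERY = """
SELECT c.reltuples::bigint AS pg_class_estimate,
       s.n_live_tup        AS pg_stat_estimate
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relname = %s
"""

conn = psycopg2.connect("dbname=softwareheritage")
with conn, conn.cursor() as cur:
    cur.execute(QUERY, ("content",))
    from_pg_class, from_pg_stat = cur.fetchone()
    print(f"pg_class.reltuples={from_pg_class} "
          f"pg_stat_user_tables.n_live_tup={from_pg_stat}")
conn.close()
```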