You should give a hint in your commit message as to why you are doing this refactoring.
Mon, Feb 17
Wed, Feb 12
Thu, Feb 6
Okay-ish, but the lifecycle of ES-related services/objects is unclear to me.
Thanks for the contribution.
However, you must ensure the tests pass before we can accept it. Note that the tests you modify (in tests/test_storage.py) are executed against all the storage backends (postgres, cassandra, and the in_memory one you are really targeting here), so make sure they still pass with all the backends.
Mon, Feb 3
Fri, Jan 31
Looks good to me, but it would really be nice to have a bit more documentation/explanation of how things work and are organized in Cassandra, both in the code itself and as documentation material in doc/
Wed, Jan 29
Now that I think of it, we can decompose this into stages in the storage pipeline:
- add an input validating proxy high up the stack
- replace the journal writer calls sprinkled in all methods with a journal writing proxy
- add a "don't insert objects" filter low down the stack
so we'd end up with the following pipeline for workers:
- input validation proxy
- object bundling proxy
- object deduplication against read-only proxy
- journal writer proxy
- addition-blocking filter
- underlying read-only storage
and the following pipeline for the "main storage replayer":
- underlying read-write storage
(it's a very short pipeline... a pipedash?)
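The staged decomposition above could be sketched as a chain of storage proxies, each wrapping the next. This is a hypothetical illustration (class and method names are mine, not the actual swh.storage API), covering only the validation, journal-writer, and addition-blocking stages:

```python
# Hypothetical sketch of the proposed proxy pipeline; names are
# illustrative and do not match the real swh.storage interfaces.

class InMemoryStorage:
    """Stand-in for the underlying storage backend."""
    def __init__(self):
        self.objects = {}

    def content_add(self, contents):
        for c in contents:
            self.objects[c["sha1"]] = c
        return len(contents)


class ValidationProxy:
    """Input-validating proxy, high up the stack."""
    def __init__(self, storage):
        self.storage = storage

    def content_add(self, contents):
        for c in contents:
            if "sha1" not in c:
                raise ValueError("content object missing sha1")
        return self.storage.content_add(contents)


class JournalWriterProxy:
    """Writes every object to the journal before forwarding it."""
    def __init__(self, storage, journal):
        self.storage = storage
        self.journal = journal  # stand-in for a Kafka producer

    def content_add(self, contents):
        self.journal.extend(contents)
        return self.storage.content_add(contents)


class AdditionBlockingFilter:
    """'Don't insert objects' filter, low down the stack: swallows
    writes so the underlying storage stays effectively read-only."""
    def __init__(self, storage):
        self.storage = storage

    def content_add(self, contents):
        return 0


# Worker pipeline (bundling and deduplication proxies omitted):
journal = []
store = InMemoryStorage()
pipeline = ValidationProxy(
    JournalWriterProxy(AdditionBlockingFilter(store), journal))
pipeline.content_add([{"sha1": "abc", "data": b"hello"}])
# The object reaches the journal, but never the underlying storage.
```

The "main storage replayer" pipeline would then just be the bare read-write storage with no proxies at all.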
We already discussed this at the time we replaced the journal-publisher with journal-writer. Adding to Kafka after inserting to the DB means that Kafka will be missing some messages, and we would need to run a backfiller on a regular basis to fix it.
Tue, Jan 28
This component would centralize the "has this object already appeared?" logic, as well as the queueing+retry logic, and would replace the current kafka mirror component.
How does that sound?
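The core of such a filter component might look like the following sketch: a "seen" set answering "has this object already appeared?", plus a retry queue for objects whose delivery failed. This is an illustrative assumption, not an existing swh component; the publish callback stands in for a Kafka producer:

```python
from collections import deque

class DedupRetryFilter:
    """Illustrative sketch: deduplicate objects and requeue failed
    deliveries for retry. Names and structure are assumptions."""

    def __init__(self, publish, max_attempts=3):
        self.publish = publish        # e.g. a Kafka producer callback
        self.max_attempts = max_attempts
        self.seen = set()             # "has this object already appeared?"
        self.retry_queue = deque()    # (object_id, attempts_so_far)

    def process(self, object_id):
        """Publish an object unless it is a duplicate; queue it for
        retry on failure. Returns True if published."""
        if object_id in self.seen:
            return False              # duplicate: drop silently
        try:
            self.publish(object_id)
        except Exception:
            self.retry_queue.append((object_id, 1))
            return False
        self.seen.add(object_id)
        return True

    def retry_pending(self):
        """Reattempt queued objects; give up past max_attempts
        (a housekeeping process would then deal with those)."""
        for _ in range(len(self.retry_queue)):
            object_id, attempts = self.retry_queue.popleft()
            try:
                self.publish(object_id)
                self.seen.add(object_id)
            except Exception:
                if attempts + 1 < self.max_attempts:
                    self.retry_queue.append((object_id, attempts + 1))
```

A real component would persist the seen-set and queue (the buffer table the metrics below query) rather than keeping them in memory.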
Key metrics for the filter component:
- kafka consumer offset
- min(latest_attempt) where in_flight = true (time from a message's submission into the buffer until it is (re-)processed by the filter; should stay close to the current time)
- count(*) where given_up = false group by topic (number of objects pending a retry, should be small)
- count(*) where in_flight = true group by topic (number of objects buffered for reprocessing, should be small)
- max(latest_attempt) (last processing time by the requeuing process)
- count(*) where given_up = true (checks whether the housekeeping process is doing its job)
Note: I haven't read the other comment below; I'm just reacting to this one as I read it.
Jan 23 2020
Is this still "a thing"?
Since T1914 is high priority, this one is too.
What is the status of this issue? Do we still face this bug?
Agreed, this no longer needs to be a high-priority task.
Jan 20 2020
Fix typos and address ardumont's comments
closed by 490c2454749679186ffca9cdd3f480e50d2147c2
Jan 17 2020
Jan 16 2020
Not a definitive solution, but as discussed IRL, let's quick-fix the cran lister, then refactor it "the proper way".
Jan 15 2020
I'm fine with the code, but as I already said, I'd really like the commit message to have a paragraph on why this is needed and what problem it solves.