Poll only mode for environments without listen/notify #281
Conversation
		StatusFunc: client.monitor.SetProducerStatus,
		Workers:    config.Workers,
	})
	client.monitor.InitializeProducerStatus(queue)
Moved initializing the producers up into New here. The helper function was so small, and this makes their dependencies super clear since they're initialized alongside all the other services.
	// TODO: for now we only support a single instance per database/schema.
	// If we want to provide isolation within a single database/schema,
	// we'll need to add a client config for this.
	instanceNameDefault = "default"
Moved this down from client so it's defined closer to where it's used. Totally fine to move it back up as necessary, but there are no plans AFAIK to make use of it anytime soon, and this keeps things clearer for now.
It goes away with app-level notifications anyway. Will be a small conflict but no worries.
kk, hopefully not too big of a problem. Thx.
(force-pushed from ce4715d to 3a33aae)
@bgentry Going to quiet down on changes after this, but we were 95% of the way to this feature and it required fairly minimal changes, so I decided to take it through. Thoughts?
(force-pushed from 3a33aae to 02ec6d4)
(force-pushed from 7554271 to 403cc80)
client_test.go
Outdated
@@ -180,34 +180,47 @@ func runNewTestClient(ctx context.Context, t *testing.T, config *Config) *Client
	return client
}

-func Test_Client(t *testing.T) {
+func Test_Client_Standard(t *testing.T) {
Were you originally renaming this because you planned to have a separate poll-only test block? As of now it looks like you just added a single case in here and don't need the rename.
Ugh, I think I was trying to rename this to prevent overzealous test matching when trying to run a single test with `-run`. This is a long issue to explain, so reverting this and will start a separate discussion about it if necessary.
@@ -210,7 +210,7 @@ func (p *producer) StartWorkContext(fetchCtx, workCtx context.Context) error {
		fetchLimiter.Call()
	}
	// TODO(brandur): Get rid of this retry loop after refactor.
-	insertSub, err = notifier.ListenRetryLoop(fetchCtx, &p.BaseService, p.config.Notifier, notifier.NotificationTopicInsert, handleInsertNotification)
+	insertSub, err = p.config.Notifier.Listen(fetchCtx, notifier.NotificationTopicInsert, handleInsertNotification)
comment above still relevant?
Oops, totally thought I killed these. Thanks!
@@ -241,7 +241,7 @@ func (p *producer) StartWorkContext(fetchCtx, workCtx context.Context) error {
	)
	}
	// TODO(brandur): Get rid of this retry loop after refactor.
-	jobControlSub, err = notifier.ListenRetryLoop(fetchCtx, &p.BaseService, p.config.Notifier, notifier.NotificationTopicJobControl, handleJobControlNotification)
+	jobControlSub, err = p.config.Notifier.Listen(fetchCtx, notifier.NotificationTopicJobControl, handleJobControlNotification)
comment above still relevant?
(force-pushed from 403cc80 to c55fb5b)
Here, add a new River client option in `Config.PollOnly` which causes the client to start in "poll only" mode, where a notifier isn't initialized and no `LISTEN` statements are issued. This has the downside that events like a leadership resignation or a new job becoming available aren't noticed as quickly, but the advantage of making River operable on systems where listen/notify isn't supported, like PgBouncer in transaction mode, or maybe even eventually MySQL or SQLite.

A slightly unrelated change that I also made here was to give the elector a more traditional `Config` struct (like other services take) that encompasses more of its configuration instead of raw constructor parameters. I thought I needed to do this originally to better expose a custom elect interval for the client's `PollOnly` test, but it turned out that my slow test was actually due to a different problem and the extra configuration wasn't actually necessary. It seemed to me to clean things up somewhat though, so I left the refactor in.

I also removed the listen retry loop on client start up found in the elector and producer. These are probably a bad idea because they may hide a real database problem as they enter their retry loops, and cause the client to require a very lengthy amount of time for its `Start` to fail because of the built-in sleep backoffs. But we'd added them over concerns that the notifier's previous implementation (prior to #253) of discarding all listen/notify problems may have papered over errors that would've otherwise been returned for people trying to use a River client against, say, a PgBouncer operating in transaction pooling mode.

So instead, we now fail fast if there's a problem with listen/notify in their database, and for those wanting to use PgBouncer in transaction pooling mode, we can now recommend the much cleaner approach of activating `PollOnly` instead.

Users can implement their own retry loop on client `Start` if they'd like to protect against intermittent problems on listen/notify start up, but personally I wouldn't expect this to ever be desirable. (The alternative is to let the program fail fast and have its supervisor resurrect it, as most programs would do for any other start up hiccup.) The program will already fail fast under most circumstances due to the `SELECT` it performs on start.
(force-pushed from c55fb5b to 6caf6b0)
Ty.