Poll for backlog in background thread instead of inline #31697

Draft · Naireen wants to merge 2 commits into master from backlog_async
Conversation

@Naireen (Contributor) commented Jun 27, 2024

Poll for backlog in background thread instead of inline

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@Naireen (Author) commented Jul 18, 2024

R: @scwhittle

Contributor commented:

Assigning reviewers. If you would like to opt out of this review, comment assign to next reviewer:

R: @kennknowles for label java.
R: @Abacn for label io.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

Contributor commented:

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

return offsetConsumer.position(topicPartition);
}
},
1,
Contributor commented on this change:

make this a constant for now so it's more readable/discoverable?
we eventually might want to make it an option or increase it by default to reduce polling.
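As an illustration of that suggestion, here is a minimal sketch of scheduling the backlog poll against a named constant instead of a bare 1. The class, field, and method names here are assumptions for the sketch, not the PR's code; only the interval constant mirrors what the diff below introduces.

    import java.util.Collections;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;

    class BacklogPollerSketch {
      // Named constant instead of a bare literal, so the interval is easy to
      // find and could later become a user-facing option.
      private static final int OFFSET_UPDATE_INTERVAL_SECONDS = 1;

      private final ScheduledExecutorService backlogFetcher =
          Executors.newSingleThreadScheduledExecutor();
      private final Map<TopicPartition, Long> endOffsets = new ConcurrentHashMap<>();

      void start(Consumer<byte[], byte[]> offsetConsumer, TopicPartition topicPartition) {
        // The offsetConsumer is only touched from this single scheduler thread,
        // since Kafka consumers are not thread-safe.
        backlogFetcher.scheduleAtFixedRate(
            () -> {
              offsetConsumer.seekToEnd(Collections.singleton(topicPartition));
              endOffsets.put(topicPartition, offsetConsumer.position(topicPartition));
            },
            0,
            OFFSET_UPDATE_INTERVAL_SECONDS,
            TimeUnit.SECONDS);
      }
    }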

@@ -148,6 +150,12 @@
abstract class ReadFromKafkaDoFn<K, V>
extends DoFn<KafkaSourceDescriptor, KV<KafkaSourceDescriptor, KafkaRecord<K, V>>> {

private static final int OFFSET_UPDATE_INTERVAL_SECONDS = 1;

private transient ScheduledExecutorService backlogFetcherThread =
Contributor commented on this change:

this is per-dofn, it seems like it should be some static cache across all of the dofns.

You could see what I was doing for caching the background reading threads in https://github.com/apache/beam/pull/31786/files#diff-0f4b915b82782e618addba2e443f07b3d4b86ad8b77c2bf7adeb0ad1d6864adb
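A rough sketch of that shared-cache idea follows; it is not the code from the linked PR, and the class and member names are made up for illustration. The point is one static scheduler and one backlog map per worker JVM, keyed by TopicPartition, so every DoFn instance reads and refreshes the same entry.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.kafka.common.TopicPartition;

    final class SharedBacklogCache {
      // One daemon scheduler per worker JVM, shared by all DoFn instances,
      // instead of a transient per-DoFn executor.
      private static final ScheduledExecutorService FETCHER =
          Executors.newSingleThreadScheduledExecutor(
              r -> {
                Thread t = new Thread(r, "kafka-backlog-fetcher");
                t.setDaemon(true);
                return t;
              });

      // Latest observed end offset per partition; -1 means "not fetched yet".
      private static final ConcurrentMap<TopicPartition, AtomicLong> END_OFFSETS =
          new ConcurrentHashMap<>();

      static AtomicLong endOffsetFor(TopicPartition tp) {
        // DoFn instances that process the same partition share one entry.
        return END_OFFSETS.computeIfAbsent(tp, unused -> new AtomicLong(-1L));
      }

      static ScheduledExecutorService fetcher() {
        return FETCHER;
      }

      private SharedBacklogCache() {}
    }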

Contributor commented:

Note that this is a little different than the UnboundedReader where there is a single reader instance for each partition+topic and the caching across dofns is managed by the runner with the ReaderCache. Since these are just normal dofns, there may be many different DoFn instances (up to 1-per processing thread) and the same partition+topic may bounce between using different dofn instances.

@@ -277,7 +292,9 @@ protected void finalize() {

@Override
public long estimate() {
-    return memoizedBacklog.get();
+    memoizedBacklog = Preconditions.checkStateNotNull(backlogMap.get(topicPartition));
+    Long backlogValue = memoizedBacklog.get();
Contributor commented on this change:

AFAICT this estimate is also called from the main processing thread and thus may still block issuing the backlog request.

Instead we may want to have estimate be non blocking and just return whatever the latest observed offset was, and separately change the background fetching thread to force refreshes of all the backlogs.
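A minimal sketch of a non-blocking estimate along those lines, with hypothetical names rather than the PR's estimator: the background fetcher is the only code that talks to the broker, and estimate() just returns the last value it published, which may be stale by up to one poll interval.

    import java.util.concurrent.atomic.AtomicLong;

    class NonBlockingEndOffsetEstimator {
      // Last end offset published by the background fetcher; -1 means "unknown".
      private final AtomicLong latestEndOffset = new AtomicLong(-1L);

      // Called only from the background fetcher thread after each broker request.
      void update(long endOffset) {
        latestEndOffset.set(endOffset);
      }

      // Called from the main processing thread: returns the last observation
      // immediately and never issues a blocking request to the broker.
      long estimate() {
        return latestEndOffset.get();
      }
    }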

@Naireen Naireen marked this pull request as draft August 13, 2024 21:06
@Naireen Naireen force-pushed the backlog_async branch 2 times, most recently from c63defc to 51e4854 Compare August 13, 2024 21:24