
[GOBBLIN-1922] Add function in Kafka Source to recompute workUnits for filtered partitions #3798

Conversation

hanghangliu
Contributor

Dear Gobblin maintainers,

Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!

JIRA

Description

  • Here are some details about my PR, including screenshots (if applicable):
    Add a new feature in the Kafka source to create work units for selected topic partitions. This feature provides:
  1. Recomputing and splitting tasks (work units) at runtime for selected topic partitions, instead of recomputing for all Kafka topics
  2. Making topic-level replanning feasible
  3. One step closer to dynamic work unit allocation

A follow-up is needed to make the work unit packer able to pack the recomputed work units into a desired number of containers.
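A minimal illustration of the selection step described above (a hypothetical sketch: `selectForRecompute` and the map shapes are invented for this example and are not the actual Gobblin API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PartitionFilterSketch {
  /**
   * Hypothetical illustration: given every known topic partition and a
   * filter of topic -> partitions to recompute, return only the selected
   * topic-partitions; everything else keeps its existing work units.
   */
  static List<String> selectForRecompute(Map<String, List<Integer>> allTopicPartitions,
                                         Map<String, List<Integer>> filteredTopicPartitionMap) {
    List<String> selected = new ArrayList<>();
    for (Map.Entry<String, List<Integer>> e : filteredTopicPartitionMap.entrySet()) {
      List<Integer> known = allTopicPartitions.getOrDefault(e.getKey(), List.of());
      for (Integer p : e.getValue()) {
        if (known.contains(p)) { // ignore partitions the source does not actually know about
          selected.add(e.getKey() + "-" + p);
        }
      }
    }
    return selected;
  }
}
```

Only the filtered topics are recomputed; topics absent from the filter (or unknown partitions) are left untouched.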

Tests

  • My PR adds the following unit tests OR does not need testing for this extremely good reason:

Commits

  • My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
    1. Subject is separated from body by a blank line
    2. Subject is limited to 50 characters
    3. Subject does not end with a period
    4. Subject uses the imperative mood ("add", not "adding")
    5. Body wraps at 72 characters
    6. Body explains "what" and "why", not "how"

@codecov-commenter

codecov-commenter commented Oct 13, 2023

Codecov Report

Merging #3798 (0a351e2) into master (6266a12) will increase coverage by 0.31%.
Report is 3 commits behind head on master.
The diff coverage is 61.70%.

@@             Coverage Diff              @@
##             master    #3798      +/-   ##
============================================
+ Coverage     47.38%   47.70%   +0.31%     
- Complexity    10993    11051      +58     
============================================
  Files          2155     2155              
  Lines         85228    85260      +32     
  Branches       9478     9483       +5     
============================================
+ Hits          40386    40673     +287     
+ Misses        41183    40900     -283     
- Partials       3659     3687      +28     
Files Coverage Δ
.../kafka/client/AbstractBaseKafkaConsumerClient.java 0.00% <ø> (ø)
...in/source/extractor/extract/kafka/KafkaSource.java 45.70% <61.70%> (+39.37%) ⬆️

... and 21 files with indirect coverage changes


@@ -306,6 +303,96 @@ public String apply(KafkaTopic topic) {
}
}

public List<WorkUnit> getWorkunitsForFilteredPartitions(SourceState state, Map<String, List<String>> filteredTopicPartitionMap, int minContainer) {
Contributor

Why is min container needed here, and how will we determine it?

Contributor Author

It gives an option to specify a numContainers that is used by KafkaTopicGroupingWorkUnitPacker#pack. How it is used is determined by the concrete KafkaWorkUnitPacker implementation.

Contributor Author

My next PR will utilize this minContainer in KafkaTopicGroupingWorkUnitPacker; it is not respected in the current implementation.

@hanghangliu hanghangliu force-pushed the GOBBLIN-1922-Create-work-units-for-selected-topics-partitions-in-Kafka-source branch from da2ce02 to 3f43990 on October 19, 2023 21:41
@hanghangliu hanghangliu requested a review from ZihanLi58 October 19, 2023 21:42
//determine the number of mappers
int maxMapperNum =
state.getPropAsInt(ConfigurationKeys.MR_JOB_MAX_MAPPERS_KEY, ConfigurationKeys.DEFAULT_MR_JOB_MAX_MAPPERS);
createEmptyWorkUnitsForSkippedPartitions(kafkaTopicWorkunitMap, topicSpecificStateMap, state);
Contributor

I feel this might cause a problem: we intentionally skip some topic partitions, but it seems this method will add them back?

Contributor Author

Added a condition to check whether filteredTopicPartition is present; the call is only invoked when it's not present.
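The fix described above can be sketched as a simple guard (hypothetical; the method and parameter names here are illustrative, not the actual ones in KafkaSource):

```java
import java.util.List;
import java.util.Map;

public class SkippedPartitionsGuard {
  /**
   * Hypothetical guard: empty work units for skipped partitions are only
   * backfilled when no explicit partition filter was supplied, so that
   * intentionally filtered-out partitions are not added back.
   */
  static boolean shouldCreateEmptyWorkUnits(Map<String, List<String>> filteredTopicPartitionMap) {
    return filteredTopicPartitionMap == null || filteredTopicPartitionMap.isEmpty();
  }
}
```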

List<WorkUnit> workUnitList = kafkaWorkUnitPacker.pack(workUnits, numOfMultiWorkunits);

addTopicSpecificPropsToWorkUnits(kafkaTopicWorkunitMap, topicSpecificStateMap);
List<WorkUnit> workUnitList = kafkaWorkUnitPacker.pack(kafkaTopicWorkunitMap, numOfMultiWorkunits);
Contributor

Does this mean that if we provide filtered topic partitions, we need to make sure there is only one topic? Otherwise, how will the minimum container take effect?

Contributor Author

@hanghangliu hanghangliu Oct 25, 2023

The design is meant to handle multiple topics.
My idea is that in KafkaTopicGroupingWorkUnitPacker, before squeezing all MultiWorkUnits, we construct a priority queue based on WU weight and keep splitting MultiWorkUnits until the minContainer count is reached or nothing can be split further (each MultiWorkUnit contains only one partition).

Contributor

I'm not sure whether you can achieve the goal with this method: in the scenario where we pack all partitions into the same container, they are normally pretty light in weight.

Contributor Author

@hanghangliu hanghangliu Oct 25, 2023

Take this example: before calling squeezeMultiWorkUnits for mwuGroups, it contains MultiWorkUnits as:
topicA: partitions 0-7
topicA: partitions 8-15
topicB: partitions 0-7

Say we set numContainers to 5, so we can further split the MultiWorkUnits into:
topicA: partitions 0-3
topicA: partitions 4-7
topicA: partitions 8-15
topicB: partitions 0-3
topicB: partitions 4-7
Then we squeeze them into 5 WUs, which will result in 5 containers. Do you think this process makes sense?
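The splitting process in this example can be sketched with a priority queue (a hypothetical, simplified model: `Group` stands in for a MultiWorkUnit, and weight is approximated by partition count, which later comments in this thread note is not always accurate):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Simplified stand-in for a MultiWorkUnit: one topic and a contiguous partition range.
class Group {
  final String topic;
  final int from, to; // inclusive partition range
  Group(String topic, int from, int to) { this.topic = topic; this.from = from; this.to = to; }
  int size() { return to - from + 1; }
}

public class SplitSketch {
  /**
   * Keep halving the largest group until we have at least minContainer
   * groups, or every group holds a single partition and cannot be split.
   */
  static List<Group> splitUntil(List<Group> groups, int minContainer) {
    PriorityQueue<Group> pq =
        new PriorityQueue<>((a, b) -> Integer.compare(b.size(), a.size()));
    pq.addAll(groups);
    while (pq.size() < minContainer && pq.peek().size() > 1) {
      Group top = pq.poll();
      int mid = top.from + top.size() / 2;
      pq.add(new Group(top.topic, top.from, mid - 1)); // first half
      pq.add(new Group(top.topic, mid, top.to));       // second half
    }
    return new ArrayList<>(pq);
  }
}
```

With the thread's example (topicA 0-7, topicA 8-15, topicB 0-7) and minContainer = 5, this ends with 5 groups; which groups get split depends only on their sizes here, whereas the real packer would use measured work unit weights.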

Contributor Author

But you're right that in some cases the WUs can be very lightweight, so maybe I can combine weight and partition count in the queue comparator.
Furthermore, if the top of the priority queue contains only one partition, we can keep iterating through the queue until we find one that contains multiple partitions.

Contributor

I would rather suggest keeping the logic there simple, but making sure that when you call the method, you call it for one single topic (or even one single failed task) at a time to "split" the original work unit.

Contributor

but anyway, you can do this in a separate PR

@hanghangliu hanghangliu requested a review from ZihanLi58 October 25, 2023 20:57
Contributor

@ZihanLi58 ZihanLi58 left a comment

+1

@ZihanLi58 ZihanLi58 merged commit 918716f into apache:master Oct 26, 2023
6 checks passed
Will-Lo added a commit to Will-Lo/incubator-gobblin that referenced this pull request Dec 20, 2023
* [GOBBLIN-1921] Properly handle reminder events (apache#3790)

* Add millisecond level precision to timestamp cols & proper timezone conversion

	- existing tests pass with minor modifications

* Handle reminder events properly

* Fix compilation errors & add isReminder flag

* Add unit tests

* Address review comments

* Add newline to address comment

* Include reminder/original tag in logging

* Clarify timezone issues in comment

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1924] Reminder event flag true (apache#3795)

* Set reminder event flag to true for reminders

* Update unit tests

* remove unused variable

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1923] Add retention for lease arbiter table (apache#3792)

* Add retention for lease arbiter table

* Replace blocking thread with scheduled thread pool executor

* Make Calendar instance thread-safe

* Rename variables, make values more clear

* Update timestamp related cols

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* Change debug statements to info temporarily to debug (apache#3796)

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1926] Fix Reminder Event Epsilon Comparison (apache#3797)

* Fix Reminder Event Epsilon Comparison

* Add TODO comment

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1930] Improve Multi-active related logs and metrics (apache#3800)

* Improve Multi-active related logs and metrics

* Add more metrics and logs around forwarding dag action to DagManager

* Improve logs in response to review comments

* Replace flow execution id with trigger timestamp from multi-active

* Update flow action execution id within lease arbiter

* Fix test & make Lease Statuses more lean

* Update javadoc

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* add dataset root in some common type of datasets

* [GOBBLIN-1927] Add topic validation support in KafkaSource, and add TopicNameValidator (apache#3793)

* * Add generic topic validation support
* Add the first validator TopicNameValidator into the validator chain, as a refactor of existing codes

* Refine to address comments

* Refine

---------

Co-authored-by: Tao Qin <[email protected]>

* [GOBBLIN-1931] Refactor dag action updating method & add clarifying comment (apache#3801)

* Refactor dag action updating method & add clarifying comment

* Log filtered out duplicate messages

* logs and metrics for missing messages from change monitor

* Only add gobblin.service prefix for dagActionStoreChangeMonitor

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1934] Monitor High Level Consumer queue size (apache#3805)

* Emit metrics to monitor high level consumer queue size

* Empty commit to trigger tests

* Use BlockingQueue.size() func instead of atomic integer array

* Remove unused import & add DagActionChangeMonitor prefix to metric

* Refactor to avoid repeating code

* Make protected variables private where possible

* Fix white space

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1935] Skip null dag action types unable to be processed (apache#3807)

* Skip over null dag actions from malformed messages

* Add new metric for skipped messages

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1922]Add function in Kafka Source to recompute workUnits for filtered partitions (apache#3798)

* add function in Kafka Source to recompute workUnits for filtered partitions

* address comments

* set default min container value to 1

* add condition when create empty wu

* update the condition

* Expose functions to fetch record partitionColumn value (apache#3810)

* [GOBBLIN-1938] preserve x bit in manifest file based copy (apache#3804)

* preserve x bit in manifest file based copy
* fix project structure preventing running unit tests from intellij
* fix unit test

* [GOBBLIN-1919] Simplify a few elements of MR-related job exec before reusing code in Temporal-based execution (apache#3784)

* Simplify a few elements of MR-related job exec before reusing code in Temporal-based execution

* Add JSON-ification to several foundational config-state representations, plus encapsulated convience method `JobState.getJobIdFromProps`

* Update javadoc comments

* Encapsulate check for whether a path has the extension of a multi-work-unit

* [GOBBLIN-1939] Bump AWS version to use a compatible version of Jackson with Gobblin (apache#3809)

* Bump AWS version to use a compatible version of jackson with Gobblin

* use shared aws version

* [GOBBLIN-1937] Quantify Missed Work Completed by Reminders (apache#3808)

* Quantify Missed Work Completed by Reminders
   Also fix bug to filter out heartbeat events before extracting field

* Refactor changeMonitorUtils & add delimiter to metrics prefix

* Re-order params to group similar ones

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1933] Change the logic in completeness verifier to support multi reference tier (apache#3806)

* address comments

* use connectionmanager when httpclient is not cloesable

* [GOBBLIN-1933] Change the logic in completeness verifier to support multi reference tier

* add uite test

* fix typo

* change the javadoc

* change the javadoc

---------

Co-authored-by: Zihan Li <[email protected]>

* [GOBBLIN-1943] Use AWS version 1.12.261 to fix a security vulnerability in the previous version (apache#3813)

* [GOBBLIN-1941] Develop Temporal abstractions, including `Workload` for workflows of unbounded size through sub-workflow nesting (apache#3811)

* Define `Workload` abstraction for Temporal workflows of unbounded size through sub-workflow nesting

* Adjust Gobblin-Temporal configurability for consistency and abstraction

* Define `WorkerConfig`, to pass the `TemporalWorker`'s configuration to the workflows and activities it hosts

* Improve javadoc

* Javadoc fixup

* Minor changes

* Update per review suggestions

* Insert pause, to spread the load on the temporal server, before launch of each child workflow that may have direct leaves of its own

* Appease findbugs by having `SeqSliceBackedWorkSpan::next` throw `NoSuchElementException`

* Add comment

* [GOBBLIN-1944] Add gobblin-temporal load generator for a single subsuming super-workflow with a configurable number of activities nested beneath (apache#3815)

* Add gobblin-temporal load generator for a single subsuming super-workflow with a configurable number of activities nested beneath

* Update per findbugs advice

* Improve processing of int props

* [GOBBLIN-1945] Implement Distributed Data Movement (DDM) Gobblin-on-Temporal `WorkUnit` evaluation (apache#3816)

* Implement Distributed Data Movement (DDM) Gobblin-on-Temporal `WorkUnit` evaluation

* Adjust work unit processing tuning for start-to-close timeout and nested execution branching

* Rework `ProcessWorkUnitImpl` and fix `FileSystem` misuse; plus convenience abstractions to load `FileSystem`, `JobState`, and `StateStore<TaskState>`

* Fix `FileSystem` resource lifecycle, uniquely name each workflow, and drastically reduce worker concurrent task execution

* Heed findbugs advice

* prep before commit

* Improve processing of required props

* Update comment in response to PR feedback

* [GOBBLIN-1942] Create MySQL util class for re-usable methods and setup MysqlDagActio… (apache#3812)

* Create MySQL util class for re-usable methods and setup MysqlDagActionStore retention

* Add a java doc

* Address review comments

* Close scheduled executors on shutdown & clarify naming and comments

* Remove extra period making config key invalid

* implement Closeable

* Use try with resources

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [Hotfix][GOBBLIN-1949] add option to detect malformed orc during commit (apache#3818)

* add option to detect malformed ORC during commit phase

* better logging

* address comment

* catch more generic exception

* validate ORC file after close

* move validate in between close and commit

* syntax

* whitespace

* update log

* [GOBBLIN-1948] Use same flowExecutionId across participants (apache#3819)

* Use same flowExecutionId across participants
* Set config field as well in new FlowSpec
* Use gobblin util to create config
* Rename function and move to util
---------
Co-authored-by: Urmi Mustafi <[email protected]>

* Allow extension of functions in GobblinMCEPublisher and customization of fileList file metrics are calculated for (apache#3820)

* [GOBBLIN-1951] Emit GTE when deleting corrupted ORC files (apache#3821)

* [GOBBLIN-1951] Emit GTE when deleting corrupted ORC files

This commit adds ORC file validation during the commit phase and deletes
corrupted files. It also includes a test for ORC file validation.

* Linter fixes

* Add framework and unit tests for DagActionStoreChangeMonitor (apache#3817)

* Add framework and unit tests for DagActionStoreChangeMonitor

* Add more test cases and validation

* Add header for new file

* Move FlowSpec static function to Utils class

* Remove unused import

* Fix compile error

* Fix unit tests

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1952] Make jobname shortening in GaaS more aggressive (apache#3822)

* Make jobname shortening in GaaS more aggressive

* Change long name prefix to flowgroup

* Make KafkaTopicGroupingWorkUnitPacker pack with desired num of container (apache#3814)

* Make KafkaTopicGroupingWorkUnitPacker pack with desired num of container

* update comment

* [GOBBLIN-1953] Add an exception message to orc writer validation GTE (apache#3826)

* Fix FlowSpec Updating Function (apache#3823)

* Fix FlowSpec Updating Function
   * makes Config object with FlowSpec mutable
   * adds unit test to ensure flow compiles after updating FlowSpec
   * ensure DagManager resilient to exceptions on leadership change

* Only update Properties obj not Config to avoid GC overhead

* Address findbugs error

* Avoid updating or creating new FlowSpec objects by passing flowExecutionId directly to metadata

* Remove changes that are not needed anymore

* Add TODO to handle failed DagManager leadership change

* Overload function and add more documentation

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* Emit metric to tune LeaseArbiter Linger metric  (apache#3824)

* Monitor number of failed persisting leases to tune linger

* Increase default linger and epsilon values

* Add metric for lease persisting success

* Rename metrics

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1956]Make Kafka streaming pipeline be able to config the max poll records during runtime (apache#3827)

* address comments

* use connectionmanager when httpclient is not cloesable

* add uite test

* fix typo

* [GOBBLIN-1956] Make Kafka streaming pipeline be able to config the max poll records during runtime

* small refractor

---------

Co-authored-by: Zihan Li <[email protected]>

* Add semantics for failure on partial success (apache#3831)

* Consistly handle Rest.li /flowexecutions KILL and RESUME actions (apache#3830)

* [GOBBLIN-1957] GobblinOrcwriter improvements for large records (apache#3828)

* WIP

* Optimization to limit batchsize based on large record sizes

* Address review

* Use DB-qualified table ID as `IcebergTable` dataset descriptor (apache#3834)

* [GOBBLIN-1961] Allow `IcebergDatasetFinder` to use separate names for source vs. destination-side DB and table (apache#3835)

* Allow `IcebergDatasetFinder` to use separate names for source vs. destination-side DB and table

* Adjust Mockito.verify to pass test

* Prevent NPE in `FlowCompilationValidationHelper.validateAndHandleConcurrentExecution` (apache#3836)

* Prevent NPE in `FlowCompilationValidationHelper.validateAndHandleConcurrentExecution`

* improved `MultiHopFlowCompiler` javadoc

* Delete Launch Action Events After Processing (apache#3837)

* Delete launch action event after persisting

* Fix default value for flowExecutionId retrieval from metadata map

* Address review comments and add unit test

* Code clean up

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1960] Emit audit count after commit in IcebergMetadataWriter (apache#3833)

* Emit audit count after commit in IcebergMetadataWriter

* Unit tests by extracting to a post commit

* Emit audit count first

* find bugs complaint

* [GOBBLIN-1967] Add external data node for generic ingress/egress on GaaS (apache#3838)

* Add external data node for generic ingress/egress on GaaS

* Address reviews and cleanup

* Use URI representation for external dataset descriptor node

* Fix error message in containing check

* Address review

* [GOBBLIN-1971] Allow `IcebergCatalog` to specify the `DatasetDescriptor` name for the `IcebergTable`s it creates (apache#3842)

* Allow `IcebergCatalog` to specify the `DatasetDescriptor` name for the `IcebergTable`s it creates

* small method javadoc

* [GOBBLIN-1970] Consolidate processing dag actions to one code path (apache#3841)

* Consolidate processing dag actions to one code path

* Delete dag action in failure cases too

* Distinguish metrics for startup

* Refactor to avoid duplicated code and create static metrics proxy class

* Remove DagManager checks that don't apply on startup

* Add test to check kill/resume dag action removal after processing

* Remove unused import

* Initialize metrics proxy with Null Pattern

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1972] Fix `CopyDataPublisher` to avoid committing post-publish WUs before they've actually run (apache#3844)

* Fix `CopyDataPublisher` to avoid committing post-publish WUs before they've actually run

* fixup findbugsMain

* [GOBBLIN-1975] Keep job.name configuration immutable if specified on GaaS (apache#3847)

* Revert "[GOBBLIN-1952] Make jobname shortening in GaaS more aggressive (apache#3822)"

This reverts commit 5619a0a.

* use configuration to keep specified jobname if enabled

* Cleanup

* [GOBBLIN-1974] Ensure Adhoc Flows can be Executed in Multi-active Scheduler state (apache#3846)

* Ensure Adhoc Flows can be Executed in Multi-active Scheduler state

* Only delete spec for adhoc flows & always after orchestration

* Delete adhoc flows when dagManager is not present as well

* Fix flaky test for scheduler

* Add clarifying comment about failure recovery

* Re-ordered private method

* Move private methods again

* Enforce sequential ordering of unit tests to make more reliable

---------

Co-authored-by: Urmi Mustafi <[email protected]>

* [GOBBLIN-1973] Change Manifest distcp logic to compare permissions of source and dest files even when source is older (apache#3845)

* change should copy logic

* Add tests, address review

* Fix checkstyle

* Remove unused imports

* [GOBBLIN-1976] Allow an `IcebergCatalog` to override the `DatasetDescriptor` platform name for the `IcebergTable`s it creates (apache#3848)

* Allow an `IcebergCatalog` to override the `DatasetDescriptor` platform for the `IcebergTable`s it creates

* fixup javadoc

* Log when `PasswordManager` fails to load any master password (apache#3849)

* [GOBBLIN-1968] Temporal commit step integration (apache#3829)

Add commit step to Gobblin temporal workflow for job publish

* Add codeql analysis

* Make gradle specific

* Add codeql as part of build script

* Initialize codeql

* Use separate workflow for codeql instead with custom build function as autobuild seems to not work

* Add jdk jar for global dependencies script

---------

Co-authored-by: umustafi <[email protected]>
Co-authored-by: Urmi Mustafi <[email protected]>
Co-authored-by: Arjun <[email protected]>
Co-authored-by: Tao Qin <[email protected]>
Co-authored-by: Tao Qin <[email protected]>
Co-authored-by: Hanghang Nate Liu <[email protected]>
Co-authored-by: Andy Jiang <[email protected]>
Co-authored-by: Kip Kohn <[email protected]>
Co-authored-by: Zihan Li <[email protected]>
Co-authored-by: Zihan Li <[email protected]>
Co-authored-by: Matthew Ho <[email protected]>