
Sunbird opensource release 2.0.1 GA #15

Merged

Commits on Jun 6, 2023

  1. Commit 5c9d3f2

Commits on Jun 7, 2023

  1. Commit a64f839

Commits on Jun 8, 2023

  1. Commit ede1567
  2. Commit 18e55fe
  3. Merge pull request #22 from Sanketika-Obsrv/default-config

    feat: added descriptions for default configurations
    manjudr authored Jun 8, 2023
    Commit 2e85bb6
  4. Commit d3f4c9c
  5. Merge pull request #23 from Sanketika-Obsrv/default-config

    feat: modified kafka connector input topic
    manjudr authored Jun 8, 2023
    Commit a73cddc
  6. Commit 4c64e12

Commits on Jun 9, 2023

  1. Commit 17579d6
  2. Commit 4d734d0
  3. Build deploy v2 (#19)

    * #0 - Refactor Dockerfile and Github actions workflow
    ---------
    
    Co-authored-by: Santhosh Vasabhaktula <[email protected]>
    Co-authored-by: ManojCKrishna <[email protected]>
    3 people authored Jun 9, 2023
    Commit fcacd2a

Commits on Jun 12, 2023

  1. Merge pull request #24 from Sanketika-Obsrv/default-config

    default config for streaming tasks
    manjudr authored Jun 12, 2023
    Commit c9f757f

Commits on Nov 10, 2023

  1. Commit e1abfcd

Commits on Nov 15, 2023

  1. Release 1.3.0 into Main branch (#34)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * #0 fix: add individual extraction
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * #0 fix: update github actions release condition
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    4 people authored Nov 15, 2023
    Commit 9a6918e
  2. Commit ca5be13

Commits on Nov 17, 2023

  1. Commit ad19f07
  2. Merge pull request #21 from Sanketika-Obsrv/documentation

    Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    SanthoshVasabhaktula authored Nov 17, 2023
    Commit 8ba4b7b
  3. Merge pull request #20 from shiva-rakshith/jdbc-connector

    Issue #2 feat: JDBC Connector add connector config and connector stats update functions
    SanthoshVasabhaktula authored Nov 17, 2023
    Commit 85a7c62
  4. Commit cacf758

Commits on Nov 21, 2023

  1. Commit e4d3dcf
  2. Merge pull request #36 from shiva-rakshith/jdbc-connector

    feat: add function to get all datasets
    ravismula authored Nov 21, 2023
    Commit bdcf90b

Commits on Dec 15, 2023

  1. Release 1.3.1 into Main (#43)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * feat: update all failed, invalid and duplicate topic names
    
    * feat: update kafka topic names in test cases
    
    * #0 fix: add individual extraction
    
    * feat: update failed event
    
    * Update ErrorConstants.scala
    
    * feat: update failed event
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * feat: add exception handling for json deserialization
    
    * Update BaseProcessFunction.scala
    
    * Update BaseProcessFunction.scala
    
    * feat: update batch failed event generation
    
    * Update ExtractionFunction.scala
    
    * feat: update invalid json exception handling
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 fix: remove cloning object
    
    * Issue #46 feat: update batch failed event
    
    * #0 fix: update github actions release condition
    
    * Issue #46 feat: add error reasons
    
    * Issue #46 feat: add exception stack trace
    
    * Issue #46 feat: add exception stack trace
    
    * Release 1.3.1 Changes (#42)
    
    * Dataset enhancements (#38)
    
    * feat: add connector config and connector stats update functions
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    * Update DatasetModels.scala
    * #0 fix: upgrade packages
    * #0 feat: add flink dockerfiles
    * #0 fix: add individual extraction
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    
    * #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
    
    * Update DatasetModels.scala
    
    * #0000 - refactor the denormalization logic
    1. Do not fail the denormalization if the denorm key is missing
    2. Add a clear message indicating whether the denorm is successful, failed, or partially successful
    3. Handle denorm for both text and number fields
    
    * #0000 - refactor:
    1. Created an enum for dataset status and ignore events if the dataset is not in Live status
    2. Created an output tag for denorm failed stats
    3. Parse event validation failed messages into a case class
    
    * #0000 - refactor:
    1. Updated the DruidRouter job to publish data to router topics dynamically
    2. Updated framework to create a dynamicKafkaSink object
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Added validation to check if the event has a timestamp key and it is not blank nor invalid
    2. Added timezone handling to store the data in druid in the TZ specified by the dataset
    
    
    * #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
    
    * #0000 - mega refactoring: Refactored logs, error messages and metrics
    
    * #0000 - mega refactoring: Fix unit tests
    
    * #0000 - refactoring:
    1. Introduced transformation mode to enable lenient transformations
    2. Proper exception handling for transformer job
    
    * #0000 - refactoring: Fix test cases and code
    
    * #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2
    
    * #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Framework test cases and bug fixes
    
    * #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: improve code coverage and fix bugs
    
    * #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%
    
    * #0000 - refactoring: organize imports
    
    * #0000 - refactoring:
    1. transformer test cases and bug fixes - code coverage is 100%
    
    * #0000 - refactoring: test cases and bug fixes
    
    ---------
    
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    
    * #000:feat: Removed the provided scope of the kafka-client in the framework (#40)
    
    * #0000 - feat: Add dataset-type to system events (#41)
    
    * #0000 - feat: Add dataset-type to system events
    
    * #0000 - feat: Modify tests for dataset-type in system events
    
    * #0000 - feat: Remove unused getDatasetType function
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #0000 - feat: Remove unused pom test dependencies
    
    ---------
    
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    
    * Main conflicts fixes (#44)
    
    * feat: add connector config and connector stats update functions
    
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    
    * Update DatasetModels.scala
    
    * Release 1.3.0 into Main branch (#34)
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * #0 fix: add individual extraction
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * #0 fix: update github actions release condition
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    
    * Update DatasetModels.scala
    
    * Issue #2 feat: Remove kafka connector code
    
    * feat: add function to get all datasets
    
    * #000:feat: Resolve conflicts
    
    ---------
    
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Ravi Mula <[email protected]>
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Ravi Mula <[email protected]>
    9 people authored Dec 15, 2023
    Commit 02ebca4
  2. Commit 0a612c2

Commits on Dec 19, 2023

  1. Release 1.3.1 into Main (#49)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * feat: update all failed, invalid and duplicate topic names
    
    * feat: update kafka topic names in test cases
    
    * #0 fix: add individual extraction
    
    * feat: update failed event
    
    * Update ErrorConstants.scala
    
    * feat: update failed event
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * feat: add exception handling for json deserialization
    
    * Update BaseProcessFunction.scala
    
    * Update BaseProcessFunction.scala
    
    * feat: update batch failed event generation
    
    * Update ExtractionFunction.scala
    
    * feat: update invalid json exception handling
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 fix: remove cloning object
    
    * Issue #46 feat: update batch failed event
    
    * #0 fix: update github actions release condition
    
    * Issue #46 feat: add error reasons
    
    * Issue #46 feat: add exception stack trace
    
    * Issue #46 feat: add exception stack trace
    
    * Release 1.3.1 Changes (#42)
    
    * Dataset enhancements (#38)
    
    * feat: add connector config and connector stats update functions
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    * Update DatasetModels.scala
    * #0 fix: upgrade packages
    * #0 feat: add flink dockerfiles
    * #0 fix: add individual extraction
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    
    * #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
    
    * Update DatasetModels.scala
    
    * #0000 - refactor the denormalization logic
    1. Do not fail the denormalization if the denorm key is missing
    2. Add a clear message indicating whether the denorm is successful, failed, or partially successful
    3. Handle denorm for both text and number fields
    
    * #0000 - refactor:
    1. Created an enum for dataset status and ignore events if the dataset is not in Live status
    2. Created an output tag for denorm failed stats
    3. Parse event validation failed messages into a case class
    
    * #0000 - refactor:
    1. Updated the DruidRouter job to publish data to router topics dynamically
    2. Updated framework to create a dynamicKafkaSink object
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Added validation to check if the event has a timestamp key and it is not blank nor invalid
    2. Added timezone handling to store the data in druid in the TZ specified by the dataset
    
    
    * #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
    
    * #0000 - mega refactoring: Refactored logs, error messages and metrics
    
    * #0000 - mega refactoring: Fix unit tests
    
    * #0000 - refactoring:
    1. Introduced transformation mode to enable lenient transformations
    2. Proper exception handling for transformer job
    
    * #0000 - refactoring: Fix test cases and code
    
    * #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2
    
    * #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Framework test cases and bug fixes
    
    * #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: improve code coverage and fix bugs
    
    * #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%
    
    * #0000 - refactoring: organize imports
    
    * #0000 - refactoring:
    1. transformer test cases and bug fixes - code coverage is 100%
    
    * #0000 - refactoring: test cases and bug fixes
    
    ---------
    
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    
    * #000:feat: Removed the provided scope of the kafka-client in the framework (#40)
    
    * #0000 - feat: Add dataset-type to system events (#41)
    
    * #0000 - feat: Add dataset-type to system events
    
    * #0000 - feat: Modify tests for dataset-type in system events
    
    * #0000 - feat: Remove unused getDatasetType function
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #0000 - feat: Remove unused pom test dependencies
    
    ---------
    
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    
    * Main conflicts fixes (#44)
    
    * feat: add connector config and connector stats update functions
    
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    
    * Update DatasetModels.scala
    
    * Release 1.3.0 into Main branch (#34)
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * #0 fix: add individual extraction
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * #0 fix: update github actions release condition
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    
    * Update DatasetModels.scala
    
    * Issue #2 feat: Remove kafka connector code
    
    * feat: add function to get all datasets
    
    * #000:feat: Resolve conflicts
    
    ---------
    
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Ravi Mula <[email protected]>
    
    * #0000 - fix: Fix null dataset_type in DruidRouterFunction (#48)
    
    ---------
    
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Ravi Mula <[email protected]>
    9 people authored Dec 19, 2023
    Commit 8106fa2

Commits on Dec 26, 2023

  1. Develop to Release-1.0.0-GA (#52) (#53)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * feat: update all failed, invalid and duplicate topic names
    
    * feat: update kafka topic names in test cases
    
    * #0 fix: add individual extraction
    
    * feat: update failed event
    
    * Update ErrorConstants.scala
    
    * feat: update failed event
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * feat: add exception handling for json deserialization
    
    * Update BaseProcessFunction.scala
    
    * Update BaseProcessFunction.scala
    
    * feat: update batch failed event generation
    
    * Update ExtractionFunction.scala
    
    * feat: update invalid json exception handling
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 fix: remove cloning object
    
    * Issue #46 feat: update batch failed event
    
    * #0 fix: update github actions release condition
    
    * Issue #46 feat: add error reasons
    
    * Issue #46 feat: add exception stack trace
    
    * Issue #46 feat: add exception stack trace
    
    * Dataset enhancements (#38)
    
    * feat: add connector config and connector stats update functions
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    * Update DatasetModels.scala
    * #0 fix: upgrade packages
    * #0 feat: add flink dockerfiles
    * #0 fix: add individual extraction
    
    ---------
    
    * #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
    
    * Update DatasetModels.scala
    
    * #0000 - refactor the denormalization logic
    1. Do not fail the denormalization if the denorm key is missing
    2. Add a clear message indicating whether the denorm is successful, failed, or partially successful
    3. Handle denorm for both text and number fields
    
    * #0000 - refactor:
    1. Created an enum for dataset status and ignore events if the dataset is not in Live status
    2. Created an output tag for denorm failed stats
    3. Parse event validation failed messages into a case class
    
    * #0000 - refactor:
    1. Updated the DruidRouter job to publish data to router topics dynamically
    2. Updated framework to create a dynamicKafkaSink object
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Added validation to check if the event has a timestamp key and it is not blank nor invalid
    2. Added timezone handling to store the data in druid in the TZ specified by the dataset
    
    
    * #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
    
    * #0000 - mega refactoring: Refactored logs, error messages and metrics
    
    * #0000 - mega refactoring: Fix unit tests
    
    * #0000 - refactoring:
    1. Introduced transformation mode to enable lenient transformations
    2. Proper exception handling for transformer job
    
    * #0000 - refactoring: Fix test cases and code
    
    * #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2
    
    * #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Framework test cases and bug fixes
    
    * #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: improve code coverage and fix bugs
    
    * #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%
    
    * #0000 - refactoring: organize imports
    
    * #0000 - refactoring:
    1. transformer test cases and bug fixes - code coverage is 100%
    
    * #0000 - refactoring: test cases and bug fixes
    
    ---------
    
    * #000:feat: Removed the provided scope of the kafka-client in the framework (#40)
    
    * #0000 - feat: Add dataset-type to system events (#41)
    
    * #0000 - feat: Add dataset-type to system events
    
    * #0000 - feat: Modify tests for dataset-type in system events
    
    * #0000 - feat: Remove unused getDatasetType function
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #67 feat: query system configurations from meta store
    
    * #67 fix: Refactor system configuration retrieval and update dynamic router function
    
    * #67 fix: update system config according to review
    
    * #67 fix: update test cases for system config
    
    * #67 fix: update default values in test cases
    
    * #67 fix: add get all system settings method and update test cases
    
    * #67 fix: add test case for covering exception case
    
    * #67 fix: fix data types in test cases
    
    * #67 fix: Refactor event indexing in DynamicRouterFunction
    
    * Issue #67 refactor: SystemConfig read from DB implementation
    
    * #226 fix: update test cases according to the refactor
    
    ---------
    
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    8 people authored Dec 26, 2023
    Commit e794934

Commits on Jan 12, 2024

  1. Develop to 1.0.1-GA (#59) (#60)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * feat: update all failed, invalid and duplicate topic names
    
    * feat: update kafka topic names in test cases
    
    * #0 fix: add individual extraction
    
    * feat: update failed event
    
    * Update ErrorConstants.scala
    
    * feat: update failed event
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * feat: add exception handling for json deserialization
    
    * Update BaseProcessFunction.scala
    
    * Update BaseProcessFunction.scala
    
    * feat: update batch failed event generation
    
    * Update ExtractionFunction.scala
    
    * feat: update invalid json exception handling
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 fix: remove cloning object
    
    * Issue #46 feat: update batch failed event
    
    * #0 fix: update github actions release condition
    
    * Issue #46 feat: add error reasons
    
    * Issue #46 feat: add exception stack trace
    
    * Issue #46 feat: add exception stack trace
    
    * Dataset enhancements (#38)
    
    * feat: add connector config and connector stats update functions
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    * Update DatasetModels.scala
    * #0 fix: upgrade packages
    * #0 feat: add flink dockerfiles
    * #0 fix: add individual extraction
    
    ---------
    
    * #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
    
    * Update DatasetModels.scala
    
    * #0000 - refactor the denormalization logic
    1. Do not fail the denormalization if the denorm key is missing
    2. Add a clear message indicating whether the denorm is successful, failed, or partially successful
    3. Handle denorm for both text and number fields
    
    * #0000 - refactor:
    1. Created an enum for dataset status and ignore events if the dataset is not in Live status
    2. Created an output tag for denorm failed stats
    3. Parse event validation failed messages into a case class
    
    * #0000 - refactor:
    1. Updated the DruidRouter job to publish data to router topics dynamically
    2. Updated framework to create a dynamicKafkaSink object
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Added validation to check if the event has a timestamp key and it is not blank nor invalid
    2. Added timezone handling to store the data in druid in the TZ specified by the dataset
    
    
    * #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
    
    * #0000 - mega refactoring: Refactored logs, error messages and metrics
    
    * #0000 - mega refactoring: Fix unit tests
    
    * #0000 - refactoring:
    1. Introduced transformation mode to enable lenient transformations
    2. Proper exception handling for transformer job
    
    * #0000 - refactoring: Fix test cases and code
    
    * #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2
    
    * #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Framework test cases and bug fixes
    
    * #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: improve code coverage and fix bugs
    
    * #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%
    
    * #0000 - refactoring: organize imports
    
    * #0000 - refactoring:
    1. transformer test cases and bug fixes - code coverage is 100%
    
    * #0000 - refactoring: test cases and bug fixes
    
    ---------
    
    * #000:feat: Removed the provided scope of the kafka-client in the framework (#40)
    
    * #0000 - feat: Add dataset-type to system events (#41)
    
    * #0000 - feat: Add dataset-type to system events
    
    * #0000 - feat: Modify tests for dataset-type in system events
    
    * #0000 - feat: Remove unused getDatasetType function
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #67 feat: query system configurations from meta store
    
    * #67 fix: Refactor system configuration retrieval and update dynamic router function
    
    * #67 fix: update system config according to review
    
    * #67 fix: update test cases for system config
    
    * #67 fix: update default values in test cases
    
    * #67 fix: add get all system settings method and update test cases
    
    * #67 fix: add test case for covering exception case
    
    * #67 fix: fix data types in test cases
    
    * #67 fix: Refactor event indexing in DynamicRouterFunction
    
    * Issue #67 refactor: SystemConfig read from DB implementation
    
    * #226 fix: update test cases according to the refactor
    
    * Dataset Registry Update (#57)
    
    * Issue #0000: feat: updateConnectorStats method includes last run timestamp
    
    * Issue #0000: fix: updateConnectorStats sql query updated
    
    * Issue #0000: fix: updateConnectorStats sql query updated
    
    ---------
    
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Shreyas Bhaktharam <[email protected]>
    10 people authored Jan 12, 2024
    Commit e8c3f57

Commits on Jan 29, 2024

  1. Develop to 1.0.2-GA (#65) (#66)

    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * testing new images
    
    * build new image with bug fixes
    
    * update dockerfile
    
    * update dockerfile
    
    * #0 fix: upgrade packages
    
    * #0 feat: add flink dockerfiles
    
    * feat: update all failed, invalid and duplicate topic names
    
    * feat: update kafka topic names in test cases
    
    * #0 fix: add individual extraction
    
    * feat: update failed event
    
    * Update ErrorConstants.scala
    
    * feat: update failed event
    
    * Issue #0 fix: upgrade ubuntu packages for vulnerabilities
    
    * feat: add exception handling for json deserialization
    
    * Update BaseProcessFunction.scala
    
    * Update BaseProcessFunction.scala
    
    * feat: update batch failed event generation
    
    * Update ExtractionFunction.scala
    
    * feat: update invalid json exception handling
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 feat: update batch failed event
    
    * Issue #46 fix: remove cloning object
    
    * Issue #46 feat: update batch failed event
    
    * #0 fix: update github actions release condition
    
    * Issue #46 feat: add error reasons
    
    * Issue #46 feat: add exception stack trace
    
    * Issue #46 feat: add exception stack trace
    
    * Dataset enhancements (#38)
    
    * feat: add connector config and connector stats update functions
    * Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
    * Update DatasetModels.scala
    * #0 fix: upgrade packages
    * #0 feat: add flink dockerfiles
    * #0 fix: add individual extraction
    
    ---------
    
    * #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
    
    * Update DatasetModels.scala
    
    * #0000 - refactor the denormalization logic
    1. Do not fail the denormalization if the denorm key is missing
    2. Add a clear message indicating whether the denorm is successful, failed, or partially successful
    3. Handle denorm for both text and number fields
    
    * #0000 - refactor:
    1. Created an enum for dataset status and ignore events if the dataset is not in Live status
    2. Created an output tag for denorm failed stats
    3. Parse event validation failed messages into a case class
    
    * #0000 - refactor:
    1. Updated the DruidRouter job to publish data to router topics dynamically
    2. Updated framework to create a dynamicKafkaSink object
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
    2. Created BaseDatasetProcessFunction for all flink functions to extend that would dynamically resolve dataset config, initialize metrics and handle common failures
    3. Refactored serde - merged map and string serialization into one function and parameterized the function
    4. Moved failed events sinking into a common base class
    5. Master dataset processor can now do denormalization with another master dataset as well
    
    * #0000 - mega refactoring:
    1. Added validation to check if the event has a timestamp key and it is not blank nor invalid
    2. Added timezone handling to store the data in druid in the TZ specified by the dataset
    
    
    * #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
    
    * #0000 - mega refactoring: Refactored logs, error messages and metrics
    
    * #0000 - mega refactoring: Fix unit tests
    
    * #0000 - refactoring:
    1. Introduced transformation mode to enable lenient transformations
    2. Proper exception handling for transformer job
    
    * #0000 - refactoring: Fix test cases and code
    
    * #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2
    
    * #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: Framework test cases and bug fixes
    
    * #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now
    
    * #0000 - refactoring: improve code coverage and fix bugs
    
    * #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%
    
    * #0000 - refactoring: organize imports
    
    * #0000 - refactoring:
    1. transformer test cases and bug fixes - code coverage is 100%
    
    * #0000 - refactoring: test cases and bug fixes
    
    ---------
    
    * #000:feat: Removed the provided scope of the kafka-client in the framework (#40)
    
    * #0000 - feat: Add dataset-type to system events (#41)
    
    * #0000 - feat: Add dataset-type to system events
    
    * #0000 - feat: Modify tests for dataset-type in system events
    
    * #0000 - feat: Remove unused getDatasetType function
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #0000 - feat: Remove unused pom test dependencies
    
    * #67 feat: query system configurations from meta store
    
    * #67 fix: Refactor system configuration retrieval and update dynamic router function
    
    * #67 fix: update system config according to review
    
    * #67 fix: update test cases for system config
    
    * #67 fix: update default values in test cases
    
    * #67 fix: add get all system settings method and update test cases
    
    * #67 fix: add test case for covering exception case
    
    * #67 fix: fix data types in test cases
    
    * #67 fix: Refactor event indexing in DynamicRouterFunction
    
    * Issue #67 refactor: SystemConfig read from DB implementation
    
    * #226 fix: update test cases according to the refactor
    
    * Dataset Registry Update (#57)
    
    * Issue #0000: feat: updateConnectorStats method includes last run timestamp
    
    * Issue #0000: fix: updateConnectorStats sql query updated
    
    * Issue #0000: fix: updateConnectorStats sql query updated
    
    * #0000 - fix: Fix Postgres connection issue with defaultDatasetID (#64)
    
    * Metrics implementation for MasterDataIndexerJob (#55)
    
    * Issue #50 fix: Kafka Metrics implementation for MasterDataIndexerJob
    
    * Issue #50 fix: Changed 'ets' to UTC
    
    * Issue #50 feat: added log statements
    
    * Issue #50 fix: Fixed issue related to update query
    
    * Issue #50 fix: Code refactoring
    
    * Issue #50 fix: updated implementation of 'createDataFile' method
    
    * Issue #50 fix: code refactoring
    
    * Issue #50 test: Test cases for MasterDataIndexer
    
    * Issue #50 test: test cases implementation
    
    * Issue #50 test: Test case implementation for data-products
    
    * Issue #50 test: Test cases
    
    * Issue #50 test: test cases
    
    * Issue #50 test: test cases for data-products
    
    * Issue #50-fix: fixed jackson-databind issue
    
    * Issue-#50-fix: code structure modifications
    
    * Issue #50-fix: code refactoring
    
    * Issue #50-fix: code refactoring
    
    * Issue-#50-Fix: test case fixes
    
    * Issue #50-fix: code formatting and code fixes
    
    * feat #50 - refactor the implementation
    
    * Issue-#50-fix: test cases fix
    
    * modified README file
    
    * revert readme file changes
    
    * revert dataset-registry
    
    * Issue-#50-fix: test cases fix
    
    * Issue-#50-fix: adding missing tests
    
    * Issue-#50-fix: refactoring code
    
    * Issue-#50-fix: code fixes and code formatting
    
    * fix #50: modified class declaration
    
    * fix #50: code refactor
    
    * fix #50: code refactor
    
    * fix #50: test cases fixes
    
    ---------
    
    * Remove kafka connector as it is moved to an independent repository
    
    ---------
    
    Signed-off-by: SurabhiAngadi <[email protected]>
    Co-authored-by: Manjunath Davanam <[email protected]>
    Co-authored-by: ManojKrishnaChintaluri <[email protected]>
    Co-authored-by: Praveen <[email protected]>
    Co-authored-by: shiva-rakshith <[email protected]>
    Co-authored-by: Sowmya N Dixit <[email protected]>
    Co-authored-by: Santhosh <[email protected]>
    Co-authored-by: Aniket Sakinala <[email protected]>
    Co-authored-by: Anand Parthasarathy <[email protected]>
    Co-authored-by: Shreyas Bhaktharam <[email protected]>
    Co-authored-by: SurabhiAngadi <[email protected]>
    11 people authored Jan 29, 2024
    Commit d2a2dea

Commits on Feb 16, 2024

  1. Release 1.0.3-GA (#72)

    ravismula authored Feb 16, 2024
    Commit b2a183a

Commits on Mar 18, 2024

  1. Merge branch 'main' of github.com:Sunbird-Obsrv/obsrv-core into sunbird-opensource-release-2.0.1-GA
    
    # Conflicts:
    #	.github/workflows/build_and_deploy.yaml
    #	Dockerfile
    #	data-products/pom.xml
    #	data-products/src/main/scala/org/sunbird/obsrv/dataproducts/MasterDataProcessorIndexer.scala
    #	dataset-registry/src/main/scala/org/sunbird/obsrv/service/DatasetRegistryService.scala
    #	framework/src/main/scala/org/sunbird/obsrv/core/model/ErrorConstants.scala
    #	framework/src/main/scala/org/sunbird/obsrv/core/model/SystemConfig.scala
    #	framework/src/main/scala/org/sunbird/obsrv/core/streaming/BaseJobConfig.scala
    #	pipeline/kafka-connector/pom.xml
    #	pipeline/kafka-connector/src/main/resources/kafka-connector.conf
    #	pipeline/kafka-connector/src/main/scala/org/sunbird/obsrv/connector/task/KafkaConnectorConfig.scala
    #	pipeline/kafka-connector/src/main/scala/org/sunbird/obsrv/connector/task/KafkaConnectorStreamTask.scala
    #	pipeline/pom.xml
    #	stubs/docker/apache-flink-plugins/Dockerfile
    sowmya-dixit committed Mar 18, 2024
    Commit 03c4d4f