
[Task]: Align Spark/Hadoop versions in Spark multiple-versions unit/VR tests #27562

Open · 2 of 15 tasks

aromanenko-dev opened this issue Jul 19, 2023 · 2 comments

Comments

@aromanenko-dev (Contributor)

What needs to happen?

Currently, we run the Spark runner tests against multiple Spark/Hadoop version combinations. We need to make sure that these versions are properly aligned and officially compatible with each other. We also need to ensure that only the classes of the Spark/Hadoop version under test are present on the classpath.
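One way to sketch this alignment is to pin each tested Spark version to its compatible Hadoop version in a single place in the Gradle build and force that Hadoop version during dependency resolution, so transitive dependencies cannot pull a different one onto the test classpath. This is a hypothetical illustration, not Beam's actual build configuration; the configuration names and the specific Spark/Hadoop version pairings below are assumptions for the sake of the example:

```groovy
// HYPOTHETICAL sketch, not taken from Beam's build files.
// Single source of truth mapping each tested Spark version to the
// Hadoop version it should be tested against (pairings are examples).
def sparkHadoopVersions = [
    '3.2.4': '3.3.1',
    '3.4.1': '3.3.4',
]

sparkHadoopVersions.each { sparkVersion, hadoopVersion ->
    // One isolated configuration per Spark version under test.
    def conf = configurations.create("sparkTest${sparkVersion.replace('.', '_')}")
    conf.resolutionStrategy {
        // Force a single Hadoop version so transitive dependencies
        // cannot put a second Hadoop version on the classpath.
        force "org.apache.hadoop:hadoop-client:${hadoopVersion}"
    }
    dependencies.add(conf.name, "org.apache.spark:spark-core_2.12:${sparkVersion}")
}
```

A test task wired to one of these configurations would then see exactly one Spark and one Hadoop version, making mismatches a resolution-time failure rather than a silent runtime hazard.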

Issue Priority

Priority: 3 (nice-to-have improvement)

Issue Components

  • Component: Python SDK
  • Component: Java SDK
  • Component: Go SDK
  • Component: Typescript SDK
  • Component: IO connector
  • Component: Beam examples
  • Component: Beam playground
  • Component: Beam katas
  • Component: Website
  • Component: Spark Runner
  • Component: Flink Runner
  • Component: Samza Runner
  • Component: Twister2 Runner
  • Component: Hazelcast Jet Runner
  • Component: Google Cloud Dataflow Runner
@aromanenko-dev (Contributor, Author)

CC: @mosche

@Abacn (Contributor) commented Jul 25, 2023

Possibly related, if helpful: I recently made a change to regularly publish Spark job server dev containers to gcr.io/apache-beam-testing/beam_portability/beam_spark3_job_server (part of #27595), as we already do for the Flink job server.


2 participants