What needs to happen?
Currently, we run Spark runner tests against multiple Spark/Hadoop version combinations. We need to make sure these versions are properly aligned and officially compatible with each other. We also need to make sure that only the Spark/Hadoop classes of the version under test end up on the test classpath.
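One way to enforce this in a Gradle build is to pin both versions and force every transitive Spark/Hadoop module to the version under test. The sketch below is a minimal illustration, not Beam's actual test configuration; the `sparkVersion`/`hadoopVersion` property names and the default versions are assumptions for the example.

```groovy
// Hypothetical sketch: select the Spark/Hadoop versions under test via
// project properties, e.g. -PsparkVersion=3.4.1 -PhadoopVersion=3.3.6.
def sparkVersion = project.findProperty('sparkVersion') ?: '3.4.1'   // assumed default
def hadoopVersion = project.findProperty('hadoopVersion') ?: '3.3.6' // assumed default

configurations.testRuntimeClasspath.resolutionStrategy {
    // Surface any remaining version conflicts instead of silently
    // picking the newest candidate.
    failOnVersionConflict()
    // Force every Spark and Hadoop module, including transitive ones,
    // to the single version under test so no mixed versions reach the
    // test classpath.
    eachDependency { details ->
        if (details.requested.group == 'org.apache.spark') {
            details.useVersion sparkVersion
        } else if (details.requested.group == 'org.apache.hadoop') {
            details.useVersion hadoopVersion
        }
    }
}
```

With a setup along these lines, each CI job could run the test suite once per supported, officially compatible Spark/Hadoop pairing by passing the two properties on the command line.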
Issue Priority
Priority: 3 (nice-to-have improvement)
Issue Components
- [ ] Component: Python SDK
- [ ] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [x] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [ ] Component: Google Cloud Dataflow Runner
Possibly related, if helpful: I recently made a change to regularly publish Spark job server dev containers to gcr.io/apache-beam-testing/beam_portability/beam_spark3_job_server (part of #27595), as we already do for the Flink job server.