Can't use spark-batch-indexer #74
The overlord raises the exception, but I checked the log and the module is correctly loaded:
Hi Ben, thanks for the information. A few things to double-check: make sure you are using the https://github.com/metamx/druid-spark-batch/tree/druid0.9.0 branch for Druid 0.9.0, and if you are using a middle manager, make sure it ALSO loads the extension properly.
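For reference, extension loading in Druid 0.9.0 is driven by each node's common.runtime.properties. A minimal sketch (the directory path mirrors the one mentioned later in this thread; adjust to your layout):

```
# Sketch for Druid 0.9.0: the extension must be listed on EVERY node
# that handles the task (overlord AND middle manager).
druid.extensions.directory=/druidpath/extensions
druid.extensions.loadList=["druid-spark-batch"]
```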
Actually, my middleManager doesn't load the extension. I did the same setup on both nodes, but it is not working on the middle manager. I will investigate and let you know.
I have the following logs on both the middleManager and the overlord, and I still get the same error. Any suggestions on how to investigate this problem?
@benwck Assuming there are no errors reported during node startup, the only thing I can think of would be a test with just the overlord locally (where the runner is local rather than remote), to see whether the overlord accepts the task under that condition. That way you eliminate any weird cross-node communication issues.
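For a quick local-runner test along these lines, a sketch of the overlord-side setting (this makes the overlord run tasks in its own JVM instead of dispatching them to middle managers):

```
# Overlord runtime.properties: "local" runs tasks in-process,
# "remote" dispatches them to middle managers.
druid.indexer.runner.type=local
```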
I am having a similar issue. As @drcrallen suggested, I am running the overlord locally (i.e., a single-node cluster with all services running locally).
Make sure to submit your task to the overlord.
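As an illustration, submitting a task to the overlord's HTTP API looks like this (host and file name are placeholders; 8090 is the default overlord port):

```
curl -X POST -H 'Content-Type: application/json' \
  -d @spark_index_task.json \
  http://overlord-host:8090/druid/indexer/v1/task
```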
Hey guys,
I don't want to spam the druid-development group thread, so I'm posting here.
I actually built the jar myself with Spark 1.6.1 and added it to /druidpath/extensions/druid-spark-batch/ on both the overlord and the middle manager. I added druid.indexer.task.defaultHadoopCoordinates=["org.apache.spark:spark-core_2.10:1.6.1"]
to the runtime properties of both nodes, restarted the nodes, then submitted the job with a JSON file.
I still get: "error": "Could not resolve type id 'index_spark' into a subtype of [simple type, class io.druid.indexing.common.task.Task]\n at [Source: HttpInputOverHTTP@2cecd2f2; line: 54, column: 38]"
Any ideas, or is there more documentation available somewhere?
Thanks,
Ben
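An aside on that error: Jackson resolves the JSON "type" field against the Task subtypes registered by loaded extension modules, so a failure at submission time means the node answering the HTTP request never registered the spark-batch module. A minimal sketch of the relevant part of the task spec (the id is hypothetical; the remaining fields follow the druid-spark-batch README):

```json
{
  "type": "index_spark",
  "id": "spark_index_example_task"
}
```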