support G124 barium analysis #67
JupyterHub profiles: here is the current configuration, which I haven't yet updated for Jetstream 2: jupyterhub-deploy-kubernetes-jetstream/config_standard_storage.yaml, lines 37 to 51 at commit 9b095af
I am currently using "m3.medium" instances for the worker nodes; they have 30 GB of RAM, so assuming we can use 25 GB of it, I think we can do:
I also want to remove all CPU limits, so the numbers above are guaranteed CPUs, but if there is availability, users can potentially use all 8 vCPUs.
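As a rough sketch, the scheme described above (memory guarantees with a hard memory limit, CPU guarantees with no CPU limit) would look something like this in a Zero to JupyterHub `profileList`. The profile names and resource numbers here are illustrative assumptions, not the actual values from `config_standard_storage.yaml`:

```yaml
# Hypothetical profile list (values illustrative, not from the real config).
# mem_guarantee / mem_limit pin memory; setting cpu_guarantee without a
# cpu_limit reserves CPUs but lets pods burst to the node's 8 vCPUs when idle.
singleuser:
  profileList:
    - display_name: "Small (4 GB RAM, 1 CPU guaranteed)"
      kubespawner_override:
        mem_guarantee: "4G"
        mem_limit: "4G"
        cpu_guarantee: 1
        # no cpu_limit: can burst up to all 8 vCPUs if available
    - display_name: "Large (25 GB RAM, 4 CPUs guaranteed)"
      kubespawner_override:
        mem_guarantee: "25G"
        mem_limit: "25G"
        cpu_guarantee: 4
```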
@zonca I think 25 GB RAM is a great place to start.
OK, it is deployed. As usual, please open an issue with the error log if anything is broken. Let's keep this issue open to track the barium analysis work.
@MusaabFaozi is working on a G124 barium analysis and is interested in trying this analysis on the XSEDE-hosted Jupyter system. I think this might be a nice use case because he's processing raw data, and the Dask server might be helpful!
He would need access to G124 data - we'd need someone on the CDMS side to work on copying this data to the OSN so it's accessible. I'll work on identifying someone who can help with this.
He'll also need a high-memory node. He could start with the 16 GB node that's available, but it's possible he'll need more memory.