---
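# form.yml — Batch Connect form definition for an Open OnDemand interactive app
# (MATLAB on OSC clusters). The keys below follow OOD's form.yml conventions:
# `cluster` lists the target clusters, `form` lists the fields shown to the
# user, and `attributes` configures those fields.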
cluster:
- "owens"
- "pitzer"
- "cardinal"
form:
- auto_modules_matlab
- auto_accounts
- bc_num_hours
- bc_num_slots
- num_cores
- node_type
- bc_vnc_resolution
- bc_email_on_started
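# Each non-auto_* name above needs a matching entry under `attributes` below.
# The auto_* fields (MATLAB module versions, scheduler account) are generated
# by OOD itself and need no attribute entries here.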
attributes:
num_cores:
widget: "number_field"
label: "Number of cores"
value: 1
help: |
Number of cores to request on the chosen node type (4 GB of memory per
core unless requesting a whole node). Leave blank to request a full node.
min: 0
max: 28
step: 1
bc_num_slots: "1"
bc_vnc_resolution:
required: true
node_type:
widget: select
label: "Node type"
help: |
- **Standard Compute** <br>
These are standard HPC machines. Owens has 648 of these nodes with 28
cores and 128 GB of memory. Pitzer has 224 of these nodes with 40 cores and
340 of these nodes with 48 cores. All of these Pitzer nodes have 192 GB of
RAM. Choosing "any" as the node type will decrease your wait time.
- **GPU Enabled** <br>
These are HPC machines with GPUs. Owens has 160 nodes with 1 [NVIDIA Tesla P100 GPU]
and Pitzer has 74 nodes with 2 [NVIDIA Tesla V100 GPUs]. They have the same
CPU and memory characteristics as standard compute. However, Pitzer's 40-core machines
have 2 GPUs with 16 GB of RAM, and Pitzer's 48-core machines have 2 GPUs with 32 GB of RAM.
Dense GPU types have 4 GPUs with 16 GB of RAM.
- **Large Memory** <br>
These are HPC machines with very large amounts of memory. Owens has 16 hugemem nodes
with 48 cores and 1.5 TB of RAM. Pitzer has 4 hugemem nodes with 3 TB of RAM and 80 cores.
Pitzer also has 12 largemem nodes with 48 cores and 768 GB of RAM.
[NVIDIA Tesla P100 GPU]: http://www.nvidia.com/object/tesla-p100.html
[NVIDIA Tesla V100 GPUs]: https://www.nvidia.com/en-us/data-center/v100/
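# The data-* entries in the options below drive OOD's dynamic form behavior:
# data-min/max-num-cores-for-cluster-<name> override the num_cores bounds when
# that cluster and option are selected, and data-option-for-cluster-<name>: false
# hides the option while that cluster is selected.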
options:
- [
"any", "any",
data-max-num-cores-for-cluster-owens: 28,
data-max-num-cores-for-cluster-pitzer: 48,
data-max-num-cores-for-cluster-cardinal: 96,
]
- [
"48 core", "any-48core",
data-max-num-cores-for-cluster-pitzer: 48,
data-option-for-cluster-owens: false,
data-option-for-cluster-cardinal: false,
]
- [
"40 core", "any-40core",
data-max-num-cores-for-cluster-pitzer: 40,
data-option-for-cluster-owens: false,
data-option-for-cluster-cardinal: false,
]
- [
"any gpu", "gpu",
data-max-num-cores-for-cluster-owens: 28,
data-max-num-cores-for-cluster-pitzer: 48,
data-max-num-cores-for-cluster-cardinal: 96,
]
- [
"40 core gpu", "gpu-40core",
data-max-num-cores-for-cluster-pitzer: 40,
data-option-for-cluster-owens: false,
data-option-for-cluster-cardinal: false,
]
- [
"48 core gpu", "gpu-48core",
data-max-num-cores-for-cluster-pitzer: 48,
data-option-for-cluster-owens: false,
data-option-for-cluster-cardinal: false,
]
- [
"largemem", "largemem",
data-min-num-cores-for-cluster-pitzer: 24,
data-max-num-cores-for-cluster-pitzer: 48,
data-option-for-cluster-owens: false,
data-option-for-cluster-cardinal: false,
]
- [
"hugemem", "hugemem",
data-min-num-cores-for-cluster-owens: 4,
data-max-num-cores-for-cluster-owens: 48,
data-min-num-cores-for-cluster-pitzer: 20,
data-max-num-cores-for-cluster-pitzer: 80,
data-option-for-cluster-cardinal: false,
]
- [
"debug", "debug",
data-max-num-cores-for-cluster-owens: 28,
data-max-num-cores-for-cluster-pitzer: 48,
data-option-for-cluster-owens: false,
data-option-for-cluster-pitzer: false,
data-option-for-cluster-cardinal: false,
]
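# Note: the "debug" option is disabled for owens, pitzer, and cardinal alike,
# so it is effectively never shown with the current cluster list.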