Exla FLOPs - Free credits

Exla's on-demand GPU cluster service: https://gpus.exla.ai/

What's your email (preferably your work email)?

What do you use GPUs for?

Roughly how many GPUs do you typically need?

Do you have specific requirements for the type of interconnect for clusters? (e.g., TCP/IP vs Infiniband)

Note: in many cases, TCP/IP is sufficient for distributed training, especially when your training loop avoids frequent all-reduce operations (e.g., via DiLoCo or other communication-efficient strategies).
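
To make the note above concrete, here is a minimal sketch of communication-efficient training over a plain TCP interconnect. It assumes PyTorch with the gloo backend and launch via torchrun; the function name train_local_sgd and the sync_every parameter are illustrative only, not part of Exla's service, and DiLoCo itself additionally applies an outer optimizer to the averaged update rather than plain parameter averaging.

    import torch
    import torch.distributed as dist

    def train_local_sgd(model, optimizer, data_iter, sync_every=500):
        # Hypothetical sketch: run purely local optimizer steps and only
        # all-reduce parameters every `sync_every` steps, so a commodity
        # TCP interconnect carries far less traffic than per-step gradient all-reduce.
        dist.init_process_group(backend="gloo")  # TCP-based backend; no InfiniBand required
        world_size = dist.get_world_size()
        for step, (x, y) in enumerate(data_iter, start=1):
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()
            optimizer.step()  # local update, no communication
            if step % sync_every == 0:
                # Infrequent synchronization: average parameters across workers.
                with torch.no_grad():
                    for p in model.parameters():
                        dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
                        p.data /= world_size
        dist.destroy_process_group()

Synchronizing every few hundred steps instead of every step cuts cross-node traffic by roughly that factor, which is why the interconnect requirement can be relaxed for some workloads.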

What kind of workloads are you running? (e.g., training, batch inference, evals, etc.)

When do you plan to try the cluster?

What matters most to you in a GPU cluster provider?

Any questions, ideas, or feedback for us?

How did you hear about Exla FLOPs?
