Exla FLOPs - Free credits
Exla's on-demand GPU cluster service:
https://gpus.exla.ai/
What's your email (preferably your work email)?
*
What do you use GPUs for?
*
Roughly how many GPUs do you typically need?
*
A. 1+ GPU
B. 4+ GPUs
C. 8+ GPUs
D. 16+ GPUs
E. Varies by job
Do you have specific requirements for the type of interconnect for clusters? (e.g., TCP/IP vs InfiniBand)
*
Note: In many cases, TCP is enough for distributed training, especially when your training loop avoids frequent all-reduce (e.g., via DiLoCo or other communication-efficient strategies).
A. No strong preference — TCP is fine
B. Yes — I need a high-speed interconnect (e.g., InfiniBand / NVLink)
C. Not sure
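For context on the note above, here is a minimal sketch (using NumPy to simulate workers in one process; all names are illustrative, not Exla's API) of the communication-efficient pattern it refersers to: each worker trains locally for many steps and parameters are averaged only once per round, so the interconnect carries one synchronization per hundreds of steps instead of one per step. This is a simplification of DiLoCo, which additionally applies an outer optimizer to the averaged update.

```python
import numpy as np

def local_step(params, rng):
    # Simulate one local optimizer step with a worker-specific gradient.
    grad = rng.normal(size=params.shape)
    return params - 0.01 * grad

def communication_round(worker_params, inner_steps, rngs):
    # Phase 1: each worker trains independently -- zero network traffic.
    for i, params in enumerate(worker_params):
        for _ in range(inner_steps):
            params = local_step(params, rngs[i])
        worker_params[i] = params
    # Phase 2: a single all-reduce (here, a plain average) per round.
    avg = np.mean(worker_params, axis=0)
    return [avg.copy() for _ in worker_params]

# Toy run: 4 workers, 100 local steps per round, 3 rounds.
# That is 300 optimizer steps but only 3 synchronizations --
# infrequent enough that TCP bandwidth is rarely the bottleneck.
rngs = [np.random.default_rng(seed) for seed in range(4)]
workers = [np.zeros(10) for _ in range(4)]
for _ in range(3):
    workers = communication_round(workers, inner_steps=100, rngs=rngs)

# After each round, all workers hold identical parameters.
assert all(np.allclose(w, workers[0]) for w in workers)
```

Per-step all-reduce (standard data parallelism) inverts this ratio, which is when InfiniBand or NVLink starts to matter.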
What kind of workloads are you running? (e.g., training, batch inference, evals, etc.)
*
When do you plan to try the cluster?
*
A. This week
B. Within the next 2 weeks
C. Just exploring for now
What matters most to you in a GPU cluster provider?
*
Cost per hour
Fast availability (no wait)
Support for specific GPU types (e.g., H100s)
Other
Any questions, ideas, or feedback for us?
How did you hear about Exla FLOPs?
*
A. Twitter / X
B. LinkedIn
C. YC
D. Reddit
E. Friend
F. Other
Submit