Executing batch jobs on a Ray cluster
In this example we will:

- Create a Ray cluster
- Submit a Python job file
- Terminate the cluster after the job is completed
Before you begin
- Create a "ray" folder under your "~/my" folder
- Copy job.py into this folder (or use the Python snippet below to do both steps)
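If you prefer to prepare the folder from code rather than the file browser, here is a minimal, standard-library-only sketch. It assumes job.py sits in your current working directory.

import shutil
from pathlib import Path

# Create ~/my/ray and copy job.py into it
# (assumption: job.py is in the current working directory)
job_dir = Path.home() / "my" / "ray"
job_dir.mkdir(parents=True, exist_ok=True)
shutil.copy("job.py", job_dir / "job.py")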
import practicuscore as prt

job_dir = "~/my/ray"

distributed_config = prt.DistJobConfig(
    job_type=prt.DistJobType.ray,
    job_dir=job_dir,
    py_file="job.py",
    worker_count=2,
)

worker_config = prt.WorkerConfig(
    # Please note that Ray requires a specific worker image
    worker_image="practicus-ray",
    worker_size="Medium",
    distributed_config=distributed_config,
    log_level="DEBUG",
)

coordinator_worker = prt.create_worker(
    worker_config=worker_config,
)
# You can view the logs during or after the job is completed
# To view the coordinator (master), set rank = 0
rank = 0
# To view other workers, set rank = 1, 2, ...

prt.distributed.view_log(
    job_dir=job_dir,
    job_id=coordinator_worker.job_id,
    rank=rank,
)
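To inspect all logs in one pass, you can loop over the ranks and reuse the same view_log call. This is a minimal sketch, assuming ranks 0 (coordinator) through the number of workers are valid; adjust the range if worker_count already includes the coordinator.

worker_count = 2  # matches worker_count in the DistJobConfig above

# Rank 0 is the coordinator; higher ranks are the other workers
for rank in range(worker_count + 1):
    prt.distributed.view_log(
        job_dir=job_dir,
        job_id=coordinator_worker.job_id,
        rank=rank,
    )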
Wrapping up
- Once the job is completed, you can view the results in ~/my/ray/result.csv/
- Please note that result.csv is a folder that can contain parts of the processed file, one written by each worker (Ray executor); the first snippet after this list shows one way to combine them.
- Also note that you do not need to terminate the cluster: because the job has a py_file to execute, the terminate_on_completion parameter defaults to True.
- You can set terminate_on_completion to False to keep the cluster running after the job completes, for example to troubleshoot issues (see the second sketch after this list).
- You can view other prt.DistJobConfig properties to further customize the cluster.
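Because result.csv is a folder of part files rather than a single file, you will usually want to combine the parts before analysis. A minimal sketch with pandas, assuming each part is itself a plain CSV file ending in .csv (the exact file naming may differ in your environment):

from pathlib import Path

import pandas as pd

# Collect every CSV part file written under the result.csv folder
result_dir = Path.home() / "my" / "ray" / "result.csv"
parts = sorted(p for p in result_dir.iterdir() if p.suffix == ".csv")

# Concatenate the parts into a single DataFrame
df = pd.concat((pd.read_csv(p) for p in parts), ignore_index=True)
print(df.head())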
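If you want the cluster to stay up after the job finishes, a sketch of the relevant config change is below. The manual termination step at the end is an assumption about the worker API (the method name may differ in your SDK version), so treat it as illustrative only.

distributed_config = prt.DistJobConfig(
    job_type=prt.DistJobType.ray,
    job_dir=job_dir,
    py_file="job.py",
    worker_count=2,
    # Keep the cluster running after job.py finishes
    terminate_on_completion=False,
)

# ... create the worker and run the job as above, then terminate manually
# once you are done troubleshooting (assumed method name):
# coordinator_worker.terminate()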
Supplementary Files
job.py
import practicuscore as prt

# get_client() returns the Ray client for the cluster this job runs on
ray = prt.distributed.get_client()


@ray.remote
def square(x):
    return x * x


def calculate():
    numbers = [i for i in range(10)]
    futures = [square.remote(i) for i in numbers]
    results = ray.get(futures)
    print("Distributed square results of", numbers, "are", results)


if __name__ == "__main__":
    calculate()
    ray.shutdown()
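To verify the job logic outside the Practicus cluster, the same pattern works against a local Ray instance. This is a minimal sketch that assumes the open-source ray package is installed locally and uses only standard Ray API calls (ray.init, @ray.remote, ray.get, ray.shutdown):

import ray

# Start a local, single-machine Ray instance
ray.init()


@ray.remote
def square(x):
    return x * x


# Run the same distributed map as job.py, but locally
numbers = list(range(10))
results = ray.get([square.remote(i) for i in numbers])
print("Local square results of", numbers, "are", results)

ray.shutdown()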