Running and Re‑running Jobs for Reproducible Analysis
This help page explains how to run, inspect, and re‑run jobs in DECTRIS CLOUD. The goal is to help you iteratively refine analysis parameters while keeping a complete, traceable history of what was done.
Core Concepts
- Projects hold your data (raw, processed, work, metadata).
- Jobs are executions of analysis templates (e.g. unpack, DIALS).
- Reproducibility means:
  - Every job keeps its inputs, parameters, logs, template version, and outputs.
  - You can re‑run the same job with modified parameters.
  - All iterations are grouped in a job history.
You do not need to manage hardware directly; compute resources are handled automatically.
Step 1 – Run a Job
1. Upload data
- Go to your Project space.
- Navigate to the appropriate folder (e.g. raw/).
- Upload your data.
- Best practice: upload a single ZIP file if you have many small files; this is faster and more reliable (see the sketch below).
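For example, here is a minimal Python sketch that bundles a dataset folder into one archive before upload. The folder and archive names are placeholders, not names the platform requires:

```python
from pathlib import Path
from zipfile import ZipFile, ZIP_DEFLATED

def zip_dataset(source_dir: str, archive_path: str) -> None:
    """Bundle a directory of many small files into a single ZIP archive."""
    source = Path(source_dir)
    with ZipFile(archive_path, "w", compression=ZIP_DEFLATED) as zf:
        for file in sorted(source.rglob("*")):
            if file.is_file():
                # Store paths relative to the dataset root so the archive
                # unpacks into a clean folder structure.
                zf.write(file, str(file.relative_to(source)))

zip_dataset("my_dataset", "my_dataset.zip")  # example paths
```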
2. Start an analysis job
- Go to the Analysis tab.
- Select a public template (e.g. Unpack, DIALS macromolecule tutorial).
- Click Run job.
- Select the input data:
  - For unpacking: select the ZIP file.
  - For analysis (e.g. DIALS): select the folder containing the unpacked data, not individual files.
- Review optional settings (machine type, sponsor, template version).
- Start the job.
3. Monitor job status
After you start a job, it appears in the Jobs table with one of the following states:
- Pending = compute resources are being allocated and the environment is set up.
- Running = the job is executing.
- Completed = the job finished successfully.
- Failed = the job stopped due to an error.
Pending can take some time depending on resource availability.
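If you ever script around the platform, the state machine above translates into a simple polling loop. The sketch below assumes a `get_job_state` callable that you supply yourself; it is a hypothetical stand‑in, not a documented DECTRIS CLOUD API:

```python
import time

TERMINAL_STATES = {"Completed", "Failed"}

def wait_for_job(get_job_state, job_id: str, poll_seconds: float = 30.0) -> str:
    """Poll a job until it reaches a terminal state.

    `get_job_state` is a hypothetical callable returning one of
    "Pending", "Running", "Completed", or "Failed".
    """
    while True:
        state = get_job_state(job_id)
        if state in TERMINAL_STATES:
            return state
        # Pending can last a while during resource allocation,
        # so keep the polling interval generous.
        time.sleep(poll_seconds)
```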
Step 2 – Inspect Job Details
Click the eye (details) icon in the Jobs table to open the job details page.
You can inspect:
- Inputs – data paths and parameters used.
- Logs – live and final execution logs (essential for debugging).
- Outputs – files written during execution (may appear while the job is still running).
- Resource usage – CPU and runtime information.
👉 If a job fails, always check the Logs tab first.
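If you download a log for offline inspection, a short script can surface the error lines quickly. The file name and error markers below are assumptions; adjust them to the tool you actually ran:

```python
from pathlib import Path

ERROR_MARKERS = ("ERROR", "Traceback", "Exception")  # common markers; adjust per tool

def find_errors(log_path: str) -> list[str]:
    """Return log lines that look like errors, prefixed with their line numbers."""
    hits = []
    text = Path(log_path).read_text(errors="replace")
    for number, line in enumerate(text.splitlines(), start=1):
        if any(marker in line for marker in ERROR_MARKERS):
            hits.append(f"{number}: {line.strip()}")
    return hits

for hit in find_errors("job.log"):  # "job.log" is an example file name
    print(hit)
```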
Step 3 – Re‑run a Job with New Parameters
Re‑running is the core reproducibility feature.
How to re‑run
- Open the job details page.
- Click Rerun.
- Adjust input parameters (e.g. space group, symmetry options, flags).
- Start the new run.
Important notes:
- A re‑run creates a new job; it does not overwrite the original.
- Multiple re‑runs can execute in parallel.
- You can iterate as many times as needed.
Step 4 – View Job History (Iteration Tracking)
To compare iterations:
- Open a job’s details page.
- Click History in the top‑right actions.
This shows a filtered view containing:
- The original job
- All its re‑runs
- Each set of parameters and outcomes
This makes it easy to:
- Compare different parameter choices
- See which iteration succeeded or failed
- Understand how results evolved
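If you also keep parameter sets locally, a small helper can summarise exactly what changed between two iterations. The parameter names below are illustrative only:

```python
def diff_params(old: dict, new: dict) -> dict:
    """Map each changed parameter to its (old, new) value pair."""
    keys = old.keys() | new.keys()
    return {
        key: (old.get(key), new.get(key))
        for key in keys
        if old.get(key) != new.get(key)
    }

# Example parameter sets for two iterations (values are illustrative).
run_1 = {"space_group": None, "d_min": 1.8}
run_2 = {"space_group": "P212121", "d_min": 1.6}
print(diff_params(run_1, run_2))
# e.g. {'space_group': (None, 'P212121'), 'd_min': (1.8, 1.6)}
```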
Common Workflow Pattern
- Run a job with minimal or default parameters.
- Inspect logs and outputs.
- Identify missing or incorrect parameters.
- Re‑run the job with corrected inputs.
- Repeat until results are satisfactory.
This mirrors real scientific analysis and preserves full traceability.
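Expressed as code, the pattern is a plain loop. `run_job` and `inspect_outputs` below are hypothetical placeholders for the platform's Run/Rerun actions and your own quality checks:

```python
def refine(run_job, inspect_outputs, params: dict, max_iterations: int = 5):
    """Iterative refinement loop mirroring the workflow pattern above."""
    for _ in range(max_iterations):
        job = run_job(params)                     # each call is a new job, never an overwrite
        ok, suggested_changes = inspect_outputs(job)
        if ok:
            return job                            # results are satisfactory
        params = {**params, **suggested_changes}  # re-run with corrected inputs
    raise RuntimeError("No satisfactory result within the iteration budget")
```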
Practical Tips
- ZIP before upload if you have many small files.
- Select folders, not files, when a tool expects a dataset.
- If a job fails:
  - Check logs first.
  - Look for parameter type errors (e.g. boolean vs numeric); a type‑checking sketch follows these tips.
- Pending jobs are normal; resource allocation can take time.
- Re‑runs may start faster than original jobs.
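A lightweight pre‑flight check can catch boolean‑vs‑numeric mix‑ups before you submit. The parameter schema below is illustrative, not a DECTRIS CLOUD requirement:

```python
EXPECTED_TYPES = {"d_min": float, "nproc": int, "anomalous": bool}  # illustrative schema

def check_types(params: dict) -> list[str]:
    """Flag parameter values whose type does not match the expected schema.

    Note: bool is a subclass of int in Python, so it is checked explicitly
    to catch e.g. anomalous=1 where True/False was intended.
    """
    problems = []
    for name, expected in EXPECTED_TYPES.items():
        value = params.get(name)
        if value is None:
            continue  # parameter not supplied; nothing to check
        if expected is not bool and isinstance(value, bool):
            problems.append(f"{name}: got boolean {value!r}, expected {expected.__name__}")
        elif not isinstance(value, expected):
            problems.append(f"{name}: got {type(value).__name__} {value!r}, expected {expected.__name__}")
    return problems

print(check_types({"d_min": "1.8", "anomalous": 1}))
```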
Mental Model to Remember
- Data lives in projects.
- Software lives in job templates.
- Experiments happen as jobs acting on data.
Reproducibility is achieved through job history and preserved parameters.