From Instrument to Users

How instruments, facility staff, and users collaborate during experiments

DECTRIS CLOUD is designed to support live collaboration between instruments, facility staff, and users, from the moment data is collected to iterative reprocessing after the experiment.
This page explains how jobs are used to connect data acquisition, automated processing, and user-driven analysis.


Core Concepts

Experiments: Where Experiment Data Lives

  • Experiments are created by the facility (manually or via API) and represent a beamtime or measurement session.
  • Users are invited to experiments and automatically gain access to:
    • Uploaded data
    • Automatically triggered analysis jobs
    • Results produced during the experiment

The Cockpit: Your Live Experiment Hub

When an instrument is CLOUD-enabled, users and staff interact through the Cockpit.

What the Cockpit Shows

  • Instrument status
    • Whether data upload (HUB / sync) is connected
    • Live indication of files being uploaded
  • Active experiment
    • The experiment currently receiving data
    • PI and collaborators
  • Live processing
    • Jobs automatically triggered by the instrument
    • Job status (pending, running, finished)
  • Resource usage
    • License usage and remaining CPU hours (for staff monitoring)

Users can access the Cockpit if they are:

  • Facility staff assigned to the instrument, or
  • Invited to an active experiment

Jobs Triggered by the Instrument

How It Works

  1. The facility configures processing templates (e.g. XDS, DIALS, fast feedback pipelines).
  2. The instrument uploads data to the active experiment.
  3. Jobs are automatically triggered via the API as data becomes available (sketched below).
  4. Results appear directly in every collaborator’s job list.

This ensures:

  • Fast, standardized feedback during the experiment
  • No manual job submission required from the user
  • Shared visibility of results for the entire team
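
In practice, step 3 is a small facility-side script or service that submits a job through the API whenever a dataset finishes uploading. The sketch below is illustrative only: the base URL, endpoint path, template name, and payload fields are assumptions, not the documented API schema.

    import requests

    BASE_URL = "https://cloud.example.com/api/v1"  # placeholder, not the real endpoint
    HEADERS = {"Authorization": "Bearer <facility-api-token>"}

    def trigger_processing(experiment_id: str, dataset_path: str) -> dict:
        """Submit a pre-configured processing template for a newly uploaded dataset.

        Endpoint path, template name, and payload fields are assumptions made for
        this sketch; the real schema comes from the facility's API integration.
        """
        response = requests.post(
            f"{BASE_URL}/experiments/{experiment_id}/jobs",
            headers=HEADERS,
            json={
                "template": "xds-fast-feedback",  # job template configured by the facility
                "input": dataset_path,            # dataset that just finished uploading
                "profile": "8-cpu",               # performance profile (CPU count)
            },
        )
        response.raise_for_status()
        return response.json()  # the job then appears in every collaborator's job list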

User Interaction with Jobs

Even if a job was launched automatically by the instrument, users can fully interact with it.

Viewing Results

  • Open Analysis → Jobs
  • Expand a job to see:
    • Plots and metrics
    • Logs and intermediate output
    • Generated result files

Re-running Jobs (User Reprocessing)

Users can rerun any job with modified parameters (see the sketch after this list):

  • Change processing pipeline (e.g. XDS → DIALS)
  • Adjust resolution limits or other parameters
  • Choose a different performance profile (CPU count)
  • Select who sponsors the computation:
    • User license
    • Facility / lab license (if enabled)


This supports iterative refinement without duplicating data or workflows.

Profiles and Credits

CPU Usage Model

Job cost is calculated as:

CPU hours = number of CPUs × runtime

Example:

  • 8 CPUs × 1 hour → 8 CPU hours
  • 16 CPUs × 30 minutes → 8 CPU hours

More CPUs do not always mean faster jobs.
Performance depends on how well the analysis software scales.
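
A small back-of-the-envelope helper makes the accounting explicit and shows why a larger profile can cost more CPU hours when the software does not scale perfectly (the 0.7 h runtime is a hypothetical figure):

    def cpu_hours(n_cpus: int, runtime_hours: float) -> float:
        """CPU hours = number of CPUs x runtime."""
        return n_cpus * runtime_hours

    # Perfect scaling: doubling the CPUs halves the runtime, so the cost is unchanged.
    print(cpu_hours(8, 1.0))   # 8.0 CPU hours
    print(cpu_hours(16, 0.5))  # 8.0 CPU hours

    # Imperfect scaling (hypothetical): 16 CPUs only bring the runtime down to 0.7 h,
    # so the job finishes a bit sooner but consumes more CPU hours.
    print(cpu_hours(16, 0.7))  # 11.2 CPU hours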

Sponsorship Options

When enabled by the facility:

  • Users can run jobs sponsored by the lab
  • Useful during beamtime to avoid user license friction

Live vs. Post-Experiment Access

During the Experiment

  • Data uploads appear live in the Cockpit
  • Jobs may queue initially but start more quickly once compute nodes are warm
  • Users can:
    • Inspect raw data
    • View intermediate job outputs
    • Rerun analyses immediately

After the Experiment

  • The experiment moves from Running to Processing
  • Data and jobs remain accessible
  • Users can continue analysis until the processing window expires
    (duration depends on the facility license)

All completed experiments remain accessible via Data → Experiments.

Viewing Raw Data During Acquisition

Users who want quick visual feedback before jobs finish can:

  • Navigate to the experiment’s root folder
  • Browse uploaded files in real time
  • Use built-in viewers (e.g. HDF5 viewer) to:
    • Inspect detector images
    • Adjust intensity scaling
    • Scroll through frames

This is especially useful for remote users monitoring beamtime.
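
The built-in viewers cover this directly in the browser. For users who prefer to take the same quick look at a downloaded file locally, a minimal sketch with standard tools follows; the file name and dataset path are assumptions (NeXus-style detector files commonly store frames under /entry/data/..., but the exact layout depends on the instrument).

    import h5py
    import matplotlib.pyplot as plt

    # Open a downloaded detector file; file name and dataset path are assumptions.
    with h5py.File("downloaded_master.h5", "r") as f:
        frames = f["/entry/data/data"]       # 3D stack of frames: (n_frames, y, x)
        frame = frames[0]                    # read the first frame into memory

    # Crude intensity scaling so weak features remain visible next to hot pixels.
    plt.imshow(frame, vmax=frame.mean() * 5)
    plt.colorbar(label="counts")
    plt.show()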

Facility Configuration & Flexibility

Facility staff can:

  • Create and switch experiments from the Cockpit
  • Configure which job templates run automatically
  • Adjust processing strategies between experiments
  • Use the API to fully automate (see the sketch after this list):
    • Experiment creation
    • User invitations
    • Job triggering
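
A rough sketch of what such end-to-end automation can look like from the facility side. The base URL, endpoint paths, and field names below are assumptions for illustration; the actual schema is defined by the DECTRIS CLOUD API documentation.

    import requests

    BASE_URL = "https://cloud.example.com/api/v1"  # placeholder, not the real endpoint
    HEADERS = {"Authorization": "Bearer <facility-api-token>"}

    # 1. Create an experiment representing the upcoming beamtime (hypothetical endpoint/fields).
    experiment = requests.post(
        f"{BASE_URL}/experiments",
        headers=HEADERS,
        json={"name": "Beamtime 2025-03 / BL-1", "instrument": "beamline-1"},
    ).json()

    # 2. Invite the PI and collaborators; they automatically gain access to data, jobs, and results.
    for email in ["pi@university.example", "student@university.example"]:
        requests.post(
            f"{BASE_URL}/experiments/{experiment['id']}/invitations",
            headers=HEADERS,
            json={"email": email},
        )

    # 3. Job triggering then follows the same pattern as the processing sketch earlier on this page.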

This allows each beamline to tailor workflows to:

  • Fast feedback
  • High-quality final processing
  • User-specific needs

Collaboration and Transparency

All collaborators on an experiment can:

  • See the same data
  • See the same jobs
  • See who launched or reran a job
  • Inspect logs and outputs

This shared context:

  • Reduces manual file exchange
  • Improves communication between users and facility staff
  • Enables collaborative decision-making during beamtime

Summary

DECTRIS CLOUD jobs act as the bridge between instruments and users:

  • Instruments automatically process data as it is collected
  • Users receive immediate insight without setup overhead
  • Both sides share the same data, results, and context
  • Reprocessing and exploration remain fully user-controlled

This creates a seamless workflow from data acquisition → live feedback → collaborative analysis → post-experiment refinement.
