Elastic Storage

Overview

What is elastic storage?

Elastic Storage is hot storage that flexes with your work. You pay for your average usage, not for the single biggest spike. Short bursts are welcome: you will never hit a hard cap mid-experiment. Data in Elastic Storage is automatically mounted in all interactive sessions and automated jobs, with bandwidths of up to 100 Gbit/s per machine.

How does it work?

We meter storage using a rolling 90-day average. Upload as much as you need during an experiment, a simulation, or an intense processing phase; your entitlement is sized to the average, and the platform is designed to absorb peaks. When data becomes inactive, move it to Archiving (cold storage) and restore it to Elastic Storage whenever you need to compute again (typical restore time: up to ~12 hours). Alternatively, if you have an offsite copy, just delete the file. What matters is: how much data do you have, and for how long?
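The metering itself is simply an average over a sliding window. As a rough illustration of the mechanics (not DECTRIS CLOUD's exact billing formula; the function name and the numbers below are our own, for illustration only), a short Python sketch:

```python
# Minimal sketch of a rolling 90-day average over daily storage footprints.
# Illustrative only; the exact metering formula is the platform's, and this
# is an assumption about its general shape.
from collections import deque

def rolling_90d_average(daily_footprints_tb):
    """Yield the 90-day trailing average (in TB) for each day."""
    window = deque(maxlen=90)
    for tb in daily_footprints_tb:
        window.append(tb)
        yield sum(window) / 90  # days before the series start count as 0 TB

# Example: a steady 2 TB baseline plus a 10-day, 30 TB burst.
usage = [2.0] * 40 + [30.0] * 10 + [2.0] * 130
print(max(rolling_90d_average(usage)))  # the burst adds only ~(30-2)*10/90 ≈ 3.1 TB to the average
```

In this toy series the 10-day burst raises the metered average by only about 3 TB, which is why short peaks do not force an upgrade.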

Benefits of Elastic Storage for Scientists

Benefits during daily work

  • No limit to discovery and innovation:
    Elastic storage can deal with massive bursts by design. Discovery should not be limited by your hard drive or cluster quota.
  • Faster analysis:
    Elastic storage is always mounted on your machine. No transferring, no syncing, no scratch - data is just there.
  • Simple data life cycle:
    Don’t need the raw data anymore? Just move it to Archiving. Need it back 2 days later? Just pull it back with a click.
  • Constant data versioning:
    Mistakes are part of science - we get it. Accidentally deleted or overwritten a file? We keep several versions of your data, so you can simply go back in time and recover your work.
  • Data security and protection:
    Data in Elastic Storage is kept with triple redundancy, data objects are protected from ransomware attacks, and the probability of losing a data object is below 0.000000001%.

Benefits for cost & procurement

  • Budgets are calm, not reactive.
    You plan around a steady annual commit and top up with credits when needed - no emergency purchases, no surprise “we’re full” calls.
  • Capacity adjusts in minutes.
    Peaks are absorbed; you right-size when the average shifts. No manual hardware upgrades, no monthly procurement cycles, no waiting for months for things to take effect.
  • No racks, no rooms, no headaches.
    Zero floor space, power, or cooling to reserve. No fighting for office space and no IT tickets to get anything done.
  • Admin overhead drops.
    No more quota policing, juggling expansions, or migration projects. Lifecycle is policy-driven: hot when active, archived when quiet.
  • Simplified grant applications.
    Working on a grant application that will span 4 years? Just generate a quote, download it, submit it with your grant, and be ready to go when the project kicks off.

| Dimension | Elastic Storage (DECTRIS CLOUD) | Fixed Cloud Quota | On-Prem Storage |
| --- | --- | --- | --- |
| Sizing model | Pay for a rolling 90-day average; temporary peaks don’t force an upgrade. | Pay for the allocated size (e.g., 1 TB); exceeding it requires a plan change. | Size for peak; capacity is whatever you bought. |
| Burst handling | Bursts allowed by design: ingest or generate data without hitting a hard cap. | Usually blocked or billed as overage; needs quota changes. | Only possible if you over-provisioned. |
| Scale-up lead time | Minutes; no hardware to procure, and the rolling average absorbs spikes. | Hours to days to change plan/limits. | Months (RFQs, delivery, install). |
| Data lifecycle | Hot ↔ Archive in a click; restore typically up to ~12 h before compute. | Tiering rules vary; restores can take hours to days depending on tier. | Often manual moves; tape restores are slow. |
| Compute readiness | Data automatically mounted on compute nodes; no syncing/transferring before use. | May require transferring/syncing before use. | Depends on local cluster and staging. |
| Cost basis | Annual commit for predictability plus credits for flexibility; you pay for the average, not the peak. | Pay for max allocation; bursts often trigger higher tiers. | CapEx + OpEx (power, cooling, support). |
| Credit utility | Credits last 3 years; convert to Elastic Storage, Archiving, or Compute; can be assigned to projects/experiments. | N/A (most clouds bill per service separately). | N/A |
| Right-sizing rule of thumb | Elastic ≈ 1/5 of yearly produced data or ½ of peak hot footprint (assuming ~30 days hot). | Size for peak. | Size for peak plus headroom. |
| Collaboration & chargeback (coming soon) | Credits and usage can be mapped to projects/grants; clean internal reporting. | Cross-account/showback varies by setup. | Manual tracking or separate systems. |


How much Elastic Storage do I need?

Below are three examples showing how Elastic Storage works in different scenarios, including a rule of thumb to calculate how much Elastic Storage you need. Have questions about your specific use case? Just get in touch with us!

Example - Visiting Scientists carrying out Highly Demanding Synchrotron Experiments

Assumption: 4 Experiments per year, 25-40 TB of data per experiment, yearly data production of 125 TB, raw data is pushed to archive or deleted after 30 days.

Result: Trailing average never exceeds 20 TB.

Example - Visiting Scientists carrying out multiple CryoEM SPA Experiments

Assumption: 8 SPA Experiments per year, 2 TB of data per experiment, yearly data production of 18 TB, raw data is pushed to archive or deleted after 30 days.

Result: Trailing average never exceeds 3 TB.

Example - A High Throughput MX Synchrotron beamline

Assumption: 34 weeks of user operation per year with an EIGER2 XE 16M, 35k samples per year, yearly data production of 475 TB, raw data is pushed to archive or deleted after 14 days.

Result: Trailing average never exceeds 25 TB.
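To make the averaging concrete, here is a toy simulation of the first (synchrotron) example. The spacing of the experiments, the 40 TB size, and the helper function are our own simplifying assumptions, not the exact billing model:

```python
# Toy simulation: 4 experiments per year, evenly spaced, 40 TB uploaded per
# experiment, kept hot for 30 days, then archived or deleted; nothing else
# stays hot. Assumptions are ours, for illustration only.
days = 2 * 365
footprint_tb = [0.0] * days
for start in range(0, days, 91):              # roughly one experiment per quarter
    for d in range(start, min(start + 30, days)):
        footprint_tb[d] = 40.0                # 40 TB stays hot for 30 days

def trailing_avg(series, day, window=90):
    lo = max(0, day - window + 1)
    return sum(series[lo:day + 1]) / window   # days before the series count as 0 TB

peak = max(trailing_avg(footprint_tb, d) for d in range(days))
print(f"Peak 90-day trailing average: {peak:.1f} TB")  # ~13 TB, well under the ~20 TB quoted above
```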

Rules of thumb

Assuming you keep data for 30 days in Elastic Storage before you move it to archive or delete it (a short calculation sketch follows this list):

  • Elastic Storage = one fifth of yearly produced data

or

  • Elastic Storage = half of the peak needed storage volume
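As a quick illustration, the two rules can be wrapped in a small helper. The function name and the choice to take the more conservative of the two estimates are ours, not a product API:

```python
# Illustrative helper for the two rules of thumb above (both assume ~30 days
# of hot retention before archiving/deleting).
def elastic_estimate_tb(yearly_production_tb=None, peak_hot_tb=None):
    estimates = []
    if yearly_production_tb is not None:
        estimates.append(yearly_production_tb / 5)   # one fifth of yearly production
    if peak_hot_tb is not None:
        estimates.append(peak_hot_tb / 2)            # half of the peak hot footprint
    return max(estimates)                            # keep the more conservative value

# Synchrotron example: 125 TB per year, ~40 TB peak hot footprint
print(elastic_estimate_tb(125, 40))   # 25.0 TB, comfortably above the ~20 TB trailing average
```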

Why does Elastic Storage look so expensive?

Elastic Storage isn’t “just disks.” You’re buying a managed, burst-tolerant service that keeps experiments moving: no hard limits for bursts, data always mounted and ready, instant resizing if plans change, no syncing/transferring between uses, extremely high durability (near-zero risk of loss), version rollbacks that have your back, ransomware protection, and more. Practically, that means your daily work doesn’t pause for quotas or limits, and analysis can start the moment data lands - no firefighting, no detours.

By contrast, “cheap hard drives” (or fixed cloud quotas) carry hidden costs: you size for peak and sit on idle capacity, you still need a second, off-site copy, and someone must police quotas, migrate data, replace drives, manage power and cooling, and handle audits. Procurement cycles add drag, and the real risk is lost time when space runs out mid-campaign. Those are real costs and real hours - even if they don’t show up on a per-TB sticker.

To keep elastic economical, size to your 90-day average (not your biggest week), keep active data hot for ~30 days, archive the rest, and use credits to absorb campaign spikes. If your workload is perfectly steady, >80% utilized, and you already operate dual sites with staff and space, on-prem can be cheaper. 

And if you really want to compare costs, use our rule of thumb from real-life examples: with 1 TB of Elastic Storage you can cover roughly 2 TB of peak utilization. To keep a safety margin, one would typically buy at least 3 TB of solid-state drive (SSD) capacity to get somewhat comparable performance.
