January 8th, 2026: Getting Data Into DECTRIS CLOUD

Recording of the first part of the meeting

-- to be uploaded --

Environments and help pages referenced

Unpack job (needs app login)

Globus Data Transfer environment (needs app login)

Upload via Globus help page

Upload via Browser

DECTRIS SYNC (software uploader) help page

Machine-generated transcript

Introduction

Hi everyone, and welcome to the first DECTRIS CLOUD power-user meeting of the year.
Today we’re going to talk about different ways of getting your own data into DECTRIS CLOUD, so you can take advantage of the scalable cloud compute power to perform analysis on it.

If we look at today’s outline, I’ll first show the most straightforward ways for you as a user to upload data to DECTRIS CLOUD, either through the web app uploader or from inside a session. Toward the end, I’ll also briefly talk about using our software uploader, which is available to certain users. As always, we’ll have time for Q&A at the end.

Uploading data via the web app

To start, I’ll show how to use the web app uploader. I’ll go to app.dectris.cloud.

From the home page, I click on the Data tab and then My Projects. Here, you can select the project you want to upload data to. If you don’t already have a project, you can use the Create project button to make a new one.

Once I select a project, I click Go to project space, where I can see the different project folders. You can upload data to any of these folders. In my case, I’ll go into the raw folder.

The simplest way to upload data is to use the Upload button. This opens a dialog where I can drag in files or folders. I’ll drag in a folder here.

The uploader first initializes and creates a list of all the files in the folder. If I clicked Upload, it would start uploading all of them. I'll click Cancel for now, because I also want to show another option.

Uploading archives and resumable uploads

The speed of the web app uploader depends on your own internet bandwidth, and uploading many individual files adds overhead, so in some cases it can be faster to upload a single compressed archive instead.
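If you want to go this route, the archive is created locally before uploading. A minimal sketch using a standard command-line tool; the folder name below is just a placeholder for your own data directory:

    # Pack a data folder into a single ZIP archive before uploading
    # ("my_dataset" is a placeholder folder name)
    zip -r my_dataset.zip my_dataset/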

I open the upload dialog again and drag in a ZIP file. Now we’re uploading just a single file.

One important thing to know about the web app uploader is that it keeps track of progress. If your connection is unstable and an upload is interrupted, you can simply drag the file in again and it will continue from where it stopped.

I’ll demonstrate that now. I start the upload and let it run until it reaches just past 50%, then I refresh the page to interrupt it. After refreshing, I drag in the same file again and click Upload. You can see that it resumes from where it left off.

This is particularly useful for large files — we’ve seen users successfully upload files of several hundred gigabytes this way.

Once the upload finishes, the file is available in the project and accessible to anyone the project is shared with.

Unpacking archives using a job template

Now we have a ZIP archive in the project. To unpack it, you could start a session and do it manually, but we also provide a job template for this.
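For reference, the manual route from a session terminal would look something like the sketch below, assuming the unzip utility is available in the environment; the project path and archive name are placeholders:

    # Unpack the uploaded archive directly inside a session
    # (project path and archive name are placeholders)
    cd /dectris_data/my-project/raw
    unzip my_dataset.zip

The job template simply does this step for you on a machine of your choice.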

I go to Analysis and then Public job templates, and scroll down to find the template called Unpack. Here, I select the ZIP archive I just uploaded from my project’s raw folder.

To unpack the file, I only need a small machine, so I choose one with 2 CPUs, keep the software disk at 32 GB, and click Run.

The job appears in my job list as pending. Once it starts running, it unpacks the archive and places the extracted files into the project. You can also see logs from previous runs that show exactly how the files were unpacked.

That covers uploading data using the web app.

Uploading data from inside a session

Next, I’ll show how to upload or transfer data from inside a session.

To start a session, I go to Analysis and then Public environments. Here, I can choose which software environment I want. For simple data transfers, a basic Ubuntu or Rocky Linux environment is sufficient. In this case, I’ll choose the Globus Data Transfer environment to demonstrate that option.

I click Start session, give the session a name, and keep the selected environment. If you’re transferring large files, you may want to increase the session duration, but for this example I’ll keep it at four hours.

I choose a base machine with 4 CPUs. The reason for this is that network bandwidth inside a session depends partly on the machine type. Compared to the free 2-CPU machine, the base machine typically provides significantly better transfer speeds.

I then select the project I want to work with, review the summary, and create the session. In my case, I already created this session earlier, so I simply start it from My Sessions.

Once the session is running, I click Open, which opens it in a new browser tab.

Using Globus inside a session

Because I selected the Globus environment, I immediately see the Globus interface. I can click Log in, which opens a browser window where I authenticate using my organizational login.

Once logged in, I can choose a source location in Globus and set the destination to my project folder. Inside the session, the project folder is located under /dectris_data. If I click this shortcut, I can navigate through the folders and see the same project structure as in the web app.
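As a side note, if the environment also provides the Globus command-line client (an assumption; only the web interface is shown here), the same transfer could in principle be scripted. The endpoint IDs and paths below are placeholders:

    # Authenticate the Globus CLI (opens a browser window for login)
    globus login

    # Transfer a directory from a source endpoint into the session's project folder
    # (both endpoint UUIDs and both paths are placeholders)
    globus transfer --recursive \
        SOURCE_ENDPOINT_UUID:/path/to/dataset \
        SESSION_ENDPOINT_UUID:/dectris_data/my-project/raw/dataset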

Only data stored under /dectris_data is accessible outside the session and visible in the web app. Files stored elsewhere, such as on the desktop or in Documents, are only accessible within the session itself.
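In practice, that means anything you want to keep after the session, or see in the web app, has to be placed under that path. For example, moving a file out of the session-local Documents folder might look like this (project name and file name are placeholders):

    # Move a file from the session-local Documents folder into the shared project storage
    # (project and file names are placeholders)
    mv ~/Documents/results.zip /dectris_data/my-project/raw/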

Other ways to transfer data inside a session

You don’t have to use Globus — it’s just one option.

Inside a session, you can also use the Upload file button in the corner to upload files directly from your local machine. This works well for smaller files. You can also download files from the session to your local machine by clicking on them.

Since you have access to a web browser inside the session, you can also download files from services like Google Drive using Firefox and then move them into your project folder.

Using the command line

Finally, you can transfer data using the command line.

I open a terminal and navigate to my project directory under /dectris_data. Using ls, I can confirm that I see the same folders as in the file browser. I then go into the raw folder.

Here, I can see the data I uploaded earlier. I’ll remove it so I can demonstrate downloading it again.

This example dataset is publicly available on Zenodo, so I can download it using wget. Once I run the command, you can see the data being downloaded directly into the project folder.
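As a rough sketch of the commands involved; the project name and the Zenodo URL are placeholders rather than the actual dataset used in the demo:

    # Work inside the project's raw folder so the data ends up visible in the web app
    cd /dectris_data/my-project/raw

    # Download a publicly available dataset straight into the project
    # (the record ID and file name are placeholders)
    wget https://zenodo.org/records/RECORD_ID/files/DATASET.zip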

Because I’m using a base machine with 4 CPUs, the download speed is higher than it would be on the free 2-CPU machine. Bandwidth generally increases with higher machine types.

You can also use rsync to download data from other servers. I’ll show a simple example that downloads a small file.
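A minimal sketch of such an rsync pull; the remote host, user name, and paths are placeholders:

    # Pull a single file from a remote server into the project folder
    # (-a preserves attributes, -v is verbose, -P shows progress and keeps partial transfers)
    rsync -avP user@remote.example.org:/data/small_file.h5 /dectris_data/my-project/raw/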

One important limitation to be aware of is that you can rsync out of a session, meaning outbound connections are allowed, but you cannot rsync into a session. Inbound connections are blocked for security reasons. For the same reason, VPNs are not supported inside sessions. If your data is only accessible through a VPN, you’ll need to use a different transfer method.

That covers the most common ways to get data into your project using either the web app or a session.

Software uploader

Finally, I’ll briefly show the software uploader.

To start, I’ll go to our help page at help.dectris.cloud, where you can find more information. Under Instruments, you’ll find DECTRIS Cloud Sync, which is the name of our software uploader. It was originally designed for instrument data, where large volumes of data are generated and need to be uploaded automatically.

On the overview page, you can download the software uploader. There are different package options, but the easiest is usually the standalone binary, since it doesn’t require administrator rights to run.

Normally, this software runs on your local machine, but for convenience I’m demonstrating it inside a session.

In the Documents folder, I have several data directories, the standalone uploader binary, a configuration file, and a license file. The license file handles authentication and is something you need to request from us — it’s not automatically available to all users.

To run the uploader, I simply execute it and point it to the configuration file. The configuration file specifies the path to the license file and the folders that the uploader should monitor.

Once the uploader is running, I create a small test file in one of the monitored directories. You can see that the uploader immediately detects the new file and uploads it.
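The test file itself is nothing special; something like the following is enough, with the monitored directory path standing in for whatever is listed in your configuration:

    # Create a small test file in one of the monitored directories
    # (the directory path is a placeholder)
    echo "sync test" > ~/Documents/data1/test_upload.txt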

That’s essentially how it works. The configuration file can also be set to delete files after upload, but in this example that option is disabled.

The uploaded file is now visible in the web app.

Closing

That concludes the presentation for today. I hope this was helpful.

As always, you’re welcome to ask questions during the meeting or contact us later through the help center. I’ll now stop the recording, and we can proceed with Q&A.
