
Detector X-Ray Python Package

Most of these Python scripts can be called with the '-h' option to get more detailed help.

libDetXR.py

wrapper for the libDetXR library (.dll/.so). It contains the following C functions:
  • Compression Algorithms

  • Imaging Algorithms

  • BitManip Functions

for details see C/C++: ../src/README.rst
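
As an illustration only, such a shared library is typically loaded from Python via ctypes; the sketch below shows the loading step, while the actual function names and signatures are those documented in ../src/README.rst (the commented declaration is purely hypothetical):

  import ctypes
  import platform

  # Pick the platform-specific library file (libDetXR.so / libDetXR.dll).
  libname = "libDetXR.dll" if platform.system() == "Windows" else "libDetXR.so"
  lib = ctypes.CDLL(libname)

  # Hypothetical example of declaring a C entry point; the real names and
  # signatures are described in ../src/README.rst, not invented here.
  # lib.someBitManipFunc.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
  # lib.someBitManipFunc.restype = ctypes.c_int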

SPEC2hdf5.py

Input:

*.cbf files and *.dat files

Output:

one *.hdf5 file

Speedup:

multiprocess, single node

collects the data of a scan at the cSAXS beamline and stores it in one HDF5 file

This will analyse and merge the following data into one hdf5 file:

specES1/dat-files/specES1_started_2013_04_26_1604.dat
mcs/S00000-00999/S00033/e14472_00033.dat
eiger/S00000-00999/S00033/e14472_1_00033_00000_00001.cbf and many more
pilatus_1/S00000-00999/S00033/e14472_1_00033_00000_00001.cbf and many more
pilatus_2/S00000-00999/S00033/e14472_1_00033_00000_00001.cbf and many more

This can be sped up on one compute node by using all available cores.
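
As a rough sketch of what the merge amounts to (the actual HDF5 layout, group names and shapes used by SPEC2hdf5.py may differ), writing several sources into one file with h5py looks like this:

  import h5py
  import numpy as np

  # Hypothetical layout: one group per data source of a scan, merged into
  # a single file. Names, shapes and dtypes are placeholders.
  with h5py.File("e14472_S00033.h5", "w") as h5:
      spec = h5.create_group("specES1")
      spec.create_dataset("positions", data=np.zeros((100, 4)))

      det = h5.create_group("eiger")
      det.create_dataset("frames", shape=(100, 2167, 2070), dtype="uint32",
                         chunks=(1, 2167, 2070), compression="gzip")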

ImgSource.py

ImgSource is a helper class that provides images. It is used, among others, in zmqImageSource, procMoment, procRoiStat and procSTXM. Currently the following sources are supported:

Random values:
  --src rand    X    Y Z --type=DATATYPE
  --src rand 2560 2160 3 --type=uint16
Incremental values:
  --src inc    X    Y Z --type=uint16
  --src inc 2560 2160 6 --type=uint16
Raw data from files *.raw in a directory:
  --src raw    X    Y PATH                       --type=DATATYPE
  --src raw 2560 2160 /scratch/detectorData/PCO/ --type=uint16
TIFF data from files *.tif[f] in a directory:
  --src tif PATH
  --src tif /scratch/detectorData/PCO/
CBF data from files *.cbf in a directory:
  --src cbf PATH
  --src cbf /scratch/detectorData/e14472/pilatus_1/S00000-00999/S00033/
HDF file from a given object:
  --src hdf FILE ELEM
  --src hdf myfile.hdf5 entry/mygrp/dataset
ZMQ source stream (currently only the uncompressed 'chunk' type is supported):
  --src zmq JSON-config (with server,optional (queueSz, ifType and timeout))
  --src zmq '{"server":"tcp://localhost:8080","queueSz":4,"ifType":"PULL"}',

zmqImageSource.py

ZeroMQ PUSH or PUB server:
the data source can be any of the types that ImgSource supports. The images are then sent as ZMQ messages ('chunk' type, raw or compressed).
Input:

a data source: *.cbf, *.tiff, *.hdf5, *.raw files, a zmq stream, etc.

Output:

one zmq stream with 'chunk' header (raw or compressed)

Speedup:

only single process
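
A minimal receiving side for such a stream could look as follows; this is a sketch that assumes a PULL socket and a two-part message (a JSON header with hypothetical 'shape' and 'type' fields, followed by the raw pixel buffer), not the exact 'chunk' wire format:

  import json
  import numpy as np
  import zmq

  ctx = zmq.Context()
  sock = ctx.socket(zmq.PULL)
  sock.connect("tcp://localhost:8080")   # address as in the ImgSource example

  # Assumed layout: frame 0 = JSON header, frame 1 = raw pixel data.
  header_raw, payload = sock.recv_multipart()
  header = json.loads(header_raw)
  img = np.frombuffer(payload, dtype=header.get("type", "uint16"))
  img = img.reshape(header["shape"])     # 'shape' key is an assumption
  print(img.shape, img.dtype)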

procMoment.py

Computes the moments m00, m11, m01, m02, m10, m20 of an image source (cbf, tiff, raw files or an hdf5 file) and stores the result in an output hdf5 file.

Input:

ImgSource.py data (raw, tiff, cbf, hdf5, zmq stream, etc.), optionally a MATLAB mask file for valid pixels

Output:

one *.hdf5 file or a zmq Stream

Speedup:

only single process

Furthermore, a visualization of the moments is possible.
Different implementations are available: python/opencv or C.
Multi-process speedup is not yet implemented but would be feasible on request.
A SIMD implementation would also provide a speedup, on request.

Single image processing (32 bit, 1679x1475 pilatus_2 image, on pc 9477):

  python: 0.5 sec
  pyFast: 0.03 sec
  openCV: 0.005 sec
  c:      0.005 sec

The speed is now mostly memory bandwidth bound.
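
For reference, the raw moments listed above can be computed with plain NumPy as follows (a straightforward sketch, not the optimized pyFast/OpenCV/C paths used by procMoment.py):

  import numpy as np

  def raw_moments(img):
      """Raw moments m00, m10, m01, m11, m20, m02 of a 2-D image."""
      img = np.asarray(img, dtype=np.float64)
      y, x = np.indices(img.shape)
      m00 = img.sum()
      m10 = (x * img).sum()
      m01 = (y * img).sum()
      m11 = (x * y * img).sum()
      m20 = (x * x * img).sum()
      m02 = (y * y * img).sum()
      return m00, m10, m01, m11, m20, m02

  # Example: centroid of a synthetic frame of pilatus_2 size.
  frame = np.random.randint(0, 100, size=(1679, 1475)).astype(np.uint32)
  m00, m10, m01, *_ = raw_moments(frame)
  print("centroid x/y:", m10 / m00, m01 / m00)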

procRoiStat.py

Computes statistics over multiple ROIs of an image source (cbf, tiff, raw files or an hdf5 file) and stores the result (avg, avgstd, sumsq) for each ROI in an output hdf5 file.

Input:

ImgSource.py data (raw, tiff, cbf, hdf5, zmq stream, etc.), and a MATLAB ROI file

Output:

one *.hdf5 file or a zmq Stream

Speedup:

only single process

The input for the ROI definition is a MATLAB ROI file (e.g. pilatus_integration_mask.mat) as used at the cSAXS beamline.

Currently only the avg is implemented.
The speed is much higher than that of the original MATLAB implementation.
Multi-process speedup is not yet implemented but would be feasible on request.
A SIMD implementation would also provide a speedup, on request.
Single image processing (32 bit, 1679x1475 pilatus_2 image, on pc 9477):
The speedup depends on the mask; the mask used here is the 16-segment pilatus_integration_mask.mat.
The C implementation is 41-42x faster than the Python one.
The Python implementation is already much faster than the original MATLAB.
  python: 0.235 sec
  c:      0.006 sec

Therefore we can achieve a speedup of 100-500x compared to the MATLAB code. The speed is now mostly memory bandwidth bound.
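
The per-ROI average itself is a simple masked reduction. A NumPy sketch, assuming the MATLAB mask has been loaded as an integer label image (0 = outside every ROI, 1..N = segment number; this layout is an assumption, not the actual .mat structure):

  import numpy as np

  def roi_averages(img, labels):
      """Average pixel value per ROI for an integer label mask."""
      img = np.asarray(img, dtype=np.float64).ravel()
      labels = np.asarray(labels, dtype=np.intp).ravel()
      sums = np.bincount(labels, weights=img)
      counts = np.bincount(labels)
      with np.errstate(invalid="ignore", divide="ignore"):
          avg = sums / counts
      return avg[1:]                    # drop label 0 (unused pixels)

  # Example with a random frame and a hypothetical 16-segment mask.
  frame = np.random.randint(0, 1000, size=(1679, 1475))
  mask = np.random.randint(0, 17, size=(1679, 1475))
  print(roi_averages(frame, mask))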

procSTXM.py

STXM processing of an image source and visualization. The current implementation is very basic and serves as a template for a later, faster and more flexible STXM processing script.

Input:

ImgSource.py data (raw, tiff, cbf, hdf5, zmq stream, etc.), optionally a MATLAB mask file for valid pixels

Output:

x, y, t on the console during processing; STXM viewer and a /tmp/result.npz file when processing is finished.

Speedup:

single process only; multiprocess on a single node for hdf5->hdf5 processing

zmqWriter.py

ZmqWriter connects to a ZMQ server that sends JSON and binary data messages.

Input:

one or multiple zmq streams with 'chunk' or 'pilatus' (cbf files) header

Output:

one *.hdf5 file

Speedup:

multiprocess, single node (compression); single process writing to file

The data is stored in an hdf5 file, or the program can simply copy *.cbf files.
The program can also be started as a REST server.
The REST server and each writer are independent processes.

Multi-process compression speedup for the writer process is not foreseen, because the zmqWriter is intended to receive already compressed chunks. Nevertheless, the zmqWriter can convert images to a desired compression scheme; this runs single threaded and could therefore be time critical:

1. *.cbf-Files            -> *.cbf-Files
2. raw image              -> hdf5-File (uncompressed, byte-shuffle, zlib lz4, lzf)
3. *.cbf-Files            -> hdf5-File (uncompressed, byte-shuffle, zlib lz4, lzf)
4. compressed image-chunk -> hdf5-File (compressed as the received image-chunk)

The modes used in production are 1 for cbf files from the Pilatus and 4 for hdf5 files with the Eiger.

Modes 2 and 3 are for tests and for preparing the hdf5 format; they are not recommended for production since they are not performant.
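
For modes 2 and 3, the HDF5 side amounts to writing a chunked dataset with a shuffle filter plus a compression filter. A minimal h5py sketch (gzip and lzf are built into h5py; LZ4/bitshuffle normally require an external filter plugin such as hdf5plugin), with placeholder names and shapes:

  import h5py
  import numpy as np

  frames = np.random.randint(0, 65535, size=(10, 2167, 2070), dtype=np.uint16)

  with h5py.File("frames.h5", "w") as h5:
      # One chunk per frame, byte-shuffle plus gzip; "lzf" is the other
      # built-in choice, LZ4 needs an external HDF5 filter plugin.
      h5.create_dataset("entry/data", data=frames,
                        chunks=(1,) + frames.shape[1:],
                        shuffle=True, compression="gzip", compression_opts=4)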

Cbf2CrystFELhdf5.py

converts cbf files to CrystFEL hdf5 files. The program searches for all *.cbf files in a directory and converts them. See also http://www.desy.de/~twhite/crystfel/

Further helper classes

utilities.py:

utilities to create HDF5 objects

CbfParser.py:

class to parse *.cbf files. It reads the header and decompresses the binary part into a numpy array

DatParser.py:

class to parse *.dat files at the cSAXS beamline.

zmq2imgGL.py:

a zmq sink that displays the received image in color.
*.cbf zmq messages, as well as uncompressed and compressed chunks, are supported.

FileNameGen.py:

generate filenames with a given filename pattern. (Currently not used)

hdf5vis.py:

simple test application to show movies of image series in an hdf5 file

libDetXRTester.py:

sample code to test the libDetXR functionality.

testPerfHdf5ChunkWrite.py:

tests the performance of chunk writing:
all source data is first read into RAM, then an HDF5 file is written.
Different interprocess communication mechanisms are tested and compared:
'Pipe1', 'Pipe1b', 'Pipe2', 'Pool1', 'ShMem1', 'ShMem2', 'ShMem3', 'ShPool2'
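
As an illustration of what such a comparison measures (not the script's actual benchmark code), a minimal Pipe-based round trip of one raw image chunk between two processes can be timed like this:

  import multiprocessing as mp
  import numpy as np
  import time

  def worker(conn):
      buf = conn.recv_bytes()          # receive one raw chunk
      conn.send(len(buf))              # acknowledge with its size

  if __name__ == "__main__":
      chunk = np.zeros((2167, 2070), dtype=np.uint16).tobytes()
      parent, child = mp.Pipe()
      p = mp.Process(target=worker, args=(child,))
      p.start()
      t0 = time.perf_counter()
      parent.send_bytes(chunk)
      assert parent.recv() == len(chunk)
      print("Pipe round trip: %.3f ms" % ((time.perf_counter() - t0) * 1e3))
      p.join()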
