
How to import XAS data in 9809 format properly into Athena

Summary

This post explains how to import XAS data in the 9809 format (commonly used at Japanese synchrotron facilities) into Athena correctly.

Environment

OS

Windows 10

XAS software

Athena ver. 0.9.25

Enable PFBL12C plugin to import 9809 format file

  1. Select the "Plugin registry" menu.

/images/format9809/01-pluginregistry.png

  2. Check PFBL12C on.

/images/format9809/02-pluginregistry.png

  3. Return to the main window.

XAS data in Transmission (I0 and I1 ion chambers)

This is an example of XAS data measured in transmission mode at SPring-8 BL01B1.

 9809     SPring-8 01b1
161118-Ptfoil-  16.11.18 11:01 - 16.11.18 11:14
Pt foil
Ring :   8.0 GeV    99.5 mA -   99.5 mA
Mono :   SI(111)       D=  3.13551 A    Initial angle=   9.9821 deg
01b1      Transmission( 2)   Repetition=  1     Points=  452
Param file : Pt-L3_111_2015  angle axis (1)     Block =    4

Block      Init-ang  final-ang     Step/deg     Time/s       Num
    1       10.14110   9.87453    -0.008890       1.00        30
    2        9.87453   9.80581    -0.000350       1.00       196
    3        9.80581   9.28116    -0.003230       1.00       162
    4        9.28116   8.90000    -0.005960       1.00        64
Ortec( 0)     NDCH = 4
 Angle(c)  Angle(o)    time/s         2         3        21
     Mode         0         0         1         2         5
   Offset         0         0 43697.800 39469.000     0.000
 10.14110  10.14010      1.00  70563531  30124893      1450
 10.13221  10.13165      1.00  70537204  30223003      1450

When you select the file to import, the following window will pop up.

/images/format9809/04-importtransmissiondata.png

As you can see, the Angle(c), Angle(o), and time columns, together with the two detector columns (Mode 1 and Mode 2), are converted into the energy_requested, energy_attained, time, i0, and i1 columns.

Then choose energy_attained as Energy, i0 as Numerator, i1 as Denominator, and select the Natural log checkbox.

This is because a XAS spectrum in transmission mode is given by

\begin{equation*} \mu = - \ln{ \frac{I_1}{I_0} } = \ln{ \frac{I_0}{I_1} } \end{equation*}
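As a sanity check, the angle-to-energy conversion the plugin performs and the transmission formula can be sketched in a few lines of Python. This is my own reconstruction from Bragg's law and the header values (d = 3.13551 A for Si(111)), not the plugin's actual code, and the function names are mine:

```python
import math

HC = 12398.42        # hc in eV*Angstrom (approximate)
D_SI111 = 3.13551    # Si(111) d-spacing from the file header, in Angstrom

def energy_from_angle(theta_deg, d=D_SI111):
    """Photon energy in eV from the monochromator Bragg angle: E = hc / (2 d sin(theta))."""
    return HC / (2.0 * d * math.sin(math.radians(theta_deg)))

def mu_transmission(i0, i1):
    """mu*t = ln(I0 / I1) for transmission data."""
    return math.log(i0 / i1)

# First data row above: Angle(o) = 10.14010 deg, I0 = 70563531, I1 = 30124893
print(energy_from_angle(10.14010))          # ~11230 eV, below the Pt L3 edge
print(mu_transmission(70563531, 30124893))  # ~0.85
```

Note that the energy increases as the angle decreases, which is why the scan blocks above step the angle downward.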

XAS data in fluorescence mode (I0 ion chamber and a Lytle detector)

This is an example of XAS data measured in fluorescence mode with a Lytle detector at SPring-8 BL01B1.

 9809     SPring-8 01b1
160727-No1_02w  16.07.27 03:14 - 16.07.27 03:35
No.1 0.2 wt% Pt/Al2O3
Ring :   8.0 GeV    99.5 mA -   99.5 mA
Mono :   SI(111)       D=  3.13551 A    Initial angle=   9.9919 deg
01b1      Fluorescence( 3)   Repetition=  1     Points=  390
Param file : Pt-L3_111.par   angle axis (1)     Block =    4

Block      Init-ang  final-ang     Step/deg     Time/s       Num
    1       10.14110   9.87453    -0.008890       1.00        30
    2        9.87453   9.80581    -0.000430       1.00       160
    3        9.80581   9.28116    -0.003230       4.00       162
    4        9.28116   9.05757    -0.005960       4.00        38
Ortec( 0)     NDCH = 4
 Angle(c)  Angle(o)    time/s         2         3        21
     Mode         0         0         1         3         5
   Offset         0         0 46779.200 68155.100     0.000
 10.14110  10.09560      1.00  27651974   1722630       723
 10.13221  10.09510      1.00  27647183   1725256       723

When you select the file to import, the following window will pop up.

/images/format9809/06-importLytledata.png

The Angle(c), Angle(o), and time columns, together with the two detector columns (Mode 1 and Mode 3), are converted into the energy_requested, energy_attained, time, i0, and i1 columns.

You must choose energy_attained as Energy, i0 as Denominator, i1 (= If) as Numerator, and unselect the Natural log checkbox.

This is because a XAS spectrum in fluorescence mode is given by

\begin{equation*} \mu \propto \frac{I_f}{I_0} \end{equation*}

XAS data in fluorescence mode (I0 ion chamber and 19ch Ge detector)

This is an example of XAS data measured in fluorescence mode with a 19ch Ge detector at SPring-8 BL01B1.

 9809     SPring-8 01b1
160727-No1_02w  16.07.27 04:33 - 16.07.27 04:54
No.1 0.2 wt% Pt/Al2O3
Ring :   8.0 GeV    99.5 mA -   99.5 mA
Mono :   SI(111)       D=  3.13551 A    Initial angle=   9.9861 deg
01b1      Fluorescence( 3)   Repetition=  1     Points=  390
Param file : Pt-L3_111.par   angle axis (1)     Block =    4

Block      Init-ang  final-ang     Step/deg     Time/s       Num
    1       10.14110   9.87453    -0.008890       1.00        30
    2        9.87453   9.80581    -0.000430       1.00       160
    3        9.80581   9.28116    -0.003230       4.00       162
    4        9.28116   9.05757    -0.005960       4.00        38
CAMAC( 1)     NDCH =21
 Angle(c)  Angle(o)    time/s         1         2         3         4         5         6         7         8         9        10        11        12        13        14        15        16        17        18        19        20        21         1         2         3         4         5         6         7         8         9        10        11        12        13        14        15        16        17        18        19        20        21
     Mode         0         0         3         3         3         3         3         3         3         3         3         3         3         3         3         3         3         3         3         3         3         1         5       103       103       103       103       103       103       103       103       103       103       103       103       103       103       103       103       103       103       103       101       105
   Offset         0         0     0.000     0.000     0.000     0.000     0.000     0.000     0.000     0.000     0.000     0.000     0.100     0.000     0.000     0.000     0.000     0.000     0.000     0.000     0.000 32891.400     0.000     0.700     0.100     0.300     0.100     0.200     0.000     0.801     0.400     0.100     0.000     0.100     0.201     0.000     0.100     0.000     0.300     0.301     0.200     0.200     0.000     0.000
 10.14110  10.11650      1.00      1050      1221      1017      1172      1204         0       998      1250      1374         0      1297      1048         0      1338         0      1258      1214      1265       930  87474570       -96     55340     62226     57939     51786     49337         0     58115     57122     52450         0     71864     62283         0     65595         0     67839     65453     69066     57120  87474570       -96
 10.13221  10.11625      1.00      1107      1207      1102      1147      1225         0      1052      1303      1340         0      1270      1066         0      1352         0      1204      1157      1334       936  87443294       -96     55462     62811     57847     52107     49150         0     58311     56815     52581         0     72055     61739         0     65459         0     68456     65740     68750     57825  87443294       -96

When you select the file to import, the following window will pop up.

/images/format9809/08-importSSDdata.png

The Angle(c), Angle(o), time, and detector columns are converted into the energy_requested, energy_attained, time, i0, i1, "6", "7", "8", ... columns.

You should choose energy_attained as Energy, "23" as Denominator, and i0, i1, "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22" as Numerator, then unselect the Natural log checkbox.

This is because a XAS spectrum measured in fluorescence mode with a 19ch SSD is given by

\begin{equation*} \mu \propto \frac{I_f}{I_0} = \frac{i0+i1+"6"+\ldots+"22"}{"23"} \end{equation*}
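The channel summation can likewise be sketched in Python. This is a hypothetical helper of my own, not Athena's code; in the data above, dead channels read zero and contribute nothing to the sum:

```python
def mu_fluorescence(channels, i0):
    """mu is proportional to the summed fluorescence counts divided by I0."""
    return sum(c for c in channels if c > 0) / i0  # skip dead (zero) channels

# 19 deadtime-corrected channel counts and I0 ("23") from the first data row above
channels = [55340, 62226, 57939, 51786, 49337, 0, 58115, 57122, 52450,
            0, 71864, 62283, 0, 65595, 0, 67839, 65453, 69066, 57120]
i0 = 87474570
print(mu_fluorescence(channels, i0))  # a small dimensionless ratio
```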

FDMNES (2015-10-28) with openmpi on Debian GNU/Linux 8 Jessie

Summary

A memorandum on compiling FDMNES (2015-10-28) on Debian GNU/Linux 8.0.

The fdmnes executable compiled with MUMPS and openmpi is unexpectedly slow, most likely because my compilation procedure and options are suboptimal.

Environment

OS

Debian GNU/Linux 8.0 Jessie

Compiler

gfortran 4.9

CPU

Intel Core i7-4770K (4 physical cores with Hyper-Threading, i.e. 8 threads)

Compiling FDMNES (sequential) with the original Gaussian solver

Editing mat_solve_gaussian.f90

integer:: i, i_newind, ia, ib, icheck, ie, igrph, ii, ipr, isp, ispin, ispinin, iv, j, jj, k, lb1i, lb1r, lb2i, lb2r, lm, lmaxso, &
  lms, MPI_host_num_for_mumps, mpirank0, natome, nbm, nbtm, ngrph, nicm, nim, nligne, nligne_i, nligneso, nlmagm, nlmmax, &
  nlmomax, nlmsam, nlmso, nlmso_i, nphiato1, nphiato7, npoint, &
  npsom, nsm, nso1, nsort, nsort_c, nsort_r, nsortf, nspin, nspino, nspinp, nspinr, nstm, nvois

Just split line 15 like this.

integer:: i, i_newind, ia, ib, icheck, ie, igrph, ii, ipr, isp, ispin, ispinin, iv, j, jj, k, lb1i, lb1r, &
  lb2i, lb2r, lm, lmaxso, &
  lms, MPI_host_num_for_mumps, mpirank0, natome, nbm, nbtm, ngrph, nicm, nim, nligne, nligne_i, nligneso, nlmagm, nlmmax, &
  nlmomax, nlmsam, nlmso, nlmso_i, nphiato1, nphiato7, npoint, &
  npsom, nsm, nso1, nsort, nsort_c, nsort_r, nsortf, nspin, nspino, nspinp, nspinr, nstm, nvois
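The split is only needed because gfortran rejects free-form source lines longer than 132 characters by default. An alternative, which I have not tested here, is to lift that limit with a compiler flag instead of editing the source, e.g. in the Makefile's FFLAGS:

```makefile
# Alternative to splitting the line: lift gfortran's 132-column
# free-form line-length limit for all .f90 files
FFLAGS = -c -O$(OPTLVL) -ffree-line-length-none
```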

Makefile

The Makefile for the sequential FDMNES with the original Gaussian solver can look like this.

FC = gfortran
OPTLVL = 3

EXEC = ../fdmnes_gauss
FFLAGS = -c  -O$(OPTLVL)

OBJ_GAUSS = main.o clemf0.o coabs.o convolution.o dirac.o fdm.o fprime.o general.o lecture.o mat.o metric.o \
            minim.o optic.o potential.o selec.o scf.o spgroup.o sphere.o tab_data.o tddft.o tensor.o \
            not_mpi.o mat_solve_gaussian.o sub_util.o

all: $(EXEC)

$(EXEC): $(OBJ_GAUSS)
     $(FC) -o $@ $^

%.o: %.f90
     $(FC) -o $@ $(FFLAGS) $?

clean:
     rm -f *.o $(EXEC)
     rm -f *.mod

mpif.h should be copied from the include directory.

Execute FDMNES (sequential)

I ran fdmnes with the first example file Sim/Test_stand/Cu.

$ time ./fdmnes_gauss
...
real    0m40.395s
user    0m40.060s
sys     0m0.308s

Everything goes well.

Compiling FDMNES (openmpi) with the original Gaussian solver

Install openmpi

$ sudo apt-get install libopenmpi-dev openmpi-bin

Makefile

The Makefile for the parallel FDMNES with the original Gaussian solver can look like this.

FC = mpif90
OPTLVL = 3

EXEC = ../fdmnes_gauss_openmpi
FFLAGS = -c  -O$(OPTLVL)

OBJ_GAUSS = main.o clemf0.o coabs.o convolution.o dirac.o fdm.o fprime.o general.o lecture.o mat.o metric.o \
            minim.o optic.o potential.o selec.o scf.o spgroup.o sphere.o tab_data.o tddft.o tensor.o \
            mat_solve_gaussian.o sub_util.o

all: $(EXEC)

$(EXEC): $(OBJ_GAUSS)
     $(FC) -o $@ $^

%.o: %.f90
     $(FC) -o $@ $(FFLAGS) $?

clean:
     rm -f *.o $(EXEC)
     rm -f *.mod

Note that not_mpi.o is no longer needed.

Execute FDMNES (parallel) with the original Gaussian solver

$ time mpirun -np 8 ./fdmnes_gauss_openmpi
...
real    0m23.965s
user    2m59.052s
sys     0m4.216s

Everything goes well.

Compiling MUMPS (sequential)

  1. Download MUMPS from http://mumps-solver.org/ .

  2. Install dependencies

  3. Make all libraries

Install dependencies and make libraries

$ sudo apt-get install libmetis-dev libscotch-dev
$ cd /path/to/MUMPS_5.0.1
$ cp Make.inc/Makefile.debian.SEQ Makefile.inc
$ make all
$ cp libseq/libmpiseq.a lib

Note that you should not install libparmetis-dev and libptscotch-dev; at least on my machine, linking against these libraries failed.

Compiling FDMNES (sequential) with MUMPS

The Makefile for the sequential FDMNES with MUMPS can look like this.

Makefile

FC = gfortran
OPTLVL = 3

EXEC = ../fdmnes

BIBDIR = /path/to/MUMPS_5.0.1/lib

FFLAGS = -O$(OPTLVL) -c

OBJ = main.o clemf0.o coabs.o convolution.o dirac.o fdm.o fprime.o general.o lecture.o mat.o metric.o \
      minim.o optic.o potential.o selec.o scf.o spgroup.o sphere.o tab_data.o tddft.o tensor.o \
      mat_solve_mumps.o

all: $(EXEC)

$(EXEC): $(OBJ)
     $(FC) -o $@ $^ -L$(BIBDIR) -ldmumps -lzmumps -lmumps_common -lmpiseq -lmetis -lpord \
                            -lesmumps -lscotch -lscotcherr -lpthread -llapack -lblas
%.o: %.f90
     $(FC) -o $@ $(FFLAGS) $?

clean:
     rm -f *.o $(EXEC)
     rm -f *.mod

Execute FDMNES (sequential) with MUMPS

$ time ./fdmnes
...
real    0m11.262s
user    0m25.988s
sys     0m58.796s

The calculation runs correctly; HOWEVER, the %system CPU usage is very large.

Performance of the self-compiled FDMNES (sequential) with MUMPS and fdmnes_linux64

I ran fdmnes_linux64 and the self-compiled fdmnes with the first example file Sim/Test_stand/Cu .

$ time ./fdmnes_linux64
...
real    0m8.335s
user    0m23.800s
sys     0m0.488s

$ time ./fdmnes
...
real    0m11.262s
user    0m25.988s
sys     0m58.796s

The CPU usages of fdmnes_linux64 and the self-compiled fdmnes are about 400% (4 cores) and 800% (8 threads), respectively. The self-compiled fdmnes is a bit slower than fdmnes_linux64, which can be ascribed to the difference in compilers: gfortran for fdmnes versus ifort for fdmnes_linux64.

In addition, the %system CPU usage of the self-compiled fdmnes is very high (about 70%). I am not sure why, but it may be what lowers the performance.

Compiling FDMNES (openmpi) with MUMPS

Install dependencies and make libraries

$ cd /path/to/MUMPS_5.0.1
$ cp Make.inc/Makefile.debian.PAR Makefile.inc
$ make all

Note that you should not install libparmetis-dev and libptscotch-dev; at least on my machine, linking against these libraries failed.

Makefile

The Makefile for the parallel FDMNES with MUMPS can look like this.

FC = mpif90
OPTLVL = 3

EXEC = ../fdmnes_openmpi

BIBDIR = /path/to/MUMPS_5.0.1/lib

FFLAGS = -O$(OPTLVL) -c

OBJ = main.o clemf0.o coabs.o convolution.o dirac.o fdm.o fprime.o general.o lecture.o mat.o metric.o \
      minim.o optic.o potential.o selec.o scf.o spgroup.o sphere.o tab_data.o tddft.o tensor.o \
      mat_solve_mumps.o

all: $(EXEC)

$(EXEC): $(OBJ)
     $(FC) -o $@ $^ -L$(BIBDIR) -ldmumps -lzmumps -lmumps_common -lmetis -lpord \
                            -lesmumps -lscotch -lscotcherr -lpthread -llapack -lblas \
                            -lscalapack-openmpi -lblacs-openmpi -lblacsF77init-openmpi \
                            -lblacsCinit-openmpi -lmpi -lmpi_f77

%.o: %.f90
     $(FC) -o $@ $(FFLAGS) $?

clean:
     rm -f *.o $(EXEC)
     rm -f *.mod

Note that you need all the headers provided in the include directory except for mpif.h .

Execute FDMNES (parallel) with MUMPS

$ time mpirun -np 8 ./fdmnes_openmpi
real    0m32.581s
user    2m5.808s
sys     2m6.408s

The calculation appears to run correctly; HOWEVER, the %system CPU usage is far too large again.

Performance of the self-compiled FDMNES (sequential and parallel) with MUMPS and fdmnes_linux64

I ran fdmnes_linux64 and the self-compiled fdmnes with the first example file Sim/Test_stand/Cu .

$ time ./fdmnes_linux64
real    0m8.335s
user    0m23.800s
sys     0m0.488s

$ time ./fdmnes
real    0m11.211s
user    0m26.224s
sys     0m58.484s

$ time mpirun -np 8 ./fdmnes_openmpi
real    0m32.581s
user    2m5.808s
sys     2m6.408s

Unfortunately, fdmnes built with openmpi and MUMPS (which I had hoped would be the fastest) is the SLOWEST executable... The %system CPU usage of fdmnes_openmpi is again very high (about 50%).

Orca 3.0.2 with openmpi on Debian GNU/Linux 7.0 wheezy

Summary

The easiest way to run ORCA 3.0.2 with openmpi on Debian 7.0 wheezy.

Compiling openmpi on Debian GNU/Linux 7.0 wheezy

Follow the blog post "181. Compiling openmpi on debian wheezy" to compile and install openmpi 1.6.5 on your system.

Add small init script for openmpi 1.6.5

  1. Add ~/bin/init_orca.sh or something like it:

$ cat ~/bin/init_orca.sh
export LD_LIBRARY_PATH=/opt/openmpi/1.6/lib
export PATH=/opt/openmpi/1.6/bin:$PATH

  2. Load the PATH and LD_LIBRARY_PATH whenever you use ORCA:

$ source ~/bin/init_orca.sh

Execute ORCA

  1. Add the PAL keyword to your input file to request four parallel processes:

! PAL4

  2. Execute orca:

$ /path/to/orca test.inp > test.out &
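For reference, a minimal complete input file using this keyword might look like the following. The method, basis set, and water geometry here are placeholders of my own, not from the original post:

```
! BP86 def2-SVP PAL4

* xyz 0 1
O    0.000000    0.000000    0.000000
H    0.000000    0.000000    0.970000
H    0.920000    0.000000   -0.300000
*
```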

That's all.