Intel compilers
[[Category:Software]]
Revision as of 02:25, 14 October 2013
To use the Intel compilers, a number of environment variables must be set. For csh users this is easily done by adding the command

 source /opt/intel/bin/ifortvars.csh intel64

to .cshrc. For bash users, add the following line

 source /opt/intel/bin/ifortvars.sh intel64

to .bashrc.
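To confirm that the environment was picked up, a quick check is to see whether ifort now resolves from the PATH (a sketch; the compiler's install location varies per system):

```shell
# After sourcing ifortvars, ifort should resolve from PATH.
# Prints the compiler's location, or "missing" if the environment
# was not set up in this shell.
location=$(command -v ifort || echo "missing")
echo "ifort: $location"
```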
== Documentation ==

=== Compiling Fortran (/opt/intel/bin/ifort) ===
* Intel Fortran Composer XE 2011 Getting Started Tutorials
* Intel Fortran Compiler XE 12.0 User and Reference Guides

== Math Kernel Library (mkl, linking blas, lapack) ==

=== Intel Cluster Studio 2011 ===
* Intel Cluster Studio 2011 for Linux* OS - index to all local documentation
* Intel® MPI Library for Linux OS Documentation Index

Building the MKL blas and lapack libraries is described in detail in Building Custom Shared Objects.
* Create a new directory (e.g. ~/lib):

 mkdir ~/lib
 cd ~/lib
* Copy these files:

 cp /opt/intel/composerxe/mkl/tools/builder/{makefile,blas_list,lapack_list} ~/lib
* Set the MKLROOT variable (in bash):

 MKLROOT=/opt/intel/mkl
 export MKLROOT

In tcsh use:

 setenv MKLROOT /opt/intel/mkl
* Make the shared libraries libblas_mkl.so and liblapack_mkl.so:

 make libintel64 export=blas_list interface=lp64 threading=parallel name=libblas_mkl
 make libintel64 export=lapack_list interface=lp64 threading=parallel name=liblapack_mkl
The options are described here.

The newly created libblas_mkl.so and liblapack_mkl.so require

 /opt/intel/lib/intel64/libiomp5.so

to work. On the cluster nodes this file is automatically linked when required.
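Whether the runtime linker can actually resolve such a dependency can be checked with ldd. A sketch, using /bin/ls as a stand-in binary since the MKL libraries are cluster-specific; on the cluster you would run ldd on libblas_mkl.so itself:

```shell
# ldd lists each shared library a binary needs and where it resolves;
# an unresolved dependency is shown as "not found".
ldd /bin/ls

# If libiomp5.so were reported "not found", its directory can be
# prepended to the search path (path taken from the text above):
LD_LIBRARY_PATH=/opt/intel/lib/intel64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```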
== Using the MKL BLAS and LAPACK shared libraries (with Scilab) ==

This should work for any executable that uses a dynamically linked blas or lapack. We use Scilab as an example.

* Make sure we have an executable, not just a script that calls the executable:

 file scilab-bin
The output looks something like this:

 scilab-bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15 ...
* Determine the exact name that is used by the executable:

 ldd scilab-bin | grep blas

The output could be:

 libblas.so.3gf => ~/scilab-5.4.1/lib/thirdparty/libblas.so.3gf
* Replace the library with a link to the MKL version:

 cd ~/scilab-5.4.1/lib/thirdparty/
 rm libblas.so.3gf
 ln -s ~/lib/libblas_mkl.so libblas.so.3gf

Also follow this procedure for lapack.
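The replace-with-symlink step can be rehearsed safely with throwaway files before touching the real Scilab tree (a sketch; the temporary directory stands in for ~/lib and the Scilab thirdparty directory):

```shell
# Throwaway stand-ins for the MKL build and Scilab's bundled BLAS
tmp=$(mktemp -d)
touch "$tmp/libblas_mkl.so"
touch "$tmp/libblas.so.3gf"

# Same sequence as above: remove the bundled library, link in MKL
cd "$tmp"
rm libblas.so.3gf
ln -s "$tmp/libblas_mkl.so" libblas.so.3gf

# readlink shows where the symlink now points
target=$(readlink libblas.so.3gf)
echo "$target"
cd - >/dev/null
rm -rf "$tmp"
```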
* To use more than one thread, i.e., for parallel computation, set:

 MKL_NUM_THREADS=4
 export MKL_NUM_THREADS

This example will use 4 cores.
* To check the number of cores available, use:

 grep -c processor /proc/cpuinfo
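The detected core count can feed straight into MKL_NUM_THREADS; a sketch that uses every core, which is just one possible choice:

```shell
# Count logical cores by counting "processor" lines in /proc/cpuinfo
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "cores: $cores"

# Use every core for MKL (tune this down on shared nodes)
MKL_NUM_THREADS=$cores
export MKL_NUM_THREADS
```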