Revision as of 15:17, 28 May 2019
Intel compilers (2019u4 / 2019 / 2014)
C&CZ, together with TCM and Theoretical Chemistry, has bought two licences for concurrent use of the Intel Parallel Studio XE for Linux. Several versions have been installed in /vol/opt/intelcompilers and are available on, among others, the clusternodes and loginservers. To set the environment variables correctly, SH/BASH users must first run:
source /vol/opt/intelcompilers/intel-2019/composerxe/bin/compilervars.sh intel64
and CSH users must run:
setenv arch intel64
source /vol/opt/intelcompilers/intel-2019/composerxe/bin/compilervars.csh intel64
After that, icc -V reports the new version number: Version 19.0.1.144 Build 20181018. The 2014 version reported: Version 14.0.2.144 Build 20140120.
A very useful resource is the Intel MKL Link Line Advisor, which will advise you on compiler and linker options for using the MKL.
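As an illustration, a typical advisor result for dynamically linking the 64-bit (lp64), OpenMP-threaded MKL with the Intel compiler looks like the line below; the source file name is a placeholder, and exact library names and paths depend on your MKL version, so always check the advisor itself:

```
icc myprog.c -I${MKLROOT}/include \
    -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core \
    -liomp5 -lpthread -lm -ldl
```

Recent icc versions also accept the shorthand -mkl=parallel for this common case.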
Documentation for the previous version (2011)
Compiling Fortran (/opt/intel/bin/ifort)
- Intel Fortran Composer XE 2011 Getting Started Tutorials
- Intel Fortran Compiler XE 12.0 User and Reference Guides
Math Kernel Library (mkl, linking blas, lapack)
Intel Cluster Studio 2011
- Intel Cluster Studio 2011 for Linux* OS - index to all local documentation
- Intel® MPI Library for Linux OS Documentation Index
This is described in detail in Building Custom Shared Objects
- Create a new directory (e.g. ~/lib)
mkdir ~/lib
cd ~/lib
- Copy these files:
cp /opt/intel/composerxe/mkl/tools/builder/{makefile,blas_list,lapack_list} ~/lib
- Set the MKLROOT variable (in bash):
MKLROOT=/opt/intel/mkl
export MKLROOT
In tcsh use:
setenv MKLROOT /opt/intel/mkl
- Make the shared libraries libblas_mkl.so and liblapack_mkl.so
make libintel64 export=blas_list interface=lp64 threading=parallel name=libblas_mkl
make libintel64 export=lapack_list interface=lp64 threading=parallel name=liblapack_mkl
The options are described here
The newly created libblas_mkl.so and liblapack_mkl.so require
/opt/intel/lib/intel64/libiomp5.so
to work. On the cluster nodes this file is automatically linked when required.
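Whether a binary or library actually resolves its dependencies can be checked with ldd. A minimal sketch, using /bin/ls as a stand-in target since the MKL paths above are site-specific:

```shell
# List the shared libraries a binary resolves at load time.
# On the cluster you would point this at the freshly built library, e.g.
#   ldd ~/lib/libblas_mkl.so | grep libiomp5
ldd /bin/ls
```

A line reading "not found" in the output means the corresponding library is missing from the loader's search path.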
This should work for any executable that uses a dynamically linked blas or lapack. We use Scilab as an example.
- Make sure we have an executable, not just a script that calls the executable:
file scilab-bin
The output looks something like this:
scilab-bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15 ...
- Determine the exact name that is used by the executable:
ldd scilab-bin | grep blas
The output could be:
libblas.so.3gf => ~/scilab-5.4.1/lib/thirdparty/libblas.so.3gf
- Replace the library with a link to the MKL version
cd ~/scilab-5.4.1/lib/thirdparty/
rm libblas.so.3gf
ln -s ~/lib/libblas_mkl.so libblas.so.3gf
Also follow this procedure for lapack.
- To use more than one thread, i.e., for parallel computation, set:
MKL_NUM_THREADS=4
export MKL_NUM_THREADS
This example will use 4 cores.
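The variable only takes effect in programs started after the export. A quick, generic shell sketch (not MKL-specific) confirming that a child process inherits it:

```shell
# Export the thread count, then show that a child process sees it.
MKL_NUM_THREADS=4
export MKL_NUM_THREADS
sh -c 'echo "child sees MKL_NUM_THREADS=$MKL_NUM_THREADS"'
```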
- To check the number of cores available, use:
cat /proc/cpuinfo | grep processor | wc -l
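Equivalent, slightly more direct ways to get the same count, all standard on Linux:

```shell
# Each of these prints the number of logical processors.
grep -c ^processor /proc/cpuinfo
nproc
getconf _NPROCESSORS_ONLN
```

Note that nproc reports the processors available to the current process, which can be lower than the /proc/cpuinfo total when CPU affinity is restricted (e.g. inside a batch job).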