Intel compilers
C&CZ has bought two licences for concurrent use of the most recent version of the [http://software.intel.com/en-us/articles/intel-cluster-studio-xe/ Intel Cluster Studio for Linux]. This has been installed in <tt>/vol/opt/intelcompilers</tt> and is available on, among others, the [http://wiki.science.ru.nl/cncz/index.php?title=Hardware_servers&setlang=en#.5BReken-.5D.5BCompute_.5Dservers.2Fcluster cluster nodes] and [http://wiki.science.ru.nl/cncz/index.php?title=Hardware_servers&setlang=en#Login-servers login servers]. The old (2011) version will also be moved to <tt>/vol/opt/intelcompilers</tt>. To set the environment variables correctly, users must first run:
<pre>
source /vol/opt/intelcompilers/intel-2014/composerxe/bin/compilervars.sh intel64
</pre>
After that, <tt>icc -V</tt> reports the new version number.
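To verify the setup, one can compile a small test program with <tt>icc</tt>. This is a minimal sketch; <tt>hello.c</tt> is just an example file name:
<pre>
# create a trivial C program and build it with the Intel compiler
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from icc\n"); return 0; }
EOF
icc -O2 -o hello hello.c
./hello
</pre>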
  
=== Documentation for the previous version (2011) ===

To use the 2011 Intel compilers, a number of environment variables must be set. csh users can do this by adding the line
<pre>
source /opt/intel/bin/ifortvars.csh intel64
</pre>
to their <tt>.cshrc</tt>. bash users should add the line
<pre>
source /opt/intel/bin/ifortvars.sh intel64
</pre>
to their <tt>.bashrc</tt>.

==== Compiling Fortran (/opt/intel/bin/ifort) ====
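A minimal sketch of compiling a Fortran program with <tt>ifort</tt>, assuming the environment above has been sourced; <tt>hello.f90</tt> is just an example file name:
<pre>
# create a trivial Fortran program and build it with ifort
cat > hello.f90 <<'EOF'
program hello
  print *, 'hello from ifort'
end program hello
EOF
ifort -O2 -o hello hello.f90
./hello
</pre>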


==== Math Kernel Library (mkl, linking blas, lapack) ====

=== Intel Cluster Studio 2011 ===

==== How to create a standalone MKL version of BLAS and LAPACK shared libraries? ====

This is described in detail in the Intel MKL documentation section ''Building Custom Shared Objects''.

* Create a new directory (e.g. <tt>~/lib</tt>):
<pre>
mkdir ~/lib
cd ~/lib
</pre>
* Copy these files:
<pre>
cp /opt/intel/composerxe/mkl/tools/builder/{makefile,blas_list,lapack_list} ~/lib
</pre>
* Set the MKLROOT variable (in bash):
<pre>
MKLROOT=/opt/intel/mkl
export MKLROOT
</pre>
In tcsh use:
<pre>
setenv MKLROOT /opt/intel/mkl
</pre>
* Make the shared libraries <tt>libblas_mkl.so</tt> and <tt>liblapack_mkl.so</tt>:
<pre>
make libintel64 export=blas_list interface=lp64 threading=parallel name=libblas_mkl
make libintel64 export=lapack_list interface=lp64 threading=parallel name=liblapack_mkl
</pre>
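To check that the build produced a usable library, one can inspect its exported symbols with <tt>nm</tt> and link a test program against it. This is a minimal sketch: <tt>myprog.c</tt> is a hypothetical source file that calls BLAS routines.
<pre>
# list exported symbols; dgemm should appear in the output
nm -D ~/lib/libblas_mkl.so | grep -i dgemm
# link against the custom library; myprog.c is a placeholder
gcc myprog.c -L$HOME/lib -lblas_mkl -Wl,-rpath,$HOME/lib -o myprog
</pre>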

The options are described in the MKL ''Building Custom Shared Objects'' documentation.

The newly created <tt>libblas_mkl.so</tt> and <tt>liblapack_mkl.so</tt> require
<pre>
/opt/intel/lib/intel64/libiomp5.so
</pre>
to work. On the cluster nodes this library is automatically linked when required.
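On machines where it is not linked automatically, one workaround (a sketch, using the library path mentioned above and bash syntax) is to put its directory on <tt>LD_LIBRARY_PATH</tt> and check the result with <tt>ldd</tt>:
<pre>
# make the OpenMP runtime findable, then verify it resolves
export LD_LIBRARY_PATH=/opt/intel/lib/intel64:$LD_LIBRARY_PATH
ldd ~/lib/libblas_mkl.so | grep iomp5
</pre>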

==== Using the MKL BLAS and LAPACK shared libraries (with Scilab) ====

This should work for any executable that uses a dynamically linked blas or lapack. We use Scilab as an example.

* Make sure we have an executable, not just a script that calls the executable:
<pre>
file scilab-bin
</pre>
The output looks something like this:
<pre>
scilab-bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15 ...
</pre>
* Determine the exact name that is used by the executable:
<pre>
ldd scilab-bin | grep blas
</pre>
The output could be:
<pre>
libblas.so.3gf => ~/scilab-5.4.1/lib/thirdparty/libblas.so.3gf
</pre>
* Replace the library with a symbolic link to the MKL version:
<pre>
cd ~/scilab-5.4.1/lib/thirdparty/
rm libblas.so.3gf
ln -s ~/lib/libblas_mkl.so libblas.so.3gf
</pre>

Also follow this procedure for lapack.
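For concreteness, a sketch of the same steps for lapack. The soname <tt>liblapack.so.3gf</tt> and the location of the executable are assumptions here; take the exact names from the <tt>ldd</tt> output of your own executable:
<pre>
cd ~/scilab-5.4.1/lib/thirdparty/
# find the exact lapack soname the executable uses (path is an example)
ldd ~/scilab-5.4.1/bin/scilab-bin | grep lapack
rm liblapack.so.3gf
ln -s ~/lib/liblapack_mkl.so liblapack.so.3gf
</pre>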

* To use more than one thread, i.e., for parallel computation, set:
<pre>
MKL_NUM_THREADS=4
export MKL_NUM_THREADS
</pre>
This example will use 4 cores.
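In tcsh the equivalent is:
<pre>
setenv MKL_NUM_THREADS 4
</pre>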

* To check the number of cores available, use:
<pre>
cat /proc/cpuinfo | grep processor | wc -l
</pre>
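On systems with GNU coreutils, <tt>nproc</tt> prints the same count directly:
<pre>
nproc
</pre>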