LAMMPS

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. The current version of LAMMPS is written in C++.

Licensing Terms and Conditions

LAMMPS is an open-source code, available free of charge and distributed under the terms of the GNU General Public License Version 2 (GPLv2). This means you can use or modify the code however you wish for your own purposes, but you must adhere to certain rules when you redistribute it, specifically in binary form, or distribute software that is derived from it or includes parts of it.

LAMMPS comes with no warranty of any kind.

As stated in the header of each source file, the code is copyrighted and thus not in the public domain. For more information about open-source software and open-source distribution, see www.gnu.org or www.opensource.org. The legal text of the GPL as it applies to LAMMPS is in the LICENSE file included in the LAMMPS distribution.

Here is a more specific summary of what the GPL means for LAMMPS users:

(1) Anyone is free to use, copy, modify, or extend LAMMPS in any way they choose, including for commercial purposes.

(2) If you distribute a modified version of LAMMPS, it must remain open source, meaning you must distribute all of it under the terms of the GPLv2. You should clearly mark such a modified code as a derivative of LAMMPS, which is best done by changing its name (for example, LIGGGHTS is a modified and extended version of LAMMPS).

(3) If you release any code that includes or uses LAMMPS source code, it must also be open source, meaning you must distribute it under the terms of the GPLv2. You may write code that interfaces LAMMPS to a library under a different license; in that case the interface code must be licensed under the GPLv2, but the library itself need not be, unless you distribute binaries that require the library to run.

(4) If you give LAMMPS files to someone else, the GPLv2 LICENSE file and source file headers (including the copyright and GPLv2 notices) should remain part of the code.

How to run on Merlin7

CPU nodes

module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp
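
After loading the modules, a quick sanity check is to run lmp -h, which prints the available command-line options and the packages compiled into the build (lmp is the LAMMPS executable provided by these modules). The same check works for the A100 and GH200 builds below.

lmp -h | head -n 40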

A100 nodes

module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp

GH nodes

module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp

SBATCH CPU, 4 MPI ranks, 16 OMP threads

#!/bin/bash
#SBATCH --time=00:10:00      # maximum execution time of 10 minutes
#SBATCH --nodes=1            # requesting 1 compute node
#SBATCH --ntasks=4           # 4 MPI ranks (tasks)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16   # CPU cores (OpenMP threads) per MPI rank
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt

unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp

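# libfabric CXI provider (HPE Slingshot): use software tag matching to avoid
# exhausting hardware matching resources on larger jobs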
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

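# -k on t N: enable Kokkos with N OpenMP threads per rank; -sf kk: switch styles to their Kokkos (/kk) variants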
srun --cpu-bind=cores lmp -k on t $OMP_NUM_THREADS -sf kk -in lj_kokkos.in
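
All three job scripts read the same input deck, lj_kokkos.in, whose contents are not shown on this page. As a minimal sketch only (this is the standard 3d Lennard-Jones melt benchmark, not necessarily the site-provided file), a suitable input could be created like this:

cat > lj_kokkos.in <<'EOF'
# 3d Lennard-Jones melt; all styles used here have Kokkos (/kk) variants
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 20 0 20 0 20
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287 loop geom
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no
fix             1 all nve
thermo          100
run             1000
EOF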

SBATCH A100, 4 GPUs, 4 MPI ranks

#!/bin/bash
#SBATCH --time=00:10:00      # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1             # requesting 1 A100 node
#SBATCH --ntasks-per-node=4   # 4 MPI ranks per node, one GPU each
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus-per-task=1

unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp

export FI_CXI_RX_MATCH_MODE=software

srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
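
Assuming the script above is saved as run_lammps_a100.sh (a hypothetical filename), submission and monitoring look like this; because the job runs on the gmerlin7 cluster, squeue and sacct also need the cluster flag:

sbatch run_lammps_a100.sh
squeue --clusters=gmerlin7 --me
sacct --clusters=gmerlin7 -j <jobid>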

SBATCH GH, 2 GPUs, 2 MPI ranks

#!/bin/bash
#SBATCH --time=00:10:00      # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1             # number of GH200 nodes, each with 4 CPU+GPU superchips
#SBATCH --ntasks-per-node=2   # 2 MPI ranks per node, one GPU each
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus-per-task=1

unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp

export FI_CXI_RX_MATCH_MODE=software

srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
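
Before committing to longer runs, it can be worth verifying that each rank sees a GPU. A quick check, using the same cluster and partition as above (nvidia-smi -L lists the devices visible to each task):

srun --clusters=gmerlin7 --partition=gh-hourly --time=00:05:00 \
     --ntasks=2 --gpus-per-task=1 nvidia-smi -L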