6. Tutorial (C++)
This section explains how to download and compile inq_template.
INQ is a C++ library: to run a calculation you write a C++ source file that calls INQ functions. This might be cumbersome at the beginning, but it gives the user a lot of power to make the code do whatever they want or need.
To simplify the process of compiling INQ files we have created a project called inq_template. It is a simple CMake project that automatically downloads Inq and many of its dependencies and compiles input files, so you don’t need to worry about the details of calling the compiler with all the correct flags.
We will start by downloading inq_template using git with the command:
git clone https://gitlab.com/npneq/inq_template.git
(Alternatively instead of using git, you can download a tarball from https://gitlab.com/npneq/inq_template/-/archive/main/inq_template-main.tar.gz .)
Once downloaded you will have a directory called inq_template/; enter it.
We first need to create a directory where we will store the compiled files; call it build/ and enter it. Now we need to configure inq_template using CMake:
cmake --fresh ../ --install-prefix=`pwd`/../install
CMake is a tool that configures the code and creates all the build files. It has several options related to INQ; in particular, GPU compilation needs some extra parameters.
Some supercomputers (e.g. Perlmutter, Polaris, Sierra, Frontier, …) might need more specific configuration instructions.
Once the configuration is done you can compile and install with the commands
make -j8
make install
Note that the make install part is very important. INQ uses several files that have to be installed in the right location before running.
We can test the compilation by running the nitrogen example included with inq_template:
cd runs
./nitrogen
If you are working on a cluster or supercomputer you might need some machine-specific command to execute the program.
With this done we are ready to go into the next, and more interesting, parts of the tutorial.
6.1. Nitrogen molecule
 1  /* -*- indent-tabs-mode: t -*- */
 2
 3  // Copyright (C) 2019-2023 Lawrence Livermore National Security, LLC.,
 4  // Xavier Andrade, Alfredo A. Correa
 5  //
 6  // This Source Code Form is subject to the terms of the Mozilla Public
 7  // License, v. 2.0. If a copy of the MPL was not distributed with this
 8  // file, You can obtain one at https://mozilla.org/MPL/2.0/.
 9
10  #include <inq/inq.hpp>
11
12  int main(int argc, char ** argv){
13
14  	using namespace inq;
15  	using namespace inq::input;
16  	using namespace inq::magnitude;
17
18  	inq::environment env(argc, argv);
19  	auto comm = mpi3::environment::get_world_instance();
20
21  	auto distance = 1.06_angstrom;
22
23  	std::vector<atom> geo(2);
24  	geo[0] = "N" | coord(0.0, 0.0, -distance/2);
25  	geo[1] = "N" | coord(0.0, 0.0,  distance/2);
26
27  	systems::ions ions(systems::cell::cubic(10.0_bohr) | cell::finite(), geo);
28  	systems::electrons electrons(comm, ions, basis::cutoff_energy(80.0_Ry));
29
30  	ground_state::initialize(ions, electrons);
31
32  	auto result = ground_state::calculate(ions, electrons, interaction::pbe());
33
34  	std::cout << "N2 energy = " << result.energy.total() << " Hartree\n";
35
36  }
We will start by using, modifying and expanding the nitrogen molecule example included with inq_template.
With the text editor of your choice (emacs, vim, or gedit are good options) go to the runs/ directory in the inq_template source and open the nitrogen.cpp file. Be mindful that there are two runs/ directories: inq_template/runs/ contains the sources, while inq_template/build/runs/ contains the compiled executables. Don't confuse them.
Let’s analyze the nitrogen.cpp file by sections to understand what it does.
6.1.1. Header
The first lines of the file, starting with //, are comments. The first line instructs the editor to indent the code using tabs (just a convention we use in INQ). The next few lines are the copyright and license. The files of inq_template are licensed under the BSD 0-clause license, which imposes no restrictions on how you use the code. This is on purpose, so you can use any part of inq_template in your project without worrying about license infringement.
Note, however, that INQ itself is licensed under the [Mozilla Public License 2.0](https://www.mozilla.org/en-US/MPL/2.0/), which means there are some [rules](https://www.mozilla.org/en-US/MPL/2.0/FAQ/) on how you can use INQ code. In particular, any code, commercial or free, should be able to link against INQ without license issues.
The first line of actual code is line 10, which loads the INQ interface into the file so the compiler knows how to call INQ. (This is equivalent to Python's import statement.)
C++ implements the concept of _namespaces_, which allows a set of functions and types to be encapsulated under a certain name to avoid name clashes. All INQ objects live in the inq namespace, so line 14 tells the compiler that we will refer to INQ objects directly, without having to prepend inq:: to each one of them.
Line 16 allows the use of literal quantities with explicit units. These are not strictly necessary but make the code easier to type and understand.
Line 18 creates an INQ environment variable env that takes care of the initialization of MPI and other components that might need it.
Note that in the previous code we rely on a very useful feature of C++: the auto keyword. C++ requires that all variables be declared with an explicit type, but in some cases the compiler can figure out the type by itself, for example when the value is returned by a function. We use auto extensively in INQ because it makes coding easier and more flexible.
6.1.2. Electronic system
As with most electronic structure codes, the main input for the program is the set of atoms that constitute the system and their positions. Since this is a diatomic molecule, the main parameter is the interatomic distance, which we set to 1.09 angstrom and store in a variable called distance on line 21.
Note that in the INQ interface all dimensional variables must have explicit units; there are no default units. Check the [units supported by INQ](https://gitlab.com/npneq/inq/-/wikis/Units-supported-by-inq) to see if we have your favorites (sorry, no nanoinches or Fahrenheit).
Now we have to define the actual ions by creating an ions variable. As you might have guessed, this is the INQ object that describes the ionic degrees of freedom of our system.
auto ions = systems::ions(systems::cell::orthorhombic(10.0_b, 10.0_b, 12.0_b));
To create the ions we need to pass as an argument the cell where those atoms will be contained. In this case we use an orthorhombic (parallelepipedic) cell with dimensions 10x10x12 bohr. (You can also use the functions cubic and lattices to generate a cubic or non-orthogonal cell respectively.)
Next we insert the two nitrogen atoms at positions (0, 0,-distance/2) and (0, 0,distance/2) with
ions.insert("N", {0.0_b, 0.0_b, -distance/2});
ions.insert("N", {0.0_b, 0.0_b, distance/2});
Note that zero is given as 0.0_b, or zero bohr. Unfortunately we need to specify units even for zero (due to C++ limitations).
And once we have our system of ions, we should focus on the most important particle of the standard model: the electron. To create the electrons object we need the ions (so we know how many electrons there are) and the plane-wave cutoff energy, which determines the basis in which we are going to describe the electrons. This is the code:
auto electrons = systems::electrons(env.par(), ions, options::electrons{}.cutoff(30.0_Ha));
We also pass the object env.par(); this object contains the information about the parallelization (essentially a communicator) so that the electrons are distributed in parallel.
6.1.3. Single point energy calculation
Now that we have defined our electronic system, we want to calculate its energy by calculating the electronic ground state.
We start by obtaining an initial guess for the orbitals and the density with
ground_state::initial_guess(ions, electrons);
This function gets an initial approximation for the density from atomic orbitals and generates random wave-functions for the electrons.
Now that we have an initial guess, we call the function ground_state::calculate to run the self-consistency procedure and obtain the ground state:
auto result = ground_state::calculate(ions, electrons, options::theory{}.pbe());
The first two arguments are the ions and the electrons. The third argument is the theory we use for the electron-electron interaction, in this case density functional theory (DFT) with the PBE functional. There are other possibilities, like non_interacting() or hartree_fock(), for example. An optional fourth argument takes the options for the self-consistency cycle (SCF) used to calculate the ground state, such as the eigensolver or the mixing parameter; we do not pass it here, so the defaults are used.
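For example, to pass SCF options explicitly you could write something like the following (a sketch; it uses the mixing option that also appears later in this tutorial):

auto result = ground_state::calculate(ions, electrons, options::theory{}.pbe(), options::ground_state{}.mixing(0.1));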
The ground_state::calculate function returns a value that we can use to get the properties of the ground state. So we have to “capture” it into a variable.
Finally, once we have the result of the calculation, we can do something with it. For example, we print the energy to the screen with
if(env.par().comm().root()) std::cout << "The total energy is " << result.energy.total() << " Hartree " << std::endl;
The if in front only makes sure that a single node prints the result when running in parallel.
Now that we understand the program, let’s run it once more. It will take a bit of time to run, print a lot of stuff to the terminal and finally print the value of the energy using the line of code above. You should get something around -20.7743.
Note
It is very likely that when you write a program you will type things the compiler won't understand, so your code won't compile. We suggest you recompile your code periodically as you add more instructions; that way you can easily locate where the errors are, as compiler error messages are not always easy to understand. Also, always read compiler error messages from the top, and fix the first error first.
6.1.4. Potential energy surface
We calculated the energy for one interatomic distance, however this is not very useful since we just used an arbitrary value for that distance. One possibility is to calculate the energy for a range of distances so we obtain the potential energy (1D) surface for the nitrogen molecule.
With a standalone electronic structure package, generating a potential energy surface would most likely require a separate run for each distance, generating the input files by hand or using a scripting language to generate them for you. Since we are already working in C++, it is straightforward to do it with INQ.
Our code for this section is based on the code provided with inq_template that we discussed before. Since we are going to do multiple single-point energy calculations, the first change we make to the code is to define a new function called nitrogen_energy. This function will receive a distance as an argument and return the energy corresponding to that distance.
First we declare the function, telling the compiler its name and that it takes two arguments: the environment object (that we called env) and the distance for which we want to calculate the energy:
template <class EnvType, class DistanceType>
auto nitrogen_energy(EnvType const & env, DistanceType const & distance){
}
We use template so we do not need to worry about the specific types of the arguments; the compiler will deduce them automatically from the arguments you pass.
Note that you should add this function before the main function we defined above (C++ requires that functions be declared or defined before being used).
Next, we move the code that we wrote to do the calculation, which up to now was inside main, into the new function. There are some modifications we have to make, however. First, make sure that the line
auto env = input::environment(argc, argv);
stays inside main, since this code should only be called once. Second, earlier we declared the distance variable; you have to remove that line, because the distance is now passed as an argument to the function and is no longer fixed.
Finally, we need to return the value of the energy that we calculated so it can be seen by the code that calls our function; this is done with the return statement. So at the end of the function we add return result.energy.total();.
Before, we were printing the value of the energy to the terminal. You can remove the line that did that or, if you prefer, modify it so that it also prints the distance on the same line.
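Putting these pieces together, the new function could look something like this (a sketch assembled from the snippets above; the cell size and cutoff are the same ones we used before):

template <class EnvType, class DistanceType>
auto nitrogen_energy(EnvType const & env, DistanceType const & distance){
	auto ions = systems::ions(systems::cell::orthorhombic(10.0_b, 10.0_b, 12.0_b));
	ions.insert("N", {0.0_b, 0.0_b, -distance/2});
	ions.insert("N", {0.0_b, 0.0_b,  distance/2});

	auto electrons = systems::electrons(env.par(), ions, options::electrons{}.cutoff(30.0_Ha));

	ground_state::initial_guess(ions, electrons);
	auto result = ground_state::calculate(ions, electrons, options::theory{}.pbe());

	return result.energy.total();
}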
Okay, so we have defined our new and shiny nitrogen_energy function. As you might have already realized, this function is the potential energy surface of nitrogen.
We can do many things with this function. For example, we are going to print it to a file so we can plot it.
To access the file input/output capabilities of C++ we need to load the proper include, in this case fstream. This is done with
#include <fstream>
at the beginning of the file, where the other includes are.
Now we can use the ofstream object that opens a file for writing. As argument we have to pass the name of the file we want to open. For example:
std::ofstream ofs{"nitrogen_e_vs_d.dat"};
This you have to do in the main function.
Now that we have the file opened, let’s iterate over some distances and store the resulting energy to the file. This is the code:
auto d_0 = 0.8_A;
auto del = 0.025_A;
auto n_max = 20;
for(auto n = 0; n != n_max; ++n){
	auto const distance = d_0 + n*del;
	ofs << distance/1.0_A << '\t' << nitrogen_energy(env, distance) << '\n';
}
We first define the values that determine how we are going to scan the distances. We use 20 values starting from 0.8 angstrom with uniform increments of 0.025 angstrom. Note that this is the simplest approach, but we could use any distances we want; we could even use some adaptive scheme that uses the previous results to refine the energy surface near the minimum, for example.
Then we do the loop, iterating over the distances. For each iteration of the loop we print the distance and the energy of the nitrogen molecule for that distance. And that’s it.
Now compile the code and run it. This time we are doing 20 calculations so it will take longer than before.
Once the calculation is done, you should have the file nitrogen_e_vs_d.dat in your directory. Have a look at it; you should see two columns of data. Now use your favorite plotting program to plot the file and look at the curve. Does it look reasonable? Is it what you expected?
SPOILER ALERT, it should look something like this (hopefully prettier, unless you used excel):
-20.2 +-------------------------------------------------------------------+
| + + + + + + + + + |
| "nitrogen_e_vs_d.dat" ******* |
-20.3 |*+ +-|
| * |
| * |
| * |
-20.4 |-+ * +-|
| ** |
| * |
-20.5 |-+ ** +-|
| * |
| ** |
-20.6 |-+ ** +-|
| ** |
| ** |
| *** |
-20.7 |-+ **** +-|
| *** ********** |
| + + + + *************************** + |
-20.8 +-------------------------------------------------------------------+
0.8 0.85 0.9 0.95 1 1.05 1.1 1.15 1.2 1.25 1.3
6.1.5. Finding the minimum energy (advanced)
As we mentioned before, we don't need to do a simple uniform sampling of the potential energy surface. For example, we can use the standalone function nitrogen_energy in an optimization algorithm to find the equilibrium interatomic distance. While a specialized algorithm will not give information about the whole potential energy surface, it can be more efficient if we are only interested in the minimum-energy point.
We can take advantage of the programmability of the inq library to do that. Since this is a one-dimensional problem we can bracket a (local) minimum with three points and work our way inwards to find the exact location of the minimum.
Among one-dimensional minimization methods, [golden-section search](https://en.wikipedia.org/wiki/Golden-section_search) is one of the most robust, and it is guaranteed to converge to a local minimum (at a linear rate).
Suppose we already have a function golden_search(Function f, double a, double mid, double b) that iteratively finds a local minimum of the function f, provided the interval [a, b] and an internal point mid bracket a minimum (f(a) > f(mid) and f(mid) < f(b)). Such a function can be programmed easily following well-known implementations, for example [here](https://en.wikipedia.org/wiki/Golden-section_search#Termination_condition) or in the Numerical Recipes book.
This would be an example usage of such function:
#include <iostream>
#include <iomanip>
#include <cmath>
#include <utility>

using namespace std;

template<class F>
auto golden_search(F f, double a, double m, double b, double tol = 1e-2){
	std::pair<double, double> fxmin; // return variable: {f(xmin), xmin}

	// golden ratios
	const double R = 0.61803399;
	const double C = 1. - R;

	// place the two interior points inside the larger of the two sub-intervals
	double x1, x2;
	if(b - m > m - a){ x1 = m; x2 = m + C*(b - m); }
	else             { x2 = m; x1 = m - C*(m - a); }
	double f1 = f(x1);
	double f2 = f(x2);

	while(b - a > tol*(fabs(x1) + fabs(x2))){
		fxmin = (f1 < f2) ? std::make_pair(f1, x1) : std::make_pair(f2, x2);
		cout << std::setprecision(15);
		cout << fxmin.second << " " << fxmin.first << ", current bracket = [" << a << ", " << b << "]\n";
		if(f2 < f1){
			a = x1;
			x1 = x2;
			x2 = R*x2 + C*b;
			f1 = f2;
			f2 = f(x2);
		} else {
			b = x2;
			x2 = x1;
			x1 = R*x1 + C*a;
			f2 = f1;
			f1 = f(x1);
		}
	}
	return fxmin;
}

double parabola(double x){
	return (x - 2.)*(x - 2.);
}

int main(){
	auto fx = golden_search(parabola, -1., 3., 6., 1e-2);
	auto x = fx.second;
	auto f = fx.first;
	std::cout << "min found at x = " << x << " with value f = " << f << '\n';
}
Note that the function returns a pair of values, representing the minimum value achieved and the corresponding abscissa (in that order), and that the first input parameter is itself a function (C++ allows passing a function to a function). This program should output converged values near 2 and 0, which is of course the minimum of the parabola \((x - 2)^2\).
The interesting fact is that golden_search is a function that can be written only once and used in different contexts, for example, to find the minimum of energy with respect to atomic distance in the nitrogen molecule. golden_search doesn’t need to know any particular detail about DFT or about atoms.
Can you guess a minimal change to the main function above to determine the minimum for our problem of the nitrogen molecule instead of the toy function?
Each iteration inside golden_search uses the previous calculation to further narrow the bracket around the minimum. Since the algorithm converges quickly, after only about 8 calculations the minimum (distance) can be bracketed to several digits of precision.
This precision would need a very fine uniform grid and many more calculations to determine the minimum without resorting to a model (e.g. quadratic or Morse expansion).
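To make this concrete, one possible way to connect golden_search with our problem is sketched below. It assumes the nitrogen_energy function defined earlier, that a distance in angstrom can be built as d*1.0_A, and that the bracketing points (guessed from the curve we plotted) actually bracket the minimum; adjust them if they don't:

// inside main, after creating env
auto fx = golden_search([&](double d){ return nitrogen_energy(env, d*1.0_A); }, 0.9, 1.1, 1.3);
if(env.par().comm().root()) std::cout << "minimum at d = " << fx.second << " A with energy " << fx.first << " Ha\n";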
6.2. Hydrogen fluoride molecule
We are going to continue with another diatomic molecule. This case is a tiny bit more interesting because the molecule is heteronuclear.
Let's calculate the ground state of the HF molecule. Your program should look very similar to the [nitrogen one](https://gitlab.com/npneq/inq/-/wikis/Tutorial:-nitrogen-molecule), except for the type of atoms, of course, and the distance, which we can set to the experimental value of 1.795 atomic units.
Also, for this case we are going to use an \(8\times8\times8\) bohr cell with finite boundary conditions. This can be defined as:
systems::cell::cubic(8.0_b).finite();
(this is what is passed as an argument to the ions).
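For example, the system definition could look something like this (a sketch; splitting the 1.795 bohr bond symmetrically around the origin is just one reasonable choice):

auto distance = 1.795_b;
auto ions = systems::ions(systems::cell::cubic(8.0_b).finite());
ions.insert("H", {0.0_b, 0.0_b, -distance/2});
ions.insert("F", {0.0_b, 0.0_b,  distance/2});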
For this file we are going to need a new source file, and compile it. This is easy to do with inq_template by modifying the CMake compilation instructions. Edit the file inq_template/runs/CMakeLists.txt, you should see a line that reads:
set(SOURCE_FILE_LIST nitrogen.cpp)
That’s a list (of size one) of the source files that will be compiled. To add your new source file, just add it separated by spaces to the list, like this:
set(SOURCE_FILE_LIST nitrogen.cpp my_shiny_new_file.cpp)
Save and close the file. Now, when you run make, the new file will be compiled into a binary with the same name minus the .cpp extension.
Now you can compile and (after fixing the compilation errors) run your example.
6.2.1. Partial charges
Now comes the interesting part: we are going to work through an example of how to calculate our own observable in INQ. This might be difficult in other codes, since you cannot do it from the input file. But now you can.
HF has a polar bond, so we are going to calculate the charge state of the atoms in the molecule using a very simple approach: any electronic density on the side of the cell where the H atom sits is assigned to its charge, and likewise for F on the other side.
So, in your code, after the ground-state calculation add the following:
auto density = electrons.density();
auto basis = density.basis();
auto h_charge = 0.0;
auto f_charge = 0.0;
for(int ix = 0; ix < basis.sizes()[0]; ix++){
	for(int iy = 0; iy < basis.sizes()[1]; iy++){
		for(int iz = 0; iz < basis.sizes()[2]; iz++){
			auto rr = basis.point_op().rvector_cartesian(ix, iy, iz);
			if(rr[2] < 0.0) h_charge += density.cubic()[ix][iy][iz];
			if(rr[2] >= 0.0) f_charge += density.cubic()[ix][iy][iz];
		}
	}
}
h_charge *= basis.volume_element();
f_charge *= basis.volume_element();
std::cout << "H charge = " << h_charge - 1.0 << '\n';
std::cout << "F charge = " << f_charge - 7.0 << '\n';
Let's analyze this code. First, we need to access the density. This is obtained from electrons using a member function called density(). The type of electrons.density() is a field, a type in INQ that represents a mathematical field (a function defined over three-dimensional space). It contains a basis and an array of coefficients in that basis. In this case the basis is a real-space grid, but it can also be a Fourier-space basis.
Note that we assign the value of density() to a local variable. This is done because the actual density stored in electrons is the spin density, which can have multiple components in spin-polarized calculations. By storing the total density in a local variable we avoid recalculating it every time.
The first thing we do with the field is to make a copy of the basis object in a variable called basis. This is not strictly necessary, but we do it to make our code simpler and easier to read.
Then we initialize to zero the variables h_charge and f_charge where we will hold each charge.
Next we iterate over the three dimensions of the grid using three loops. For each point we call the function rvector_cartesian, a member of the basis, to obtain the vector rr with the real-space position of that point. You should always use this function, since in general the position of a point is not simply its grid indices multiplied by the spacing.
To get the z coordinate of the point we simply access component 2 of rr (array indices start from 0 in C++). Based on the value of the \(z\) coordinate we add the density to the charge of one atom or the other. To access the density we use the cubic member function of the field type, which returns an array representation of the field.
After the loop, since we are doing an integral, we multiply by the volume element of the grid to obtain the charge with the proper normalization.
Finally, we print the results subtracting the charge of each ion.
Now run your code (warning: you have to run it in serial for now). What results do you get? Is this what you expected? How does it compare with better methods of obtaining partial charges?
6.2.2. Hartree-Fock
Since we are studying the HF molecule, let's add another HF to the mix: Hartree-Fock.
To do this we only need to switch the approach we use to calculate exchange. Before, we passed theory{}.pbe() to the ground_state::calculate() function, which instructs INQ to do a DFT calculation with the PBE functional. We just need to change it to theory{}.hartree_fock() to do a Hartree-Fock calculation.
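The modified call would then look something like this (a sketch):

auto result = ground_state::calculate(ions, electrons, options::theory{}.hartree_fock());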
How do the partial charges change with HF with respect to PBE? Can you find a simple explanation for that difference? Note that you can use hybrid functionals like pbe0() or b3lyp() too.
6.2.3. Parallelization
The code we wrote so far only works in serial. When running in parallel, INQ assigns different regions of space to different processors, so we won't have access to the full density from a single processor.
Let's modify the code to make it work in parallel. Fortunately this is very simple to do in INQ.
The first change is in the loop. We iterate over the whole real-space grid given by the basis.sizes() function. In parallel, the density object doesn't hold that full range of values, only a partial range. To avoid this problem we simply use basis.local_sizes() instead.
Once we make the previous change, each processor will calculate a part of the integral. To obtain the total value we need to sum the partial contributions from all processors; this parallel operation is known as a _reduction_. This is how we do that reduction in INQ:
h_charge = basis.comm().all_reduce_value(h_charge);
INQ uses a library called B.MPI3 for parallel communication; this is an interface to the standard MPI parallel programming library. The first thing we need for a communication operation is a communicator, an object that allows passing messages between a group of processors. The particular communicator basis.comm() groups all the processors among which INQ distributes real space. We then call the function all_reduce_value, which takes h_charge from all processors, sums it, and returns the sum to all of them. We put the result back into the h_charge variable. Include this code, and the equivalent for f_charge, in your code.
The last parallelization modification we need to do is cosmetic. Make sure that the partial charges are printed only by a single node (as it is done for the total energy in the nitrogen example).
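Putting the parallel changes together, the modified part of the code could look like this (a sketch based on the loop above; the only changes are local_sizes(), the two reductions, and the guarded printing):

for(int ix = 0; ix < basis.local_sizes()[0]; ix++){
	for(int iy = 0; iy < basis.local_sizes()[1]; iy++){
		for(int iz = 0; iz < basis.local_sizes()[2]; iz++){
			auto rr = basis.point_op().rvector_cartesian(ix, iy, iz);
			if(rr[2] < 0.0) h_charge += density.cubic()[ix][iy][iz];
			if(rr[2] >= 0.0) f_charge += density.cubic()[ix][iy][iz];
		}
	}
}

h_charge *= basis.volume_element();
f_charge *= basis.volume_element();

h_charge = basis.comm().all_reduce_value(h_charge);
f_charge = basis.comm().all_reduce_value(f_charge);

if(env.par().comm().root()){
	std::cout << "H charge = " << h_charge - 1.0 << '\n';
	std::cout << "F charge = " << f_charge - 7.0 << '\n';
}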
Now compile your code and run it. You can check that it is correct by running with different numbers of processors and verifying that you get the same result as the serial version.
6.3. Benzene optical absorption spectrum
Now we are going to move to the domain of time-dependent density functional theory (TDDFT) to calculate excited-state properties. We will start with a simple case: the optical absorption of a molecule, in this case benzene.
Create a new source file benzene.cpp for this case (you can start from the previous files you already have).
This time, instead of passing the geometry in the input file we will load it from an XYZ file. Copy the following text to a file called benzene.xyz that should go in the source directory (typically inq_template/runs):
12
Geometry of benzene (in Angstrom)
C 0.000 1.396 0.000
C 1.209 0.698 0.000
C 1.209 -0.698 0.000
C 0.000 -1.396 0.000
C -1.209 -0.698 0.000
C -1.209 0.698 0.000
H 0.000 2.479 0.000
H 2.147 1.240 0.000
H 2.147 -1.240 0.000
H 0.000 -2.479 0.000
H -2.147 -1.240 0.000
H -2.147 1.240 0.000
To load the XYZ into an ions object use the following code:
auto ions = systems::ions::parse("../../runs/benzene.xyz", 10.0_b);
This simply tells inq to generate an ions object from the contents of an XYZ file. Since a standard XYZ does not contain information about the cell, we need to give this information as an extra argument. We could pass a full cell definition as before, but for convenience inq can also take a distance argument. This automatically creates a finite orthorhombic cell where no atom is closer than this distance to the cell boundaries. This is practical for molecular simulation where you need to converge the size of the cell.
The electrons can be generated as usual from the ions. This time, however, instead of specifying the energy cutoff we will use the real-space grid spacing as:
auto electrons = systems::electrons(env.par(), ions, options::electrons{}.spacing(0.43_b));
The first step in a TDDFT calculation is to compute the ground state, which sets the initial condition for our real-time simulation. Since we don't want to repeat this calculation every time we run a real-time simulation, we will save the contents of the electrons object. The electrons object has a set of functions to do this: save, load and try_load, which take a directory name as argument.
The first thing we do is try to load the electrons from a previous calculation using try_load. If nothing is found, we run the ground-state calculation and store the electrons for future runs using the save function. The code would look something like this:
std::string const restart_dir = "benzene_restart";
if(not electrons.try_load(restart_dir)){
	// insert ground-state calculation here
	electrons.save(restart_dir);
}
Note that we used the try_load function. This function returns true if loading the electrons was successful and false otherwise. There is also the load function that will fail (throw a C++ exception) if the electrons can’t be loaded.
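For reference, the filled-in block could look something like this (a sketch, using the same PBE theory as the rest of the tutorial):

std::string const restart_dir = "benzene_restart";
if(not electrons.try_load(restart_dir)){
	ground_state::initial_guess(ions, electrons);
	ground_state::calculate(ions, electrons, options::theory{}.pbe());
	electrons.save(restart_dir);
}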
6.3.1. First time propagation, finding the optimal time-step
Now that we have a ground state we can do a real-time calculation. This is done by the real_time::propagate function, which we will call like this to start:
real_time::propagate(ions, electrons, [](auto){}, options::theory{}.pbe(), options::real_time{}.num_steps(100).dt(0.06_atomictime), ions::propagator::fixed{});
The first two arguments are the ions and electrons, as you might have expected. The next argument is a function object that takes care of the output; more on this later, for the moment we pass a C++ lambda that does nothing. The next argument is the theory level we will use, in this case PBE (note that this should match the ground state). The following argument is an options object that specifies the simulation parameters, in this case the number of steps, 100, and the time step for the real-time integration of the equations. The last argument is the propagator for the ions; in this case we just keep them fixed.
The time step is a crucial quantity in a real-time propagation. If the time step is too large the simulation is unstable; if it is too small the calculation is slower than necessary. The optimal time step depends on the energy cutoff/spacing used for the simulation, and possibly on other parameters, so we should optimize it for each case.
So our first task will be to find the optimal time step for this case. After you add the propagation line above to your file, compile it and run it. If this is the first time you run it, the calculation will do the ground state first and then start the time propagation, which looks something like this:
[2023-06-29 11:59:37.422] [electrons:X9Dt3Q] [trace] initializing real-time propagation
[2023-06-29 11:59:38.291] [electrons:X9Dt3Q] [trace] starting real-time propagation
[2023-06-29 11:59:38.291] [electrons:X9Dt3Q] [info] step 0 : t = 0.000 e = -39.547750046526
[2023-06-29 11:59:44.393] [electrons:X9Dt3Q] [info] step 1 : t = 0.070 e = -39.547748358955 wtime = 6.102
[2023-06-29 11:59:50.494] [electrons:X9Dt3Q] [info] step 2 : t = 0.140 e = -39.547744602062 wtime = 6.101
[2023-06-29 11:59:56.653] [electrons:X9Dt3Q] [info] step 3 : t = 0.210 e = -39.547740102704 wtime = 6.158
[2023-06-29 12:00:02.766] [electrons:X9Dt3Q] [info] step 4 : t = 0.280 e = -39.547736828841 wtime = 6.113
[2023-06-29 12:00:08.904] [electrons:X9Dt3Q] [info] step 5 : t = 0.350 e = -39.547731719913 wtime = 6.138
[2023-06-29 12:00:15.007] [electrons:X9Dt3Q] [info] step 6 : t = 0.420 e = -39.547725301627 wtime = 6.103
[2023-06-29 12:00:21.133] [electrons:X9Dt3Q] [info] step 7 : t = 0.490 e = -39.547719146148 wtime = 6.126
...
We are interested in the energy values that start with e = …. Since our system does not have any (time-dependent) perturbation yet, the energy should be conserved. But in this case we have deliberately chosen a time step that is too large, which makes the energy increase rapidly because the simulation is unstable. Let it run for some steps to convince yourself that this is the case.
Now modify the time step in your input and see what happens with the energy values. Repeat this until you find the largest value of the time step that gives you a stable energy. Note that we are looking for stability rather than exact energy conservation; with a proper time step the energy will oscillate around a value instead of exploding.
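For example, to try a smaller time step you only need to change the dt option in the call above (a sketch; 0.03 atomic units is just one value you might try):

real_time::propagate(ions, electrons, [](auto){}, options::theory{}.pbe(), options::real_time{}.num_steps(100).dt(0.03_atomictime), ions::propagator::fixed{});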
6.3.2. Adding a perturbation and observables
Now that you have found the optimal time step, let's do something more physically interesting. Up to now our propagation doesn't have any perturbation, so the system stays in the ground state, only changing the phase of the orbitals. We also don't observe any quantities during the propagation. Let's change that.
INQ defines the concept of a perturbation, a code object that is applied during the propagation. Note that a perturbation does not necessarily have to be small, real-time TDDFT can give the response to strongly driven systems as well.
We will start by defining a kick, an electric-field perturbation of the form \(\vec{E}(\vec{r}, t) = \vec{\kappa}\,\delta(t)\). The advantage of a kick is that it is flat in Fourier space, so it excites all frequencies equally, making it useful for linear-response calculations.
In INQ we define a kick as:
auto kick = perturbations::kick(ions.cell(), {0.01, 0.0, 0.0});
where the two arguments are the cell and a vector defining the amplitude and direction of the electric field.
The other object we are going to define is a function that takes care of the output of the dipole and the calculation of the absorption spectrum. This code is a bit more sophisticated:
auto const timestep = 0.02_atomictime;
auto const nsteps = 0.1_fs/timestep;

gpu::array<double, 1> time(nsteps);
gpu::array<double, 1> dip(nsteps);

auto output = [&](auto data){
	auto iter = data.iter();

	time[iter] = data.time();
	dip[iter] = data.dipole()[0];

	if(data.root() and data.every(100)){
		auto spectrum = observables::spectrum(20.0_eV, 0.01_eV, time({0, iter - 1}), dip({0, iter - 1}));

		std::ofstream file("spectrum.dat");
		for(int ifreq = 0; ifreq < spectrum.size(); ifreq++){
			file << ifreq*0.01 << '\t' << real(spectrum[ifreq]) << '\t' << imag(spectrum[ifreq]) << std::endl;
		}
	}
};
Let's analyze this. The first thing we do is create constants for the time step and the number of time steps (defined through a total propagation time); we will have to pass these to the time propagation. Then we define the arrays where we will store the propagation data.
The next part can be a bit tricky for C++ beginners. We create a lambda function called output that will be called at each time step by the propagation code. This function has access to many observables during the propagation through its argument data. In this case we obtain the iteration number data.iter(), the current time data.time(), and the \(x\) component of the dipole, data.dipole().
The rest of the function calculates the Fourier transform of the dipole. Since we perturbed our system with an electric field, the Fourier transform of the time-dependent dipole gives the polarizability. In this particular case we calculate \(\alpha_{11}\), the \(xx\) component of the polarizability tensor, since we perturbed the system in the \(x\) direction and we are looking at the \(x\) component of the dipole. To calculate the full tensor we would need three different time propagations, with perturbations in each direction.
The if statement makes sure that, in parallel, only the first node calculates and writes the file, and that we only do it every 100 iterations (data.every(N) is true for all iterations that are multiples of N, except iteration 0, and also for the last iteration). Then we use the observables::spectrum function to calculate the windowed Fourier transform of the dipole time series computed so far. It receives as arguments the maximum energy, the energy spacing, and the arrays containing the time and the function to be transformed. Finally we open a file and write the spectrum to it.
Note that we could calculate the spectrum just once at the end of the simulation; however, by generating it continuously we can monitor the simulation closely and see the effect of the simulation length on the spectrum resolution.
The last thing to do is to include these changes in the call to the propagation functions:
real_time::propagate(ions, electrons, output, options::theory{}.pbe(), options::real_time{}.num_steps(nsteps).dt(timestep), ions::propagator::fixed{}, kick);
We have added output instead of an empty function as the third argument, used the new nsteps and timestep variables as options and added an extra argument with the kick object.
Now we can compile and run again. Since we specified a very short simulation time, 0.1 fs, this should be very quick. If you plot the spectrum.dat file you will see that the spectrum doesn't make much sense: this time is too short, since the energy resolution is of the order of \(\hbar\) divided by the simulation time, in this case \(\hbar/0.1\,\mathrm{fs} \approx 6.6\ \mathrm{eV}\).
So we need to increase the simulation time to something reasonable, like \(5\) or \(10\) fs. Can you estimate how long it will take to run on your machine? (Note that INQ prints the time each iteration takes.)
Since this will be a long simulation, there are things we should consider:
* It will be useful to have the time-dependent dipole stored in a file, so we can plot it or do some other post-processing. Maybe you can try doing it; a sketch follows this list. (Note that our lambda can access all the variables defined outside it in the code.)
* You might want to label each spectrum file with the iteration or time, so you can see how the simulation time affects the spectrum. The [std::to_string](https://en.cppreference.com/w/cpp/string/basic_string/to_string) function might be useful for that.
* The experimentally relevant quantity is the photoabsorption cross section, which is proportional to \(\omega\,\mathrm{Im}\,\alpha(\omega)\), so it would be interesting to plot that. Remember that the response should be properly normalized by the intensity of the perturbation.
* The Thomas-Reiche-Kuhn \(f\)-sum rule states that \(\int_0^\infty \frac{2\omega}{\pi}\,\mathrm{Im}\,\alpha(\omega)\,\mathrm{d}\omega = N\), where \(N\) is the number of (valence) electrons.
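As a hint for the first item, the output lambda can also write the dipole to a file as it goes. A minimal sketch (it assumes a std::ofstream named dipole_file opened in main and captured by the lambda like the other variables):

std::ofstream dipole_file{"dipole.dat"};

auto output = [&](auto data){
	auto iter = data.iter();
	time[iter] = data.time();
	dip[iter] = data.dipole()[0];
	if(data.root()) dipole_file << time[iter] << '\t' << dip[iter] << '\n';
	// ... spectrum calculation as before ...
};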
All of these imply additional calculations we might want to do. Do you think it would be better to do them on-the-fly when you run the calculation, or save the data and write a post-processing utility?
Write your code with the changes you consider interesting and plot the resulting spectrum. Is the sum rule fulfilled?
6.4. Electronic stopping
In this tutorial we will be simulating a particle, a proton in this case, entering a material, Aluminum. As the particle moves through the material, it feels a drag force. At low speed, this force is caused by the interaction with the ions of the material. At high speeds, however, the force is caused by the interaction of the particle with the electrons of the material. This process is known as electronic stopping.
In order to accurately simulate the interaction of the proton with the electrons, we need to use non-adiabatic dynamics where we allow the electrons to get excited. So in this example we will be using TDDFT to model the electrons.
6.4.1. Aluminum supercell
To properly simulate the stopping process in a periodic simulation we need to use a supercell to avoid the interaction of the proton with its periodic replicas.
To generate the supercell we will use the following code:
auto alat = 7.6524459_bohr;
auto reps = vector3<int, contravariant>{2, 2, 2};
std::vector<vector3<double, contravariant>> cell;
cell.emplace_back(0.0, 0.0, 0.0);
cell.emplace_back(0.0, 0.5, 0.5);
cell.emplace_back(0.5, 0.0, 0.5);
cell.emplace_back(0.5, 0.5, 0.0);
systems::ions ions(systems::cell::orthorhombic(reps[0]*alat, reps[1]*alat, reps[2]*alat));
for(int ix = 0; ix < reps[0]; ix++){
	for(int iy = 0; iy < reps[1]; iy++){
		for(int iz = 0; iz < reps[2]; iz++){
			auto base = vector3<double, contravariant>{double(ix), double(iy), double(iz)};
			for(unsigned iatom = 0; iatom < cell.size(); iatom++){
				ions.insert_fractional("Al", (base + cell[iatom])/reps);
			}
		}
	}
}
Let's analyze this code. We first define alat, the lattice parameter of aluminum. Then we define a 3D vector of integers called reps, initialized as {2, 2, 2}. This vector defines the repetitions of the conventional cell in each dimension; by adjusting it we can easily change the size of our supercell without touching the rest of the code. For the tutorial we will keep this 2x2x2 size, but for actual production calculations you will certainly need a larger supercell.
The next part creates a C++ array (of type std::vector) with the positions of the Al atoms in the conventional cell; this is the pattern we will repeat to create the supercell.
We then create the ions object with an orthorhombic cell whose size is determined by the number of repetitions in each dimension. Finally we insert all the ions, looping over each repetition of the cell in each dimension. The key line of code here is:
ions.insert_fractional("Al", (base + cell[iatom])/reps);
In previous tutorials we used the ions.insert function. We now use ions.insert_fractional, which inserts an atom with its position given in reduced (fractional) coordinates. The base 3D vector gives the offset of each repetition of the conventional cell, which we add to the coordinates of each atom. Finally, we divide the relative coordinates by the number of repetitions in each dimension so that the fractional coordinates are properly normalized to the supercell.
6.4.2. Adding a proton
The next step is to add the proton, a hydrogen atom, which is the particle that will be stopped. We need to put it in a position of the cell where it is not too close to any of the Al atoms; later we will also give it a velocity for the molecular dynamics simulation. For now we insert it with the following code:
ions.insert("H", {alat/4.0, alat/4.0, alat/4.0});
Since we will need to access the data for this ion several times, we put it first in the cell so that it has index 0 in the ions object. This means you must insert the code above before the loop that inserts the Al atoms.
6.4.3. Ground state calculation
We are now ready to create the electrons; there are a few parameters we need to add. This is the code:
systems::electrons electrons(env.par(), ions, options::electrons{}.spacing(alat/16).extra_states(2*product(reps)).temperature(1000.0_K));
We first set the grid spacing to the lattice parameter divided by 16. This value might be a bit too coarse for production calculations, but it works for this tutorial.
Since the system is a metal, we add some extra states, 2 per repetition of the conventional cell, and set the electronic temperature to 1000 kelvin. Note that in INQ kelvin is a unit of energy (technically kelvin times the Boltzmann constant).
Now we can do the ground-state calculation. As before, we avoid recalculating the ground state each time by saving it to disk and reloading it. The only new things are that we use a smaller mixing (0.1) than the default (0.3), and that functional here stands for the theory object (for example options::theory{}.pbe()) that you should have defined earlier in your code:
auto restart_dir = "aluminum_restart";
if(not electrons.try_load(restart_dir)){
	ground_state::initial_guess(ions, electrons);
	auto result = ground_state::calculate(ions, electrons, functional, inq::options::ground_state{}.mixing(0.1));
	electrons.save(restart_dir);
}
6.4.4. Time propagation
We are ready now to do the time propagation. The first thing we should do is to give the proton a velocity. We will start with a value of 0.5 atomic units in the z direction:
auto vel = 0.5;
ions.velocities()[0][2] = vel;
Note
INQ doesn’t use defined units for velocity at the moment.
For a stopping simulation we need to simulate a time long enough for the ion to traverse the whole simulation cell once. So we set the number of simulation steps as:
auto const dt = 0.06_atomictime;
int nsteps = (alat.in_atomic_units()*reps[2]/vel)/dt.in_atomic_units();
Take a couple of minutes to understand this last line.
Now we have to set up the output. The quantity we want to calculate is the stopping “power”, which can be understood as the drag force the proton feels as it moves through the material. So we can record the force on the proton in the z direction at each time step to get an idea of the stopping. We will also output the total energy:
gpu::array<double, 1> time(nsteps);
gpu::array<double, 1> force(nsteps);
gpu::array<double, 1> energy(nsteps);
auto output = [&](auto obs){
	auto iter = obs.iter();
	time[iter] = obs.time();
	force[iter] = obs.forces(0)[2];
	energy[iter] = obs.energy().total();
};
Finally we are ready to set up the propagation:
real_time::propagate<>(ions, electrons, output, functional, options::real_time{}.num_steps(nsteps).dt(dt), ions::propagator::impulsive{});
Instead of using regular molecular dynamics for the ions we are going to use impulsive dynamics. This forces the ions to move without changing their velocities. Since electronic stopping is a very fast process we can neglect the loss of speed of the proton, or the movement of the ions of the material.
After the propagation we compute the average force (a measure of the stopping power) and print it:
double average_force = 0.0;
for(auto ii = 0; ii < time.size(); ii++){
	average_force += force[ii];
}
average_force /= time.size();
std::cout << "Average force = " << average_force << std::endl;
6.4.5. Results
We are now ready to compile and run the calculation.
Now go to a plotting tool and have a look at minus the force in the z direction as a function of time. This is the instantaneous stopping power. Why do you think there are oscillations in the force?
Does the number you get look reasonable? What is the sign of the force? Compare it with the plot you just made.
Have a look at the value of total energy during your propagation. You will see that the energy is not conserved. Can you explain why? You can also get the electronic stopping from the slope of this curve.
Now we want to see how the electronic stopping changes with the velocity of the particle. So repeat your calculation for larger velocities; I suggest you do calculations for the values 0.5, 1, 1.5, 2, 3, 4, and 5 a.u. If the calculations run fast on your system, you can add more points. You can also use the fact that for zero velocity the stopping power is zero. Note that you have to adjust the simulation time for each velocity so that in each case the particle travels through the cell once.
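If you wrap the whole stopping calculation in a function, in the same spirit as nitrogen_energy, scanning velocities becomes a simple loop. A sketch (average_stopping_force is a hypothetical wrapper you would write yourself, returning the average force on the proton in the z direction):

for(auto vel : {0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0}){
	auto stopping = -average_stopping_force(env, vel);  // hypothetical wrapper, analogous to nitrogen_energy
	if(env.par().comm().root()) std::cout << vel << '\t' << stopping << '\n';
}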
Plot the stopping power as a function of the velocity. What is the location of the maximum? Can you guess why the curve has that shape?
In order to make things run quickly, the parameters we use in this tutorial are not converged. If you want to make predictions you should properly converge the calculation with respect to the energy cutoff and the supercell size. You might also need to include semicore states in your aluminum pseudopotential, especially for high velocities.
If you want to know more details about electronic stopping calculations, you can check the following [paper](https://doi.org/10.1016/j.commatsci.2018.03.064).