Advanced MC algorithms and free energy integration

MAE 715 – Atomistic Modeling of Materials, N. Zabaras (4/27/2009)

Material for these lecture notes was compiled from the references below:
– MIT's 3.320 course notes
– A.R. Leach, Molecular Modelling: Principles and Applications

Simple sampling

We introduced earlier MC simulation with importance sampling and simple sampling. We will now discuss other practical methods that lie in between these two. Recall that in simple sampling you sample randomly and then weight each state with the correct probability.

Non-Boltzmann sampling

For example, suppose the figure shows the phase space. With importance sampling you are drawn towards the states near the minimum, but perhaps the states in the marked region are the ones you are actually interested in (e.g. optically active states). If you want to collect information on the optical activity of the material, you may want to build a Hamiltonian that drives you towards those states, i.e. bias the walk towards the region of phase space you want to reach. You then need to correct with the proper probability to recover a proper ensemble. That is non-Boltzmann sampling.

[Figure: sketch of the energy landscape; with Boltzmann sampling you sample the region near the minimum (marked in red), while the optically active states lie in a different region.]

You can also use non-Boltzmann sampling to sample phase space more efficiently. If the phase space has many local minima (blue line), you may want to define a new, flatter Hamiltonian (red line). This is especially relevant if the MC simulation has some form of dynamics in it. With the true (blue) Hamiltonian it may be hard to escape from one minimum to the next; if you raise the potential wells (red Hamiltonian), it becomes much easier to get out of these local minima. You are essentially flattening the phase space with the new Hamiltonian, and escaping local minima becomes easier. Finally, you need to correct for this in the probability. There are all kinds of MC and MD schemes built on this idea of lifting up the potential wells and then correcting the relative probability, the relative vibration frequency, or the relative time spent in each potential well.

Non-Metropolis Monte Carlo

Recall that in the Metropolis step of the Markov chain, the rate at which you pick state i from state j as a potential next step is the same as the rate at which you pick state j from state i, i.e. the a-priori (proposal) probabilities are equal:

$W^{0}_{ij} = W^{0}_{ji}$

You can instead make the rates at which you try one state from another non-symmetric and dependent on the Hamiltonian. In non-Metropolis MC we allow unequal a-priori probabilities so that fewer proposed moves are rejected.

Force-bias Monte Carlo

For example, in DFT you calculate the forces on the atoms and then relax them, typically assuming a certain spring constant. Since there is still a random element, this is probably better described as a mixed MD and MC method.
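To make the idea of force-biased, non-symmetric proposals concrete, here is a minimal sketch. It is not the specific DFT-force scheme described above, but a closely related force-biased ("smart Monte Carlo" / Langevin-type) variant on a hypothetical one-dimensional double-well energy; the potential, the inverse temperature beta, and the step size dt are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):            # toy double-well potential (illustrative choice)
    return (x**2 - 1.0)**2

def force(x):             # F = -dE/dx
    return -4.0 * x * (x**2 - 1.0)

def log_proposal(x_new, x_old, beta, dt):
    # Gaussian proposal of variance dt, centred on a force-biased drift.
    # It is NOT symmetric in (x_old, x_new), so it must appear in the acceptance rule.
    mean = x_old + 0.5 * dt * beta * force(x_old)
    return -(x_new - mean)**2 / (2.0 * dt)

def force_bias_step(x, beta, dt=0.05):
    x_new = x + 0.5 * dt * beta * force(x) + np.sqrt(dt) * rng.normal()
    # generalized (non-Metropolis) acceptance: the proposal densities do not cancel
    log_acc = (-beta * (energy(x_new) - energy(x))
               + log_proposal(x, x_new, beta, dt)
               - log_proposal(x_new, x, beta, dt))
    return x_new if np.log(rng.random()) < log_acc else x

x, beta = 1.0, 2.0
samples = []
for _ in range(50000):
    x = force_bias_step(x, beta)
    samples.append(x)
# symmetric double well: the fraction of time in the left well should be roughly 1/2 for a long run
print("fraction of time in left well:", np.mean(np.array(samples) < 0))
```

Because the Gaussian proposal is centred on a force-biased drift, the forward and reverse proposal densities no longer cancel, which is exactly the generalized acceptance rule used in non-Metropolis MC.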
A key reference on non-Metropolis Monte Carlo is the work of Pangali et al., On a novel Monte Carlo scheme for simulating water and aqueous solutions, Chem. Phys. Lett. 55, 413 (1978).

Case study: Surface segregation in Cu-Ni

This case study is extracted from a classic paper on using MC to study surface segregation in Cu-Ni. It uses the EAM method as the potential. The idea is to take a random Cu-Ni alloy, see which element segregates to the surface, and also see the segregation pattern away from the surface. To start, you need to set up a supercell (e.g. 24 or 48 layers, with a certain width).

Below we show a supercell. We seed the system with some concentration of Cu and Ni. Cu-Ni forms fcc solid solutions, and we can define the Hamiltonian in multiple ways. Here we have an EAM Hamiltonian that we can evaluate rapidly: it is a very simple energy function, pair-wise sums whose result is then inserted into an embedding function. Which kinds of MC moves should we allow? We need to segregate some species to the surface. For that, we could do diffusive hops (canonical-like interchanges), or we could interchange the positions of Cu and Ni atoms, either nearby or far apart.

You can look at the effects of both configurational and vibrational entropy. You can start with atoms on a rigid lattice and then add displacements away from the rigid lattice. From the difference of the two results you can also see what the effect of vibrational excitations is on your thermodynamics. With an EAM-based Hamiltonian this is a very fast calculation. The result shown is from 1985.

The figure shows the Cu concentration of the different layers below the surface as a function of the bulk Cu concentration. Clearly, Cu segregates to the surface in this material: even at 20% Cu in the bulk, you have an almost perfect Cu layer at the surface. The 2nd layer, interestingly, is depleted in Cu, i.e. its Cu content is below the bulk concentration. The 3rd layer essentially tracks the bulk concentration.

Why is the 2nd layer depleted in Cu, and why does Cu go to the surface? Cu goes to the surface because it has the lower surface energy; usually the element with the lower surface energy segregates to the surface. Cu and Ni in this system have an ordering interaction near the surface: because the first layer is almost pure Cu, the 2nd layer for chemical reasons wants to be Ni. This is a pure surface effect. Cu and Ni in the bulk are actually phase separating at low temperature, but at the surface, because of strain effects, they develop an ordering interaction. The 2nd layer wants to be Ni because the 1st layer is so Cu-rich. You often see this in compound-forming systems, where you get damped oscillations in the concentration away from the surface.

Equilibration problems in Monte Carlo

We have some driving chemical potential that sets the difference in palladium and vanadium concentration. Below we plot the average spin, which is the concentration of vanadium.
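Before walking through the phase diagram, here is a minimal sketch, assuming a toy 2D Ising-like lattice model rather than the actual alloy Hamiltonian, of how <σ> is sampled at a fixed chemical potential μ and temperature T with single-site identity flips; the lattice size, coupling, and sweep counts are illustrative choices. The walk through the phase diagram continues right after the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_spin(mu, T=1.0, J=1.0, L=12, n_sweeps=800, n_equil=300):
    """Semigrand-canonical MC on an L x L periodic lattice.
    sigma = +1/-1 labels the two species; mu is the field conjugate to sigma.
    H = -J * sum_<ij> sigma_i sigma_j - mu * sum_i sigma_i  (J > 0: phase-separating)."""
    s = rng.choice([-1, 1], size=(L, L))
    beta = 1.0 / T
    sigma_samples = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dH = 2.0 * J * s[i, j] * nb + 2.0 * mu * s[i, j]   # change of H for flipping site (i, j)
            if rng.random() < np.exp(-beta * dH):
                s[i, j] = -s[i, j]
        if sweep >= n_equil:
            sigma_samples.append(s.mean())
    return np.mean(sigma_samples)

# sweep the chemical potential at a temperature below the critical temperature
for mu in np.linspace(-0.6, 0.6, 7):
    print(f"mu = {mu:+.2f}   <sigma> = {mean_spin(mu):+.3f}")
```

Sweeping μ at fixed T with such a loop produces the <σ>–μ curves discussed next; near a transition this simple single-flip scheme equilibrates slowly, which is exactly the equilibration problem described below.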
We work at the temperature shown by the blue horizontal line and start increasing the chemical potential.
– We first go through a solid-solution region (AB), where the concentration goes up.
– When we hit the two-phase region (BC), the chemical potential is constant; at that chemical potential we have a discontinuity in the concentration.

[Figure: phase diagram with the constant-T line and points A, B, C, together with the corresponding <σ> (concentration) versus μ curve.]

Then we enter a single-phase region, where the chemical potential changes rapidly with concentration, so there the curve tends to be fairly flat, or at least to have less slope (line CD). Then we again enter a two-phase region (DE), so we get another step in the <σ>–μ diagram, then a single-phase region again, and so on. Where there are discontinuities in the concentration, we have a 1st-order transition. We could also plot the energy, which is discontinuous at a 1st-order transition. We do not necessarily see these phase transitions in the heat capacity.

[Figure: the same construction extended with points D and E on both the phase diagram and the <σ>–μ curve.]

If we come in from the left end along the constant-T line, we have a solid solution. We go through a two-phase region and form a compound. Often in MC we have nucleation problems: if we have to form a phase or an arrangement that is different from the host in which it forms, it will often not nucleate, and we overshoot (red line in the figure). We actually push the disordered phase far out of equilibrium, and then at some point, out of equilibrium, we shoot into the ordered phase (vertical red line). When we come back, we have hysteresis the other way: we overshoot the ordered phase and disorder late, although disordering is a little easier to do.

[Figure: hysteresis loop in the <σ>–μ plot.]

Computing free energy or entropy

The fundamental difference between free energy and energy is that energy is an average quantity and free energy is not. Energy is the average of something that is defined in the microscopic state: for every microscopic state you go through, you can define an energy, and the internal energy of the system is the average of that quantity. Free energy or entropy (which are essentially the same thing, because if you know the entropy you also know the free energy) is not the average of a quantity defined in the microscopic states. In a microscopic state you have, in essence, the atoms at fixed positions with some velocities; for such a microscopic state you cannot define the entropy. The entropy is a property of the ensemble as a whole, not of a given microscopic state. So the entropy S and the free energy F (F = U − TS) cannot be obtained as averages; they are, actually, integrals. The entropy is a sum over all of phase space:

$S = -k_B \sum_{\nu} P_\nu \ln P_\nu$

Computing free energies and entropy

You can then write the free energy (F = E − TS) as an integrated quantity over all of phase space of this per-state quantity:

$F = \sum_{\nu} P_\nu \left( E_\nu + k_B T \ln P_\nu \right)$

Using the probability $P_\nu = e^{-E_\nu / k_B T} / Q$, you can rewrite this as

$F = \sum_{\nu} P_\nu \left[ -k_B T \ln Q \right] = \left[ -k_B T \ln Q \right] \sum_{\nu} P_\nu = -k_B T \ln Q$

In the first expression you are averaging something, because you have a probability, but the quantity you are averaging is essentially the free energy itself. So it is a quantity that is flat in phase space, and that makes it extremely difficult to sample. This is a physical, not a mathematical, problem.
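To make this distinction concrete, here is a tiny illustrative enumeration (not from the notes) of a hypothetical four-spin Ising ring: the internal energy U is the average of a per-microstate quantity, while F and S require the full partition function, i.e. the distribution as a whole. The coupling and temperature are arbitrary toy values.

```python
import numpy as np
from itertools import product

J, T, kB = 1.0, 2.0, 1.0
beta = 1.0 / (kB * T)

# enumerate all 2^4 microstates of a 4-spin Ising ring
states = list(product([-1, 1], repeat=4))
E = np.array([-J * sum(s[i] * s[(i + 1) % 4] for i in range(4)) for s in states], dtype=float)

w = np.exp(-beta * E)
Q = w.sum()                       # partition function: needs every microstate
P = w / Q

U = (P * E).sum()                 # internal energy: an average of a per-microstate quantity
F = -kB * T * np.log(Q)           # free energy: not an average, needs the full Q
S = -kB * (P * np.log(P)).sum()   # Gibbs entropy: a property of the distribution P

print(f"U = {U:.4f}   F = {F:.4f}   S = {S:.4f}   U - T*S = {U - T*S:.4f} (equals F)")
```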
You can measure energy, but you cannot measure free energy directly, because it is determined by the whole ensemble; you only get free energies indirectly.

Computing F and S

F is not an average. Free energy does not exist in a microstate; it is a property of the distribution function, and the same is true for entropy. You can write it as an average, but this is a little misleading: you can write it as the average of $e^{\beta H_\nu}$. If you can calculate $e^{\beta H_\nu}$ and average it over phase space, you can recover the free energy, since

$\left\langle e^{\beta H_\nu} \right\rangle = \sum_{\nu} P_\nu \, e^{\beta H_\nu} = \sum_{\nu} \frac{e^{-\beta H_\nu}}{Q} \, e^{\beta H_\nu} = \frac{M}{Q}$

where M is the number of microstates, so that $F = -k_B T \ln Q = k_B T \ln \left\langle e^{\beta H_\nu} \right\rangle - k_B T \ln M$.

Free energy calculation: Overlapping distribution

There are three types of methods. We start with the overlapping distribution method, which is slightly less important for solids. The idea is quite simple. Remember that the free energy is $-k_B T \ln Q$, so the difference in free energy between two systems described by Hamiltonians I and II is

$F_{II} - F_{I} = -k_B T \ln \frac{Q_{II}}{Q_{I}} = -k_B T \ln \frac{\sum_{\nu} e^{-\beta H^{II}_\nu}}{Q_{I}}$

If we multiply inside the summation by 1, written as $e^{\beta H^{I}_\nu} e^{-\beta H^{I}_\nu}$, we can collect the terms and obtain

$F_{II} - F_{I} = -k_B T \ln \sum_{\nu} \frac{e^{-\beta H^{I}_\nu}}{Q_{I}} \, e^{-\beta \left( H^{II}_\nu - H^{I}_\nu \right)}$

Note that the exponential is weighted by the probability under Hamiltonian I. So, finally,

$F_{II} - F_{I} = -k_B T \ln \left\langle e^{-\beta \left( H^{II}_\nu - H^{I}_\nu \right)} \right\rangle_{I}$

So, essentially, what we are averaging is $e^{-\beta (H^{II}_\nu - H^{I}_\nu)}$, but we average it in the ensemble of I, and that gives us the free energy difference. The reason this is called the overlapping distribution method is that you are trying to say something about system II while sampling in the ensemble of I. The only way this can ever work is if I and II are not too far apart, so that the states relevant for II are also sampled to some extent while you are in I.

[Figure: using the states sampled in system I to get the free energy difference with II; example of the free energy difference between two different temperatures.]

Consider the energy: we sample energy states around the average with some spread (the spread being set by the heat capacity). Essentially we are integrating quantities for ensemble II just by the way we walk through ensemble I.

Thermodynamic integration

The heat capacity is the fluctuation of the energy:

$C_V = \frac{\langle E^2 \rangle - \langle E \rangle^2}{k_B T^2}$

If you want to know the entropy at a given temperature, you start from some reference temperature T_ref where you know the entropy, or you just fix it to some value if you want to reference everything to the same point. Then you integrate c_V/T. What do you take as reference states? If you start from zero temperature, then in models with discrete degrees of freedom, like the Ising model, the entropy is 0 at 0 K. In some cases you can also find the entropy at infinite temperature, because there the phase space is sampled completely at random (the probability distribution is flat). These are proper reference states.

Integrate from T=0 in Ising model

The example shown here is a simple 2D square-lattice magnetic Ising model. You essentially integrate C_V/T (the area under the curve) from 0 up:

$S(T_2) - S(T_1) = \int_{T_1}^{T_2} \frac{c_V}{T} \, dT$
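A minimal sketch of this procedure, assuming a small 2D Ising model sampled with single-spin-flip Metropolis moves: C_V is estimated from energy fluctuations on a temperature grid and C_V/T is then integrated with the trapezoid rule from a low reference temperature where the entropy is taken as approximately zero. The lattice size, temperature grid, and sweep counts are illustrative, and the statistics are deliberately modest.

```python
import numpy as np

rng = np.random.default_rng(2)

def ising_cv(L, T, J=1.0, n_sweeps=2500, n_equil=500):
    """Single-spin-flip Metropolis on an L x L periodic Ising lattice (k_B = 1).
    Returns the heat capacity per spin, C_V = (<E^2> - <E>^2) / (T^2 * N)."""
    s = np.ones((L, L), dtype=int)          # start from the T = 0 ground state
    beta = 1.0 / T
    energies = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2.0 * J * s[i, j] * nb
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
        if sweep >= n_equil:
            # total energy of the configuration (each bond counted once)
            energies.append(-J * np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))
    E = np.array(energies, dtype=float)
    return E.var() / (T**2 * L * L)

# C_V on a temperature grid, then S(T) by integrating C_V / T upward from T_min,
# taking S(T_min) ~ 0 as the low-temperature reference state for the Ising model
L, temps = 8, np.linspace(0.5, 3.5, 13)
cv = np.array([ising_cv(L, T) for T in temps])
increments = 0.5 * (cv[1:] / temps[1:] + cv[:-1] / temps[:-1]) * np.diff(temps)
S = np.concatenate(([0.0], np.cumsum(increments)))
for T, s_of_T in zip(temps, S):
    print(f"T = {T:.2f}   S per spin ≈ {s_of_T:.3f}")   # tends towards ln 2 ≈ 0.693 as T grows
```

Starting the grid at T = 0.5 rather than exactly at T = 0 sidesteps the numerical problem with C_V/T near T = 0 discussed next.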
In reality, the integral of C_V/T converges as T goes to 0, but numerically it never does. As T goes to 0, the measured heat capacity will not be exactly zero because of minor noise and fluctuations, so you are dividing by a number that gets exceedingly small, and your integral will not converge numerically. To avoid this problem, start integrating from a small value of T, say 0.5 or 0.75, and assume that you have no heat capacity and no entropy below that.

If you want to be more accurate, you can use low-temperature expansions for c_V. Essentially, in this low-T regime all that ever happens is single spin flips: the ferromagnet sits there and once in a while one spin flips. For these single excitations you can write down what the partition function looks like, namely the ground state plus just the first excited states. You can thus write out analytically what the entropy is up to that point and then integrate from there.

Integrating in composition

Another approach that is quite practical is simply to integrate in composition. The composition is essentially the derivative of the free energy with respect to the chemical potential:

$C = \frac{\partial F}{\partial \mu} \quad \text{or} \quad C \, d\mu = dF$

If you come in from the sides of the phase diagram, you integrate in composition space. We have written the composition as the average spin <σ> in the spin model, so you integrate <σ> dμ and that gives you dF. If you have pure A (the left edge of the phase diagram), you know the free energy there, because there is no configurational entropy: the free energy is just the energy.

Three different ways of thermodynamic integration

If you come from low temperature, you integrate the heat capacity. If you come from high temperature, you use the Gibbs-Helmholtz relation to integrate the average Hamiltonian, i.e. the energy in most cases. If you come from the sides, you integrate in composition space, i.e. you integrate <σ> dμ.

As the top horizontal line goes to infinity, you essentially know the thermodynamic properties at all four edges of the diagram, and then you can integrate inward. This is by far the most accurate way of determining 1st-order transitions: you can get the answer as accurately as you want. Integrating along a path is a painful thing, though. If you want the free energy at a point A, you cannot just simulate there; you have to simulate all the way up from low temperature (see the blue vertical line in the figure, starting at low T). Rather than simulating at one set of conditions, you have to simulate along the whole path.

Issues with thermodynamic integration

Disadvantages:
1) You need a reference state.
2) You need to simulate along the path from the reference state to the desired state.
3) Error accumulates along the path.
4) The path needs to be in equilibrium.

Advantages:
1) It is highly accurate.
2) All approximations are under control, and the error can be reduced by longer simulations.

Why integrate only with physical parameters?

You can also do what is called λ-integration.
Before, we were integrating with respect to physical parameters such as temperature, concentration, or 1/T. You can actually integrate with respect to non-physical parameters as well. For example, we may want the free energy difference between systems that have two different Hamiltonians.
– E.g., we may want to know how the free energy changes if we turn on a certain interaction in the Hamiltonian, such as a Coulombic interaction: how does this really affect the free energy of the system?
– Or maybe we want to add a particle, etc.
– Or you can view changing the temperature as a way of changing the Hamiltonian: recall that when you change the product βH (which enters as a single term), you change the probability density ∝ e^{-βH}.

λ integration

You integrate along a path in λ that describes how you go from Hamiltonian 1 to Hamiltonian 2, and Hamiltonian 2 could be totally different physics, e.g. different chemistry. We write the Hamiltonian as a linear combination of the Hamiltonians of systems 1 and 2. The interval 0 to 1 for λ (for λ = 0 we have Hamiltonian 1, for λ = 1 we have Hamiltonian 2) defines the path along which we integrate. What we want to know is the free energy change as λ goes from 0 to 1.

You can look at free energy changes by changing the chemistry. The ring shown to the right is called a phenol, and you can put different groups on it: chlorine gives chlorophenol, a methyl group gives methylphenol, a cyano group gives cyanophenol. You could write a Hamiltonian that slowly changes one of these species into another. Slowly changing means that you mix the interactions: the potential that comes from the attached group is a weighted average. Say we go from methylphenol to chlorophenol; you weight the potentials with the parameter λ and integrate in that space, and you obtain the free energy difference between the two attached groups.

λ integration: Effect of dipole in water

Let us look at the effect of the water dipole on the free energy of water. The hydrogens are slightly positively charged (δ+), so the oxygen then carries a charge of −2δ. Because of that you have a dipole moment, which points from negative to positive. In a simple simulation you can represent the water just by its dipole and nothing else (no explicit atoms or molecules). If you want to look at how the free energy changes with the strength of that dipole, you could write the dipole strength in terms of some parameter λ.

[Figure: water molecule with charge δ+ on each hydrogen and −2δ on the oxygen.]

The positive and the negative charges in the dipole depend on the parameter λ, and we essentially look at the free energy as a function of λ. The green line in the plot is the exact result. At λ = 0 and λ = 1 you should get the same answer, because at λ = 1 the dipole is the same as at λ = 0 (the + and − charges are simply interchanged). The purple solid line is what was obtained with λ-integration. The result is pretty good, but the value at λ = 1 is not the same as at λ = 0: some error has accumulated along the integration path.
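Here is a minimal sketch of λ-integration itself, on an admittedly artificial toy problem (a 1D harmonic well whose spring constant is switched from k1 to k2), chosen so that the result can be checked against the exact answer ΔF = (k_B T / 2) ln(k2/k1); all parameter values are illustrative. The free energy derivative along the path is the ensemble average of ∂H/∂λ sampled at fixed λ.

```python
import numpy as np

rng = np.random.default_rng(3)
kBT = 1.0
k1, k2 = 1.0, 4.0                  # spring constants of Hamiltonians 1 and 2

def H(x, lam):                     # H(lambda) = (1 - lambda) * H1 + lambda * H2
    return 0.5 * ((1.0 - lam) * k1 + lam * k2) * x**2

def dH_dlam(x):                    # dH/dlambda = H2 - H1
    return 0.5 * (k2 - k1) * x**2

def average_dH(lam, n_steps=60000, step=1.5):
    """Metropolis estimate of <dH/dlambda> in the ensemble of H(lambda)."""
    x, samples = 0.0, []
    for n in range(n_steps):
        x_trial = x + step * (rng.random() - 0.5)
        if rng.random() < np.exp(-(H(x_trial, lam) - H(x, lam)) / kBT):
            x = x_trial
        if n > n_steps // 10:      # discard a short equilibration period
            samples.append(dH_dlam(x))
    return np.mean(samples)

lams = np.linspace(0.0, 1.0, 11)
integrand = np.array([average_dH(l) for l in lams])
dF = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lams))   # trapezoid rule in lambda
print(f"lambda integration: dF ≈ {dF:.3f}")
print(f"exact result      : dF = {0.5 * kBT * np.log(k2 / k1):.3f}")
```

For the water-dipole example above, dH/dλ would instead involve the derivative of the charge-dependent interaction energy with respect to the dipole-scaling parameter.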
To get a good idea of the error accumulated along the integration path, it is essential to do a circular integration in your phase space: not only go from state 1 to state 2, but also come back.
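Continuing the toy sketch above (this snippet reuses average_dH and lams from that block rather than being self-contained), one simple way to do such a round-trip check is to integrate forward from λ = 0 to 1 and back from 1 to 0 with independently sampled averages and compare the two estimates; their mismatch gives a rough measure of the accumulated statistical error.

```python
# Continues the previous sketch: average_dH() and lams are defined there.
forward = np.array([average_dH(l) for l in lams])            # fresh samples, 0 -> 1
backward = np.array([average_dH(l) for l in lams[::-1]])     # fresh samples, 1 -> 0

dF_forward = np.sum(0.5 * (forward[1:] + forward[:-1]) * np.diff(lams))
dF_backward = -np.sum(0.5 * (backward[1:] + backward[:-1]) * np.diff(lams[::-1]))
print(f"forward  dF ≈ {dF_forward:.3f}")
print(f"backward dF ≈ {dF_backward:.3f}")
print(f"round-trip mismatch ≈ {abs(dF_forward - dF_backward):.3f}")
```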