Bug #2867

abnormal box fluctuations on GPUs still there

Added by Ramon Guixa-Gonzalez 6 months ago. Updated about 14 hours ago.

Status: New
Priority: High
Assignee:
Category: -
Target version: -
Affected version - extra info: Other versions too
Affected version:
Difficulty: uncategorized

Description

Hi there,

I am afraid the fix applied for Bug #2845 has not resolved the issue of the apparent GPU-related box fluctuations. I tested the fix (http://manual.gromacs.org/documentation/2016.6/ReleaseNotes/release-notes.html#fix-incorrect-lj-repulsion-force-switching-on-gpus) on GROMACS 2016.6 and, although it took more than 300 ns, the box still fluctuates quite abnormally.

I have attached the files from this test.

Ramon

2016_6.zip (43.4 MB) 2016_6.zip Ramon Guixa-Gonzalez, 02/22/2019 11:37 AM
jp101759q.pdf (4.36 MB) jp101759q.pdf Alexander Vogel, 08/01/2019 11:50 AM
mdout.mdp (10.7 KB) mdout.mdp Alexander Vogel, 08/02/2019 02:39 PM
run.mdp (1.76 KB) run.mdp Alexander Vogel, 08/02/2019 02:39 PM
working_vs_3nonworking_sims.png (26.5 KB) working_vs_3nonworking_sims.png x-axis is time in ps, y-axis is box dimension in z-direction Alexander Vogel, 08/05/2019 01:42 PM
run.mdp (1.76 KB) run.mdp Alexander Vogel, 08/05/2019 01:58 PM
step7_production.mdp (1.16 KB) step7_production.mdp Alexander Vogel, 08/05/2019 01:58 PM
1_microsecond_without_problems.png (23.2 KB) 1_microsecond_without_problems.png Alexander Vogel, 08/13/2019 10:25 AM
likeCharmm.jpeg (758 KB) likeCharmm.jpeg Ramon Guixa-Gonzalez, 08/14/2019 05:48 PM
update.png (22.9 KB) update.png Alexander Vogel, 08/20/2019 12:39 PM
Box_fluctuations.png (249 KB) Box_fluctuations.png Mayukh Chakrabarti, 08/20/2019 07:00 PM
production.mdp (2.5 KB) production.mdp Mayukh Chakrabarti, 08/20/2019 07:00 PM
collapse_after_mdp_change.png (27 KB) collapse_after_mdp_change.png Alexander Vogel, 08/21/2019 10:06 AM
mdp_before_collapse.mdp (1.16 KB) mdp_before_collapse.mdp Alexander Vogel, 08/21/2019 10:12 AM
changed_mdp_that_leads_to_collapse.mdp (1.8 KB) changed_mdp_that_leads_to_collapse.mdp Alexander Vogel, 08/21/2019 10:12 AM
production_langevin.mdp (1.3 KB) production_langevin.mdp jose flores-canales, 08/21/2019 03:23 PM
collapse_after_mdp_change_update.png (27 KB) collapse_after_mdp_change_update.png Alexander Vogel, 08/22/2019 08:31 AM
POPC_256x3.png (16.6 KB) POPC_256x3.png Alexander Vogel, 08/22/2019 01:56 PM
POPC_1024(red)_4096(blue).png (14.3 KB) POPC_1024(red)_4096(blue).png Alexander Vogel, 08/22/2019 01:56 PM
berkhess.tar.gz (24 MB) berkhess.tar.gz Alexander Vogel, 08/22/2019 02:36 PM
pic1.jpeg (433 KB) pic1.jpeg boxZ Ramon Guixa-Gonzalez, 08/22/2019 10:40 PM
pic2.png (91.7 KB) pic2.png mdp diff Ramon Guixa-Gonzalez, 08/22/2019 10:40 PM
largearea_t232ns.pdb (5.43 MB) largearea_t232ns.pdb Berk Hess, 08/25/2019 12:48 PM

Related issues

Related to GROMACS - Bug #2845: critical box fluctuations when using GPUs Resolved

History

#1 Updated by Berk Hess 6 months ago

  • Related to Bug #2845: critical box fluctuations when using GPUs added

#2 Updated by Ramon Guixa-Gonzalez 6 months ago

Hi, is there any other info I could provide to help? Do you have any hint on how to solve this issue?

#3 Updated by Alexander Vogel 25 days ago

I just saw that bug #2845 is labeled resolved so I'll repost my findings here and also continue updating here:

I can confirm this bug with GROMACS 2019.3. I have several simulations of a GPCR+membrane system that get ripped apart after 100-200 ns. Even before that, unnaturally large fluctuations appear, although the box manages to recover from them. After some time the system violently extends in the x-y dimensions (the membrane plane) and collapses in the z-dimension. I have two simulations started from identical coordinates with different random seeds but otherwise identical parameters, and both get ripped apart. Interestingly, both systems have very similar box dimensions after the "event", although the disrupted membrane structure is very different between the two. Since I didn't notice the error immediately and the simulations did not crash, they continued for several hundred ns (the first simulation for 450 ns, the second for 900 ns). There still seem to be some pretty large fluctuations during that time, but no violent changes as before.

I'll start a third version of the system today. A colleague of mine reported similar issues but I didn't have time to look at it yet.

I set up the simulations with CHARMM-GUI and run them on an AMD 1700X + Nvidia RTX 2080 Ti. If more info is needed, please tell me what you need.

Alexander Vogel

PS: The third identical copy, started with a different random seed, ripped apart in the same way after less than 100 ns. The final box dimensions are again very similar to the first two cases. I'll now try to use CHARMM-GUI with the same PDB structure of the protein and the same settings to create a different starting conformation (CHARMM-GUI involves some randomness when placing lipids etc.). I'll report how that goes.

Anyway, I don't really believe in a force field problem, because I can't imagine how even a big error would lead to such dramatic fluctuations in the box size. Also, Ramon reported in Bug #2845 that the error does not occur with CPU. For that reason I will also generate the NAMD input files for the new simulation, so that I can check whether the problem occurs there as well.

#4 Updated by Berk Hess 25 days ago

Ramon and I both saw the issue with CPU as well, so it is not a GPU issue. We can't be sure it's a force field issue, but that is the most likely explanation.

#5 Updated by Alexander Vogel 25 days ago

OK, sorry, I didn't see that you also observe it with CPU. I'll try to investigate further and see if I find something. What I find striking is that I also use POPC. I'll check its force field files to see if I find something.

Alexander

#6 Updated by Berk Hess 25 days ago

I don't think you can see much from the force field files. Lipid bilayer phases are always close to phase transitions, so the force field needs to be parameterized very accurately for systems to remain in the correct phase. Furthermore, dynamics in membranes are very slow, so it can take a very long time for anomalies to occur. Unless the force field has been tested in the microsecond regime, there is always a chance of something going wrong.

#7 Updated by Alexander Vogel 25 days ago

I'm running pure POPC simulations (with slightly different mdp parameters) for many microseconds in another project and nothing bad happens there:

1. 256 POPC: 8.05 microseconds
2. 256 POPC: 7.91 microseconds
3. 256 POPC: 7.22 microseconds
4. 1024 POPC: 1.65 microseconds
5. 4096 POPC: 301.3 nanoseconds

And the phase transition of POPC is at 277 K, which is pretty far from the 310 K the collapsing simulation is running at (the pure POPC sims that run fine are at 298 K). In addition, the POPC force field has been tested pretty rigorously, with extensive comparison to experiment: "Update of the CHARMM All-Atom Additive Force Field for Lipids: Validation on Six Lipid Types" by Klauda et al., J. Phys. Chem. B 2010, 114, 7830–7843.

POPC is one of the six lipids that were tested. I attached the paper for reference.

Therefore I doubt that the POPC force field is the issue, unless there is a typo somewhere; that is what I want to check for. I took a first look at the files and they seemed fine, however. The protein force field should also be fine, as should water and ions, or somebody else would have noticed already. And that is everything that is part of my simulation that gets ripped apart; there is nothing else in there.

Alexander

#8 Updated by Berk Hess 25 days ago

I don't know much about these CHARMM force fields. I assume that with the 277 K you mean the experimental transition temperature. I don't know what that of the model is, and there could be other phases the model allows that are not observed experimentally.
It could also be that the exact cut-off parameters and treatment don't exactly match what the force field was parametrized with.

#9 Updated by Alexander Vogel 24 days ago

Yes, 277 K is the main phase transition that was determined experimentally for POPC. The simulations clearly are above that temperature and the membrane behavior in the simulations clearly corresponds to that of membranes above the main phase transition.

I'm a membrane biophysics person, so I would say I know quite a lot about lipids and membranes. What is happening in the simulations is completely unphysical. The forces needed to rip a membrane apart like this must be huge. Lipid acyl chains hate water, and during the "event" they get exposed to lots of it. Once I did simulations with lipids and water distributed randomly in a box. Within 1 ns they clustered to avoid as many water contacts as possible, and within 50 ns they formed bilayers just by self-assembly. Here they do the opposite: they leave the bilayer and get exposed to lots of water within a few ns. This does not make any physical sense.

Also, I'm using standard cut-off parameters, and I have done membrane simulations for many years using CHARMM and NAMD and never seen anything even remotely similar. That is why I doubt that this is a force field issue. If there is some error in the force field, it must be humongous. But the force field predicts so many experimental values close to perfection...

Anyway, I'm still on the issue. I'm trying several things, and I have asked my co-workers to check whether they observe it in their simulations as well. I'll also try to find a simulation that gets ripped apart for which I have the NAMD input files, to see if it happens there as well. So I'm on the issue and will keep you updated on whether it turns out to be a force field issue or a GROMACS issue.

#10 Updated by Berk Hess 24 days ago

Thanks for investigating this.

The force field can be compared by comparing the energies for a single frame between GROMACS and another code. Only the LJ energies could differ slightly due to a different LJ cutoff switch treatment.

Could you attach an mdp file?
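
For reference, a minimal single-frame comparison in GROMACS could look roughly like the following sketch (the file names are illustrative; the frame would come from the code you are comparing against):

    # build a run input for one frame (adjust mdp/topology/frame names to your system)
    gmx grompp -f run.mdp -c frame.gro -p topol.top -o single_point.tpr
    # recompute energies and forces for that frame without integrating
    gmx mdrun -s single_point.tpr -rerun frame.gro -deffnm single_point
    # inspect the individual energy terms (LJ, Coulomb, bonded, ...)
    gmx energy -f single_point.edr

The same frame evaluated in the other code should then give matching bonded and Coulomb terms, with only the LJ terms differing slightly because of the switch treatment.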

#11 Updated by Alexander Vogel 23 days ago

I attached my mdp and a file that lists all mdp parameters set in GROMACS.

Alexander

#12 Updated by Berk Hess 23 days ago

One thing I did not think of before is the possibility that the Parrinello-Rahman barostat with a coupling time of 10 ps might cause instabilities.
What pressure coupling with what parameters are you using in CHARMM and NAMD?

I am now running Parrinello-Rahman with a coupling time of 20 ps to see if that changes things.
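
As a sketch, the relevant mdp lines for such a test would be something like this (the values other than tau_p are typical membrane defaults, not necessarily the exact ones used here):

    pcoupl           = Parrinello-Rahman
    pcoupltype       = semiisotropic      ; couple x/y and z separately, as usual for membranes
    tau_p            = 20.0               ; increased from 10 ps
    ref_p            = 1.0  1.0
    compressibility  = 4.5e-5  4.5e-5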

#13 Updated by Alexander Vogel 21 days ago

I think I'm closing in on the issue. I have now managed to run the exact same simulation, of which three copies with different seeds collapsed within less than 200 ns, up to almost 400 ns without problems in GROMACS (see attached picture; the red line is the new simulation). I'll leave it running to confirm on longer time scales. What I did was run the simulation with the mdp that comes from CHARMM-GUI instead of the modified mdp that our work group usually uses. As far as I know, Ramon uses a similar setup, as he is a former member of the group, and I think he came up with some of the changes.

I'm not an expert on the changes in the mdp, so could you please go over them and see if there are any differences that stand out (I attached both files). The simulations with run.mdp collapse, while the one with step7_production.mdp seems to be running fine.

Thanks and best regards,

Alexander

#14 Updated by Berk Hess 19 days ago

I think your latest observation is just coincidence and you will see the same issue with more statistics.

I have been running the system provided by Ramon, which uses the step7_production settings, but with the V-rescale thermostat with tau-t=0.5. I changed to NH with tau-t=1 and see the same issue.

#15 Updated by Alexander Vogel 13 days ago

Now at 1 microsecond without problems.

#16 Updated by jose flores-canales 12 days ago

Michael Shirts has mentioned that a Monte Carlo barostat is in the works in his lab. It would be great to see this and other barostats arriving in GROMACS.

#17 Updated by Ramon Guixa-Gonzalez 11 days ago

Alexander Vogel wrote:

Now at 1 microsecond without problems.

Alex, I also have the feeling new runs will show us otherwise, but let's see. On my end, I am running a test with a similar CHARMM-GUI-out-of-the-box mdp, namely using a longer tau-t (1.0) and a shorter tau-p (5.0). So far (~325 ns), no abnormal fluctuations are present (pic attached). In the past, however, sudden fluctuations have often shown up around 400 ns and sometimes much later (e.g. 800 ns). I will report again beyond 1 microsecond.

#18 Updated by Alexander Vogel 6 days ago

Another update. Now at 1.7 microseconds and still everything fine. I'll try to switch this simulation to the other mdp options...never did that before...have to see how it goes. If I get that working we will see if it shows up again.

Alexander

#19 Updated by Berk Hess 6 days ago

I would find it very surprising if only changing the T and p coupling settings "fixes" the issue. If anything, using Nose-Hoover and a shorter pressure coupling time should make things less stable. If this really "fixes" it, the only thing I can think of is that the system has some internal resonance which coincides with the 10 ps period of the barostat.

#20 Updated by Mayukh Chakrabarti 5 days ago

I have been following this issue with some interest, as I have been experiencing the same box fluctuations observed by others. I am currently using GROMACS 2018.7 with the CHARMM36 force field, July 2017 revision. My system contains a transporter protein embedded in a POPC bilayer. In multiple simulations of the same system with different random seeds, I observe large fluctuations in the x-y dimensions, followed by an eventual collapse in the z-dimension, usually after 100-200 ns. I have attached an image documenting these fluctuations in multiple runs. Equilibrating the system for a longer time does not fix the issue.

I too am a bit doubtful about specific mdp settings fixing this issue. I have also attached an example mdp file that I have been using. Other than the presence of two coupling groups as opposed to three, it exactly follows the settings suggested by the CHARMM-GUI production.mdp file, which Alexander has now tested for 1.7 microseconds (Nose-Hoover & Parrinello-Rahman, tau-t = 1, tau-p = 5). In my case, I am observing the fluctuations with these settings.

#21 Updated by Berk Hess 5 days ago

Consensus is growing in the community that second-order coupling algorithms like Nose-Hoover and Parrinello-Rahman cause issues (such as instabilities).
Has someone tried using Berendsen pressure coupling?
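
For anyone who wants to try, a minimal sketch of the change relative to the Parrinello-Rahman settings would be (keeping the other pressure-coupling lines unchanged):

    pcoupl = berendsen   ; first-order coupling instead of Parrinello-Rahman
    tau_p  = 5.0         ; 5 ps; 10 ps would also be worth testing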

#22 Updated by Alexander Vogel 5 days ago

And BAM...there it is. Directly after changing the mdp the system starts to collapse. In the attached picture the dotted line marks the timepoint where I changed the mdp. The two different mdp files are also attached. I'll continue the simulation for one more day to see it fully collapse...just to be safe.

In the mdp, several parameters were changed. The biggest difference is the change of the thermostat from Nose-Hoover to V-rescale, but some other parameters change as well. If you want me to change just a single one, I can do that and see what happens; just tell me which one you would be most interested in.

Alexander

#23 Updated by jose flores-canales 4 days ago

I attach my mdp parameter file. It is basically the same as the one from CHARMM-GUI, except that I am using a Langevin thermostat. For Parrinello-Rahman I am using tau_p = 5 ps; the same system also failed with tau_p = 20 ps.

#24 Updated by Berk Hess 4 days ago

I have now tried Berendsen pressure coupling with tau-p of both 5 and 10 ps. The box dimensions now behave much more smoothly and show less extreme fluctuations, but the issue is still there.

#25 Updated by Alexander Vogel 4 days ago

Update from my side. It hasn't collapsed completely yet, but it went much further than yesterday; see the graph. After changing the mdp the problems are clearly back. I'm certain it is just a matter of time until it collapses.

What is interesting is that Jose Flores-Canales also observes these problems after changing the thermostat from the CHARMM-GUI standard parameters, so maybe it is the thermostat instead of the barostat, or maybe a combination of the two.

Also that idea of internal resonances becoming critical does not sound convincing at all to me for three reasons:

1. If there is one thing biomembranes in the liquid-crystalline phase are known for, it is that they are extremely flexible. Having a critical resonance in a biomembrane would be like having one in bubble gum. The flexible nature of the material should effectively damp every oscillation.

2. It doesn't look like oscillations becoming critical. In the simple picture I have of a critical oscillation, it should look like an oscillation that quickly becomes larger and larger. But there are no oscillations in any of the graphs, neither in mine nor in those of Ramon or Jose Flores-Canales. Box-size Z always goes down, never up. That is not an oscillation to me.

3. It happens for different tau_p; e.g. see Jose Flores-Canales' post above. I would expect tau_p to have an influence if these really were critical oscillations.

I know that I'm going to sound a bit ungrateful now, but this issue has been open for months already. Three independent research labs have reported it here (Ramon: Leonardo Pardo lab, Jose Flores-Canales: Aarhus University, myself: Hildebrand lab). A colleague of mine also posted the issue on Twitter, and several more people reported seeing the problem:

Wojciech Kopec (de Groot lab): https://twitter.com/wojciechkopec3/status/1158328579475877888
R. Thomas Ullmann (has his own lab): https://twitter.com/RTUllmann/status/1158409881432596482

Also the issue is devastating to any simulation where it happens. As far as I know Ramon does not use GPU anymore because of the bug. I don't know if I can rely on my simulations anymore.

A lot of possible non-GROMACS causes were brought up, but none of them sound likely to me. It almost certainly is not the POPC force field. I have uploaded a paper that shows how good it is in comparison to experiment. I have GROMACS simulations of pure POPC bilayers of various sizes running at the moment (256 lipids, 1024 lipids, 4096 lipids (more than 1 million atoms)), for a cumulative time of more than 25 microseconds, and all of them have no problems (with standard CHARMM-GUI mdp parameters). The critical oscillations also do not sound convincing as a cause, as stated above.

Changing the mdp parameters causes it, or at least has a large impact on it. To me that looks like the problem most likely has to do with GROMACS or one of its pressure/temperature coupling methods. I'm not an expert on these, so I don't know what to look for. I try to help as much as possible by trying different parameters and running microsecond-timescale simulations just for this bug. It is easily reproducible for my simulation; it should not be too hard to look at it in detail. Is anybody really looking at it at the moment? What do I have to do so that this issue is taken seriously? Maybe it is, but the responses here don't really reflect that.

So now I sound ungrateful...which I didn't want to because I'm very grateful that GROMACS exists and that people take care of it and fix its bugs and make it better and faster all the time. It just puzzles me that this issue does not cause more (perceivable) concern.

Alexander

Edit: Sorry, I didn't see your post from 1 h ago; you probably ninja'd me while I was typing. Do I understand correctly that it is probably not the barostat? Is there anything you want me to try with the mdp parameters of my simulation, where it happens so reproducibly?

#26 Updated by Berk Hess 4 days ago

So you have seen no problems at all with the CHARMM-GUI parameters on both CPU and GPU?
Have others used the same parameters and seen no issues either?

If that is the case, we need to gradually change a problematic mdp file into the CHARMM-GUI one and see when the issues disappear.

Note that we still don't know if this is a GROMACS or force field issue. People tend to simulate much longer in GROMACS than in many other packages, so force field issues tend to appear first with GROMACS.
Also, details of the implementation of coupling algorithms matter. Some years ago I removed a one-step delay between recording and using the temperature in the Nose-Hoover thermostat. This caused a user's ice simulations to become unstable, but that was actually the correct simulation behavior; the delay added artificial damping. (I know a bilayer is not in a solid state, but this is just to show that subtle details can have a large impact on the behavior of simulations.)

#27 Updated by Justin Lemkul 4 days ago

People have reported via the Twitter thread linked above that the standard CHARMM-GUI inputs can also lead to this problem.

I have validated this force field extensively, comparing forces and energies between GROMACS and CHARMM. They match almost perfectly for all molecules I've ever tested (encompassing amino acids and full proteins, nucleotides and DNA, lipids, small molecules, and carbohydrates). I have discussed this with Alex MacKerell and we agree that the odds of a force field problem are vanishingly small. Personally, I have several hundred ns of membrane simulations in OpenMM with C36 that are just fine.

We ran a cross-software comparison of membrane simulation outcomes in the CHARMM-GUI validation, but that was with version 5.1. At that point, everything matched quite well. We used CPU only then.

#28 Updated by Berk Hess 4 days ago

I was also thinking of the stability of the CHARMM forcefield itself (not the implementation in GROMACS).

In Stockholm we have many simulations with this forcefield up to a microsecond and we have never seen issues.

#29 Updated by Berk Hess 4 days ago

I would like to get the input for the CHARMM-GUI mdp parameter runs that failed, so I can check those myself.

#30 Updated by Berk Hess 4 days ago

Has someone observed whether there is first a conformational rearrangement and then the box starts fluctuating, or whether the box starts fluctuating first and then the conformations change?

#31 Updated by Alexander Vogel 3 days ago

Many answers to many different questions follow:

1. Yes, with standard CHARMM-GUI parameters there seem to be no issues in my case. The simulation ran for 1.7 microseconds without a hitch; look at the graphs I attached in earlier posts. However, problems might appear at even longer simulation times, as Justin Lemkul suggests. So far I have used only GPU.

2. About the force field: if Alex MacKerell says that the force field is OK, I believe him. Also, as stated multiple times, I do have three simulations of pure POPC bilayers on very long timescales: 8.7 microseconds + 8.6 microseconds + 7.8 microseconds. I just checked them again; they are fine, no fluctuations whatsoever (see attached graph). I also checked my two larger simulations (1k and 4k lipids); they are much shorter (385 ns + 183 ns) but they also show no fluctuations (see the other attached graph). I can't think of any more convincing way to show that it is not the POPC force field that is causing the issue.

3. I attached the input files for my various collapsing systems. The tar file contains the following folders:

3_collapsing_runs: This folder contains the three replicas I showed at the beginning that collapsed in less than 200 ns.

charmm-gui_run_no_problems: This folder contains the start of the 1.7 microsecond run that had no problems.

charmm-gui_run_changed_mdp: This folder contains the end of the 1.7 microsecond run that had no problems. I used that to start a run with changed mdp options, which showed problems within 100 ns or so.

Thanks for looking into it!

Alexander

#32 Updated by Alexander Vogel 3 days ago

Your question about conformations:

The membrane stays intact pretty long. As I said, it is pretty flexible. The membrane only ruptures after the simulation box has already deformed considerably. I hope that is what you were asking about.

Alexander

#33 Updated by Berk Hess 3 days ago

What I only noticed or realized now is that there is a switch from 3 center-of-mass motion removal groups in the system to 1. In general there can be issues where the solvent starts sliding over the membrane, and therefore people use separate COMM removal groups. I don't know if this correlates with the issue.
I'm running a check run with velocity output to see if momentum is building up between membrane and solvent.
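
For illustration, separate COM motion removal per group is requested with mdp lines like these (the group names are only examples and must exist in the index file):

    comm-mode = Linear
    comm-grps = MEMB SOLU SOLV   ; e.g. membrane, solute and solvent removed separately
    nstcomm   = 100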

#34 Updated by Berk Hess 3 days ago

PS Thanks Alexander for all the detailed information!

#35 Updated by Ramon Guixa-Gonzalez 3 days ago

Hey all, I checked the run I started a few days ago and, as I expected, the system collapsed after almost 600 ns (see pic1 attached). The most interesting thing here is that I used almost (see below) exactly the same parameters CHARMM-GUI provides in its output files.
The ONLY difference between a CHARMM-GUI-like mdp file and the one I show here is the thermostat: V-rescale (me) versus Nose-Hoover (CHARMM-GUI) (see pic2 attached). I will now re-run the same simulation using an mdp file where I only change V-rescale to Nose-Hoover, in order to simulate an exact copy of the CHARMM-GUI parameters. I am not entirely sure, but I think I already simulated one of my systems literally using the mdp production file provided by CHARMM-GUI and observed similar fluctuations, including the collapse of the system.
May I point out one thing that I have reported since the beginning of this issue: none of the simulations showing these dramatic effects (likely more than 25 or 30 different runs by now) end up crashing. Berk, regardless of the fluctuations, wouldn't you expect mdrun to crash upon the collapse of a protein-membrane system?

#36 Updated by Berk Hess 3 days ago

I haven't seen crashes yet.

I think I tried all combinations of thermostats and barostats.
My current working hypothesis is that it could be due to the use of 3 T-coupling groups combined with COMM removal for the whole system. I have a run going, but it needs to run longer.

#37 Updated by Berk Hess 3 days ago

I also got box fluctuations with the COMM groups equal to the T-coupling groups.
Now I have no clue at all what it could be. All combinations of thermostats and barostats I tried can fail. But it still seems that some combinations of mdp parameters do not fail, right?

#38 Updated by Alexander Vogel 3 days ago

The question is: when do you call a simulation "not failed"? How many microseconds? I got mine running up to 1.7 microseconds without problems (barostat: Parrinello-Rahman, thermostat: Nose-Hoover), but there are people who report seeing it after 6 microseconds.

When the system is compressed in the z-direction (and therefore expanded in the x-y plane), the forces must be crazy high, because the membrane surely should not like to be stretched like that, in particular just before rupture. The hydrophobic effect should be very strong, trying to avoid any water molecules in between lipids. Could one not just analyze the forces individually for such a snapshot and see what is causing the system to expand even further in the x-y plane, although there should be very high forces from the lipids and the waters close to them to keep the system together?

I would do it but I'm not an expert in these kinds of things and don't know how...

Alexander

#39 Updated by Berk Hess 3 days ago

I now looked at the potential energy, and it looks unaffected by the large changes in box shape and membrane area. Unfortunately we don't know the free-energy differences between these states. But I am back to my original hypothesis that these stretched-out membrane states are very similar in free energy to the wanted state. If they are separated by a relatively small free-energy barrier, differences in coupling algorithms could provoke or prevent transitions.

I don't know of a way to assess the free-energy difference between such states. One check would be taking an extended state, maybe not too extreme, and run that in NAMD, CHARMM or OpenMM to see if it goes back to the original state.

#40 Updated by Berk Hess 3 days ago

Alexander, you are arguing based on the real world, but we are simulating models, not the real world. I do not doubt that the developers of this force field have done a good and careful job, but membrane force fields are particularly tricky. There is a delicate balance between hydrophilic interactions, hydrophobic interactions, dihedral potentials in the tails and entropy. You might get the right area per lipid and good-looking conformations, but with a slightly incorrect balance. Or, even nastier, you could get the balance correct in the wanted state while there are unforeseen free-energy minima in unwanted, previously unexplored conformations. We have seen this before in e.g. DNA force fields.

As the potential energy does not change, there will likely be no large forces in these extended states. But maybe such states should be disfavored due to lower entropy. That should still lead to a pressure contracting the membrane again though, and we do not see that.

#41 Updated by Justin Lemkul 3 days ago

Berk Hess wrote:

I now looked at the potential energy, and it looks unaffected by the large changes in box shape and membrane area. Unfortunately we don't know the free-energy differences between these states. But I am back to my original hypothesis that these stretched-out membrane states are very similar in free energy to the wanted state. If they are separated by a relatively small free-energy barrier, differences in coupling algorithms could provoke or prevent transitions.

I don't know of a way to assess the free-energy difference between such states. One check would be taking an extended state, maybe not too extreme, and run that in NAMD, CHARMM or OpenMM to see if it goes back to the original state.

I could do this pretty easily if someone wants - I have inputs readily available that I know work.

Another test I would like to perform is to compute the single-point energy of a crazy coordinate file. If someone (Alexander?) can post a .gro/.pdb file of a snapshot that is torn apart (an extreme point of a box fluctuation, for instance) and also report the energies that GROMACS gives for that snapshot, I will compare energies and forces in CHARMM so we can do an apples-to-apples comparison of the software on the same snapshot.

#42 Updated by Berk Hess 3 days ago

I trust the energies in GROMACS and they look fine.

But a check I just did is to start from a stretched conformation and run a "relaxation" run with fixed box x/y dimensions. Now I get negative pressures of around -40 bar along x and y, so the system actually does seem to want to contract back. I guess this means that some momentum is built up that causes the pressure and box fluctuations. It does not seem to be due to the conformation of the bilayer, although I can't 100% exclude that now.
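
(For completeness: one way to do such a fixed-area run in GROMACS is semiisotropic pressure coupling with zero lateral compressibility, roughly as in this sketch; the values other than the zero compressibility are illustrative.)

    pcoupl           = Parrinello-Rahman
    pcoupltype       = semiisotropic
    tau_p            = 5.0
    ref_p            = 1.0  1.0
    compressibility  = 0  4.5e-5   ; zero compressibility in x/y keeps the lateral box dimensions fixed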

#43 Updated by jose flores-canales 2 days ago

I am running MD with AMBER18/pmemd.cuda using a restart file from before my system underwent the transformation.

#44 Updated by Ramon Guixa-Gonzalez 2 days ago

Berk Hess wrote:

I haven't seen crashes yet.

I think I tried all combinations of thermostats and barostats.
My current working hypothesis is that it could be due to the use of 3 T-coupling groups combined with COMM removal for the whole system. I have a run going, but it needs to run longer.

Berk, as I just reported (see comment #34 above), there is so far no combination of parameters that does not fail, other than changing the thermostat from V-rescale to Nose-Hoover, which is what I am testing now. In Alexander's hands, it also seems to be this switch that is causing the problem.

#45 Updated by Berk Hess about 14 hours ago

I now ran a longer and slightly more careful fixed-area run of Ramon's system at 13% larger x/y area. This gives a surface tension of 0.014 ± 0.006 N/m, which is about a factor of 2.5 lower than one would expect from the area compressibility modulus of the force field.

This seems to indicate that there are conformational changes which lower the surface tension and might enable the observed fluctuations, if there is a significant initial fluctuation to overcome some barrier.
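
For reference, the tension in such a fixed-area run can be estimated from the time-averaged pressure tensor and box height as gamma = Lz * (Pzz - (Pxx + Pyy)/2); a sketch of how to get the numbers (the output file name and the unit conversion 1 bar nm = 1e-4 N/m are the only assumptions here):

    # extract the averaged pressure tensor components and the box height
    gmx energy -f ener.edr -o tension_terms.xvg
    # interactively select Pres-XX, Pres-YY, Pres-ZZ and Box-Z, then
    # gamma [N/m] = Box-Z [nm] * (Pres-ZZ - (Pres-XX + Pres-YY)/2) [bar] * 1e-4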

I have attached a frame so people can look at the bilayer.
