Tmoves in large systems

Katharina Doblhoff
Posts: 84
Joined: Tue Jun 17, 2014 6:50 am

Tmoves in large systems

Post by Katharina Doblhoff »

Dear CASINO users and developers,

Concerning T-moves, the CASINO manual states:

"A further disadvantage is that this option requires a truly enormous amount of memory in systems with large numbers of particles (seeing if this can be reduced remains a project). The default of use_tmove is F and we tend not to use them unless we face stability issues."
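For reference, switching them on is a single keyword in the input file (I am assuming the usual "keyword : value" input syntax and the spelling use_tmove here):

# use Casula T-moves instead of the locality approximation
use_tmove : T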
Now, I have a pretty large system (heading towards about 1000 electrons) and I would very much like to use T-moves, since in my experience they cause fewer problems than the locality approximation. Is this issue still there? Does anybody have experience as to whether I would run into memory issues for such a system? Has anybody thought about whether the whole thing could be recoded?

Thank you,
Katharina
Neil Drummond
Posts: 117
Joined: Fri May 31, 2013 10:42 am
Location: Lancaster

Re: Tmoves in large systems

Post by Neil Drummond »

Dear Katharina,

I'm currently running 288-electron calculations with T-moves and have seen no obvious problems. If you have a larger calculation set up and ready to run, it would be good to do a couple of short test runs to check whether T-moves cause any issue.

Thanks,

Neil.
Mike Towler
Posts: 239
Joined: Thu May 30, 2013 11:03 pm
Location: Florence

Re: Tmoves in large systems

Post by Mike Towler »

Hi Katharina,

Short answers:
Does anybody have experience in whether I would run into memory issues for such a system?
It depends on the number of nuclei, not just the number of electrons. The main memory-hogging arrays are dimensioned as (number of points in the non-local grid x number of electrons x number of nuclei x some other stuff). Thus if you have 1000 H atoms you have a factor of 1000 x 1000 = 1 million appearing in there. If you have 10 fermium atoms (atomic number 100 - I had to look that up) you have a factor of 10 x 1000 = ten thousand, which is significantly better. (I'm ignoring the fact that they're pseudoatoms, obviously.)

There is also a problem that you can't put these arrays in shared memory (because the electron positions are different for configs on different processors).

There's a whole bunch of extra T-move arrays if you use forces.

Thus, for a fixed number of electrons, you can reduce the memory footprint by (1) reducing the size of the non-local grid, (2) using fewer MPI processes per Shm node, (3) using heavier atoms, and (4) not using forces. A rough numerical sketch of the scaling follows.
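To put rough numbers on that scaling, here is a back-of-the-envelope sketch (the 12-point non-local grid, the 8-byte elements and the neglect of the "some other stuff" factor are my assumptions, not what the code actually allocates):

program tmove_mem_estimate
  ! Back-of-the-envelope size of an array dimensioned roughly as
  ! (non-local grid points x electrons x nuclei) in double precision.
  implicit none
  integer, parameter :: dp = kind(1.d0)
  integer :: n_grid, n_elec, n_nuc
  real(dp) :: gib

  n_grid = 12                 ! assumed points per nucleus
  n_elec = 1000

  n_nuc = 1000                ! 1000 H atoms: 1000 nuclei
  gib = real(n_grid,dp)*real(n_elec,dp)*real(n_nuc,dp)*8.0_dp/1024.0_dp**3
  print '(a,f8.4,a)', '1000 H atoms: ', gib, ' GiB per copy'

  n_nuc = 10                  ! 10 Fm atoms (Z=100): same electron count
  gib = real(n_grid,dp)*real(n_elec,dp)*real(n_nuc,dp)*8.0_dp/1024.0_dp**3
  print '(a,f8.4,a)', '10 Fm atoms : ', gib, ' GiB per copy'
end program tmove_mem_estimate

For 1000 H atoms this prints about 0.09 GiB per copy versus about 0.0009 GiB for 10 Fm atoms - and remember the real thing is multiplied by the "other stuff" and by the number of MPI processes per node, since these arrays can't live in shared memory.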
Has anybody thought of whether one may be able to recode the whole thing?
I did the obvious stuff (which reduced the original memory requirement from utterly ludicrous to merely very large). I got to the point where I realized I would have to spend a day or two rethinking the algorithm in order to reduce it any further - and it has never since risen to the top of the TODO list.

Feel free to have a look at it. The relevant T-move stuff is in the non-local.f90 module.

That said - provided you're not using ultra-light atoms - you should be OK using T-moves for a thousand-electron system on a decent computer. As Neil says, try it.

See DIARY entries 2.13.385 and 2.13.281 for further details. It is interesting to note that the former ends with:
Someone needs to (a) do lots of proper T-move tests with graphs and add some
theory stuff to the manual (T-moves are currently only mentioned in the
keyword definition), and (b) check to see if the hideous large-system memory
cost can be reduced.. There are suggestions from various quarters, including
Pittsburgh, that our current advice against using T-moves is a bit harsh, and
that the rude remarks in the current manual should be updated. I have removed
the rudest of them, without going so far as to change the recommendation (am
happy to do so pending further tests..).
Best wishes,
Mike
Andrea_Zen
Posts: 2
Joined: Wed Oct 22, 2014 4:00 pm

Re: Tmoves in large systems

Post by Andrea_Zen »

Dear Mike and Neil,

I actually have some concerns about using the T-moves implemented in CASINO in large systems.
As far as I understand (and please correct me if I am wrong), the T-move algorithm implemented in CASINO is the one introduced by Michele Casula in his first PRB paper of 2006 (http://dx.doi.org/10.1103/PhysRevB.74.161102), with the only difference that the branching is taken to be symmetric.

However, Michele and others later observed a size-consistency issue in the first T-move algorithm, which is addressed in this paper:
J. Chem. Phys. 132, 154113 (2010)
http://dx.doi.org/10.1063/1.3380831

The problem is (quoting their JCP paper) that:
"for a given time-step, the probability of a successful move will increase with the system size (i.e., the number of electrons) and saturate to one for sufficiently large systems. In this limit, the effect of the move will become independent of the system size and lead to one electron being displaced at each step. Therefore, for sufficiently large systems, the overall impact of the nonlocal move will decrease and the algorithm will effectively behave more and more like in the LA procedure."

I would bet that nobody uses very small time steps in large systems, so I would guess that the T-moves implemented in CASINO behave like the "DMC Ref. 7 sym" method shown in Fig. 2(a) of the aforementioned JCP paper.
In the JCP paper they provide two algorithms that solve this problem, called SVDMC version 1 and version 2, neither of which seems to me to be implemented in CASINO.
Given that, I think it would be better to use the locality approximation in big systems.
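To make the saturation argument concrete, here is a toy-model illustration (entirely my own sketch - the fixed per-electron probability p and its independence across electrons are assumptions, not the actual T-move acceptance rule):

program tmove_saturation
  ! Toy model of the size-consistency argument quoted above: if each of
  ! n electrons triggers a T-move with small probability p (set by the
  ! time step), the chance that at least one move fires in a given step
  ! is 1 - (1-p)**n, which saturates to 1 as n grows. p = 0.01 is an
  ! arbitrary value chosen for illustration.
  implicit none
  integer :: i
  integer, parameter :: n(4) = [10, 100, 1000, 10000]
  real(8), parameter :: p = 0.01d0
  do i = 1, size(n)
    print '(a,i6,a,f7.4)', 'n = ', n(i), '  P(at least one T-move) = ', &
        1.d0 - (1.d0 - p)**n(i)
  end do
end program tmove_saturation

With p = 0.01 this prints roughly 0.10, 0.63, 1.00 and 1.00: beyond a certain size some T-move fires at essentially every step, yet only one electron is displaced, so the per-electron impact of the moves shrinks with system size.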

Is there anyone willing to implement either of the SVDMC T-move algorithms in CASINO?

Best wishes,
Andrea
Mike Towler
Posts: 239
Joined: Thu May 30, 2013 11:03 pm
Location: Florence

Re: Tmoves in large systems

Post by Mike Towler »

See DIARY entry 2.13.385 from 2014:

---[v2.13.385]---
* Updated the DMC T-move scheme to incorporate Casula's 2010 advice that one
should use a symmetric branching factor rather than the asymmetric one used in
the original 2006 paper, on account of the fact that you then get smaller time
step biases and can use larger time steps. Thanks to Mike Deible for reminding
me of this (he has also done a quick test on diatomic fluorine which verified
that the change does what it says).

-- Mike Towler, 2014-06-24

Is that what you think it does, or are we talking about something else?
Andrea_Zen
Posts: 2
Joined: Wed Oct 22, 2014 4:00 pm

Re: Tmoves in large systems

Post by Andrea_Zen »

Hi Mike,

Thanks for your very fast answer.

What I wanted to say (and maybe I didn't express it well) is that symmetric versus asymmetric branching has nothing to do with the size-consistency issue.
I suppose that the asymmetric branching was just a "bad" choice in the first PRB paper. Symmetric branching makes things better, but the size-consistency issue is there anyway. To solve it, other changes to the T-move algorithm are needed (sections III.A and III.B in the JCP paper).
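To spell out the distinction in schematic form (these are the standard DMC branching factors, not the exact expressions used in either paper):

w_asym(R -> R') = exp[ -tau * ( E_L(R') - E_T ) ]
w_sym(R -> R')  = exp[ -tau * ( (E_L(R) + E_L(R'))/2 - E_T ) ]

Symmetrising reduces the time-step bias, but it does not change how the number of accepted T-moves scales with the number of electrons.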

Maybe you mean that they are already implemented in CASINO, but that is not clear to me; there is no documentation about this in the manual.
So, if they are already implemented, which ones? SVDMC version 1, version 2, or something different?

Best,
Andrea
Mike Towler
Posts: 239
Joined: Thu May 30, 2013 11:03 pm
Location: Florence

Re: Tmoves in large systems

Post by Mike Towler »

Hi Andrea,

From what I remember, if you go through the logic of the code, CASINO's T-move scheme is implemented in the size-consistent way, and it always was (even though it was written before Casula's second paper of 2010).

The only thing necessary to bring it up to date with the 2010 paper (at least theoretically) was to add the symmetric branching. I also considerably reduced the memory required (see the discussion with Katharina above), but the algorithm would need to be changed to reduce it further (when I thought about it briefly I couldn't see how to do that without making it much slower, so I put it aside for another day - this remains a project).

Sorry about the lack of documentation on T-moves in the manual. I've asked several times for this to be done but nothing has happened; I'll do it myself one of these days (I promise).

Mike
Kevin_Gasperich
Posts: 7
Joined: Wed Mar 18, 2015 7:46 am

Re: Tmoves in large systems

Post by Kevin_Gasperich »

Hi Andrea,

I took a look at CASINO's T-move implementation some time ago, and I believe that it is SVDMC version 1 from the 2010 JCP paper. The relevant code is near the end of subroutine move_config in dmc.f90.
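In schematic form, my reading of that logic is roughly the following self-contained toy (the weights are fake placeholders and none of these names come from dmc.f90):

program svdmc_v1_sketch
  ! Sketch of SVDMC version 1 as I understand it from the 2010 JCP
  ! paper: every electron gets its own independent heat-bath T-move,
  ! with all amplitudes evaluated at the pre-move configuration.
  implicit none
  integer, parameter :: n_elec = 5, n_grid = 4
  real(8) :: w(0:n_grid), cumul, r
  integer :: i, j, chosen

  call random_seed()
  do i = 1, n_elec
    ! Fake heat-bath table: w(0) is the "stay put" weight, w(1:) stand
    ! in for the sign-violating T-move weights to each grid point.
    w(0) = 1.d0
    call random_number(w(1:n_grid))
    w(1:n_grid) = 0.05d0*w(1:n_grid)
    w = w/sum(w)
    ! Heat-bath (tower) sampling of the move for electron i.
    call random_number(r)
    cumul = 0.d0
    chosen = 0
    do j = 0, n_grid
      cumul = cumul + w(j)
      if (r <= cumul) then
        chosen = j
        exit
      end if
    end do
    if (chosen > 0) print '(a,i2,a,i2)', 'electron ', i, &
        ' T-moved to grid point ', chosen
  end do
end program svdmc_v1_sketch

The point is that each electron gets its own independent selection, with the amplitudes all evaluated at the pre-move configuration - which is what distinguishes version 1 from version 2, where the configuration is updated between electrons.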

Kevin