Unverified Commit b75ed73c authored by zcxzcx1, committed by GitHub

Add files via upload

parent 56d3c363
# Changelog
All notable changes to this project will be documented in this file.
## [0.11.1]
From this point, versions on the 'main' branch carry a 'devX' suffix after diverging from the latest stable release.
The CLI changed in a backward-compatible manner: `sevenn` now has subcommands for
inference, training, etc.
### Added
- Subcommands with aliases
- Strict e3nn version requirement enforced in `__init__.py`
### Changed
- pre-commit uses python3.11
- cuEquivariance libraries as optional dependencies
- Additional .gitignore entries
### Fixed
- Circular import in sevenn.checkpoint (dev0)
- Typing issues
## [0.11.0]
Multi-fidelity learning implemented & new pretrained models
### Added
- Build multi-fidelity model, SevenNet-MF, based on given modality in the yaml
- Modality support for sevenn_inference, sevenn_get_modal, and SevenNetCalculator
- sevenn_cp tool for checkpoint summary, input generation, multi-modal routines
- Modality append / assign using sevenn_cp
- Loss weighting for energy, force and stress for corresponding data label
- Ignore unlabelled data when calculating loss (e.g. stress labels for non-PBC structures)
- Dict style dataset input for multi-modal and data-weight
- (experimental) cuEquivariance support
- Downloading large checkpoints from url (7net-MF-ompa, 7net-omat)
- D3 wB97M param
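The unlabelled-data handling above amounts to masking missing labels out of the loss. A minimal sketch, not SevenNet's actual implementation, where `nan` marks an absent label (e.g. stress for a non-PBC structure):

```python
import math

def masked_mse(pred, label):
    """Mean squared error that ignores entries whose label is NaN,
    so unlabelled data contributes nothing to the loss."""
    pairs = [(p, l) for p, l in zip(pred, label) if not math.isnan(l)]
    if not pairs:
        return 0.0  # nothing labelled: no loss contribution at all
    return sum((p - l) ** 2 for p, l in pairs) / len(pairs)
```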
### Changed
- Sort instructions of tensor product in convolution (+ fix flipped w3j coeff of old model)
- Lazy initialization for `IrrepsLinear` and `SelfConnection*`
- Checkpoint things using `sevenn/checkpoint.py`
- e3nn >= 0.5.0, to ensure changed CG coeff later on
- pandas as dependency
- old v1 presets are removed, liquid electrolyte fine-tune yaml is added
### Fixed
- More refactoring of shift/scale handling + a few bug fixes
- Correctly shuffle training set when distributed training is enabled
- D3 calculator system swap memory error fixed
- D3 compile uses $HOME/.cache if package directory is not writable
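The writable-cache fallback for D3 compilation can be sketched as follows. This is a simplified illustration with a hypothetical helper name, not the actual build logic in SevenNet:

```python
import os

def pick_build_dir(package_dir: str) -> str:
    """Prefer the package directory; fall back to $HOME/.cache when the
    package directory is not writable (e.g. a system-wide install)."""
    if os.access(package_dir, os.W_OK):
        return package_dir
    fallback = os.path.join(os.path.expanduser('~'), '.cache', 'sevenn_d3')
    os.makedirs(fallback, exist_ok=True)
    return fallback
```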
## [0.10.4]
### Added
- feats: D3 calculator
### Fixed
- bug: info dict sharing (and therefore energy/stress) when structure_list is used
- torch >= 2.5.0 works
- numpy >= 2.0 works (need more testing)
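The info-dict sharing bug fixed above is the classic shared-mutable-state pitfall: several structures referencing one dict, so writing one structure's energy clobbers the others. A minimal illustration with hypothetical names, not SevenNet code:

```python
import copy

template = {'info': {'energy': None}}

# Buggy: both structures reference the same info dict, so the second
# assignment silently overwrites the first structure's energy.
shared = [template['info'] for _ in range(2)]
shared[0]['energy'] = -1.0
shared[1]['energy'] = -2.0
assert shared[0]['energy'] == -2.0  # clobbered

# Fixed: give each structure its own deep copy of the dict.
separate = [copy.deepcopy(template['info']) for _ in range(2)]
separate[0]['energy'] = -1.0
separate[1]['energy'] = -2.0
assert separate[0]['energy'] == -1.0  # intact
```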
### Changed
- sevennet_calculator.py => calculator
- Fine-tune preset to use the original loss function (Huber) and loss weights
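For reference, the Huber loss named above is quadratic near zero and linear in the tails, making it less sensitive to outlier labels than plain MSE. A standard textbook definition (the preset's actual delta value is not stated here and is an assumption):

```python
def huber(residual: float, delta: float = 1.0) -> float:
    """Huber loss: 0.5*r^2 for |r| <= delta, linear growth beyond,
    continuous and differentiable at the crossover."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)
```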
## [0.10.3]
### Added
- SevenNet-l3i5, checkpoint, preset. (keywords: 7net-l3i5, sevennet-l3i5)
- SevenNet-l3i5 test
### Changed
- `--help` no longer loads unnecessary imports (faster!)
- README
## [0.10.2]
### Added
- Accelerated graph build routine if matscipy is installed @hexagonerose
- matscipy vs. ase neighborlist unit test
- If a valid set is not given but data_divide_ratio is, a validation set is created by random split (shift, scale, and conv_denominator use statistics of the original whole set)
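The random split described above can be sketched like this. A simplified stand-in with hypothetical names and seed handling, not the actual routine; note the statistics caveat: shift/scale should still come from the whole dataset, not the train part:

```python
import random

def random_split(dataset, divide_ratio: float, seed: int = 0):
    """Split dataset into (train, valid) by shuffled indices when no
    explicit validation set is given."""
    idx = list(range(len(dataset)))
    random.Random(seed).shuffle(idx)  # deterministic for a fixed seed
    n_valid = int(len(dataset) * divide_ratio)
    valid = [dataset[i] for i in idx[:n_valid]]
    train = [dataset[i] for i in idx[n_valid:]]
    return train, valid
```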
### Changed
- matscipy is included as dependency
- data_divide_ratio defaults to 0.0 (not used)
### Fixed
- For torch >= 2.4.0, loading a graph dataset no longer raises warnings.
- Raise error when unknown element is found (SevenNetCalculator)
## [0.10.1]
### Added
- Experimental `SevenNetAtomsDataset`, which is memory efficient; enable with `dataset_type='atoms'`
- Save meta data & statistics when the `SevenNetGraphDataset` saves its data.
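The experimental dataset above is selected through configuration; a sketched input.yaml fragment, where the enclosing `data:` section is an assumption and only the `dataset_type='atoms'` value comes from this entry:

```yaml
data:
  dataset_type: 'atoms'  # memory-efficient, experimental SevenNetAtomsDataset
```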
### Changed
- Save checkpoint_0.pth (model before any training)
- `SevenNetGraphDataset._file_to_graph_list` -> `SevenNetGraphDataset.file_to_graph_list`
- Refactored `SevenNetGraphDataset`: skips computing statistics when loaded, more detailed logging
- Prefer `.get` when accessing the config dict
### Fixed
- Fix error when loading `SevenNetGraphDataset` together with other data types (e.g. extxyz) in one dataset
## [0.10.0]
SevenNet now has CI workflows using pytest, with 78% coverage!
Substantial changes to CLI apps and some outputs.
### Added
- [train_v2]: train_v2, with lots of refactoring + support `load_testset_path`. Original routine is accessible: `sevenn -m train_v1`.
- [train_v2]: `SevenNetGraphDataset` replaces the old `AtomGraphDataset`; it extends `InMemoryDataset` of PyG.
- [train_v2]: `sevenn_graph_build` for SevenNetGraphDataset. Previous .sevenn_data is accessible with --legacy option
- [train_v2]: Any number of additional datasets will be evaluated and recorded if they are given as 'load_{NAME}set_path' keys (input.yaml).
- 'Univ' keyword for 'chemical_species'
- energy_key, force_key, stress_key options for `sevenn_graph_build`, @thangckt
- OpenMPI distributed training @thangckt
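The dataset keys above follow one naming pattern; a sketched input.yaml fragment, where the `data:` nesting and the file paths are placeholders and only the 'load_{NAME}set_path' convention comes from this changelog:

```yaml
data:
  load_trainset_path: ./train.extxyz
  load_validset_path: ./valid.extxyz
  load_testset_path: ./test.extxyz   # any extra 'load_{NAME}set_path' is evaluated too
```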
### Changed
- Read EFS of atoms from y_* keys of the .info or .arrays dict, instead of calculator results
- `type_map` and `requires_grad` are now hidden inside `AtomGraphSequential`, so users no longer need to care about them.
- `log.sevenn` and `lc.csv` automatically find a safe filename (log0.sevenn, log1.sevenn, ...) to avoid overwriting.
- [train_v2]: train_v2 loads its training set via `load_trainset_path`, rather than previous `load_dataset_path`.
- [train_v2]: log.csv -> lc.csv; columns have no units (easier to postprocess), but units remain in `log.sevenn`.
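The safe-filename behaviour for `log.sevenn` and `lc.csv` can be sketched as an incrementing-suffix search. A simplified stand-in with a hypothetical helper name, not the actual Logger code:

```python
import os

def safe_filename(base: str, ext: str, directory: str = '.') -> str:
    """Return base+ext if free, else base0+ext, base1+ext, ...
    so an existing log file is never overwritten."""
    candidate = os.path.join(directory, base + ext)
    i = 0
    while os.path.exists(candidate):
        candidate = os.path.join(directory, f'{base}{i}{ext}')
        i += 1
    return candidate
```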
### Fixed
- [e3gnn_serial]: can continue a simulation even when atom tags become non-consecutive (atoms removed dynamically), @gasplant64
- [e3gnn_parallel]: undefined behavior when there are no atoms to send/recv (non-PBC systems)
- [e3gnn_parallel]: incorrect force/stress in some edge cases (too-small simulation cell & 2 processes)
- [e3gnn_parallel]: revert commit 14851ef, now e3gnn_parallel is sane.
- [e3gnn_*]: += instead of = when saving virial stress and forces @gasplant64
- Now Logger correctly closes a file.
- ... and lots of small bugs found while writing `pytest` tests.
## [0.9.5]
### Note
This version is not stable, but I tag it as v0.9.5 before making further changes.
LAMMPS `pair_e3gnn_parallel.*` should be re-compiled for the below changes regarding LAMMPS parallel.
This is the first changelog and may not reflect all the changes.
### Added
- Stress compute for LAMMPS sevennet parallel
- `sevenn_inference` now takes .extxyz input
- `sevenn_inference` reports MAE
- Experimental on-the-fly graph build option for `sevenn_inference`
### Changed
- **[Breaking]** Parallel LAMMPS model changed, old deployed parallel models will not work
- **[Breaking]** Parallel LAMMPS takes the directory of potentials as input. Accordingly, `sevenn_get_model -p` creates a folder with potentials.
- **[Breaking]** Except for serial LAMMPS models, force and stress are computed from gradients of edge vectors, not positions.
- Separate the interaction block from model build
- Add typing for most functions
- Remove the clang pre-commit hook as it breaks LAMMPS pair files
- `torch.load` with `weights_only=False`
- Line length limit 80 -> 85
- Refactor
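For reference, computing force and stress from gradients of edge vectors (as the breaking change above describes for parallel models) follows a standard identity: with edge vectors $\vec{r}_{ij} = \vec{r}_j - \vec{r}_i$, everything is assembled from $\partial E/\partial \vec{r}_{ij}$ alone, which is what permits spatial-decomposition parallelism. A textbook sketch, where the sign convention of the stress may differ from the code:

```latex
\vec{F}_i = -\frac{\partial E}{\partial \vec{r}_i}
          = \sum_{j} \frac{\partial E}{\partial \vec{r}_{ij}}
          - \sum_{j} \frac{\partial E}{\partial \vec{r}_{ji}},
\qquad
\boldsymbol{\sigma} = \frac{1}{V} \sum_{i<j} \vec{r}_{ij} \otimes \frac{\partial E}{\partial \vec{r}_{ij}}
```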
### Fixed
- Correct batch size for SevenNet-0(11July2024)
## [0.9.4] - 2024-08-26
### Added
- D3 correction (contributed by dambi3613) for LAMMPS serial
#--------------------------- Simulation variables -----------------------------#
# Simulation control parameters
variable T equal 500          # temperature (K)
variable t_eq equal 100       # equilibration steps
variable output equal 1       # thermo output frequency
variable dumpstep equal 1     # dump frequency
#------------------------------------------------------------------------------#
#---------------------------- Atomic setup ------------------------------------#
units metal
boundary p p p
# Create atoms.
box tilt large
read_data ./res.dat
replicate 2 2 2
# Define interatomic potential.
pair_style e3gnn/parallel
# The order of elements must match the atom type order in the data file
# * * {number of deployed parallel models} {path to deployed parallel models} {elements}
pair_coeff * * 4 ./deployed_parallel Hf O
timestep 0.002
#----------------------------- Run simulation ---------------------------------#
# Setup output
thermo ${output}
thermo_style custom step tpcpu pe ke vol press temp
dump mydump all custom 1 dump.traj id type x y z fx fy fz
dump_modify mydump sort id
fix f1 all nve
fix comfix all momentum 1 linear 1 1 1
velocity all create ${T} 1 dist gaussian mom yes
run 5
#------------------------------------------------------------------------------#
# generated from poscar module (Converted from VASP)
96 atoms
2 atom types
0.00000000 10.12978631 xlo xhi
0.00000000 10.37111894 ylo yhi
0.00000000 10.26314330 zlo zhi
1.73035484 0.00000000 0.00000000 xy xz yz
Masses
1 178.490000
2 16.000000
Atoms
1 1 10.07858846 8.73752520 7.46334159
2 1 9.43739481 3.62108575 2.41757962
3 1 8.95522965 1.01658239 0.30135971
4 1 10.05681289 8.87631272 2.35008881
5 1 9.75638582 6.30304558 0.18303097
6 1 8.91443797 1.06674577 5.54199800
7 1 9.79081409 6.26490552 5.22104118
8 1 4.41792703 3.81788503 2.39528244
9 1 2.17257241 4.11562894 4.98715163
10 1 1.66417958 1.51957159 2.91326352
11 1 5.17465464 8.94063393 2.34565890
12 1 2.85135280 9.31925586 4.96912733
13 1 5.06412333 8.89798173 7.34081071
14 1 2.43171878 6.73610078 7.90086842
15 1 4.79358115 6.33415175 5.30470215
16 1 4.38083973 3.71776573 7.50514775
17 1 2.16740031 4.19218670 10.14985882
18 1 1.72740111 1.53804994 8.11741499
19 1 3.81409500 1.12760478 5.36341140
20 1 2.91124600 9.28502610 10.05473353
21 1 2.43508812 6.79222574 2.84272933
22 1 4.63828767 6.35818290 0.06495312
23 1 3.96862029 1.21993115 0.31149240
24 1 7.95945527 9.30919854 9.99874912
25 1 7.20953615 4.10124686 10.06760490
26 1 7.97413259 9.31020496 4.88564505
27 1 7.48292324 6.67094881 2.71417363
28 1 7.22354435 4.09214379 4.91151391
29 1 6.77110223 1.50790865 2.82107672
30 1 7.60040235 6.67278986 7.89547614
31 1 9.41001699 3.69515647 7.55969566
32 1 6.77374601 1.50559611 7.98091486
33 2 10.09033948 1.85325366 6.96757279
34 2 10.99511263 7.08138493 6.69600728
35 2 11.28967512 9.60939027 6.04904275
36 2 10.78802723 6.95960696 1.75182521
37 2 8.32036165 2.47159112 3.91830180
38 2 10.07211065 1.65854646 1.87162718
39 2 10.48460341 4.35174196 0.83957607
40 2 9.58543540 10.25525675 3.81736961
41 2 9.15581971 7.61785575 3.82383341
42 2 11.28280065 9.42551408 0.89622391
43 2 10.61871694 4.46990764 6.09305074
44 2 3.63623823 5.03045204 3.82116882
45 2 3.22065417 2.54566037 3.96054713
46 2 5.13228413 1.96508633 1.75400085
47 2 0.49972311 0.85950986 4.41955106
48 2 4.49795126 10.25514684 3.94724065
49 2 4.08831630 7.74684459 3.86494489
50 2 5.90127151 7.07173966 1.66530214
51 2 6.32939463 9.71904813 0.93714394
52 2 5.44334962 4.45855194 0.80863483
53 2 3.54898001 7.91810753 6.27738026
54 2 3.21454568 5.33370189 6.39780696
55 2 4.08744202 7.79255573 8.94430580
56 2 5.89610679 7.08975961 6.71713664
57 2 1.22356400 5.98776133 9.31853387
58 2 6.36149555 9.67586631 6.06078686
59 2 3.62496983 5.17884441 8.87967862
60 2 2.79091790 2.70312051 6.50630496
61 2 2.20642369 0.12199325 6.50943225
62 2 3.36642075 2.52335751 9.09596150
63 2 5.04873139 1.85222299 6.87617402
64 2 0.50463414 0.78101050 9.52304998
65 2 0.95275450 3.34556660 8.69553848
66 2 5.46251140 4.47527874 6.00302102
67 2 4.49817894 10.30428629 8.95760259
68 2 1.73952474 8.56368775 8.58555417
69 2 3.62337333 7.98128068 1.17332847
70 2 3.16118634 5.40721734 1.22906859
71 2 1.31418904 5.97276567 4.19215008
72 2 2.79848347 2.74889192 1.27175505
73 2 2.29397766 0.16984188 1.46074130
74 2 0.96707618 3.44188647 3.47135866
75 2 1.75476287 8.54415321 3.43405286
76 2 9.58968370 10.11019896 9.02264863
77 2 9.12010305 7.60109276 8.90873480
78 2 8.21690550 5.34991345 1.37662298
79 2 6.43083748 5.94959474 4.24019201
80 2 8.69454708 4.99651781 3.88811596
81 2 7.96833563 2.69478887 1.20584684
82 2 7.33313963 0.16345962 1.36457040
83 2 5.57830829 0.81411424 4.28350294
84 2 6.12969978 3.45445964 3.35501959
85 2 8.55281287 7.89834015 1.15270373
86 2 6.84047748 8.56744003 3.45602018
87 2 7.33002089 0.08127731 6.53509276
88 2 5.55795729 0.84809385 9.39286244
89 2 8.58051171 7.87084401 6.29048766
90 2 8.28348300 5.22136388 6.47631887
91 2 6.42127905 6.05984919 9.37655897
92 2 6.82699499 8.52585391 8.47852948
93 2 8.68865624 5.09200133 8.99426977
94 2 7.84842127 2.70204335 6.42490863
95 2 8.25796997 2.46457586 9.01312304
96 2 6.08861225 3.44495302 8.57088233